
Original idea

I have just found my old Commodore 64 computer, hooked it up, and decided to try to learn BASIC again. I have just finished chapter 3, in which a simple FOR loop is demonstrated:

10 FOR NB = 1 TO 10 STEP 1
20 PRINT NB,
30 NEXT NB

This yields, as expected, the following:

1       2       3       4
5       6       7       8
9       10

Introducing floating point numbers

The result above is the same when the step is set to 1.0. Other step values, however, cause problems; 0.5 is the exception:

If I change the step increment to anything but .5 (or 1), I get strange floating-point values, which apparently appear earlier the smaller the step is. For the first test, I changed the range to FOR NB = 1 TO 40.

Test results

  • FOR NB = 1 TO 40 STEP .6: Normal results for 1–31, then 31.6000001. To see whether I would get weird results further up, I increased the upper bound to 100, and saw the weird numbers start again around 43: 41.2, 41.8, 42.4, 42.9999999, 43.5999999, etc.
  • FOR NB = 1 TO 40 STEP .4: Normal results for 1–7.4, then 7.8000001, then normal results 8.2–22.6, then 22.9999999, 23.3999999, etc.
  • FOR NB = 1 TO 40 STEP .2: Normal results for 1–6.2, then 6.3999999 in .2 increments up until 8.5999999, then a larger deviation from 8.7999998 up until 9.9999998, then normal results from 10.2.
  • FOR NB = 1 TO 40 STEP .1: Normal results for 1–3.6, then 3.6999999 etc.
  • FOR NB = 1 TO 40 STEP .05: Normal results for 1–2.3, then 2.34999999 (note extra digit) up until 2.59999999, then 2.65–2.7, then 2.74999999 etc.

Failure iteration number

The steps fail at the following iterations (the signed offsets below give the deviation from the exact value):

  • 0.6 increment fails at iteration
    • 52 (31.6000001),
    • 53–70 is fine,
    • then 71–87 is 0.0000001 too little (e.g. 42.9999999),
    • then 88–103 is a further 0.0000001 low (e.g. 53.1999998),
    • then 104 onwards is lower still (e.g. 62.7999997).
  • 0.4 increment fails at iteration
    • 18 (7.8000001),
    • 19–55 is fine,
    • 56–64 is at −0.0000001 (values ending in …9999999),
    • 65 is fine,
    • 66–84 is at −0.0000001,
    • 85–100 is fine,
    • 101–116 is at +0.0000001,
    • 117 onwards is at +0.0000002, and so on.
  • 0.2 increment fails at iteration
    • 28–46 at −0.0000001, growing to −0.0000002 (6.3999999 up until 9.9999998; see the test results above),
    • 47–107 is fine,
    • 108–140 fails at +0.0000001,
    • 141 onwards fails at +0.0000002, and so on.
  • 0.1 increment fails at iteration
    • 28 at −0.0000001 (3.6999999),
    • 79–88 is fine,
    • 89–90 fails at +0.00000001 (sic),
    • 91–116 is fine,
    • 117–187 fails at +0.0000001,
    • 188 onwards fails at +0.0000002, and so on.
  • 0.05 increment fails at iteration
    • 28–33 at −0.00000001,
    • 34–35 is fine,
    • 36–68 fails at −0.00000001,
    • 69–78 is fine,
    • 79–92 fails at +0.00000001,
    • 93–106 fails at +0.00000002,
    • 107 onwards fails at +0.00000003, and so on.

Notes to the above

For the record, I added a counter to ease reporting; the program therefore looks like this:

05 NC = 1
10 FOR NB = 1 TO 100 STEP 0.05: REM 0.6, 0.4, 0.2, 0.1, 0.05
20 PRINT NC;":";NB,
25 NC = NC + 1
30 NEXT NB

Main question

I suspect the issue lies in how decimal is translated to binary, but I find it strange that it works perfectly fine with .5 steps. What is causing this error, and how could one either remedy it or account for it? My Commodore runs BASIC V2.
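For completeness, here is a minimal sketch of one way to account for it (assuming the running value is only needed for printing): count the iterations in whole numbers and derive each value with a single multiplication, so the representation error cannot accumulate from pass to pass. The bounds below reproduce the 1 TO 40 STEP .1 test:

10 REM COUNT IN INTEGERS, DERIVE THE VALUE ONCE PER PASS
20 FOR NC = 0 TO 390
30 NB = 1 + NC * .1: REM ONE ROUNDING, NOT AN ACCUMULATED SUM
40 PRINT NB,
50 NEXT NC

Each printed value can still be off by one unit in the last digit, since .1 itself is not exact in binary, but the error no longer grows with the iteration count.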

Canned Man
  • Welcome to the wild, wacky, wonderful world of floating-point numbers. This is not limited to BASIC. As explained briefly in aframestor's answer, it comes from the limited precision of storing decimal numbers in binary. – Bill Hileman Aug 16 '18 at 13:24
  • 3
  • (a) Consider what would happen if you tried incrementing by 1/3 but your computer could only handle decimal numerals, and only with two digits after the decimal point. It could not count 1/3, 2/3, 1, 4/3, 5/3, 2,… It could only count .33, .66, .99, 1.32, 1.65, 1.98,… The same thing happens when you use binary floating-point to try to count with a decimal fraction. The numbers are slightly off, and the error increases as things go on (see the sketch after these comments). (b) You do not always see the errors right away because the floating-point output is formatted with just a few digits, not showing the entire value. – Eric Postpischil Aug 16 '18 at 17:59
  • @EricPostpischil I presume this is why I get apparently random occurrences of offset numbers, which are then corrected for a while, and finally seem to be continuously wrong after enough iterations. – Canned Man Aug 17 '18 at 09:42
  • Possible duplicate of [Why Are Floating Point Numbers Inaccurate?](https://stackoverflow.com/questions/21895756/why-are-floating-point-numbers-inaccurate) – Jonathan Hall Feb 04 '19 at 13:07
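To make Eric Postpischil's analogy concrete, here is a small sketch that simulates his hypothetical two-decimal-digit machine; the truncation to two digits via INT is the only assumption added to his example:

10 X = 0
20 FOR I = 1 TO 9
30 X = INT((X + 1/3) * 100) / 100: REM KEEP ONLY TWO DECIMAL DIGITS
40 PRINT I; X; I / 3 - X: REM ITERATION, TRUNCATED SUM, ACCUMULATED ERROR
50 NEXT I

It prints .33, .66, .99, 1.32, … and the error column grows with every pass: the same slow drift as in the STEP tests above, only in decimal rather than binary.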

1 Answer


I would guess that it is because multiples of .5 can be exactly represented in base 2 that they don't produce any issues. I bet that if you try a .25 increment it will also work fine.
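A quick way to test this guess is to rerun the counter program from the question with a .25 step; since .25 = 1/4 is exactly representable in base 2, no drift should appear:

05 NC = 1
10 FOR NB = 1 TO 40 STEP .25: REM 1/4 IS EXACT IN BINARY
20 PRINT NC;":";NB,
25 NC = NC + 1
30 NEXT NB

(The asker confirms in the comments below that .25, and even .0625, run without errors.)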

aframestor
  • But the C64 clearly distinguishes whole numbers from floats. On my manual’s p. 34ff, under Variables, it explains that variables are identified by two characters, the first a letter, followed by either nothing more (a float), a percent sign (an integer), or a dollar sign (a string); furthermore, on p. 35 it limits integers to −32768–+32767, so 2¹⁶ bytes (I might be wrong here, though). – Canned Man Aug 16 '18 at 14:36
  • The arithmetic will be exact for reasonable fractions that can be exactly represented as 'A/(2^B)' for integers A and B: 0.5, 0.25, 0.75, 0.125, 0.375, 0.875… – Patricia Shanahan Aug 16 '18 at 15:48
  • You are quite correct. I just tested it with increments of 0.25 and 0.0625, and there were no errors. The second part of my question then, I would presume, would be answered in the posts that deal with binary calculations. – Canned Man Aug 17 '18 at 09:39
  • Minor nit-pick: that would be 16 bits, i.e. 2^16 values (not bytes), as far as integer values are concerned. Later generations of BASIC implemented double precision (8 bytes) versus the C64's single-precision float (4 bytes), but there are even different formats of those two numbers as to how they are stored internally. I remember having a CVDMBF function I wrote to convert from "Microsoft Binary Format" to the IEEE format of doubles, for example. Then there are different kinds of rounding. For fun, look up "Banker's Rounding", which is what VB6 decided to go with by default. – Bill Hileman Aug 17 '18 at 14:18
  • @BillHileman Excellent comment! You are of course quite correct that it is bits and not bytes (not nitpicky, sir!), but it took some time for my head to wrap around it. I would correct my comment if I could (referencing you, of course), but that is not allowed after such a delay. I assume ‘Banker’s Rounding’ explains the difference between the number of bits available for numbers versus _n_-byte precision. – Canned Man Aug 23 '18 at 09:47
  • Banker's rounding refers to a method of rounding half-cents. Normally, a value of 0.5 will always round to 1 and −0.5 will always round to zero (the higher integer value), but in banker's rounding the direction it rounds depends on whether the digit to the left of the decimal point is odd or even. I don't recall the exact formula off the top of my head, but it might, for example, round both 3.5 and 4.5 to three, or to three and five correspondingly, based on what the trigger is (odd or even). It's believed to be a "fairer" rounding system for banking. – Bill Hileman Aug 23 '18 at 11:26
  • @CannedMan C64 BASIC distinguishes integer (whole) numbers from floating point ONLY for storage. All math operations use floating point: integer numbers are converted to floating point before any math, and only when assigning to an integer variable is the result converted back to an integer value, so integer variables are slow. – alvalongo Aug 29 '18 at 14:55
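A rough way to observe the conversion cost described in the last comment is to time two otherwise identical loops with the jiffy clock TI (a sketch; the loop count is arbitrary and exact timings will vary):

10 T = TI: A = 0
20 FOR I = 1 TO 1000: A = A + 1: NEXT
30 PRINT "FLOAT:"; TI - T; "JIFFIES"
40 T = TI: A% = 0
50 FOR I = 1 TO 1000: A% = A% + 1: NEXT
60 PRINT "INT%:"; TI - T; "JIFFIES"

The integer-variable loop typically comes out slower, because each use of A% is converted to floating point for the addition and back to an integer for the assignment.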