**THE HIDDEN PITFALLS OF COMPUTER ARITHMETIC**

Michael A. Covington

*Computers sometimes give "false" results after performing calculations. This article discusses the way a computer handles numbers, describes the most common types of errors, and offers solutions.*

Here is a simple — and surprising — BASIC program to try on your computer.

    10 LET A = 0
    20 LET A = A + 0.1
    30 PRINT A
    40 GO TO 20

You'd expect it to print the numbers 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, and so on until you stop it. But unless your computer is a TI-99 — which is different in a way we'll get to presently — you probably won't get what you're expecting. If you let the program run long enough, you'll get numbers that are just a bit off, such as 5.00001 or 4.99999 instead of 5. The margin of error may increase as the program runs, or it may rise for a while, then diminish, then go off in the other direction, then diminish to zero again, over and over.
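If you don't have a BASIC interpreter handy, the same drift is easy to reproduce in Python (a sketch, not the article's program; Python floats carry about 53 binary digits rather than the roughly 24 discussed below, so the error is much smaller, but it is just as real):

```python
# Repeatedly add 0.1, as the BASIC program does, and watch the
# running total drift away from the "obvious" decimal values.
total = 0.0
for step in range(50):
    total += 0.1

# After 50 additions we "should" have exactly 5.0, but we don't:
print(total)         # very close to 5, but not exactly 5
print(total == 5.0)  # False
```

The printed value may even *display* as 5 on systems that round for output, which is exactly why the discrepancy is easy to miss.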

**The Computer's Approach To Numbers**

What's going on? Well, you've just seen that numbers are not always what they seem inside a computer. We humans ordinarily write numbers in *base 10* notation — that is, there are ten different digits (0 through 9); and in a number like 1234.567, the successive digits represent thousands, hundreds, tens, ones, and, to the right of the point, tenths, hundredths, and thousandths. But numbers inside the computer are represented in *binary (base 2)* notation. In the binary system there are only two digits, 0 and 1, and the successive digits represent sixteens, eights, fours, twos, ones, and, to the right of the point, halves, quarters, eighths, sixteenths, and so on. Thus, for example, the decimal number 9.5 goes into binary as 1001.1 (one eight, no fours, no twos, one one, and one half). The place value associated with each digit is half that of the preceding one.
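The conversion just described can be sketched in a few lines of Python (the function name `to_binary` is mine, not the article's; it handles non-negative numbers whose fractional part terminates within the digit budget):

```python
def to_binary(x, max_frac_digits=16):
    """Render a non-negative number as a binary digit string (a sketch)."""
    whole = int(x)
    frac = x - whole
    digits = bin(whole)[2:]           # integer part, e.g. 9 -> "1001"
    if frac == 0:
        return digits
    out = [digits, "."]
    for _ in range(max_frac_digits):  # each doubling pulls out one binary digit
        frac *= 2
        bit = int(frac)
        out.append(str(bit))
        frac -= bit
        if frac == 0:                 # fractional part terminated
            break
    return "".join(out)

print(to_binary(9.5))     # 1001.1   (one eight, one one, one half)
print(to_binary(39.125))  # 100111.001
```

For numbers like 0.1 the loop simply runs out of digits without `frac` ever reaching zero, which is the problem discussed next.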

So far, so good. In binary, 2 becomes 10 (one two, no ones), 8 becomes 1000, 39.125 becomes 100111.001, one-sixteenth becomes 0.0001, and so on. But the binary system suffers from a problem that we're already familiar with from the decimal system — there are numbers which can't be represented using a finite number of digits.

Consider 1/3, for example. In decimal notation, 1/3 is approximately 0.3333. A better approximation is 0.3333333333. But a completely correct representation would require an infinitely long list of 3s — you can keep adding decimal places until your paper leaves the galaxy and still never quite get to 1/3. Not surprisingly, 1/3 isn't representable with a finite number of binary digits either.

What is surprising is that many numbers that give us no trouble in decimal notation aren't representable exactly in a finite number of binary digits. In fact, *most* decimal numbers can't be represented exactly in binary. Consider 0.1, for instance. There is no combination of halves, quarters, eighths, sixteenths, and such that exactly adds up to 0.1. If we had an infinite number of binary digits, we could represent 0.1 as 0.00011001100110011001100110011..., with the 0011 repeating ad infinitum. But the computer has only a finite number of binary digits — usually about 24 — and hence it can't represent 0.1 exactly. That's why what gets added to A in the program above isn't exactly 0.1.
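Python's standard `decimal` module can display the binary value a float actually stores, so you can see the distortion directly (with 53-bit doubles the error appears around the seventeenth digit rather than the seventh):

```python
from decimal import Decimal

# Passing a float (not a string) to Decimal exposes the exact
# binary value the machine stored in place of 0.1:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```

That long tail of digits is the closest 53-bit binary fraction to 0.1, written back out in decimal.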

**A Matter Of Precision**

In order to be representable exactly in binary, a number has to be a whole-number multiple of some power of 2, such as 16, 8, 4, 2, 1, 1/2, 1/4, 1/8, and so forth. Since 1 is in the list, all integers (numbers divisible by 1) go into binary without any problem, and you can trust your computer's representation of them. But numbers with decimal places almost always get distorted a bit within the computer.
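Python makes this rule easy to check: the standard method `float.as_integer_ratio` writes any float as a fraction, and the denominator always comes out a power of 2:

```python
# Exactly representable numbers have small power-of-two denominators:
print((9.5).as_integer_ratio())   # (19, 2)  i.e. 19/2
print((3.0).as_integer_ratio())   # (3, 1)   integers are exact

# 0.1 is secretly a huge fraction whose denominator is 2**55:
num, den = (0.1).as_integer_ratio()
print(num, den)
print(den == 2**55)               # True
```

Whatever float you try, the denominator is a power of two; that is just another way of stating the rule above.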

This is of practical concern because if numbers aren't represented exactly within the computer, your program can't test for precise equality between numbers that were arrived at in different ways. Try this program, for example:

    10 LET A = 0
    20 LET A = A + 0.3
    30 PRINT A
    40 IF A = 3 THEN 60
    50 GO TO 20
    60 END

Add 0.3 to 0 ten times and you get 3, so the program will terminate after ten cycles through the loop, right? Wrong. What you're adding to A isn't 0.3 exactly, but some binary number very close to 0.3. Add that number to 0 ten times, and you won't get 3 exactly, though you'll be awfully close — probably so close that your computer will round the value to 3 before printing it out. Line 40, however, asks whether A is equal to exactly 3 (unlike 0.3, 3 is an integer and is representable exactly). And A will never hit 3 exactly — so line 40 never has any effect, and the program runs without end. (A few computers have rounding routines that will catch the discrepancy and make line 40 work the way you intended — but don't count on it.)
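You can watch the same failure, without the infinite loop, in a Python sketch of the program above (the loop is capped at ten passes for safety):

```python
a = 0.0
for _ in range(10):
    a += 0.3          # adds a binary number very close to 0.3, not 0.3 itself

print(a)              # very close to 3, but not exactly 3
print(a == 3.0)       # False: the exact-equality test never succeeds
```

Just as in the BASIC version, the accumulated value passes near 3 without ever landing on it exactly.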

This leads to an important rule:

*Never test whether two numbers are exactly equal unless both are integers and result from a process that can't possibly produce anything that isn't an integer. Instead, use "less-than-or-equal-to" or "greater-than-or-equal-to" (to catch numbers going over or under a limit), or test whether the difference between two numbers is sufficiently small.*

For example, in the program above, we could change line 40 to:

40 IF A >= 3 THEN 60

This will make the program terminate when A reaches or exceeds 3. But that may not be quite what we want — we don't know whether our first attempt to get 3 will be a little low or a little high, and if it's a little low, the statement we've just formulated will not catch it. So we try this:

40 IF ABS(A-3) < 0.001 THEN 60

This will catch a number that comes within 0.001 of 3 in either direction.
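In Python the same tolerance test can be written directly, or with the standard-library helper `math.isclose`, which packages the identical idea:

```python
import math

a = 0.0
for _ in range(10):
    a += 0.3          # accumulates a value near, but not exactly, 3

# Hand-rolled tolerance test, mirroring line 40 above:
print(abs(a - 3) < 0.001)                 # True

# The standard-library equivalent:
print(math.isclose(a, 3, abs_tol=0.001))  # True
```

Either form catches the value as soon as it comes within 0.001 of 3, whichever side it approaches from.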

We noted earlier that TI-99s were different. To be specific, the TI-99/4 is the only computer in widespread use (aside from certain large business computers) that does not convert its numbers into binary. Instead, it represents numbers internally with codes for decimal digits (or rather pairs of them, so that its actual base is 100 rather than 10). Hence, anything you type — with up to 14 significant digits — will be represented exactly. This is, in my opinion, one of the unsung virtues of the TI-99 — there are no errors of representation to worry about.

**Calculations With Fields Of Various Lengths**

Most home computers allow you the equivalent of about seven decimal digits of accuracy (sometimes rounded off to five or six digits for printing in order to conceal various slight errors). You get seven significant (nonzero) digits regardless of the position of the decimal point, so that, for example, 12345.67, 0.1234567, 12345670000, and 0.000001234567 are equally good. The computer keeps a separate record of where the decimal point goes, and it can be within or outside the string of digits that really count.

Seven digits are usually enough; after all, it's unlikely that you'll be doing calculations based on measurements that are accurate to better than one part in ten million, or dealing with eight-figure salaries, or anything like that. But problems can arise when you're calculating with numbers of widely differing sizes.

Suppose, for instance, you want to compute 0.000853 + 4256.3 - 4256.203. First, the computer adds 4256.3 to 0.000853, giving 4256.300853. But this has too many digits, and the computer truncates it to 4256.300 (that is, 4256.3) — the addition of 0.000853 has had no effect at all. Then 4256.203 is subtracted, giving 0.097. But the correct answer is 0.097853. If you had performed the calculations in a different order, you would have gotten the right answer: 4256.3 - 4256.203 gives 0.097, and this added to 0.000853 gives 0.097853 without any problems. The rule here is:

*Group your calculations so that, as far as possible, each addition works on numbers of nearly equal size, and operations on numbers of widely differing size are saved until last.*
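Python's `decimal` module can mimic a seven-digit machine, which makes the absorption effect above reproducible (one small difference: `decimal` rounds rather than truncates, so the bad answer comes out 0.098 instead of the article's 0.097, but the 853 is lost either way):

```python
from decimal import Decimal, getcontext

getcontext().prec = 7    # simulate a machine with 7 significant digits

a = Decimal("0.000853")
b = Decimal("4256.3")
c = Decimal("4256.203")

bad  = (a + b) - c       # small number absorbed: 4256.300853 -> 4256.301
good = (b - c) + a       # nearly equal sizes first, small number last

print(bad)               # 0.098     (the 853 has vanished)
print(good)              # 0.097853  (correct)
```

Same operands, same arithmetic, different grouping; only the order in which the sizes meet decides whether the small number survives.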

There really are no sure-fire rules about how to avoid numerical accuracy problems. It's often best to work through some typical cases with a hand calculator, looking at the size of the intermediate results and trying to imagine what could go wrong.