The Editors and Readers of COMPUTE!
Computer Counting
I don't understand the difference between ASCII, hexadecimal, and decimal numbers.
Don Lyles
ASCII (pronounced "askey") is an acronym for American Standard Code for Information Interchange. It is a standard code used for communication between computers. Among other things, it lets different types of computers communicate with each other using telephone modems.
Each ASCII number stands for a character. For instance, the ASCII code 65 stands for the uppercase letter A. Because an ASCII number consists of one byte containing eight bits, there are 256 possible code numbers (2 to the eighth power). But only the first 128 characters are defined by ASCII, while the remaining 128 characters are different on each computer. Some computers tinker with the first 128 codes, too, creating their own version of ASCII, such as PETASCII (Commodore ASCII) or ATASCII (Atari ASCII). Departures from regular ASCII can cause compatibility problems when these computers try to communicate with other computers.
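Readers with access to a modern language can check the code-to-character mapping for themselves. This short Python sketch (Python is used here purely for illustration; it postdates the machines discussed) looks up a character's ASCII code and the character belonging to a code:

```python
# ord() gives the ASCII code number of a character;
# chr() gives the character belonging to a code number.
print(ord("A"))   # the code for uppercase A is 65
print(chr(66))    # code 66 is the next letter, B
```

Running it confirms the example from the text: 65 is uppercase A.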
The figure below shows the 128 standardized characters which make up the ASCII character set:

[Figure: the standard 128-character ASCII set, laid out in a grid with columns labeled 0 through F and rows labeled 0 through 7 in hexadecimal.]
As you examine the figure, you'll notice some rather unusual designations. Not all ASCII numbers stand for characters you would normally recognize. These are control codes and are considered to be nonprinting machine-instruction characters. In other words, instead of printing a character on the screen, they perform some function, such as clearing the screen, moving the cursor, or forcing a carriage return or linefeed.
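The control codes can be picked out programmatically. In this Python sketch (a modern illustration, not something the machines of the day would run), the nonprinting codes turn out to be 0 through 31 plus 127, the DEL character:

```python
# Collect every 7-bit ASCII code whose character is nonprinting.
control_codes = [code for code in range(128) if not chr(code).isprintable()]

print(control_codes[:5])   # the first few control codes
print(len(control_codes))  # 32 codes (0-31) plus DEL (127) = 33 in all
```

Code 13 (carriage return) and code 10 (linefeed), mentioned above, are both in this group.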
To answer the second part of your question, decimal and hexadecimal are just two different numbering systems, not coding systems like ASCII. Decimal is the system we normally use, sometimes called base 10 because it's based on 10 digits, 0 through 9. Hexadecimal is base 16 and uses 16 digits: 0 through 9 plus A, B, C, D, E, and F. (Any symbols could have been chosen to represent the extra six digits, but A through F were selected because they're commonly available on keyboards.)
When counting in hexadecimal, just as in decimal, you don't start using two-digit numbers until you've run out of one-digit numbers. For example, in decimal you count 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, and then 10. In hexadecimal, you would count 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F, and then 10. Notice that A in hexadecimal equals 10 in decimal, B equals 11, C equals 12, and so on. Therefore, hexadecimal 10 equals decimal 16. In any numbering system, the first two-digit number, written as 10, always equals the base of that system. (Incidentally, we might be using the hexadecimal system for everyday counting if humans were born with 16 fingers instead of 10.)
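The counting rule above can be verified with Python's base-conversion facilities (again, a modern illustration): giving int() a string and a base converts that number to decimal.

```python
# "10" read in any base equals the base itself.
print(int("10", 10))  # decimal 10 is 10
print(int("10", 16))  # hexadecimal 10 is 16

# The extra hexadecimal digits pick up where 9 leaves off.
print(int("A", 16))   # A is decimal 10
print(int("F", 16))   # F is decimal 15
```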
It's not too important to learn hexadecimal unless you want to write programs in machine language. Machine language programmers use the hexadecimal numbering system (and sometimes the base 8 system, called octal) because it's a more compact way of writing binary numbers and a more efficient way of visualizing binary patterns. Binary, in turn, is the base 2 numbering system; it uses only the digits 0 and 1. Computers "think" in binary and are programmed that way on the machine language level. But binary numbers take up lots of space when written down, and they are difficult to read. For instance, the binary number 11010010 is eight digits long and is hard to interpret at a glance. Expressed in decimal, 11010010 equals 210, a three-digit number that obscures the binary pattern. Expressed in hexadecimal, 11010010 equals D2, a more compact number which an experienced programmer can break down into two parts: D = 1101 and 2 = 0010. This can be very important in machine language programming.
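The three views of that same number can be checked in Python (a modern sketch of the arithmetic described above, not period software):

```python
n = 0b11010010           # the binary number from the text
print(n)                 # its decimal value, 210
print(hex(n))            # Python writes hexadecimal D2 as 0xd2
print(format(n, "08b"))  # and back to eight binary digits

# Each hexadecimal digit corresponds to exactly one four-bit group:
print(format(0xD, "04b"), format(0x2, "04b"))  # D = 1101, 2 = 0010
```

Because 16 is a power of 2, the digit-to-four-bits correspondence is exact, which is precisely why hexadecimal makes binary patterns easy to see.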
Because machine language programmers are likely to encounter decimal, hexadecimal, and binary numbers in books, magazines, and program listings, special symbols have been agreed upon to keep the different systems from being confused with each other. Otherwise, the number 100 could be interpreted to have a decimal value of 100 in decimal, 256 in hexadecimal, or 4 in binary. Needless to say, this could result in a programming snafu that would leave the computer pretty confused, too. Many programmers use the dollar sign ($) to denote hexadecimal and the percent sign (%) to denote binary. So $FF means hexadecimal FF, which equals 255 in decimal. Other programmers use the letter H to represent hexadecimal, so the same number would be written FFH. A number with no special symbol is assumed to be decimal.
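The $ and % convention is easy to mechanize. This Python sketch defines a hypothetical helper, parse_number (the name and function are ours, for illustration only), that reads a number the way an assembler following that convention would:

```python
def parse_number(text):
    """Interpret text per the common assembler convention:
    $ prefix = hexadecimal, % prefix = binary, no prefix = decimal."""
    if text.startswith("$"):
        return int(text[1:], 16)
    if text.startswith("%"):
        return int(text[1:], 2)
    return int(text, 10)

print(parse_number("$FF"))        # hexadecimal FF is decimal 255
print(parse_number("%11010010"))  # the binary number from earlier, 210
print(parse_number("100"))        # no prefix, so plain decimal 100
```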
For a more thorough discussion of these numbering systems, consult a programming book, such as Machine Language for Beginners, Programming the VIC, Programming the 64, or Programming the PET/CBM from COMPUTE! Books.