by Karl Wiegers
So far, Boot Camp has focused on the most basic method for storing numbers in the computer, as binary integers. As you know, each byte of RAM can contain a decimal value from 0 through 255 (hex $FF), based on the pattern of ones and zeros in the eight bits that make up the byte. If both positive and negative numbers must be accommodated, the most significant bit (bit 7) is reserved as a sign bit. If bit 7 is set, the number is negative; if cleared, the number is positive. This method leaves only seven data bits, so signed numbers ranging from -128 through +127 can be represented in this way.
Often we must deal with numbers larger than 255. We've used two adjacent bytes in RAM for this purpose, giving us 16 bits of unsigned data, or 15 bits for signed numbers. With 16 bits we can represent decimal numbers ranging from 0 through 65535 ($00 through $FFFF), or signed numbers from -32768 through +32767.
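If you'd like to check the two-byte arithmetic in a higher-level language, here is a short Python sketch (mine, not part of the listings) showing how a low byte and a high byte combine into one 16-bit value, and how a value splits back into bytes:

```python
def bytes_to_word(lo, hi):
    # build a 16-bit unsigned value from two 8-bit bytes
    return lo + 256 * hi

def word_to_bytes(value):
    # split a value from 0-65535 into its low and high bytes
    return value & 0xFF, (value >> 8) & 0xFF

print(bytes_to_word(0xFF, 0xFF))  # 65535, the 16-bit maximum
print(word_to_bytes(7239))        # (71, 28), i.e. $47 and $1C
```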
This is all very fine, but it doesn't cover all our needs. Many numbers encountered in real life have fractional (decimal) parts, such as 7345.022. Obviously, the integer representation fails here. A more elaborate method for storing these so-called floating point numbers is used in the Atari, wherein each number occupies six bytes of RAM, regardless of its magnitude. Floating point storage uses a numeric representation called "binary-coded decimal" or BCD, which we'll discuss more next month. The Boot Camp column in issue 43 covered floating point numbers and computations in grim detail.
Another problem arises when we wish to write a program in which the user enters a number that's used in subsequent calculations, or when a calculated number needs to be output to the screen or printer. A number like 7239 is stored internally in only two bytes, with the hex value $1C47. But to print "7239" on the screen requires four characters. To further complicate the issue, to make the character "7" appear we actually have to print the ASCII character code 55 ($37).
(Things are even worse than they appear. The character with ASCII code 55 actually is stored internally in the Atari as character code 23. We won't worry about this today.)
So, if we know that we want to print some numbers, we may want to choose another method for storing them internally, rather than using the standard two-byte integer. One possibility is to reserve one byte for each digit in our number. For the example of "7239," we would use four bytes. But what to put in each byte? We could, of course, simply store "7" in the first byte, "2" in the second, and so on. But we still couldn't print the number out this way. If we output an ASCII code of 7 to the screen, we get the same graphics symbol as you obtain by typing a control-G on the Atari keyboard (a diagonal slash). And you can't even print an ASCII code 7 on a printer. Sending an ASCII 7 to an Epson printer makes the printer's bell ring!
Here's another option. Rather than storing "7" in the first byte, store the ASCII code for "7." Table 1 lists the ASCII codes and bit patterns for the digits 0-9. Note that each digit has an ASCII code equal to the digit value plus $30. Hence, if we printed a byte containing $37, a 7 would indeed appear on the screen or printer.
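The rule is simple enough to verify in one line; here's a quick Python check (mine, not from the listings) of the digit-plus-$30 relationship:

```python
# every decimal digit's ASCII code is its value plus $30
for digit in range(10):
    code = digit + 0x30
    assert chr(code) == str(digit)

print(hex(7 + 0x30))  # 0x37, the code that prints as "7"
```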
There are a few problems associated with storing numbers in ASCII form. First, this requires more RAM than does the binary integer form. Also, you can't use the normal addition and subtraction operations, since they are designed to work with binary numbers.
One good solution to the problem is to go ahead and store numbers in two-byte integer format, and simply convert them to an ASCII string before printing. For input, we must convert the ASCII string typed by the user into its binary numeric representation. The Atari operating system contains built-in routines to convert ASCII strings into their floating point form and back again. Unfortunately, no such routines exist to interconvert integers and ASCII strings. Today I'll present some macros and subroutines to perform all the necessary conversions.
Interconverting ASCII and Binary
Today's example program lets you enter a number containing 1-5 digits at the keyboard. This number is checked to make sure it's valid and then is converted to a two-byte binary integer. Then, the value 25 is added to the integer, and the result is converted to ASCII format and printed on the screen. Let's dive in.
Listing 1 contains three macros (MAC/65 format) that should be appended to your by-now-enormous MACRO.LIB file, using the line numbers shown. These macros use some bytes for work space, which I've defined in the equates in Lines 7380-7400. ASCII is the address where the ASCII string being converted is stored, and NUM is the address where the binary integer value for the number resides. Six bytes are reserved for ASCII, five for digits (the maximum value that works correctly is 65535) and one for an end-of-line character, $9B. The input routine uses the EOL character to know when to stop converting digits, and the output routine adds an EOL so the result can be printed on the screen. COUNTER is just a one-byte work variable.
The first macro, ASC2INT, converts a numeric ASCII string into a two-byte binary integer. Parameter 1 is the address of the string to be converted (for example, an input buffer address), and parameter 2 is the address where the integer should be placed after conversion. This macro calls two subroutines that do most of the work, VALIDASC and ASC2INT (you can give a macro and a subroutine the same name). These and some other subroutines are found in Listing 2, which should be appended to your SUBS.LIB file using the line numbers shown.
The second macro, INT2ASC, converts a binary integer into a printable ASCII string. Parameter 1 is the address of the integer to convert, and parameter 2 is the address where the ASCII string should be placed. As you might expect, this macro calls subroutine INT2ASC, which is also found in Listing 2.
The ASCII string produced by the INT2ASC macro might not require all five characters reserved for it. For example, converting the number 43 to ASCII requires only two bytes for the character string. These digits are right-justified in the five-character ASCII string produced, so the result produced from INT2ASC would have the form 00043.
The LDGZERO macro (Lines 8110-8360 of Listing 1) can be used to convert any leading (that is, on the left) zeros into blanks for printing purposes. However, this macro does not left-justify the result in the five-character field, so if you printed the output ASCII string, you really would print three blanks in front of the 43. LDGZERO doesn't call any subroutines. It takes two parameters. The first is the address of the string to be processed, and the second is the maximum number of digits to examine for leading zeros.
Now let's walk through a sample program and see how these conversion macros and subroutines do their stuff.
ASCII to Integer
Please type in Listing 3, today's sample program. Note the .INCLUDE directives in lines 160 and 650. If your MACRO.LIB and SUBS.LIB files are not on a RAM disk, change the drive designation from D8: to the correct drive number.
Almost every line in this example program is a macro call. This makes the source code much shorter and easier to understand than if we had to expand each procedure into its individual instructions. Also, notice my approach of using a macro in combination with one or more subroutines. The macro sets up the specifics of the particular operation, by virtue of addresses or values passed as parameters. I place the common details of the procedure into a subroutine wherever possible, using reserved pieces of RAM as general work variables. This method makes the resulting object code shorter and yet keeps the source code compact; a satisfactory compromise from my point of view.
Line 380 of Listing 3 makes sure we are in binary mode for arithmetic operations (more about this next month), and Line 390 clears the display screen. Line 400 prints a message prompting you to enter a number containing from one to five decimal digits. Lines 410-420 store your response at address ENTRY, a block of six bytes reserved in Line 540. Line 430 invokes the macro to convert this ASCII string to a two-byte binary integer stored at address INTEGER (defined in Line 550). If the carry flag is set upon completing the macro execution, we know an error has taken place, so Line 440 simply branches to the end of the program.
If we've ended up with a valid number at INTEGER, Line 450 adds 25 to that number. There's nothing magical about this; it's just a way to change the number you entered before I print it out again. Line 460 then converts that sum into an ASCII string at address ENTRY. Line 470 uses the LDGZERO macro to translate any leading zeros to blanks. You might try commenting Line 470 out and seeing what you get. Finally, Lines 480-520 print the resulting ASCII string on the screen and wait for you to press RESET. As usual, you can run this program from address $5000.
Let's look at the ASCII to integer conversion in more detail. The first step is to make sure the user has entered a valid string of ASCII digits. Lines 7560-7640 in the ASC2INT macro definition in Listing 1 handle this chore. The loop simply looks through all the characters stored at the input buffer address (passed as parameter 1) until it finds an end-of-line character. Line 7600 stores each character in the appropriate position in the work variable called ASCII as the checking takes place. The subroutine VALIDASC is called to make sure the characters are all legitimate.
I apologize for bouncing you around the listings, but now we need to examine subroutine VALIDASC, starting at Line 2980 in Listing 2. Lines 3100-3160 pluck one character at a time out of the ASCII string and check for an EOL. If the first character found is an EOL, then the user just pressed RETURN without entering anything, so Line 3160 branches to an error routine at label INVALID (Line 3280). An error message is printed and the carry flag is set to indicate to the calling macro that an error took place.
The CHKASC routine beginning at Line 3190 tests whether each character has an ASCII value no smaller than $30 (decimal 0) and smaller than $3A (":", the first character past decimal 9). If not, control again branches to the INVALID routine. If the digit is okay, Lines 3240-3270 strip off the four high-order bits (thereby changing a $37 into a 7, for example), store the result back into the correct position in the ASCII string, and go get the next character.
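Here's the same validate-and-strip logic expressed as a Python sketch (mine, not the article's 6502 code), so you can see the range check and the high-nibble stripping in one place:

```python
EOL = 0x9B  # the Atari end-of-line character

def validate_and_strip(buf):
    """Scan codes until EOL; return a list of digit values,
    or None if the entry is empty or contains a non-digit."""
    digits = []
    for code in buf:
        if code == EOL:
            return digits if digits else None  # bare RETURN is an error
        if not (0x30 <= code < 0x3A):          # outside "0".."9"?
            return None
        digits.append(code & 0x0F)             # strip four high-order bits
    return None                                 # no EOL found at all

print(validate_and_strip([0x37, 0x32, 0x33, 0x39, EOL]))  # [7, 2, 3, 9]
```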
This procedure underscores my contention that the largest portions of most good computer programs are devoted to input/output routines and error checking. If we knew our users would make only valid entries, our programs could be much shorter. Never make such a shaky assumption, though!
Okay, now the string at address ASCII consists only of valid digits, from one to five of them. The next step is to convert these digits into a binary number. The ASC2INT subroutine (Lines 3390-3700 of Listing 2) does the trick.
Let's contemplate the philosophy of number representation once again. A decimal number like 7239 actually means to multiply 1000 by 7, multiply 100 by 2, multiply 10 by 3, multiply 1 by 9, and add all these products together. To transform a bunch of characters from the ASCII string "7239" into the binary equivalent, we must perform precisely these same operations. The ASC2INT subroutine does the work, with the help of another subroutine called MULT10 (Lines 3740-4020 of Listing 2). The MULT10 subroutine actually carries out the power of ten multiplications.
We begin with the most significant digit in the string to be converted. In the case of "7239," this digit is a 7. Load the 7 into a byte and multiply by 10. This gives 70. Add the next digit in, yielding 72.
Multiply this result by 10 to get 720 and add in the next digit, giving 723. Multiply this result by 10 to get 7230 and add in the final digit, to wind up with 7239. Of course, this answer doesn't look like 7239 in its binary representation. In binary it will look like 0001110001000111, and in hexadecimal it will be $1C47. There's one final twist. The Atari stores two-byte integers in low-byte/high-byte format, so decimal 7239 is represented in two adjacent bytes of RAM in the Atari as the hexadecimal bytes $47 $1C. And you thought this stuff was going to be simple!
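You can see this low-byte/high-byte ordering with Python's struct module (a modern sketch, not part of the listings); "<H" means a little-endian unsigned 16-bit integer, the same byte order the Atari uses:

```python
import struct

# pack 7239 ($1C47) as a little-endian 16-bit integer
raw = struct.pack("<H", 7239)
print(raw.hex())                    # "471c" -- low byte $47 comes first
print(struct.unpack("<H", raw)[0])  # 7239 again
```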
Lines 3480-3510 of Listing 2 store a zero in the high-byte of our destination integer at address NUM and load the first (most significant) ASCII byte into the low-byte of NUM. If there's only one ASCII character, our conversion is complete; Lines 3520-3550 check for this condition. If the second character is indeed the EOL, Lines 3560-3570 clear the carry flag (our signal to the calling macro that all is well) and return. Otherwise, we go on to the NEXTDIGIT label to continue processing.
The first step is to multiply this leftmost digit by 10. Subroutine MULT10 (Lines 3740-4020 of Listing 2) takes care of this for us. But how do we multiply using the 6502 processor? We've learned how to add and subtract using the ADC and SBC instructions. However, the 6502 contains no intrinsic multiplication or division instructions. You may recall that performing an ASL or Accumulator Shift Left operation is the same as multiplying the contents of a byte by two, and a LSR or Logical Shift Right operation divides the contents of a byte by two. Now we need to extend these concepts to handle a two-byte number and combine shift and add operations to perform integer multiplication.
Remember that multiplication is really just a bunch of sequential additions. The 6502 gives us an easy way to multiply by 2. To multiply some number by 10 we could multiply it by 2; multiply by 2 again (net result is multiply by 4); add the original number back to the result (net result is multiply by 5); and multiply by 2 once again, to give a net result of multiplying by 10. This is precisely what happens in subroutine MULT10.
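The shift-and-add strategy is easy to sanity-check in Python before wading into the assembly (this sketch is mine, not the MULT10 listing itself):

```python
def times10(n):
    # multiply by 10 using only doubling and one addition,
    # the same sequence MULT10 performs with ASL/ROL
    n2 = n * 2      # times 2
    n4 = n2 * 2     # times 4
    n5 = n4 + n     # add the original back in: times 5
    return n5 * 2   # times 2 once more: times 10

def ascii_to_int(digits):
    # digits holds values 0-9, most significant first
    value = digits[0]
    for d in digits[1:]:
        value = times10(value) + d  # shift one decimal place, add digit
    return value

print(ascii_to_int([7, 2, 3, 9]))  # 7239
```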
One more point and then we'll look at the code. Suppose our original number is decimal 150, stored in a single byte as $96. If we multiply that by 2 we get 300 in decimal terms ($012C), but the maximum value that fits in a single byte is 255. Whatever shall we do? When an overflow like this takes place, the carry flag in the processor status register is set, and the original byte contains 300 minus 256, or 44 ($2C). This carry value must be added to the high-byte of our two-byte number, which also underwent a left shift operation during the multiply by 2 step. Fortunately, the 6502's instruction set contains an instruction to handle all these details, the ROL or Rotate Left instruction.
Each bit shifts to the next higher order position (i.e., to the left). The carry flag shifts into bit 0, and bit 7 shifts into the carry flag. If the carry flag is cleared, ROL is the same as an ASL, simply multiplying the byte's contents by 2. But if the carry is set, the ROL effectively multiplies by 2 and adds 1 to the original byte contents. Hence, a two-byte number can be multiplied by 2 simply by performing an ASL on the low-byte, followed by an ROL on the high-byte to account for the carry flag. I can't believe you didn't think of this solution immediately. (Wiegers' First Law of Computing: Almost nothing you can do with a computer is difficult. Wiegers' Second Law of Computing: Almost nothing you can do with a computer is obvious.)
In sum (pun intended), to multiply a two-byte binary integer by 2, you can simply perform an ASL operation on the low-byte, followed by an ROL operation on the high-byte.
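To make the carry mechanics concrete, here's a Python simulation of ASL and ROL on 8-bit bytes (mine, not the article's code), doubling our overflow example of decimal 150:

```python
def asl(byte):
    # shift left; return (8-bit result, carry out of bit 7)
    return (byte << 1) & 0xFF, byte >> 7

def rol(byte, carry_in):
    # rotate left through carry: carry_in enters bit 0,
    # bit 7 shifts out into the new carry
    return ((byte << 1) & 0xFF) | carry_in, byte >> 7

def double_word(lo, hi):
    # two-byte multiply by 2: ASL the low byte, ROL the high byte
    lo, carry = asl(lo)
    hi, _ = rol(hi, carry)
    return lo, hi

# 150 decimal ($96) doubled overflows a single byte...
print(double_word(0x96, 0x00))  # (44, 1): $2C low, $01 high = 300
```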
As promised, you may now look at the MULT10 subroutine in Listing 2. Lines 3820-3830 store the high-byte of the original number on the stack so we can grab it for the necessary addition. Line 3840 places the original low-byte into the accumulator. Lines 3850-3860 multiply the original number by 2, and Lines 3870-3880 do it again. Lines 3890-3930 add in the original number, so now we've effectively multiplied it by 5. (Notice that all intermediate results are stored back in the original location at NUM and NUM + 1.) Lines 3940-3950 complete the multiplication by 10. Lines 3960-4010 add in the next digit, as we discussed earlier.
The loop in Lines 3580-3640 of Listing 2 (subroutine ASC2INT) continues this monkey business until an EOL character is reached in the ASCII string, at which point the carry flag is cleared to indicate success and control returns to the calling ASC2INT macro.
We're now back at Line 7660 of Listing 1, in the middle of the ASC2INT macro. If the carry flag is set, there was a problem with the conversion, and an appropriate error message (which lives at Lines 3680-3700 of Listing 2) is printed. Otherwise, the binary result in address NUM is moved to the location specified in the second parameter in the ASC2INT call (Lines 7670-7700), and we're all done.
Integer to ASCII
Whew! We finally got the simple number you entered stored in binary form. Now let's see how to go the other way. Our sample program adds 25 to whatever number you enter, just to change it. The INT2ASC macro converts the number whose address is supplied in parameter 1 to a character string stored at the address specified in parameter 2. The INT2ASC macro is in Lines 7820-8070 of Listing 1. Lines 7940-7970 just copy the number to be transformed to our work space at address NUM.
Subroutine INT2ASC does all the work, creating a five-digit ASCII string of printable characters at address ASCII. Lines 7990-8050 copy this string, up through the EOL character, to the desired destination address in parameter 2. Subroutine INT2ASC is in Lines 4060-4590 of Listing 2. As with ASC2INT, this procedure is based on the fact that the position of a digit in a decimal number indicates the number of times a particular power of 10 must be added to zero to obtain that number. Algorithmically, it's easier to work backwards, performing multiple subtractions. You keep subtracting a particular power of 10 (10, 100, 1000 or 10000) from the integer in question until you obtain a negative result. The number of subtractions you can do before going negative is equal to the value of the digit in a specific column (tens, hundreds, thousands, or ten thousands).
Here's an illustration. Begin with the familiar integer 7239. Let's set a counter equal to 0. How many times can you subtract 10000 from 7239 before you get a negative result? The answer is 0. Hence, the first of our five output digits (the ten thousands column) is 0. Next, how many times can you subtract 1000 from 7239 before obtaining a negative result? Seven, of course. Increment the counter for each successful subtraction. If your counter reaches 8 (representing 7239 minus 8000), the subtraction result is negative, and you know you've gone a digit too far. Add 1000 back in to get back to a positive number (7239-7000 = 239), and use the counter's value of 7 for the second digit in the output ASCII string.
Continue this procedure until all powers of ten from 10000 to 10 have been done, and the remainder (the units column) is the fifth and final digit in the ASCII number. This is awkward to describe in words, but it actually makes some sense.
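Since the procedure is awkward in words, here's the repeated-subtraction idea as a Python sketch (mine, not the INT2ASC listing):

```python
def int_to_digits(n):
    """Convert 0-65535 into five decimal digits by repeated
    subtraction of powers of 10, as the INT2ASC subroutine does."""
    digits = []
    for power in (10000, 1000, 100, 10):
        count = 0
        while n - power >= 0:   # subtract until we'd go negative...
            n -= power
            count += 1
        digits.append(count)    # ...the count is this column's digit
    digits.append(n)            # the remainder is the units column
    return digits

print(int_to_digits(7239))  # [0, 7, 2, 3, 9]
```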
We'll have to set bits 4 and 5 (ORA #$30) in our counter for the number of successful subtractions to convert it to the ASCII representation. If you walk through the commented INT2ASC subroutine you should understand this technique better. As you can see, it's a pretty cumbersome way to turn a two-byte binary integer into a five-character ASCII string, but it's just about the only way to do it. Lines 4500-4520 of the subroutine add an EOL to the end of the string so it can be printed properly using the PRINT macro, as done in the sample program of Listing 3.
The INT2ASC conversion routine produces a five character ASCII string, plus an EOL character. If the integer being converted is smaller than 10000 decimal, the first ASCII digit will be a zero. The number of leading zeros equals five minus the number of decimal digits in the number being converted. Often, you wish to print a number with just significant digits shown, that is, without any leading zeros appearing. The LDGZERO macro, Lines 8110-8360 of Listing 1, replaces leading zeros with spaces.
LDGZERO requires two parameters, the address of the string to be processed and the number of bytes to process before quitting. If a non-zero character (ASCII values $31-$39) is encountered, the routine terminates. The entire logic of this macro consists of looping through the bytes in the ASCII string replacing characters with ASCII code $30 (zero) with ASCII code $20 (blank or space character), until an end condition is satisfied.
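The whole macro boils down to a few lines when sketched in Python (again mine, not the listing): scan from the left, swap "0" for a space, and stop at the first non-zero character.

```python
def blank_leading_zeros(chars, max_digits=5):
    # replace leading "0" characters (ASCII $30) with spaces ($20),
    # stopping at the first non-zero digit, as LDGZERO does
    chars = list(chars)
    for i in range(max_digits):
        if chars[i] != "0":
            break
        chars[i] = " "
    return "".join(chars)

print(blank_leading_zeros("00043"))  # "   43" -- three leading blanks
```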
As you see when you run the program in Listing 3, leading blanks do "print," effectively shifting the significant digits to the right on the screen. You might want to write a macro or subroutine (or combination) to left justify a string by simply removing leading zeros, rather than translating them into blanks. That's not a hard exercise to do. While you're at it, why not write a routine to right justify a string in a field of some specified length? Don't forget error checking. What would happen if you tried to right justify a string of 11 characters in a field only 8 characters long? Oops.
I alluded to another numeric data storage format, binary-coded decimal. Next month we'll take a close look at BCD and see some routines for converting ASCII strings to BCD and vice-versa.
Table 1. ASCII / Character / Binary Equivalents.

Character   ASCII Code   Binary Value
    0           $30       0011 0000
    1           $31       0011 0001
    2           $32       0011 0010
    3           $33       0011 0011
    4           $34       0011 0100
    5           $35       0011 0101
    6           $36       0011 0110
    7           $37       0011 0111
    8           $38       0011 1000
    9           $39       0011 1001
7380 ASCII = $0690
7390 NUM = $0696
7400 COUNTER = $0698
7440 ;ASC2INT macro
7460 ;Usage: ASC2INT chars,number
7480 ;'chars' is address of ASCII
7490 ; string to convert,ending w/ EOL
7500 ;'number' is address of integer
7520 .MACRO ASC2INT
7530 .IF %0<>2
7540 .ERROR "Error in ASC2INT"
7560 LDX #255
7580 @ASCLOOP INX
7590 LDA %1,X
7600 STA ASCII,X
7610 CMP #EOL
7620 BNE @ASCLOOP
7630 JSR VALIDASC
7640 BCS @DONE
7650 JSR ASC2INT
7660 BCS @ASCERROR
7670 LDA NUM
7680 STA %2
7690 LDA NUM+1
7700 STA %2+1
7720 BCC @DONE
7730 @ASCERROR
7740 PRINT CONVERTMSG
7750 @DONE
7820 ;INT2ASC macro
7840 ;Usage: INT2ASC number,chars
7860 ;'number' is address of integer
7870 ;'chars' is address of resulting
7880 ; ASCII string, ending with EOL
7900 .MACRO INT2ASC
7910 .IF %0<>2
7920 .ERROR "Error in INT2ASC"
7940 LDA %1
7950 STA NUM
7960 LDA %1+1
7970 STA NUM+1
7980 JSR INT2ASC
7990 LDX #255
8010 @INTLOOP INX
8020 LDA ASCII,X
8030 STA %2,X
8040 CMP #EOL
8050 BNE @INTLOOP
8110 ;LDGZERO Macro
8130 ;Usage: LDGZERO address,bytes
8150 ;'address' is beginning of ASCII
8160 ; string of digits
8170 ;'bytes' is max number of digits
8180 ; to check for leading zeros
8200 .MACRO LDGZERO
8210 .IF %0<>2
8220 .ERROR "Error in LDGZERO"
8240 LDX #255
8260 @SUPZERO INX
8270 LDA %1,X
8280 CMP #$30
8290 BNE @LZDONE
8300 LDA #$20
8310 STA %1,X
8320 CPX #%2
8330 BNE @SUPZERO
8340 @LZDONE
2980 ;subroutine VALIDASC
2990 ;called by ASC2INT,ASC2BCD macros
3010 ;makes sure all characters in
3020 ;string beginning at address
3030 ;ASCII are valid ASCII codes for
3040 ;numeric digits; looks until it
3050 ;hits an EOL; error message is
3060 ;printed and carry flag is set
3070 ;if an invalid char. is found
3100 LDX #0
3110 LOOPASC
3120 LDA ASCII,X ;get a char
3130 CMP #EOL ;EOL?
3140 BNE CHKASC ;no,go check it
3150 CPX #0 ;yes, 1st char?
3160 BEQ INVALID ;yes,null entry
3170 CLC ;no, all done
3180 RTS ;go back
3190 CHKASC
3200 CMP #$30 ;less than 0?
3210 BCC INVALID ;yes, no good
3220 CMP #$3A ;greater than 9?
3230 BCS INVALID ;yes, no good
3240 AND #$0F ;clear 4 hi bits
3250 STA ASCII,X ;save 4 lo bits
3260 INX ;ready for next
3270 BCC LOOPASC ;char
3280 INVALID
3290 PRINT ASCERRMSG
3300 SEC ;set carry to
3310 RTS ;show an error
3330 ASCERRMSG
3340 .BYTE "Non-numeric "
3350 .BYTE "character found",EOL
3390 ;subroutine ASC2INT
3400 ;called by ASC2INT macro
3420 ;converts string of ASCII digits
3430 ;at address ASCII to a 2-byte
3440 ;binary integer at address NUM;
3450 ;carry flag set if error
3480 LDA #0 ;zero hi byte
3490 STA NUM+1
3500 LDA ASCII ;get first char
3510 STA NUM
3520 LDX #1 ;next char EOL?
3530 LDA ASCII,X ;next char EOL?
3540 CMP #EOL
3550 BNE NEXTDIGIT ;no, go on
3560 CLC ;yes all done
3570 RTS ;go back
3580 NEXTDIGIT
3590 JSR MULT10 ;multiply by 10
3600 BCS ABORT ;carry set? error
3610 INX ;point to next char
3620 LDA ASCII,X ;next char EOL?
3630 CMP #EOL
3640 BNE NEXTDIGIT ;no, go on
3660 ABORT
3670 RTS ;exit
3680 CONVERTMSG
3690 .BYTE "ASCII conversion"
3700 .BYTE " error ...",EOL
3740 ;subroutine MULT10
3750 ;called by subroutine ASC2INT
3770 ;multiplies binary integer at
3780 ;address NUM and NUM+1 by 10
3790 ;and adds the next digit in
3820 LDA NUM+1 ;save high byte
3830 PHA ;on the stack
3840 LDA NUM ;get low byte
3850 ASL NUM ;multiply x 2
3860 ROL NUM+1
3870 ASL NUM ;times 2 again
3880 ROL NUM+1
3890 ADC NUM ;add to self to
3900 STA NUM ;effectively
3910 PLA ;multiply X 5
3920 ADC NUM+1
3930 STA NUM+1
3940 ASL NUM ;times 2 again,
3950 ROL NUM+1 ;total is now x10
3960 LDA ASCII,X ;add in next char
3970 ADC NUM
3980 STA NUM
3990 LDA #0 ;adding 0 to high
4000 ADC NUM+1 ;byte just pulls
4010 STA NUM+1 ;in carry value
4060 ;subroutine INT2ASC
4070 ;called by INT2ASC macro
4090 ;converts a 2-byte binary integer
4100 ;at address NUM to a string of
4110 ;ASCII digits at address ASCII
4140 LDY #0 ;pointer to table
4150 STY COUNTER ;of powers of 10
4160 NEXTDIGIT2
4170 LDX #0 ;digit counter
4180 SUBLOOP
4190 LDA NUM ;get low byte
4200 SEC ;subtract lo byte
4210 SBC DECTABLE,Y ;of current
4220 STA NUM ;power of 10
4230 LDA NUM+1 ;now subtract hi
4240 INY ;byte of current
4250 SBC DECTABLE,Y ;power of 10
4260 BCC ADDITBACK ;if neg,restore
4270 STA NUM+1 ;save hi byte
4280 INX ;digit counter
4290 DEY ;point to lo-byte
4300 CLC ;of current power
4310 BCC SUBLOOP ;of 10 again
4320 ADDITBACK
4330 DEY ;point to lo byte
4340 LDA NUM ;add lo byte of
4350 ADC DECTABLE,Y ;power of 10
4360 STA NUM ;back in
4370 TXA ;convert digit
4380 ORA #$30 ;counter to ASCII
4390 LDX COUNTER ;and store at
4400 STA ASCII,X ;next position
4410 INC COUNTER
4420 INY ;point to next
4430 INY ;power of 10
4440 CPY #8 ;at end of table?
4450 BCC NEXTDIGIT2 ;no, go on
4460 LDA NUM ;get units column
4470 ORA #$30 ;convert to ASCII
4480 LDX COUNTER ;store it
4490 STA ASCII,X
4500 INX ;point past digits
4510 LDA #EOL ;add an EOL in
4520 STA ASCII,X ;next position
4530 RTS ;all done
4550 DECTABLE
4560 .WORD 10000
4570 .WORD 1000
4580 .WORD 100
4590 .WORD 10
0100 ;Example 1. Interconverting ASCII
0110 ;strings and 2-byte integers
0130 ;by Karl E. Wiegers
0150 .OPT NO LIST,OBJ
0160 .INCLUDE #D8:MACRO.LIB
0190 ; PROGRAM STARTS HERE
0210 ;You'll be prompted to enter a
0220 ;number with 1-5 digits. This
0230 ;is stored at address ENTRY.
0240 ;The binary integer produced
0250 ;is stored at address INTEGER.
0260 ;If the number is too large,
0270 ;missing (null entry) or has non-
0280 ;digits in it, you'll get an
0290 ;error message. 25 will be added
0300 ;to the value you entered, and
0310 ;the result will be converted to
0320 ;ASCII and printed on the screen
0360 *= $5000
0380 CLD ;binary math mode
0390 JSR CLS
0400 PRINT PROMPT
0410 POSITION 5,5
0420 INPUT 0,ENTRY
0430 ASC2INT ENTRY,INTEGER
0440 BCS END
0450 ADD INTEGER,25
0460 INT2ASC INTEGER,ENTRY
0470 LDGZERO ENTRY,5
0480 POSITION 2,8
0490 PRINT AFTER
0500 POSITION 5,10
0510 PRINT ENTRY
0520 END JMP END
0540 ENTRY .DS 6
0550 INTEGER .DS 2
0570 PROMPT
0580 .BYTE "Enter a number "
0590 .BYTE "with 1-5 digits:",EOL
0600 AFTER
0610 .BYTE "After adding 25:",EOL
0650 .INCLUDE #D8:SUBS.LIB