How modems work. (data transfer)(part 2) (Column)
by Mark Minasi
Last month, we looked at how PCs transferred data over phone lines at the start of the computer revolution.
Originally, a bulletin board system (BBS) would basically toss the data over the phone lines to the receiving computer, and the computer would capture the data as it came in, line noise and all. Line noise wasn't a problem then, as we were usually communicating at 300 bps, and phone lines look almost perfectly clean to 300-bps modems.
Then came faster modems, at 1200 bps and up. Pushing phone performance made for occasional errors--still no more than a bad bit every hour or two, but a measurable amount.
XMODEM was the first attempt in the PC world to solve the problem of transporting data over phone lines and ensuring that any errors in transmission were caught and automatically corrected.
XMODEM has been largely outclassed by newer transfer methods, but it retains a great strength--it's ubiquitous. You can find the old guy everywhere. Every communications program supports XMODEM, at a minimum.
Nonetheless, XMODEM has four deficiencies. First, its block size is too small and makes for inefficient transfers. (We'll see why this month.) Second, it requires the operator to tell both the receiver and the sender the name of the file. Third, it only transfers one file at a time. Fourth, its checksum-based error-detection scheme is too simple in the eyes of some people. These four weaknesses led to the development of today's file-transfer methods or, as they're commonly called, protocols.
For the rest of this column, I'll talk about that first characteristic, block size. It's the really big difference in the newer protocols--and the secret to increasing the speed of your file transfers by as much as 300 percent.
Recall how XMODEM works. The sender sends the first 128 bytes of the file, then waits while the receiver determines whether or not the 128-byte block has transferred without transmission errors, using a simple checksum. Once the receiver has acknowledged the receipt of the first block, the sender sends the next 128 bytes, and so on.
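The scheme above can be sketched in a few lines of code. This is a minimal illustration of the classic XMODEM framing as commonly documented--SOH, a block number and its complement, 128 data bytes, and a one-byte sum-of-bytes checksum--not any particular program's implementation.

```python
# A minimal sketch of XMODEM's framing and checksum (classic layout:
# SOH, block number, complement of block number, 128 data bytes, checksum).

SOH = 0x01  # start-of-header byte that opens every XMODEM block

def make_block(block_num: int, data: bytes) -> bytes:
    """Frame one 128-byte XMODEM block as the sender would."""
    data = data.ljust(128, b'\x1a')     # pad a short final block with Ctrl-Z
    checksum = sum(data) % 256          # simple sum-of-bytes checksum
    return (bytes([SOH, block_num % 256, 255 - (block_num % 256)])
            + data + bytes([checksum]))

def checksum_ok(block: bytes) -> bool:
    """Receiver's check: recompute the checksum over the 128 data bytes."""
    data, received = block[3:131], block[131]
    return sum(data) % 256 == received

pkt = make_block(1, b"hello")
print(len(pkt), checksum_ok(pkt))   # 132 True
```

The receiver answers each block with an ACK if the checksum matches or a NAK if it doesn't, and only then does the sender move on--which is exactly where the inefficiency discussed below comes from.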
The key to understanding why this is really inefficient (for most applications) is that checking the checksum and sending the acknowledgment back to the sender may take more time than sending the entire block did in the first place.
To see this, imagine this exaggerated scenario. You're communicating at 9600 bps with a BBS. That's 960 bytes per second, so each 128-byte XMODEM block takes about .13 second. Suppose it took 1 second for each acknowledgment to be computed and sent. The sender would then spend .13 second sending, then 1 second waiting, then .13 second sending, then 1 second waiting, and so on--transmitting the file only about 12 percent of the time. While the average situation isn't that bad, it's close. For example, many communications programs save each block to disk as it's received, and that disk access delays the acknowledgment; changing the block size from 128 to 1024 bytes would also reduce the number of disk accesses by a factor of 8.
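The arithmetic of that scenario can be worked out directly. The 1-second acknowledgment delay is the column's deliberately exaggerated assumption:

```python
# Working the exaggerated scenario above: 9600 bps (~960 bytes per
# second), 128-byte blocks, and an assumed full second per acknowledgment.

rate = 960                      # bytes per second at 9600 bps
block = 128                     # XMODEM block size in bytes
ack_delay = 1.0                 # assumed seconds per acknowledgment

send_time = block / rate        # ~0.13 s to send each block
efficiency = send_time / (send_time + ack_delay)
print(f"{efficiency:.0%}")      # -> 12%
```

Rerun it with a 1024-byte block and the same delay and the sending fraction jumps past 50 percent, which is the whole argument for bigger blocks.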
Catching Some Z's
Today's protocols allow for blocks ranging in size from 128 bytes to 1024 bytes. YMODEM, ZMODEM, and the CompuServe Quick "B" protocol are three popular examples. Your communications software probably allows you to set your block size, but the interesting question is, What is it already set to? I use Crosstalk for Windows extensively, and I like it a lot, but I'd used it for about a month before I realized that it set all protocol block sizes to 128 bytes by default. To see just how important block sizes are, I transferred several large files from CompuServe using block sizes of 128 and 1024. The 128-byte block size averaged a throughput of 362 bytes per second; the 1024-byte block averaged 987 bytes per second. A stunning difference that didn't cost me a cent--but it sure saves me money in CompuServe charges.
Now, there's a caveat to understand about setting your block sizes large. If you have a noisy line and your protocol discovers that a 128-byte block has been garbled, the sender need only resend 128 bytes. But when lines are noisy and you're using 1024-byte blocks, every block with even a single bad bit in it requires that you resend 1024 bytes. So the rule in picking block sizes is this: The cleaner the line, the larger the block size. Experiment to find the best block size, and don't just accept the default block size. You'll probably find that local calls are more noise-free than long-distance calls--optical fiber lines notwithstanding.
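That tradeoff can be put in rough numbers. This is a back-of-the-envelope model of a stop-and-wait protocol, and the figures in it--the 0.2-second acknowledgment delay and the bit-error rates--are illustrative assumptions, so the outputs are directional only:

```python
# Rough model of the block-size tradeoff: bigger blocks waste less time
# on acknowledgments but must resend more bytes whenever noise hits.
# The ack delay and bit-error rates below are illustrative assumptions.

def expected_throughput(block_size, rate=960, ack_delay=0.2,
                        bit_error_rate=1e-6):
    """Rough bytes-per-second estimate for a stop-and-wait transfer."""
    p_ok = (1 - bit_error_rate) ** (block_size * 8)  # block arrives clean
    time_per_try = block_size / rate + ack_delay     # send time plus ack
    return block_size * p_ok / time_per_try          # useful bytes/second

for size in (128, 1024):
    for ber in (1e-6, 1e-3):    # a clean line vs. a noisy one
        print(size, ber, round(expected_throughput(size, bit_error_rate=ber)))
```

On the clean line the 1024-byte blocks win easily; on the noisy line they lose badly, because almost every big block contains at least one bad bit--which is the rule of thumb in the text.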
Calling a Timeout
A related performance tip has to do with timing. After the sender has sent the block of data, it will wait a specified amount of time for the ACK that means "I got the data OK; send me the next block" or the NAK that means "I didn't quite get that; please resend it." But the receiver can't acknowledge what it didn't get, so in case there's been a line hit that obliterates an entire block, the sender will only wait a certain amount of time for the receiver's response. If it doesn't get it, the sender assumes that the data was lost, and resends. The question of how long it waits is where timing comes in.
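The sender's side of that logic can be sketched as a retry loop. The ACK and NAK byte values are the standard ASCII control codes these protocols use; the `send` and `recv` channel functions here are hypothetical placeholders, not a real modem API:

```python
# Sketch of the sender's timeout logic: send a block, then wait a
# bounded time for ACK (success) or NAK (garbled; resend). No reply
# before the deadline is treated the same as a NAK.
import time

ACK, NAK = b'\x06', b'\x15'   # ASCII acknowledge / negative-acknowledge
TIMEOUT = 10.0                # seconds to wait before assuming loss
MAX_RETRIES = 10

def send_with_retry(send, recv, block):
    """Send one block, resending on NAK or on timeout."""
    for _ in range(MAX_RETRIES):
        send(block)
        deadline = time.monotonic() + TIMEOUT
        while time.monotonic() < deadline:
            reply = recv(timeout=deadline - time.monotonic())
            if reply == ACK:
                return True          # receiver got it; move to next block
            if reply == NAK:
                break                # garbled; resend immediately
        # fell through: timed out or got a NAK -- loop around and resend
    return False                     # give up after repeated failures

# Demo with a fake receiver that garbles the first two attempts.
replies = iter([NAK, NAK, ACK])
ok = send_with_retry(send=lambda b: None,
                     recv=lambda timeout: next(replies),
                     block=b'...')
print(ok)   # -> True, after two resends
```

The "sloppy" through "tight" settings described next are, in effect, different values of that timeout and of how patiently the sender waits before resending.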
Crosstalk for Windows, for example, allows you to set protocol timings to sloppy (wait a long time for acknowledgments), loose, normal, and tight. As before, a clean line can handle more strenuous timing than a noisy line, so finding your best settings will require some experimentation. I found that the best throughput I could achieve with sloppy timing was 894 bytes per second, but I got a throughput of 974 with tight timing. In both cases, I was doing 1024-byte block size transfers with a 9600-bps modem. That's 9 percent knocked off my CompuServe bill.
My final suggestion this month for speeding up your file transfer has to do with error-correcting modems. We've been talking about protocols such as XMODEM, YMODEM, and ZMODEM that let the computers on either end of a conversation make sure that the data transfer is error-free. Notice the word computers. There are programs running in your computer and the sender's computer that support the file-transfer protocol. It takes two to tango, so you've got to have both sides supporting the same protocol. But some modem manufacturers have taken a different tack. They've built a file-transfer protocol into the modems themselves. To see why, let's look at a non-PC application of data communications.
Once, I was doing some consulting for a doctor. I noticed that he had a printer and a modem sitting all by themselves off in the corner. I asked what the printer did.
"That sends us the results of our lab tests," he replied. "We used to have to wait for results in the mail, or we'd have to pester the lab on the phone. Now, the printer just comes to life a few times a day, and their computer uses our printer to deliver the lab test reports."
Nifty, I thought. The lab sold him a normal Okidata dot-matrix printer with a serial interface and a modem--a regular old PC-type smart modem. But a problem occurred to me--what about line noise? I'd hate to get a report that said, "CANCER DIAGNOSIS: P^%SKD##@ob@cb." Looking closely at the modem, I noticed that it had a label that said, "MNP Level 5/Error Free." The testing company uses modems with built-in file-transfer protocols. Such modems use protocols with small blocks, usually under 32 bytes in size. One way to tell if you're working with an error-correcting modem is to see if the text appears on your screen in spurts. The modems are examining the data in small groups, so, after acknowledging that the data is error-free, the data is released to the PC, which quickly puts it up on the screen.
If you have an MNP modem or one that supports V.42 or V.42 bis, you've got an error-correcting modem. MNP stands for Microcom Networking Protocol, and it's an error-detecting and -correcting standard developed by Microcom. V.42 is the name of a modem standard promulgated by the CCITT (the International Telegraph and Telephone Consultative Committee, part of the International Telecommunication Union, a United Nations agency). All the V. standards refer to modems. V.22 bis is the standard that most 2400-bps modems are built around, V.32 is a very popular 9600-bps standard, and V.24 is the standard that describes the serial ports on your PC.
Paying the Overhead
It might seem that having the modems do the hard work of file transferring couldn't be a bad thing. In fact, it's valuable in many cases, but the vast majority of phone lines (in the U.S., anyway) are fairly clear. And, of course, there's a price to pay--it takes time for the modems to do the error checking, and that's time that they're not transferring data. My experience is that the extra overhead of the modem error checking usually doesn't pay off.
Think about disabling error checking (it's sometimes called ARQ) if your modem has this built in. You can generally turn it off either with a DIP switch or by altering your modem's setup string to include the three characters &M0 (that last character is a zero). Again, my experimentation showed a best-case transfer of 974 bytes per second when error checking was disabled versus 894 bytes per second when it was left on.
What about when you do have noisy lines? Should you disable error checking and set your protocol block size small, or should you let the modems handle the error checking and use the maximum protocol block size? Definitely the latter, for two reasons. First, modem protocols have less overhead than most PC file-transfer protocols. Second, my unscientific tests with noisy phone lines have shown that modem protocols recover from noise much better than PC file-transfer protocols do. Given the choice, let the modem do it.