Why don't programmers think like normal people?

Scott Rosenthal
May, 1993

One of the more interesting aspects of writing this column is the variety of sources I encounter for story ideas. This month's effort is the result of a frustrating experience I recently had with a PC-based CASE tool I evaluated to enhance code development. Although no package is perfect, this one's $8500 price tag led me to expect fewer problems than I got. Rather than simply complaining, though, I've chosen to use the experience as a wake-up call to ensure that my own products don't fall into any of the same traps. The installation and usage problems I experienced with this CASE tool seem to fall into several recurring categories. I'll list them and also give some examples of how to avoid these situations in your programs and products.

Installation blues

The first problem I encountered was simply installing the program on a hard disk. Although software installation for an embedded system typically means plugging in EPROMs, it's not uncommon for an embedded system to need a PC-based control program that the user must install. To load the CASE software, I not only had to attach a security lock to the PC's printer port, but I also had to create a text file containing the eighteen 10-digit numbers making up the lock's security codes. If the manufacturer insisted on this lock, why couldn't it have entered this data for me?

As aggravating as this situation was, it wasn't the first time I've had problems with installations. An engineering program I once owned required a particular type of keyboard; to make matters worse, the vendor supplied the list of acceptable keyboards only after you purchased the software. Next, a schematic-capture package I bought worked only with a serial mouse because its security lock had to attach to the same serial port. Going even further, another engineering program I wanted to run on the same computer required a 3-button bus mouse, so now I have two mice on my computer! The most bizarre case, however, involved a vendor that required us to reboot the computer with different AUTOEXEC.BAT and CONFIG.SYS files depending on which of its programs we wanted to run.

The bottom line for me is that the smoothest installations are the ones that proceed automatically, requiring little or no user interaction. If the software must know whether a certain driver is installed on the computer, it should check for itself rather than automatically asking the user. Likewise, if your software requires modifications to the PC's configuration, don't blindly change the CONFIG.SYS file; look at what's already there to see whether it really needs alteration. For example, if a package requires a setting of FILES=20, don't change the file if you find it already contains the line FILES=30. Your program would run fine after the change, but you'd likely disturb someone else's.
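For illustration, here's a minimal sketch in C of how an installer might apply that rule. The file path, the FILES=20 requirement and the function names are assumptions made up for the example, not anything from the CASE vendor; the point is simply to read what's already in CONFIG.SYS before deciding to touch it.

    /* Sketch: check CONFIG.SYS before modifying it.  CONFIG_PATH,
     * FILES_NEEDED and ensure_files_setting() are illustrative names. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <ctype.h>

    #define CONFIG_PATH  "C:\\CONFIG.SYS"
    #define FILES_NEEDED 20

    /* Case-insensitive "does s start with prefix?" so FILES=, Files=
     * and files= all count. */
    static int has_prefix_nocase(const char *s, const char *prefix)
    {
        while (*prefix) {
            if (toupper((unsigned char)*s++) != toupper((unsigned char)*prefix++))
                return 0;
        }
        return 1;
    }

    /* Returns 0 if the file already satisfies us, 1 if we appended a
     * line (remind the user to reboot), -1 if we couldn't write it. */
    int ensure_files_setting(void)
    {
        char line[256];
        int  current = 0;            /* value of the last FILES= line found */
        FILE *fp = fopen(CONFIG_PATH, "r");

        if (fp != NULL) {
            while (fgets(line, sizeof line, fp) != NULL)
                if (has_prefix_nocase(line, "FILES="))
                    current = atoi(line + 6);
            fclose(fp);
        }

        if (current >= FILES_NEEDED)
            return 0;                /* FILES=30 already there? leave it alone */

        fp = fopen(CONFIG_PATH, "a");
        if (fp == NULL)
            return -1;               /* tell the user; don't fail silently */
        fprintf(fp, "FILES=%d\n", FILES_NEEDED);  /* simplest fix for a sketch */
        fclose(fp);
        return 1;
    }

    int main(void)
    {
        switch (ensure_files_setting()) {
        case 0:  puts("CONFIG.SYS already OK - not touched.");             break;
        case 1:  puts("Added a FILES setting - please reboot first.");     break;
        default: puts("Couldn't update CONFIG.SYS - please edit by hand.");
        }
        return 0;
    }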

Dubious memory limits

After installing a program, the next hurdle is scraping together enough of that precious DOS memory to let it run. My CASE program requires more than 610k bytes of free DOS memory, while at the same time it uses extended memory. Although I'm not lamenting the good old days when programs were 40k bytes in size, I sometimes think that programmers need reminding about reality. Just because the PC has 640k bytes of DOS address space doesn't mean a program can use all of it. People need this memory for other applications, too, especially networking software, and it's not acceptable for a program to stumble on a computer that still has at least 512k bytes of memory available. A simple solution is to set an arbitrary lower memory limit that everyone can live with, such as 512k bytes, check for it at startup (see the sketch after this list), and if you need more memory, remember these options:

  • Overlays: These structures provide an easy way to free up main memory at the expense of performance. Although your first impulse is to avoid anything in a program that reduces performance, two factors soften the impact of overlays. First, few applications need access to the entire program at all times, so break the program into pieces that you load as needed. Second, most PCs incorporate disk caches that reduce the number of disk accesses the overlays require.
  • Expanded memory: This technique uses a window to allow standard DOS programs to access memory above the 1M-byte 8088 limit. At my company we once inherited the source code for a regression analysis program, but it could take literally days to run. One way we modified the program to cut down the execution time was to keep all 4M bytes of data in memory at all times. We used expanded memory as a data reservoir and eliminated disk accesses.
  • Flat programming space: New 32-bit compilers allow you to write programs as if the PC had a flat address space of many megabytes. Not only do these compilers open up essentially unlimited memory, the resulting programs also execute faster in the '386/486 native mode. Be careful with these compilers if your application does a significant amount of screen or disk I/O, though, because 32-bit programs grind to a halt switching in and out of protected mode.
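Whichever route you take, the 512k-byte floor mentioned above only helps if the program actually tests it. Here's a minimal sketch of the idea: grab the worst-case working storage up front and complain clearly if it isn't there, instead of dying halfway through a session. The WORKSPACE_BYTES figure is a made-up stand-in, and on a real-mode DOS compiler anything over 64k bytes would need the compiler's far-heap routines rather than plain malloc().

    /* Sketch: enforce a memory floor at startup.  WORKSPACE_BYTES is a
     * hypothetical worst-case working set, not a figure from the column. */
    #include <stdio.h>
    #include <stdlib.h>

    #define WORKSPACE_BYTES 60000U   /* kept under 64k so plain malloc()
                                        works with a real-mode compiler */

    int main(void)
    {
        void *workspace = malloc(WORKSPACE_BYTES);

        if (workspace == NULL) {
            fprintf(stderr,
                    "Not enough free DOS memory to run.\n"
                    "Free up conventional memory (unload TSRs or drivers) "
                    "and try again.\n");
            return EXIT_FAILURE;
        }

        /* ... the real program runs here, living within its stated limit ... */

        free(workspace);
        return EXIT_SUCCESS;
    }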

Speed vs. reality

The last area where I consistently have problems with software is performance. I wish everyone at my company had the fastest computers money can buy; then I wouldn't have to deal with software that runs as if it's stuck in molasses. Recognize that users don't always have the newest and fastest computers. I once tried a database program that was extremely easy to use, but it could take as long as 6 min to complete a search that our previous database performed in 15 sec. One reason for the slow operation was that the program updated the screen form with every record it searched while looking for the one particular record. Because screen updates take time, create programs that write to the screen only when necessary. The database wrote to the screen to show me that it was still active, but if it had done so only every tenth record, maybe we'd still be using it today.
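As an illustration, here's a sketch of that kind of throttled status display. The interval of 10 echoes the every-tenth-record idea above; the record count and the match test are hypothetical, since I obviously don't have the database vendor's code.

    /* Sketch: repaint the "still alive" message only every Nth record so
     * screen writes stay off the critical path.  demo_match() and the
     * record count are made-up stand-ins. */
    #include <stdio.h>

    #define STATUS_INTERVAL 10L     /* every tenth record, per the text */

    long search_records(long num_records, int (*matches)(long))
    {
        long i;

        for (i = 0; i < num_records; i++) {
            if (i % STATUS_INTERVAL == 0) {
                printf("\rSearching record %ld of %ld...", i + 1, num_records);
                fflush(stdout);          /* \r lines don't flush on their own */
            }
            if (matches(i)) {
                printf("\rFound it at record %ld.            \n", i + 1);
                return i;
            }
        }
        printf("\rNo match in %ld records.            \n", num_records);
        return -1L;
    }

    static int demo_match(long i) { return i == 8765L; }   /* hypothetical */

    int main(void)
    {
        search_records(20000L, demo_match);
        return 0;
    }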

Another way programs easily get bogged down is keyboard scanning. If you must write applications that continuously look for user keypresses, be careful how you perform this vital function. If the program looks at the keyboard every time through the main loop, the scanning can exact a significant penalty on execution speed. Instead, check the keyboard every nth time through the loop. Done right, the program still responds to the operator in less than a second, but the scanning has minimal impact on overall execution.
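Here's a sketch of that pattern, using the kbhit() and getch() calls from the DOS compilers' <conio.h>. The interval, the choice of Esc as the exit key and do_one_unit_of_work() are all assumptions for the example; tune the interval so response still feels instant on your slowest target machine.

    /* Sketch: poll the keyboard only every Nth pass through the main loop.
     * do_one_unit_of_work() and handle_key() are hypothetical stand-ins. */
    #include <conio.h>      /* kbhit(), getch() on DOS compilers */

    #define KEY_CHECK_INTERVAL 100   /* tune against the slowest target PC */

    static void do_one_unit_of_work(void) { /* ... the real job ... */ }
    static void handle_key(int key)        { (void)key; /* ... dispatch ... */ }

    int main(void)
    {
        unsigned count = 0;

        for (;;) {
            do_one_unit_of_work();

            if (++count >= KEY_CHECK_INTERVAL) {
                count = 0;
                if (kbhit()) {              /* only now do we pay for a scan */
                    int key = getch();
                    if (key == 27)          /* Esc: operator wants out */
                        break;
                    handle_key(key);
                }
            }
        }
        return 0;
    }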

I firmly believe that programmers should have the fastest computers available for developing software, but they should use the slowest computers commonly available for testing their programs. Some programming styles minimize execution time, and if you had to wait on a slow computer, you can bet you'd quickly learn to write faster code. Whatever the application, code size and execution speed are important.

All of these gripes boil down to one central and hopefully familiar issue. The reason for creating a product is to meet a user's needs. If a customer must expend more energy fighting with your product than using it, you'll soon get what you deserve: unemployment insurance. PE&IN


