SLTF Consulting
Technology with Business Sense



Desktop engineers: beware pitfalls when transitioning to embedded systems

Scott Rosenthal
June, 1998

Writing software for the embedded world is foreign territory to most programmers accustomed to creating software for desktop PCs. This isn't to say that programming for the desktop is easier or harder than for an embedded system; rather, the two worlds have many differences, and the transition from desktop to embedded can be rocky.

The proper way to approach the transition is with an open mind while setting aside preconceived ideas about the computer. For the purposes of this column, I'm defining an embedded system as one based on an 8- to 16-bit processor with a memory space of less than a megabyte. Embedded PCs don't count in the world I'm defining (even though they have their own idiosyncrasies). A typical system might consist of an 8051 with an external ROM, or a PIC processor with 4k bytes of ROM space. These smaller processors are the workhorses that go into the mundane things we take for granted such as thermostats, handheld meters and even medical diagnostic equipment. No multimedia stuff exists here.

Knowing how to code in C or assembly, which suffices in the desktop world, isn't enough when it comes to deeply embedded systems, which all deal with hardware in one form or another. The simplest hardware interfacing is figuring out how to get a program down into the chip on the board; remember, in this world, boards don't have disks. Likewise, these systems usually offer no Ethernet connection or TCP/IP stack, and even an RS232 downloader probably doesn't exist. In other words, before writing one line of code, you must figure out how to get it into the target (the embedded system) and then how to debug and test it.

Tools that cost

One of the biggest eye openers to desktop programmers is the cost of tools for the embedded world. In the high-volume commercial sector, it's very difficult to spend more than $500 on a software tool such as a compiler. In contrast, it's not unreasonable to spend $2000 or more on a compiler in the embedded world. Next, with a desktop program, running software you just wrote is an easy way to debug it. Yet many times the only sane way to debug embedded software is with an emulator. Again, these instruments cost money, anywhere from $5000 to approximately $40,000.

Once you write and debug the software, the next issue is figuring out how to load it onto the target system. Doing so generally requires a device programmer that, depending on the situation, might need to handle UV EPROMs, flash memory or the selected processor. These instruments cost anywhere from $200 to $5000. If you prefer to work with UV EPROMs, an erasing light is also necessary, and low-cost versions run approximately $100. Then, if the target system uses surface-mount parts and its memory or processor is soldered to the board (typical in SMT designs), you'll need at minimum a surface-mount rework station costing roughly $2000 plus training.

So, without writing a lick of code, you've now accumulated a benchtop of equipment just to support the programming habit. And remember, this investment comes on top of a first-class PC for doing the actual code writing, compiling and linking. And yes, I almost forgot: if you switch the target processor, it's necessary to repurchase most of these tools, unlike when dealing with the desktop PC and its legacies (for good or bad).

Assuming you've bought all these tools and have progressed down the learning curve with them, the next hurdle in the embedded world is understanding that there probably isn't any operating-system support to call on. Thus, functions such as serial communication, memory storage and display options all require you to develop custom software "drivers." To do so, you'll need datasheets for the ICs on the target system, a memory and I/O map along with details about how the hardware devices function together. Do they employ interrupts? If so, which ones? Are these interrupts edge or level sensitive? How do you initialize or rearm the interrupts? How do you mask them? Are there any timing constraints on the software talking with the device? Just as with the old Outer Limits TV show, you now control the horizontal and the vertical.
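As a minimal sketch of such a driver, here is a polled serial transmit routine. The register names and the ready bit are hypothetical; on real hardware they would be fixed memory-mapped addresses taken from the IC's datasheet, but plain variables stand in here so the sketch compiles anywhere.

```c
#include <stdint.h>

/* Hypothetical UART registers. On real hardware these would be fixed
 * memory-mapped addresses, e.g.
 *   #define UART_DATA (*(volatile uint8_t *)0x4000)
 * Plain variables stand in here so the sketch compiles anywhere. */
static volatile uint8_t UART_DATA;
static volatile uint8_t UART_STATUS = 0x01;   /* bit 0: transmitter ready */

#define TX_READY 0x01u

/* Polled transmit: spin until the transmitter is free, then write. */
static void uart_putc(uint8_t c)
{
    while (!(UART_STATUS & TX_READY))
        ;                        /* busy-wait; fine for a simple driver */
    UART_DATA = c;
}

static void uart_puts(const char *s)
{
    while (*s)
        uart_putc((uint8_t)*s++);
}
```

Polling is the simplest approach; an interrupt-driven driver with a transmit queue is the usual next step once timing constraints demand it.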

Before main()

Without any OS, initialization also troubles many desktop-turned-embedded programmers. When writing an application for a desktop system, you never worry about what happens before it runs. In an embedded system, when writing in C you must know what happens before main(). For example, it's necessary to set up memory areas, clear uninitialized data areas to zero, copy the initial values of static RAM locations out of ROM (the only place that information survives power-off!), initialize the floating-point package, set up stack and heap pointers and initialize any other hardware devices that just can't wait until main() runs. Most compilers come with prototype assembly routines (remember, no C before main()) for handling these chores, but they're only prototypes. The programmer must go in and configure them for a particular situation.
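The two core chores reduce to a pair of copy loops. A real startup file does this in assembly, with the region boundaries supplied by linker symbols whose names vary by toolchain; in this illustrative C sketch, plain arrays stand in for those regions.

```c
#include <stdint.h>
#include <stddef.h>

/* What startup code must do before main(), sketched in C. A real part
 * does this in assembly, with region addresses supplied by linker
 * symbols (names vary by toolchain). */

/* Copy the initial values of initialized statics from ROM into RAM. */
static void startup_copy_data(uint8_t *ram, const uint8_t *rom, size_t n)
{
    while (n--)
        *ram++ = *rom++;
}

/* Clear the uninitialized-data area: C says statics start at zero. */
static void startup_zero_bss(uint8_t *ram, size_t n)
{
    while (n--)
        *ram++ = 0;
}
```

On a real target these run before the stack-dependent parts of the C library are touched, which is exactly why they live in assembly.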

A word to the wise: make sure you set all RAM locations to known values on power-up. This step means clearing all uninitialized RAM areas to zero and setting a constant value into the stack and heap areas. This small step ensures that a program always starts from the same state. One of the hardest problems to track down is a system using a location prior to its initialization; avoid it by initializing everything. By initializing the stack and heap, you can then use a debugger to find out how much of those areas the system has used and how much room is left. This memory-use study is mandatory before shipping a product. If your customer buys a product that ends up needing more memory, he can't just pop open the box and add more.
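One common way to do that memory-use study is to paint the stack region with a fill pattern at power-up, then later scan for how much of the pattern survived. A minimal sketch, where 0xAA is an arbitrary but distinctive choice of pattern:

```c
#include <stdint.h>
#include <stddef.h>

#define STACK_FILL 0xAAu        /* arbitrary but distinctive pattern */

/* At power-up, paint the whole stack region with the pattern. */
static void stack_paint(uint8_t *region, size_t n)
{
    while (n--)
        *region++ = (uint8_t)STACK_FILL;
}

/* Later, count the bytes that still hold the pattern. Assuming the
 * stack grows down from the top of the region, the untouched bytes
 * at the bottom are the headroom the program never used. */
static size_t stack_headroom(const uint8_t *region, size_t n)
{
    size_t unused = 0;
    while (unused < n && region[unused] == STACK_FILL)
        unused++;
    return unused;
}
```

Run the product through its worst-case scenario first; the headroom figure is only as good as the exercise that produced it.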

Interrupts in a flash

A discussion of interrupts could take far more space than this column allows. Suffice it to say that the biggest problems I see with desktop people who start programming embedded systems fall into this area. Interrupts are simple to understand, but they happen asynchronously to all other operations; thus, they can strike in the middle of a C statement and cause unexpected problems.

Basically, interrupts don't know about the "borders" of a C statement. On an 8051 and many other 8-bit processors, assigning one integer to another is at least a two-step process: first move the MSB and then the LSB. An interrupt can, and inevitably will, occur between these two steps. If the ISR (interrupt service routine) manipulates the source integer, such as decrementing it, this action can corrupt the copy whenever the decrement carries across the byte boundary, such as from 200H to 1FFH: instead of 1FFH, the destination ends up with 2FFH (the old MSB paired with the new LSB). Be careful about what code does within an ISR: minimize its operations to save time, and disable and reenable interrupts around any code that touches data the ISR also manipulates. Also, as a rule of thumb, don't perform floating-point math in an ISR. Not only does the time penalty violate the principle of finishing an ISR's work as fast as possible, but many floating-point routines aren't reentrant.
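The usual cure is a critical section around the multi-byte access. In this sketch the interrupt-control macros are hypothetical stand-ins (on a real 8051 compiler they might be `EA = 0` and `EA = 1`), stubbed as a counter so the code runs anywhere:

```c
#include <stdint.h>

/* Hypothetical interrupt control, stubbed so the sketch is runnable.
 * On real hardware these would disable/enable the interrupt system. */
static int irq_off_depth = 0;
#define DISABLE_INTERRUPTS() ((void)irq_off_depth++)
#define ENABLE_INTERRUPTS()  ((void)irq_off_depth--)

static volatile uint16_t tick_count;    /* decremented inside an ISR */

/* Safe 16-bit read on an 8-bit CPU: the assignment compiles to two
 * byte moves, so bracket it to keep the ISR from firing in between. */
static uint16_t read_ticks(void)
{
    uint16_t copy;
    DISABLE_INTERRUPTS();
    copy = tick_count;
    ENABLE_INTERRUPTS();
    return copy;
}
```

Keep the bracketed region as short as possible; every cycle spent with interrupts off adds latency to the very ISR you're protecting against.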

Just reset it

Finally, what's likely the most common problem novice embedded programmers confront, error handling, requires a change of mindset. Embedded systems in the 90s don't come with reset buttons or <ctrl><alt><delete> key sequences. The software must work in all cases or fail gracefully and safely. Obviously, that's not the case with desktop computers. So when designing an embedded system, plan for failure. Use a watchdog timer to catch a locked-up processor. Plan for power being turned off in the middle of a write to nonvolatile memory. Plan for data corruption and how to recover without user intervention. Plan for handling divide-by-zero problems. Don't stop planning because the system seems good enough. The market is a tough place to debug your software. PE&IN
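A minimal sketch of the watchdog pattern follows. The kick function is a hypothetical stand-in, modeled as a counter so the sketch runs; real hardware restarts the timer by writing a device-specific register.

```c
/* Hypothetical watchdog kick, modeled as a counter so the sketch runs.
 * On real hardware this writes a device-specific register. */
static unsigned long watchdog_kicks = 0;
static void watchdog_kick(void) { watchdog_kicks++; }

/* Main-loop pattern: kick the watchdog exactly once per pass, after
 * every task has run. Each task must be bounded in time; if any task
 * hangs, the kicks stop and the hardware resets the processor. */
static void main_loop_pass(void)
{
    /* poll_inputs();    hypothetical bounded tasks */
    /* update_outputs(); */
    watchdog_kick();
}
```

Resist the temptation to kick the watchdog from a timer ISR; an interrupt can keep running long after the main loop has locked up, which defeats the whole point.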

Copyright © 1998-2012 SLTF Consulting, a division of SLTF Marine LLC. All rights reserved.