Don't Forget Bit Basics While Using an HLL
Scott Rosenthal
September, 1992
In my opinion, computer programming is nothing more than the manipulation of numbers.
Programmers programmed the first computers by entering series of numbers which in turn
instructed the computer on how to manipulate other numbers. Even though some people call
these numbers "data", the computer contains nothing but bits (ones and zeros)
that we humans group into meaningful information. Computers use numbers to represent
abstract ideas. A simple example of this is alphanumerics. You can't directly express the
letter 'B' in a computer but you can instead use a numeric representation such as ASCII
(0x42 or 42 Hex). Embedded systems use numbers to represent the amplitude of an audio
signal, the depth of the water beneath the keel of a sailboat, and the manifold pressure
in an automobile. Numbers are so essential to programming, especially embedded-systems
programming, that I think it's worthwhile to explore techniques that make program code
easier to write while also reducing code size and execution time.
Bits and Bytes and Words, Oh My!
Just so that we all use the same terminology, let me review some of the basic number
representations used in embedded systems. A Bit is the smallest information unit in a
computer. It represents, in computer speak, either a 1 or a 0, which also happens to be
the only two digits in the binary, or base-two, number system. This was drilled into me by
my sixth-grade teacher (thank you, Mr. Campbell), who spent a half hour switching the room
lights off and on while asking, "Why is binary used in computers?" We weren't
dumb, but sometimes the obvious just seems too simple!
Binary is an inconvenient numbering system in which to express larger numbers. As an
example, take the number 987 and convert it into binary: 1111011011. We humans tend to
confuse long strings of numbers and look for ways to group the information into manageable
chunks; we arrange our seven-digit telephone numbers as groups of three and four, and we
group our nine-digit Social Security numbers into groups of three, two, and four. If we regroup
this binary number into groups of three, it now looks like this: 1 111 011 011. This is
easier but we can make it even simpler. The grouping by threes is really an octal (base
eight) grouping and we can now rewrite the number as 1733Q (Q means octal).
Many memory systems group bits together into multiples of eight called bytes. You can
use octal to represent a byte's value, but it takes three digits (two groups of three bits
and one group of two). A more convenient grouping is by fours, which requires just
two digits. This is hexadecimal, base 16 (also known as hex). Hexadecimal uses the digits
0 to 9 and A to F to represent numbers from 0 to 15 (four bits). We can now write our 987
as 3DBH (H means hex) or, in C terminology, as 0x3db. The point to keep in mind is that
octal and hex are just representations of the same binary pattern; changing any one bit of
the number changes its value in every numbering system. Decimal has no easy translation
between a bit change and a digit change, and you should avoid it entirely when
programming embedded systems. Don't extend this to the users of your system, though!
Most normal people don't think in computerese, so you'll have to translate these internal
number representations into normal, everyday usage. Also, even though computers start
counting from zero, don't make your customers count that way. People start counting
from one. Always! Enough of the simplicity.
Bit Juggling
Now that we know that all numbers in computers are just sequences of bits, we can use
this information to make our number manipulation chores easier than normal math allows.
Computers are generally very slow in doing multiplications and divisions and therefore you
should try to avoid these instructions whenever possible. Most computers allow operations
on the individual bits of a number, such as setting and clearing. On a computer without
direct bit-control instructions, the 'or' and 'and' operations do the same jobs. Using the ASCII
character set as an example, you can convert an upper case letter, such as 'B', to its
lower case equivalent, just by setting the sixth bit. This converts the ASCII number for
'B', 0x42, to its lower case value of 0x62. This is one of the ways that you can implement
the C function tolower():
#define tolower(c) ((c) | 0x20)
You can easily relegate other tasks to bit manipulations. Odd and even numbers are a
very simple case. Even numbers are those integers that are evenly divisible by two. You
can easily test to see if a number is even by checking the first bit. If it's set, the
number's odd; if it's clear, then the number is even. The standard way to round an odd
number up to the next even number is a code snippet such as this:
if (number & 1) number++;
This technique, when translated to its assembly language, requires a test, a
conditional jump, and an increment instruction. You can shorten this by a third using the
following:
number = (number + 1) & ~1;
This technique works by incrementing the number, which causes the odd number to become
even and the even number to become odd. After incrementing, the number has the first bit
turned off by the 'and'. This doesn't require a conditional jump and will always take the
same time to execute, unlike the first way. Realtime data taking is generally very
intolerant of varying execution times; adopting programming techniques that require fixed
periods of time will generally give you a better design with fewer quirks and problems.
Most language compilers support some type of modulo function that returns the remainder
for some divisor. This is analogous to the clock math we learned in elementary school. A
typical way that you can implement the modulo function is the following:
#define modulo(number, divisor) \
((number) - (((number) / (divisor)) * (divisor)))
If your task allows the divisor to be a power of 2 (e.g., 2, 4, 8, 16, etc.) then the
problem simplifies to just bit manipulation:
#define modulo(number, divisor) ((number) & ((divisor) - 1))
This is not only faster executing but it also requires less program memory.
As I said before, an embedded system's programmer must know how to juggle bits. But,
even programmers on large systems can benefit from intimate dealings with the bits and
bytes. I was recently porting a PC regression-analysis program to a Cray-2 (surprisingly
easy) and we came across a question about the arrangement of the Cray pointers. I needed
this information to make sure that my program could access any of the eight bytes making
up its basic word. I wrote a simple test program and looked at the pointer value. The
default decimal numbers made no sense but redisplaying them in hex showed that the pointer
used the upper three bits as the byte offset within the word. For numbers representing
internal program values or hardware data, get used to the computer's natural numbering
systems: binary, hex, and maybe even octal.
A lot of microprocessors and microcomputers don't have multiply and divide instructions
and even those that do sometimes take a horrendous amount of time to perform these
operations. For these micros that lack multiply and divide instructions, you'll need to
either develop or purchase an integer math package. For simple multiplications, though,
you'll find that specialized functions are much faster.
As you know, a multiplication by 2 is the same as shifting a binary number to the left
by one bit. The number 3, written in binary as 0011, multiplied by 2 yields 6, binary
0110. Shifts and adds both execute quickly on micros, and you can use this shifting and
adding to perform multiplications by constants quickly.
Integer multiplication is really just a shortcut for multiple additions. Five times x
is the same as x + x + x + x + x. You can regroup this as (4 * x) + x, which in our shift
and add world is now simply two shifts of x to the left and one add of x. Most C compilers
on a PC do simple constant multiplication in this manner. The following table shows you
the combinations of shifts and adds (in C nomenclature) for the numbers two to ten:
x * 2  = (x << 1)
x * 3  = (x << 1) + x
x * 4  = (x << 2)
x * 5  = (x << 2) + x
x * 6  = (((x << 1) + x) << 1)
x * 7  = (((x << 1) + x) << 1) + x
x * 8  = (x << 3)
x * 9  = (x << 3) + x
x * 10 = (((x << 2) + x) << 1)
These constant multiplications are convenient but just remember to make sure that you
watch out for overflow problems.
Johnny, Remember to Expand Your Polynomials
A lot of the instruments and software I design involve higher-order mathematics and, as
such, the program is constantly calculating polynomials. For example, my program might be
solving a third order polynomial of the form:
y = ax^3 + bx^2 + cx + d
and the most obvious way to write this in C code is the following:
y = a * x * x * x + b * x * x + c * x + d; (1)
An even less experienced programmer might have written it like this:
y = a * pow(x, 3.0) + b * pow(x, 2.0) + c * x + d; (2)
As I said before, we want to minimize multiplies and divides wherever possible,
especially if the above equation is in floating point! You can rearrange the above
polynomial and rewrite it in the following form:
y = (((a) * x + b) * x + c) * x + d; (3)
You can see that (3), a form known as Horner's rule, uses fewer math operations than (1):
three multiplications and three additions compared with six multiplications and three
additions. It also doesn't require any intermediate storage of each newly calculated term.
The number of multiplications and additions in (3) always equals the polynomial order,
while the number of multiplications in (1) equals the sum from 1 to the polynomial order. In one
instrument I designed, there was a fifth order polynomial divided by another fifth order
polynomial. If I had used (1), that would have been 30 multiplies, 10 additions, and one
division. Using (3), my program had 10 multiplies, 10 additions, and one division, which
is almost half the math operations! In other words, I doubled the speed of my calculations
just by rearranging the equation.
In my experience, programs always need to run faster. Compilers are fine for
translating some higher level ideas into instructions the computer can execute but they
aren't a substitute for proper program coding. No compiler is going to rearrange (1) as
(3) and likewise, no compiler will optimize a rounding function. PE&IN
Adapted from an article that appeared in Personal Engineering & Instrumentation
News.