Data types and code portability
I recently attended a presentation about the challenges with porting code from 8/16-bit CPUs to 32-bit devices. The speaker addressed all the major issues, giving me a few things to think about. One area that particularly caught my attention was the thorny issue of C language data types. With the intention of making it easy to write portable code, the specification for (traditional ANSI) C explicitly avoids specifying the bit size of data types. The effect upon some embedded code is quite the reverse, as data size may be a concern …
In the presentation, the speaker pointed out that a compiler for an 8-bit CPU would probably size int to be 16 bits, so code would need to be modified to make any such int variable a short instead. I think that this is very short-sighted [no pun intended!]. A better approach is to replace all the standard data types with typedefs that indicate size – U8, U16, U32, S8, S16 … – and store these in a header file, then modify the code to suit. Porting the code to another compiler then just needs that header file to be edited appropriately.
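A minimal sketch of such a header, assuming a typical 32-bit target where short is 16 bits and int is 32 (the exact mappings are precisely what gets edited when moving to a new compiler):

```c
/* types.h -- hypothetical project-wide sized types.
   Only this file needs editing when the code moves to a
   different compiler or CPU. */

typedef unsigned char  U8;
typedef signed char    S8;
typedef unsigned short U16;
typedef signed short   S16;
typedef unsigned int   U32;
typedef signed int     S32;
```

On an 8-bit compiler where int is 16 bits, U32/S32 would instead be mapped to unsigned/signed long, and no other source file would change.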
I was rather taken aback by an example used in this presentation. It showed a loop which simply incremented a variable (on a timer tick). A particular function was called each time the value wrapped around to zero, so changing from 16 to 32 bits would result in it being called every 4 billion ticks instead of every 65 thousand. Although this is true, it is appalling programming style. Clearly the variable should be loaded with the number of ticks and counted down, as that code would be no less efficient, but infinitely more readable.
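The count-down style might look something like this – a sketch only, with a hypothetical periodic function and a hypothetical 65536-tick interval standing in for the real ones:

```c
/* Stub standing in for the real periodic function. */
static unsigned calls;
static void do_periodic_work(void) { ++calls; }

#define TICKS_PER_CALL 65536UL   /* hypothetical interval */

static unsigned long ticks_remaining = TICKS_PER_CALL;

/* Called on every timer tick. Reloading an explicit count makes the
   interval independent of the counter's width, so re-sizing the
   variable during a port cannot silently change the behavior. */
void timer_tick(void)
{
    if (--ticks_remaining == 0UL)
    {
        ticks_remaining = TICKS_PER_CALL;
        do_periodic_work();
    }
}
```

The decrement-and-test is no more expensive than the original increment, but the interval is now stated once, in one place, instead of being implied by the width of a variable.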
Comments
Colin,
I agree completely. In fact, I don’t write code without including C99’s stdint.h (or “rolling my own” equivalent when necessary, for example when the code is not compiled as C99).
I’ve noticed that practically all of the third-party software I work with – such as the Quantum Platform (QP) state machine framework or Micrium’s uC/OS variants – is written entirely with these sized data types (e.g., uint16_t, etc.). The main reason is that these software packages are portable to a huge range of processors and cross-development toolsets, from the very small to the very large, and this approach virtually eliminates the pain of porting.
Another nice thing about stdint.h is the set of compile-time constants that identify the range of each type, e.g., INT8_MIN, UINT32_MAX, etc.
There are also a couple of lesser-known features of stdint.h — minimum-width integer types (e.g., uint_least16_t) and fastest minimum-width integer types (e.g., uint_fast8_t).
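A brief sketch of both ideas together — the range macros in a saturating add, and the least/fast types alongside it (the function name and the specific types chosen are illustrative, not from any particular library):

```c
#include <stdint.h>

/* Saturating 16-bit add: the INT16_MIN/INT16_MAX macros replace
   magic numbers and stay correct on any conforming compiler. */
int16_t sat_add16(int16_t a, int16_t b)
{
    int32_t sum = (int32_t)a + (int32_t)b;  /* widen to avoid overflow */
    if (sum > INT16_MAX) return INT16_MAX;
    if (sum < INT16_MIN) return INT16_MIN;
    return (int16_t)sum;
}

/* uint_least16_t: the smallest type with at least 16 bits (saves RAM
   on small CPUs); uint_fast8_t: whatever type the target handles
   fastest that holds 8 bits -- often a full 32-bit register on ARM. */
static uint_least16_t sample_count;
static uint_fast8_t   loop_index;
```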
I’m always surprised how many shops are completely unaware of this very useful header file (or shops that don’t write their own).
Colin,
Another issue is the fact that chars are either signed or unsigned by default – but the C standard does not define this, it leaves this up to the compiler writer to pick one. This can result in some really surprising side effects when porting code. I noticed this recently as armcc and gcc use different defaults!
Very good point Russ. Obviously a case for U8 and S8. I guess that moving code between those two compilers is a common activity, so the problem is probably very widespread.
I second stdint.h, and I’m surprised that the speaker didn’t recommend Dan’s approach. I’m tempted to call it “professional malpractice” (a term Capers Jones got me started on) not to use the stdint.h approach, as I’ve run into reuse problems with in-house and third-party software libraries where each developer decided to define their own header file containing typedefs such as U8, uint8_t, BYTE, etc. This causes a lot of redefinition errors.
I basically agree. I don’t want to say which company the speaker worked for, but I would not want to be publicly seen to criticize them at the conference or here.