When compilers do magic
What is a compiler? Ask an average engineer and you will get an answer something like: “A software tool that translates high-level language code into assembly language or machine code.” Although this definition is not incorrect, it is rather incomplete and out of date – so 1970s. A better way to think of a compiler is: “A software tool that translates an algorithm described in a high-level language into a functionally identical algorithm expressed in assembly language or machine code.” More words, yes, but a more precise definition.
The implications of this definition go beyond placating a pedant like me. It leads to a greater understanding of code generation – and just how good a job a modern compiler can do – and the effect upon debugging the compiled code …
Some time ago, I put forward an argument that a modern compiler could [under specific circumstances] produce better code than a skilled human assembly language programmer. To illustrate my point, I will show another example of the same phenomenon. Consider this code:
#define SIZE 4
unsigned char buffer[SIZE];
int i;

for (i = 0; i < SIZE; i++)
    buffer[i] = 0;
This is very straightforward. One would expect a simple loop that counts around four times using the counter variable i. I tried this, generating code for a 32-bit device, and stepped through the result with a debugger. To my surprise, the code only seemed to execute the assignment once, not four times. Yet the array was cleared correctly. So, what was going on?
A quick look at the underlying assembly language clarified matters. The compiler had generated a single, 32-bit clear instruction, which was considerably more efficient than a loop. The loop variable did not exist at all. I experimented and found that, for different values of SIZE, various combinations of 8-, 16- and 32-bit clear instructions were generated. Only when the array size exceeded something like 12 did the compiler start generating a recognizable loop, but even that was not a byte-by-byte clear. The operation was performed 32 bits at a time.
Of course, such optimized code is tricky to debug. Indeed, even today, some debuggers just do not allow debugging of fully optimized code. They give you an interesting choice: ship optimal code or debugged code. Realistically, I would recommend that initial debugging be performed with optimization wound down, to avoid such confusion and enable this kind of logic to be carefully verified. Later, verify the overall functionality of the code with aggressive optimization activated.
I was very impressed by how smart the compiler was. I know that it is not magic, but it sure looks like it.
Posted July 5th, 2010, by Colin Walls