When compilers do magic

What is a compiler? Ask an average engineer and you will get an answer something like: “A software tool that translates high-level language code into assembly language or machine code.” Although this definition is not incorrect, it is rather incomplete and out of date – so 1970s. A better way to think of a compiler is: “A software tool that translates an algorithm described in a high-level language into a functionally identical algorithm expressed in assembly language or machine code.” More words, yes, but a more precise definition.

The implications of this definition go beyond placating a pedant like me. They lead to a greater understanding of code generation – and just how good a job a modern compiler can do – and of its effect upon debugging the compiled code …

Some time ago, I put forward an argument that a modern compiler could [under specific circumstances] produce better code than a skilled human assembly language programmer. To illustrate my point, I will show another example of the same phenomenon. Consider this code:

#define SIZE 4
char buffer[SIZE];
int i;

for (i = 0; i < SIZE; i++)
   buffer[i] = 0;

This is very straightforward. One would expect a simple loop that counts around four times using the counter variable i. I tried this, generating code for a 32-bit device, and stepped through the code using a debugger. To my surprise, the code only seemed to execute the assignment once, not four times. Yet the array was cleared correctly. So, what was going on?

A quick look at the underlying assembly language clarified matters. The compiler had generated a single, 32-bit clear instruction, which was considerably more efficient than a loop. The loop variable did not exist at all. I experimented and found that, for different values for SIZE, various combinations of 8-, 16- and 32-bit clear instructions were generated. Only when the array size exceeded something like 12 did the compiler start generating a recognizable loop, but even that was not a byte-by-byte clear. The operation was performed 32 bits at a time.

Of course, such optimized code is tricky to debug. Indeed, even today, some debuggers just do not allow debugging of fully optimized code. They give you an interesting choice: ship optimal code or debugged code. Realistically, I would recommend that initial debugging be performed with optimization wound down, to avoid such confusion and enable this kind of logic to be carefully verified. Later, verify the overall functionality of the code with aggressive optimization activated.

I was very impressed by how smart the compiler was. I know that it is not magic, but it sure looks like it.

Posted July 5th, 2010


About The Colin Walls Blog

This blog is a discussion of embedded software matters – news, comment, technical issues and ideas, along with other passing thoughts about anything that happens to be on my mind.

Comments


Commented on 7 September 2010 at 18:45
By Miroslaw M.

I would like to share a link to a short article about scripting embedded software builds with Python scripts. It supports compiling and building embedded software.

http://www.reembedded.com/index.php?option=com_content&view=article&id=73:embedded-compilation-by-python-scripts&catid=50:latest-embedded-news&Itemid=115&lang=en

    Commented on 7 September 2010 at 21:07
    By Colin Walls

    Thanks for the input. However, I am unclear how this is better than using a good IDE, like EDGE from Mentor Embedded, which is designed to do just this job.
