The Chris Hallinan Blog

Mentor Embedded’s Open Source Experts discuss recent happenings in the Embedded Open Source world.

22 November, 2016

Virtually every conversation I have today with customers and prospects inevitably gets around to the subject of security.  Ask five developers what security means to them, and you will get five different answers.  One thing everyone seems to agree on is that we can’t accept the status quo.  For many of us who have been in the embedded software industry for the past decade and a half, security has historically been a back-burner issue.  “…it’s somebody else’s problem…” was a frequent quote heard in product development team meetings.  Not anymore.

The proliferation of connected devices across many industries has morphed into the explosive growth of IoT and IIoT.  By some estimates, the number of Internet-connected devices will reach into the many billions in the near future.  It is no longer sufficient to simply shut down unnecessary services and restrict system services to privileged users.  In fact, my guess is that many consumer-grade devices manufactured today still ship with a single user account, and that user is often the root user, with unlimited access to every service and device in the system.  It would take little more than a screwdriver and an adapter cable to gain full access to the box.  Some are even easier, leaving a service like telnet or ssh running on a user-facing network port.
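That single-root-account problem is easy to check for.  The script below is a minimal first-pass audit sketch of my own, not from the original post: it flags any extra UID-0 accounts and any login-capable accounts in a passwd-format file (the default path and the nologin/false convention are illustrative assumptions):

```shell
#!/bin/sh
# Minimal first-pass audit of a passwd-format file (illustrative only).
# Usage: audit.sh [passwd-file]; defaults to /etc/passwd.
PASSWD="${1:-/etc/passwd}"

# Any account besides root with UID 0 (field 3) deserves scrutiny.
awk -F: '$3 == 0 && $1 != "root" { print "extra uid-0 account: " $1 }' "$PASSWD"

# Accounts whose login shell is not nologin/false can open sessions.
awk -F: '$7 !~ /(nologin|false)$/ { print "login-capable account: " $1 }' "$PASSWD"
```

On a reasonably hardened device you would expect the second list to be short and the first to be empty.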

The good news is that moving beyond “no security” to something more robust is not necessarily a black art, nor a skill set reserved for a privileged few with PhDs in computer security.  There are several tools and techniques that are relatively easy to integrate into one’s embedded software image that can help you achieve two important goals: 1) analyze your system to determine where vulnerabilities might exist, and 2) add a basic level of system security on top of an already secure (Linux) operating system.

I recently presented a webinar titled “Securing Embedded Devices: From Boot to Applications” covering some high-level concepts, followed by a tour of some of the tools and utilities that are readily available to embedded developers using Mentor Embedded Linux based on technology from the Yocto Project.  You are welcome to view the webinar on-demand; we talked about topics from secure boot, trusted platform modules, SELinux, and SMACK to a variety of useful userland tools and utilities designed to analyze and protect your embedded devices from the bad guys.


22 October, 2015

You may have seen the press release by AMD announcing their new embedded R-Series SOC processors.  I recently had the opportunity to exercise one of these high-performance R-Series SOC platforms.  My goal was to experiment with OpenCV, and it seemed the newly announced R-Series with its advanced third-generation Graphics Core Next GPU would make an ideal platform for this task.  I used Mentor Embedded Linux for AMD, our recently released embedded Linux runtime and development tools targeted specifically at the AMD range of embedded processors.  You can find Mentor Embedded Linux Lite for AMD on the Mentor Graphics website: simply select the Download button labeled “Mentor Embedded Linux Lite: AMD R-Series x86” and request the download link.

Using Mentor Embedded Linux (MEL), based on technology from the Yocto Project, I was able to integrate OpenCV and the necessary support packages on the AMD R-Series Merlin Falcon board.  I will be the first to admit that I am certainly not a graphics guru, and what I know about OpenCV will fit in a paragraph. (Actually I’ve learned much from this little exercise!)  OpenCV is a large library of computer vision and imaging functions which has a range of applications.  You are no doubt familiar with some of the advanced graphics capabilities found in our smart phones, such as gesture recognition, face detection, etc.  In fact, I recently discovered that my Android phone often looks back at me, to discover if I’m still actively looking at the screen, and if so, it does not dim the display.  (If you haven’t figured out what that little eyeball icon in your Samsung S5 is, well now you know.)  These are examples of the many uses of computer vision and imaging technology.


I experimented with several algorithms available in OpenCV for generic imaging tasks.  One popular set of algorithms deals with edge detection.  OpenCV supports Canny Edge Detection, originally developed by John F. Canny in 1986.  It is a multi-stage algorithm for detecting edge gradients in both still images and video streams.  The figure here depicts my keyboard and the top edge of my trusty HP-41C calculator as captured by a simple web cam attached to my R-Series SOC platform.  The frame at the top left is the raw video after being processed by a Gaussian filter.  This is done to remove noise from the image before edge detection.  The frame labeled “Canny” is the direct result of the algorithm.  The other two frames represent different features of the Canny Edge detection algorithm, colorizing the angles of the edges and depicting their relative magnitude.

These basic algorithms can be used in a wide range of industrial and scientific applications, such as robotics, manufacturing quality control, security, human-machine interfaces and much more.

I experimented with several OpenCV algorithms on the AMD R-Series SOC board that I had access to.  One of the more interesting was motion detection, which chained together a series of algorithms to detect motion in a video stream.  First, each frame was converted to gray-scale and compared against a reference background computed from the video stream.  Then a difference was produced against that reference background, weighted and displayed.  I will leave you with a video of the motion detector that I experimented with.  The left side of the frame is the raw video feed, while the right side is the extracted foreground after processing.  Notice that even the shadow and some reflections from the glass surface of the background are visible in the detected feed.  These are almost undetectable by the human eye.  Very cool indeed.

9 October, 2013

This week I presented a live webinar titled “Linux Fast Boot: Techniques for Aggressive Boot Time Reduction”.  One of the slides talked about strace, and in the presentation, I referred to strace as my favorite tool.  Indeed, strace is easy to use, readily available on virtually every platform, and provides much more than just system call tracing, which was its original job.  One reason strace is so popular is that it is one of the few tools that can provide very useful diagnostic and profiling data on an application for which source code is not available.  In fact, the author of the man page states that “In some cases, strace output has proven to be more readable than the source.”

In its simplest form, strace is used to invoke an application, and it reports every system call invoked by the process and each signal received by the process until the process exits.  This information is displayed in real time on stderr.  Obviously, that means it seriously degrades performance, but in many cases, the issue you may be chasing is readily apparent using strace.  Have you ever launched a Linux process and had it just die without providing any error messages or log output?  I pull out strace as the first order of business in these cases.  Many times, the file or condition that the failing application requires is readily apparent.  strace has many options to control its behavior.  For example, if you suspect a failure due to a missing or nonexistent file, strace can be run to filter its output, showing only open calls:

$ strace -e trace=open your-application

But strace can be used for much more.  The -c flag to strace is used to provide a summary report of each system call, including the time spent, the number of calls, and the number of errors recorded for each system call.  In my Linux fast boot webinar, I pointed out that most developers would be surprised by the number of superfluous system calls that actually return errors.  They are not exactly “errors” in the sense that a programmer made a code error; rather, they are usually a result of these utilities being made generic and adaptable to a wide variety of environments.  For example, the ssh client on my Ubuntu 12.04 build machine generates 88 system call errors each time it is invoked: 47 errors on open, 20 on access, 12 on stat and a few more.  Looking deeper, the failures on open are due to the fact that ssh searches for a variety of different certs and keys as part of its authentication method, the idea being that any of them would satisfy the authentication requirement.  Only after trying several different cert and key names does it finally find my rsa key file.  If my goal were to optimize the startup time of this application, the first thing I would do is find a way to preconfigure it to look for only a single cert file, and eliminate all those ENOENTs on open.  (Maybe there already is a way; I wouldn’t be surprised.)
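As a sketch of how I sift through such a summary, the pipeline below pulls the error counts out of strace -c style output and ranks them.  The summary text here is a fabricated sample in the format strace prints (the numbers echo the ssh example above); in real use you would feed it the file written by something like strace -c -o summary.txt ssh yourhost true:

```shell
#!/bin/sh
# Rank system calls by error count from an `strace -c` style summary.
# Summary lines that recorded errors have six fields; field 5 is the
# error count and field 6 the syscall name.
awk 'NF == 6 && $5 ~ /^[0-9]+$/ { print $5, $6 }' <<'EOF' | sort -rn
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 40.00    0.000400          4       100        47 open
 20.00    0.000200          4        50        20 access
 10.00    0.000100          3        30        12 stat
 30.00    0.000300          3       120           read
------ ----------- ----------- --------- --------- ----------------
EOF
# prints: 47 open
#         20 access
#         12 stat
```

Calls that returned no errors (like read above) have only five fields in the summary, so the filter skips them automatically.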

Other cool strace tricks

Do you want to discover where the instruction pointer is at the time of the system call?

$ strace -i your-program

Or maybe you want a relative timestamp on each call, showing the time elapsed since the start of the previous system call:

$ strace -r your-program

Or maybe you want to learn how much time was spent in each system call:

$ strace -T your-program

The man page for strace provides details on all the capabilities and options of this powerful diagnostic and profiling tool.

Watch my Fast Boot Webinar

You can watch the recorded version of my webinar on-demand on the Mentor Graphics website.

If you just want the slides, go here:

24 September, 2013

Some things change, some do not

Recently I had an opportunity to present a technical session titled “Achieving Faster Boot Time with Linux”.  As I reviewed my slides for the last time before the session, it dawned on me that much of what I was about to present was little changed over the last few years.  I gave a similar presentation at Freescale Technology Forum several years ago.

Over the last few years, many elements of a basic embedded Linux system have either evolved, been replaced or are downright new.  Some examples of this rapid evolution of technology include the following:

  • Migration away from System V init to the more performance-oriented systemd initialization system
  • udev being rolled into systemd
  • Linux kernels starting with the major version number 3 instead of 2
  • Multi-core processors are now commonplace in embedded Linux devices rather than exceptional
  • UBI/UBIFS has largely displaced JFFS2 as the embedded Flash file system of choice

The list of things that are new or different could fill the rest of this page. But what struck me as I reviewed my slides was that much of what I described at the FTF event several years ago was still very relevant today.

One of the primary messages then and now is still true.  Linux boot time can be dramatically improved without substantial engineering effort.  Several of the techniques I presented are easy to achieve and don’t require extraordinary skill or super human effort.  Some techniques are tedious and may take some trial and error, such as reducing unnecessary features and device drivers in the Linux kernel.  But many are easy to apply and result in significant improvement in boot time.

Optimizing U-Boot requires at least some knowledge of U-Boot’s architecture and build system, but anyone competent in boot-loader firmware can make quick progress reducing U-Boot’s footprint, and thus its load and execution time.

Of course, achieving very aggressive Linux boot times down in the 1-2 second range or less requires advanced techniques and substantial skill in many aspects of Linux systems.  Each architecture and platform presents unique challenges.  Mentor Embedded has developed significant experience building fast boot Linux systems, using a variety of interesting techniques and technologies.  Many of these techniques and technologies can be found in production vehicles today.

Attached to this blog post is a link of the presentation I gave at the IESF 2013 event in Detroit this year.  Several attendees asked for a copy so I thought I’d post it here:

21 November, 2012


I recently attended the Embedded Linux Conference – Europe 2012 in Barcelona.  It was great to see all my open source friends (and meet some new ones) from around the globe.  By far, the most interesting and entertaining technical session that I attended was presented by Matt Porter.  It was titled “What’s Old Is New: A 6502-based Remote Processor”.  In summary, Matt developed a working prototype of a BeagleBone integrated with a 6502 microprocessor connected in such a way that the Linux RemoteProc facilities can manage it and treat it as a Linux remote processor resource.  The details were spectacular, especially for those in an age group similar to my own.

One of the first computers (though not actually the very first) that I owned was the popular Commodore VIC-20, which contained a 6502 processor.  I logged many hours playing Gorf on my VIC-20 connected to my Heathkit console model color TV.  I also used it as a learning platform, building custom hardware gadgets and software that worked with it.  I suspect some of you reading this had similar experiences with either this or a similar platform.  As it turns out, there is a vibrant hobbyist community around “retro” processors and tools such as the 6502.  However, I also discovered that the 6502 still has a vigorous commercial market.  The commercial home of the 6502, Western Design Center, makes some bold claims about volumes in the hundreds of millions of units annually!

The Details

The design was conceptually quite simple, though some of the actual techniques used to realize it turned out to have some interesting complexity.  I won’t go into those details here – you can find Matt’s presentation slides online, and a recording of his session covers the gory details.  (Note: as of the date of this writing, the videos have not yet been published.)

This project consisted of connecting a bare 6502 processor (using only 4 octal bus transceivers) directly to the BeagleBone’s TI Sitara™ AM335x system-on-chip (SOC).  It turns out that the AM335x SOC used on the BeagleBone has a cool subsystem called the Programmable Realtime Unit (PRU).  It is basically a dual-core, 32-bit RISC processing engine with single-cycle instruction execution (so long as there are no off-subsystem accesses).  This determinism makes the subsystem very suitable for a variety of tasks with typical real time characteristics.  In his presentation, Matt called it “The Ultimate Bitbanger”.

Bone 6502

Simplified Block Diagram

The PRU was used to implement a bit-banged memory subsystem for the 6502 processor. Reset was controlled by a GPIO pin, and the clock for the 6502 was supplied by an on-chip PWM.  In his presentation, Matt described the actual PRU assembler code he developed for the memory low-level read and write cycles that executed on a PRU 32-bit core.

(Re)Using Open Source

Perhaps one of the most interesting aspects of this project is just how much open source infrastructure already existed to make it a reality.  The Linux kernel’s RemoteProc framework was used to manage the “remote” 6502.  RemoteProc was also responsible for downloading the firmware to the 6502 and bringing the microprocessor out of reset.  This is the very same infrastructure that manages multi-core systems and downloads firmware to modules on platforms that contain remote processors.  A good example of the use of RemoteProc is in managing the DSPs commonly found in cell phones and other network equipment.

The toolchain used for compiling and linking 6502 code is freely available for download.  It is called cc65, and it is easily compiled on your favorite host machine.  It is claimed to be supported on many host operating systems including Linux, Mac OS X and that “other” popular desktop operating system.  Indeed, in this project, the 6502 C tools were run directly on the ARM Cortex-A8 based BeagleBone running an Angstrom distribution.

The point is that very little original software needed to be written for this project to be realized. The RemoteProc framework was used almost unmodified. Matt added a simple userland interface to boot/halt a remote processor from userspace, as this feature is only available from kernel space in the framework.  He also wrote a small driver stub for RemoteProc which describes the 6502/PRU hardware particulars (SRAM, clock and reset, etc). Due to some technical limitations (for details, view the video) the virtual console infrastructure used by the RemoteProc framework could not be used, so Matt had to write a trivial virtual console driver. Beyond the code that ran on the PRU, virtually all of the software used to make this work was either Linux kernel mainline or freely available on the Internet, with minor or no modifications.

Userspace Workflow

One of the requirements that Matt set for himself in this project was to be able to exercise the infrastructure from userland. It would have been cumbersome at best if he required kernel context (a driver) just to download and run a hello world on the 6502.  A userspace loader program (b6502_pruss) was used to download the PRU firmware and start it running.  This firmware was the heart of the “hardware” design which implemented the bitbanging algorithm that operated the 6502 memory bus.

The 6502 toolchain was used to compile and link the programs that the 6502 was to execute.  The steps to compile and run a hello world program on the 6502 looked like this:

  1. Write and compile (assemble and link) the hello world program.
  2. Copy the resultant binary to the standard location on Linux machines where firmware is stored (/lib/firmware on most distros).
  3. Using the b6502_pruss program, download the PRU firmware and start it running.
  4. Poke a location in /sys to start the 6502 processor.  (This is the userspace hack to boot/halt a remote processor that Matt added to RemoteProc.) Note that the RemoteProc framework is also responsible for locating and downloading the firmware image (in this case, the 6502 hello world application) to the 6502.  The firmware image is downloaded to the PRU module’s SRAM, which is mapped to the 6502 bus.

The highlight of the demo at the end of the presentation showed the legendary Woz monitor, slightly modified for this project, running on the 6502 microprocessor through a Linux virtual console.  Of course, the Woz monitor had to be modified to accommodate a virtual console through the PRU’s SRAM.  The room broke into applause when Matt hand-typed a short assembly language program and ran it, without referring to a script or document.  That is, he had these op codes committed to memory.  I suspect the reason for the applause was that, like me, many of those in attendance can recall entering lengthy assembly language programs where many of the common op codes were stuck in our brains, and we didn’t need any references.  For those of us packed into this ELC-E 2012 session, our familiarity with computing platforms of this era contributed to our collective enjoyment of Matt’s session.

BeagleBone Can Do That

I’m sure I’m not alone when I say that Matt’s ELC-E presentation helped me to realize the flexibility of the BeagleBone, and the family of TI Sitara™ AM335x SOCs.  I didn’t realize that several models of the Sitara™ family contain a dual-core 32-bit RISC realtime capable processing engine (PRU) on the chip.  I can certainly think of several applications for such a subsystem ranging from communications to medical and industrial systems.  Of course, the PRU is but a small subset of capability in this family of ARM Cortex-A8 based SOCs.  The best news is that there is plenty of information readily available from TI and on the Internet in the form of open source projects for just about any type of design based on the AM335x SOC.


Thanks to Matt Porter for suffering through my questions while writing this.

18 May, 2012

Upgrading Your Toolchain

I am often reminded by my customers that changing toolchain versions can be fraught with peril.  Indeed, as any software project (open or closed source) progresses, new “features” inevitably render old ones obsolete or worse.  Probably one of the most common issues one might encounter with a new toolchain version is the class of compile errors due to warnings being promoted (or would that be demoted 😉 ) to errors in newer versions of gcc.  It seems that the gcc developers are trying to force good coding practice by converting some classes of warnings to errors.  In this case, the application developer has two possible courses of action: fix the code or use a -W compiler option to disable the warning-as-error.  We’ll call this the right way or the lazy way!

Then there are the inevitable compile-time errors due to header file changes.  This is also a common category of error when a new toolchain is applied to an existing codebase, especially when the codebase in question is large and has been around for some time.  Often these changes are due simply to changes in the standards (C/C++).  These errors are usually fairly easy to locate and resolve.  These types of issues are compounded when moving to a newer version of glibc along with the newer compiler.

The Really Tough Issues

The most insidious errors are those runtime errors which have no obvious cause.  An application that was working when compiled on one gcc version crashes or otherwise fails when compiled on a later version of compiler.  Frequently there are no unusual compiler warnings, no runtime diagnostic messages, etc. Small changes in compiler behavior can introduce subtle runtime issues, or expose a software bug that has been lurking but undetected. I recently encountered one such case.

I had been working on a side project to get a Yocto Project Poky image running on my old Dell Mini 10.  The platform has an Intel Atom Z530 processor, and a graphics controller (GMA500/Poulsbo) based on PowerVR technology.  Let’s just say I don’t think this graphics controller has enjoyed the attention from developers that some of the more popular ones have.  I’ve had a lot of difficulty getting graphics to work with modern distributions such as Ubuntu and others on this platform.

My first Poky build for this platform was remarkably trouble-free.  I added the meta-intel and meta-emenlow layers and built for MACHINE=emenlow.  This booted my Z530 but when the xserver started it would crash and leave the display unreadable.  It looked like an old black and white TV with the horizontal sync out of adjustment. Thank goodness for the dropbear ssh server and ssh logins!

Xorg Crashes

Following that exercise, I decided to try another build based on Mentor Embedded Linux 5 technology (our Yocto Project-based product), using our commercially supported gcc 4.6 toolchain, Sourcery CodeBench 2011.09-101.  To my surprise, this time it booted to a nice pretty sato user interface screen.  After some trial and error, I determined that the Xorg binary itself was to blame.  Xorg compiled with gcc 4.7 crashed and left the display unusable, but the same exact source compiled with Sourcery CodeBench 2011.09 (gcc 4.6) resulted in a working Xorg binary.  I’ll say right up front that the 4.7 compiler was not to blame.  Read on.

At first I compared the numerous compiler warnings from Xorg (way too many to even count, but that’s another story!), but that yielded nothing interesting.  Just at the point where I was about to give up, Gary Thomas posted a patch to the oe-core mailing list with a fix to xserver-kdrive.  The subject line mentioned xserver and gcc 4.7, and the content looked promising: it could in fact be the issue I was facing.  If you want the gory details, the patch came from an upstream bug report.  Let’s look at the offending code:

This listing is shortened for purposes of this discussion, with non-interesting lines removed and indicated by an ellipsis:

int XaceHook(int hook, ...)
{
    pointer calldata;   /* data passed to callback */
    int *prv = NULL;    /* points to return value from callback */
    va_list ap;         /* argument list */
    va_start(ap, hook);
    switch (hook) {
        ...
        case XACE_RESOURCE_ACCESS: {
            XaceResourceAccessRec rec;
            rec.client = va_arg(ap, ClientPtr);
   = va_arg(ap, XID);
            ...
            calldata = &rec;
            prv = &rec.status;
            break;
        }
        ...
    }
    va_end(ap);
    /* call callbacks and return result, if any. */
    CallCallbacks(&XaceHooks[hook], calldata);
    return prv ? *prv : Success;
}

Notice the declaration of the rec structure, a local stack variable scoped inside the case statement:

XaceResourceAccessRec rec;

This variable is only valid in the scope in which it is declared, that is, between the braces of the case statement.  In that same scope, the structure is filled in, and a pointer to that structure is stored in the variable calldata.  (Each case statement in this switch construct had similar logic.)  Later, near the end of the function, outside of the switch statement, calldata is passed to the callback function in CallCallbacks().  By this time, the structure that calldata points to has gone out of scope, so the pointer is dangling.

We can only presume that some subtle change in compiler behavior exposed this coding error in gcc 4.7, while in gcc 4.6 this bug did not result in a runtime error.  Technically, the compiler is free to reuse the memory locations (in this case stack memory) that stored the structure in the case statement, once that variable goes out of scope, and in the case of gcc 4.7, we assume that it did.  In the 4.6 case, for reasons which certainly elude me, we got away with it.  I am the farthest thing from a compiler expert, but it isn’t hard to imagine that as compilers get better at doing what they do, a bug like this can be exposed.


One day soon, I’m going to count the actual individual compile and link operations that go into building a typical embedded Linux distribution.  That starts to illustrate the scope of the problem and potential for failure when anything changes.  Suffice it to say that it’s probably in the tens of thousands.  A purely non-scientific observation shows the Poky-derived Sato image on which this article was based has somewhere around 135,000 C files, performs around 450 compile tasks at the recipe level, and logs over 70,000 calls to the compiler.  (And you wondered why it took so long to build?)

If you ever wondered why development organizations loathe the thought of changing compilers, this example might help clear that up.  Serendipity led me to the solution to the real problem I have described in this article.  Had I not gotten lucky, this problem could have easily taken days, weeks or worse to resolve, especially since I have no particular expertise with Xorg or X servers in general.

The toolchain (along with the C runtime library) is the foundation of your project.  The best advice I can offer here is to choose wisely when making this decision.

27 April, 2012

As I mentioned in my previous blog, much confusion exists around terminology.  I think this is especially true when we speak of recipes and packages in the Yocto context.  In the early days of OpenEmbedded (OE), on which Yocto is based, there was a very simple relationship between recipes and packages.  The relationship is still a close one, but it has become more complicated, as everything does over time.

Recipe Basics

A recipe is a set of instructions that is read and processed by the build engine called BitBake.  In its most basic form, a recipe is a single file with some lines in it describing the software to be built.  In practice, most recipes consist of multiple files: a .bb (BitBake) file, which conveys version information and often the upstream source location, and a .inc (BitBake include) file, which has the bulk of the processing instructions for a given software collection.  I resisted the urge to use the term ‘software package’ because package is an overloaded term, and the goal here is to remove some of the ambiguity surrounding these terms.  In fact, most recipes also include other functionality beyond what is defined in the .bb and .inc files.  The most common example is for a recipe to inherit the autotools BitBake ‘class’, which provides the well-known functionality of ‘./configure’, ‘make’, and ‘make install’ for the many packages based on autoconf and friends.
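To make that concrete, a skeletal recipe might look like the sketch below.  Every name, field value, and checksum here is invented for illustration; a real recipe names a real upstream tarball and license file:

```
# -- a hypothetical minimal BitBake recipe
SUMMARY = "Example hello-world application"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://COPYING;md5=..."

# Where BitBake fetches the upstream source (PV comes from the file name)
SRC_URI = "http://example.com/releases/hello-world-${PV}.tar.gz"

# Inheriting the autotools class supplies the familiar
# ./configure, make, and make install steps.
inherit autotools
```

BitBake derives the package version (PV) from the “1.0” in the file name, which is why the .bb file often carries little more than this.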

Package Basics

A package is an archive that most typically contains binary artifacts from a build.  The two most popular package formats are still .deb and .rpm, the former from the Debian project and the latter used by Red Hat and its derivatives.  Packages can also contain source code, but certainly in the embedded Linux community, that practice is dwindling.  One of the more common package formats found in the embedded space is .ipk, manipulated with the opkg tools.  This was the default package format for OpenEmbedded-based distributions, while Yocto currently favors the RPM format.  (Yes, I’m guilty of using the term Yocto when I really mean “Yocto Project”!  See my previous blog on this subject.)

By default, a recipe produces a minimum of four packages.  The most obvious one is the binary package itself, which is used to populate the root file system with the artifacts from that recipe’s build output.  The other three are the development (-dev) package, the documentation (-doc) package, and the debug (-dbg) package.  The development package most typically contains the header files and libraries, if there are any, for a given package.  The documentation package should be self-explanatory, and for many packages, is empty!  Documentation has not always been the top priority for development engineers, and this is no less true for open source developers.  The debug package typically contains copies of the binary executables and/or libraries that have been compiled with debug symbols included.  The debug packages are what your debugger/IDE reads in order to enable symbolic debugging.

Some recipes produce many more packages.  Take a look at the recipe for python for a good example.  The current version of that recipe in the poky repository produces seventy packages.  As you might expect, this is a pretty extreme example because python has many modules. You can see what packages a given recipe produces using BitBake’s -e (show environment) command line switch:

$ bitbake -e python | grep ^PACKAGES=
PACKAGES="libpython2 python-dbg python-2to3 python-audio python-bsddb python-codecs python-compile python-compiler python-compression python-core python-crypt python-ctypes python-curses python-datetime python-db python-debugger python-dev python-difflib python-distutils-staticdev python-distutils python-doctest python-elementtree python-email python-fcntl python-gdbm python-hotshot python-html python-idle python-image python-io python-json python-lang python-logging python-mailbox python-math python-mime python-mmap python-multiprocessing python-netclient python-netserver python-numbers python-pickle python-pkgutil python-pprint python-profile python-pydoc python-re python-readline python-resource python-robotparser python-shell python-smtpd python-sqlite3 python-sqlite3-tests python-stringold python-subprocess python-syslog python-terminal python-tests python-textutils python-threading python-tkinter python-unittest python-unixadmin python-xml python-xmlrpc python-zlib python-modules python-misc python-man"

When BitBake completes “baking” the python recipe, a package is created for each of the named elements shown above in the ‘PACKAGES=’ listing.

Images, Recipes and Packages

This is most confusing when trying to add new packages to your root file system image.  A recent example of this was encountered when one of our customers asked how to incorporate the NET-SNMP package into his image.  It seemed obvious to him (and to this author) that because there is a nice, tidy recipe called net-snmp, all he needed to do was add it to his image.  He tried the obvious, based on the general instructions in the various manuals that describe adding packages to images:

IMAGE_INSTALL += "net-snmp"

However, that failed: while there is a recipe called net-snmp, there are no packages by that name.  Root file system images are created from packages, not recipes.  Looking at the packages produced by the net-snmp recipe, again using BitBake’s -e switch, we have:

$ bitbake -e net-snmp | grep ^PACKAGES=
PACKAGES="net-snmp-dbg net-snmp-doc net-snmp-dev net-snmp-staticdev net-snmp-static net-snmp-libs net-snmp-mibs net-snmp-server net-snmp-client"

Looking at this output, it becomes clear that the packages we want installed on the root file system are net-snmp-server and net-snmp-client, along with supporting packages such as the -libs and -mibs packages.  The image must therefore be modified to install these packages.  In summary, the correct way to specify this is as follows:

IMAGE_INSTALL += "net-snmp-server net-snmp-client net-snmp-libs net-snmp-mibs"
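One related caveat: the += form shown here is appropriate inside an image recipe, but the Yocto Project documentation cautions against using += with IMAGE_INSTALL from conf/local.conf, where ordering relative to the image class’s default assignment can cause surprises.  There, the _append override with a leading space is the safe form:

```
# In conf/local.conf (note the required leading space):
IMAGE_INSTALL_append = " net-snmp-server net-snmp-client net-snmp-libs net-snmp-mibs"
```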

BitBake’s -e switch is your friend.  Use it often – it can really help you understand what’s going on when things don’t work out as you expect.

13 April, 2012

There’s a lot of talk about Yocto and Poky and Angstrom these days.  Unfortunately, there’s also a fair amount of confusion among the terms.  On some popular websites and in presentations, the terms Yocto and Poky are used interchangeably.  But there are quite unsubtle differences between the terms, and these differences will continue to grow over time as other bits and pieces of technology find their way into Yocto.  And just what, exactly, is Angstrom, and how does it relate to Yocto?  In this short article, I will attempt to provide some clarity around the various terms being thrown around in the Yocto space.

Yocto Project

You might find it interesting to note that it’s not “Yocto” but, more properly, the “Yocto Project”.  The Yocto Project is an umbrella project covering a fairly wide swath of embedded Linux technologies.  It is not a Linux distribution.  Taken directly from the Yocto Project website: “…The Yocto Project™ is an open source collaboration project that provides templates, tools and methods to help you create custom Linux-based systems for embedded products regardless of the hardware architecture.”  There are two key concepts here.  First, it is a collaboration: a number of different open source projects have come together under the Yocto Project umbrella.  The second key concept is custom: using technology from the Yocto Project allows you to build a customized embedded Linux distribution that is suited to your own project, rather than a cookie-cutter, one-size-fits-all, take-it-or-leave-it embedded Linux distribution.  One of the stated objectives of the Yocto Project is to make embedded Linux development and customization easier.

Yocto Project Projects

No, that’s not a typo!  It is interesting to note that the Yocto Project website currently maintains a list of projects.  That page lists ten different projects that currently make up the Yocto Project.  They include the Poky “distribution” (it’s arguably more or less than that, depending on your point of view), an Eclipse plugin, and the openembedded-core repository of metadata, which makes up the bulk of the packages for a typical small embedded Linux distribution.  Take a look at that projects page for a detailed look at the Yocto Project’s projects.

Poky


So then, what exactly is Poky?  Poky can be thought of as a reference distribution, using Yocto Project technology.  Some people use the term “build system” to refer to Poky, but I think that’s a misnomer.  Certainly there are components within Poky that together make up a build system. Poky is simply one of the projects under the Yocto Project umbrella.  Currently, Poky is the most visible and possibly the most active project within the Yocto Project.  Today, when someone mentions Yocto, many people tend to think of Poky.  Over time, this perception will surely change.

Angstrom


Angstrom is another distribution.  Angstrom is based on the OpenEmbedded project, and more specifically on the openembedded-core (often abbreviated simply oe-core) layer.  As of the date of this article, you might consider Angstrom and Poky to be close cousins, because Poky is also based on oe-core.  But Angstrom is not officially part of the Yocto Project (yet).  Discussions are underway to change that relationship to something more like siblings.  There was a long-winded discussion on the Yocto mailing list about pulling Angstrom under the Yocto Project umbrella.  There was general support for doing this, but time will tell whether or not that happens.  Angstrom is somewhat different in that its developers call it a binary distribution, in much the way you might think of Ubuntu.  One primary output of an Angstrom build is a binary package feed, which allows a developer to simply install a package on a compatible ARM target, for example, without having to compile it, in much the same way you might run ‘sudo apt-get install <package>’ on an Ubuntu distribution.


The Yocto Project has attracted a significant amount of industry attention, both in terms of developers and end users.  One thing is certain: when you take a look at the Yocto Project a year from now, it will probably look different than it does today!  There are several distributions that are Yocto-derived, and I’m happy to report that Mentor Embedded Linux has been recognized in the community as one of the few options for a commercially supported embedded Linux customization platform.

23 March, 2012

Embedded Linux Terminology

Often the hardest part of an initial learning curve is learning the specific language of the subject.  It is especially difficult when that language is not defined by any particular authority, unlike, say, learning to speak German or Spanish.

The language of Linux and especially embedded Linux has evolved over time.  Some terms are overloaded – the same term can have more than one meaning depending on the context.  The very term “Linux” is a perfect example of such an overloaded term.  Linux, of course, is the name of the kernel, nothing more.  Over the years, I’ve heard many interesting questions from developers related to confusion over the terminology.

In this short article, I’ll attempt to bring some clarity to the terminology used around Linux and Linux distributions.  Just what exactly is a Linux distribution?  Where did that term come from?  Wikipedia defines a Linux distribution as “…a member of the family of Unix-like operating systems built on top of the Linux kernel.”

What is a Linux distribution?

Let’s be more specific.  A Linux distribution is certainly more than the Linux kernel upon which it is built.  A Linux distribution is a collection of software components that make up an operating system, the tools to manage it, and one might argue, the tools to modify and even rebuild it.

The origins of the term “Linux distribution” remain a mystery, at least to me.  However, it is interesting to note that the word “distribution” is mentioned 19 times in the GNU General Public License, version 2.  I prefer to think of distribution as describing an act rather than a thing.  Indeed, the very act of distributing a collection of open source software triggers some very specific legal requirements.  So let’s redefine a Linux distribution as a collection of “packages” and tools you receive from someone or somewhere, which together make a hardware platform (and possibly a development environment; more on that in an upcoming article) come to life.

What is a package?

A package is the fundamental unit of software delivery in a Linux distribution.  A software package contains anywhere from one to potentially thousands of related files.  Packages are usually either source packages or binary packages.  Source packages contain the source code, and usually the build configuration and instructions needed to build the binary artifacts.  Note that although source packages remain a relatively popular format for delivering source code, they are seldom used as the build mechanism.

Binary packages are essentially the artifacts from building the source code packaged in a convenient form that enables easy creation of root file system images.  Binary packages include any related libraries, configuration files, sample databases, and other supporting files required for basic operation of the software in a default configuration.

Packages are not just simple collections of related files.  Packages exist in a special format that can be read and written by package manipulation tools.  Packages come in many forms, and several popular package formats are currently in use, including the .deb format from Debian, the .rpm format from Red Hat, and the .ipk format that has become popular for embedded Linux.  For example, consider the popular bluez package (the Bluetooth protocol stack) as configured for an embedded system, with docs, man pages and some test utilities removed.

Note that the bluez package contains the daemon itself (bluetoothd) plus many supporting programs.  It also contains configuration files and some templates (rules) to instruct the target system how to automatically configure the system for any Bluetooth devices it finds either during boot or as a result of plugging in a Bluetooth peripheral. The bluez package also contains a shared library ( of helper routines that provides access to low-level Bluetooth services.
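Since the .ipk format may be less familiar than .deb or .rpm, here is a small runnable sketch of the idea: an .ipk (like a .deb) is an ‘ar’ archive wrapping a control tarball and a data tarball, and the data tarball simply contains the files to be installed on the target.  The file names below are illustrative, not the real bluez package contents.

```shell
# Simulate the data portion of a hypothetical bluez ipk.
# (Real packages are produced by the build system; names here are made up.)
mkdir -p pkgroot/usr/sbin pkgroot/etc/bluetooth
touch pkgroot/usr/sbin/bluetoothd
touch pkgroot/etc/bluetooth/main.conf

# The data tarball is what the package manager unpacks onto the rootfs.
tar -czf data.tar.gz -C pkgroot .

# Listing the tarball shows the installed-file manifest, essentially
# what 'opkg files <pkg>' reports on a running target.
tar -tzf data.tar.gz
```

The same mental model applies to .deb packages; .rpm uses its own container format but carries the same kind of payload.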

Packages make up a root file system

A typical root file system for an embedded system might comprise between fifty and two hundred packages.  A root file system for a simple console boot with networking support can be assembled from fewer than twenty packages.  A full-featured rootfs for a system with many services, including a graphical display and multimedia capabilities, might be made up of several hundred packages or more.

Build Systems

You might be wondering where all these packages come from.  Well, we build them!  There are many public repositories of binary packages, especially if you’re interested in packages for a typical x86 desktop system.  Debian and Ubuntu are two examples that both maintain public repositories of pre-built binary packages.  However, when you are targeting embedded systems, the choices are considerably narrowed.  Most development teams building embedded Linux systems have the requirement that these packages be built locally.  That’s a very non-trivial exercise!  We’ll definitely cover these challenges in an upcoming post.

There are a number of build systems capable of building packages.  One of the more popular build systems in use for embedded systems today lives under the Yocto Project umbrella and derives from the OpenEmbedded project.  Components of the Yocto Project can be downloaded and configured to build complete embedded Linux systems.  Mentor Embedded Linux is an example of a commercially available product that contains a Yocto-derived build system and utilities to build a custom embedded Linux distribution.
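To make that concrete, a minimal build session with Poky looks something like the following transcript (repository location and image name current as of this writing, but details change between releases, so consult the Yocto Project documentation for your version):

```
$ git clone git://
$ cd poky
$ source oe-init-build-env
$ bitbake core-image-minimal
```

The result is a set of binary packages plus a bootable image for the default QEMU machine; pointing the build at real hardware is a matter of configuration, a topic for a future post.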

We’ll expand on many of these topics in coming posts.  For now, if you are reading this and have any preference for topics that might interest you, please let me know.

16 March, 2012


If you’re reading this for the first time, that makes two of us.  I’m writing here for the first time!  Welcome to my inaugural blog post.  I hope you’ll find the coming content interesting, timely and varied.  Though it will be my primary focus, I won’t always write about Linux and open source related subjects.  I will occasionally add some fun topics of interest to me, and hopefully to a subset of you as well.  I’d especially like to hear from you if you find an article particularly useful, or even boring or irrelevant.  Your feedback will help me keep the content relevant and interesting.

A little about me

Who am I and why should you be reading this blog?  For starters, I’m a technical marketing engineer for Mentor Embedded, the embedded systems division of Mentor Graphics.  Mentor has a long history in embedded software, though most folks in the electronics industry know Mentor Graphics from their long history of success in the EDA (electronic design automation) space.  I’ll be writing more about Mentor’s history in the embedded space, so if you’re interested, check back for that.

If you’re involved in development of products using embedded Linux, you may have discovered my book, Embedded Linux Primer.  I was surprised and pleased by its initial success.  The 2nd edition, released last year, has also been well received.  Yes, I know a thing or two about embedded Linux, and I’ll be blogging frequently about some of my favorite topics in this space.

Beyond Linux, I have a wide variety of hobbies and interests outside of work.  I’m an avid boater, and live in one of the boating capitals of the world, in sunny southwest Florida.  I love gadgets, and it seems there is a never-ending list of boating-related gadgets.  Embedded software defines the feature sets in many of these cool gadgets.  If only my wallet were as big as my wish list… 😉

Another hobby I enjoy is building and flying large radio-controlled model airplanes.  These are basically UAVs for the hobbyist!  I’m the secretary and a board member of our local R/C flying club, and we maintain a flying field courtesy of our little city on the water.  I am constantly amazed at the technology that has made its way into this exciting hobby.  Modern R/C systems are loaded with software-driven features!  A vibrant UAV industry over the last decade has brought some amazing technology and affordable products to the hobby.

I’m a lifelong guitar player and wannabe singer, and I enjoy making and listening to live music of almost any genre.  I have used Linux audio programs for a wide variety of tasks, from MIDI sequencing to drive my keyboard, to Audacity for ring tones (yes, I own an Android phone, the Droid Razr Maxx, and I love it), to Ardour for multi-track recording.  I am a novice at digital music making, but I enjoy it, and I’m looking forward to finding more time in my life to learn more about it.  Online resources related to Linux and audio are almost limitless.

In 1990, I got the urge to learn to fly, and by 1992 I had earned my private pilot’s license and, later, the instrument rating.  Although I’ve owned two different airplanes in the past, I currently rent when I get the urge to fly.  Most of my flying lately has been in the light sport category, for those who know what that is.  I am a veteran of Oshkosh, having flown that crazy approach half a dozen times.  Is anyone going to Sun ‘n Fun this year in Lakeland?  It starts March 27th.

General aviation is another area where flight safety has been hugely advanced by software-defined features.  Affordable products provide the weekend pilot with tools for route planning and enroute weather avoidance.  Many airline pilots are now carrying iPads with aviation charts installed, instead of that huge briefcase full of charts mandated by the FAA.  Live, onboard weather radar has become affordable even for casual recreational pilots.  These are all software-defined products built around fairly generic hardware platforms.  How many Linux gurus know that there’s actually a GNU/Linux Aviation HOWTO?  How cool is that?  If you find you’re interested in learning to fly, a local flight school is the place to start.

CQ de K1AY

For those of you who recognize this “code”, it’s older than Motorola.  I’ve been a ham radio operator since the age of 13, and it’s been a lifelong hobby.  I’m an accomplished Morse code radio operator, and I love practicing with turn-of-the-century “bug” and other nostalgic Morse code keys.  I operate mostly CW on the HF bands, and especially love late-night DX on 80 and 40.  If you’re a fellow “ham”, you’ll know what all that means!  Hams have been recognized innovators in many technical endeavors, from satellite communications to the Linux kernel.  Hams have had their hands in Linux since the very beginning.  (I’m sure you’re all familiar with the CONFIG_HAMRADIO kernel configuration option!)  You can find much more about this fascinating hobby online.

Now that you know a little about me, check back often for more interesting technically oriented articles focused on Linux and open source.  Thanks for reading, and for any feedback you care to share.