Mike Jensen's Blog

Views, insights, and commentary on mechatronic system design and analysis.

17 December, 2015

Reading license plates is one of my favorite stuck-in-traffic pastimes. I don’t mean reading plates with the standard, state-assigned combination of letters and numbers, but rather the words and phrases folks make up to create their own vanity plates. Most of the combinations are personal statements and give little pause for thought. Examples include LUV2DIG, GR8CAR, 1NDRFUL, MRBIG, REDCRVT, SUNTANR, GADGET — you get the idea, and have no doubt seen many others. And as with most things in our Internet age, websites that highlight select vanity plate phrases are just a few mouse clicks away. If you are interested, just fire up your favorite search engine and enter “vanity plates” to see just a small sampling. Some are funny, some are strange, and some are more than a little shocking (roughly translated…for adults only).

Creating recognizable phrases using combinations of 6, 7, or 8 letters or numbers is not that difficult since there are a huge number of possibilities. Take, for example, a 6 character license plate. If you do the math, you will see there are over 2 billion possible character combinations (26 letters + 10 numbers = 36 possible characters, and each of the 6 license plate slots can hold any one of the 36 options, therefore 36×36×36×36×36×36 = more than 2 billion). Naturally, only a small fraction of the 2 billion combinations will actually make some sort of sense, but even if just 0.01% are in any way intelligible, that is still over 217 thousand options. Amazing. Of course, if you are in the market for your own vanity plate, the challenge is finding the combination that sends your message…and is not already owned by another driver.
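The arithmetic above is easy to check for yourself. A quick sketch (the character set and plate length are as described in the post; the 0.01% intelligibility figure is just the post's own rough guess):

```python
# Count possible 6-character license plates drawn from 26 letters + 10 digits.
chars = 26 + 10          # 36 possible characters per slot
slots = 6                # a 6-character plate
total = chars ** slots   # each slot is independent, so multiply
print(total)             # 2176782336 -- "over 2 billion"

# Even if only 0.01% of combinations read as something intelligible...
intelligible = int(total * 0.0001)
print(intelligible)      # 217678 -- "over 217 thousand"
```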

On a recent drive home from my sister-in-law’s house where I helped with a fireplace re-lighting project (I am sort of the family handyman), I happened to glance at the license plate on the car in front of me. That is how the vanity plates get you: the glance. Most license plates are just the random, government-generated character sequence and therefore deserve little attention. But vanity plates always give me pause. Some are easy to understand. Others require a bit of thought to translate the word or phrase, but I can usually figure them out. On this plate, however, the message was simple: DO EPIC. I immediately thought “what great advice”. If I am going to do something, why even think about good enough, or average, or even a cut above? Just leapfrog them all and go right to epic. But wait…what does “epic” mean? My dictionary app (yes, I have one on my tablet — doesn’t everyone?) defines it as “very great or large and usually difficult or impressive”. Yeah. That’s it. So to Do Epic means to “do great or large” or to “do difficult or impressive”. Either one sounds cool. And even worthwhile.


As I thought about the DO EPIC license plate message, I wondered how I might do more “great, large, difficult, and impressive” stuff. How to leave my mark on my own little sphere of influence, so to speak. And I am still thinking about it. Then I thought about categories of epic things, mostly through my engineering thought filter. What are some of our modern epic successes? Broad categories include transportation, construction, communications, and exploration. Epic projects in each of these categories include automobiles, airplanes, and ships; skyscrapers, mega power plants, and football stadiums with retractable roofs; computers, cell phones, and satellites; rocket ships, space stations, and Mars rovers. There are, of course, many more categories and individual examples. But each owes its success to one or more folks with inspired vision, a dash of good old dumb luck, and a whole lot of technical know-how and engineering discipline.

I have spent most of my career working with tools that help engineers do their job better, making it easier for them to understand and develop ever more complex systems. But most of the epic successes we see today — including in the categories mentioned above — have their roots in engineering eras when paper, pencil, and slide rules were the tools of the day. Computers, and software tools like the SystemVision modeling and analysis environment, help build on, but did not create, the technical foundations of so many amazing and astounding things we regard as commonplace today. So to the engineers that preceded me by generations, Did Epic in their own right, and made Do Epic possible for my generation, I salute you. And to the next generation of engineers that will certainly take Do Epic to new levels, and perhaps even build new foundations of your own, aided no doubt by powerful number-crunching design tools and computing platforms, I wait and watch with interest…and expect to be equally amazed by your technical accomplishments.


26 June, 2015

As humans, we often see, think about, and interpret the world around us through our own filters. Our individual filters are unique: mine to me, yours to you. They are fashioned from our life experiences. We are what we are, we do what we do, and we think what we think precisely because of our personal history. And we often think our personal view of the world is the most correct, the most broadminded, the least colored by outside influences. Ponder this long enough, while thinking about your own filters, and I think you will agree.

If I visit a customer, my filters want me to believe engineers at the site should be like me ethnically and culturally. This is, of course, seldom the case. The more customers I visit, the more I realize the engineering profession is a great, wonderful melting pot of international talent, benefitting from a diversity of ethnicities and cultures – perhaps more so than any other profession. And being an engineer trained to ask questions about the world around me, I wanted to know why. My question was answered during a recent customer visit.

During a lunch time break after a morning meeting, I visited with several engineers over hamburgers, french fries, and sodas. I like getting to know folks a little better, to get beyond modeling and simulation discussions for a bit to learn more about who they really are. I find casual chats over lunch are a perfect opportunity to get better acquainted. Two of the engineers at the table were born and raised outside of the United States, so our conversation turned to “Why did you come to the United States?” and “Why did you choose engineering as a profession?” The answer to the first question was easy – the United States offered better professional opportunities than their home countries. The answer to the second question, however, was a bit surprising. It turns out engineering offered the easiest path to a well-paying career. Remembering my college years studying electrical engineering, I returned a surprised look, punctuated by a single, slightly raised eyebrow, to convey a “you must be joking” effect. I certainly did not think engineering was the easiest collegiate path to a career, well-paying or otherwise.

For my lunch mates, however, engineering offered one distinct college advantage. Though they may have struggled some with the English language, the education system in their native countries taught them the language of engineering: mathematics. Their ability to understand numbers and formulas made up for their difficulties understanding written and spoken English. Ohm’s law is Ohm’s law, Maxwell’s equations are Maxwell’s equations, and Newton’s laws are Newton’s laws, no matter what language you order lunch in.

So there you have it: Math, the universal language and opportunity equalizer. No translation needed.

22 April, 2015

I have mentioned before that one of the more enjoyable parts of my job is getting out of the office to meet with folks who use, or have an interest in using, SystemVision in their product development process. I always learn something, and can hopefully share a bit of what I know as well. I recently enjoyed just such an experience.

Earlier this year, a long-time Mentor Graphics customer, and recent SystemVision client, called to schedule VHDL-AMS language training. As a quick reminder, VHDL-AMS is an IEEE standard language for modeling mixed-signal, multi-physics systems (click here to read one of my earlier posts on the language). After a little back-and-forth, we agreed on dates and I marked them on my calendar.

Teaching training classes is often an interesting experience, and perhaps even a little nerve-racking, since I usually have few details about how the students use, or intend to use, SystemVision. It is interesting for the very same reason it is nerve-racking: student expertise and work duties can cover a broad range of applications and technologies, which sometimes makes it hard to cover all use cases, and answer all of the questions, in enough detail to make the class immediately useful in their work. For this most recent class, however, I was in luck. All of the students worked together in the same group, and they all had similar applications for SystemVision. But this time the application was a little peculiar for modeling and simulation generally, and SystemVision specifically.

Most customers use SystemVision to design systems from scratch, or to jump in mid-design to investigate and solve a particularly troublesome problem. And a handful of customers use SystemVision to get an early start on test program development, using a virtual prototype of the finished design as a test development platform. But the group in my training class uses SystemVision on the opposite end of the system life cycle. The end system is already in service, and has been for many, many years. Over time, existing test procedures for these systems get a bit antiquated, or the need for additional tests arises. So my students reverse engineer these legacy systems to not only figure out how they work, but also to develop test procedures based on the new, documented understanding.

So how does modeling figure into their work flow? Turns out the answer is pretty simple in theory, but a bit more complex in practice. As is often the case when building large, complex systems (think airplanes, automobiles, etc.), many of the sub-systems are developed and acquired through third parties. And these same third parties very often do not supply detailed design data, let alone simulation models, citing intellectual property rights and restrictions. So students in my class often need to create simulation models with little in the way of specifications, detailed documentation, or bench measurement data. Sounds fun, right?

Once they have a working design based on schematics created from a combination of custom and standard models, they run through a series of fault simulations, failing one part at a time, documenting the change in performance, then failing the next part and repeating the simulation and documentation process. When the modeling and simulation are complete and the results documented, all of this information is turned into test procedures that technicians use to keep the systems running for years to come.
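The fail-one-part-at-a-time loop described above can be sketched in generic Python. Everything here is a hypothetical stand-in, not SystemVision's actual API: the netlist is a plain dictionary, and the toy `simulate` function just sums component values so the sweep has something to measure.

```python
# Hypothetical single-fault sweep: fail one component at a time,
# re-simulate, and record how performance shifts from nominal.
def fault_sweep(simulate, nominal_netlist, fault_list):
    baseline = simulate(nominal_netlist)       # nominal performance first
    report = {}
    for component, fault_value in fault_list:
        faulted = dict(nominal_netlist)
        faulted[component] = fault_value       # e.g. fail a part to zero
        report[(component, fault_value)] = simulate(faulted) - baseline
    return report

# Toy example: "simulate" just sums component values.
netlist = {"R1": 1.0, "R2": 2.0}
faults = [("R1", 0.0), ("R2", 0.0)]            # fail each part in turn
print(fault_sweep(lambda n: sum(n.values()), netlist, faults))
# {('R1', 0.0): -1.0, ('R2', 0.0): -2.0}
```

The real work, of course, is in the simulator and the models; the loop itself is the easy part, which is exactly why the documentation step it feeds is so valuable.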

So the VHDL-AMS class ended, and I learned another way to use simulation — this time in a post-design, post-deployment, reverse engineering and test development process. Who knew? What is the most peculiar way you have seen simulation used for design analysis?


5 December, 2014

In my last post I wrote about system reliability versus system robustness. I briefly explained my definition of the two, and suggested some design process shifts to help improve both. Sometimes the required process change is small; sometimes it is substantial, almost like an entire design paradigm shift. But the reward, whether measured in improved product reliability or robustness, is usually worth the investment. A recent experience brought this fact into focus.

My church runs a food cannery just a couple of miles from my home. As food production plants go, it is a small to medium-sized facility. Permanent staff probably numbers less than ten people; food production relies heavily on church and community volunteers. Despite its modest size, however, this cannery packages food to help feed folks in need throughout the United States.

I usually work one or two half-day cannery shifts each year. If you have never visited a food production facility, but are fascinated by machines that whir and spin and rattle and shake and sputter to produce a product, you should put a production plant visit on your Bucket List. Such plants are an interconnected, entertaining mix of technologies. Depending on the product in production, you will find mechanical, electrical, electronic, optical, hydraulic, pneumatic, software, and even chemical elements in the plant system mix.

Improvements in process technology often automate manual steps in many production flows. Human-in-the-loop inconsistencies are frequently replaced with mechanized precision. The upgrade process, however, usually takes place over time. One automation upgrade paves the way for another, and another, and another. During my years working at the cannery, I have watched a gradual transition from a labor intensive process with volunteers working at stations throughout the plant, to a handful of folks placed at key locations to monitor (and occasionally help) the automated process, while most of the volunteers now work at a near-end-of-process station doing what machines currently cannot: visually inspect processed fruit for blemishes that might lower quality or reduce consumer appeal.

During canning season, fruit is moved from one side of the plant to the other along several conveyor belts. The final step in fruit preparation, just before canning, is inspection. The fruit inspection area is a long conveyor belt that runs one-half the length of the production floor. A typical crew of thirty workers, divided into two teams of fifteen, can stand comfortably spaced along either side of the belt. Fruit fresh from the peeler enters at one end of the inspection belt, and is conveyed along in front of the workers who inspect and trim the pieces before they are dumped into a water bath prior to being stuffed and sealed inside a can.

On my most recent visit to the cannery, I was the anchor on my side of the inspection belt — the last person to check the fruit pieces before canning. Fortunately the inspection process is a team effort, so one position on the belt is not much busier than any other. As I worked away trying to keep up with my inspection duties, I noticed the flow of fruit slowed a bit, and finally just stopped altogether. “Great!” I thought. “A short break!” So I waited. Then I waited some more. After staring at an empty belt for a short while, I figured there had to be equipment trouble somewhere in the plant.

Then a fellow volunteer pointed to the problem: a failed motor, which the permanent plant staff was trying to replace. The motor was maybe ten inches in diameter and perhaps sixteen inches from end-to-end, so not really very big as electric motors go. But that single, small motor’s failure brought the entire cannery to a standstill. Volunteers waited while plant mechanics replaced the failed motor with a spare. Even though replacing the motor only took twenty minutes or so, the breakdown reduced production volume, and could easily have cost money in terms of non-productive workers — luckily we were all volunteers.

So I was reminded that day that failures have a very real cost beyond just the price of the repair. The failure of an inexpensive part, in this case a $200 motor, can cost many times its value in wasted resources and reduced production. It is not uncommon for even smaller, cheaper part failures to cost much, much more in production losses. System components, in this case a simple motor, fail — an engineering fact of life, and a reminder that no matter how diligent our design process, systems are only as reliable as the weakest component. Choosing system components for their reliability and robustness metrics is just as important as making sure we account and compensate for system variability in our design process. Component selection, and designing reliable and robust systems, naturally go hand-in-hand.
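To put rough numbers on that lesson: the motor price, crew size, and downtime are from the story above, while the hourly wage is purely an assumption for illustration.

```python
# Rough downtime cost if the thirty inspection-line workers had been paid staff.
motor_price = 200        # the failed part itself ($)
workers     = 30         # idle inspection crew
downtime_hr = 20 / 60    # ~20 minutes of standstill
wage_per_hr = 20         # assumed hourly wage, for illustration only

idle_labor = workers * downtime_hr * wage_per_hr
print(idle_labor)        # 200.0 -- the idle labor alone matches the motor's price
```

And that ignores the lost production volume entirely, which in a commercial plant would likely dwarf both numbers.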


7 October, 2014

If a system is reliable, is it also robust? And is the converse also true: does a robust system have to be reliable? The answers are no and yes. A reliable system performs its intended function when conditions are nominal. As long as design details and environmental conditions remain stable, you can count on a reliable system to do its job time after time. But what happens if these same design details and environmental conditions start drifting significantly off nominal? The answer is simple: system reliability goes out the door. Fortunately, if we have done our jobs right, when variation attacks a system, robust performance takes over.

Making a system reliable usually requires a pretty straightforward design process: the nominal design specification gets turned into a functioning system. Making a system robust, however, adds complexity to a design process. First, we need to determine possible variation sources. Next, we need to determine how these variations affect system performance. And finally, knowing how variation affects performance helps us improve our system design. Sound like a lot of work? No doubt it can be. But if system performance must be robust as well as reliable, the extra work is necessary and worthwhile.

When designing to just a specification, we often focus on meeting the written specification, but forget our design often plugs into a larger system that has its own set of tolerances – tolerances that quite possibly were not accounted for in our specification. So when the tolerances for our design get thrown into the mix with the parent system’s tolerances – sometimes called tolerance stack-up – strange things can happen. And then we scramble to diagnose the problem and try to find design answers. We end up trying to test-in reliability and robustness. If you have spent much time at the test bench, you know exactly what I am talking about. But there is a better way, and by now you have probably figured out it involves modeling and simulation.

Reliability and robustness can be designed, rather than tested, into a system. But doing so usually means making a few design process changes. Teaching your simulator to tell you the right system story requires two things: the right models, and the right analyses. Getting the model piece right means choosing options that let you model whatever technology your system uses. If your system is 100% electrical, SPICE might be an adequate modeling answer, though there are better and more informative ways to tackle Ohm’s law. But the moment your modeling needs extend beyond Ohm’s law, accurate system modeling requires more horsepower, the horsepower found, for example, in a multi-physics hardware modeling option like the IEEE standard VHDL-AMS language. I’ve commented on the power and flexibility of VHDL-AMS before, so browse my earlier blog posts for more information. Just know that with a language like VHDL-AMS, you can create models that tell some pretty amazing system performance stories.

Getting the analysis piece right means selecting a simulation toolset that takes advantage of your model library to run your system through its paces. Hint: you need more than standard time and frequency domain analyses. Standard analyses will get you to the “reliable design” stage. But the “robust design” stage requires looking at system performance as key parameter values start wiggling off nominal. When parameters start to wiggle, rest assured that performance issues are not far behind. Accurately analyzing variability requires advanced analyses such as those found in the SystemVision modeling and analysis environment:

  • Parametric sweep
  • Relative, tolerance-based, and statistical sensitivity
  • Monte Carlo (statistical), including standard and sequential extreme values
  • Worst case
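To make the statistical idea concrete, here is a minimal Monte Carlo sweep, written in plain Python rather than any particular tool: a simple resistor divider whose two resistor values wiggle off nominal within a ±5% uniform tolerance band, while we watch the output voltage spread.

```python
import random

# Monte Carlo on a resistor divider: Vout = Vin * R2 / (R1 + R2).
def divider(vin, r1, r2):
    return vin * r2 / (r1 + r2)

random.seed(0)                       # repeatable runs for this sketch
nominal_r1, nominal_r2, tol, vin = 10e3, 10e3, 0.05, 5.0

samples = []
for _ in range(10_000):
    # Each run draws R1 and R2 independently from the tolerance band.
    r1 = nominal_r1 * random.uniform(1 - tol, 1 + tol)
    r2 = nominal_r2 * random.uniform(1 - tol, 1 + tol)
    samples.append(divider(vin, r1, r2))

print(min(samples), max(samples))    # spread around the nominal 2.5 V
```

Even this toy example makes the point: a perfectly reliable nominal design of 2.5 V becomes a band of possible outputs once real tolerances enter the picture, and the worst case corners are what a robust design must survive.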

To learn how better modeling practices and advanced analyses can contribute to more reliable and robust systems, grab your lunch and click over to our new seminar: Improving Complex Design Reliability and Robustness. Then post a comment to let me know what design methods you most use to improve system performance when design parameters vary.


16 June, 2014

I have mentioned in earlier posts that one of my responsibilities on the SystemVision team is teaching training classes. If users want formal SystemVision or VHDL-AMS training, I am usually the instructor. It is fun and occasionally gets me out of my office for a few days.

Because SystemVision is such a flexible modeling and analysis environment with a broad range of applications (think mixed-signal analog and digital + multi-technology), students in my classes often have a mix of technical expertise and design responsibilities. One class may be for an aerospace customer, another for an automotive customer, and still another for an industrial controls customer. And even though a class may be for a single customer, students invariably have a mix of engineering assignments. This makes the classes even more interesting for me since I learn about advances in many different areas. As usual, the teacher often becomes the student.

One of the reasons I like teaching classes is the Demo Factor. And what is the Demo Factor? Simply the opportunity to demonstrate capabilities students are interested in, and that will most certainly benefit their design flows, but that are usually not covered in the training material. These are usually on-the-fly demonstrations, meaning a student asks “Can SystemVision do [something]?”, and I spend part of the class time, usually while students work on lab exercises, creating a short demonstration. Building these short demos is fun and interesting, and often leads to the Wow Factor, the response I often get when a student learns SystemVision will do something useful and cool that their current design process does not support. Take, for example, a short waveform analyzer demo I did in a recent class.

One of the students wanted to know if SystemVision supports converting a waveform into a text or comma separated value (csv) file. The short answer is “yes”. Wow Factor One. But like most answers when SystemVision is the topic, the initial response is often followed with “but there is more”. I then went on to demonstrate that SystemVision’s waveform analyzer can also plot data from a text or csv file. Why does this matter? Because this little feature lets users directly compare their simulation results with lab test data. Just save lab measurements to a text or csv file, then load the measurement file into SystemVision’s analyzer and plot the data. Wow Factor Two. And then to make the demonstration even more interesting, I showed how to quickly and automatically generate a simulation model directly from a plotted waveform. Such models can be used as system driving functions during simulation. Wow Factor Three. Students immediately chatted about how this simple text-based capability might be useful in their design work.
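The text-file round trip described above is easy to picture outside of any particular tool. This sketch uses plain Python and the standard csv module as a stand-in for SystemVision's own export and import: write a simulated waveform out as CSV rows, then read it back the way you would with lab measurement data.

```python
import csv, io, math

# Write a simulated waveform out as (time, value) CSV rows...
sim = [(t / 100, math.sin(2 * math.pi * t / 100)) for t in range(101)]
buffer = io.StringIO()                # stands in for a file on disk
writer = csv.writer(buffer)
writer.writerow(["time_s", "volts"])
writer.writerows(sim)

# ...then read it back, as you would with lab measurements, for comparison.
buffer.seek(0)
reader = csv.reader(buffer)
next(reader)                          # skip the header row
measured = [(float(t), float(v)) for t, v in reader]
print(len(measured), measured[0])     # 101 points, starting at (0.0, 0.0)
```

Once both the simulated and measured waveforms are in the same plain format, overlaying them for comparison, or fitting a driving-function model to the measured data, is straightforward.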

Okay, so manipulating text-based data is not really that complicated. While it may seem a cool capability to some users, for others it is an expected feature, like cup holders in your car. And compared to the long list of SystemVision’s really cool and more advanced modeling, simulation, and analysis capabilities, it may seem worth little mention. But my recent training class Wow Factor experience reinforced an important reminder: simple can be both useful and impressive, and is almost always better. If a task is simple, keep it that way; if it is complicated, simplify it.


31 May, 2014

It is official: SystemVision 5.10.3 is released and ready for download from SupportNet. The SystemVision engineering team made over 120 updates and improvements for this new release. Here are some of the highlights:

  • Relative and Absolute statistical tolerances support improved flexibility when defining device parameter tolerancing. “Relative” defines a tolerance as a percentage of the default value; “Absolute” defines a tolerance as a fixed delta from the default.
  • Asymmetric VHDL-AMS statistical distributions give you more flexibility when setting designs for a Monte Carlo (statistical) analysis. Asymmetric distributions can be applied to a model’s internal or external parameters, can be coupled with symmetric or asymmetric tolerances, and now work for SystemVision’s Sensitivity, Worst Case, and Extreme Value analyses.
  • Model Wizard is renamed as “Model and Symbol Wizard” to reflect new support for generating simulation models and schematic symbols from existing design schematics. The wizard is now SystemVision’s consolidated, central location for creating models and symbols for your simulations.
  • Experiment Manager is re-released after key improvements to help you set up, run, and manage simulation experiments for your designs.
  • New datasheet-based models add flexibility and more detail to your system analyses. Several standard datasheet models expand the Datasheet Model Builder’s capabilities. And the first additions to SystemVision’s new advanced datasheet model library support important device effects such as aging, high temperature, and low temperature.
  • 64-bit waveform analyzer and configurable analyzer memory allocation let you run longer/larger simulations (think a Monte Carlo analysis of several thousand runs for a complex system), and easily work with larger simulation databases.
  • Shared Library updates make it easier to include your custom libraries in a new or existing project, and to access your custom libraries from SystemVision’s Search/Place Symbols browser.
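The Relative versus Absolute distinction from the first bullet can be illustrated with a small generic sketch (plain Python sampling, not SystemVision's tolerance engine; the values are made up for illustration):

```python
import random

random.seed(1)
default = 10e3     # nominal resistor value, ohms

# "Relative": tolerance is a percentage of the default value.
rel_tol = 0.05     # +/-5% of nominal
rel_sample = random.uniform(default * (1 - rel_tol), default * (1 + rel_tol))

# "Absolute": tolerance is a fixed delta from the default.
abs_tol = 250.0    # +/-250 ohms, regardless of nominal
abs_sample = random.uniform(default - abs_tol, default + abs_tol)

print(rel_sample, abs_sample)   # both land near 10k, in different bands
```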

And SystemVision 5.10.3 includes a sneak peek at two new productivity tools:

  • Worst Case Scenario Manager uses the advanced datasheet models (mentioned above) to help you set up and run multiple worst case simulation scenarios. The result is a detailed view of your system’s worst case performance as components change due to aging and temperature effects.
  • Batch Simulation Tool lets you set up multiple simulation and analysis tasks that can run while you work on other projects, or while you are away from the office. Along with running batch simulations, you can easily compare new simulation results with those from an earlier analysis – a handy feature if you are updating simulation tools, tweaking model behavior, etc. And you can tell the tool to send you an email when a batch simulation starts and ends.

The Worst Case Scenario Manager and Batch Simulation Tool are beta features for this release. Read the release notes for more information on accessing SystemVision’s beta features.

If you are already a SystemVision user, download the new release and run the install program. Then be sure to review the release notes for more release details. And if you are new to SystemVision and want to take a closer look, contact your local Mentor Graphics sales and support team, or add a comment to this blog post and I will get back to you.


31 March, 2014

The 2014 session of the Integrated Electrical Solutions Forum (IESF) for the Military & Aerospace industries is coming up fast. Join us to see how Mentor Graphics tools can help you develop safer, more reliable systems. Here are the dates and locations:

  • April 22 in Dallas, Texas
  • April 24 in Everett, Washington
  • May 1 in Long Beach, California

If you are not familiar with IESF, here is a short description from mentor.com:

“The Integrated Electrical Solutions Forum is a global conference program for electrical/electronic design engineers, managers and executives. Each IESF event focuses on EE design issues in a specific industry such as Automotive; Aerospace; Off-Highway; Military or Commercial Vehicles sectors. IESF is free to attend, and is supported by Mentor Graphics, IBM and SAE International.”

Each location includes technical sessions in the following disciplines:

  • Platform-level Systems Engineering
  • Electrical & Wire Harness Design
  • Model-based Systems Design and Analysis
  • Network Verification (DO-330/DO-178C)
  • Design Lifecycle Management
  • Integrated PCB Systems Design & Manufacturing
  • Thermal Design for Reliability

Along with the technical sessions, Richard Aboulafia, Vice President – Analysis at the Teal Group, will give a keynote address titled “Back in Black: Aircraft and Defense Markets Outlook and Forecast”. Sound interesting? Click here for more details, including a look at the conference agenda and access to registration information.


25 March, 2014

A while back I wrote about the importance and joy of design practice. In that post I suggested that every time we design something, we are practicing and improving our design skills. And while we do learn from our successful designs, we no doubt learn more from the failures that sometimes litter the path to a project success. If you have done any design practice at all, you have no doubt had a design failure or two. I wonder sometimes why we say doctors are “practicing medicine” and lawyers are “practicing law”, but engineers are just “engineering”. Seems engineers should have the luxury of “practice” once in a while too. But I digress…

I recently attended a presentation by Dirk Kramers. If competitive sailing is not your passion, particularly on The America’s Cup scale, Dirk may not be a household name in your home. But he has the distinction of being the chief designer for the boat that won the 2013 America’s Cup race. He spent nearly an hour sharing the story of his team’s preparations leading up to the victory. While the race ended with the United States team defending their trophy, the run-up to the race is a cautionary tale of boat design specifically, and engineering design in general.

It is easy to watch the America’s Cup race and be amazed at how competing boats maneuver on the water. Even if water craft do not interest you, you have to be impressed at how well the boats and their crews perform on the water. Twin-hull craft are particularly fun to watch since a good stiff wind, coupled with an aggressive angle of attack into it, often lifts one of the hulls out of the water and turns a sedate twin-hull cruise into a slalom thrill ride. In recent years, teams competing in the America’s Cup have universally adopted twin-hull designs. Why? One word: speed. A twin-hull craft is generally faster than a mono-hull design of similar size.

So Dirk’s team started with a twin-hull design and decided to innovate by combining two additional sailing technologies: a fixed sailing wing and hydrofoils. The fixed wing wraps around the mast and makes the boat more maneuverable. Once at speed, the hydrofoils lift the boat hulls until the craft is essentially riding on stilts above the water. So what do you get when you add a wing and hydrofoils to an already fast twin-hull boat? Yep. More speed. But it turns out there is an engineering design price to pay for this boost in performance: instability. Add a fixed wing and hydrofoils to a twin-hull boat and you make it harder to control. While Dirk and his design team knew and anticipated this, they soon discovered the price demanded when the design and performance envelope gets pushed too far into uncharted waters, to use a maritime metaphor.

Dirk and his team set out to create a competition-crushing nautical speedster. Members of his team were experts in their individual areas, no doubt some of the best in their fields. Given this pool of talent, was there risk in their approach? Yes, but they felt the risk was manageable. The team did their due diligence in design, including running simulations, before building a boat to test. And the first seven testing days went well, but on the eighth day disaster struck. As the test day wound down, the crew lost control of the craft, which toppled and started sinking. The fixed wing was destroyed and the rest of the boat badly damaged. Pushing the design and performance envelope crumpled carbon fiber, endangered crew lives, and nearly scuttled the team’s chance to compete in the race, let alone win.

Obviously Dirk and his team recovered and rallied to get back in the running, eventually besting Team New Zealand with a score of 9 to 8. But what design lesson can we learn from Dirk’s story? The answer is obvious: risk is an inherent part of engineering, particularly when dealing with unproven technologies, or combining proven technologies in unproven ways. And real danger often follows. Are these reasons enough to stop taking design risks? Of course not. Innovation in almost any design field requires risk, often by challenging old or traditional ways of doing things. Design risk drives innovation, and innovation often wins races, whether in sports or business. A key design objective, then, is to mitigate risk while still advancing technology – a perfect role for modeling and simulation. Design teams in many industries make their mistakes in simulation long before committing resources to prototype testing. And while failures still happen, the risk is better quantified.


17 March, 2014

Most of the customers I work with design small systems, or smaller pieces of larger systems. Occasionally I get to see an end product: a car, an airplane, a mockup of the International Space Station to name a few. Most of these systems are built or assembled in one location, then put to work in another. In other words, the systems most of my customers work with are portable in a very general sense.

On the other end of the scale – what I call Big Engineering – are systems whose pieces may be designed and manufactured at multiple locations, but when the parent system is built, it lives and works in a fixed location throughout its serviceable life. Think large industrial sites. My sister is an engineer at just such a place – a coal fired power plant.

Power plants are some of the biggest industrial sites on our planet. Get beyond a handful of kilowatts and the space requirements are almost mind-boggling. Generating electricity, no matter the method or amount, requires a physical footprint that scales roughly with the number of watts generated.

My sister works at the Intermountain Power Plant, which sits on roughly 4600 acres of high desert in Central Utah. It runs two turbine-driven 950 megawatt generators. The footprint of the turbine + generator room is roughly the square footage of a medium-sized shopping mall. Everything else at the site is dedicated to making or keeping the turbines and generators running and generating electricity. The plant is one of the biggest of its type in the nation.

My sister recently took me on a tour of her plant, which I thought would keep us busy for maybe an hour. But six very interesting hours later we turned in our hard hats and safety goggles, and headed home. During the tour, we drove and walked to areas not on any normal plant tour. She explained in interesting detail every phase of the generation process, from when the coal is dumped onsite from train or truck, to when the remains of spent coal are transported by conveyor belt to a remote corner of the site. While I am not fluent in power-plant speak, it was fun to talk engineer-to-engineer about electricity, our common technical language. There are definitely worthwhile perks when your sister is a lead engineer at a power plant.

Coal-fired power plants are simple in principle: coal makes fire, fire boils water to make steam, and the steam is collected under pressure and channeled to the generation room, where it turns the turbine that turns the generator to create 3-phase electricity. The electricity is then routed to a big transformer (my sister’s baby), where it is stepped up to a mind-bogglingly large number of kilovolts before being converted to DC and shipped to Southern California. Yep. Utah residents very rarely benefit from any electricity generated at the plant. It all goes to help power California communities. As is often the case, however, what is simple in principle gets pretty complicated when you peek under the hood. You might think a system that burns coal to boil water to make steam to spin a turbine to turn a generator is simple. Nope. Within the power plant are essentially three separate, complex systems: fire, steam, and electricity.
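The scale of that "simple" chain is easier to appreciate with a quick back-of-envelope calculation. The sketch below estimates how much coal a single 950 megawatt unit might burn; the thermal efficiency and coal energy density are illustrative textbook-style assumptions, not published figures for the Intermountain Power Plant.

```python
# Back-of-envelope coal consumption for one 950 MW generating unit.
# The efficiency and coal energy density are assumed illustrative
# values, not actual numbers for this plant.

P_electric = 950e6      # electrical output, watts (from the post)
efficiency = 0.38       # assumed overall thermal efficiency
coal_energy = 24e6      # assumed energy density of coal, joules/kg

P_thermal = P_electric / efficiency          # heat the boiler must supply, W
coal_rate_kg_s = P_thermal / coal_energy     # coal burn rate, kg/s
coal_per_day_tonnes = coal_rate_kg_s * 86400 / 1000

print(f"thermal input:   {P_thermal / 1e6:.0f} MW")
print(f"coal burn rate:  {coal_rate_kg_s:.0f} kg/s")
print(f"coal per day:    {coal_per_day_tonnes:.0f} tonnes")
```

Under these assumptions one unit chews through on the order of a hundred kilograms of coal every second, around nine thousand tonnes a day, which goes a long way toward explaining why coal arrives by the trainload.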

The fire system tracks fuel from when it arrives on site to when waste from burnt coal is sent out to the ash bed. Coal is pulverized to dust, then mixed with air to create a highly combustible fuel that fires in what is essentially a big boiler. Once the fuel is spent, the remains are filtered through a multi-step process to mitigate air pollution. Some of the ash gets recycled at a local cement plant; the rest is piled on spare acreage.

The steam system is the middle process, converting heat from the coal to pressurized steam. Lots and lots of water is moved around the plant by motor-driven pumps of all sizes. It is a great example of a pretty efficient thermodynamic system at work. Water is transformed from liquid to gas then back to liquid again. And sometimes ice enters the process if temperatures dip far enough below freezing.
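That liquid-to-gas-to-liquid loop is a heat engine, and thermodynamics caps how much of the coal's heat can ever become electricity. Here is a minimal sketch of that Carnot limit, using assumed textbook steam and condenser temperatures rather than anything measured at this plant.

```python
# Why a coal plant cannot convert all of the coal's heat to electricity:
# the Carnot limit. Temperatures are typical textbook values, assumed
# for illustration only.

T_hot = 540 + 273.15    # assumed main-steam temperature, kelvin
T_cold = 35 + 273.15    # assumed condenser temperature, kelvin

carnot_limit = 1 - T_cold / T_hot
print(f"Carnot efficiency limit: {carnot_limit:.1%}")
```

Even this ideal bound sits near 60 percent, and real Rankine-cycle plants land well below it, typically in the mid-30s to low-40s percent, which is why so much of the site is devoted to moving heat and water around efficiently.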

All of this leads up to the electrical system. Pressurized steam drives the turbines which spin the generators to create 3-phase electricity. All of that juice is routed to a big transformer and sent to a second facility not far away to get converted to DC for the trip to the West Coast. The combined turbine + generator units are massive (note the picture), as are the transformers. Despite their size, however, there is barely a vibration in the generation room. A good thing, too, since even a small vibration could lead to some pretty serious damage.
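Part of that smoothness comes from a neat property of balanced 3-phase generation: the total instantaneous power delivered across the three phases is constant, so the generator sees a steady mechanical load rather than a pulsating one. A small numerical sketch (per-phase amplitudes and the 60 Hz frequency are arbitrary illustrative choices):

```python
# Demonstrate that balanced 3-phase power is constant in time.
# Amplitudes and frequency below are arbitrary illustrative values.
import math

V, I = 1.0, 1.0                            # per-phase peak voltage and current
phases = [0, 2 * math.pi / 3, 4 * math.pi / 3]   # phases 120 degrees apart

def total_power(t, omega=2 * math.pi * 60):      # 60 Hz grid assumed
    # Sum of v(t) * i(t) over all three phases (unity power factor)
    return sum(V * math.sin(omega * t + p) * I * math.sin(omega * t + p)
               for p in phases)

# Sample at several instants: every sample is the same, 3*V*I/2 = 1.5
samples = [total_power(t / 1000) for t in range(10)]
print(samples)
```

The single-phase power pulses at twice the line frequency, but the three pulsations cancel when summed, one reason a massive turbine + generator deck can run with barely a vibration.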


One recurring thought I had during my tour was “How did someone figure this out?” The short answer, of course, is “engineering”. But that may be too simple an answer. The real answer is “iterative engineering”, which is how some of the most elegant system solutions are found. And while some of my tour questions appeared complicated, many of the solutions were elegantly simple. Even though a process seems complex, it is often built on a series of very simple steps or sub-processes.

I have often wondered whether power plant design could benefit from simulation-based, multi-physics modeling and analysis. After seeing a power plant in action, I believe the answer is yes, without a doubt. It is hard to appreciate what goes into making the lights turn on in your house until you see all of the power plant pieces working together. But it is a well-balanced, finely-tuned, closely-monitored process that can easily go haywire if, as my sister said, just one screw is out of place.

Engineering at any level is interesting. From millions of transistors crowded inside an integrated circuit, to the turbine and generator deck of a coal-fired power plant, amazing things happen. The cool thing about Big Engineering is that it is easier to see the system parts and pieces working together. Not so easy to look inside an integrated circuit, let alone figure out even the basics of what it does (unless, of course, you either designed the chip or have a datasheet). But how industrial-sized systems work is easier to decipher, with the added advantage of being able to reach out and touch the toys.
