Mike Jensen's Blog

Views, insights, and commentary on mechatronic system design and analysis.

5 December, 2014

In my last post I wrote about system reliability versus system robustness. I briefly explained my definition of the two, and suggested some design process shifts to help improve both. Sometimes the required process change is small; sometimes it is substantial, almost like an entire design paradigm shift. But the reward, whether measured in improved product reliability or robustness, is usually worth the investment. A recent experience brought this fact into focus.

My church runs a food cannery just a couple of miles from my home. As food production plants go, it is a small to medium-sized facility. Permanent staff probably numbers fewer than ten people; food production relies heavily on church and community volunteers. Despite its modest size, however, this cannery packages food to help feed folks in need throughout the United States.

I usually work one or two half-day cannery shifts each year. If you have never visited a food production facility, but are fascinated by machines that whir and spin and rattle and shake and sputter to produce a product, you should put a production plant visit on your Bucket List. Such plants are an interconnected, entertaining mix of technologies. Depending on the product in production, you will find mechanical, electrical, electronic, optical, hydraulic, pneumatic, software, and even chemical elements in the plant system mix.

Improvements in process technology often automate manual steps in many production flows. Human-in-the-loop inconsistencies are frequently replaced with mechanized precision. The upgrade process, however, usually takes place over time. One automation upgrade paves the way for another, and another, and another. During my years working at the cannery, I have watched a gradual transition from a labor-intensive process with volunteers working at stations throughout the plant, to a handful of folks placed at key locations to monitor (and occasionally help) the automated process, while most of the volunteers now work at a near-end-of-process station doing what machines currently cannot: visually inspect processed fruit for blemishes that might lower quality or reduce consumer appeal.

During canning season, fruit is moved from one side of the plant to the other along several conveyor belts. The final step in fruit preparation, just before canning, is inspection. The fruit inspection area is a long conveyor belt that runs one-half the length of the production floor. A typical crew of thirty workers, divided into two teams of fifteen, can stand comfortably spaced along either side of the belt. Fruit fresh from the peeler enters at one end of the inspection belt, and is conveyed along in front of the workers who inspect and trim the pieces before they are dumped into a water bath prior to being stuffed and sealed inside a can.

On my most recent visit to the cannery, I was the anchor on my side of the inspection belt — the last person to check the fruit pieces before canning. Fortunately, the inspection process is a team effort, so one position on the belt is not much busier than any other. As I worked away trying to keep up with my inspection duties, I noticed the flow of fruit slowed a bit, and finally just stopped altogether. “Great!” I thought. “A short break!” So I waited. Then I waited some more. After staring at an empty belt for a short while, I figured there had to be equipment trouble somewhere in the plant.

Then a fellow volunteer pointed to the problem: a failed motor, which the permanent plant staff was trying to replace. The motor was maybe ten inches in diameter and perhaps sixteen inches from end-to-end, so not really very big as electric motors go. But that single, small motor’s failure brought the entire cannery to a standstill. Volunteers waited while plant mechanics replaced the failed motor with a spare. Even though replacing the motor only took twenty minutes or so, the breakdown reduced production volume, and could easily have cost money in terms of non-productive workers — luckily we were all volunteers.

So I was reminded that day that failures have a very real cost beyond just the price of the repair. The failure of an inexpensive part, in this case a $200 motor, can cost many times its value in wasted resources and reduced production. It is not uncommon for even smaller, cheaper part failures to cost much, much more in production losses. System components, in this case a simple motor, fail — an engineering fact of life, and a reminder that no matter how diligent our design process, systems are only as reliable as the weakest component. Choosing system components for their reliability and robustness metrics is just as important as making sure we account for, and compensate for, system variability in our design process. Component selection, and designing reliable and robust systems, naturally go hand-in-hand.
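That weakest-link intuition has simple arithmetic behind it: if every component must work for the system to work, system reliability is the product of the component reliabilities, so one mediocre part dominates the result. A quick sketch in Python (the reliability figures are invented for illustration):

```python
def series_reliability(component_reliabilities):
    """Reliability of a system whose components must all work:
    the product of the individual reliabilities."""
    r = 1.0
    for ri in component_reliabilities:
        r *= ri
    return r

# Hypothetical mission reliabilities for a conveyor line:
# five solid components and one mediocre motor.
parts = [0.999, 0.999, 0.998, 0.999, 0.997, 0.95]
print(round(series_reliability(parts), 4))
```

Even with five excellent components, the overall number never rises above the worst part on the list.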


7 October, 2014

If a system is reliable, is it also robust? And is the converse also true: does a robust system have to be reliable? The answers are no and yes. A reliable system performs its intended function when conditions are nominal. As long as design details and environmental conditions remain stable, you can count on a reliable system to do its job time after time. But what happens if these same design details and environmental conditions start drifting significantly off nominal? The answer is simple: system reliability goes out the door. Fortunately, if we have done our jobs right, when variation attacks a system, robust performance takes over.

Making a system reliable usually requires a pretty straightforward design process: the nominal design specification gets turned into a functioning system. Making a system robust, however, adds complexity to a design process. First, we need to determine possible variation sources. Next, we need to determine how these variations affect system performance. And finally, knowing how variation affects performance helps us improve our system design. Sound like a lot of work? No doubt it can be. But if system performance must be robust as well as reliable, the extra work is necessary and worthwhile.

When designing to just a specification, we often focus on meeting the written specification, but forget our design often plugs into a larger system that has its own set of tolerances – tolerances that quite possibly were not accounted for in our specification. So when the tolerances for our design get thrown into the mix with the parent system’s tolerances – sometimes called tolerance stack-up – strange things can happen. And then we scramble to diagnose the problem and try to find design answers. We end up trying to test-in reliability and robustness. If you have spent much time at the test bench, you know exactly what I am talking about. But there is a better way, and by now you have probably figured out it involves modeling and simulation.
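Tolerance stack-up itself is easy to put in numbers. The sketch below (plain Python, with made-up part tolerances) compares the naive worst-case stack, where every tolerance hits its limit in the same direction, against the root-sum-square (RSS) estimate often used when the variations are independent:

```python
import math

def worst_case_stack(tolerances):
    """Worst case: every contribution at its limit, same direction."""
    return sum(abs(t) for t in tolerances)

def rss_stack(tolerances):
    """Root-sum-square: statistical combination of independent tolerances."""
    return math.sqrt(sum(t * t for t in tolerances))

# Hypothetical tolerances (could be mm, volts, or ohms) for four stacked contributions.
tols = [0.10, 0.05, 0.08, 0.02]
print(round(worst_case_stack(tols), 6))
print(round(rss_stack(tols), 6))
```

The RSS result is noticeably smaller than the worst case, which is exactly why a parent system designed around statistical assumptions can be surprised by a subsystem that actually lands at its limits.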

Reliability and robustness can be designed, rather than tested, into a system. But doing so usually means making a few design process changes. Teaching your simulator to tell you the right system story requires two things: the right models, and the right analyses. Getting the model piece right means choosing options that let you model whatever technology your system uses. If your system is 100% electrical, SPICE might be an adequate modeling answer, though there are better and more informative ways to tackle Ohm’s law. But the moment your modeling needs extend beyond Ohm’s law, accurate system modeling requires more horsepower, the horsepower found, for example, in a multi-physics hardware modeling option like the IEEE standard VHDL-AMS language. I’ve commented on the power and flexibility of VHDL-AMS before, so browse my earlier blog posts for more information. Just know that with a language like VHDL-AMS, you can create models that tell some pretty amazing system performance stories.

Getting the analysis piece right means selecting a simulation toolset that takes advantage of your model library to run your system through its paces. Hint: you need more than standard time and frequency domain analyses. Standard analyses will get you to the “reliable design” stage. But the “robust design” stage requires looking at system performance as key parameter values start wiggling off nominal. When parameters start to wiggle, rest assured that performance issues are not far behind. Accurately analyzing variability requires advanced analyses such as those found in the SystemVision modeling and analysis environment:

  • Parametric sweep
  • Relative, tolerance-based, and statistical sensitivity
  • Monte Carlo (statistical), including standard and sequential extreme values
  • Worst case
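The flavor of a statistical analysis can be sketched outside any particular tool. The toy Monte Carlo below is plain Python, not SystemVision, and assumes 5% uniform resistor tolerances; it wiggles both resistors of a voltage divider and collects the spread of the output voltage:

```python
import random

def divider_vout(vin, r1, r2):
    """Output of a simple resistive voltage divider."""
    return vin * r2 / (r1 + r2)

def monte_carlo(runs=2000, vin=12.0, r1_nom=10e3, r2_nom=10e3, tol=0.05, seed=42):
    """Sample both resistors uniformly within +/- tol and collect Vout."""
    rng = random.Random(seed)
    results = []
    for _ in range(runs):
        r1 = r1_nom * (1 + rng.uniform(-tol, tol))
        r2 = r2_nom * (1 + rng.uniform(-tol, tol))
        results.append(divider_vout(vin, r1, r2))
    return results

vouts = monte_carlo()
print(min(vouts), max(vouts))  # spread around the 6.0 V nominal
```

Even this toy example shows the point: a design that is exactly right at nominal can still wander well away from its target once real component tolerances enter the picture.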

To learn how better modeling practices and advanced analyses can contribute to more reliable and robust systems, grab your lunch and click over to our new seminar: Improving Complex Design Reliability and Robustness. Then post a comment to let me know what design methods you most use to improve system performance when design parameters vary.


16 June, 2014

I have mentioned in earlier posts that one of my responsibilities on the SystemVision team is teaching training classes. If users want formal SystemVision or VHDL-AMS training, I am usually the instructor. It is fun and occasionally gets me out of my office for a few days.

Because SystemVision is such a flexible modeling and analysis environment with a broad range of applications (think mixed-signal analog and digital + multi-technology), students in my classes often have a mix of technical expertise and design responsibilities. One class may be for an aerospace customer, another for an automotive customer, and still another for an industrial controls customer. And even though a class may be for a single customer, students invariably have a mix of engineering assignments. This makes the classes even more interesting for me since I learn about advances in many different areas. As usual, the teacher often becomes the student.

One of the reasons I like teaching classes is the Demo Factor. And what is the Demo Factor? Simply the opportunity to demonstrate capabilities students are interested in and that will most certainly benefit their design flows, but that are usually not covered in the training material. These are usually on-the-fly demonstrations, meaning a student asks “Can SystemVision do [something]?”, and I spend part of the class time, usually while students work on lab exercises, creating a short demonstration. Building these short demos is fun and interesting, and often leads to the Wow Factor, the response I often get when a student learns SystemVision will do something useful and cool that their current design process does not support. Take, for example, a short waveform analyzer demo I did in a recent class.

One of the students wanted to know if SystemVision supports converting a waveform into a text or comma separated value (csv) file. The short answer is “yes”. Wow Factor One. But like most answers when SystemVision is the topic, the initial response is often followed with “but there is more”. I then went on to demonstrate that SystemVision’s waveform analyzer can also plot data from a text or csv file. Why does this matter? Because this little feature lets users directly compare their simulation results with lab test data. Just save lab measurements to a text or csv file, then load the measurement file into SystemVision’s analyzer and plot the data. Wow Factor Two. And then to make the demonstration even more interesting, I showed how to quickly and automatically generate a simulation model directly from a plotted waveform. Such models can be used as system driving functions during simulation. Wow Factor Three. Students immediately chatted about how this simple text-based capability might be useful in their design work.
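Mechanically, this is just a CSV round trip, which is easy to mimic in any environment. The rough sketch below (plain Python, not the SystemVision analyzer; the column names are invented) writes a simulated waveform to CSV text and reads it back the way a lab-measurement file would be read for comparison plotting:

```python
import csv
import io
import math

# Write a simulated waveform (time, value pairs) to CSV text.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["time_s", "vout_V"])
for i in range(5):
    t = i * 1e-3
    writer.writerow([t, round(math.sin(2 * math.pi * 100 * t), 6)])

# Read it back, as a lab-data file would be read for comparison plotting.
buf.seek(0)
rows = list(csv.DictReader(buf))
times = [float(r["time_s"]) for r in rows]
values = [float(r["vout_V"]) for r in rows]
print(times, values)
```

Once both simulation and lab data live in the same simple format, overlaying one on the other is trivial, and that is where the comparison value comes from.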

Okay, so manipulating text-based data is not really that complicated. While it may seem a cool capability to some users, for others it is an expected feature, like cup holders in your car. And compared to the long list of SystemVision’s really cool and more advanced modeling, simulation, and analysis capabilities, it may seem worth little mention. But my recent training class Wow Factor experience reinforced an important reminder: simple can be both useful and impressive, and is almost always better. If a task is simple, keep it that way; if it is complicated, simplify it.


31 May, 2014

It is official: SystemVision 5.10.3 is released and ready for download from SupportNet. The SystemVision engineering team made over 120 updates and improvements for this new release. Here are some of the highlights:

  • Relative and Absolute statistical tolerances support improved flexibility when defining device parameter tolerancing. “Relative” defines a tolerance as a percentage of the default value; “Absolute” defines a tolerance as a fixed delta from the default.
  • Asymmetric VHDL-AMS statistical distributions give you more flexibility when setting designs for a Monte Carlo (statistical) analysis. Asymmetric distributions can be applied to a model’s internal or external parameters, can be coupled with symmetric or asymmetric tolerances, and now work for SystemVision’s Sensitivity, Worst Case, and Extreme Value analyses.
  • Model Wizard is renamed as “Model and Symbol Wizard” to reflect new support for generating simulation models and schematic symbols from existing design schematics. The wizard is now SystemVision’s consolidated, central location for creating models and symbols for your simulations.
  • Experiment Manager is re-released after key improvements to help you set up, run, and manage simulation experiments for your designs.
  • New datasheet-based models add flexibility and more detail to your system analyses. Several standard datasheet models expand the Datasheet Model Builder’s capabilities. And the first additions to SystemVision’s new advanced datasheet model library support important device effects such as aging, high temperature, and low temperature.
  • 64-bit waveform analyzer and configurable analyzer memory allocation let you run longer/larger simulations (think a Monte Carlo analysis of several thousand runs for a complex system), and easily work with larger simulation databases.
  • Shared Library updates make it easier to include your custom libraries in a new or existing project, and to access your custom libraries from SystemVision’s Search/Place Symbols browser.
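The relative/absolute distinction (and the asymmetric limits mentioned above) reduce to simple arithmetic. This sketch is plain Python, independent of SystemVision's actual implementation; it turns a nominal parameter value into the min/max bounds a statistical analysis would sample between:

```python
def relative_bounds(nominal, pct):
    """'Relative' tolerance: a percentage of the nominal value."""
    delta = nominal * pct / 100.0
    return nominal - delta, nominal + delta

def absolute_bounds(nominal, delta):
    """'Absolute' tolerance: a fixed offset from the nominal value."""
    return nominal - delta, nominal + delta

def asymmetric_bounds(nominal, minus, plus):
    """Asymmetric tolerance: different low-side and high-side offsets."""
    return nominal - minus, nominal + plus

print(relative_bounds(10e3, 5))        # 5% resistor: (9500.0, 10500.0)
print(absolute_bounds(2.5, 0.1))       # fixed +/- 0.1 V reference
print(asymmetric_bounds(100.0, 2.0, 5.0))  # -2/+5 limits
```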

And SystemVision 5.10.3 includes a sneak peek at two new productivity tools:

  • Worst Case Scenario Manager uses the advanced datasheet models (mentioned above) to help you set up and run multiple worst case simulation scenarios. The result is a detailed view of your system’s worst case performance as components change due to aging and temperature effects.
  • Batch Simulation Tool lets you set up multiple simulation and analysis tasks that can run while you work on other projects, or while you are away from the office. Along with running batch simulations, you can easily compare new simulation results with those from an earlier analysis – a handy feature if you are updating simulation tools, tweaking model behavior, etc. And you can tell the tool to send you an email when a batch simulation starts and ends.

The Worst Case Scenario Manager and Batch Simulation Tool are beta features for this release. Read the release notes for more information on accessing SystemVision’s beta features.

If you are already a SystemVision user, download the new release and run the install program. Then be sure to review the release notes for more release details. And if you are new to SystemVision and want to take a closer look, contact your local Mentor Graphics sales and support team, or add a comment to this blog post and I will get back to you.


31 March, 2014

The 2014 session of the Integrated Electrical Solutions Forum (IESF) for the Military & Aerospace industries is coming up fast. Join us to see how Mentor Graphics tools can help you develop safer, more reliable systems. Here are the dates and locations:

  • April 22 in Dallas, Texas
  • April 24 in Everett, Washington
  • May 1 in Long Beach, California

If you are not familiar with IESF, here is a short description from mentor.com:

“The Integrated Electrical Solutions Forum is a global conference program for electrical/electronic design engineers, managers and executives. Each IESF event focuses on EE design issues in a specific industry such as Automotive; Aerospace; Off-Highway; Military or Commercial Vehicles sectors. IESF is free to attend, and is supported by Mentor Graphics, IBM and SAE International.”

Each location includes technical sessions in the following disciplines:

  • Platform-level Systems Engineering
  • Electrical & Wire Harness Design
  • Model-based Systems Design and Analysis
  • Network Verification (DO-330/DO-178C)
  • Design Lifecycle Management
  • Integrated PCB Systems Design & Manufacturing
  • Thermal Design for Reliability

Along with the technical sessions, Richard Aboulafia, Vice President – Analysis at the Teal Group, will give a keynote address titled “Back in Black: Aircraft and Defense Markets Outlook and Forecast”. Sound interesting? Click here for more details, including a look at the conference agenda and access to registration information.


25 March, 2014

A while back I wrote about the importance and joy of design practice. In that post I suggested that every time we design something, we are practicing and improving our design skills. And while we do learn from our successful designs, we no doubt learn more from the failures that sometimes litter the path to project success. If you have done any design practice at all, you have no doubt had a design failure or two. I wonder sometimes why we say doctors are “practicing medicine” and lawyers are “practicing law”, but engineers are just “engineering”. Seems engineers should have the luxury of “practice” once in a while too. But I digress…

I recently attended a presentation by Dirk Kramers. If competitive sailing is not your passion, particularly on The America’s Cup scale, Dirk may not be a household name in your home. But he has the distinction of being the chief designer for the boat that won the 2013 America’s Cup race. He spent nearly an hour sharing the story of his team’s preparations leading up to the victory. While the race ended with the United States team defending their trophy, the run up to the race is a cautionary tale of boat design specifically, and engineering design in general.

It is easy to watch the America’s Cup race and be amazed at how competing boats maneuver on the water. Even if water craft do not interest you, you have to be impressed at how well the boats and their crews perform on the water. Twin-hull craft are particularly fun to watch since a good stiff wind, coupled with an aggressive angle of attack into it, often lifts one of the hulls out of the water and turns a sedate twin-hull cruise into a slalom thrill ride. In recent years, teams competing in the America’s Cup have universally adopted twin-hull designs. Why? One word: speed. A twin-hull craft is generally faster than a mono-hull design of similar size.

So Dirk’s team started with a twin-hull design and decided to innovate by combining two additional sailing technologies: a fixed sailing wing and hydrofoils. The fixed wing wraps around the mast and makes the boat more maneuverable. Once at speed, the hydrofoils lift the boat hulls until the craft is essentially riding on stilts above the water. So what do you get when you add a wing and hydrofoils to an already fast twin-hull boat? Yep. More speed. But it turns out there is an engineering design price to pay for this boost in performance: instability. Add a fixed wing and hydrofoils to a twin-hull boat and you make it harder to control. While Dirk and his design team knew and anticipated this, they soon discovered the price demanded when the design and performance envelope gets pushed too far into uncharted waters, to use a maritime metaphor.

Dirk and his team set out to create a competition crushing nautical speedster. Members on his team were experts in their individual areas, no doubt some of the best in their fields. Given this pool of talent, was there risk in their approach? Yes, but they felt the risk manageable. The team did their due diligence in design, including running simulations, before building a boat to test. And the first seven testing days went well, but on the eighth day disaster struck. As the test day wound down, the crew lost control of the craft, which toppled and started sinking. The fixed wing was destroyed and the rest of the boat badly damaged. Pushing the design and performance envelope crumpled carbon fiber, endangered crew lives, and nearly scuttled the team’s chance to compete in the race, let alone win.

Obviously Dirk and his team recovered and rallied to get back in the running, eventually besting Team New Zealand with a score of 9 to 8. But what design lesson can we learn from Dirk’s story? The answer is obvious: risk is an inherent part of engineering, particularly when dealing with unproven technologies, or combining proven technologies in unproven ways. And real danger often follows. Are these reasons enough to stop taking design risks? Of course not. Innovation in almost any design field requires risk, often by challenging old or traditional ways of doing things. Design risk drives innovation, and innovation often wins races, whether in sports or business. A key design objective, then, is to mitigate risk while still advancing technology – a perfect role for modeling and simulation. Design teams in many industries make their mistakes in simulation long before committing resources to prototype testing. And while failures still happen, the risk is better quantified.


17 March, 2014

Most of the customers I work with design small systems, or smaller pieces of larger systems. Occasionally I get to see an end product: a car, an airplane, a mockup of the International Space Station to name a few. Most of these systems are built or assembled in one location, then put to work in another. In other words, the systems most of my customers work with are portable in a very general sense.

On the other end of the scale – what I call Big Engineering – are systems whose pieces may be designed and manufactured at multiple locations, but when the parent system is built, it lives and works in a fixed location throughout its serviceable life. Think large industrial sites. My sister is an engineer at just such a place – a coal fired power plant.

Power plants are some of the biggest industrial sites on our planet. Get beyond a handful of kilowatts and the space requirements are almost mind-boggling. Generating electricity, no matter the method or amount, requires a physical footprint proportional to the number of watts generated.

My sister works at the Intermountain Power Plant, which sits on roughly 4,600 acres of high desert in Central Utah. It runs two turbine-driven 950 megawatt generators. The footprint of the turbine + generator room is roughly the square footage of a medium-sized shopping mall. Everything else at the site is dedicated to getting and keeping the turbines and generators running and generating electricity. The plant is one of the biggest of its type in the nation.

My sister recently took me on a tour of her plant, which I thought would keep us busy for maybe an hour. But six very interesting hours later we turned in our hard hats and safety goggles, and headed home. During the tour, we drove and walked to areas not on any normal plant tour. She explained in interesting detail every phase of the generation process, from when the coal is dumped onsite from train or truck, to when the remains of spent coal are transported by conveyor belt to a remote corner of the site. While I am not fluent in power-plant speak, it was fun to talk engineer-to-engineer about electricity, our common technical language. There are definitely worthwhile perks when your sister is a lead engineer at a power plant.

Coal-fired power plants are simple in principle: coal makes fire, fire boils water to make steam, steam is collected under pressure and channeled to the generation room where it turns the turbine that turns the generator to create 3-phase electricity. The electricity is then routed to a big transformer (my sister’s baby) where it is stepped up to a mind-bogglingly large number of kilovolts before being converted to DC and shipped to Southern California. Yep. Utah residents very rarely benefit from any electricity generated at the plant. It all goes to help power California communities. As is often the case, however, what is simple in principle often gets pretty complicated when you peek under the hood. You might think a system that burns coal to boil water to make steam to spin a turbine to turn a generator is simple. Nope. Within the power plant are essentially three separate, complex systems: fire, steam, electricity.

The fire system tracks fuel from when it arrives on site to when waste from burnt coal is sent out to the ash bed. Coal is pulverized to dust, then mixed with air to create a highly combustible fuel that fires in what is essentially a big boiler. Once the fuel is spent, the remains are filtered through a multi-step process to mitigate air pollution. Some of the ash gets recycled at a local cement plant; the rest is piled on spare acreage.

The steam system is the middle process, converting heat from the coal to pressurized steam. Lots and lots of water is moved around the plant by motor-driven pumps of all sizes. It is a great example of a pretty efficient thermodynamic system at work. Water is transformed from liquid to gas then back to liquid again. And sometimes ice enters the process if temperatures dip far enough below freezing.

All of this leads up to the electrical system. Pressurized steam drives the turbines which spin the generators to create 3-phase electricity. All of that juice is routed to a big transformer and sent to a second facility not far away to get converted to DC for the trip to the West Coast. The combined turbine + generator units are massive (note the picture), as are the transformers. Despite their size, however, there is barely a vibration in the generation room. A good thing, too, since even a small vibration could lead to some pretty serious damage.

[Photo: turbine + generator units at the power plant]

One recurring thought I had during my tour was “How did someone figure this out?” The short answer, of course, is “engineering”. But that may be too simple an answer. The real answer is “iterative engineering”, which is how some of the most elegant system solutions are found. And while some of my tour questions appeared complicated, many of the solutions were elegantly simple. Even though a process seems complex, it is often built on a series of very simple steps or sub-processes.

I have often wondered whether power plant design could benefit from simulation-based, multi-physics modeling and analysis. After seeing a power plant in action, I believe the answer is yes, without a doubt. It is hard to appreciate what goes in to making the lights turn on in your house until you see all of the power plant pieces working together. But it is a well-balanced, finely-tuned, closely-monitored process that can easily go haywire if, as my sister said, just one screw is out of place.

Engineering at any level is interesting. From millions of transistors crowded inside an integrated circuit, to the turbine and generator deck of a coal-fired power plant, amazing things happen. The cool thing about Big Engineering is it is easier to see the system parts and pieces working together. Not so easy to look inside of an integrated circuit, let alone to figure out even the basics of what it does (unless, of course, you either designed the chip or have a datasheet). But how industrial-sized systems work is easier to decipher, with the added advantage of being able to reach out and touch the toys.


21 January, 2014

I wrote a month or so ago about the challenge of finding — or creating — simulation models. In that post, I suggested there are three general categories of engineers looking for a model:

  • Give me a model
  • Help me understand the model
  • Help me develop a model

For the “help me develop a model” category, I mentioned Graphical Modeling and Language-based Modeling as two popular model development options. These are useful methods when all you have is equations describing a device’s performance. But there is a modeling need that sits between having a canned simulation model and needing to create one from scratch. It’s a modeling middle-ground where you have component data from which to generate your model. For example, you may have a list of SPICE parameters, or perhaps VHDL-AMS code, that you want to turn into a simulation model for SystemVision. Or you may want to create a model from component datasheet performance parameters or curves. Enter the SystemVision Model Wizard, newly added to the SystemVision 5.10 release.

Model Wizard accepts component data in multiple formats, including SPICE parameters, VHDL-AMS code, datasheet performance tables, and datasheet performance curves. From these data sources, the wizard generates a SPICE or VHDL-AMS model along with a schematic symbol you can immediately use in a design. The generated model’s format depends on the input data’s format. If you start with SPICE parameters, you end up with a SPICE model. If you start with VHDL-AMS code or datasheet information, the end result is a VHDL-AMS model. With device data in-hand, the wizard walks you through five easy steps:

  1. Select your model data type (SPICE, VHDL-AMS, Datasheet)
  2. Choose the base SystemVision model that matches your data
  3. Select or create your symbol
  4. Match the ports in your model to the pins on your symbol
  5. Set default values for model parameters

When these steps are complete, you simply save the model and start using it, which brings me to another new feature in SystemVision 5.10: Shared Libraries. Shared Libraries support the Model Wizard and let you create your own custom libraries of simulation models and schematic symbols. The Model Wizard’s final step is saving your model to the local project or a standalone library. If you want to use the model only in your local project, then save it with the project. But if you save your new model in a library outside of a project, you can access that library’s models and symbols from any of your SystemVision projects.

If you have SystemVision 5.10 installed, try the Model Wizard and see how easy it is to create simulation models from a variety of data formats. And if you still need to install the 5.10 release, hop over to SupportNet to download the software and begin the installation. Since feedback is always a good thing, as you use the wizard to create models, post a comment or send me an email and let me know what you think.


16 December, 2013

SystemVision 5.10.2 is finished and available for immediate download from Mentor Graphics’ SupportNet website. What will you find in this new release? Here are some of the new/updated features and capabilities:

  • Asymmetric parameter tolerances
  • Custom, user defined statistical distributions
  • Global Parameter support for Monte Carlo analysis details
  • Worst Case and Extreme Value analysis support for asymmetric parameter tolerances when using SPICE-based Uniform and Normal built-in distributions
  • Shared Library Manager for creating, editing, and managing user-defined model libraries
  • Menu option for switching between SystemVision and Corporate central libraries
  • Automatic symbol editing for new generic box symbols
  • Use reference designators to map schematic symbols to simulation models
  • User-defined parameters for schematic-based models
  • Define design parameters with a schematic scope
  • Save a simulation end-point file to continue the analysis in a different SystemVision session

All in all, the SystemVision Engineering team addressed 90+ enhancements and improvements since the 5.10.1 release in mid-summer. Want more information about SystemVision 5.10.2? Would you like to see a demonstration of any of the new features or capabilities? Post a comment to this blog entry, or contact your local Mentor Graphics representative, to ask questions or arrange a discussion.


10 December, 2013

Many of my recent interactions with customers have been on a single theme: modeling. Simulation is a great tool, but only if you have models that tell the simulator what to do. Models vary in type and complexity, but they all share a common purpose: to tell you something about how a device or system works, often in a specific application or under specific operating conditions. But there is a gap, an often very wide gap, between wanting to simulate a design and having a model that tells an accurate story for your system (notice that I did not say “the accurate story” since how you model your system will depend on what you want to know about it – accuracy is a byproduct of purpose).

When it comes to modeling, I find that engineers are generally divided into three groups: give me a model, help me understand the model, and help me develop a model. Depending on project priorities and how critical the need, the same engineer may spend time in each group on a single project.

The “give me a model” group is either under a time crunch to complete the analyses and finish the design, or is not familiar with other modeling options. The advantage for this group is that models are plentiful. They are available from a variety of sources including component manufacturers, third party vendors, in-house modeling groups, and tool suppliers. SPICE models are particularly popular in this category. The disadvantage, however, is twofold: model quality may be suspect, and model functionality may be limited with no way to improve it.

An engineer in the “help me understand the model” group typically has a model but needs to better understand how it works. Requirements for understanding the model range from needing to document the model’s operation, to improving performance by updating or adding functionality. Depending on structure and format, however, understanding a model someone else developed can be a real challenge. For example, have you ever tried to decipher the internal workings of a SPICE macromodel for a PWM controller chip? Not so easy.

For the “help me develop a model” group, either an existing model is not doing the job, or a model search returned zero results. Engineers in this group can either go without, or build a model from scratch. Going without usually means doing manual calculations, and pen, paper, and calculator become the engineer’s most trusted design companions. Funny thing, but not many years ago I visited a group of engineers at a major automotive OEM who claimed this is exactly how they handled much of their design work: reams of paper shuttled between design groups. Apparently modeling and simulation were not a priority. A bit hard to believe, given the complexity of modern automotive design. But I digress. Fortunately, if you are in the “build a model” category, you have options for model format and structure. In working with customers, I find the two most popular approaches are graphical modeling and language-based modeling. I will talk about each of these in future posts.
