How Sensitive is Your System?

Running a Sensitivity Analysis is one of my favorite mechatronic system simulations. I like SystemVision’s ability to analyze my design and tell me which parameters have the most effect on a performance metric of my choosing. Makes designing systems a bit easier if I know which parameters to tweak in order to influence a particular measurement.

Before joining the SystemVision team…in another life with another simulator…I used a standard sensitivity analysis with a traditional flow: change a parameter, run a simulation, measure the results, then compare the measurement with nominal results to determine the sensitivity. Straightforward and simple. But SystemVision lets me dig a little deeper into my system’s sensitivities with two additional sensitivity analyses, Tolerance-Based and Statistical, added to its standard Relative Sensitivity analysis.

SystemVision’s Relative Sensitivity analysis is the standard simulation outlined above. The results tell me the percent change I can expect in a measured metric for a 1% change in parameter value. So if the waveform analyzer returns a relative sensitivity measurement of 0.45 for a performance metric, the default interpretation is “a 0.45% change in the performance metric is expected for a 1% change in parameter value”. More generally, a parameter change of N% will lead to a change of sens_result*N% in the measured value, where “sens_result” is the raw number returned by the sensitivity measurement.
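The arithmetic behind that interpretation is a one-liner. Here is a minimal sketch in plain Python (not SystemVision’s API; the function name and numbers are my own illustration):

```python
# Minimal sketch of interpreting a relative sensitivity result.
# "sens_result" is the raw number reported by the sensitivity measurement;
# everything else here is illustrative, not part of SystemVision.

def predicted_metric_change(sens_result, param_change_pct):
    """Percent change expected in the measured metric for a given
    percent change in the parameter value."""
    return sens_result * param_change_pct

# A relative sensitivity of 0.45 with a 1% parameter change ...
print(predicted_metric_change(0.45, 1.0))   # 0.45% change in the metric
# ... and with a 3% parameter change.
print(predicted_metric_change(0.45, 3.0))   # 1.35% change in the metric
```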

The Tolerance-Based Sensitivity analysis adds another layer of detail. If tolerance information is added to a parameter, and the parameter is included in the sensitivity analysis, the tolerance information is considered in the sensitivity measurements. By combining the Relative Sensitivity and Tolerance-Based Sensitivity analysis results, I end up with some interesting insight into my system. Turns out that a parameter with a high relative sensitivity but a very tight tolerance may have very little effect on my system’s performance metric. Even though the relative sensitivity is high, there isn’t much room for the parameter to vary within its tight tolerance. On the other hand, if a parameter has a low relative sensitivity but its tolerance is very loose, the parameter is more likely to have a measurable effect on my system. Even though the relative sensitivity is low, the parameter has lots of room to vary within its loose tolerance.
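A quick numerical comparison makes that trade-off concrete. This is an illustration only; the sensitivities, tolerances, and helper function below are invented for the example and are not SystemVision results:

```python
# Illustrative comparison: pairing relative sensitivity with tolerance.
# All numbers are made up for the example.

def worst_case_metric_shift(sens_result, tolerance_pct):
    """Approximate percent shift in the metric if the parameter drifts
    to the edge of its tolerance band."""
    return abs(sens_result) * tolerance_pct

# High relative sensitivity, tight tolerance:
print(worst_case_metric_shift(0.45, 0.1))   # ~0.045% metric shift
# Low relative sensitivity, loose tolerance:
print(worst_case_metric_shift(0.05, 10.0))  # ~0.5% metric shift
```

Even with roughly a tenth of the sensitivity, the loose-tolerance parameter moves the metric about ten times as much in this made-up example.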

Finally, the Statistical Sensitivity analysis might be the most interesting of all. It tells me which parameters contribute most to a performance measurement. In other words, it shows me what percentage of the performance metric’s variability is due to a specific parameter change. When all of the percentages are added up, the total should be approximately 100%; the more runs I require for the statistical analysis, the closer the total gets to 100%. Results are interpreted a bit differently from either the Relative or Tolerance-Based sensitivity analyses. To reduce the variability of my design, I need to tighten tolerances on the parameters that have the most effect on my chosen performance metric. To reduce the cost of my design, I can loosen tolerances on the parameters that have little or no effect on my chosen performance metric; less precise components cost less money.
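To see where percentages like that can come from, here is a rough Monte Carlo sketch of the underlying idea: vary the parameters within their tolerances, then estimate each parameter’s share of the metric’s variability with squared correlations. The toy model, tolerances, and method are my own illustration, not how SystemVision computes its results:

```python
# Rough Monte Carlo sketch of a statistical sensitivity: estimate what
# share of the metric's variability each parameter accounts for.
# Toy model and tolerances are invented; this is not SystemVision's method.
import random

def metric(r, c):
    # Toy performance metric of two "component" parameters.
    return 1.0 / (r * c)

N = 5000
r_nom, c_nom = 1000.0, 1e-6      # nominal values
r_tol, c_tol = 0.10, 0.01        # 10% and 1% tolerances

rs, cs, ms = [], [], []
for _ in range(N):
    r = r_nom * (1 + random.uniform(-r_tol, r_tol))
    c = c_nom * (1 + random.uniform(-c_tol, c_tol))
    rs.append(r)
    cs.append(c)
    ms.append(metric(r, c))

def mean(xs):
    return sum(xs) / len(xs)

def corr(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

# Squared correlation as a crude "percent of variability" per parameter;
# with more runs the shares settle and their sum approaches 100%.
share_r = corr(rs, ms) ** 2 * 100
share_c = corr(cs, ms) ** 2 * 100
print(f"R: ~{share_r:.1f}%  C: ~{share_c:.1f}%  total: ~{share_r + share_c:.1f}%")
```

In this toy case the loose-tolerance parameter dominates the total, which is exactly the kind of result that tells me where tightening (or loosening) a tolerance will pay off.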

So what does all of this mean? First, and perhaps most important, if I only use the standard sensitivity analysis, I will miss important insights into my system: insights that not only improve my system’s performance, but may also improve reliability and reduce overall manufacturing and maintenance costs. Second, no performance metric should be considered in isolation. Optimizing my system for a specific performance metric may have a negative effect on other metrics. I need to make sure I understand the performance metric priorities for my system. Finally, and probably most obvious, a sensitivity analysis should not be used in isolation. It’s just one option in SystemVision’s toolbox for making sure my system works as it should.
