Simulation Experiments (Part 1)
One of the key benefits of simulation is the ability to analyze multiple aspects of system performance without actually building and testing the system. With the right models and simulation tools, you can investigate system performance metrics in far less time than prototype and bench testing require. Simulation is essentially a way to ask system questions and get answers: system models ask the questions; simulators calculate the answers; data analysis tools help interpret the results.
System design questions come in a wide range of types and complexities. You may only want to know the basics of system operation – what happens over time, with nominal system parameters, when you apply a simple stimulus. Or you may want to know how changes in system parameters affect system performance. You may even want to know what happens when factors outside your system (think environmental conditions) change beyond “room temperature” values. Whatever system performance metrics you want to analyze, the process for asking questions and getting answers can be summed up in one word: experimenting. Each time you run a simulation on your system, you are, in effect, running a system experiment.
Whether we realize it or not, running experiments is a part of our lives. From a very early age, we started asking ourselves experiment-provoking questions like “I wonder what will happen if I…?”, and then trying to find out. Most of the time, our approach is unorganized, even spur of the moment. If we stopped to think through every option before trying something new, chances are we wouldn’t try many new things. Though our approach to life’s experiments may be a bit spontaneous, we nonetheless learn from each experience.
Simulation experiments are no less instructive, but they are typically best run using a structured, organized methodology. Once your system model is complete, getting the most out of simulation experiments means managing which design factors to consider, which analyses to run, and which performance metrics to measure. With these and other options, it’s easy to see how the experiment matrix can get complicated quickly, so a way to manage simulation experiments is important.
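To make the idea of an experiment matrix concrete, here is a minimal sketch in Python of how design factors expand into individual simulation runs. The factor names and values are purely illustrative assumptions, not taken from SystemVision or any particular model:

```python
from itertools import product

# Hypothetical design factors for a motor-drive model (illustrative values):
factors = {
    "supply_voltage": [10.0, 12.0, 14.0],    # volts
    "load_inertia":   [1e-4, 2e-4],          # kg*m^2
    "temperature":    [-40.0, 25.0, 85.0],   # degrees C, environmental factor
}

def experiment_matrix(factors):
    """Expand the per-factor value lists into the full cross-product of runs,
    one parameter dictionary per simulation run."""
    names = list(factors)
    return [dict(zip(names, values)) for values in product(*factors.values())]

runs = experiment_matrix(factors)
print(len(runs))  # 3 * 2 * 3 = 18 individual simulation runs
```

Even with only three factors and a handful of values each, the matrix already holds 18 runs; add a few more factors or analyses and the count grows multiplicatively, which is exactly why experiment management matters.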
Like most mechatronic systems simulators, SystemVision lets you quickly set up and run individual simulation experiments (e.g. a transient analysis to measure rise time, or a frequency analysis to measure the low-pass 3 dB frequency). But SystemVision also features Experiment Manager, a unique and flexible tool for setting up and running simulation experiments. In my next blog post I’ll talk more about the SystemVision Experiment Manager, including how it simplifies the creation, execution, and management of simulation experiments.
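As a sketch of the kind of metric extraction mentioned above, here is one common way to compute a 10%–90% rise time from sampled transient data. This is a generic post-processing routine, not SystemVision's implementation; the first-order test waveform is an assumption chosen so the result can be checked against the analytic value tau·ln(9):

```python
import math

def rise_time(times, values, low=0.1, high=0.9):
    """10%-90% rise time of a rising step response, found by linear
    interpolation between the samples that bracket each threshold."""
    final = values[-1]

    def crossing(level):
        for i in range(1, len(values)):
            if values[i - 1] < level <= values[i]:
                frac = (level - values[i - 1]) / (values[i] - values[i - 1])
                return times[i - 1] + frac * (times[i] - times[i - 1])
        raise ValueError("waveform never crosses the requested level")

    return crossing(high * final) - crossing(low * final)

# First-order step response 1 - exp(-t/tau) with tau = 1 ms; its analytic
# 10%-90% rise time is tau * ln(9), about 2.197 ms.
tau = 1e-3
t = [i * 1e-5 for i in range(1000)]
v = [1.0 - math.exp(-ti / tau) for ti in t]
print(rise_time(t, v))  # close to 2.197e-3
```

A frequency-domain metric such as the 3 dB corner would be extracted the same way, by interpolating the magnitude response for its crossing of the -3 dB level.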