Posts Tagged ‘Simulation’

22 August, 2015

Impact of Design Size on First Silicon Success

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I presented findings on verification results in terms of schedules, number of required spins, and classification of functional bugs. In this blog, I conclude the series on the 2014 Wilson Research Group Functional Verification Study by providing a deeper analysis of respins by design size.

It’s generally assumed that the larger the design, the greater the likelihood of bugs. Yet a question worth answering is how effective projects are at finding these bugs prior to tapeout.

In Figure 1, we first extract the 2014 data from the required number of spins trends presented in my previous blog (click here), and then partition this data into sets based on design size (that is, designs less than 5 million gates, designs between 5 and 80 million gates, and designs greater than 80 million gates). This led to perhaps one of the most startling findings from our 2014 study: the data suggest that the smaller the design, the lower the likelihood of achieving first silicon success! While 34 percent of the designs over 80 million gates achieve first silicon success, only 27 percent of the designs less than 5 million gates are able to do so. The difference is statistically significant.


Figure 1. Number of spins by design size

To understand what factors might be contributing to this phenomenon, we decided to apply the same partitioning technique while examining verification technology adoption trends.

Figure 2 shows the adoption trends for various verification techniques from 2007 through 2014, which include code coverage, assertions, functional coverage, and constrained-random simulation.

One observation we can make from these adoption trends is that the electronic design industry is maturing its verification processes. This maturity is likely due to the need to address the challenge of verifying designs with growing complexity.


Figure 2. Verification Technology Adoption Trends

In Figure 3, we extract the 2014 data from the various verification technology adoption trends presented in Figure 2, and then partition this data into sets based on design size (that is, designs less than 5 million gates, designs between 5 and 80 million gates, and designs greater than 80 million gates).


Figure 3. Verification Technology Adoption by Design Size

Across the board we see that designs less than 5 million gates are less likely to adopt code coverage, assertions, functional coverage, and constrained-random simulation. Hence, if you correlate this data with the number of spins by design size (as shown in Figure 1), then the data suggest that the verification maturity of an organization has a significant influence on its ability to achieve first silicon success.

As a side note, you might have noticed that there is less adoption of constrained-random simulation for designs greater than 80 million gates. A few factors contribute to this behavior: (1) constrained-random simulation works well at the IP and subsystem level, but does not scale to the full-chip level for large designs; (2) a number of projects working on large designs focus predominantly on integrating existing or purchased IP. Hence, these projects spend more of their verification effort on integration and system validation tasks, where constrained-random simulation is rarely applied.

So, to conclude this blog series: in general, the industry is maturing its verification processes, as witnessed by the verification technology adoption trends. However, we found that smaller designs were less likely to adopt what are generally viewed as industry best verification practices and techniques. Similarly, we found that projects working on smaller designs tend to have a smaller ratio of peak verification engineers to peak designers. Could fewer available verification resources, combined with the lack of adoption of more advanced verification techniques, account for fewer small designs achieving first silicon success? The data suggest that this might be one contributing factor. It’s certainly something worth considering.

Quick links to the 2014 Wilson Research Group Study results


10 August, 2015

ASIC/IC Power Trends

This blog is a continuation of a series of blogs related to the 2014 Wilson Research Group Functional Verification Study (click here).  In my previous blog (click here), I presented our study findings on various verification language and library adoption trends. In this blog, I focus on power trends.

Today, we see that about 73 percent of design projects actively manage power with a wide variety of techniques, ranging from simple clock gating to complex hypervisor/OS-controlled power-management schemes. What is interesting from our 2014 study is that the data indicate a 19 percent increase over the last two years in designs that actively manage power (see Figure 1).


Figure 1. ASIC/IC projects working on designs that actively manage power

Figure 2 shows the various aspects of power management that design projects must verify (for those 73 percent of design projects that actively manage power). The data from our study suggest that many projects are moving to more complex power-management schemes that involve software control. This adds a new layer of complexity to a project’s verification challenge, since these more complex power-management schemes often require emulation to fully verify.


Figure 2. Aspects of power-managed design that are verified

Since the power intent cannot be directly described in an RTL model, alternative supporting notations have recently emerged to capture it. In the 2014 study, we wanted to get a sense of where the industry stands in adopting these notations. For projects that actively manage power, Figure 3 shows which standards used to describe power intent have been adopted. Some projects are actively using multiple standards (such as different versions of UPF or a combination of CPF and UPF), which is why the adoption results do not sum to 100 percent.


Figure 3. Notation used to describe power intent
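To give a flavor of what such a notation looks like, here is a minimal UPF sketch (the domain, net, and signal names are hypothetical, not from any particular project) that creates one switchable power domain whose outputs are clamped low while it is powered down:

# Minimal UPF sketch (hypothetical names): one switchable power domain
# with isolation on its outputs, controlled by iso_en
create_power_domain PD_CORE -elements {u_core}
set_isolation iso_core -domain PD_CORE \
    -isolation_power_net VDD_AON -clamp_value 0 -applies_to outputs
set_isolation_control iso_core -domain PD_CORE \
    -isolation_signal iso_en -isolation_sense high -location parent

Verifying even this small fragment means checking that iso_en asserts before power is removed and that the clamped values propagate correctly, which hints at why software-controlled schemes quickly push teams toward emulation.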

In an earlier blog in this series, I provided data that suggest a significant amount of effort is being applied to ASIC/IC functional verification. An important question the various studies have tried to answer is whether this increasing effort is paying off. In my next blog (click here), I present verification results findings in terms of schedules, number of required spins, and classification of functional bugs.

Quick links to the 2014 Wilson Research Group Study results


29 July, 2015

If you were not one of the hundreds of visitors to the Verification Academy booth at DAC 2015 and missed the opportunity to get a printed copy of the DAC 2015 issue of Verification Horizons, don’t worry: you can download it as well.

Questa Vanguard Partners Highlighted

Eight of the eleven articles were authored or co-authored by our partners and represent a wide range of topics.  There are two articles on DO-254 (Partners: eInfochips and Verisense).  There is an article on formal and ABV of MBIST MCPs (Partner: FishTail Design Automation).  There is an article on how to start formal analysis “right” (Partner: OSKI).  For UVM users, reuse of MATLAB® functions and Simulink® functions is covered (Partner: MathWorks).  Continuing with another article for UVM users, intelligent testbench automation with UVM and Questa® is explored (Partner: Codasip Ltd.).  For the Agile community, unit testing your way to a reliable testbench is explored (Partner: XtremeEDA, with user company NVIDIA).  Lastly, a noted emulation consultant (Lauro Rizzatti) shares part 2 of his three decades of emulation evolution, and a customer paper (Marvell) covers techniques to accelerate RTL simulation.

All of this is inside the 60-page mega-issue of Verification Horizons, spanning 11 articles.  Direct links to each of the articles are shared below along with the article titles and authors.  The editor introduction by Tom Fitzpatrick gives even more detail and background on this issue.  If you don’t already have some summer or vacation reading, get your electronic copy today!


21 May, 2015

Still having fun doing UVM and Class based debug?

Maybe a debug contest will help. I had a contest with a user not too long ago.

We’ll call him Bob. Bob debugged his UVM testbench using that favorite technique: “logfile” debug. He spent a lot of time inserting, moving and removing $display statements, all while re-running simulation over and over. He’d generate a logfile, analyze it (read: run or tune up a Perl script) and cross-check with waveforms and source code. He’d get close, only to realize he needed more $display. More simulation.

Hopefully, just one more $display in one more file… More simulation… Repeat…

He wanted to know if there was a better way to climb around through his UVM Testbench.

<UVM Agent Schematic – One way to climb around>

A Better Way to debug Bob’s Testbench – The Contest!

Bob challenged me to a contest. He wanted to know how fast we could find the bug using Mentor’s Visualizer™ Debug Environment and a post simulation database. One simulation. One shot.

We ran Bob’s simulation, capturing the waveform database. Generating that database required two extra switches:

vopt -debug +designfile …
vsim -qwavedb=+signal+class …

Then we ran the debugger in post-simulation mode:

visualizer +designfile +wavefile

When Bob first saw his RTL signals AND his UVM Class based testbench in the waveform window together, he got a big smile – literally getting up out of his seat.

Our contest was this: who would be fastest to find the bug, logfile debug or class-based debug? The contest was run entirely in hindsight; Bob had already figured out what the bug was and had fixed it before we ever met. In the end, the bug was a simple coding mistake in the way that a SystemVerilog queue was being used. But I’m getting ahead of myself.

Digging through the testbench

We set up a remote link so that we both could see the post-simulation debug session. Bob provided clues about the design, and I drove the debugger.

We chased class handles around his design, from driver, across to monitor and into the scoreboard where the problem existed. There was a failure where a transaction contained N sub-transactions. The last two sub-transactions had errors, but only for a certain kind of transaction. And the error happened very late in an otherwise fully functional simulation.

<A UVM Driver transaction with derived classes and the parent sequence>

We started at the driver that was driving that transaction. We looked at the sequence_item that the driver received, but from the driver source code alone we had no idea what the ACTUAL type of the sequence_item was. Some derived-type sequence_item was coming through. We also had no idea which sequence was generating this transaction; there were many sequences running on this driver.

<UVM Class Inheritance Diagram>

We used the waveform and the class inheritance diagram to figure out which class we needed to look at. Really easy. Just put the driver in the waveform and expand the transaction ‘t’ to see the derived type and the parent sequence.

<The transaction ‘t’ contains a class of type “sequence_item_A_fa”>

<The parent sequence is of type “sequence_A”>

Tic – Toc

In about 60 minutes we were at the point of the bug. Bob had spent about 2 weeks getting to this point using his logfiles. Winner!

In our 60 minutes, we saw that a derived class was coming into the driver. We traced the inheritance of that class to find the base class which implemented the SystemVerilog queue processing. That was where the error lived; after inspecting the loop control, we found and fixed it.
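The post never shows Bob’s actual code, but a classic way to get SystemVerilog queue loop control wrong, one that tends to corrupt the last elements just as described above, looks like this hypothetical sketch:

class my_txn;
  bit matched;
endclass

class scoreboard_q;
  my_txn q[$];

  // BUG: q.delete(i) shifts the next element down into slot i, but the
  // i++ then jumps past it, so elements get skipped; with matches near
  // the tail, the last entries of the queue are mishandled.
  function void flush_matched_buggy();
    for (int i = 0; i < q.size(); i++)
      if (q[i].matched) q.delete(i);
  endfunction

  // Fix: advance the index only when nothing was deleted.
  function void flush_matched_fixed();
    int i = 0;
    while (i < q.size())
      if (q[i].matched) q.delete(i);
      else i++;
  endfunction
endclass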

Coffee break time.

Still having fun.

Testbenches are complex pieces of software. Logfiles are very important debug tools, but with debug tools like Visualizer, post simulation testbench debug can be more than just examining predetermined print statements. You can actually explore the UVM data structure and class based testbench. And you won’t need weeks to do it.

Come to the Verification Academy Booth at DAC in San Francisco June 8, 9 and 10 to hear more about UVM Debug and talk in person about your UVM Debug problems. See you then!


16 May, 2015

In a recent post on DeepChip, John Cooley wrote about “Who Knew VIP?”. In addition, Mark Olen wrote about this same topic on the Verification Horizons blog. So are VIPs becoming more and more popular? Yes!

Here are the big reasons why I believe we are seeing this trend:

  • Ease of Integration
  • Ready-made Configurations and Sequences
  • Debug Capabilities

I will be digging deeper into each of these reasons and topics in a three-part series on the Verification Horizons blog. Today, in part 1, our focus is Ease of Integration, or EZ-VIP.

While developing VIP, there are various trade-offs one has to consider: ease of use vs. amount of configuration, protocol-specific features vs. commonality across protocols. A couple of years ago, when I first got my hands on Mentor’s VIP, there were some features that I really liked, some that I wasn’t familiar with, and some that I needed to learn. Over the last couple of years, there have been strides of improvement in ease of use, with EZ-VIP a big one in that list.

EZ-VIP aims to make it easier to:

  • Make connections between QVIP interfaces and DUT signals
  • Integrate and configure a QVIP in a UVM testbench


This makes it easier for customers to become productive in hours rather than days.

Connectivity Modules:

Earlier, we gave users flexibility in setting the direction of the VIP ports based on the use mode. Beyond setting the port directions, the user also needed to write certain glue logic.


Now we have created new connectivity modules that enable easy connectivity during integration. These connectivity modules are wrappers around the VIP interface containing the needed glue logic, with ports that have protocol-standard names and the right direction for the mode (e.g. Master, Slave, EP, RC). This enables the user to quickly integrate the QVIP with the DUT.
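As an illustration, a connectivity module might look roughly like the following. This is a hypothetical sketch, not the actual QVIP code; the interface, module, and signal names are all invented:

// Wrapper with protocol-named ports whose directions are fixed for
// Master mode; the VIP interface and glue logic are hidden inside.
interface my_proto_if (input logic clk, resetn);
  logic        valid;
  logic [31:0] addr;
  logic        ready;
endinterface

module my_proto_master_connect (
  input  logic        clk,
  input  logic        resetn,
  output logic        valid,   // master drives valid/addr toward the DUT
  output logic [31:0] addr,
  input  logic        ready    // DUT slave drives ready back
);
  my_proto_if vif (clk, resetn);   // the VIP interface instance lives here
  assign valid     = vif.valid;    // glue logic: route interface to ports
  assign addr      = vif.addr;
  assign vif.ready = ready;
endmodule

The integrator then connects DUT signals to ports with familiar protocol names instead of reasoning about interface directions and glue logic.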


Quick Starter Kits:

These kits are specific to PCIe: customized, pre-packaged, easy-to-use verification environments for the serial and parallel interfaces of PCIe 1.0, 2.0, 3.0, 4.0 and mPCIe, tailored to all major IP vendors, which can be used to verify PHY, Root Complex and Endpoint designs. Users also get examples which can be used as a reference. These kits have dramatically reduced bring-up time for PCIe QVIP with these IPs to less than a day.

In part 2 of this series, I will talk about QVIP Configuration and Sequences. Stay tuned, and I look forward to hearing your feedback on this series of posts on VIP.


26 February, 2015

A colleague recently asked me: Has anything changed? Do design teams tape-out nowadays without GLS (Gate-Level Simulation)? And if so, does their silicon actually work?

In his day (and mine), teams prepared in three phases: first, hierarchical gate-level netlist simulation to weed out X-propagation issues; then full chip-level gate simulation (unit delay) to come out of reset, exercise all I/Os, and weed out any remaining X’s; and finally a run with SDF back-annotation on the clocktree-inserted final netlist.
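For that final phase, a Questa-style flow might look something like this (a hedged sketch; the netlist, SDF, and testbench names are made up):

# compile the clocktree-inserted final netlist plus testbench...
vlog final_netlist.v tb_chip.sv
# ...then back-annotate timing from the extracted SDF and run
vsim -sdfmax /tb_chip/dut=final.sdf tb_chip -do "run -all; quit -f"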


After much discussion about the actual value of GLS and the desirability of eliminating such a painful step from our design flows, my firm conclusion is:

Yes! Gate-level simulation is still required, at subsystem level and full chip level.

Its usage has been minimized over the years: firstly, by adding LEC (Logical Equivalence Checking) and STA (Static Timing Analysis) to the RTL-to-GDSII design flow in the 90s; and secondly, in the last decade, by employing static analysis of common failure modes that were traditionally caught during GLS (X-prop, clock-domain-crossing errors, power-management errors, ATPG and BIST functionality), using tools like Questa® AutoCheck.

So there should not be any setup/hold or CDC issues remaining by this stage.  However, there are a number of reasons why I would always retain GLS:

  1. Financial prudence.  You tape out to foundry at your own risk, and GLS is the closest representation you can get to the printed design that you can do final due diligence on before you write that check.  Are you willing to risk millions by not doing GLS?
  2. It is the last resort for finding any packaging issues that may be masked by the use of inaccurate behavioral models higher up the flow, or by erroneous STA due to bad false-path or multi-cycle-path definitions.  Also, simple packaging errors due to inverted enable signals can remain undetected by bad models.
  3. Ensure that the actual bring-up sequence of your first silicon will work when it hits the production tester after fabrication.  Teams have found bugs that would have caused the sequence of first power-up, scan test, and then blowing some configuration and security fuses on the tester to completely brick the device, had they not run a final, accurate bring-up test with all Design-For-Verification modes turned off.
  4. In block-level verification, maybe you are doing a datapath compilation flow for your DSP core which flips pipeline stages around, so normal LEC tools are challenged.  How can you be sure?
  5. The final stages of processing can cause unexpected transformations of your design that may or may not be caught by LEC and STA, e.g. during scan-chain insertion, clocktree insertion, or power-island retention/translation cell insertion.  You should not have any new setup/hold problems if extraction and STA do their job, but what if there are gross errors affecting clock enables, or tool errors, or data-processing errors?  First silicon with stuck clocks is no fun.  Again, why take the risk?  Just one simulation, of the bare metal design, coming up from power-on, wiggling all pads at least once, exercising all test modes at least once, is all that is required (see the assertion sketch after this list).
  6. When you have design deltas done at the physical netlist level: e.g. last minute ECOs (Engineering Change Orders), metal layer fixes, spare gate hookup, you can’t go back to an RTL representation to validate those.  Gates are all you have.
  7. You may need to simulate the production test vectors and burn-in test vectors for your first silicon, across process corners.  Your foundry may insist on this.
  8. Finally, you need to sleep at night while your chip is in the fab!
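As promised in point 5, here is a hypothetical sketch of the kind of check such a bare-metal bring-up simulation can carry: an assertion that flags any X escaping to a top-level pad once reset deasserts (module and signal names are invented):

module pad_xcheck #(parameter int W = 32) (
  input logic         clk,
  input logic         rst_n,
  input logic [W-1:0] pad_out
);
  // After reset is released, no pad should ever carry an X or Z.
  a_no_x_on_pads: assert property (
    @(posedge clk) disable iff (!rst_n) !$isunknown(pad_out)
  ) else $error("X/Z detected on pads at %0t", $time);
endmodule

// In the GLS testbench, bind the checker onto the chip top, e.g.:
// bind chip_top pad_xcheck #(.W(32)) u_xchk (.clk(clk), .rst_n(rst_n), .pad_out(pads));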

There are still misconceptions:

  • There is no need to repeat lots of RTL regression tests at gate level.  Don’t do that.  It takes an age to run those tests, so identify the tiny percentage of your regression suite that needs to be rerun on GLS, to make it count.
  • Don’t wait until tapeout week before doing GLS: prepare for it very early in your flow by doing the three preparation steps mentioned above as soon as practical, so that all X-pessimism issues are sorted out well before crunch time.
  • The biggest misconception of all: “designs today are too big to simulate.”  Avoid that kind of scaremongering.  Buy a faster computer with more memory.  Spend the right amount of money to offset the risk you are about to undertake when you print a 20nm mask set.

Yes, it is possible to tape out silicon that works without GLS.  But no, you should not consider taking that risk.  And no, there is no justification for viewing GLS as “old school” and just hoping it will go away.

Now, the above is just one opinion, and it reflects recent design/verification work I have done with major semiconductor companies.  I anticipate that large designs will be harder and harder to simulate and that we may need to find solutions for gate-level signoff using an emulator.  I also found some interesting recent papers, resources, and opinions; I don’t necessarily agree with all the content, but it makes for interesting reading:

I’d be interested to know what your company does differently nowadays.  Do you sleep at night?

If you are attending DVCon next week, check out some of Mentor’s many presentations and events as described by Harry Foster, and please come and find me in the Mentor Graphics booth (801); I would be happy to hear about your challenges in Design/Verification/UVM and especially Debug.
Thanks for reading,


14 January, 2015

“Who Knew?” about verification IP (VIP), was the theme of a recent DeepChip post by John Cooley on December 18.  More specifically the article states, “Who knew VIP was big and that Wally had a good piece of it?”  We knew.

We knew that ASIC and FPGA design engineers can choose to buy design IP from several alternative sources or build their own, but that does not help with the problem of verification.  We knew that you don’t really want to rely on the same source that designed your IP to test it.  We knew that you don’t want to write and maintain bus functional models (BFMs) or more complete VIP for standard protocols.  Not that you couldn’t, but why would you if you don’t have to?

We also knew that verification teams want easy-to-use VIP that is built on a standard foundation of SystemVerilog, compliant with a protocol’s specification, and is easily configurable to your implementation.  That way it integrates into your verification environment just as easily as if you had built it yourself.

Leading design IP providers such as ARM®, PLDA, and Northwest Logic knew that Mentor Graphics’ VIP is built on standards, is protocol compliant, and is easy to use.  In fact, you can read more about what Jim Wallace, systems and software group director at ARM; Stephane Hauradou, CTO of PLDA; and Brian Daellenbach, president of Northwest Logic, have to say about Mentor Graphics’ recently introduced EZ-VIP technology for PCIe 4.0 (at this website), and why they know that their customers can rely on it as well.

Verification engineers knew, too.  You can read comments from many of them (at Cooley’s website) about their opinions on VIP.  In addition, Mercury Systems knew.  “Mentor Graphics PCIe VIP is fully compliant with the PCIe protocol specification and with UVM coding guidelines. We found that we could drop it into our existing environment and get it up and running very quickly”, said Nick Solimini, Consulting DV Engineer at Mercury Systems. “Mentor’s support for their VIP is excellent. All our technical questions were answered promptly so we were able to be productive throughout the project”.

So, now you know: Mentor Graphics’ Questa VIP is built on standard SystemVerilog UVM, is specification compliant, is easy to get up and running, and is an integral part of many successful verification environments today.  If you’d like to learn more about Questa VIP and Mentor Graphics’ EZ-VIP technology, send me an email, and I’ll let you in on what (thanks to Cooley and our customers) is no longer the best kept secret in verification.  Who knew?


7 May, 2014

My Feb. 4 post introduced Mentor Graphics’ three-step FPGA verification process intended to help design teams get out of the reprogrammable lab more effectively. Since then, I’ve engaged FPGA vendors, design managers and engineers to explain the process, paying special attention to the merits and technical detail for injecting automation into any FPGA verification environment, the hallmark of Mentor’s process. The feedback from these conversations helped me to develop a series of technical webinars, now available for free and on-demand. Check them out and let us know what you think in the comments below. My hope is the webinars might serve as a starting point for your own conversations on verification of FPGAs, demand for which seems to continue to grow as process nodes shrink.

Injecting Automation into Verification – FPGA Market Trends

Injecting Automation into Verification – Code Coverage

Injecting Automation into Verification – Assertions

Injecting Automation into Verification – Improved Throughput


25 February, 2014

As DVCon expands, we at Mentor Graphics have grown our sponsored sessions as well.  Would you expect less?

In DVCon’s recent past, it was a tradition for the North American SystemC User Group (NASCUG) to sponsor a day of activity before the official start of the conference.  When OSCI merged with Accellera, the day before the official conference start grew to become Accellera Day, with a broader set of meetings and activities covering many of Accellera’s standards.  This has all grown into a more official part of the DVCon program.  On Monday at DVCon (or, as many still call it, Accellera Day) the tradeshow now joins in opening.  I covered this in detail in an earlier blog, so I won’t repeat myself now.

The pre-conference education and meet-up to discuss the latest in standards development is joined by an end-of-conference tutorial series that has expanded from three to four parallel sessions.  Instead of the one tutorial we at Mentor Graphics would otherwise sponsor at DVCon, we will offer two in this expanded series.  Given the impact verification has on design, it seems only right that more time be devoted to topics that address it; one half-day tutorial is just too short to give the subject its due respect.

The two Mentor Graphics sponsored tutorials at DVCon, to be run in series, will devote a day to exploring the application of current verification technology by us and users like you.  If you are already attending DVCon, you are making your tutorial selections now.  And for those who might only be interested in attending the tutorials themselves, DVCon offers a tutorials-only package ($145/tutorial).  Mentor’s two tutorials are:

The first tutorial references “smooth sailing,” not because this will be a “no-pirate zone,” although I can tell you that since International Talk Like a Pirate Day is in late September, one won’t have to worry about a morning of pirate talk! [Interesting Fun Fact: Mentor Graphics’ headquarters in Wilsonville, OR USA is a short 50 miles (~80 km) north of the creators of this parodic holiday.]  The smooth sailing comes from the ability to easily use multiple engines, from simulation, formal, emulation, and FPGA prototyping, to address your block-to-system-level verification needs.

The second tutorial is all about formal.  Or, in a more colloquial way to say it, we will answer the question: Whatsup with formal?  No, I doubt we will find more slang terms for formal technology being used and created in the tutorial.  But the tutorial will certainly look at more focused applications of formal technology.  As a pioneer in focused formal applications (like clock-domain crossing), we have seen how these apps greatly simplify use and expand technology access for verification teams, with RTL design checks, X-state verification, and more joining the list.  Maybe we should ask Whatsapp with formal!  But wait!  That slang question is already taken, and Facebook affirmed ownership with a $19B purchase of it recently.  Oh well, I lament.  Join me at this tutorial and we can explore something suitable and not yet taken as a replacement.  I can’t think of a better way to close DVCon than to see if we can invent another $19B term (or app).


4 February, 2014

Marketing teams at FPGA vendors have been busy as the silicon nanometer geometry race escalates. Altera is “delivering the unimaginable” while Xilinx is offering “all programmable SoCs” to design centers. It’s clear that the SoC has become more accessible to a broader market today and that FPGA vendors have staked out a solid technology roadmap for the near future. Do marketing messages surrounding the geometry race affect the day-to-day life of engineers, and if so, how, especially when it comes to verification?
An excellent whitepaper from Altera, “The Breakthrough Advantage for FPGAs with Tri-Gate Technology,” covers Altera’s Stratix 10 FPGAs and SoCs. The paper describes verification challenges in this new expanded market this way: “Although current generation FPGAs require a rigorous simulation verification methodology rivaling ASICs, the additional lab testing and ability to reprogram FPGAs save substantial manpower investment. The overall cost of ownership must be considered when comparing an FPGA whose component price is higher than an ASIC of similar complexity.” I believe you can use this statement to engage your management in a discussion about better verification processes.

Xilinx also has excellent published technical resources. Its recent UltraScale backgrounder describes how they are solving the challenges in implementing a design with their reprogrammable silicon. Clearly Xilinx has made an impressive investment to make it easier to implement a design with its FPGA UltraScale products. Improvements include ASIC-like clocking and annealing dataflow bottlenecks without compromising performance. Xilinx also describes improvements when using its Vivado design suite, particularly when it comes to in-lab design bring up.

For other FPGA insights, it’s also worth checking out Electronics Engineering Journal’s recent article “Proliferating Programmability in 2014,” which argues that tool flows are central to the long-term future of FPGAs even though, as Kevin Morris sees it, EDA seems to have abandoned the market. (Kevin, I’m here to tell you you’re wrong.)

Do you think it’s inevitable that your FPGA team will first struggle to make it across the verification finish line before adopting a more process-oriented verification flow like the ASIC market demands? It’s not. I base this conclusion on the many conversations I’ve had with FPGA designers, their managers, sales engineers and many other talented people in this market over the years. Yes, there are significant challenges in FPGA design, but not all of them are technology related. With some emotion, one engineer remarked that debugging the same type of issue over and over in the hardware lab and expecting a different outcome was insane. (He’s right.) Others say they need specific ROI information for their management to even accept their need for change. Still others state that had they only known the solutions I talked about in my seminar a year ago, they would have not spent months and months bringing up their design in the lab.
With my peers here at Mentor Graphics, I have developed a three-step verification flow that includes coverage, assertions and improved throughput. I’ll write about this flow and related issues in the weeks ahead here on this blog. The flow is built on fundamental verification technologies that benefit the broad FPGA market. The goal, in developing the technology and writing about it here, has been to provide practical solutions and help more FPGA teams cross the verification gap.
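As one concrete taste of the first of those three steps, enabling code coverage in a Questa-based flow takes only a couple of switches; this is a hedged sketch with made-up file names:

# compile with statement (s), branch (b) and condition (c) coverage
vlog +cover=sbc fpga_top.v tb_fpga.sv
# run with coverage collection and print a summary at the end
vsim -coverage tb_fpga -do "run -all; coverage report -summary; quit -f"

The resulting coverage data gives FPGA teams the same objective “are we done?” metric that ASIC teams rely on, instead of debugging the same issue over and over in the lab.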

In the meantime, what are your stories? Are you able to influence your management into adopting advanced technology to aid lab bring-up? Is your management’s bias towards lower cost and faster implementation (at the expense of verification)? Let me know in the comments or, if you prefer, by e-mail:

