
Getting More Value from your Stimulus Constraints

Verification engineers put lots of effort into writing and tuning constraints for random stimulus. It’s critical that the constraints correctly express the valid relationships between the stimulus variables. Otherwise, invalid stimulus will be generated or, worse, important valid combinations of stimulus will not be generated.
When it comes to bug hunting, running open-loop random stimulus is recognized as a good way to exercise cases that the verification engineer wouldn’t intuitively think of. However, the very constraints that verification engineers work so hard to perfect get in the way of this goal by introducing random-resistant cases – value combinations that have an extremely low probability of occurring.

Consider the SystemVerilog class shown in Figure 1 below to see just how dramatic an effect a few constraints can have on the cases a constraint solver produces. One simple constraint skews the entire random distribution!

Figure 1: Constraints skew random distribution
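
The figure itself isn’t reproduced here, so the class below is only a minimal sketch, assuming a hypothetical item with two 4-bit random fields, of the kind of effect the figure illustrates: a single implication constraint pulls one field toward zero far more often than an unconstrained distribution would.

// Hypothetical sketch (not the exact class from Figure 1): one simple
// implication constraint dramatically skews the solver's distribution.
class skewed_item;
  rand bit [3:0] a;
  rand bit [3:0] b;

  // Unconstrained, a == 0 would appear in roughly 1 of every 16 items.
  // With this constraint, every solution with b != 0 forces a == 0, so
  // a == 0 appears in about half of all legal solutions (16 of 31).
  constraint c_skew {
    (b != 0) -> (a == 0);
  }
endclass

module tb;
  initial begin
    skewed_item item = new();
    int a_zero_count = 0;

    // Randomize many items and count how often the solver picks a == 0.
    repeat (1000) begin
      if (!item.randomize()) $error("randomize() failed");
      if (item.a == 0) a_zero_count++;
    end
    // For a solver that distributes solutions uniformly, expect roughly
    // 500 hits rather than the ~62 an unconstrained 4-bit field would give.
    $display("a == 0 in %0d of 1000 items", a_zero_count);
  end
endmodule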

This type of skewed distribution is easy to see and adjust for when the variable combinations are monitored by functional coverage. However, let’s face it, the whole premise of using random stimulus to find bugs is that random generation will produce cases that we didn’t think of (and, thus, didn’t create functional coverage for).
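
As a sketch of that kind of monitoring, assuming the hypothetical skewed_item class above, a simple cross coverage of the two fields would make the skew visible once enough items are sampled: the cross bins where both fields are nonzero would stay empty.

// Sketch only: coverage that would expose the skew in the hypothetical
// skewed_item class sketched above.
class item_coverage;
  covergroup item_cg with function sample(bit [3:0] a, bit [3:0] b);
    cp_a    : coverpoint a;
    cp_b    : coverpoint b;
    ab_cross: cross cp_a, cp_b;  // bins with a != 0 and b != 0 are never hit
  endgroup

  function new();
    item_cg = new();
  endfunction

  // Call once per generated item, e.g. cov.observe(item.a, item.b);
  function void observe(bit [3:0] a, bit [3:0] b);
    item_cg.sample(a, b);
  endfunction
endclass
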
What if the very constraints that engineers spend so much time creating and refining could actually help ensure that corner cases are hit? If you’re attending DAC this year, come see a poster paper titled “Strategy-Driven Stimulus Generation: Constraint-Guided Test Selection” that proposes an approach for leveraging the constraint description to identify high-value stimulus and get more value from bug-hunting simulation runs:

Session Title: Designer/IP Track Poster Session – Wednesday
Session Number: 302
Presentation Title: Strategy-Driven Generation: Constraint-Guided Test Selection
Date: Wednesday, 6/4/2014 12:00-1:30PM
Room: 100

How do you ensure that your random simulations continue to provide incremental value, and aren’t just testing the same thing over and over again?

Matthew Ballance

Matthew is a verification technologist focused on the Questa inFact Intelligent Testbench Automation tool. He has worked at Mentor for over 15 years on Hardware/Software Co-Verification, Transaction-Level Modeling, and Functional Verification tools.



This article first appeared on the Siemens Digital Industries Software blog at https://blogs.stage.sw.siemens.com/verificationhorizons/2014/05/29/getting-more-value-from-your-stimulus-constraints/