
Cats != Coverage

“Any sufficiently advanced technology is indistinguishable from magic.”

– Arthur C. Clarke, Profiles of the Future

“We actually made a map of the country, on the scale of a mile to the mile!”
“Have you used it much?” I enquired.
“It has never been spread out, yet,” said Mein Herr. “The farmers objected: they said it would cover the whole country, and shut out the sunlight! So we now use the country itself, as its own map, and I assure you it does nearly as well.”

– Lewis Carroll, Sylvie and Bruno Concluded

For about as long as Functional Coverage has been “a thing,” there has been the alluring vision of a magic system where you could write a testbench that would randomly stimulate your design, check your coverage and automatically adjust the stimulus constraints to target the remaining coverage holes. Run such a system through a few loops and voilà! – you’ve magically reached your coverage goals! In fact, there were a few papers presented at DVCon US last week that dealt with this very topic, and it even came up during a panel on the impact of Deep Learning on Verification. One might be tempted to think that Deep Learning has become Verification’s version of Arthur C. Clarke’s sufficiently advanced technology that is indistinguishable from magic. As much as I hate to be the bearer of bad news, it’s not going to happen – at least not anytime soon.

A closer examination of the papers that discussed this topic shows that their approaches only work when there is a direct correlation between the coverage points and the possible stimulus values being generated. In such an environment, it is indeed possible to randomize the stimulus, track the values generated and narrow the constraints so that the next randomization eliminates the already-covered values from consideration. While this sounds great, you still have to randomize the values every cycle, and narrowing the constraints actually forces the solver to work harder each time. If your goal is to maximize coverage in the fewest number of tests without wasting time, you should definitely check out using Portable Stimulus with a tool like Questa inFact® instead, since it uses the coverage goals to automatically generate the minimal set of tests that is guaranteed to hit your coverage.
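
To make that concrete, here is a minimal sketch of the constraint-narrowing loop described above, written in Python and assuming the simplest possible case where each coverage bin corresponds directly to one legal stimulus value; the names and the sixteen-value range are invented for illustration and are not any tool’s API.

import random

# Minimal sketch of coverage-driven constraint narrowing, assuming the
# simplest case: one coverage bin per legal stimulus value. The names and
# the 16-value range are hypothetical, not any tool's API.

legal_values = set(range(16))    # initial constraint: value inside [0:15]
covered = set()                  # bins already hit

randomizations = 0
while covered != legal_values:
    # Narrow the constraint: exclude values whose bins are already covered.
    remaining = tuple(legal_values - covered)
    value = random.choice(remaining)    # randomize within the narrowed set
    covered.add(value)                  # sample coverage on the new value
    randomizations += 1

print(f"hit all {len(legal_values)} bins in {randomizations} randomizations")

Even in this ideal case you pay for a randomization on every pass, each pass hands the solver a longer exclusion list, and the one-to-one mapping from value to bin is exactly the assumption that breaks down once the coverage describes internal design behavior rather than the stimulus itself.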

I took advantage of the concentration of old friends and experts at DVCon US to spend some time with John Aynsley (who is both) talking about this very topic. John has been studying Deep Learning for quite some time and has shared his thoughts on this intriguing topic in standing-room-only workshops at the last two DVCons. The problem is that Deep Learning requires some measurable, quantifiable and ultimately predictable relationship between the stimulus and the coverage, and establishing that relationship is a much more difficult – ultimately intractable – problem than recognizing pictures of cats. Instead of trying to recognize a pattern similar to one you’ve already seen, coverage closure requires determining what stimulus must be applied to hit coverage points that have never been hit before. It’s not unlike asking a neural network that was trained to recognize cats to recognize cars.

As I mentioned, this can work when the coverage points match the input values, but if you’re trying to establish a correlation between input stimulus and functional coverage metrics about the inner workings of a state machine deeply embedded in your design, it simply can’t. Deep Learning requires a “cost function” that can be evaluated and minimized to achieve the “learning.” Modern complex designs simply do not have such a cost function that can be predictably evaluated. The best you could do would be to use a reference model to evaluate the cost function of a given stimulus sequence, but for anything more than the most trivial coverage, you’d need a “reference model” that is essentially the design itself. And then you’re looking at Lewis Carroll’s map with a “scale of a mile to the mile.” You could try to use it, but the farmers would object.
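
As a toy illustration of the problem (the state machine, the values and the coverage point below are all invented), consider the “cost function” a learner would see when the target is a coverage point buried behind a small state machine: it is flat everywhere except at the one stimulus sequence that finally reaches the point, so there is no slope for any learning algorithm to follow.

# Hypothetical illustration only: a coverage point hidden behind a small
# state machine standing in for logic buried deep inside a design. The
# magic sequence [0xA5, 0x3C, 0x7E] is invented for the example.

def buried_coverage_cost(stimulus_sequence):
    """Return 0.0 if the sequence reaches the hidden state, else 1.0."""
    state = 0
    for word in stimulus_sequence:
        if state == 0 and word == 0xA5:
            state = 1
        elif state == 1 and word == 0x3C:
            state = 2
        elif state == 2 and word == 0x7E:
            state = 3               # the coverage point of interest
        else:
            state = 0               # any other word resets the machine
    return 0.0 if state == 3 else 1.0

# Nearly every sequence scores exactly 1.0, so the cost surface gives a
# learner no direction toward the one sequence that hits the point; it can
# only stumble onto it, which is random stimulus by another name.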

Tom Fitzpatrick

Comments

    • Thanks for your comment!
      Portable Stimulus (PSS) lets you define a static graph that specifies the scheduling relationship between possible behaviors, yielding a large number of possible scenarios that can be generated from a single specification. Questa inFact can process that graph to analyze the possible scenarios, and it can also take your coverage specification into account so that when it generates test implementations, it will choose the scenarios that it knows will hit your coverage goals.
      For more information about how Portable Stimulus works, please see the Portable Stimulus Basics video course on Verification Academy.
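
As a purely conceptual sketch of that idea (this is not PSS syntax and not how Questa inFact is implemented), the following Python enumerates the scenario space implied by a tiny set of behaviors and keeps only the scenarios needed to close a set of coverage goals; the actions, parameters and goals are all invented for illustration.

from itertools import product

# Conceptual sketch only: not PSS syntax and not how Questa inFact works
# internally. It shows the idea of statically enumerating a scenario space
# and selecting just the scenarios needed to close the coverage goals.
# The actions, parameters and goals below are invented for illustration.

write_kinds = ("single", "burst")   # stand-ins for scheduled behaviors
read_kinds  = ("single", "burst")
sizes       = (1, 4, 8)

# Coverage goal: see every (write_kind, size) combination at least once.
goals = {(w, s) for w in write_kinds for s in sizes}

selected = []
remaining = set(goals)
for w, r, s in product(write_kinds, read_kinds, sizes):  # whole scenario space
    if (w, s) in remaining:          # keep only scenarios that close a hole
        selected.append((w, r, s))
        remaining.remove((w, s))

total = len(write_kinds) * len(read_kinds) * len(sizes)
print(f"{len(selected)} tests cover all {len(goals)} goals "
      f"out of {total} possible scenarios")

The contrast with the feedback loop sketched earlier is that the selection happens up front, by analysis of the scenario space against the goals, rather than by randomizing, sampling coverage and randomizing again.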


This article first appeared on the Siemens Digital Industries Software blog at https://blogs.stage.sw.siemens.com/verificationhorizons/2019/03/08/cats-coverage/