{"id":14202,"date":"2020-05-04T06:10:59","date_gmt":"2020-05-04T13:10:59","guid":{"rendered":"https:\/\/blogs.mentor.com\/verificationhorizons\/?p=14202"},"modified":"2020-09-17T12:53:03","modified_gmt":"2020-09-17T16:53:03","slug":"the-ideal-verification-timeline","status":"publish","type":"post","link":"https:\/\/blogs.stage.sw.siemens.com\/verificationhorizons\/2020\/05\/04\/the-ideal-verification-timeline\/","title":{"rendered":"The Ideal Verification Timeline"},"content":{"rendered":"<p>Our discussion around building integrated verification methodologies started with <i>where <\/i>techniques apply to design by <a href=\"https:\/\/blogs.mentor.com\/verificationhorizons\/blog\/2020\/04\/20\/tools-in-a-methodology-toolbox\/\" target=\"_blank\" rel=\"noopener\">plotting options for verifying low, medium and high-level abstractions<\/a>. That was one step toward understanding how techniques fit together in a complete methodology. Today we take the next step with a timeline for <i>when <\/i>all these techniques apply.<\/p>\n<p>When I started putting this timeline together, I intended it to be generally applicable. That only lasted one conversation. Fellow product engineer Matthew Ballance immediately pointed out that a generally applicable timeline isn\u2019t possible. At a minimum, he suggested, a timeline depends on the scope of the test subject; are we looking at a timeline for subsystem development or an entire SoC?<\/p>\n<p>Thanks to Matthew, a subsystem timeline is what we\u2019re looking at for now; the SoC timeline is TBD. What it shows are all the techniques at our disposal, a recommendation for when each is most effective and the relative impact we should expect. I loosely define impact as <i>value of new results<\/i> (i.e. 
portion of design verified, number of bugs found, etc.).<\/p>\n<p><a href=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/54\/2020\/05\/value-time.png\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-14203 size-full\" src=\"https:\/\/blogs.sw.siemens.com\/wp-content\/uploads\/sites\/54\/2020\/05\/value-time.png\" alt=\"\" width=\"778\" height=\"253\" \/><\/a><\/p>\n<p>The curve for each technique describes how its impact grows, peaks, then tapers off as the value of new results diminishes.<\/p>\n<p>The pace at which impact builds for a particular technique is proportional to its infrastructure dependencies and\/or the state of the design. On the infrastructure side, constrained random testing has large infrastructure dependencies, so its impact grows slowly. Reachability checking, on the other hand, is deployed without infrastructure, so it ramps up very quickly. With respect to design state, that may apply to the size of the code base or its quality. The impact of lint, for example, grows slowly at first with the size of the code base, then flattens out to incrementally deliver new value every time lint is run. In contrast, constrained random coverage grows slowly as the quality of the design improves and more of the state space can be explored without hitting bugs.<\/p>\n<p>The relative height of the peaks is subjective. After <i>many <\/i>discussions and tweaks to this graphic, I think the height and duration of each peak pretty accurately capture what we\u2019d see if we averaged the entire industry. That said, I\u2019m certain someone out there will curse me for suggesting the impact of property checking is less than they\u2019ve experienced, or take exception to a perceived bias toward constrained random testing. I\u2019ve done my best to capture relative impact while acknowledging that case-by-case mileage may vary tremendously.<\/p>\n<p>Time on the x-axis is split into development stages. 
Design is the before-any-coding-happens step. It\u2019s not relevant to this discussion yet, but I expect it to become relevant when we talk about infrastructure down the road. For now, just consider it the start line. RTL coding is where the design is built. Sanity marks the first functional baseline for the subsystem, where focus is on design bring-up and pipe-cleaning features with happy-path stimulus. The bug hunting phase is where we start flexing the design beyond the happy path. We expect to find bugs at this point &#8211; which is why I call it bug hunting. As the design matures, we transition into the regression phase. Focus here is still on testing; new and longer tests that push the design into darker corners of the state space. Last is closure, where focus turns to complete state space coverage through comprehensive stimulus and analysis.<\/p>\n<p>There are two sets of recommendations I intend people to find in the timeline:<\/p>\n<ul>\n<li style=\"font-weight: 400\">When a technique is ideally applied in a development cycle; and<\/li>\n<li style=\"font-weight: 400\">When a technique is ideally applied relative to other techniques.<\/li>\n<\/ul>\n<p>For example, I\u2019m recommending X checking has its greatest impact after RTL coding is well underway, right up until it\u2019s complete. Further, I\u2019m recommending X checking is best applied before directed testing or constrained random simulation begins. This isn\u2019t to say X checking couldn\u2019t be useful at the very end, but its impact would be far less because most X\u2019s would have already been discovered &#8211; much more <i>painfully<\/i> discovered, I should add &#8211; by other techniques.<\/p>\n<p>A couple of other points on steps in the timeline that will further drive decisions on which techniques are used and when\u2026<\/p>\n<p>The most dangerous stage on our timeline is bug hunting because it\u2019s thoroughly unpredictable. 
According to my favourite Wilson Research Group data from 2018, <a href=\"https:\/\/blogs.mentor.com\/verificationhorizons\/blog\/2019\/01\/29\/part-8-the-2018-wilson-research-group-functional-verification-study\/\" target=\"_blank\" rel=\"noopener\">verification engineers estimated spending 44% of their effort on debug<\/a> (i.e. bug hunting). Debug cycles are non-deterministic; some bugs take an hour to fix, others take a week. The fewer bugs you have, the less time you waste on bug hunting and the more predictable your progress. A complete verification methodology prioritizes predictability, which means squeezing this part of the timeline. Higher quality inputs to the bug hunting phase are one way to do that.<\/p>\n<p>The most underrated step in the timeline: RTL coding. Coming back to the same Wilson Research Group data point from 2018, verification engineers estimated that only 19% of their time is spent developing testbenches. The disparity between development and debug gives the impression that we\u2019re rushing to get testbenches written &#8211; a predictable activity &#8211; so we can quickly get to fixing them &#8211; an unpredictable activity. We don\u2019t have the same granular breakdown for design engineers, but we do have a data point from the same survey that shows <a href=\"https:\/\/blogs.mentor.com\/verificationhorizons\/blog\/2019\/01\/29\/part-8-the-2018-wilson-research-group-functional-verification-study\/\" target=\"_blank\" rel=\"noopener\">design engineers spend almost half their time doing verification<\/a>. It\u2019s hard to draw a strong conclusion from that, but I think there\u2019s enough anecdotal evidence kicking around to suggest much of that time is spent debugging tests; the same unpredictable activity. So assuming we\u2019re all involved in bug hunting, I\u2019d like to see us displace some of that effort with proactive verification during RTL (and testbench) coding. 
Looking at the timeline, there are eight techniques to choose from.<\/p>\n<p>That\u2019s a quick overview of the timeline with an explanation of how I\u2019ve modeled the impact of techniques and where they\u2019re placed in time. I\u2019ll be back in the next few weeks to pull the timeline apart and get a better feel for what\u2019s possible. Until then, I\u2019d like to hear people\u2019s thoughts on how techniques are positioned on the timeline, their impact and how they complement each other. If you\u2019ve got any strong opinions, here\u2019s your chance to let them out!<\/p>\n<p>-neil<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Our discussion around building integrated verification methodologies started with where techniques apply to design by plotting options for verifying low,&#8230;<\/p>\n","protected":false},"author":72194,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"spanish_translation":"","french_translation":"","german_translation":"","italian_translation":"","polish_translation":"","japanese_translation":"","chinese_translation":"","footnotes":""},"categories":[1],"tags":[],"industry":[],"product":[],"coauthors":[],"class_list":["post-14202","post","type-post","status-publish","format-standard","hentry","category-news"],"_links":{"self":[{"href":"https:\/\/blogs.stage.sw.siemens.com\/verificationhorizons\/wp-json\/wp\/v2\/posts\/14202","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blogs.stage.sw.siemens.com\/verificationhorizons\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.stage.sw.siemens.com\/verificationhorizons\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.stage.sw.siemens.com\/verificationhorizons\/wp-json\/wp\/v2\/users\/72194"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.stage.sw.siemens.com\/verificationhorizons\/wp-json\/wp\/v2\/comments?post=14202"}],"version-history":[{"co
unt":1,"href":"https:\/\/blogs.stage.sw.siemens.com\/verificationhorizons\/wp-json\/wp\/v2\/posts\/14202\/revisions"}],"predecessor-version":[{"id":14694,"href":"https:\/\/blogs.stage.sw.siemens.com\/verificationhorizons\/wp-json\/wp\/v2\/posts\/14202\/revisions\/14694"}],"wp:attachment":[{"href":"https:\/\/blogs.stage.sw.siemens.com\/verificationhorizons\/wp-json\/wp\/v2\/media?parent=14202"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.stage.sw.siemens.com\/verificationhorizons\/wp-json\/wp\/v2\/categories?post=14202"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.stage.sw.siemens.com\/verificationhorizons\/wp-json\/wp\/v2\/tags?post=14202"},{"taxonomy":"industry","embeddable":true,"href":"https:\/\/blogs.stage.sw.siemens.com\/verificationhorizons\/wp-json\/wp\/v2\/industry?post=14202"},{"taxonomy":"product","embeddable":true,"href":"https:\/\/blogs.stage.sw.siemens.com\/verificationhorizons\/wp-json\/wp\/v2\/product?post=14202"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/blogs.stage.sw.siemens.com\/verificationhorizons\/wp-json\/wp\/v2\/coauthors?post=14202"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}