The myth of AI accuracy

In the world of artificial intelligence, there are few watchwords as important as accuracy. Whether it’s recognizing faces, recommending tools in software, or detecting defects in a microprocessor, doing so with consistent accuracy is vital to the future of AI in every segment. However, accuracy is not a term that can be used without prior definition. In a recent keynote at the 59th Design Automation Conference, Steve Teig, CEO of Perceive, argued that the generally accepted definition of accuracy in the AI industry is not the right one.

Currently, AI accuracy is defined as an average: simply comparing the number of right versus wrong responses an algorithm gives. Steve Teig argues that this is the wrong approach and that, in fact, average accuracy is almost never a useful metric to optimize for in AI-related problems. To illustrate his point, Steve gave some striking examples. He imagines a hypothetical AI that can tell you whether you have COVID-19, except that this model always answers “no.” Steve then cites data showing roughly 18.6 million active cases of COVID-19 globally against a world population of about 7.753 billion, meaning only about 0.24% of people worldwide have COVID-19 at any given moment. This perfectly illustrates the problem with average accuracy: by always answering “no,” the AI achieves better than 99% average accuracy, yet it provides no useful information, because the roughly 0.24% of “yes” cases it misses are precisely the ones that matter. Or consider an AI controlling a self-driving car that makes a lethal mistake once in 1,000,000 decisions. By the usual measure, that ratio would count as nearly perfect average accuracy, but because a human life was tragically lost, the system should be labeled a failure; a single mistake can be that severe.
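To make the arithmetic concrete, here is a minimal sketch in Python of how an always-“no” classifier can score above 99% average accuracy while catching zero positive cases. The ~0.24% prevalence figure comes from the keynote’s example; the population size and random seed are illustrative assumptions.

```python
import numpy as np

# Assumed prevalence from the keynote's example: ~0.24% of people are positive.
rng = np.random.default_rng(0)
labels = rng.random(1_000_000) < 0.0024          # True = actually has COVID-19

# A "model" that always answers "no".
predictions = np.zeros_like(labels, dtype=bool)

accuracy = (predictions == labels).mean()         # fraction of answers that are right
recall = predictions[labels].mean() if labels.any() else 0.0  # positives it catches

print(f"average accuracy: {accuracy:.4%}")        # roughly 99.76%
print(f"recall on positives: {recall:.4%}")       # 0.00%
```

The headline accuracy looks excellent, yet the only cases anyone cares about are exactly the ones this model never flags.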

Highlighted in this way, the notion of average accuracy seems almost silly. How can good enough be good enough when a single failure has the potential to completely invalidate thousands or even millions of successes? While it is certainly true that not all errors are as severe as the examples above, it is precisely because errors vary in severity that they shouldn’t be treated as a black-or-white condition. To paraphrase Steve, “ML never wants to make a big mistake.” But by focusing only on average accuracy, big mistakes are exactly what can occur. Just as not all data is equally important in training, not all errors should be penalized equally. This is where the concept of a loss function becomes important: it allows errors of greater severity to be penalized more heavily, steering training to prioritize avoiding the worst-case scenarios.
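As one illustration of that idea, here is a minimal sketch of a cost-sensitive cross-entropy loss that charges far more for missing a positive case than for raising a false alarm. The specific weights and example probabilities are illustrative assumptions, not values from the keynote.

```python
import numpy as np

def weighted_cross_entropy(y_true, p_pred, miss_cost=100.0, false_alarm_cost=1.0):
    """Cross-entropy where each kind of error carries its own penalty.

    y_true: array of 0/1 labels; p_pred: predicted probability of class 1.
    Missing a positive is weighted `miss_cost`; a false alarm is weighted
    `false_alarm_cost`, so severe errors dominate the training signal.
    """
    eps = 1e-12
    p_pred = np.clip(p_pred, eps, 1 - eps)
    per_sample = -(miss_cost * y_true * np.log(p_pred)
                   + false_alarm_cost * (1 - y_true) * np.log(1 - p_pred))
    return per_sample.mean()

y_true       = np.array([0, 0, 1, 0])
confident_no = np.array([0.01, 0.01, 0.01, 0.01])   # always says "no"
cautious     = np.array([0.10, 0.10, 0.60, 0.10])   # hedges, but catches the positive

print(weighted_cross_entropy(y_true, confident_no))  # large: the missed positive dominates
print(weighted_cross_entropy(y_true, cautious))      # much smaller
```

Under a plain, unweighted loss the always-“no” model looks nearly perfect; once the miss is penalized in proportion to its severity, it becomes by far the worse choice.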

This is why non-uniform weights, activation functions, and biases are starting to gain traction in the AI community. One of the reasons ML models have grown so large today is precisely the uniform treatment of these variables, rather than tuning them individually to highlight important aspects and de-emphasize less valuable ones. Another major topic of Steve’s keynote was compression and the need to shrink models to improve their efficiency (you can read more about that in a blog here); however, compression has the added benefit of helping to highlight the important information in data and models. By adding compression into the mix, AI researchers gain a strong starting point from which to fine-tune models while greatly reducing their energy consumption, both during training and after deployment.
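To give a rough sense of what compression can look like in practice, here is a minimal sketch of magnitude-based weight pruning, one common compression technique. The layer size and sparsity target are illustrative assumptions, not details from the keynote.

```python
import numpy as np

def prune_by_magnitude(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights, keeping only the largest (1 - sparsity)."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
layer = rng.normal(scale=0.1, size=(512, 512))       # a hypothetical dense layer

pruned, mask = prune_by_magnitude(layer, sparsity=0.9)
print(f"weights kept: {mask.mean():.1%}")             # about 10% of the parameters survive
print(f"nonzero weights after pruning: {np.count_nonzero(pruned)}")
```

Only the largest weights survive; the rest are treated as carrying little information, which is exactly the kind of non-uniform treatment described above, and the smaller model costs far less to run.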

The future landscape of AI training will look vastly different from the one we know today as the focus shifts away from monolithic models solving problems through brute force and toward fine-tuned models designed to operate quickly and efficiently at the edge without ever making a large mistake. A shift in this direction also opens up a whole new avenue for AI in critical areas, such as self-driving cars or medical diagnostic systems, where any error is one too many. At the end of the day, there will be no magic bullet that pushes AI to “the next level,” but the innovative work done by visionaries like Steve Teig will pave the way for AI as a whole to take a step forward into the next generation of smart, truly intelligent AI.

Check out the recording of Steve Teig’s full keynote here.


Siemens Digital Industries Software helps organizations of all sizes digitally transform using software, hardware and services from the Siemens Xcelerator business platform. Siemens’ software and the comprehensive digital twin enable companies to optimize their design, engineering and manufacturing processes to turn today’s ideas into the sustainable products of the future. From chips to entire systems, from product to process, across all industries. Siemens Digital Industries Software – Accelerating transformation.

Spencer Acain


This article first appeared on the Siemens Digital Industries Software blog at https://blogs.stage.sw.siemens.com/thought-leadership/2022/09/08/the-myth-of-ai-accuracy/