Thought Leadership

Exploring artificial intelligence and machine learning

“It’s really exciting for me, as long as I’ve been in this high-tech industry, to see such rapid evolution of changes in algorithms and hardware.” – Ellie Burns

Recently, members of our high-level synthesis team, Ellie Burns and Mike Fingeroff, got together to discuss artificial intelligence (AI) and machine learning (ML). In their four-part podcast, they capture why AI and ML are turning up everywhere. It seems that every company in the world is exploring how AI can help its business. Let’s take a look at Episode 1.

AI has been under development for about 70 years, so why does it seem to be such a prevalent topic today? It took that long for three factors to converge and allow developers to create amazing solutions:

  • Compute resources
  • Massive data access
  • A wide variety of high-quality algorithms

There is a very popular AI network called AlexNet that was created back in 2012. It took six days to train this network on two high-end NVIDIA graphics processing units (GPUs). That same process would take weeks on a home computer. Twenty years ago, these compute tasks would not even have been possible, as they would have consumed years of CPU time. Today, researchers have access to enormous compute resources, either in racks in a lab or through the cloud.

The Internet has facilitated collecting the massive data sets required to train AI systems. For example, millions of people post photos online, and that data can be used to train object-recognition AI. In 2009, ImageNet was created by Fei-Fei Li and her very small team. It consisted of 3.2 million labeled images in over 5,000 different categories. She set up contests across the world to find the best object-recognition solution. Other examples of “free” data sets are the millions of reviews on Amazon that teams can access to refine natural language processing using AI, or Google’s Open Images data set, which contains over 9 million images.

Compute resources are available (and their cost keeps going down), and new data sets are being created every day. That gives algorithm developers a solid platform for creating new AI algorithms at an exponential rate. For example, AlexNet in 2012 correctly identified images about 70% of the time; today’s algorithms identify objects with close to 100% accuracy.

But what are these AI algorithms? For object and speech recognition, these algorithms are built by writing code that emulates the human brain. The brain is composed of neural networks. To replicate it, AI algorithms define a deep neural network: layers of networks stacked together, each processing the output of the previous one, in order to identify an image or a word.
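To make the “stacked layers” idea concrete, here is a minimal sketch (not from the podcast) of a deep network’s forward pass. The layer sizes, names, and random weights are all illustrative; real networks learn their weights through training, which is omitted here.

```python
import numpy as np

def relu(x):
    # Simple nonlinearity applied between layers.
    return np.maximum(0.0, x)

class DenseLayer:
    """One fully connected layer: a linear transform plus a nonlinearity."""
    def __init__(self, n_in, n_out, rng):
        self.weights = rng.standard_normal((n_in, n_out)) * 0.1
        self.bias = np.zeros(n_out)

    def forward(self, x):
        return relu(x @ self.weights + self.bias)

class DeepNetwork:
    """A 'deep' network is simply layers stacked so each layer's
    output becomes the next layer's input."""
    def __init__(self, layer_sizes):
        rng = np.random.default_rng(0)
        self.layers = [DenseLayer(a, b, rng)
                       for a, b in zip(layer_sizes, layer_sizes[1:])]

    def forward(self, x):
        for layer in self.layers:
            x = layer.forward(x)
        return x

# A tiny stack: a 784-value input (e.g. a flattened 28x28 image) flows
# through two hidden layers down to 10 output scores, one per category.
net = DeepNetwork([784, 128, 64, 10])
scores = net.forward(np.zeros(784))
print(scores.shape)  # (10,)
```

In a trained network, the largest of those 10 output scores would indicate which category the input most likely belongs to.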

Many neural network types are deployed and refined today. At this time, a particular network is employed for a particular task: the convolutional neural network (CNN) is typically used for image processing, while the recurrent neural network (RNN) is applied to natural language processing. This means that each task needs a different algorithm, which has a huge impact on hardware and software developers. The ultimate goal is to create a simulated human brain that can target any task at hand, but we are a long way off from that.
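What makes a convolutional layer different from the fully connected layers above? It slides one small set of shared weights across the input to detect local patterns, which is why CNNs suit images. A toy 1-D sketch (illustrative only; the kernel values are made up) shows the core operation:

```python
import numpy as np

def conv1d(signal, kernel):
    # Slide the kernel across the signal, reusing the same weights at
    # every position -- the weight sharing at the heart of CNNs.
    n = len(signal) - len(kernel) + 1
    return np.array([signal[i:i + len(kernel)] @ kernel for i in range(n)])

# This kernel responds wherever the input changes value (an "edge").
edge_kernel = np.array([-1.0, 1.0])
signal = np.array([0.0, 0.0, 1.0, 1.0, 0.0])
print(conv1d(signal, edge_kernel))  # [ 0.  1.  0. -1.]
```

Because the same tiny kernel is reused everywhere, a convolutional layer needs far fewer weights than a fully connected one, and it finds a pattern no matter where in the image it appears.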

Listen to Episode 1 here.

Thomas Dewey

Thomas Dewey (BSEE) has over 20 years of electronic design automation (EDA) experience at Siemens EDA (formerly Mentor Graphics). He has held various engineering, technical, and marketing responsibilities at the company, supporting custom integrated circuit design and verification solutions. Since 2017, he has researched, consulted, and written about all aspects of artificial intelligence.

This article first appeared on the Siemens Digital Industries Software blog at https://blogs.stage.sw.siemens.com/thought-leadership/2021/02/17/exploring-artificial-intelligence-and-machine-learning/