
AI attacks!

At any given moment, some hacker is attempting to break into an electronic system. That hacker might have financial gain in mind, want to wreak havoc, or simply be testing whether something can be done so they can brag about it to other hackers. We hear about software hacks all the time. Hardware hacks are rarer and more difficult to pull off, so they receive far less media coverage.

Hacks on IoT edge devices are particularly scary, as these devices are often in direct contact with people. I have not seen many news reports of actual malicious attacks on IoT edge hardware committed by hackers intending to steal data or wreak havoc. However, there are several reports of serious vulnerabilities uncovered by researchers, universities, and folks intent on publicly exposing the risk:

  • Samsung® Smart Home System: researchers at the University of Michigan hacked the PIN code to open a locked door and to reset the vacation mode. In a separate hack, a Samsung Smart Fridge was shown to allow interception of Gmail credentials when syncing to Google Calendar.
  • Jeep® Cherokee: security researchers disabled the transmission and brakes. This event caught the most attention in the press.
  • Tesla®: a DEF CON hacker discovered vulnerabilities in the Model S that allowed him to start the car, though he had to physically connect his laptop to a system bus.
  • Pacemaker: University of Alabama students wirelessly hacked a pacemaker installed in a dummy and sped up and slowed down the device.
  • Mattel® Hello Barbie and child monitors: Wi-Fi connectivity allowed interception of conversations between a child and the Barbie doll. There are also several reports of hackers breaking into child monitors and posting live video online.

AI systems are often implemented as IoT edge hardware devices, so AI experts are turning their attention to AI hardware attacks as well as software vulnerabilities. In particular, AI-driven object recognition systems are being studied. The reason? These systems are key to autonomous driving and factory automation solutions, and attacks on them can cause loss of life. In addition, most smartphones employ AI solutions, and hackers love to gain control over devices that almost everyone on Earth owns.

One category of hacks on object recognition is called adversarial machine learning attacks. In this kind of attack, a hacker finds a way to make the image classifier in the AI system assign the wrong label to an image. The classifier returns confidence scores that indicate how likely it is that the image being analyzed is a particular object. By adding carefully crafted “noise” to the input image, the hacker shifts those scores and changes the recognition result from, say, a stop sign to a street sign. You can see how that could cause a problem for an autonomous vehicle. Imagine attacking an object recognition system that controls factory floor robots and the problems that could occur if a robot picks up the wrong item.
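To make the idea concrete, here is a minimal sketch of one well-known attack of this kind, the Fast Gradient Sign Method (FGSM). The blog does not name a specific technique, so treat this as an illustration only: it assumes PyTorch, a pretrained torchvision classifier, and an input image tensor scaled to [0, 1]; the model and variable names are placeholders.

```python
# Minimal FGSM sketch (illustrative; assumes PyTorch + torchvision are installed).
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

def fgsm_attack(image, true_label, epsilon=0.01):
    """Return a slightly perturbed copy of `image` that the model may mislabel."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the direction that increases the loss for the true label.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# image: a (1, 3, 224, 224) tensor in [0, 1]; label: a (1,) tensor with the class id
# adv = fgsm_attack(image, label)
# model(adv).argmax(1) may now differ from model(image).argmax(1)
```

The key point is that the “noise” is not random; it is computed from the model’s own gradients, which is why a perturbation too small for a person to notice can still flip the predicted label.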

There are ways to prevent this type of attack, but they are complex and computationally expensive. Researchers from Carnegie Mellon University and KAIST Cybersecurity Research Center believe they have found a better defense.

The researchers use an unsupervised approach built on model explainability to monitor input data for signs of adversarial manipulation. An explanation method reconstructs the reasons the network selected a given output, producing an explanation map, and the detector learns what those maps look like for known-good, unmodified images. When the map produced for an adversarial example is compared against what was learned from benign inputs, the difference is clear enough to flag. Their research also shows that these abnormal maps look similar across known adversarial attack techniques, which is good news: the system does not need to be trained against each particular attack.
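The paper describes the full method; the sketch below only illustrates the general idea under simplifying assumptions of my own: a plain gradient saliency map stands in for the explanation method, and a small autoencoder trained only on maps from known-good images stands in for the unsupervised detector. The names and the threshold calibration are illustrative, not taken from the paper.

```python
# Illustrative sketch of explanation-based adversarial detection (not the authors' exact method).
import torch
import torch.nn as nn
import torch.nn.functional as F

def explanation_map(model, image):
    """Gradient of the top-class score w.r.t. the input pixels (a simple saliency map)."""
    image = image.clone().detach().requires_grad_(True)
    score = model(image).max(dim=1).values.sum()
    score.backward()
    return image.grad.abs().sum(dim=1, keepdim=True)  # shape (N, 1, H, W)

class MapAutoencoder(nn.Module):
    """Small autoencoder trained only on explanation maps of known-good images."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 4, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(4, 8, 4, stride=2, padding=1), nn.ReLU(),
                                 nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1))
    def forward(self, x):
        return self.dec(self.enc(x))

def is_adversarial(model, detector, image, threshold):
    """Flag inputs whose explanation map the detector cannot reconstruct well."""
    m = explanation_map(model, image)
    error = F.mse_loss(detector(m), m)
    return error.item() > threshold  # threshold calibrated on benign maps only
```

Because the detector only ever sees benign explanation maps during training, it needs no examples of any particular attack, which matches the property the researchers highlight.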

Hacking incidents actually have an upside: companies learn from them and upgrade their design and verification processes. It is good to see the AI community getting out in front of potential AI attacks. Instead of worrying about AI taking over the world, Skynet style, maybe we should worry more about AI security.

  • To learn about object recognition using computer vision on the factory floor, check out my blog here. We are doing some amazing things in this area in the design, realize, and optimize space.
  • To learn more about the research I covered in this blog, search for the paper entitled, “Unsupervised Detection of Adversarial Examples with Model Explanations.”

Thomas Dewey

Thomas Dewey (BSEE) has over 20 years of electronic design automation (EDA) experience at Siemens EDA (formerly Mentor Graphics). He has held various engineering, technical, and marketing responsibilities at the company, supporting custom integrated circuit design and verification solutions. For the last 4 years, he has researched, consulted, and written about all aspects of artificial intelligence.



This article first appeared on the Siemens Digital Industries Software blog at https://blogs.stage.sw.siemens.com/thought-leadership/2021/09/08/ai-attacks/