Thought Leadership

How generative AI is reshaping cybersecurity

In a recent series of cybersecurity talks, host Michael Metzler, VP of Horizontal Management Cybersecurity for Digital Industries at Siemens AG, is joined by Boris Scharinger, Senior Innovation Manager for Artificial Intelligence at Siemens, to discuss the impact that artificial intelligence, especially generative AI, is set to have on the cybersecurity industry. With the recent advent of foundation models and large language models, the way people and software interact is poised to change, and with that, new cyberattack vectors will emerge. To continue reaping the benefits of digitalization, companies must be ready to address these generative AI-powered attacks, whether through new practices and policies or by using AI itself. Check out the talk here or find some key insights below.

Recent advances in AI offer unprecedented new ways for humans and computers to interact through large language models (LLMs) and generative AI. LLMs can understand and respond to queries in a remarkably human-like way, enabling intuitive natural language processing (NLP) in many applications and allowing the tool itself to become a colleague of sorts, accepting and responding to instructions much as a real person would. While generative AI is poised to revolutionize human-computer interaction, it also presents unique risks and challenges for cybersecurity.

While the movies may portray the first step in a cyberattack as hacking a firewall or exploiting a vulnerability in an application, the reality is that many cyberattacks begin with social engineering. These types of attacks seek to compromise the human element to gain privileged information, such as passwords, and use it to infiltrate a computer system. They often take the form of phishing emails, text messages, or even calls in which the attacker impersonates a trusted figure to gain access to privileged information. Today, these attacks are mostly generic and sent to many people in an organization in the hope that one will succeed, but LLMs could offer hackers a way to create tailored attacks that are nearly indistinguishable from messages written by real humans.

Using an LLM, a hacker could craft an attack targeting a specific person, drawing on company-specific language, templates, and other information, without spending large amounts of time writing it manually. That makes such attacks incredibly difficult to distinguish from legitimate communication while still reaching a large number of employees. Advances in image and video generation, as well as voice synthesis, could make it hard to tell whether even a video call is legitimate. With that said, even as cyberattacks leverage AI to become more sophisticated, so does the defense.

The same capability that makes generative AI such a threat to cybersecurity can also make it one of cybersecurity's greatest assets: its ability to recognize patterns. For example, a security AI copilot integrated into messaging apps and trained to recognize a company's communication style could help flag potential phishing emails by spotting the tiny discrepancies their intended recipients might overlook. AI could also continuously audit the logs of connected devices across an entire company, spotting overarching patterns that would otherwise be lost in the vast quantity of data – patterns that could be used to identify potential cyberattacks or even head off attacks in progress.
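To make the log-auditing idea a bit more concrete, here is a minimal, hypothetical sketch (not from the talk) of how anomalous device activity might be flagged with an off-the-shelf unsupervised model; the feature names and sample values are invented purely for illustration.

```python
# Minimal sketch: flagging unusual device-log activity with an unsupervised model.
# The features and sample data below are invented for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-event features extracted from device logs:
# [login_attempts_last_hour, megabytes_transferred, off_hours_flag]
baseline_events = np.array([
    [1, 12.4, 0],
    [2, 8.1, 0],
    [1, 15.0, 0],
    [3, 9.7, 1],
    [2, 11.2, 0],
])

new_events = np.array([
    [2, 10.5, 0],    # resembles routine activity
    [40, 950.0, 1],  # many off-hours logins plus a large transfer: worth a look
])

# Fit on historical "normal" activity, then score incoming events.
model = IsolationForest(contamination=0.1, random_state=0)
model.fit(baseline_events)

for event, label in zip(new_events, model.predict(new_events)):
    status = "FLAG FOR REVIEW" if label == -1 else "ok"
    print(event, status)
```

A real deployment would of course draw on far richer log data and continuous retraining, but the principle is the same: learn what normal looks like, then surface the outliers a human would never find by hand.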

When it comes to addressing the security concerns around AI, however, the best approach is education. Just like learning to ride a bike or use a computer, AI literacy will soon be a critical part of daily life, and it's never too soon to start. Even though generative AI is still in its infancy and the full scope of its applications has yet to be seen, understanding its strengths, weaknesses and applications will be an invaluable skill going forward. With a strong foundation in what generative AI is capable of, common sense and critical thinking can be applied to counter any malicious use of the technology, just as they are already used to fight back against existing social engineering attacks.

As much as AI poses a threat to cybersecurity, it can also be a powerful tool in safeguarding the digital world. No matter how it's used, understanding AI, its applications and its shortcomings is crucial to understanding how it can be applied, not just to cybersecurity but to technology as a whole, and how to defend against its misuse. Even though it's still in its infancy, AI is here to stay, and the companies that educate and prepare themselves for that future will be the ones to continue reaping the benefits of digitalization while staying one step ahead of anyone seeking to use this new technology for nefarious purposes.


Siemens Digital Industries Software helps organizations of all sizes digitally transform using software, hardware and services from the Siemens Xcelerator business platform. Siemens’ software and the comprehensive digital twin enable companies to optimize their design, engineering and manufacturing processes to turn today’s ideas into the sustainable products of the future. From chips to entire systems, from product to process, across all industries. Siemens Digital Industries Software – Accelerating transformation.

Spencer Acain


This article first appeared on the Siemens Digital Industries Software blog at https://blogs.stage.sw.siemens.com/thought-leadership/2024/04/25/how-generative-ai-is-reshaping-cybersecurity/