Trust, the basis of everything in AI

To follow up on my recent podcast interview on AI ethics with Ron Bodkin, I wanted to say a bit more about how we think about AI at Siemens and how we lead by example on trust. One example is our initiation of the cybersecurity Charter of Trust. Our companywide AI Educational Campaign is also a testament to this emphasis on a “trustworthy” approach to the systems we create. We have applied industrial AI for many years and believe its widespread use requires additional investment in research and development, as well as a sharper focus on education and skills development across organizations, both those creating the technology and those using it.

An AI system’s uniqueness lies mainly in its acquired ability, its aptitude, to scale up the performance of complex tasks through learning – what I refer to here as “practical intelligence.” Such practical intelligence gives AI the ability to direct industrial products or ecosystems toward specific goals. An AI system can be trusted to pursue those goals only if it addresses robustness concerns, respects all applicable laws, and complies with ethical principles.

At Siemens, we are committed to abiding by business-to-consumer and business-to-business principles that go beyond fairness and bias avoidance. Issues such as explainability and interpretability, privacy and protection of customer data, responsibility and accountability in product development processes, and doing our best to ensure that “do no harm” is inherent in the systems we create are all taught and practiced throughout our organization.

From our customers’ perspective, implementing an AI-based system should not be viewed as a traditional waterfall IT project. To find success with AI, digital industry leaders must rethink how they deploy AI into their environments and plan that deployment carefully. For instance, if biased data is used to train an AI model, the model’s feedback cycle will amplify that bias.

Take for example a factory where critical areas of the shop floor are monitored more rigorously than other areas. Because the model sees disproportionately more data from those areas, more errors will be discovered there, leading to even more rigorous monitoring, and the cycle continues.
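To make that feedback loop concrete, here is a minimal, hypothetical sketch in plain Python (the area names, defect rates, and inspection counts are all made up for illustration and are not Siemens code). Both areas have the same true defect rate, but the area that starts with more inspections reports more findings, and a naive policy that shifts inspections toward findings soon starves the other area of scrutiny.

```python
import random

# Hypothetical illustration: two shop-floor areas with the SAME underlying
# defect rate, but area "A" starts out more heavily monitored.
true_defect_rate = {"A": 0.05, "B": 0.05}
inspections = {"A": 800, "B": 200}   # checks per week

random.seed(42)
for week in range(4):
    # Defects *found* depend on how often we look, not only on how often they occur.
    found = {
        area: sum(random.random() < true_defect_rate[area] for _ in range(n))
        for area, n in inspections.items()
    }
    print(f"week {week}: inspections={inspections}, defects found={found}")
    # Naive policy: shift 100 inspections toward whichever area reported more
    # defects. This is the feedback cycle that amplifies the initial bias.
    worse = max(found, key=found.get)
    better = min(found, key=found.get)
    shift = min(100, inspections[better])
    inspections[worse] += shift
    inspections[better] -= shift
```

Within a few simulated weeks the lightly monitored area receives no inspections at all, even though its true defect rate never changed.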

Such monitoring on the shop floor could have positive and negative outcomes. Use of AI-enabled Glass EE2 or HoloLens II could reduce errors and even help oversee the health and well-being of individual workers while they carry out their tasks. On the other hand, because of the AI feedback cycle described above, it could end up displaying unintended bias and imposing undue additional monitoring. Worse yet, it could be misused: a supervisor with access to a worker’s private health data could decide, purely from that worker’s emotional state, that the employee dislikes the supervisor and/or the assigned tasks!

Excerpt from WTMT#3

We are looking at techniques such as Federated Learning, Secure Multi-Party Computation, and Differential Privacy to address these concerns. I will cover these topics more rigorously in a future blog.
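The detailed treatment will have to wait for that post, but to give a flavor of one of these ideas, here is a minimal, hypothetical sketch of the differential-privacy principle in Python (the function name, scores, threshold, and epsilon are invented for illustration; this is not Siemens code). A count over workers’ wellness readings is published only after calibrated Laplace noise is added, so no individual record can be inferred from the reported number.

```python
import random

def dp_count(values, threshold, epsilon):
    """Differentially private count of values above `threshold` (Laplace mechanism).

    Hypothetical illustration only: noise is drawn from a Laplace distribution
    with scale = sensitivity / epsilon (the sensitivity of a count is 1), so
    adding or removing any single record barely changes the published figure.
    """
    true_count = sum(v > threshold for v in values)
    scale = 1.0 / epsilon
    # The difference of two i.i.d. exponentials with mean `scale` is Laplace(0, scale).
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise

# Example: report how many workers flagged elevated stress without exposing anyone.
stress_scores = [0.2, 0.9, 0.4, 0.8, 0.7, 0.1]
print(dp_count(stress_scores, threshold=0.6, epsilon=0.5))
```

A smaller epsilon means stronger privacy and noisier answers; choosing that trade-off deliberately is exactly the kind of design decision these techniques force us to make explicit.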

To summarize, in an increasingly networked world, trust is the basis for everything – even in the rather less flashy realm of industrial AI, far removed from the consumer software and social media where trust issues typically arise. Our customers demand that any system we provide be technically robust, conform to legal regulations, and be ethically sound. The overall challenge for AI developers like ours at Siemens is to carefully constrain the reward functions of the models they create so that the resulting behavior is desirable. We must also take architectural precautions in our AI-based systems against unintended consequences and malicious use.
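As a rough illustration of what “constraining the reward function” can mean in practice, here is a hypothetical Python sketch (the metric names and weights are invented for illustration, not our production code): the reward a model is optimized against explicitly penalizes unsafe or wasteful behavior instead of rewarding raw output alone.

```python
# Hypothetical sketch of a constrained reward: names and weights are invented.
def constrained_reward(throughput, safety_violations, energy_kwh,
                       safety_weight=10.0, energy_weight=0.1):
    """Trade off task performance against undesirable side effects.

    A model optimized on throughput alone may learn to cut corners; pricing
    safety violations and resource use into the reward keeps the optimum
    inside the range of acceptable behavior.
    """
    return throughput - safety_weight * safety_violations - energy_weight * energy_kwh

# Two candidate policies with made-up numbers: the "faster" one scores worse
# once safety is priced in.
print(constrained_reward(throughput=120, safety_violations=0, energy_kwh=50))  # 115.0
print(constrained_reward(throughput=140, safety_violations=3, energy_kwh=60))  # 104.0
```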

These are not easy challenges, I admit, but they are ones we will meet because, at Siemens, we are inspired to build trusting and fruitful relationships with our partners and customers.

Mohsen Rezayat

Comments

2 thoughts about “Trust, the basis of everything in AI”
  • Thank you, Mohsen, for this crisp summary of what it takes to make the AI components in our products successful. At the EU level, the challenges of AI you mention have been reviewed skeptically, and additional liability regimes for AI are even being discussed. It will therefore become more and more important that we convince our customers and the public by developing and offering trustworthy AI systems. I hope we will keep exchanging views on these topics, specifically between the legal and development teams.

    • Hello Sandra. Thanks for the kind words. I couldn’t agree more that our developers must work with legal and ethics teams (and other relevant partners) to answer the tough questions that are bound to arise as AI becomes widespread in our products and services. As you indicated, the legal and ethical challenges that AI poses to a company like Siemens are enormous, and we need a collective effort to address them. I see this as an opportunity for us, and I will join you and other colleagues in positioning Siemens as a trendsetter in innovation for Trustworthy AI.

This article first appeared on the Siemens Digital Industries Software blog at https://blogs.stage.sw.siemens.com/thought-leadership/2020/11/23/trust-the-basis-of-everything-in-ai/