“Trust what you see, not what you hear.”
Humans are wired to question facts and seek corroboration several times over. Developers of machine learning algorithms will testify to this methodical approach: testing and re-testing until models deliver the right outcomes.
In an age where AI is the favourite buzzword of technology enthusiasts, reports about its failures are cause for concern. VentureBeat claims that only 13% of AI projects make it into business production. IDC finds that 25% of organizations report that half of their AI projects fail.
What do these failures look like in the real world? Some examples are gender biases creeping into Amazon’s recruiting system and Tesla’s autonomous cars making the wrong decisions.
AI is under scrutiny, and rightly so
The first thing to know about AI is that it is, typically, a black-box model. This means that when AI makes a prediction, users cannot readily find answers as to why that prediction was made, what other predictions were possible, or to what degree they can trust the prediction.
These information gaps make black-box models very risky, especially for businesses. They cause confusion, lead users to distrust AI models, and result in sagging adoption.
Appending the term ‘explainable’ to AI seems to be a good solution to this problem, but is it enough to drive trust in AI? Let’s see.
Explainable AI is a new concept that aims to build transparent AI systems.
Explainable AI – Making models transparent
Think of Explainable AI as a transparency toggle: a separate set of techniques and tools applied on top of existing black-box AI models. When users opt for Explainable AI, they intentionally move the needle from black and opaque to visible and transparent.
Explainable AI opens up black-box models, giving business users access to the logic behind model decisions or predictions, for example, why the model made the decision it did for a certain group of customers. Every AI model also comes with some acceptable level of inaccuracy, and as a business user you want to understand why the model is wrong in certain cases. Explainable AI provides the answers to such questions. Simply put, it allows humans to trust what they see.
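As a concrete illustration, here is a minimal sketch of post-hoc explainability applied to a black-box model. It assumes a tree-based classifier and the open-source shap library; any comparable attribution tool would serve the same purpose.

```python
# Minimal sketch: explaining a black-box model's predictions with SHAP.
# The model and dataset are illustrative assumptions, not a specific product.
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input features,
# turning an opaque score into a per-feature contribution breakdown.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])

# shap_values now shows, for each of the first five records, how much
# each feature pushed the prediction up or down - the "why" behind it.
```

The value for a business user is not the code itself but the output: a per-decision breakdown that can be reviewed, questioned, and audited.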
From an enterprise standpoint, explainable AI provides a range of capabilities, many of which are vital for those looking to improve the success rates of AI programs, achieve transparency in AI systems, deliver better ROI, and enhance operations.
Explainable AI helps verify machine learning models, debug predictions, and uncover the reasons behind model decisions, enabling the transparency and explainability that drive trust.
Is explainability or transparency enough to drive trust in AI models?
No. Several aspects impact user trust in AI models even when the underlying models are made transparent or explainable with the help of Explainable AI. The following four core aspects must be maintained during the design and operationalization of AI models.
Reliability – It leverages user knowledge to combat uncertainty.
There will always be some level of uncertainty when decisions are made by AI/ML models, and even small misclassifications can lead to incorrect predictions. The solution is to go one step further and institute human-led guardrails. Enterprises should look for AI models that allow user-centric knowledge to be folded in during development. This gives analysts the ability to apply critical thinking and conditional decisions, so that the model delivers the right outcomes, reliably.
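A hypothetical sketch of such a guardrail is shown below: the model's prediction is accepted only when its confidence clears a human-defined threshold, and uncertain cases are routed to manual review. The threshold and the review route are illustrative assumptions, not prescriptions.

```python
# Hypothetical human-led guardrail around a classifier's decisions.
# "manual_review" and the 0.8 threshold are illustrative assumptions.
def guarded_decision(model, features, threshold=0.8):
    proba = model.predict_proba([features])[0]
    confidence = float(proba.max())

    # Human-defined rule: low-confidence cases are not auto-decided;
    # they go to an analyst instead of straight into production.
    if confidence < threshold:
        return {"decision": "manual_review", "confidence": confidence}

    return {"decision": int(proba.argmax()), "confidence": confidence}
```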
Safety – It actively de-biases itself.
Every human carries biases, and these are encoded in historical datasets and decision-making logic, too. AI models therefore inherit flawed assumptions and non-representative datasets that inevitably establish skewed patterns, resulting in skewed decisions. Model de-biasing is an active step in which specialist tools and frameworks are used to de-bias predictions and consistently improve model results.
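One simple form such a check can take is measuring whether positive outcomes are distributed evenly across groups. The sketch below computes a demographic parity gap with plain NumPy; the column names and values are assumptions for illustration, and dedicated toolkits offer richer metrics and mitigation methods.

```python
# Minimal sketch of a bias check: the gap in positive-prediction rates
# between groups. Data and group labels below are illustrative only.
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-prediction rates across groups."""
    groups = np.unique(sensitive)
    rates = [y_pred[sensitive == g].mean() for g in groups]
    return max(rates) - min(rates)

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])
print(demographic_parity_gap(y_pred, group))  # 0.5 -> a large skew worth investigating
```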
Transparency – It inculcates the ‘human’ perspective.
Explainable AI systems include the ‘human’ perspective when designing, developing, and deploying AI products. They account for human decisions, situations, and context, enabling a superior user experience.
Responsibility – It is trained for accountability.
Rather than giving AI free rein, enterprises should define clear boundaries for the application of AI. The key is to have an in-built ethical framework that outlines the expected early wins and long-term value. To ensure trust, the same framework must also have protocols that ensure data privacy and security, without compromising the user experience.
Subex builds ethical AI products that empower citizen data scientists by bringing in transparent and explainable AI. Our solutions come with platforms, frameworks, tools, and guardrails that equip enterprises with AI they can trust. Discover more at www.subex.com.
About the Author
Suresh is the CTO of Subex and brings with him wide-ranging leadership, managerial, and technical experience of over 27 years. Prior to Subex, he worked with companies like Motorola, ARRIS, and CommScope, where he built and scaled large global software engineering, professional services, and technical support operations serving industry verticals like cable, telecom, mobile, and wireless networking. Suresh holds a Bachelor’s and Master’s Degree in Electronics & Communications Engineering from Osmania University and a Post Graduate Diploma in Software Enterprise Management from IIM, Bangalore. Suresh is based in Bangalore, India.