Trustworthy AI: Operationalizing AI Models with Governance – Part 1

Editor’s note: Sourav Mazumder is a speaker for ODSC West 2021. Be sure to check out his talk, “Operationalization of Models Developed and Deployed in Heterogeneous Platforms,” for more info on trustworthy AI there. Read PART 2 of the series here.

Artificial intelligence (AI) is already having a significant impact on the development of humanity. For enterprises, the use of AI is no longer optional. However, the core of AI relies on using data samples/examples to train a system/machine with algorithms so that it can behave intelligently like a human. Hence, just like a human's, the behavior of an AI-based solution is not always entirely predictable. So, just as it is important to have trustworthy humans involved in a business or social interaction, the trustworthiness of AI solutions is equally important when they are used to augment humans in serious business or social transactions. While AI is mandatory for business, trustworthy AI is crucial.

The Core Aspects of Trustworthy AI

So how can one ascertain the trustworthiness of AI solutions? Data is the raw ingredient of an AI solution, so AI is only as good as the data used to create it. Next come the algorithms used to build intelligence from that data. Overall, there should be a “right way” to implement AI solutions so that they are trustworthy. Otherwise, they could be problematic, just like a human raised in the wrong environment without good ethics and values instilled in them.


Based on our experience (40,000 AI-related customer engagements over more than a decade, spanning 20 industries and 80 countries), the “right way” of implementing AI solutions can be broken into three key aspects that make them trustworthy: first, following the relevant principles of AI ethics; second, making AI solutions able to work with an open and diverse ecosystem; and finally, operationalizing AI solutions in a governed way.

[Figure: the Trustworthy AI circle]

AI Ethics

The subject of AI ethics encompasses the broader philosophical area of how to make AI-based applications, machines, robots, etc. not surpass humans but instead augment human capabilities. However, in practical implementation, trustworthy AI solutions have to primarily focus on four key principles of AI ethics: Transparency, Fairness, Robustness, and Privacy. 

  1. Transparency – An AI solution should provide the essential information that AI consumers need to be aware of, such as confidence measures, levels of procedural regularity, and error analysis. It should also be able to explain how and why it arrived at a particular decision or prediction. This has become increasingly important among business leaders and policymakers, with 68% of business leaders believing that customers will demand more explainability from AI in the next few years. Finally, all this information should be made available in a contextualized and relevant way that end users can understand.
  2. Fairness – A properly calibrated AI solution should not have any bias in its predictions; rather, it should help counter our human biases and promote inclusivity and equitable treatment. Bias occurs when an AI system has been designed, intentionally or not, in a way that may make the system’s output unfair. Such bias can be present both in the algorithm of the AI system and in the data used to train and test it. It can emerge as a result of cultural, social, or institutional expectations; because of technical limitations of the system’s design; or when the system is used in unanticipated contexts or to make decisions about communities that were not considered in the initial design. It is essential that businesses combat bias, as AI is increasingly used to inform high-stakes decisions about people.
  3. Robustness – AI solutions also need to be resilient to misuse, both intentional and unintentional. An AI solution built or used in the wrong way can lead to improper decision-making that impacts business, society, and individuals in adverse ways. Robust AI solutions need to handle exceptional conditions effectively, such as abnormalities in training and scoring data or malicious attacks with wrong inputs, without causing unintentional harm. AI solutions must be built to withstand intentional and unintentional interference by protecting against exposed vulnerabilities – for example, attackers poisoning training data to compromise system security, or an unmonitored AI solution in production predicting outcomes from a scoring request whose feature values would have been considered outliers in the training data.
  4. Privacy – AI systems need to safeguard consumers’ privacy and data rights and provide explicit assurances to users about how their personal data will be used and protected. Also, the insights generated from data (through AI models or any data products) should be available only to the applications that are authorized to use them.
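Two of the principles above, fairness and robustness, lend themselves to simple quantitative checks. The sketch below is illustrative rather than any prescribed method: it computes the disparate impact ratio (the favorable-outcome rate of an unprivileged group divided by that of a privileged group, with values below roughly 0.8 commonly flagged as potential bias) and flags scoring-request features that fall outside the value ranges observed in training. All function names, inputs, and thresholds are hypothetical.

```python
def disparate_impact_ratio(outcomes, groups, privileged):
    """Fairness check: ratio of favorable-outcome rates,
    unprivileged group / privileged group. A common rule of
    thumb flags ratios below 0.8 as potential bias."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    priv_rate = sum(priv) / len(priv)
    unpriv_rate = sum(unpriv) / len(unpriv)
    return unpriv_rate / priv_rate

def out_of_range_features(request, training_ranges):
    """Robustness check: list the features in a scoring request
    whose values fall outside the (min, max) seen in training."""
    return [name for name, value in request.items()
            if name in training_ranges
            and not (training_ranges[name][0] <= value <= training_ranges[name][1])]
```

In practice such checks would run continuously against production traffic, with flagged requests routed to review rather than silently scored.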

More things to consider:

Open and diverse ecosystem – The other dimension of trustworthy AI is the ability to integrate with an open and diverse ecosystem. Achieving trustworthy AI at scale takes more than one company or organization leading the charge. Trustworthy AI relies on an open and inclusive ecosystem – a community of users, contributors, and technology providers; a variety of technology platforms; businesses and institutions across industries, academia, and research; and a culture of diversity, inclusion, and shared responsibility. Only such an ecosystem can deliver the real value of AI to both business and society while following the principles of AI ethics in practice.

Operationalizing AI Solutions with Governance – This involves using data and AI technologies that help organizations develop and productionize AI solutions while complying with the relevant aspects of AI ethics discussed above. It also means ensuring that AI solutions are developed to work with an open and diverse ecosystem (internal and external) and are continuously refined with the latest innovations and approaches. Beyond that, other key aspects to tackle are compliance requirements, continuous monitoring, scalability to support a multitude of models and data products, ease of operations, and manageability.
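The continuous monitoring mentioned above is often implemented as a drift statistic comparing production traffic against the training data. As one illustrative sketch (the metric choice and the 0.2 threshold are common conventions, not a requirement of any particular governance platform), the population stability index (PSI) compares the share of requests per feature bin at training time with the share seen in production:

```python
import math

def population_stability_index(expected, actual):
    """Drift check: PSI between the fraction of records per bin at
    training time (expected) and in production (actual). Values above
    roughly 0.2 are often treated as significant drift worth review."""
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # guard against log(0) / division by zero
        a = max(a, 1e-6)
        psi += (a - e) * math.log(a / e)
    return psi
```

A governed deployment would compute such a statistic per feature on a schedule and raise an alert, or trigger retraining, when the threshold is crossed.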

Disclaimer: All opinions expressed here are my own and not of my employer.

About the author/ODSC West 2021 Speaker on Trustworthy AI:

Sourav Mazumder is an IBM Data Scientist Thought Leader in IBM Expert Labs and a Distinguished Data Scientist of The Open Group. Drawing on his knowledge, insights, experience, and influencing skills, Sourav has consistently driven business innovation and value through methodologies and technologies related to Artificial Intelligence, Data Science, and Big Data, across multiple industries including Manufacturing, Insurance, Telecom, Banking, Media, Health Care, and Retail in the USA, Europe, Australia, Japan, and India.

ODSC Community

The Open Data Science community is passionate and diverse, and we always welcome contributions from data science professionals! All of the articles under this profile are from our community, with individual authors mentioned in the text itself.