As a business leader, you’re likely well aware of the immense potential that AI can offer your organization. However, despite its many benefits, we often see projects fall apart in the last mile, or final stage of development.
The last mile of AI design involves integrating the model into real-world applications and ensuring its effective and reliable performance. Many organizations experience challenges like performance limitations, a lack of transparency, ethical concerns, and unaddressed biases.
All of these issues originate from a lack of trust in AI.
To overcome these challenges, both the builders and users of AI play an essential role in cultivating trust and creating an environment of transparency and accountability. Continue reading to learn how.
Who Are AI Builders, AI Users, and Other Key Players?
AI builders are the data scientists, data engineers, and developers who design AI models. The goals and priorities of responsible AI builders are to design trustworthy, explainable, and human-centered AI. Data scientists, data engineers, and developers tend to be highly technical individuals who understand the nuances of AI design well, but aren’t always able to communicate them back in business terms.
AI users are the individuals making decisions influenced by AI models. While their goals may vary by industry, company, and use case, most AI users are looking to positively augment areas of their business with AI. Users are often experts in their industry, but not overly familiar with builders’ processes and technical jargon.
AI Risk Managers
A third group involved in the design of responsible AI includes ethics professionals, like lawyers, risk managers, and cybersecurity experts who safeguard the reputation and safety of their organizations. While these individuals aren’t directly builders or users, they play a critical role in the adoption and implementation of trustworthy AI.
How Does Trust Enable a Responsible AI Culture?
Responsible AI involves designing, developing, and deploying AI technology with positive intentions that benefit both employees and businesses, while also considering the fair impact on customers and society as a whole.
Trust is a critical component of building a responsible AI culture in organizations—yet many find AI hard to trust.
As models increase in data and goal complexity, they become more difficult to understand and explain. Spreadsheets can be easily queried; videos are not as simple, making data difficult to benchmark and ground truth harder to determine.
Ground truth is how we evaluate models before deploying them. It’s easy to objectively validate whether a model is correctly classifying items, but much harder to define what is correct in the context of generative AI. What constitutes a good summary when there are multiple correct possibilities?
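The contrast above can be sketched in a few lines. This is an illustrative toy example, not from the article: the labels, summaries, and data are all hypothetical, but they show why a classifier has an objective ground truth while a generative model does not.

```python
# Classification: each item has exactly one correct label,
# so accuracy against ground truth is objective.
predictions = ["approve", "deny", "approve", "deny"]
labels      = ["approve", "deny", "deny",    "deny"]
accuracy = sum(p == l for p, l in zip(predictions, labels)) / len(labels)
print(f"classification accuracy: {accuracy:.2f}")  # 0.75

# Generative AI: several different summaries can all be "correct",
# so a single reference string no longer defines ground truth.
acceptable_summaries = {
    "Revenue rose 10% on strong cloud sales.",
    "Cloud growth drove a 10% revenue increase.",
}
model_summary = "Cloud growth drove a 10% revenue increase."
# Membership in a hand-built set is only a proxy; the set of
# acceptable outputs can never be enumerated exhaustively.
print(model_summary in acceptable_summaries)
```

The asymmetry is the point: the first check is a complete evaluation, while the second is only as good as the (necessarily incomplete) set of acceptable answers someone wrote down.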
In one example, we worked with a health tech client to identify patients for insurance coverage. We found that the model was successful at identifying borderline cases, ones that the team probably wouldn’t have considered. Initially, the team rejected the model because they didn’t trust it to make accurate decisions. However, after educating the team on how and why the model arrived at its decisions, we quickly gained buy-in and drove revenue growth, without ever changing the original model.
When people trust AI, they are more likely to use it and rely on its outputs, which can help drive innovation, efficiency, and productivity. Trust in AI can also help organizations avoid negative outcomes, such as biases, errors, and unintended consequences.
By cultivating a culture of trust and responsibility around AI, both builders and users understand the capabilities and risks of AI, establish ways to embed it in everyday workflows, and support one another with the tools, guidelines, and education required for success.
How AI Builders Can Create Trust
When cultivating trust between AI, end users, and other key stakeholders, AI builders must be strategists, partners, consultants, and humans first. Builders function as the necessary bridge between the technical side of AI design and the practical applications that industries are looking for. Because of this, creating trust should be their top priority. Here are a few ways trust can be achieved.
- Facilitate productive conversations in business terms. KPIs, business goals, and other important strategic conversations should be communicated in a way that’s understood by everyone involved. Work with stakeholders to translate traditional ML metrics, like accuracy and drift, into business-relevant KPIs, like performance targets met and risk reduced.
- Identify and select the right use case for AI. Narrowing a specific use case ensures that realistic expectations are set and relevant goals are more likely to be achieved. It’s also important that builders seek to understand how the model’s predictions will be used. Be sure that the output or predictions generated are actually useful to the end user.
- Check for model cards. Model cards accompany models and include key information like intended uses for the model, how the model works, and how the model performs in different situations. They should be an AI builder’s first reference when choosing foundational models. Not only do they provide information about an AI model’s performance, potential biases, and limitations, but they’re also designed to increase transparency and accountability.
- Focus efforts on stress testing. As models advance in complexity, stress testing becomes increasingly important. Just like aerospace engineers test plane wings under extreme circumstances, AI builders must spend time designing the right stress tests or scenarios to understand where AI models might fail, then clearly communicate these potential risks to stakeholders.
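The stress-testing practice above can be sketched concretely. This is a minimal illustration under stated assumptions: the `credit_model` scoring rule, its threshold, and the input ranges are all hypothetical stand-ins, not a real model or the article's client work.

```python
import random

def credit_model(income, debt):
    """Hypothetical stand-in for a deployed model: approve (1)
    when disposable margin clears a fixed threshold."""
    return 1 if income - debt > 20_000 else 0

def stress_test(model, n_cases=1000, seed=42):
    """Probe the model across a wide range of inputs and measure
    how often its decision flips under a tiny input perturbation,
    a simple proxy for instability near the decision boundary."""
    rng = random.Random(seed)
    flips = 0
    for _ in range(n_cases):
        income = rng.uniform(0, 200_000)
        debt = rng.uniform(0, 200_000)
        base = model(income, debt)
        perturbed = model(income + 1, debt)  # $1 change in income
        if base != perturbed:
            flips += 1
    return flips / n_cases

flip_rate = stress_test(credit_model)
print(f"decision flip rate under perturbation: {flip_rate:.3f}")
```

A real test suite would go further, covering extreme values, missing fields, and adversarial cases, but even a flip-rate probe like this gives builders a concrete failure statistic to communicate to stakeholders instead of a vague assurance.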
How AI Users Can Create Trust
The onus to build trust is not solely on your data scientists, developers, and data engineers. It takes a concerted effort to successfully develop and deploy AI that everyone trusts. Here are a few priorities that leaders and employees should be considering.
- Improve the team’s AI literacy. AI literacy involves having a basic understanding of AI technologies and how they work. It also includes understanding the potential impact of AI on society, industries, and jobs. Oftentimes, users have unrealistic expectations for, and low trust in, AI because they do not fully understand its capabilities. Andrew Ng’s prompt engineering course and Cassie Kozyrkov’s Making Friends with Machine Learning series are two great resources for improving AI literacy.
- Understand and accept that AI is iterative. Over 80% of AI projects fail to make it to production because of factors like incomplete or discriminatory data, human error, complex real-world contexts, and unrealistic expectations. Knowing this and setting realistic expectations and goals from the start can help prevent frustration if or when failure occurs.
- Overcome the barriers to accepting AI. Some users fear that AI may replace, rather than augment or improve, their jobs. However, studies show that humans using AI to aid decision-making can be more successful than either operating alone. When workflows are reimagined so that AI handles routine tasks and specialists focus on more complex cases, users ultimately experience more joy and fulfillment.
It takes a combined effort from AI builders and users to create trust within companies. Building and nurturing trust must be a priority for company leaders looking to build AI now and into the future. Now is the time to have conversations around trust among your teams.
About the Author: Cal Al-Dhubaib is a globally recognized data scientist and AI strategist in trustworthy artificial intelligence, as well as the founder and CEO of Pandata, a Cleveland-based AI consultancy, design, and development firm.