How To Unlock Trust and Success Before You Start an AI Project

Editor’s note: Cal Al-Dhubaib is a speaker for ODSC East this April 23-25. Be sure to check out his talk, “Designing AI Systems for Trust,” there to learn more about how to start an AI project!

As AI becomes integral to business strategy, many organizations are navigating the complex relationship between technical innovation, business value, and risk management. Yet despite this progress, fewer than 10% of organizations have successfully deployed generative AI solutions.

Moreover, trust in AI companies has dropped globally to 53%, down from 61% five years ago. In the U.S., trust has dropped 15 percentage points (from 50% to 35%) over the same period. In many cases, this lack of trust can be tied to challenges with human adoption, alignment with business values, risk management processes, and unexpectedly costly data curation efforts.

One way organizations have found success in building trust and achieving higher rates of project success is by starting every AI project with a discovery and design process.


Top AI Challenges That Businesses Face

AI projects fail for silly reasons surprisingly often. Frequently, the challenges businesses face stem from humans not using the models, or from mismatches between the real world and the lab environment where the model was designed. Here are a few others.

Data and AI literacy

We’re pretty good at shopping for cars, houses, and new clothes (well…depends who you ask), but most people don’t know what to ask for when it comes to AI. Fewer than 25% of the workforce would consider themselves data literate. Because of this, we regularly see users with unrealistic expectations for, and low trust in, AI. Investing in the right AI literacy resources can make a big difference for these projects.

Defining the Human-AI Interaction  

Oftentimes, companies will have the right data, design a decent model, and identify the level of accuracy the model can achieve, but fail to factor in the actual human or group of humans who will be making decisions based on the model (see my story below for more on this).

Quantifying the Impact

The impact of an AI/ML model can be measured in money saved, revenue added, risk avoided, time saved, time to value, and other metrics. But the real key here is that impact, in most cases, is a result of human action. Even if we know where the problems are, our opportunity to influence them is usually only a fraction of their total value. Without knowing the value of the problems you’re trying to solve, it’s difficult to prioritize data science resources accordingly.
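One way to make this concrete is a back-of-the-envelope calculation: the value a project can realistically deliver is the problem's total value discounted by how much of it the model can touch and how often humans act on the model's output. The function and numbers below are illustrative assumptions, not figures from any real engagement:

```python
def addressable_impact(problem_value: float,
                       influence_fraction: float,
                       adoption_rate: float) -> float:
    """Rough estimate of the value an AI project can actually capture.

    problem_value      -- total annual value of the problem (e.g. dollars)
    influence_fraction -- share of that value the model can plausibly affect
    adoption_rate      -- share of model recommendations humans act on
    """
    return problem_value * influence_fraction * adoption_rate

# A $2M problem where the model touches 40% of cases and users act
# on 60% of its recommendations is really a ~$480K opportunity:
impact = addressable_impact(2_000_000, 0.40, 0.60)
print(f"${impact:,.0f}")  # → $480,000
```

The point of the exercise isn't precision; it's that the human factors (the last two inputs) can shrink a headline number by an order of magnitude before any modeling starts.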

Accounting For and Measuring Potential Risk

When it comes to AI, there are many ways to be wrong. Spend some time as a team thinking about the possible unintended consequences and potential harms of your project: What types of errors can be made? What is the cost of these errors? Then, rank them in probability, severity, and frequency.  
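The ranking step above can be as lightweight as a spreadsheet or a few lines of code. Here's a minimal sketch: the error types, probabilities, and severity weights are hypothetical examples, not a standard risk framework.

```python
# Hypothetical error inventory for an AI project; every entry and
# weight here is an assumed example for illustration.
risks = [
    # (error type, probability 0-1, severity 1-5, occurrences/year)
    ("false positive: flag an ineligible case", 0.10, 4, 120),
    ("false negative: miss an eligible case",   0.25, 3, 300),
    ("stale data: model scores outdated input", 0.05, 2, 50),
]

def risk_score(probability: float, severity: int, frequency: int) -> float:
    # Simple expected-harm proxy: how likely and how often the error
    # occurs, scaled by how bad each occurrence is.
    return probability * severity * frequency

# Rank from most to least concerning so the team knows where to
# invest in mitigation first.
ranked = sorted(risks, key=lambda r: risk_score(*r[1:]), reverse=True)
for name, p, s, f in ranked:
    print(f"{risk_score(p, s, f):8.1f}  {name}")
```

A crude score like this won't capture every harm, but forcing the team to write down the error modes and argue about the weights is where most of the value comes from.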

 

What We’ve Learned About AI Discovery

My team started requiring AI discovery and design as an initial service because we found that a lot of clients came to us wanting to build a solution but weren’t really ready. 

The discovery and design process is our way of quickly taking a narrow problem statement, sniffing out the skeletons, and deciding if the project is even worth doing. During discovery, we run through the following steps with key people the model will impact. 

  1. Do a deep dive into decision making. Who’s involved? How do you measure success? 
  2. Try to understand what already exists on the tech side. What data (to nobody’s surprise, all data is a mess) is available? What systems are you plugging into? What do your current security, permissions, and privacy setups look like? 
  3. Go through rapid prototyping. Create the simplest version of a model to get an idea of how much work it will take to meet KPIs. And most importantly, figure out if it’s good enough to help influence the right decisions!
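The rapid-prototyping step can be as simple as comparing a trivial baseline against the easiest real model before committing to a full build. A minimal sketch of that comparison, assuming scikit-learn and using synthetic stand-in data:

```python
# Rapid-prototyping sketch (scikit-learn assumed): how far does the
# simplest real model get past a do-nothing baseline? The dataset is
# synthetic stand-in data, not a real client dataset.
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline: always predict the most common class.
baseline = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)
# Simplest plausible model: plain logistic regression.
prototype = LogisticRegression(max_iter=1_000).fit(X_tr, y_tr)

print(f"baseline accuracy:  {baseline.score(X_te, y_te):.2f}")
print(f"prototype accuracy: {prototype.score(X_te, y_te):.2f}")
# If the prototype barely beats the baseline, the project likely
# won't clear its KPI without better data -- a cheap finding to
# make at the discovery stage rather than mid-build.
```

The gap between the two numbers, measured against the KPI from step 1, is often enough to decide whether the project deserves further investment.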

This process not only ends a lot of ideas sooner, but also pivots mediocre ideas to feasible projects. For the feasible ideas, companies walk away with a project plan that includes clear KPIs, benchmarks, and anticipated roadblocks.  

How Discovery Impacts Trust and Success 

Without a discovery session, and without the right people in the room, it’s nearly impossible to measure the level of trust you’ll receive with the introduction of a new model. 

In one instance, we built a model to identify patients who qualified for a particular government subsidized service based on complex claims data. Our numbers showed that the model would capture 30% more than the existing team’s efforts. But when we rolled it out, the pilot tanked. 


Why? 

We later discovered that employees’ year-end bonuses depended on their close rate—they didn’t want to bet their bonus on a new model they didn’t trust. Once we realized it was an education and trust issue, we spent some time getting the team more comfortable with making model-aided decisions. Numbers improved drastically. 

Our takeaway? If we had had some of these employees in the room during the initial project discovery, we may have been able to identify these motivations and figure out what it would take for them to trust the model.

As AI systems impact more and more workflows, think carefully about how you’re measuring value in terms of humans and their decisions. Those who frame AI projects around humans and their actions (or inactions) will be far more successful at creating value at scale.

Think about how to redirect the effort of individuals whose workflows are affected toward more productive work. In the end, you’ll see better customer satisfaction, higher quality work, and better work-life balance.

Cal Al-Dhubaib

Cal Al-Dhubaib is a globally recognized data scientist and AI strategist in trustworthy artificial intelligence, as well as the Head of AI and Data Science at Further, a data, cloud, and AI company focused on helping make sense of raw data.
