Artificial intelligence (AI) holds tremendous promise as a means of improving the efficiency and quality of health care delivery, from enhancing patient outreach and engagement, to managing medical and pharmacy inventory, to identifying patients at the greatest risk of disease progression. The tangible benefits of AI are increasingly being realized each day, and the future possibilities are countless. The more we use AI, the more we learn about how best to harness its potential and mitigate its weaknesses.
One of the greatest strengths of AI models, and one of their greatest limitations, is their heavy reliance on big data. On the one hand, big data enables us to uncover patterns and associations we never would have imagined. Unfortunately, big data also accurately reflects the bias and discrimination that occur in the real world. As a result, AI models may invisibly and unintentionally exacerbate these biases. Failure to identify and address bias in our data can lead to inaccurate results and erroneous conclusions.
No solution can completely eradicate the risk of bias in data, but we can apply human experience and insight to minimize it, and we can be prepared to respond when bias occurs. At Optum, we convened a team of internal and external subject matter experts from a wide breadth of disciplines to develop a strategy to address the limitations and risks posed by AI methods. Critical components of our approach include:
Establish a culture of responsible use. The foundation of this work is to establish a culture of responsible use. We developed a set of corporate guiding principles that serve as a statement of intention: we will use AI to advance our mission to help people live healthier lives and to help make the health care system work better for everyone, and we commit to being thoughtful, transparent and accountable in our development and use of AI models. The analytic teams that create AI models, and the businesses that use them, are accountable to these principles. The principles are broadly available via a company-wide resource site that also catalogs analytic assets, best practices, training materials, and standard processes and tools for advanced analytics work across the organization.
Embed fairness testing in the model development process. We identified an open-source bias detection tool to test for fairness in AI model predictions. We train our analytic and technology teams to use this tool and to conduct fairness testing before deploying models, to avoid introducing new biases, or intensifying existing ones, that can lead to disparate impacts.
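The article does not name the specific open-source tool, so the following is only a minimal sketch of the kind of check such tools perform: comparing positive-prediction rates across protected groups and computing a disparate impact ratio. The function names, the toy data, and the four-fifths threshold are illustrative assumptions, not Optum's actual process.

```python
# Illustrative fairness check on binary model predictions.
# Assumes one binary prediction (0/1) and one protected-group label per record.

def selection_rates(predictions, groups):
    """Positive-prediction rate for each protected group."""
    totals, positives = {}, {}
    for pred, group in zip(predictions, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + pred
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(predictions, groups):
    """Ratio of the lowest group selection rate to the highest.
    A common heuristic (the 'four-fifths rule') flags ratios below 0.8
    for further review before deployment."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())

# Toy example: group "a" is selected at 3/4, group "b" at 1/4.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
ratio = disparate_impact_ratio(preds, groups)  # 0.25 / 0.75 ≈ 0.33 → flag for review
```

In practice, mature fairness toolkits report many such metrics (equalized odds, calibration by group, and others), since no single number captures fairness; this sketch shows only the simplest of them.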
Monitor model performance and use following deployment. Health care is continually evolving as new treatments and technologies are developed, new public health policies and regulations are released, and new research illuminates what affects people's health and wellness. AI models in health care should be regularly assessed after they are deployed to determine whether they need to be retrained. Our analytic teams are expected to define a monitoring plan as part of the model development process. In addition, analysts should take steps to ensure that their models are being used appropriately by end-users. An AI algorithm can inadvertently produce inequities if it's not used as intended. Training and oversight by human experts are paramount to ensuring that models are used properly.
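The article does not specify what such a monitoring plan contains; as one illustrative sketch, a common ingredient is a drift check comparing the distribution of model scores in production against the distribution seen at deployment. The population stability index (PSI) below, with its toy baseline and rule-of-thumb thresholds, is an assumption for illustration, not a description of Optum's monitoring process.

```python
import math

def population_stability_index(expected, actual):
    """PSI between a baseline distribution and a recent one, given as
    matching lists of bucket proportions (each summing to 1).
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift,
    > 0.25 major shift warranting retraining review."""
    eps = 1e-6  # floor to avoid log(0) when a bucket is empty
    psi = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        psi += (a - e) * math.log(a / e)
    return psi

# Toy example: risk-score quartiles at deployment vs. this month.
baseline = [0.25, 0.25, 0.25, 0.25]  # score distribution when the model shipped
recent   = [0.10, 0.20, 0.30, 0.40]  # score distribution in recent production data
psi = population_stability_index(baseline, recent)  # ≈ 0.23, a moderate shift
```

A drift flag like this does not by itself say the model is wrong; it signals that the population the model sees has changed, which is exactly the cue the article describes for deciding whether retraining is needed.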
Develop a diverse workforce. We are intentionally building an inclusive and diverse analytic team that represents all the communities we serve. Our recruitment process promotes this approach, and our employees are provided inclusion and diversity training. We also invest time and resources to support diversity in our future workforce. For example, our organization and the Atlanta University Center Consortium, the oldest and largest consortium of historically Black colleges and universities, have partnered on an initiative to prepare students for careers in data science. A diverse model development team fosters innovation and creativity and brings an array of experiences, helping to shed light on unconscious bias and improving our ability to proactively detect and address sources of bias.
Research root causes of health inequities. At Optum, we are conducting research to better understand the sources of systemic bias in health care delivery. One of our focus areas is race corrections in clinical guidelines. Raising awareness and increasing our knowledge of the root causes of health disparities helps us be more attuned to the risks of bias and better-informed consumers of AI-derived results.
Like all computational methods, AI-derived algorithms have limitations, and these limitations should be proactively assessed, managed, and made transparent to end-users. Responsible use of AI requires a commitment to recognizing the strengths and limitations of health care data sets, the analytic methods used, and the application of results.
To capitalize on the potential of AI and fully reap its benefits in health care, we must be thoughtful and intentional in its application. Health care organizations should have a proactive strategy for the responsible use of AI that acknowledges the limitations of health care data, addresses sources of bias, prevents inappropriate use of models, and promotes fairness and health equity.
To learn more on AI applications in health care, visit optum.com/ai.
About the author:
Margaret (Meg) Good, Ph.D., specializes in health economics, health policy, and survey research methods. She has been with Optum since 2005. In her current role as Vice President of Data Analytics, Dr. Good advises Optum businesses on how to use analytics to achieve strategic objectives for their products and services. She supports the advancement and use of artificial intelligence, machine learning, advanced analytics, and emerging technologies at Optum. Before joining the OEA team, Dr. Good served as the Vice President of Health Economics & Outcomes Research in Optum Life Sciences, a team that conducts observational research studies using administrative claims data, patient and provider surveys, EHR/medical chart data, and other secondary data sources.

Prior to joining Optum, she was a faculty member in the Department of Public Policy at the University of Maryland, Baltimore County, where she taught courses in health policy and research methods. She also worked at the University of Minnesota in a research collaborative funded by the Robert Wood Johnson Foundation to help states expand access to health insurance and health coverage among disadvantaged populations. Dr. Good earned her PhD and MS in health services research and policy at the University of Minnesota and her undergraduate degree at Williams College. She has presented her research at national conferences and has authored or co-authored publications that include articles in the Journal of the American Medical Association, Inquiry, Medical Care Research & Review, and the Journal of Health Politics, Policy and Law.