The World Health Organization (WHO) has released a comprehensive publication outlining key regulatory considerations for AI in health. The guidelines emphasize safety, effectiveness, and the need for dialogue among stakeholders, marking a significant step in embracing AI in the healthcare sector.
As we’ve seen in multiple reports over the last couple of years, AI-powered tools have begun to contribute massively to clinical trials, medical diagnosis, treatment, self-care, and person-centered care. This is especially clear when it comes to pattern detection and optimizing human labor.
However, the international health organization has grown concerned about AI's rapid deployment in the absence of a legal framework. Many of these concerns center on AI's performance and potential risks. This is where the WHO's publication comes into play: it seeks to address these concerns and guide the establishment and maintenance of such frameworks.
Dr. Tedros Adhanom Ghebreyesus, WHO Director-General, acknowledges the challenges and promises that come with AI in healthcare. “Artificial intelligence holds great promise for health, but also comes with serious challenges, including unethical data collection, cybersecurity threats, and amplifying biases or misinformation.”
He continued, “This new guidance will support countries to regulate AI effectively, to harness its potential, whether in treating cancer or detecting tuberculosis, while minimizing the risks.” The publication outlines six key areas for regulating AI in health.
These are Transparency and Documentation, Risk Management, External Validation, Data Quality Commitment, Jurisdiction and Consent, and Collaboration. Given the complexity of AI systems, the WHO hopes that better regulation, guided by these principles, can manage risks while helping to create more robust systems.
With the release of this outline, the WHO hopes that governments and regulatory authorities will follow suit and develop new guidance for AI governance at both the regional and national levels.
The push for AI-focused regulation has been a hot topic internationally. In 2023 alone, China was at the forefront of developing new AI regulations aimed at protecting citizen data without hindering the potential benefits of these systems.
Both the EU and the United States are also exploring ways of partnering with tech giants such as Google, OpenAI, Microsoft, and Meta to create regulatory frameworks that protect people without stifling innovation.