The European Union has unveiled a first-of-its-kind set of regulations aimed at artificial intelligence, the AI Act. Last year, AI captured the public’s imagination as many AI-powered tools became accessible to the average person thanks to their ease of use. Questions surrounding responsible AI and AI ethics have been circulating in the data science community for years. Many predicted that it was only a matter of time before state actors moved forward with some form of regulation and oversight, and the EU is the first to make such a move.
According to the official website of the AI Act, the law will assign AI-powered applications to one of three risk categories, each defined by the level of risk an application poses to citizens: “First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.”
Clearly, the EU wants to get ahead of the technology as concerns about AI and governing power grow. With these three categories, the AI Act attempts to steer clear of overly complex language that could hinder AI advancement or become unenforceable due to a lack of clarity. In short, the aim is to provide an environment for further development while ensuring AI delivers a holistic societal benefit. Meanwhile, AI is growing in popularity with government bodies themselves. For example, city and county governments in the United States have begun to use AI and machine learning programs to help improve governance, though some programs have raised civil rights concerns, particularly AI-powered tools used by law enforcement.
Concerns about AI and the potential for biased outcomes have grown over the years, so it’s no wonder that countries across the globe are either putting together frameworks or at least investigating how they can protect their citizens’ data. One such effort is the United States’ AI Bill of Rights, which gives federal agencies a framework for beginning to assess regulatory needs. Other nations such as China and Brazil also began to regulate aspects of AI in 2022, with China targeting the effects of deepfake technology and requiring creators to watermark deepfake content.
The proposed law is still under review, as it requires agreement on common language between the EU Council and the European Parliament. But it’s clear that governments across the globe are attempting to get ahead of emerging technology instead of waiting to act until an issue arises.
If you’re interested in artificial intelligence — its creation, its management, and the ethics surrounding the technology — ODSC East 2023 is a great opportunity to immerse yourself in the field. Enjoy the best in data science in person or virtually while connecting with thousands of data experts and professionals.