As reported last week, the EU is in the process of putting together some of the widest-ranging rules yet to govern AI. If passed, the EU would become the second major region, after China, to enact legislation specifically for AI. So, what is going on with this law? Well, according to The E.U. Artificial Intelligence Act's official website, it creates "three risk categories."
First, it would outright ban the creation of AI for China-like social scoring and facial recognition in public. For privacy advocates who worry about the Western adoption of similar tactics, this is an important aspect of the law. Second, it would tighten laws around AI software used in the hiring process. Though not outright banning such tools, the act states that they "are subject to specific legal requirements."
In a final bullet point, the law states that "applications not explicitly banned or listed as high-risk are largely left unregulated." In a statement to Time, Amba Kak, the executive director of the AI Now Institute, said of the act, "The E.U. AI Act is definitely going to set the regulatory tone around: what does an omnibus regulation of AI look like?"
Though the three points may not seem all that impactful, there is more, and it has to do with "general purpose AI." Think ChatGPT. As reported earlier this month, Italy banned the popular chatbot over OpenAI's lack of age verification and over privacy concerns. Though some within the Italian government thought the ban was applied too quickly, it's no secret that other EU states are closely observing what happens next.
With that said, the debate centers on whether general-purpose AI should be considered high-risk. If so, it would fall under the strictest regulatory rules, which also carry the steepest penalties for misuse. And this is where major tech companies come in.
Major tech companies and conservative members of the bloc's legislature are concerned that labeling AI such as ChatGPT high-risk would stifle innovation, and that the EU could then risk falling behind in AI technology. On the other hand, privacy-minded politicians and other technology leaders fear exempting powerful companies from the rules to any degree.
In their view, it would be as if the bloc passed regulations for automobiles while exempting major auto manufacturers. This has become such a worry that the AI Now Institute wrote an open letter claiming that "General Purpose AI Poses Serious Risks." The group argues in four points that general-purpose AI carries a large enough risk of harm that regulation must include it.
Meredith Whittaker, the president of the Signal Foundation and a signatory of the letter, stated, “Considering [general purpose AI] as not high-risk would exempt the companies at the heart of the AI industry, who make extraordinarily important choices about how these models are shaped, how they’ll work, who they’ll work for, during the development and calibration process.”
On the other side of the argument, a letter from an industry group, with Microsoft as a co-signer, stated of the proposed act: "It is … not possible for providers of GPAI software to exhaustively guess and anticipate the AI solutions that will be built based on their software."
So far, The E.U. AI Act is still being considered, so it's unclear where it will go. On another note, the bloc is also contending with proposed privacy legislation whose vagueness and possible negative effects on the open-source community have groups such as The Python Software Foundation worried.
Though the fates of both proposals aren't decided yet, it's clear that the EU doesn't want to miss the opportunity to regulate AI as it rapidly scales globally.