New EU-wide legislation governing AI aims to place safeguards on its use within the regional bloc, while also protecting the interests of European businesses and of sectors that could benefit from the rapidly scaling technology.
The agreement on the Artificial Intelligence Act came last week on the 8th, after roughly 36 hours of negotiations spread over three days, and it makes the EU the first major power outside of Asia to pass rules governing AI. Among the act's central provisions are limits on the use of live facial recognition and new transparency requirements for developers.
So what is the AI Act exactly? In short, the law takes a first-of-its-kind risk-based approach to regulating AI: it categorizes artificial intelligence systems according to their perceived level of risk and their impact on citizens of member states.
Under the Act, the following uses are banned:
- Biometric categorization systems that use protected or sensitive characteristics (e.g., political, religious, philosophical beliefs, sexual orientation, race).
- Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
- Emotion recognition in the workplace and educational institutions.
- Social scoring based on social behavior or personal characteristics.
- AI systems that manipulate human behavior to circumvent people's free will.
- AI systems that exploit the vulnerabilities of people due to their age, disability, or social or economic situation.
The act is not yet final, however, and is still being edited. As a result, some, if not all, of the banned uses could change, and developers and companies may well see several exceptions and other revisions.
As reported by Reuters, Alexander Duisberg, a partner at the law firm Ashurst, said: “The Council and the European Parliament will then formally resolve and confirm the wording … After that, it will be published in the Official Journal, initiating the sunrise period.”
Currently, the AI Act isn’t expected to become law until sometime in 2025 or 2026. Nations across the globe are also watching the law’s progress through the EU Parliament. The United States, for example, is still in the early stages of developing its own rules for governing AI.
This past summer, tech leaders representing Google, Microsoft, OpenAI, Meta, and others met with both chambers of Congress to discuss regulation and the need to focus on responsible AI.
In Asia, China has already moved forward with its own rules governing AI, and South Korea and Japan are following suit. What each of these governments has in common is an attempt to balance the risks and benefits associated with AI.
This is no surprise, as generative AI is expected to contribute to double-digit GDP growth. And that doesn’t even touch on AI’s effects across industries and markets such as healthcare.