Back in late February, Michael Atleson, an attorney in the FTC Division of Advertising Practices, warned businesses in a blog post to be cautious about making exaggerated or false claims regarding their AI products. Now it seems the agency will continue to move swiftly as AI advances: on Monday, according to Bloomberg, Chair Lina Khan said the agency will pay close attention to developments in artificial intelligence to ensure the field isn't dominated by the major tech platforms.
Though the AI market is showing clear signs of being a dynamic marketplace, Khan stressed the need for vigilance to ensure it is not dominated by a few large players. She said in part, "As you have machine learning that depends on huge amounts of data and also a huge amount of storage, we need to be very vigilant to make sure that this is not just another site for big companies to become bigger."
The Federal Trade Commission is the chief enforcer of competition and consumer protection laws in the United States, which is why Atleson's late-February post made clear that exaggerated or false claims about AI-powered products could create legal trouble for companies. This month, the FTC also launched an inquiry into cloud computing, seeking information on data security and competition in the industry. Amazon.com Inc., Alphabet Inc.'s Google, and Microsoft Corp. are among the biggest cloud providers.
Echoing Atleson's write-up, Khan said companies offering AI-powered tools and services must make sure they are not "overselling or overstating" what their products can do for customers. With the explosion of generative AI and user-friendly AI-powered applications, millions of users are turning to the technology for a variety of purposes. The federal government's attention to AI is part of the White House's AI Bill of Rights blueprint, which asked agencies to begin exploring regulatory guidance for AI to protect individual data and more.
On the subject of exaggerated claims about AI tools and services, Khan stated, "Sometimes we see claims that are not fully vetted or not really reflecting how these technologies work…Developers of these tools can potentially be liable if technologies they are creating are effectively designed to deceive." Calls for responsible AI and legal AI frameworks are growing across the globe. The EU's AI Act, for example, is that bloc's first step toward regulating AI, and other nations are exploring AI regulations as well.