In recent years, artificial intelligence has become a buzzword in the tech industry, with companies across various sectors touting the benefits of AI-enabled products. However, Michael Atleson, an attorney in the Federal Trade Commission's Division of Advertising Practices, warns businesses to be cautious about making exaggerated or false claims for their AI products.
In a blog post from late February, he writes that exaggerated or false claims that mislead consumers could violate advertising laws. This, in turn, could have serious consequences as AI is rapidly deployed across multiple industries at once.
At the beginning of the post, Atleson asks, “And what exactly is ‘artificial intelligence’?” He notes that AI is an ambiguous term with various definitions, but it generally refers to technology that uses computation to perform tasks such as predictions, decisions, or recommendations. The FTC is concerned that some businesses may overhype the capabilities of their AI products, making unsupported claims that could deceive consumers.
Atleson points out several questions that the FTC may ask when evaluating AI claims, including whether the claims exaggerate what the product can do, whether the product actually uses AI, and whether the advertiser is aware of the risks associated with the product. He emphasizes that marketers must have scientific evidence to back up any claims about the efficacy of their products and must be transparent about the limitations and potential risks.
He says in part, “Your performance claims would be deceptive if they lack scientific support or if they apply only to certain types of users or under certain conditions.” The condition is clear: claims about AI products must, like conclusions from scientific studies, be supported by evidence and hold up under replication rather than only under narrow, favorable circumstances.
The FTC has previously issued guidance on the ethical use of AI in advertising, highlighting the importance of fairness and equity. Atleson reiterates that businesses must not use automated tools with biased or discriminatory impacts. Furthermore, he emphasizes that businesses must be accountable for the reasonably foreseeable risks and impacts of their AI products and cannot shift responsibility onto third-party developers.
This is a clear indication that, following the White House's introduction of the AI Bill of Rights, federal agencies are eyeing not only what the technology can do but also the claims companies are making about it. In Congress, there is also chatter among some members of the U.S. House of Representatives about the need for the legislative body to get ahead of AI so the mistakes of social media aren't repeated.
Overall, how the data science community engages with responsible AI, and the ethics behind the technology, will likely shape how AI is viewed and used, and how society copes with it, in the near future.