According to a report from the Wall Street Journal, Apple has blocked an update to an app powered by ChatGPT, the chatbot built on OpenAI's large language models. The app, which is not named in the report, was reportedly blocked over concerns that it could generate inappropriate content for children. As a result, Apple is requesting that apps with AI-language capabilities set an age restriction of 17 and older.
The concern over AI harm is not a new one; it has been discussed and debated for years. AI has the potential to revolutionize many industries and improve our lives in countless ways, but the technology also carries risks. As AI becomes more advanced, there is a danger that it could be used to harm people or cause damage in some way. Private companies are aware of this risk and are attempting to address it. Apple is among them, which is why the unnamed app isn't the first to be blocked.
The same report notes that the week prior, Apple blocked the email app BlueMail over concerns that its new AI feature could also show inappropriate content. Much like the unnamed app blocked today, BlueMail's AI feature uses OpenAI's latest ChatGPT chatbot to help automate emails. According to a message from Apple to a BlueMail developer, "Your app includes AI-generated content but does not appear to include content filtering at this time."
Apple's decision to block the app update suggests that the company is taking concerns about content moderation and AI very seriously. The exact nature of the concerns that led to the block remains unclear, but it is likely that Apple is taking a cautious approach to the use of AI in apps available on its platform.
With the explosion of generative AI in recent months, it's unsurprising that platforms are searching for ways to manage the technology. While generative AI has captured the public's imagination and desire to participate, it has also raised concerns across multiple sectors of society, from education to future employment prospects, as AI becomes more mainstream.
In Apple's case, the tech giant's move to closely monitor AI and its associated risks, particularly where minors are concerned, shows it is wary of the potential abuse of large language models. But this doesn't mean Apple is shying away from AI. As Chief Executive Tim Cook said of AI during the company's quarterly earnings conference call: "[AI] is a major focus of ours… We see an enormous potential in this space to affect virtually everything we do."