

Horror Icon Stephen King Isn’t Scared of AI
AI and Data Science News | Posted by ODSC Team, September 2, 2023

In an op-ed for The Atlantic, horror icon Stephen King opened up about his work being used to train large language models. The author of The Shining, It, The Stand, and many other books compared opposing AI to "trying to stop industrial progress by hammering a steam loom to pieces."
This stands in sharp contrast to other authors, who view the training of LLMs on their work as a form of copyright violation. Stephen King, however, simply isn't bothered by it. In his op-ed he took stock of our current state of technology, saying in part, "We live with self-driving cars, phones that guide us, and saucer-shaped vacuum cleaners."
However, it goes beyond that. The horror author sees what AI generates as falling short of work made by a human. Of this, he said, "AI poems in the style of William Blake or William Carlos Williams (I've seen both) are a lot like movie money: good at first glance, not so good upon close inspection."
In short, King is quite skeptical of AI's ability to capture the creative spark humans have. That said, he also views resisting AI as a fruitless venture, since its popularity will only continue to grow.
Even though he's a bit concerned about AI going the way of Skynet, he sees any ban on training models with his work as pointless. "Would I forbid the teaching (if that is the word) of my stories to computers? Not even if I could."
He went on to say, "I might as well be King Canute, forbidding the tide to come in. Or a Luddite trying to stop industrial progress by hammering a steam loom to pieces." The comparison echoes others who have likened the ascent of AI to the Industrial Revolution in the scale of change it could usher in.
King's nonchalant attitude toward AI training on his work isn't shared by many other creatives. Artists have also been sounding the alarm about AI being trained on their work, which they claim the models then replicate.
The debate surrounding AI-generated content and training data is heating up. The New York Times is reportedly considering a lawsuit against OpenAI, and Google is being sued over how it used data to train its models.