There’s a long history of video games and artificial intelligence. While most people know that AI has been trained to beat video games, compete against human players, and even use games as training environments for reinforcement learning algorithms, not many are aware of the hardware side of things. How did the evolution of video game processors lead to what we know as modern AI? Recently, we spoke to Jack McCauley, an engineer best known for his work on the original Oculus VR headset and the Guitar Hero guitar controllers, about the role of video games in the development of artificial intelligence. Here are some key takeaways from the interview, and you can watch the full interview video here.
Video Game GPUs Leading to AI
A GPU, or graphics processing unit, is the chip in a computer or gaming device that renders what you see on screen. The more powerful your GPU, the more complex and realistic the graphics it can render at higher resolutions and frame rates.
Jack McCauley believes that we have GPUs used in video games to thank for modern artificial intelligence. Video game GPUs have become the workhorses for training and running machine learning and deep learning models. Their ability to perform massively parallel computations efficiently makes them ideal for handling large datasets, training complex neural networks, and performing real-time inference. “AI is based on tensor algebra,” Jack says. “GPUs can perform tensor algebra which is required for AI. This processing power can even allow you to run AI in real-time, and we wouldn’t be able to do that without gaming GPUs.”
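As an illustration (not from the interview itself), the “tensor algebra” Jack mentions largely boils down to operations like matrix multiplication, which sit at the heart of neural-network layers. Here’s a minimal pure-Python sketch of that core operation; a GPU’s advantage is that every output cell is an independent calculation, so thousands of them can be computed in parallel rather than one at a time as below:

```python
# Illustrative sketch: the matrix multiply behind neural-network layers.
# A GPU computes each output element in parallel across its many cores;
# here we compute them serially to show the arithmetic involved.

def matmul(a, b):
    """Multiply matrix a (m x k) by matrix b (k x n)."""
    m, k, n = len(a), len(b), len(b[0])
    # Each output cell is an independent dot product -- this independence
    # is exactly what lets a GPU fill all m*n cells simultaneously.
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(n)]
            for i in range(m)]

# A tiny "layer": a 1x2 input vector times a 2x3 weight matrix.
x = [[1.0, 2.0]]
w = [[0.5, -1.0, 2.0],
     [1.5, 0.0, -0.5]]
print(matmul(x, w))  # -> [[3.5, -1.0, 1.0]]
```

In a real model these matrices have thousands of rows and columns, and the same multiply runs millions of times during training, which is why the massively parallel hardware built for rendering game graphics turned out to be such a good fit.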
The Potential for AI in Gaming
On the flip side, fast-forward to 2023 and you can now use AI to improve video game development. Jack McCauley goes on to discuss how AI can expedite much of the game development process; for example, artists can use artificial intelligence to generate landscapes, locations, and so on. This wouldn’t replace the artist; rather, it would give them a baseline to work from, modify, and customize instead of starting from scratch.
Considering how quickly both the fields of AI and video games have progressed, it’s hard to say what’s next. I’m expecting there to be a game of leapfrog between the two, where AI uses gaming hardware to get ahead, then games use AI to progress, leading to stronger GPUs that AI will then adopt, and so on.
There are a few ways that you can learn more about video games and AI together at ODSC West from October 30th to November 2nd. Primarily, you can check out Jack McCauley’s session, “AI and Video Games: The Evolution,” there!
You can also check out our Generative AI track and see how you can use GenAI yourself! Some session titles include:
- Aligning Open-source LLMs Using Reinforcement Learning from Feedback
- Generative AI, Autonomous AI Agents, and AGI – How new Advancements in AI will Improve the Products we Build
- Implementing Gen AI in Practice
- Scope of LLMs and GPT Models in Security Domain
- Prompt Optimization with GPT-4 and Langchain
- Building Generative AI Applications: An LLM Case Study
- Graphs: The Next Frontier of GenAI Explainability
- Stable Diffusion: A New Frontier for Text-to-Image Paradigm
- Generative AI, Autonomous Agents, and Neural Techniques: Pioneering the Next Era of Games, Simulations, and the Metaverse
- Generative AI in Enterprises: Unleashing Potential and Navigating Challenges
- The AI Paradigm Shift: Under the Hood of Large Language Models
- Deploying Trustworthy Generative AI