A Neuroscience Expert is Calling For a “Neuroshield” Against AI

In a paper from Rice University’s Baker Institute for Public Policy, a neuroscience expert argues that there is an “urgent” need to develop a system of digital self-defense against AI. This comes at a time of growing calls for AI regulation focused on safety.

Harris Eyre, a fellow in brain health at the Baker Institute, said steps are needed to regulate advanced AI and AI-enhanced social media in order to protect people from what he calls AI “hacking” that could harm interpersonal relationships and collective intelligence.

In the paper, Eyre acknowledges the ways AI has bridged gaps around the world, but he also points to serious risks. He writes in part, “Although such technology brings the entire world to our devices and offers ample opportunities for individual and community fulfillment, it can also distort reality and create false illusions.”

But his concerns extend beyond distorted views of reality, moving from micro-level effects on individuals to macro-level effects on society. As he writes, “By spreading dis- and misinformation, social media and AI pose a direct challenge to the functioning of our democracies.”

One of his biggest concerns involves deep fakes. Eyre argues that there is an “urgent” need to design neuroscience-based policies that can help protect citizens from the predatory use of AI: in his view, a “neuroshield.”

He writes, “The way we interpret the reality around us, the way we learn and react, depends on the way our brains are wired… It has been argued that, given the rapid rise of technology, evolution has not been given enough time to develop those regions of the neocortex which are responsible for higher cognitive functions. As a consequence, we are biologically vulnerable and exposed.”

The neuroshield would work through a threefold approach: developing a code of conduct concerning information objectivity, implementing regulatory protections, and creating an educational toolkit for citizens.

Eyre argues that cooperation between publishers, journalists, media leaders, opinion makers, and brain scientists can form a “code of conduct” that supports the objectivity of information. “As neuroscience demonstrates, ambiguity in understanding facts can create ‘alternative truths’ that become strongly encoded in our brains,” he explains.

This week, tech giants such as Google, Microsoft, and OpenAI came together to form the Frontier Model Forum, an industry body focused on advanced AI safety. In China, the government is already at work regulating AI through watermarks on deep fakes and regulatory frameworks that startups and companies must follow.

Editor’s Note: Responsible AI is becoming a critical topic in AI development, and if you want to stay on the frontlines of the latest developments, you need to hear from the industry leaders driving the conversation. You’ll get that at the ODSC West 2023 Machine Learning Safety & Security Track. Save your seat and register today.

ODSC Team

ODSC gathers the attendees, presenters, and companies that are shaping the present and future of data science and AI. ODSC hosts one of the largest gatherings of professional data scientists, with major conferences in the USA, Europe, and Asia.
