fbpx
Texas A&M University Joins Consortium to Elevate AI Safety and Reliability Texas A&M University Joins Consortium to Elevate AI Safety and Reliability
Following the University of Notre Dame, Texas A&M has joined the Artificial Intelligence Safety Institute Consortium (AISIC), focusing on AI safety... Texas A&M University Joins Consortium to Elevate AI Safety and Reliability

Following the University of Notre Dame, Texas A&M has joined the Artificial Intelligence Safety Institute Consortium (AISIC), focusing on AI safety and reliability. Spearheaded by the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST), the group is a collection of leading corporations, academic institutions, and federal agencies.

The purpose of this group is to help address the complexities of AI technology and issues related to safety and reliability. Members include Amazon, Apple, Adobe, Intel, Google, Meta, Microsoft, and OpenAI, alongside venerable academic institutions including The Johns Hopkins University, Massachusetts Institute of Technology, and Stanford University.

Notably, the nonprofit Linux Foundation is also part of the group. According to the group, their selection criteria emphasize the capability to contribute to AI’s research and development, aiming to maximize its benefits, minimize risks, and foster innovation.

Dr. Jack G. Baldauf, Vice President for Research at Texas A&M, highlighted the rapid expansion of AI tools and applications, noting the potential societal changes and the need for meticulous study and comprehensive research.

Saying in part, “In terms of tools and applications, artificial intelligence is expanding at an astonishing rate,…AI is likely to change every aspect of our society. The benefits are promising but the risks are daunting. Everything about AI calls for careful study and thorough research. We anticipate making significant contributions to this important body of work.”

In-Person and Virtual Conference

April 23rd to 25th, 2024

Join us for a deep dive into the latest data science and AI trends, tools, and techniques, from LLMs to data analytics and from machine learning to responsible AI.

 

The consortium plans to focus on developing policies, standards, and best practices across five critical areas: risk management, synthetic content detection, evaluation benchmarks, adversarial testing, and stress testing for security-risk AI models.

Dr. Nick Duffield, Director of the Texas A&M Institute of Data Science and holder of the Royce E. Wisenbaker Professorship I. Dr. Duffield, along with a multidisciplinary team from across the university, aims to significantly contribute to the consortium’s goals.

Dr. Duffield said of the team’s hopes, “Our researchers look forward to contributing to the development of best practices to support responsible adoption of AI and underpin confidence in innovative products and services enabled by exciting advances in AI,”.

With the growth of groups such as the Artificial Intelligence Safety Institute Consortium, it’s becoming clear to the movers and shakers within the AI space, that responsibility and safety related to AI products must take center stage.

ODSC Team

ODSC Team

ODSC gathers the attendees, presenters, and companies that are shaping the present and future of data science and AI. ODSC hosts one of the largest gatherings of professional data scientists with major conferences in USA, Europe, and Asia.

1