10 Machine Learning Safety Topics for 2022

Organizations increasingly rely on machine learning models both to develop strategic advantages and to power their consumer-facing products. As a result, protecting one’s data and models has become increasingly important. At ODSC East 2022, we will be featuring several training sessions, workshops, and talks on machine learning safety and security to help you and your organization stay at the forefront of this field. Check out a few machine learning safety topics below.

Examining the Role of Artificial Intelligence for Cybersecurity

Sagar Samtani, PhD | Assistant Professor | Indiana University

The past several years have been marked by several well-known ransomware and other malware attacks on governments (city, state, and federal), companies, and individuals, indicating that this is a threat that is going to continue, rather than fade away. Therefore, we need new tools to stay one step ahead. Utilizing AI for cybersecurity offers us some possible solutions. This session will address the current state of AI for cybersecurity and explore some of the possible areas of future development and growth. 

Unsolved ML Safety Problems

Dan Hendrycks | Research Intern | DeepMind

Machine learning systems are growing, not only in size, but also in what they are capable of doing. As a result, it’s becoming increasingly important to ensure that the safety measures in place to protect both the data and the model are advancing just as quickly. Three areas that require particular attention are robustness, monitoring, and alignment. In this session, Dan Hendrycks of DeepMind, the company that made history with its AlphaGo program, will address these key areas of machine learning safety. 

Data Science for Digital Forensics & Incident Response (DFIR)

Jess Garcia | CEO, Security & Forensics Analyst, Incident Responder, Senior Instructor | One eSecurity, SANS Institute

Digital Forensics and Incident Response data, which can include data such as logs, network traffic and artifacts, can also be analyzed using data science tools and methods. This session will introduce you to the fundamentals of DFIR and data science and provide examples of how they can be used together to gain insights and information. 


Explainable AI: Balancing the Triad – Business Needs, Technology Maturity & Governance Regulations

Krishna Sankar | Distinguished Engineer, Artificial Intelligence | U.S. Bank

One of the major issues we’ve had with machine learning and AI models is that historically many have operated essentially in a black box, providing little insight into how the models’ conclusions are reached. To ensure that our models are performing at their best, and not perpetuating existing bias and discrimination, it’s essential that we develop the tools to see into those black boxes. This session will delve into some of the thornier questions regarding explainable AI. 

Analyzing Sensitive Data Using Differential Privacy

Ashwin Machanavajjhala, PhD | Associate Professor, Co-Founder | Duke University, Tumult Labs

As we discover more AI and machine learning applications in fields like healthcare, biotechnology, and pharma, it’s becoming increasingly important that we find and utilize effective means of anonymizing data. One such strategy is differential privacy, which bounds how much any single record can influence a computation’s output, so an observer cannot confidently determine whether a given individual’s data was included in the dataset. This session will introduce you to differential privacy and illustrate how it can be used to gain insights from particularly sensitive data.
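As a rough illustration of the idea (a minimal sketch, not taken from the session materials), the classic Laplace mechanism answers a count query privately: a counting query has sensitivity 1, so adding Laplace noise with scale 1/ε gives ε-differential privacy. The function and variable names below are hypothetical, chosen just for this example.

```python
import numpy as np

def laplace_count(data, predicate, epsilon):
    """Differentially private count via the Laplace mechanism.

    A count query has sensitivity 1 (adding or removing one record
    changes the result by at most 1), so Laplace noise with scale
    1/epsilon yields epsilon-differential privacy.
    """
    true_count = sum(1 for x in data if predicate(x))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical sensitive data: ages of survey respondents.
ages = [34, 29, 41, 52, 38, 27, 45]
print(laplace_count(ages, lambda a: a > 40, epsilon=0.5))
```

Each run returns the true count (here, 3) plus random noise; smaller ε means more noise and stronger privacy. Production systems like the tooling Tumult Labs builds handle far more than this toy, e.g. tracking the total privacy budget across many queries.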

You Too Can Be a Cybersecurity Data Scientist!

John Speed Meyers, PhD | Security Data Scientist | Chainguard

With the ever-looming and growing threats of data breaches and malware, candidates with knowledge of, or expertise in, cybersecurity are sure to be in high demand. In this ODSC East session, you’ll learn about the different specializations in cybersecurity, some of the prominent experts in the field, and possible career paths.  

Balaji Lakshminarayanan, PhD

Staff Research Scientist | Google Brain

Speaker Balaji Lakshminarayanan, PhD, is a research scientist at Google Brain, which started in 2011 as one of Google’s more experimental departments. Since then, Google Brain has had many successes in its research into AI and associated applications, including TensorFlow, encryption systems, and translation services just to name a few.  

Patrick Hall

Co-founder & Senior Data Scientist | bnh.ai

Patrick Hall is a co-founder of and senior data scientist at bnh.ai, which helps organizations improve the fairness and security of their AI models and applications. Bnh.ai offers its services and expertise across a wide range of areas, from Responsible and Trustworthy AI to AI Incident Planning, Response, and Recovery, in industries as diverse as retail and venture capital. Be sure to check out Patrick Hall’s session at the conference.

Adversarial Robustness: How to Make Artificial Intelligence Models Attack-proof!

Serg Masís | Climate Data Scientist | Syngenta

Adversarial robustness, defined as “a model’s ability to resist being fooled,” is another essential aspect of machine learning safety and security. Ignoring it puts your models, organization, and clients at serious risk. In this upcoming session, you will examine an evasion use case, survey other forms of attack, and explore two defense methods: spatial smoothing preprocessing and adversarial training.
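To give a flavor of the first defense (a minimal sketch under assumed inputs, not code from the session), spatial smoothing runs each image through a small median filter before it reaches the model, washing out the high-frequency perturbations that evasion attacks typically add. Here a Gaussian perturbation stands in for a crafted adversarial one:

```python
import numpy as np
from scipy.ndimage import median_filter

def spatial_smoothing(image, window=3):
    """Median-filter height and width of an (H, W, C) image,
    leaving the channel axis untouched, to suppress the
    high-frequency noise adversarial perturbations rely on."""
    return median_filter(image, size=(window, window, 1))

# A smooth "clean" image: a horizontal intensity ramp, 3 channels.
ramp = np.linspace(0.0, 1.0, 32, dtype=np.float32)
clean = np.stack([np.tile(ramp, (32, 1))] * 3, axis=-1)

# Stand-in for an adversarial example: clean image + small noise.
rng = np.random.default_rng(0)
perturbed = np.clip(clean + rng.normal(0, 0.05, clean.shape), 0.0, 1.0)

smoothed = spatial_smoothing(perturbed)
# The smoothed image ends up closer to the clean one than the
# perturbed input was.
print(np.abs(smoothed - clean).mean(), np.abs(perturbed - clean).mean())
```

The appeal of this defense is that it needs no retraining; adversarial training, the second method the session covers, instead hardens the model itself by including attacked examples in the training set.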

Evaluating, Interpreting, and Monitoring Machine Learning Models

Ankur Taly, PhD | Staff Research Scientist | Google

Although we’ve made significant advances in the application of AI and machine learning models, we have not made similar advances in strategies for testing them; instead, we continue to rely on hold-out test sets. In this session, Ankur Taly, PhD, will explore one of these strategies in depth: attribution methods, which are applicable to several different types of deep neural networks.
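As background on what an attribution method computes (a simplified numeric sketch, not the session’s implementation), path-based attributions assign each input feature a share of the model’s output by averaging gradients along a path from a baseline input to the actual input; the attributions then sum, approximately, to the difference between the model’s output at the input and at the baseline. The toy quadratic “model” and helper names below are illustrative only:

```python
import numpy as np

def numeric_grad(f, x, h=1e-5):
    """Central-difference gradient of a scalar function f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def path_attributions(f, x, baseline, steps=200):
    """Attribution_i ~ (x_i - baseline_i) * average gradient df/dx_i
    along the straight line from baseline to x (a Riemann sum)."""
    diff = x - baseline
    total = np.zeros_like(x)
    for k in range(1, steps + 1):
        point = baseline + (k / steps) * diff
        total += numeric_grad(f, point)
    return diff * total / steps

# Toy "model": a scalar score over three features; feature 2 is unused.
model = lambda x: x[0] ** 2 + 2 * x[1] + 0.0 * x[2]
x = np.array([3.0, 1.0, 5.0])
baseline = np.zeros(3)

attr = path_attributions(model, x, baseline)
# Attributions sum approximately to model(x) - model(baseline),
# and the unused feature receives ~zero credit.
print(attr, attr.sum(), model(x) - model(baseline))
```

The “completeness” check in the final line, i.e. attributions accounting for the full change in model output, is one of the properties that makes such methods useful for evaluating and monitoring deployed models.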

Register for ODSC East 2022 Now to Learn More about Machine Learning Safety

The tactics used to infiltrate or steal data are only going to get more sophisticated, making machine learning safety and security an essential skill for every data scientist and AI practitioner. Check out our machine learning safety and security focus area page for all of our confirmed sessions. And be sure to grab your pass before our early bird sale ends Friday.

ODSC Team


ODSC gathers the attendees, presenters, and companies that are shaping the present and future of data science and AI. ODSC hosts one of the largest gatherings of professional data scientists, with major conferences in the USA, Europe, and Asia.
