Security has historically lagged behind the implementation of new technology. With AI/ML transforming how industries and government agencies do business and serve citizens, it is critical that developers build security into their architectures from the beginning so that we do not repeat mistakes of the past. We can consider both the applications of AI in cybersecurity and the security (or lack thereof) of AI technologies themselves.
[Related Article: Using Blockchain and AI for Data Security]
AI in Cybersecurity
Gartner’s model for cataloging common security tasks includes prediction, prevention, detection, and response. AI/ML can be applied in each of these categories, both to reduce error rates and to increase the volume of data that can be reviewed. One of the earliest applications of AI/ML in cybersecurity was spam email detection, which has evolved from basic string matching to natural language processing (NLP) techniques.
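To make the spam-detection example concrete, here is a minimal Naive Bayes text classifier written from scratch on a toy corpus. This is an illustrative sketch, not a production filter — the training messages and word-level tokenization are assumptions for demonstration, and real systems would use far larger corpora and richer NLP features.

```python
import math
from collections import Counter

def train_nb(messages):
    """Collect per-label word counts and message counts from (text, label) pairs."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals, alpha=1.0):
    """Return the label with the higher Laplace-smoothed log-posterior."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        n_words = sum(counts[label].values())
        # Log prior from class frequencies.
        score = math.log(totals[label] / sum(totals.values()))
        # Add log likelihood of each word with add-alpha smoothing.
        for w in text.lower().split():
            score += math.log((counts[label][w] + alpha) /
                              (n_words + alpha * len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Toy training data (hypothetical messages for illustration only).
training = [
    ("win free money now", "spam"),
    ("claim your free prize now", "spam"),
    ("meeting agenda for tomorrow", "ham"),
    ("lunch at noon tomorrow", "ham"),
]
counts, totals = train_nb(training)
print(classify("free money prize", counts, totals))  # spam
print(classify("agenda for lunch", counts, totals))  # ham
```

The same structure — count features per class, then score new inputs by smoothed log-probabilities — underlies many of the early string-matching spam filters that modern NLP pipelines have since replaced.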
Areas that are currently growing include user behavioral analytics and human resources (HR) analytics. Companies and agencies are interested in detecting anomalous behavior and in using HR data, such as performance and satisfaction metrics, to predict personnel attrition and reduce turnover. Government agencies are actively developing a wide range of prediction models for human behavior, including mobile device trust models for authentication and models of the risk that individuals develop into insider threats. These examples, along with many others, illustrate how AI/ML is transforming the cybersecurity industry.
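A simple way to see what behavioral anomaly detection means in practice is a z-score check against a user's own baseline. The sketch below flags observations that deviate sharply from historical activity; the "daily file-access count" framing and the 3-sigma threshold are assumptions chosen for illustration — real user behavioral analytics systems model many signals jointly.

```python
import statistics

def anomalies(history, observations, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    from the historical mean of this user's activity."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [x for x in observations if abs(x - mean) / stdev > threshold]

# Hypothetical baseline: a user's typical daily file-access counts.
baseline = [42, 38, 45, 40, 39, 44, 41, 43, 37, 40]

# A new week of activity: one day shows a large spike.
week = [41, 39, 350, 42, 40]
print(anomalies(baseline, week))  # [350]
```

Even this crude per-user baseline captures the core idea: "anomalous" is defined relative to an individual's normal behavior, not an absolute rule.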
Cybersecurity in AI
Turning our view inward on AI itself, many recent hacks of AI systems highlight a need to be more thoughtful about countering adversarial attacks and securing data pipelines. Adversarial attacks on AI systems come in many forms, both digital and physical. Data pipelines can also be attacked in many ways, including injecting poisoned data and exploiting knowledge of an ML model. Research into how to mitigate these attacks is ongoing.
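Data poisoning is easy to demonstrate on a toy model. In the sketch below — a deliberately simplistic 1-nearest-neighbour classifier on made-up 2-D "traffic features" — a single mislabeled point injected into the training set flips the model's verdict for an entire neighbourhood. All data and labels here are hypothetical, chosen only to illustrate the attack mechanism.

```python
def predict_1nn(data, point):
    """1-nearest-neighbour: return the label of the closest training point."""
    dist = lambda p: (p[0] - point[0]) ** 2 + (p[1] - point[1]) ** 2
    return min(data, key=lambda item: dist(item[0]))[1]

# Clean training set: "benign" traffic clusters near (1,1),
# "malicious" traffic clusters near (9,9).
clean = [((1, 1), "benign"), ((2, 1), "benign"), ((1, 2), "benign"),
         ((9, 9), "malicious"), ((8, 9), "malicious"), ((9, 8), "malicious")]
print(predict_1nn(clean, (8.5, 8.5)))  # malicious

# Poisoning: the attacker injects one mislabeled point near the
# attack traffic, flipping the model's verdict in that region.
poisoned = clean + [((8.4, 8.6), "benign")]
print(predict_1nn(poisoned, (8.5, 8.5)))  # benign
```

Real poisoning attacks are subtler — they blend malicious points into legitimate-looking data — but the failure mode is the same: a model is only as trustworthy as the pipeline that feeds it, which is why securing data collection and labeling is a security control in its own right.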
At Exponent, my group is leading the advancement of several emerging areas at the intersection of AI and cybersecurity. First, the goal of secure inclusive design (SID) is to use AI/ML to design accessibility features for connected devices so differently-abled people can use new tech without compromising security. Next, we perform quantitative risk and threat assessment for companies and agencies beginning to implement AI/ML tools that haven’t necessarily considered security implications. Finally, we work with clients to modernize their analytics programs no matter what stage of the analytics lifecycle they are in, from improving data collection and quality control, to developing predictive models and understanding the business questions behind them.
To learn more about these topics, please attend my upcoming talk at ODSC West 2019. The goal of this talk is to demystify the application of AI in the security industry. I will address misconceptions and detail common use cases, while attempting to cut through the hype and inflated marketing claims for AI systems. I will walk through coding examples for training predictive models including spam detection and malware classification.
[Related Article: Detecting Cybersecurity Incidents with Machine Learning]
In addition to discussing the benefits to the security industry, I will discuss potential pitfalls and challenges. The end of the talk will flip the thesis to discuss applications of cybersecurity in AI, detailing famous adversarial attacks on AI systems and methods to mitigate such attacks. The target audience is anyone curious about how AI methodologies are applied in cybersecurity; expertise is not required.