Artificial intelligence (AI) and machine learning (ML) systems can pilot autonomous vehicles, detect cyber intrusions, and diagnose diseases. They’re widely used to identify locations and features, and to authenticate users via facial recognition, voice, and other biometrics. AI and ML can also be hacked, with disastrous consequences ranging from vehicular crashes, cyber breaches, and stolen identities to missed diagnoses and failures in financial risk management and ISR (intelligence, surveillance, and reconnaissance).
Adversarial AI/ML is the field that investigates hacks on AI/ML and how to mitigate and defend against them. In this article, we focus on hacking AI: the risks, vulnerabilities, threats, and concerns. The second article in our two-part series focuses on securing AI. Peraton Labs has extensive capabilities in adversarial AI/ML, including characterizing the AI/ML attack surface, identifying attack vectors and kill chains, and developing, testing, and validating attacks, defenses, and countermeasures. Contact us at [email protected] to learn more.
AI/ML systems are vulnerable to diverse and rapidly evolving attacks that target the ML pipeline from model design and data sourcing to processes for training, testing, and validation. Consequences can be severe and include:
- Misclassification, degraded performance, reduced confidence, and nonsensical results, which undermine the integrity of the ML results (see the sketch after this list)
- Delay, disruption, and denial of service, which decrease availability of the ML results and cause failures in the systems that rely on them
- Extraction and inference of proprietary and sensitive information about the model or data, which expose valuable confidential information and violate privacy requirements.
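To make the misclassification risk concrete, here is a minimal sketch of an evasion attack using the Fast Gradient Sign Method (FGSM), one of the earliest and simplest adversarial techniques. The toy network, random input, and label are illustrative stand-ins rather than any deployed system; any differentiable classifier is exposed to the same gradient-guided perturbation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier: an untrained toy network over 32x32 single-channel inputs.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 32, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 32, 32)   # a "clean" input, e.g., one sensor image
y = torch.tensor([3])       # its correct label

# FGSM: nudge every pixel in the direction that most increases the loss.
x.requires_grad_(True)
loss = loss_fn(model(x), y)
loss.backward()
epsilon = 0.1               # attacker's perturbation budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

with torch.no_grad():
    print("clean prediction:      ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

Because the perturbation is bounded by epsilon, the adversarial input can look unchanged to a human while the model's prediction flips or its confidence collapses.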
The explosive adoption of ML across defense, communications, health care, intelligence, finance, transportation, and other markets makes adversarial AI a significant and growing threat. In finance, AI and ML systems detect fraud, manage assets, calculate credit scores, and approve loans, so misclassification attacks can wreak havoc on financial risk management. Confidentiality and privacy attacks are a particular threat in health care: a 2020 article in Nature discusses attack vectors on ML systems for medical imaging and notes that re-identified patient records obtained through a successful privacy attack are a lucrative prize for hackers.
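To illustrate the privacy threat, the sketch below shows the confidence-thresholding heuristic at the heart of membership inference, a common first step toward record re-identification. The confidence values here are simulated assumptions; a real attacker would obtain them by querying the target model and would calibrate the threshold with shadow models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overfit models are systematically more confident on their own training
# records than on unseen ones; the attacker exploits that gap.
member_conf = rng.beta(8, 2, size=1000)     # simulated confidences, training records
nonmember_conf = rng.beta(4, 4, size=1000)  # simulated confidences, unseen records

threshold = 0.7                             # attacker-chosen decision threshold
flagged_members = (member_conf > threshold).mean()
flagged_nonmembers = (nonmember_conf > threshold).mean()

print(f"training records flagged as members: {flagged_members:.0%}")
print(f"unseen records wrongly flagged:      {flagged_nonmembers:.0%}")
```

Deciding that a specific patient's record was in a model's training set is itself a privacy breach, even before any further re-identification.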
Image recognition attacks were among the earliest examples of AI hacking and remain an area of concern. Image-based attacks can lead a self-driving vehicle to misrecognize a stop sign as a speed limit sign, accelerate to 85 mph in a 35 mph zone, or swerve into the wrong lane. Misclassification and reduced confidence in analyzing images and other sensor data significantly impact critical activities such as feature identification in ISR, facial recognition in identity management, and target identification in warfare.
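The stop-sign scenario is commonly realized with an adversarial patch: a sticker-sized region optimized to force an attacker-chosen class. The sketch below shows the digital version under the same stand-in model assumption as the earlier FGSM example; the patch location, size, and target class are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Same kind of untrained stand-in classifier as in the earlier sketch.
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 32, 32)                         # the scene as the camera sees it
target = torch.tensor([7])                        # attacker-chosen class, e.g., "speed limit"
patch = torch.zeros(1, 8, 8, requires_grad=True)  # the 8x8 "sticker" to optimize
opt = torch.optim.Adam([patch], lr=0.05)

def apply_patch(scene, sticker):
    patched = scene.clone()
    patched[:, :8, :8] = sticker.clamp(0.0, 1.0)  # paste the sticker in one corner
    return patched

# Optimize only the sticker pixels so the whole image reads as the target class.
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(apply_patch(x, patch)), target)
    loss.backward()
    opt.step()

with torch.no_grad():
    print("prediction with patch:", model(apply_patch(x, patch)).argmax(dim=1).item())
```

Restricting the optimization to a small, contiguous region is what lets such attacks cross into the physical world as printed stickers on real signs.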
AI is critical to modern defense, with a projected market of $11.6B in 2025 and wide deployment across the land, sea, air, space, and cyber domains. A DHS AI Strategy report states: “Adversaries can increasingly use AI-enabled systems to exploit or overcome security measures currently in place at our physical borders including at ports-of-entry, in cyberspace, in election systems, and beyond.” A recent Air Force Magazine article includes this remark: “AI is like any other new weapon system: Getting it is only half the battle. Defending it is just as critical.”
Cybersecurity systems leverage AI/ML for anti-virus, spam filtering, malware detection, and other anomaly detection mechanisms. The ATLAS Case Study database lists several real-world attacks on malware detection, and 2020 saw the first Common Vulnerabilities and Exposures (CVE) entry for an ML component of a commercial system. The CVE describes an attack that evades commercial email protection “with a goal of delivering malicious emails.”
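As publicly described, the attack pattern behind that CVE paired model extraction with evasion: observe the filter's scores, train a local copycat model, and use it to craft emails that slip through. The sketch below illustrates the extraction step against a synthetic black-box scorer; the linear model, query budget, and feature dimension are assumptions for illustration, not details of the actual exploit.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
secret_w = rng.normal(size=20)   # hidden weights of the target's (synthetic) model

def black_box_score(x):
    # Stand-in for the remote service: the attacker sees only a score per query.
    return 1.0 / (1.0 + np.exp(-x @ secret_w))

# 1. Query the service with attacker-controlled inputs and record the verdicts.
queries = rng.normal(size=(2000, 20))
verdicts = (black_box_score(queries) > 0.5).astype(int)

# 2. Fit a local surrogate that mimics the black-box decisions.
surrogate = LogisticRegression(max_iter=1000).fit(queries, verdicts)

# 3. High agreement means the model is effectively stolen; the surrogate can
#    now guide offline crafting of inputs that the real filter will pass.
test = rng.normal(size=(500, 20))
agreement = (surrogate.predict(test) == (black_box_score(test) > 0.5)).mean()
print(f"surrogate matches the black box on {agreement:.0%} of unseen inputs")
```

Once the surrogate agrees closely with the target, the attacker never needs to touch the real service again until the final evasive payload is ready.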
Look for Securing AI, the second article in our two-part series, which focuses on methods for securing ML pipelines and AI systems. Contact us at [email protected].