Machine learning systems are susceptible to a variety of cyber threats that can compromise their integrity and functionality. Understanding these threats is crucial for developing robust defense mechanisms. They fall into two main categories.
The first category is conventional malware: ML datasets, pre-trained models, or underlying ML libraries (e.g., TensorFlow, PyTorch, scikit-learn) are used as carriers for malicious code that spreads across networks. Common forms include:
- Viruses
- Worms
- Trojan horses
- Spyware
- Adware
- Botnets
- Ransomware
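A basic defense against this category is supply-chain integrity checking: refuse to load any dataset or model artifact whose cryptographic hash does not match a value pinned in advance. The sketch below is illustrative, not tied to any particular ML framework; the function names and the idea of a pinned hash list are assumptions for the example.

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks so large
    model files do not have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Return True only if the artifact matches its pinned hash.
    A tampered or trojanized file fails this check and should not be loaded."""
    return sha256_of_file(path) == expected_sha256
```

In practice the expected hashes would come from a trusted source (e.g., the model publisher's signed release notes), and loading would be aborted on a mismatch rather than merely flagged.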
The second category is adversarial attacks that manipulate ML datasets, models, or systems with the goal of compromising the confidentiality, integrity, or availability of the business use case. These include:
- Model Evasion
- Model Extraction / Theft
- Model Backdoor
- Model Denial of Service
- Data Poisoning / Data Backdoor
- Data Theft / Model Inversion
- Generative AI: Prompt / Code Injection / Jailbreaking
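To make the first item concrete, model evasion perturbs an input just enough to flip a model's prediction. The toy sketch below applies a fast-gradient-sign-style perturbation to a hypothetical linear classifier; the weights, input, and perturbation budget are all made up for illustration and do not come from any real system.

```python
import numpy as np

# Hypothetical linear classifier: score(x) = w.x + b, class 1 if score > 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x: np.ndarray) -> int:
    """Return the predicted class (0 or 1) for feature vector x."""
    return int(w @ x + b > 0)

def fgsm_evasion(x: np.ndarray, eps: float) -> np.ndarray:
    """Nudge every feature by eps in the direction that lowers (or raises)
    the score, pushing the prediction toward the opposite class. For a
    linear model the score gradient is just w, so its sign gives the
    worst-case per-feature direction."""
    direction = -np.sign(w) if predict(x) == 1 else np.sign(w)
    return x + eps * direction
```

For example, the input `[1.0, 0.0, 0.0]` is classified as 1 (score 1.1), but after `fgsm_evasion` with `eps=0.4` the score drops to -0.3 and the prediction flips to 0, even though no feature moved by more than 0.4. Real evasion attacks apply the same idea to deep networks via their gradients.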
These cyber threats underscore the importance of incorporating security measures at every stage of the machine learning lifecycle to ensure the reliability and trustworthiness of ML systems.
For a deeper treatment, explore the taxonomy of machine learning cyberattacks targeting training data, models, and inference processes, including the vulnerabilities and security implications of adversarial manipulation at each stage. Available at ML Taxonomy of Cyberattacks.