ML-driven Malware that Targets AV Safety

Saurabh Jha, Shengkun Cui, Subho S. Banerjee, James Cyriac, Timothy Tsai, Zbigniew T. Kalbarczyk, and Ravishankar K. Iyer

DSN 2020



Abstract

Ensuring the safety of autonomous vehicles (AVs) is critical for their mass deployment and public adoption. However, security attacks that violate safety constraints and cause accidents are a major deterrent to achieving public trust in AVs, and they hinder vendors' ability to deploy AVs. Creating a security hazard that results in a serious safety compromise (for example, an accident) is compelling from an attacker's perspective. In this paper, we introduce an attack model, a method to deploy the attack in the form of a smart malware, and an experimental evaluation of its impact on production-grade autonomous driving software. We find that determining the time interval during which to launch the attack is critically important for causing safety hazards (such as collisions) with a high degree of success. For example, the smart malware caused 32× more forced emergency braking than random attacks did, and caused accidents in 52.6% of the driving simulations.

Citation

@INPROCEEDINGS{Jha2020,
  author={S. {Jha} and S. {Cui} and S. S. {Banerjee} and J. {Cyriac} and T. {Tsai} and Z. {Kalbarczyk} and R. K. {Iyer}},
  booktitle={2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)},
  title={ML-Driven Malware that Targets AV Safety},
  year={2020},
  pages={113-124},
}
