The rapid development of deep learning (DL) enables even resource-constrained devices to tackle complex artificial intelligence (AI) tasks, especially those related to environment perception in autonomous driving systems (ADS). However, AI models deployed in the real world are exposed to the threat of adversarial examples (AEs). One specific type of physical attack uses laser beams or spots projected onto images, rather than crafted pixel-level perturbations, to manipulate the predictions of the victim deep neural network (DNN).
These attacks easily mislead traffic sign recognition and object detection in ADS. Laser-based adversarial attacks are cognitively stealthy but visually conspicuous, invalidating previous defenses designed for digital attacks. This study considers two state-of-the-art (SOTA) laser-based attacks and establishes a benchmark comprising thousands of AEs.
Such AEs exhibit distinct pattern features: they occupy a significant image area and show high contrast and low intensity variance. Based on these observations, a lightweight detection framework, Laser Guard, is proposed. Specifically, preprocessing methods are used to approximate the laser-perturbed areas, followed by a statistics-based strategy to determine abnormalities in the given samples.
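The statistics-based strategy described above could be sketched roughly as follows. This is an illustrative approximation, not the paper's actual implementation: the function name, thresholds, and the use of simple brightness masking plus a variance test are all assumptions chosen to mirror the stated cues (high contrast, significant occupation, low variance).

```python
import numpy as np

def detect_laser_region(img, brightness_thresh=0.85,
                        area_frac_thresh=0.02, var_thresh=0.01):
    """Flag an image as a suspected laser AE using simple statistics.

    img: HxW grayscale array with values in [0, 1].
    All threshold values are illustrative placeholders, not the paper's.
    """
    # Approximate the perturbed area: laser spots/beams are much
    # brighter than their surroundings (high contrast).
    mask = img > brightness_thresh
    area_frac = mask.mean()
    if area_frac < area_frac_thresh:
        return False  # bright region too small to be a laser artifact
    # Laser-perturbed regions are nearly uniform (low variance).
    region_var = img[mask].var()
    return region_var < var_thresh

# Synthetic example: a dark scene with a bright, uniform "laser spot".
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 0.4, size=(64, 64))
scene[20:32, 20:32] = 0.95  # uniform bright patch, ~3.5% of the image
print(detect_laser_region(scene))  # -> True
```

A clean image with no saturated region fails the area check and is passed through, which reflects the plug-and-play filtering role the framework plays in front of the DNN.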
This framework can be applied in a plug-and-play manner with DNNs in intelligent vehicles. Extensive experimental results show that the framework effectively filters out about 70-75% of laser-based street sign AEs, and it extends well to other objects, successfully filtering out 80% of their AEs. The detection latency is marginal, with an average detection time of approximately 24 ms for laser spots and around 57 ms for laser beams.