This investigation aims to address a critical knowledge gap in autonomous driving security by conducting a systematic comparative analysis of machine learning algorithms versus traditional computer vision techniques in real-time detection of physical adversarial objects.
Current literature documents high vulnerability rates but lacks quantitative processing-time measurements and rigorous comparative analyses between methodological approaches. This leaves a significant empirical gap: it remains unclear which detection paradigm provides optimal real-time protection against physical attacks on autonomous vehicle perception systems.
Autonomous driving technologies represent a transformative innovation with significant societal implications, yet their security vulnerabilities threaten both widespread adoption and public safety. Recent literature (Cao et al., 2021; Wang et al., 2023) demonstrates that physical adversarial attacks achieve success rates of 80-100% against machine learning detection systems, potentially inducing catastrophic failures in autonomous vehicles. Cao et al. (2021) documented a 100% simulated collision rate using adversarial traffic cones, while Eykholt et al. (2018) demonstrated 84.8% successful attacks in controlled field tests.
The scientific significance of this research derives from the absence of rigorous comparative analyses between contemporary machine learning techniques and traditional computer vision methods in this domain. Furthermore, no studies in the extant literature quantitatively measure processing latency or real-time performance metrics, variables that are critical for practical implementation in autonomous systems where millisecond-level decision-making is essential. This investigation will employ both information-theoretic frameworks and computational perception paradigms to evaluate performance trade-offs between these approaches.
This research has implications beyond theoretical advancement—autonomous vehicle manufacturers require evidence-based guidance for detection system implementation, while regulatory bodies need empirical foundations for establishing safety standards. By addressing this research gap, we will significantly advance the theoretical understanding of adversarial resilience in autonomous perception systems operating under real-world constraints.
This investigation will employ a systematic multi-phase methodology:
Algorithm Selection and Implementation (Months 1-2): Selection and implementation of representative algorithms from both paradigms (a loading sketch for one representative of each follows the list):
• Machine Learning: YOLOv4/v5/v7, Faster R-CNN, and EfficientDet variants
• Traditional Computer Vision: Hough transform derivatives, HOG feature extraction with SVM classification, and SIFT/SURF descriptor-based methods
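As a feasibility illustration, the minimal sketch below instantiates one representative from each paradigm. It assumes PyTorch Hub access to the ultralytics/yolov5 repository and uses OpenCV's stock HOG pedestrian detector as a stand-in for the trained HOG+SVM classifiers; the study's final models will be trained and configured for the adversarial-object task.

```python
import cv2
import torch

# Machine learning paradigm: pretrained YOLOv5s loaded via PyTorch Hub.
yolo = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
yolo.eval()

# Traditional CV paradigm: HOG features with a linear SVM; OpenCV's
# default people detector serves as a convenient pretrained baseline.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_ml(frame_bgr):
    """YOLOv5 inference; returns an Nx6 tensor (x1, y1, x2, y2, conf, cls)."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)  # model expects RGB input
    return yolo(rgb).xyxy[0]

def detect_cv(frame_bgr):
    """HOG+SVM sliding-window detection; returns (x, y, w, h) boxes."""
    boxes, _weights = hog.detectMultiScale(frame_bgr, winStride=(8, 8))
    return boxes
```

Exposing both paradigms behind the same call signature lets the later measurement harness treat them interchangeably.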
Testing Environment Development (Months 2-3): Establishment of a standardized experimental platform utilizing the following components (a simulator-harness sketch follows the list):
• CARLA simulator for controlled virtual experimentation
• Laboratory testbed with scale model vehicles and calibrated camera arrays
• Controlled real-world testing environment for ecological validation
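A minimal sketch of the CARLA harness follows, assuming a locally running simulator on the default port and stock blueprint names; the study's camera calibration, resolution, and spawn configuration will be scenario-specific.

```python
import queue
import carla

# Connect to a locally running CARLA server (default host/port assumed).
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()
blueprints = world.get_blueprint_library()

# Spawn an ego vehicle at a predefined spawn point.
vehicle_bp = blueprints.filter("vehicle.*")[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)

# Attach a front-facing RGB camera; resolution here is illustrative.
camera_bp = blueprints.find("sensor.camera.rgb")
camera_bp.set_attribute("image_size_x", "1280")
camera_bp.set_attribute("image_size_y", "720")
camera_tf = carla.Transform(carla.Location(x=1.5, z=2.4))
camera = world.spawn_actor(camera_bp, camera_tf, attach_to=vehicle)

# Queue frames so each can be timestamped and fed to both detectors.
frames = queue.Queue()
camera.listen(frames.put)
```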
Adversarial Object Generation (Months 3-4): Development of physical adversarial objects following methodologies documented in the literature (a patch-optimization sketch follows the list):
• Traffic signage with optimized adversarial perturbations
• Roadway obstacles with adversarial camouflage patterns
• Transferable adversarial patches applicable to infrastructure elements
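The sketch below illustrates the general optimization loop underlying such objects, in the spirit of Eykholt et al. (2018): gradient descent on a patch texture to suppress detector confidence, with the patch composited into scenes under random transforms for physical robustness. Here model, scene_batch, and apply_patch are placeholders for the study's detector, background imagery, and compositing routine, and the loss is schematic; the actual objectives will also incorporate printability constraints.

```python
import torch

def optimize_patch(model, scene_batch, apply_patch, steps=500, lr=0.01):
    """Schematic PGD/Adam-style adversarial patch optimization (evasion)."""
    # Start from a random 64x64 patch; values constrained to printable [0, 1].
    patch = torch.rand(3, 64, 64, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        # Paste the patch into scenes under random pose/lighting transforms.
        composited = apply_patch(scene_batch, patch)
        # Detector confidence for the target class; minimizing it evades detection.
        loss = model(composited).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)  # keep colors physically printable
    return patch.detach()
```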
Comparative Performance Analysis (Months 4-8): Execution of comprehensive experimental protocols measuring the following (a measurement sketch follows the list):
• Detection accuracy against adversarial objects (precision, recall, F1-score)
• Frame-wise processing latency (milliseconds)
• Computational resource consumption (CPU/GPU utilization, memory footprint)
• Performance degradation under variable environmental conditions
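To ground these metrics, the sketch below shows the intended measurement harness: per-frame wall-clock latency with warm-up runs excluded, and precision/recall/F1 computed from matched detection counts. The detect_fn and frames names are placeholders, and IoU-based matching against ground truth is abstracted away; for GPU-backed models, device synchronization before reading the clock is assumed.

```python
import statistics
import time

def profile_detector(detect_fn, frames, warmup=10):
    """Per-frame wall-clock latency in milliseconds, excluding warm-up runs."""
    for frame in frames[:warmup]:
        detect_fn(frame)  # absorb model loading, JIT, and cache effects
    latencies = []
    for frame in frames:
        t0 = time.perf_counter()
        detect_fn(frame)  # for GPU models, synchronize the device before timing
        latencies.append((time.perf_counter() - t0) * 1e3)
    return {
        "mean_ms": statistics.mean(latencies),
        "p99_ms": statistics.quantiles(latencies, n=100)[98],  # tail latency
    }

def prf1(tp, fp, fn):
    """Precision, recall, and F1 from matched detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```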
Statistical Analysis and Documentation (Months 8-10): Application of statistical methods to identify significant performance differentials between approaches, with particular emphasis on the accuracy/latency Pareto frontier.
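As one concrete instance of the planned analysis, a minimal sketch of the Pareto-frontier computation follows: a detector configuration lies on the accuracy/latency frontier if no other configuration is at least as accurate and at least as fast, with a strict improvement on one axis. The (name, f1, latency_ms) tuple format is an assumption for illustration.

```python
def pareto_frontier(results):
    """Keep configurations not dominated on the (accuracy, latency) plane.

    `results` is a list of (name, f1, latency_ms) tuples; a configuration
    is dominated if another is at least as accurate and at least as fast,
    and strictly better on at least one axis.
    """
    frontier = [
        (name, f1, ms)
        for name, f1, ms in results
        if not any(
            of1 >= f1 and oms <= ms and (of1 > f1 or oms < ms)
            for _, of1, oms in results
        )
    ]
    return sorted(frontier, key=lambda r: r[2])  # order by latency for plotting
```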
This methodological approach directly addresses the identified research gap by quantitatively measuring real-time performance metrics across detection paradigms—data entirely absent from existing literature.
This investigation will produce several empirical contributions addressing the current knowledge gap, chief among them quantitative, directly comparable measurements of detection accuracy and processing latency across both paradigms.

References
1. Cao, Y., Wang, N., Xiao, C., Yang, D., Fang, J., Yang, R., Chen, Q. A., Liu, M., & Li, B. (2021). Invisible for Both Camera and LiDAR: Security of Multi-Sensor Fusion Based Perception in Autonomous Driving Under Physical-World Attacks. IEEE Symposium on Security and Privacy.
2. Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., Prakash, A., Kohno, T., & Song, D. (2018). Robust Physical-World Attacks on Deep Learning Visual Classification. IEEE/CVF Conference on Computer Vision and Pattern Recognition.
3. Rossolini, G., Nesti, F., D’Amico, G., Nair, S., Biondi, A., & Buttazzo, G. (2022). On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving. IEEE Transactions on Neural Networks and Learning Systems.
4. Tu, J., Ren, M., Manivasagam, S., Liang, M., Yang, B., Du, R., Cheng, F., & Urtasun, R. (2020). Physically Realizable Adversarial Examples for LiDAR Object Detection. IEEE/CVF Conference on Computer Vision and Pattern Recognition.
5. Wang, N., Luo, Y., Sato, T., Xu, K., & Chen, Q. A. (2023). Does Physical Adversarial Example Really Matter to Autonomous Driving? Towards System-Level Effect of Adversarial Object Evasion Attack. IEEE/CVF International Conference on Computer Vision.
6. Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z. B., & Swami, A. (2016). The limitations of deep learning in adversarial settings. IEEE European Symposium on Security and Privacy.
7. Badue, C., Guidolini, R., Carneiro, R. V., Azevedo, P., Cardoso, V. B., Forechi, A., Jesus, L., Berriel, R., Paixão, T., Mutz, F., et al. (2021). Self-driving cars: A survey. Expert Systems with Applications, 165, 113816.
Preliminary Work and Experience
As preliminary work specifically for this investigation, I have conducted a comprehensive literature synthesis identifying the methodological gap in comparative performance analysis. I have also performed initial experimental validation with YOLOv5 and HOG-based detection on a dataset of adversarial images to test methodological feasibility. These preliminary experiments confirmed differential vulnerability patterns between approaches, with processing time variations that warrant systematic investigation.
This research focuses exclusively on algorithmic performance evaluation and physical object testing with autonomous vehicle systems. No human or animal subjects are involved in any experimental procedures. Therefore, neither IRB nor IACUC approval is required for this investigation.