Deep neural networks (DNNs) are increasingly used in safety-critical autonomous systems as perception components processing high-dimensional image data. Formal analysis of these systems is particularly challenging due to the complexity of the perception DNNs, the sensors (cameras), and the environmental conditions. We present a case study applying formal probabilistic analysis techniques to an experimental autonomous system that guides airplanes on taxiways using a perception DNN. We address the above challenges by replacing the camera and the network with a compact abstraction whose transition probabilities are computed from the confusion matrices measuring the performance of the DNN on a representative image data set. As the probabilities are estimated from empirical data, and thus are subject to error, we also compute confidence intervals in addition to point estimates for these probabilities, thereby strengthening the soundness of the analysis. We also show how to leverage local, DNN-specific analyses as run-time guards that filter out misbehaving inputs and increase the safety of the overall system. Our findings are applicable to other autonomous systems that use complex DNNs for perception.
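The core idea of estimating abstraction transition probabilities from a confusion matrix, together with confidence intervals for those estimates, can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the paper's implementation: the 2-class confusion matrix is hypothetical, and the Wilson score interval is one standard choice of binomial confidence interval (the paper may use a different construction).

```python
import math

def confusion_to_probs(confusion):
    # confusion[i][j]: count of inputs with true class i predicted as class j.
    # Row-normalizing gives empirical transition probabilities
    # P(DNN outputs j | true state is i) for the abstraction.
    return [[c / sum(row) for c in row] for row in confusion]

def wilson_interval(successes, n, z=1.96):
    # Wilson score confidence interval for a binomial proportion
    # (z=1.96 gives roughly a 95% interval).
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return max(0.0, center - half), min(1.0, center + half)

# Hypothetical 2-class confusion matrix from evaluating a DNN on 200 images.
confusion = [[90, 10],
             [5, 95]]
probs = confusion_to_probs(confusion)
lo, hi = wilson_interval(90, 100)  # interval around the estimate 90/100
```

The interval endpoints, rather than the point estimates alone, can then be propagated into the probabilistic model so that the analysis accounts for estimation error.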
Păsăreanu, C. S., Mangal, R., Gopinath, D., Getir Yaman, S., Imrie, C., Calinescu, R., & Yu, H. (2023). Closed-Loop Analysis of Vision-Based Autonomous Systems: A Case Study. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13964 LNCS, pp. 289–303). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-37706-8_15