Automatic design verification techniques are intended to check that a particular system design meets a set of formal requirements. When the system does not meet the requirements, some verification tools can perform culprit identification to indicate which design components contributed to the failure. With non-probabilistic verification, culprit identification is straightforward: the verifier returns a counterexample trace that shows how the system can evolve to violate the desired property, and any component involved in that trace is a potential culprit. For probabilistic verification, the problem is more complicated, because no single trace constitutes a counterexample. Given a set of execution traces that collectively refute a probabilistic property, how should we interpret those traces to find which design components are primarily responsible? This paper discusses an approach to this problem based on decision-tree learning. Our solution provides rapid, scalable, and accurate diagnosis of culprits from execution traces. It filters out distractions and focuses attention on the components primarily responsible for the property's verification failure.
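The core idea, as the abstract describes it, is to treat execution traces as labeled training data and let a decision-tree learner pick out the components whose involvement best separates failing traces from passing ones. The sketch below is illustrative only, not the paper's implementation: it uses a hypothetical trace representation (a set of involved components plus a pass/fail label) and computes the information-gain criterion a decision-tree learner would use to choose its root test, returning that component as the likeliest culprit.

```python
import math

def entropy(labels):
    """Shannon entropy of a list of boolean fail/pass labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def best_culprit(traces, components):
    """Return the component whose involvement best splits failing
    from passing traces (highest information gain) -- the test a
    decision-tree learner would place at the root."""
    labels = [t["failed"] for t in traces]
    base = entropy(labels)
    best, best_gain = None, -1.0
    for c in components:
        with_c = [t["failed"] for t in traces if c in t["involved"]]
        without = [t["failed"] for t in traces if c not in t["involved"]]
        gain = (base
                - (len(with_c) / len(traces)) * entropy(with_c)
                - (len(without) / len(traces)) * entropy(without))
        if gain > best_gain:
            best, best_gain = c, gain
    return best

# Hypothetical traces: component "B" co-occurs with every failure.
traces = [
    {"involved": {"A", "B"}, "failed": True},
    {"involved": {"B"},      "failed": True},
    {"involved": {"A"},      "failed": False},
    {"involved": {"C"},      "failed": False},
]
print(best_culprit(traces, ["A", "B", "C"]))  # → B
```

A full decision tree would recurse on each split, but even this single stump shows how trace statistics can reject distractor components ("A" and "C" above) that merely co-occur with some failures.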
Identifying Culprits When Probabilistic Verification Fails
Musliner, DJ, Woods, T, & Maraist, J. "Identifying Culprits When Probabilistic Verification Fails." Proceedings of the ASME 2012 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. Volume 2: 32nd Computers and Information in Engineering Conference, Parts A and B. Chicago, Illinois, USA. August 12–15, 2012. pp. 1111-1119. ASME. https://doi.org/10.1115/DETC2012-71051