Automatic design verification techniques check whether a particular system design meets a set of formal requirements. When the system fails to meet the requirements, some verification tools can perform culprit identification, indicating which design components contributed to the failure. With non-probabilistic verification, culprit identification is straightforward: the verifier returns a counterexample trace showing how the system can evolve to violate the desired property, and any component involved in that trace is a potential culprit. For probabilistic verification, the problem is more complicated because no single trace constitutes a counterexample. Given a set of execution traces that collectively refute a probabilistic property, how should we interpret those traces to determine which design components are primarily responsible? This paper discusses an approach to this problem based on decision-tree learning. Our solution provides rapid, scalable, and accurate diagnosis of culprits from execution traces: it filters out distracting components and focuses attention on those that primarily cause the property verification to fail.
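
The abstract does not detail the learning setup, but one plausible reading is that each trace is encoded as a feature vector of component involvement and labeled by whether it contributes to refuting the property; a decision tree trained on such data ranks components by how well they separate the two classes. The sketch below illustrates this idea with scikit-learn; the component names, trace encoding, and labels are hypothetical, not taken from the paper.

```python
# Minimal sketch: culprit identification via decision-tree learning over
# execution traces. Assumes each trace is a binary vector indicating which
# design components it exercised (a hypothetical encoding).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

components = ["scheduler", "bus", "sensor", "actuator"]  # hypothetical

# Rows: traces; columns: 1 if the trace involves that component.
X = np.array([
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 1],
])
# Label 1 marks traces that contribute to refuting the probabilistic property.
y = np.array([1, 1, 0, 0])

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Components with high importance best separate failing from satisfying
# traces, making them candidate culprits.
ranked = sorted(zip(tree.feature_importances_, components), reverse=True)
for importance, name in ranked:
    print(f"{name}: {importance:.2f}")
```

In this toy data, the `scheduler` feature perfectly separates failing from satisfying traces, so the tree's top split would flag it as the primary culprit while ignoring components that appear in both classes.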
