Proceedings of the Eighth International Conference on Probabilistic Safety Assessment & Management (PSAM)
Human Reliability: Application of Human Reliability in PRA I
Pursuing its risk-informed regulatory framework, the U.S. Nuclear Regulatory Commission (NRC) published Regulatory Guide 1.200, “An Approach for Determining the Technical Adequacy of Probabilistic Risk Assessment (PRA) Results for Risk-Informed Activities,” and developed an “Action Plan-Stabilizing the PRA Quality Expectations and Requirements,” SECY-04-0118, for addressing PRA quality issues. Two HRA issues mentioned are: (a) lack of consistency among HRA practitioners in implementing HRA methods and (b) method suitability for regulatory applications. To address these issues, NRC published “Good Practices for Implementing Human Reliability Analysis” (NUREG-1792) and is evaluating current HRA methods on the basis of the good practices in NUREG-1792. (Note that the opinions expressed in this paper are those of the authors and not of the NRC).
This paper summarizes the initial evaluation of ten HRA methods used in the United States. Findings include:
• Most methods are strictly quantification tools and therefore do not address many other steps of the HRA process.
• The methods differ in their underlying knowledge, data, and modeling approach, reflecting the evolution of HRA technology.
• Generally, two quantification approaches are used: one adjusts basic human error probabilities (HEPs) according to a fixed list of influencing factors, and the other uses a more context-defined set of factors and expert judgment to estimate the final HEP.
• The methods have different strengths and weaknesses and can be viewed as a ‘tool box’ providing different capabilities, some better suited than others for various applications.
• The underlying basis of some methods is relatively weak, and given recent advances and the continued evolution of HRA methodology, these methods will likely become less useful and less accepted in the future.
• The methods are not always applied as intended by their authors. This, as well as insufficient written guidance in some methods, appears to contribute to the analyst-to-analyst variability often observed in HRA.
• An examination of the evolution of HRA technology makes apparent that limitations persist because HRA has not had the benefit of the data collection and experimental work needed to validate the models and data underlying the methods.
• Nevertheless, the current HRA “tool box” (including the NRC’s HRA good practices in NUREG-1792) collectively contains good guidance for ensuring that human failure events (HFEs) are identified and modeled correctly, that important influencing factors are considered, and that the overall HRA is performed correctly. In most cases, HRA methods, if they follow the good practices and are used in applications that do not require accuracy beyond their capabilities, can identify conditions that tend to make errors more likely and can estimate reasonable HEPs. This allows users to identify human performance vulnerabilities and related improvements.
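The first quantification approach described above — scaling a basic HEP by multipliers tied to a fixed list of influencing factors — can be sketched as follows. This is an illustrative simplification, not a reproduction of any particular HRA method; the multiplier values and the capping rule are assumptions for the example.

```python
# Illustrative sketch of the first quantification approach: a nominal
# human error probability (NHEP) is adjusted by multipliers associated
# with a fixed list of influencing factors (e.g., stress, procedures).
# The factor names and multiplier values here are hypothetical.
def adjust_hep(nominal_hep, factor_multipliers):
    """Scale a nominal HEP by influencing-factor multipliers,
    capping the result at 1.0 since the result is a probability."""
    hep = nominal_hep
    for multiplier in factor_multipliers:
        hep *= multiplier
    return min(hep, 1.0)

# Example: nominal HEP of 1e-3 under high stress (x2) and
# poor procedures (x5) yields an adjusted HEP of about 0.01.
adjusted = adjust_hep(1e-3, [2, 5])
```

In contrast, the second approach would replace the fixed multiplier table with a context-specific assessment and expert judgment, arriving at the final HEP directly rather than by mechanical scaling.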