Abstract
This paper describes the process for assessing the predictive capability of the Consortium for Advanced Simulation of Light Water Reactors (CASL) virtual environment for reactor applications code suite (VERA—CS) for different challenge problems. The assessment process is guided by two qualitative frameworks: the phenomena identification and ranking table (PIRT) and the predictive capability maturity model (PCMM). The capability and credibility of the VERA codes (individual and coupled simulation codes) are evaluated. Capability refers to evidence of the functionality required to capture the phenomena of interest, while credibility refers to the evidence that provides confidence in the calculated results. For this assessment, each challenge problem defines a set of phenomenological requirements (based on the PIRT) against which the VERA software is evaluated. This approach, in turn, enables a focused assessment of only those capabilities that are relevant to the challenge problem. The credibility assessment using PCMM is based on decision attributes that encompass verification, validation, and uncertainty quantification (VVUQ) of the CASL codes. For each attribute, a maturity score from zero to three is assigned to ascertain the maturity level attained by the VERA codes with respect to the challenge problem. Credibility in the assessment is established by mapping relevant evidence obtained from VVUQ of the codes to the corresponding PCMM attribute. The proposed approach is illustrated using one of the CASL challenge problems, Chalk River unidentified deposit (CRUD) induced power shift (CIPS). The assessment framework described in this paper can be considered applicable to other M & S code development efforts.
1 Introduction
Rapid growth in computing power and in the capability of Modeling and Simulation (M & S) tools has brought tremendous advances in the fields of engineering and science. This is particularly true in fields where system-prototypic experimentation is difficult and/or expensive. In nuclear engineering, M & S tools are extensively used to support decisions regarding the design, operation, and safety assessment of nuclear power plants. Consequently, comprehensive methodologies and standard procedures have been developed to guide the evolution of M & S tools and assess their adequacy for the intended use. The “code scaling, applicability, and uncertainty” (CSAU) methodology [1] was the first effort, led by the United States Nuclear Regulatory Commission (U.S. NRC), to provide standard procedures and guidelines for uncertainty quantification and adequacy assessment of system analysis codes for design basis accidents. Later, the “evaluation model development and assessment process” [2] was developed to provide a systematic process for the development and assessment of evaluation models for transient and/or accident analysis of nuclear power plants during design basis accidents. The “predictive capability maturity model” (PCMM) [3] is a qualitative decision framework developed by Sandia National Laboratories that adopts a graded approach (based on qualitative criteria) for assessing the maturity of a simulation tool with respect to the consequence of its application. PCMM is the de facto assessment framework for National Nuclear Security Administration funded M & S conducted in the United States.
Both M & S and experiments provide approximate representations of real systems. Because of uncertainty and incomplete information, making decisions regarding the adequacy of a simulation code for an intended application, particularly a high-consequence nuclear reactor safety application, can be difficult. Subjective judgments regarding adequacy and the rigor of the code development team are common in the development and assessment of computational codes for complex nuclear reactor applications. Owing to this subjectivity, it becomes important to organize information and evidence systematically so that they can be traced to, and held accountable for, the decision regarding the adequacy of the M & S code for a specific application.
The Consortium for Advanced Simulation of Light Water Reactors (CASL) is a U.S. Department of Energy (DOE) sponsored energy innovation hub for M & S of nuclear reactor applications. The primary objective of CASL is to develop modeling and simulation capabilities to support decisions regarding the safe and efficient operation of commercial nuclear power reactors. CASL has identified and developed simulation codes with the capability to model different physics, e.g., the coolant-boiling in rod arrays-two fluids (CTF) code and computational fluid dynamics simulations for reactor core thermal-hydraulics, the Michigan parallel characteristics transport code (MPACT) for reactor core neutronics, the BISON code for fuel performance, and the MPO advanced model for boron analysis (MAMBA) code for coolant chemistry and Chalk River unidentified deposit (CRUD) [4]. Although most of these codes are mature with respect to their individual physics domains, their extension to multiphysics and multiscale CASL challenge problems (CPs) requires further capability development and extensive verification, validation, and uncertainty quantification (VVUQ) work [5]. This paper describes the process for assessing the predictive capability of the CASL virtual environment for reactor applications code suite (VERA—CS) for different CPs and also serves as first-of-a-kind documentation of the application of PCMM to complex multiphysics codes and problems. The proposed approach is based on the logical separation of capability assessment and credibility assessment, which are based on (1) the phenomena identification and ranking table (PIRT) and (2) the PCMM, respectively. The PIRT is used for complexity resolution and ultimately to determine the capability of VERA—CS with respect to the CP. Grading scales are used to specify the degree of knowledge, importance, modeling capability, and existing gaps corresponding to each phenomenon identified by the PIRT. The PCMM is used to define a set of decision attributes and assessment criteria to determine the degree of maturity attained by VERA—CS with respect to the CP. A similar gradation scheme is used to provide granularity for assessing the level of maturity. Given the large volume of heterogeneous data (or evidence) from the various modeling and VVUQ activities of the different codes related to the CPs, the PCMM serves as a convenient tool for organization and credibility assessment. However, PCMM needs a formal structure and the ability to incorporate evidence that can support the claims regarding the degree of maturity of VERA—CS for a specific CP. To justify the assigned grades and enhance transparency, a systematic scheme for evidence documentation and citation is incorporated in this work. The proposed approach is illustrated using CRUD induced power shift (CIPS) as a CP, though the same approach has been applied to other CPs.
The organization of the paper is as follows. Section 2 provides a summary of VERA—CS and CPs. Section 3 describes the technique for complexity resolution using PIRT. Section 4 describes the elements of the PCMM and assessment criteria. Section 5 describes the proposed approach for capability and credibility assessment of CASL VERA for a specific CP (i.e., CIPS), and Sec. 6 provides the conclusions.
2 Virtual Environment for Reactor Applications Code Suite and Challenge Problems
The CASL is charged with developing computational modeling and simulation capability for light-water-moderated, commercial nuclear power reactors. The VERA—CS [6] includes a collection of tools for the simulation of neutronics, thermal-hydraulics, chemistry, and fuel performance (solid mechanics and heat transfer) in an integrated and coupled computational environment. These tools are generally designed to be employed in a high-performance computing environment and are highly parallelized. Computational fluid dynamics also plays an important role, though not within VERA. The current, main CASL toolset includes the software modules introduced above, i.e., MPACT, CTF, BISON, and MAMBA.
CASL has been organized around a handful of CPs [32,33]. These CPs have been identified by the nuclear industry as important for the safe and reliable operation of current nuclear reactors. Each CP has a unique set of phenomena that may span multiple traditional disciplines. CASL has addressed seven CPs:
CIPS,
CRUD induced localized corrosion (CILC),
Pellet-cladding interactions (PCI),
Grid to rod fretting (GTRF),
Departure from nucleate boiling (DNB),
Loss of coolant accident (LOCA),
Reactivity insertion accident (RIA).
In aggregate, the CASL CPs serve to define the intended purpose of VERA—CS. In this paper, the process for predictive capability assessment of VERA—CS is illustrated with respect to one of these CASL CPs, i.e., CIPS. The clear articulation of an intended purpose is critical for assessing both code capability and credibility. CRUD in CIPS refers to the deposition of porous corrosion products on the surface of the nuclear fuel rods. These corrosion products are iron- and nickel-based compounds produced by corrosion of the metallic surfaces of the steam generator in pressurized water reactors (PWRs). Some of the corrosion products are released into the coolant in particulate form and eventually find their way to the reactor fuel rods. The deposition of CRUD leads to degraded heat transfer, changes in the flow pattern, and accelerated corrosion. CRUD formation is accelerated under subcooled boiling conditions. Furthermore, boron compounds accumulate inside the porous CRUD. Because boron is a neutron poison, a shift in the power profile is observed. This shift is termed CIPS [34].
The M & S efforts for CASL CPs are centered on two types of activities: (1) model development activities (focused on the development of specific capabilities for CP applications), and (2) VVUQ activities (which enhance the trustworthiness, or credibility, of the developed capabilities for CP applications). The model development activities are guided by the PIRT while the VVUQ activities are guided by the PCMM. More details related to the use of PIRT and PCMM are discussed in Secs. 3 and 4.
3 Complexity Resolution Using Phenomena Identification and Ranking Table
Natural systems are often fundamentally complex. Analysis and understanding of a complex system require segregating the system into less intricate parts or subsystems with distinctive forms or characteristics. Herbert A. Simon, in his classic paper on “the architecture of complexity,” describes how different complex systems exhibit hierarchical structure and similar properties (in the context of architecture or structural organization) regardless of their specific content [35]. He explains that two types of interactions are prominent in a hierarchical system: (1) interactions within subsystems (or intracomponent linkages), and (2) interactions among subsystems (or intercomponent linkages). It is the nature of these interactions that guides the decomposition of a complex system. A complex system can be considered nearly decomposable when interactions among subsystems are feeble in strength compared to interactions within a subsystem [35].
The PIRT is a classical approach for the complexity resolution of nuclear reactor applications for modeling and simulation. It was formally introduced by the U.S. NRC as part of the CSAU methodology in 1988. Similar procedures [36,37] were published in 1987 and 1989 by the Committee on the Safety of Nuclear Installations (CSNI) of the Nuclear Energy Agency for the analysis of thermal-hydraulic phenomena related to the emergency core cooling system and LOCA in light water reactors. In the past decades, PIRT has been extensively used to resolve several issues, such as large break loss of coolant accidents [38], small break loss of coolant accidents [39], fire modeling in nuclear power plants [40], and the design of next-generation nuclear power plants [41]. The PIRT involves the identification and ranking of the phenomena relevant to the figure of merit [2]. The major steps of the PIRT process are listed below [42]:
Define the problem and PIRT objectives.
Specify the scenario (transient or steady-state). In the case of a transient process, the scenario is partitioned into time phases based on the dominant process/mechanism.
Identify and define the figures of merit.
Identify and review all the relevant literature (experimental and analytical data).
Identify phenomena relevant to the figures of merit.
Rank all the phenomena based on knowledge and importance (with respect to the figures of merit).
Document all the findings.
Simon [35] describes two types of descriptors that can be used for solving a problem involving a complex system: state descriptors and process descriptors. A state descriptor provides criteria for identifying an object or state of the system, while process descriptors are related to the processes or actions that lead to that particular state of the system. He further explains, “We pose a problem by giving state description of the solution. The task is to discover a sequence of processes that will produce the goal state from an initial state” [35]. In the context of the PIRT, the figures of merit may be considered state descriptors, while the different phenomena/processes that impact the figures of merit may be considered process descriptors. Understanding the sequence and relation of the different phenomena becomes crucial for the successful formulation of the problem. The structure of a PIRT is governed by the nature of the problem being analyzed. The PIRT for accident situations like LOCA resolves complexity by dividing the transient scenario into time phases (blowdown, refill, and reflood) based on the dominant mechanism or other factors (operator actions or valve opening and closing). The phenomena identified by the PIRT process are arranged hierarchically based on the transient phase, system components, and underlying phenomena. The PIRT for high-fidelity simulation of CASL CPs involves system decomposition with respect to the governing physics (neutronics, fuel performance, coolant chemistry, and thermal-hydraulics) and the scale (microscale, mesoscale, and macroscale) of the underlying phenomena. Hence, scale separation and physics decoupling are the two elementary principles that guide complexity resolution for CASL CPs. The outcome of the PIRT process is governed by the experts' knowledge and understanding of the problem of interest. Therefore, PIRT is subject to large epistemic uncertainty. Human factors related to the oratory skills and persuasiveness of the participating experts can also introduce biases.
In CASL, the PIRT is employed for the conception of governing mechanisms and underlying physical processes (complexity resolution), guiding model development, and identifying issues and data needs. In this way, PIRT helps prioritize the research and development needs for the different CPs. The strategy for formulating the PIRT for a CASL CP is based on the identification of key phenomena with respect to the governing physics. Numerical grading (viz., 0, 1, 2, 3) is used to specify the degree of knowledge, importance, modeling capability, and existing gaps corresponding to each phenomenon identified by the PIRT. As there may be disparity among the inputs of different experts, the grades are averaged to obtain the final assessment.
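As a concrete illustration of this aggregation step, the short Python sketch below averages hypothetical 0–3 grades from several experts and flags phenomena with a large spread for further discussion; the data, threshold, and use of the standard deviation are illustrative assumptions rather than part of the CASL procedure.

```python
from statistics import mean, stdev

# Hypothetical expert grades (0-3 scale) for two phenomena; the values are illustrative only.
importance_grades = {
    "Subcooled boiling in CRUD": [3, 3, 3, 3],
    "Wall roughness": [1, 2, 3, 2],
}

DISAGREEMENT_THRESHOLD = 0.75  # assumed spread above which a follow-up discussion is triggered

for phenomenon, grades in importance_grades.items():
    avg = mean(grades)
    spread = stdev(grades) if len(grades) > 1 else 0.0
    status = "discuss further" if spread > DISAGREEMENT_THRESHOLD else "converged"
    print(f"{phenomenon}: mean importance = {avg:.1f}, spread = {spread:.2f} ({status})")
```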
4 Predictive Capability Maturity Model
Assessing the credibility of predictions made using scientific computer codes is a complex and multifaceted topic that is also relatively new compared to the technical fields for which the codes are written. This problem has become more challenging as scientific software has become more capable and includes multiple physical phenomena. Within CASL, the credibility of VERA—CS is assessed using the PCMM [3]. The PCMM was originally developed by Sandia National Laboratories for the maturity assessment of computational simulations concerning nuclear weapon applications. The original PCMM matrix consists of six decision attributes for code maturity assessment, namely, (1) representation and geometric fidelity (RGF), (2) physics and material model fidelity (PMMF), (3) code verification (CVER), (4) solution verification (SVER), (5) model validation (SVAL), and (6) uncertainty quantification and sensitivity analysis (UQSA). The evaluation of the PCMM attributes is performed by defining maturity levels based on the consequence of the application. According to Oberkampf et al. [43], the maturity levels in PCMM are based on two distinct information attributes [44]: (1) intrinsic information quality (related to the objectivity and fidelity of information), and (2) contextual information quality (related to the thoroughness, volume, and level of detail of information). The maturity levels guide the evaluation of the intellectual artifacts, or evidence, obtained from the different M & S activities [43]. Categorically, all the data and/or information related to the M & S activities contribute to the body of evidence for maturity assessment. Given the nature of the information attributes in the PCMM levels, the required quality and quantity of evidence increase with increasingly higher maturity levels. A descriptive set of qualitative assessment criteria is specified in the PCMM matrix to guide the evaluation of the PCMM attributes at different levels. The target maturity level for each PCMM attribute is decided based on the consequence of the application. It should be noted that the assessment criteria and maturity levels in the PCMM matrix make extensive use of qualitative classifiers such as “little,” “some,” “minimum,” “all,” “low,” “medium,” and “high.” These classifiers provide a basis for assessment but do not provide an absolute metric for assessing the different attributes. It is important to note that while the level classifiers are nonquantitative, they rely heavily on the objective assessment of evidence, and the four levels of granularity still enable meaningful resolution in the maturity assessment. Moreover, the focus of PCMM is the maturity assessment of a simulation tool for an application based on the different processes and activities that enhance confidence in its use. Therefore, it is difficult to provide a quantitative metric for the assessment.
For CASL CP applications, two modifications have been made to the original PCMM matrix: the separation of software quality assurance (SQA) and software quality engineering (SQE) from the code verification category, and the separation of separate effects test (SET) validation from integral effects test (IET) validation. The purpose of separating SET and IET validation is analogous to performing unit tests and integration tests during code verification; both strategies rely on an understanding of the hierarchy involved in each area.
Within CASL, there has been a relatively high level of effort and rigor expended on SQA/SQE practices, while less effort has been expended on mathematical code verification activities such as demonstrating the expected order of convergence. Separating SQA/SQE from code verification permits a more precise assessment and communication of the expectations and achievements for each aspect. Furthermore, Ref. [45] recognizes SQA/SQE and numerical algorithm verification as separate types of activities, yet both are intended to minimize or eliminate unexpected bugs, errors, blunders, and mistakes that could corrupt predictions. Similarly, for validation, the separation of IET validation from SET validation permits more resolution in the assessment and clearer identification of the expectations and accomplishments. Figure 1 shows the PCMM matrix used for CASL CP applications with all eight attributes and their assessment criteria for each maturity level.
The predictive capability maturity model requires a detailed analysis of each element to decide its level of maturity. The current evaluation is based on a qualitative assessment of each code (in VERA—CS) against the descriptors (decision criteria) for each element (or decision attribute) in the PCMM matrix. If a code satisfies all the descriptors at a particular level, it is assumed to have reached the maturity corresponding to that level. If the descriptors for an element are only partially satisfied, a fractional scale (e.g., 0.25, 0.5, 1.25…) is used to express maturity between two levels (a minimal illustrative sketch of this graded scoring appears after the list below). In this way, PCMM provides a qualitative decision model for the evaluation of codes using a graded approach. The primary uses of the PCMM matrix in CASL can be summarized as:
CASL consists of a large team of researchers from different institutions working on different aspects of code development and assessment. The PCMM matrix helps communicate and elucidate the different attributes that can impact the credibility of a simulation, and CASL researchers expend effort on M & S development and assessment according to the target maturity levels.
It provides a basis for discussion about the M & S needs and critical developments for different applications (challenge problems) among the decision-makers (CASL leadership team and council), stakeholders, and decision facilitators (researchers working on code development and VVUQ).
It supports tracking progress in the development and assessment of codes for specific CPs and directing resources toward critical areas.
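As referenced above, the following minimal Python sketch illustrates the graded, fractional maturity scoring; the descriptor checklist, satisfaction fractions, and scoring rule are hypothetical, since the actual PCMM evaluation is a qualitative expert judgment rather than an automated calculation.

```python
# Hypothetical descriptor checklist for one PCMM attribute; each descriptor's value is the
# judged fraction of that descriptor that is satisfied (0-1). The lists below are examples only.
levels = [
    {"some a-posteriori error estimation": 1.0},                        # level 1 descriptors
    {"grid convergence study on key SRQs": 1.0, "I/O verified": 0.5},   # level 2 descriptors
    {"formal order-of-accuracy demonstration": 0.0},                    # level 3 descriptors
]

def maturity_score(levels):
    """Fully satisfied levels count as 1; the first partially satisfied level
    contributes its average descriptor fraction, giving a fractional score."""
    score = 0.0
    for descriptors in levels:
        fraction = sum(descriptors.values()) / len(descriptors)
        if fraction >= 1.0:
            score += 1.0
        else:
            score += fraction
            break
    return score

print(maturity_score(levels))  # 1.75 for the example above
```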
5 Assessment Methodology
This section describes the process of assessment of VERA—CS for CASL CPs. The assessment process is guided by PIRT and PCMM while the illustration of different steps is described using CIPS as a CP. Figure 2 shows the different steps involved in this assessment process. A description of each step with an example is shown below.
5.1 Step 1: Challenge Problems Specification.
The first step in this assessment process involves CP specification based on the purpose of analysis, system conditions, and figure of merit [1,2], e.g., the specification for CIPS is defined as:
Purpose of analysis: Assess the adequacy of VERA—CS for simulation of CIPS.
System conditions: PWR system conditions during transient and normal operation (with changing fuel burn-up and CRUD deposition).
Figures of merit: boron mass distribution (vector), boron mass (scalar), and axial offset (scalar).
5.2 Step 2: Complexity Resolution by Phenomena Identification and Ranking Table.
The second step in the assessment process involves complexity resolution using the classical PIRT methodology [1,2]. Complexity resolution using PIRT for multiphysics CASL CPs involves system decomposition with respect to the governing physics (neutronics, fuel performance, coolant chemistry, and thermal-hydraulics) and scale (microscale, mesoscale, and macroscale) to identify the important phenomena or processes that can impact the figures of merit. The identified phenomena are ranked based on importance and knowledge information. Code adequacy is also determined at this step. The importance of a phenomenon is defined based on its relevance to the figure of merit; e.g., boron exchange in and out of the CRUD (see Table 1) is considered a high-importance phenomenon for determining the boron mass distribution. Knowledge level expresses the level of understanding of a phenomenon based on available models, experimental data, and existing literature. Code adequacy reflects the current capability of the individual codes in VERA—CS for simulating the phenomena in the PIRT. Tables 2–5 provide the code adequacy assessment for the different phenomena identified by the PIRT for CIPS. These rankings, along with an assessment of the cost of implementation, can be used to set funding and development priorities. The PIRT assessment directly informs the evaluation of capability as it links the required phenomenology with the code components designed to represent that phenomenology. CP specification (step 1 in Fig. 2) and complexity resolution using PIRT (step 2 in Fig. 2) are based on the input of subject matter experts (SMEs) and focus area leads in CASL.
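To illustrate how such rankings could feed into development priorities, the sketch below combines importance, knowledge, and code adequacy scores into a simple priority ordering; the example rows are patterned on entries in the PIRT and code-adequacy tables, but the scoring heuristic is an assumption made for illustration, not the formal CASL prioritization procedure.

```python
# Each record: (phenomenon, importance 0-3, knowledge 0-3, code adequacy per relevant code).
# The rows are patterned on entries in the PIRT and code-adequacy tables; values are illustrative.
pirt_rows = [
    ("Boron exchange in and out of the CRUD", 3.0, 1.0, {"MAMBA": "M"}),
    ("Subcooled boiling in CRUD",             3.0, 1.0, {"CTF": "L"}),
    ("Wall roughness",                        2.0, 1.0, {"CTF": "L"}),
]

ADEQUACY_RANK = {"L": 1, "M": 2, "H": 3}

def priority(importance, knowledge, adequacy):
    """Assumed heuristic: high importance combined with low knowledge or low code adequacy
    gives a stronger case for model development and validation work."""
    worst_adequacy = min(ADEQUACY_RANK[a] for a in adequacy.values())
    return importance * (3 - knowledge) + importance * (3 - worst_adequacy)

for name, imp, knw, adq in sorted(pirt_rows, key=lambda r: priority(*r[1:]), reverse=True):
    print(f"{priority(imp, knw, adq):4.1f}  {name}")
```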
Phenomenon | Importance (PIRT update, 2017) | Knowledge (PIRT update, 2017) | Importance (Mini-PIRT, 2014) | Knowledge (Mini-PIRT, 2014) |
---|---|---|---|---|
Local changes (near the rod) in the equation of state | 2.4 | 1.3 | 3.0 | 3.0 |
Chemical reaction rates are based on lower temperature and pressures | 2.0 | 1.3 | 2.0 | 2.0 |
Overlooked chemical reactions/species | 1.8 | 1.0 | 3.0 | 2.0 |
CRUD porosity | 2.8 | 1.8 | 2.0 | 2.0 |
CRUD permeability | 2.0 | 1.5 | 2.0 | 2.0 |
CRUD chimney density | 2.6 | 1.6 | 2.0 | 1.0 |
Water pH effect on Steam Generator Corrosion | 2.8 | 1.3 | 2.0 | 2.0 |
Water pH effect on CRUD Deposition | 2.3 | 1.5 | 2.0 | 2.0 |
Boron exchange in and out of the CRUD (New Phenomenon) | 3.0 | 1.0 | − | − |
Phenomena | MPACT | BISON | CTF | MAMBA |
---|---|---|---|---|
Steaming rate | H | |||
Subcooled boiling on a clean metal surface | H | |||
Subcooled boiling in CRUD | L | |||
Bulk coolant temperature | H | |||
Heat flux | H | |||
Wall roughness | L | |||
Single-phase heat transfer | H | |||
Mass balance of nickel and iron | L | |||
CRUD erosion | M | M | ||
Initial CRUD thickness (mass) | L | L | ||
Initial coolant boron concentration | H | |||
Initial coolant nickel concentration | L | L | ||
CRUD source term from steam generators and other surfaces | M | |||
CRUD induced change in boiling efficiency | L | |||
Heat flux distribution (new phenomenon) CRUD-fluid heat transfer model | M | M |
Phenomena | MPACT | BISON | CTF | MAMBA |
---|---|---|---|---|
Local changes in rod power due to burn-up | H | H | ||
Fuel thermal conductivity changes as a function of burn-up | H | |||
Changes in effective CRUD conductivity due to internal fluid flow and boiling | H |||
CRUD removal due to transient power changes. | L | |||
Fission product gas | H | |||
Pellet swelling | H | |||
Contact between the pellet and the clad | H |
Phenomena | MPACT | BISON | CTF | MAMBA |
---|---|---|---|---|
Local boron density increases absorption | H | |||
Moderator displaced by CRUD and replaced with an absorber | H | |||
Xenon impact on steady-state transients | M | |||
Geometry changes in the pellet | M | |||
Cross section changes | M | |||
Fission product production | M | |||
Fission product decay constant | M | |||
Simplified decay chain | M | |||
Boron induced shift in neutron spectrum | H | |||
Boron depletion due to exposure to neutron flux in the coolant | M | |||
Boron depletion due to exposure to neutron flux in the CRUD | L | |||
Fuel depletion and neutron flux calculation resolution disparity | L | |||
Boron concentration computation method | L | |||
Iron and nickel neutron absorption (new phenomena) | M |
Phenomena | MPACT | BISON | CTF | MAMBA |
---|---|---|---|---|
Local changes (near the rod) in the equation of state | M | |||
Temperature-dependent chemical reaction rates | M | |||
CRUD porosity | M | |||
CRUD permeability | M | |||
CRUD chimney density | L | |||
Water pH effect on Steam Generator Corrosion | L | |||
Water pH effect on CRUD deposition | M | |||
Boron exchange in and out of the CRUD (New Phenomenon) | M |
Within CASL, the CIPS CP involves four codes: MPACT, CTF, BISON, and MAMBA. The conceptual, physics-based understanding of computational modeling for CIPS can be described as a series of steps. First, the simulation must compute a neutron flux that produces energy from fission (deposited in the fuel and the coolant); boron in CRUD, fuel temperature, moderator density, and moderator temperature are all feedback mechanisms. Next, the computation must conduct the energy in the fuel radially out from the center, across the gap, through the clad, and finally through the CRUD into the coolant, while the fuel is changing with burn-up and the gap is shrinking. Subsequently, the code must remove the heat from the clad to the coolant and advect it out of the core. Finally, the simulation must predict how CRUD is exchanged between the fuel pin surface and the coolant (boiling and nonboiling) and how boron is deposited in and on the CRUD.
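The sequence of physics steps described above can be viewed as a fixed-point iteration over the coupled single-physics solutions. The Python sketch below is a deliberately simplified, self-contained illustration of such a Picard-style loop; the function names and placeholder formulas are assumptions made for illustration and do not represent the actual VERA coupling implementation.

```python
# A highly simplified, self-contained illustration of the coupled CIPS iteration.
# All physics relations below are placeholder formulas, not VERA models.

def solve_neutronics(boron_in_crud):
    # MPACT role: power distribution; here, axial offset shifts as boron accumulates in CRUD.
    return -0.05 * boron_in_crud

def solve_fuel_conduction(axial_offset):
    # BISON role: conduct heat radially through fuel, gap, clad, and CRUD (placeholder relation).
    return 1.0 + 0.1 * axial_offset

def solve_thermal_hydraulics(heat_flux):
    # CTF role: remove heat from the clad and advect it with the coolant; returns boiling intensity.
    return max(0.0, heat_flux)

def solve_crud_chemistry(boiling):
    # MAMBA role: CRUD growth and boron uptake driven by subcooled boiling.
    return 2.0 * boiling

def simulate_cips_state_point(tol=1e-6, max_iters=50):
    """Picard-style fixed-point iteration over the coupled single-physics solves."""
    boron_in_crud, axial_offset = 0.0, 0.0
    for _ in range(max_iters):
        axial_offset = solve_neutronics(boron_in_crud)
        heat_flux = solve_fuel_conduction(axial_offset)
        boiling = solve_thermal_hydraulics(heat_flux)
        new_boron = solve_crud_chemistry(boiling)
        if abs(new_boron - boron_in_crud) < tol:
            break
        boron_in_crud = new_boron
    return boron_in_crud, axial_offset

print(simulate_cips_state_point())
```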
The PIRT for CIPS is constructed by identifying the phenomena and processes in each governing physics, i.e., neutronics, subchannel thermal-hydraulics, fuel modeling, and coolant chemistry. Tables 6–8 and Table 1 show the phenomena identified by the PIRT for the CIPS CP in subchannel thermal-hydraulics, fuel modeling, neutronics, and coolant chemistry, respectively. The complete PIRT for the different challenge problems (CIPS, DNB, and PCI), with a description of all phenomena, is reported in the CASL V & V assessment for VERA [46]. The CIPS PIRT results presented in this paper represent two specific PIRT exercises: a preliminary or Mini-PIRT conducted in 2014 and a PIRT update conducted in 2017. The PIRT update conducted for the CIPS CP was executed in two phases. First, the phenomena identified from the previous Mini-PIRT for CIPS were organized into a survey, and this survey was made available electronically to CIPS experts within CASL; the survey also allowed participants to suggest additional phenomena for consideration. The electronic survey was completed by several CASL researchers with expertise in different areas of M & S. Once the PIRT survey results were obtained, an extended discussion with all the participants was conducted to work through items with significant disagreement among the survey responses. This proved relatively efficient, since items on which participants were already well converged could be passed over quickly and more time could be spent on items with greater disagreement. The final score for each phenomenon was obtained by averaging the scores provided by the participants. The comparison of the 2014 and 2017 PIRT responses indicates some changes:
Phenomenon | Importance (PIRT update, 2017) | Knowledge (PIRT update, 2017) | Importance (Mini-PIRT, 2014) | Knowledge (Mini-PIRT, 2014) |
---|---|---|---|---|
Steaming rate | 3.0 | 2.0 | 3.0 | 2.0 |
Subcooled boiling on a clean metal surface | 3.0 | 3.0 | 3.0 | 3.0 |
Subcooled boiling in CRUD | 3.0 | 1.0 | 3.0 | 1.0 |
Bulk coolant temperature | 3.0 | 3.0 | 2.0 | 2.0 |
Heat flux | 3.0 | 2.2 | 3.0 | 3.0 |
Wall roughness | 2.0 | 1.0 | 1.0 | 1.0 |
Single phase heat transfer | 2.0 | 2.5 | 1.0 | 2.0 |
Mass balance of nickel and iron | 3.0 | 1.8 | 3.0 | 1.0 |
Boron mass balance | 2.5 | 2.6 | 1.0 | 3.0 |
CRUD erosion | 2.2 | 1.3 | 3.0 | 1.0 |
Initial CRUD thickness (mass) | 2.5 | 2.0 | 3.0 | 1.0 |
Initial coolant nickel and boron concentration | 2.7 | 2.3 | 3.0 | 1.0 |
CRUD source term from steam generators and other surfaces | 3.0 | 1.7 | 3.0 | 1.0 |
CRUD induced change in boiling efficiency | 2.7 | 1.3 | 1.0 | 2.0 |
CRUD induced change in flow area | 0.7 | 1.4 | 1.0 | 2.0 |
CRUD induced change in friction pressure drop | 1.0 | 1.6 | 1.0 | 1.0 |
Change in thermal hydraulic equation of state due to chemistry | 1.8 | 1.3 | 1.0 | 1.0 |
Change in local heat flux to the coolant from the fuel due to CRUD buildup | 1.7 | 1.5 | 3.0 | 1.0 |
Heat flux distribution (new phenomenon) | 3.0 | 1.0 | − | − |
Phenomenon | Importance (PIRT update, 2017) | Knowledge (PIRT update, 2017) | Importance (Mini-PIRT, 2014) | Knowledge (Mini-PIRT, 2014) |
---|---|---|---|---|
Local changes in rod power due to burn-up | 2.0 | 2.2 | 3.0 | 2.0 |
Fuel thermal conductivity changes as a function of burn-up | 1.5 | 1.8 | 3.0 | 2.0 |
Changes in effective CRUD conductivity due to internal fluid flow and boiling | 2.0 | 1.0 | 3.0 | 2.0 |
CRUD removal due to transient power changes | 2.0 | 1.0 | 3.0 | 2.0 |
Fission product gas | 1.0 | 1.3 | 1.0 | 2.0 |
Pellet swelling | 1.0 | 1.3 | 3.0 | 2.0 |
Contact between the pellet and the clad | 1.0 | 1.3 | 3.0 | 2.0 |
Phenomenon | Importance (PIRT update, 2017) | Knowledge (PIRT update, 2017) | Importance (Mini-PIRT, 2014) | Knowledge (Mini-PIRT, 2014) |
---|---|---|---|---|
Local boron density increases absorption | 2.5 | 2.8 | 3.0 | 3.0 |
Moderator displaced by CRUD and replaced with an absorber | 1.6 | 2.0 | 1.0 | 3.0 |
Xenon impact on steady-state transients | 1.0 | 1.8 | 3.0 | 3.0 |
Geometry changes in the pellet | 0.5 | 1.3 | 1.0 | 2.0 |
Cross section changes | 2.7 | 2.7 | 3.0 | 2.0 |
Fission product production | 1.3 | 1.7 | 2.0 | 2.0 |
Fission product decay constants | 1.3 | 1.7 | 3.0 | 3.0 |
Simplified decay chains | 1.0 | 1.0 | 2.0 | 2.0 |
Boron induced shift in neutron spectrum | 1.5 | 2.0 | 2.0 | 2.0 |
Boron depletion due to exposure to neutron flux in the coolant | 2.0 | 2.2 | 1.0 | 1.0 |
Boron depletion due to exposure to neutron flux in the CRUD | 3.0 | 2.0 | 1.0 | 1.0 |
Fuel depletion and neutron flux calculation resolution disparity | 1.0 | 1.8 | 1.0 | 1.0 |
Boron concentration computation method | 0.8 | 1.6 | 1.0 | 1.0 |
Iron and nickel neutron absorption (new phenomena) | 2.0 | 3.0 | - | - |
new phenomena were added in the 2017 PIRT update (e.g., heat flux distribution was added as a new phenomenon considered important for modeling the subchannel thermal-hydraulics, and boron exchange in and out of the CRUD was added as a new phenomenon considered important for modeling the coolant chemistry), and
scores for several phenomena were changed in the 2017 PIRT update.
The above-mentioned changes can be attributed to the increased understanding of the CIPS phenomena gained from model development and VVUQ after the first iteration of the PIRT in 2014. However, due to the subjective nature of the PIRT process, the differences may also stem from bias and disparity in the opinions of the 2014 and 2017 expert groups. Therefore, neither the preliminary PIRT nor the update should be considered exhaustive, and this is acknowledged as a current shortcoming of the V & V assessment. Given increased priority and resources in the future, or for any new CPs undertaken, a more comprehensive PIRT should be conducted.
The disagreement in the experts' opinions can be minimized by adopting an argumentation technique [47,48]. The argumentation technique makes use of explicit classifiers such as “claim,” “argument,” “justification,” “assumption,” “context,” and “evidence” to represent any piece of information. Any claim regarding the knowledge and importance of a phenomenon needs to be supported by relevant evidence or justification. During a PIRT exercise, the experts may present data or evidence to support their opinions (or claims); however, this information sometimes gets lost in the discussion and is not properly documented. By conducting the PIRT within the structured framework created by the argumentation technique, the uncertainties due to the subjective nature of PIRT can be minimized.
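As an illustration, a single PIRT judgment recorded with such classifiers might look like the following sketch; the data class and field contents are hypothetical and do not correspond to an existing CASL tool.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ArgumentedRanking:
    claim: str                      # the graded judgment being asserted
    context: str                    # scenario and figure of merit the claim applies to
    justification: str              # expert's reasoning for the grade
    assumptions: List[str] = field(default_factory=list)
    evidence: List[str] = field(default_factory=list)   # citations to data, reports, or analyses

ranking = ArgumentedRanking(
    claim="Boron exchange in and out of the CRUD: importance = 3 for CIPS",
    context="PWR normal operation; figure of merit = boron mass distribution",
    justification="Boron uptake in the porous CRUD directly drives the power shift.",
    assumptions=["Subcooled boiling is present on high-power rods"],
    evidence=["Plant CIPS operating experience", "MAMBA boron uptake analyses"],
)
print(ranking.claim)
```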
5.3 Step 3: Define Requirement for Model Development and Assessment.
The third step in the assessment process defines the requirements for code capability development and assessment, i.e., code VVUQ. Requirements for code capability development are defined by determining the model and data needs with respect to the phenomena in the PIRT. Requirements for code assessment are defined based on the target maturity levels of the PCMM attributes for the respective CP.
5.4 Step 4: Map Requirements to Code Capability.
The fourth step in the assessment process is focused on mapping the code development and assessment requirements to the relevant single-physics and/or coupled simulation codes in VERA—CS. This step can be considered analogous to a transition from qualitative requirements to quantitative ones. For example, if the effect of CRUD deposition on cladding temperature is identified as an important phenomenon, then the associated requirement would be that the code must be able to compute CRUD deposition with a specified accuracy, precision, and range of validity. Mapping the requirements to code capability also helps in identifying gaps (due to model deficiencies and/or a lack of data for validation).
A backlog of code requirements is established by examining the cost of implementation and the expected pay-off for each of the phenomena. The code development teams bear the responsibility for model development and VVUQ. Each code development team examines its resources (i.e., developer time, computing hardware, etc.) and decides how much of the code requirement backlog it can address.
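A minimal sketch of what one entry in such a requirement backlog might look like is given below; the record structure, field names, and example values are illustrative assumptions rather than the actual CASL requirement format.

```python
from dataclasses import dataclass

@dataclass
class RequirementBacklogEntry:
    phenomenon: str        # phenomenon from the PIRT
    code: str              # single-physics code in VERA expected to capture it
    quantity: str          # quantity the code must compute
    accuracy_target: str   # required accuracy/precision (hypothetical wording)
    validity_range: str    # conditions over which the requirement applies
    importance: float      # PIRT importance of the phenomenon (0-3)
    cost: float            # judged cost of implementation (hypothetical 0-3 scale)

backlog = [
    RequirementBacklogEntry(
        phenomenon="CRUD deposition on the fuel rod surface",
        code="MAMBA",
        quantity="CRUD thickness and boron mass per rod",
        accuracy_target="within an agreed tolerance of plant/PIE observations",
        validity_range="PWR normal operation with subcooled boiling",
        importance=3.0,
        cost=2.0,
    ),
]

# Order the backlog by pay-off per unit cost (an assumed prioritization rule).
for entry in sorted(backlog, key=lambda e: e.importance / e.cost, reverse=True):
    print(f"{entry.code}: {entry.phenomenon} ({entry.quantity})")
```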
5.5 Step 5: Assemble Evidence for Maturity Assessment.
The fifth step in the assessment process is focused on the accumulation of VVUQ evidence to support the PCMM assessment. The assembly of evidence for the PCMM is supported by the input of the CASL researchers and code teams working on the different aspects of model development and VVUQ activities in CASL. Therefore, the collection of evidence is guided by:
Direct statements from the code teams working on the development and VVUQ of the codes.
Data/information gathered from the user and theory manuals for the various codes, as well as documentation of VVUQ activities such as verification test problems (e.g., observing the correct order of convergence for the numerical discretization schemes used in the codes), comparison to validation data, and uncertainty or sensitivity studies.
These two sources of information are closely related, as the statements from the CASL code teams regarding the PCMM attributes need to be backed up by supporting data or information from the CASL technical milestone documentation. A careful review of the relevant documents is required to gather the specific set of information that supports the assessment of a PCMM attribute at a specific level. During this process, gaps in code functionality and VVUQ are also identified. Gaps act as counterevidence in the PCMM assessment as they undermine confidence in the codes' capabilities. In this way, a body of evidence is assembled that forms the sole basis for the maturity assessment. There is some subjectivity in assessing this evidence, and the authors acknowledge that there may be some disagreement about the numerical scores. The complete documentation of the evidence related to the CIPS CP, as reported earlier in the CASL V & V assessment [46], is shown in Tables 9–19.
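The evidence tables that follow index each item (e.g., MP.1.1.1), assign it a high-, medium-, or low-level evidence category (HLE/MLE/LLE), and note the PCMM attributes it supports. A minimal sketch of how such records could be organized and queried per attribute is shown below; the class and grouping logic are illustrative assumptions, not CASL software.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EvidenceItem:
    index: str              # e.g., "MP.1.3.4", following the indexing style of Tables 9-19
    category: str           # evidence level: "HLE", "MLE", or "LLE"
    description: str        # what the item documents or demonstrates
    attributes: List[str]   # PCMM attributes the item bears on, e.g., ["CVER"]
    is_gap: bool = False    # gaps count against the maturity claim

items = [
    EvidenceItem("MP.1.3.4", "LLE",
                 "MMS verification of the 2D multigroup transport solver", ["CVER"]),
    EvidenceItem("CT.1.1.3", "HLE",
                 "Code verification work is insufficient", ["CVER"], is_gap=True),
]

# Group supporting evidence and gaps per PCMM attribute for the assessment discussion.
for attr in ["CVER"]:
    support = [e.index for e in items if attr in e.attributes and not e.is_gap]
    gaps = [e.index for e in items if attr in e.attributes and e.is_gap]
    print(f"{attr}: supporting evidence {support}, gaps {gaps}")
```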
Index | Category | Description | Relevance/Comments |
---|---|---|---|
MP.1.1. 1 | HLE | Comprehensive MPACT V&V manual [7–10,13]. | |
MP.1.1. 2 | HLE | Comprehensive unit tests and regression tests support the SQA of MPACT [9]. | SQA |
MP.1.1. 3 | HLE | Some peer reviews conducted. | Need tracking of issues and resolution |
MP.1.1. 4 | HLE | Rigorous version control [9,11]. | SQA |
MP.1.2. 1 | MLE | Unit test for individual functions and subroutines [11]. | SQA |
MP.1.2. 2 | MLE | Regression tests involve functional tests encompassing different sections of the code with various inputs [9–11]. | CVER |
MP.1.2. 3 | MLE | MPACT software test plan, requirements, and test report [11]. | SQA |
MP.1.2. 4 | MLE | Work is in progress to implement both the consistency test and MMS test in the MPACT reactor code as part of the code verification and overall quality assessment effort for MPACT [13]. | CVER |
MP.1.3. 1 | LLE | Unit tests for solver kernels test against analytical solutions [9] | Including CVER |
MP.1.3. 2 | LLE | Key capabilities tested [9]: | Including CVER |
• Geometry | RGF | ||
• Transport solvers: P0 and Pn 2D Method of Characteristics (MOC), P0 and Pn 2D-1D with SP3 and Nodal Expansion Method (NEM) solver | PMMF ||
• Other solvers: depletion search (boron, rod), multistate, Eq Xe/Sm, cross section shielding, coarse mesh finite difference (CMFD), cusping treatment | |||
• Parallel solver: message passing interface (MPI), Open-source message passing interface (OpenMPI). | |||
MP.1.3. 3 | LLE | • Code verification using the method of exact solutions, | CVER |
• Benchmark problem 3.4 in Ganapol [49] has been used as a code verification test for MPACT [13], | |||
• MPACT agreed with all cases to within a few pcm [13]. | |||
MP.1.3. 4 | LLE | Code verification using method of manufactured solution (MMS) | CVER |
• Applied MMS to the C5G7 benchmark problem to verify the 2D multigroup neutron transport solver. | |||
• The relative error of the scalar flux of the first energy group is ∼1 × 10⁻⁸. The relative error of the scalar flux of the thermal energy group is close to ∼1 × 10⁻⁵. This close-to-zero error indicates that the scalar flux from the fixed-source problem converges to the same solution as from the eigenvalue calculation [13]. |
Index | Category | Description | Relevance/Comments |
---|---|---|---|
MP.2.1. 1 | HLE | Supported by a test involving mesh convergence analysis and method of manufactured solution [9,13]. | SVER |
MP.2.1. 2 | HLE | Numerical effects are quantitatively estimated to be small on some SRQ (system response quantities) [9,13]. | SVER |
MP.2.1. 3 | HLE | I/O independently verified [9]. | |
MP.2.1. 4 | HLE | Some peer reviews conducted. | Need tracking of issues and resolution |
MP.2.2. 1 | MLE | Mesh convergence analysis: work is based on evaluating the sensitivity of k-effective to different MOC parameters (flat source region mesh, angular quadrature, ray spacing) for the VERA benchmark problems [9]. | SVER |
MP.2.2. 2 | MLE | Method of Manufactured Solution will be used to quantify the rate of convergence of the solution to MOC parameters [7,10]. | Gap |
MP.2.3. 1 | LLE | Test performed for regular pin cell (VERA—CS benchmark problem 1a) and assembly (VERA—CS benchmark problem 1a) [9] | RGF |
MP.2.3. 2 | LLE | The test encompasses radial and azimuthal discretization, ray spacing, angular quadrature, the coupling between discretization parameters [9]. | RGF |
MP.2.3. 3 | LLE | MPACT library generation procedure [9]. | PMMF |
MP.2.3. 4 | LLE | Testing (and improvement) of the nuclide transmutation solver (ORIGEN) application programing interface (API) [9]. | PMMF |
MP.2.3. 5 | LLE | Extensive solution verification test performed for 3D assembly geometry and 2D pin geometry [13]. | SVER |
Index | Category | Description | Relevance/Comments |
---|---|---|---|
MP.3.1. 1 | HLE | Quantitative assessment of predictive accuracy for key SRQ from IETs and SETs [7,10]. | SVAL, IVAL |
MP.3.1. 2 | HLE | MPACT validation is supported by Refs. [8] and [9]: measured data from different criticality tests, operating nuclear power plants, measured isotopes from irradiated fuel, calculations from continuous-energy MC simulation, and use of postirradiation examination (PIE) tests for evaluation and validation of the isotopic depletion capability in MPACT. | SVAL, IVAL |
MP.3.1. 3 | HLE | Demonstrated capability to support CIPS. | Validation of phenomena using experimental data |
MP.3.1. 4 | HLE | Additional validation is required. | Gap |
MP.3.2. 1 | MLE | Criticality tests encompass: critical condition, fuel rod fission rate distribution, control rod burnable poison worth, isothermal temperature coefficient. | |
MP.3.2. 2 | MLE | Operating nuclear power plants: critical soluble boron concentration, MOC physics parameter- control rod worth, temperature coefficient, fission rates. | RGF |
MP.3.2. 3 | MLE | Measured isotopes from the postirradiation experiment: gamma scans of Cs-137, burnup based on Nd-148, full radiochemical assay of the major actinides and fission products. | |
MP.3.2. 4 | MLE | Continuous energy Monte Carlo simulation: 3D core pin-by-pin fission rates at operating condition, intrapin distribution of fission, capture rates, reactivity, pin power distribution, gamma transport, thick radial core support structure effects. | |
MP.3.3. 1 | LLE | Babcock & Wilcox critical experiments (validation based on fast-flux, fission power, and cross section data). | The successful validation shows adequate quality in RGF and PMMF |
MP.3.3. 2 | LLE | Development of preliminary VERA—CS Crud induced localized corrosion modeling capability (milestone: L2:PHI.P17.03) [53]. | RGF, PMMF |
MP.3.3. 3 | LLE | Special power excursion reactor test (SPERT) (validation based on fast-flux, fission power and cross section data) | IVAL, RGF, PMMF |
MP.3.3. 4 | LLE | DIMPLE critical experiments (validation based on fast-flux, fission power, and cross section data). | IVAL, RGF, PMMF |
MP.3.3. 5 | LLE | Watts Bar Nuclear plant (validation based on fast-flux, fission power, isotopics, boron feedback to neutronics, cross section data, burn up). | IVAL, RGF, PMMF |
The MPACT validation for the WB2 startup tests (Godfrey, 2017) [54]. | |||
MP.3.3. 6 | LLE | Benchmark for evaluation and validation of reactor simulations (BEAVRS), validation based on fission power, isotopics, boron feedback to neutronics, cross section data, burn up. | IVAL, RGF, PMMF |
MP.3.3. 7 | LLE | Validation by code-to-code comparisons using Monte Carlo N-Particle (MCNP) transport code. | IVAL, RGF, PMMF |
MP.3.3. 8 | LLE | Reaction rate analysis. | |
MP.3.3. 9 | LLE | VERA progression problems 1–4. | RGF, PMMF |
MP.3.3. 10 | LLE | Extensive PWR pin and assembly benchmark problems. | RGF, PMMF |
Index | Category | Description | Relevance/ Comments |
---|---|---|---|
CT.1.1. 1 | HLE | SQA is based on unit test and regression tests. | SQA |
CT.1.1. 2 | HLE | Documentation of the SQA of base code is required. | Gap |
CT.1.1. 3 | HLE | Code verification work is insufficient. | Gap |
CT.1.1. 4 | HLE | Solution verification study performed by mesh refinement study. | SVER |
CT.1.2. 1 | MLE | Unit tests: tests for different classes/procedures. | SQA |
CT.1.2. 2 | MLE | Regression tests: unit tests, verification problems, and validation problems used as regression tests. | SQA
CT.1.2. 3 | MLE | Code verification: Few models have been verified using an analytical solution. | Limited CVER |
CT.1.2. 4 | MLE | Solution verification by mesh refinement study for progression problem 6. | Limited SVER
CT.1.3. 1 | LLE | (Unit test) Covers input reading, fluid properties, units, etc. | SQA |
CT.1.3. 2 | LLE | • (Regression test) Covers both steady-state and transient simulation. | SQA |
• All V&V test inputs are part of the CTF repository. | |||
• PHI continuous testing system. |||
CT.1.3. 3 | LLE | Tested phenomena: Single phase wall shear, grid heat transfer enhancement, isokinetic advection, shock tube, water faucet. | CVER |
CT.1.3. 4 | LLE | Test performed with and without spacer grids, QoI: total pressure drop across the assembly. | SVER |
CT.1.3. 5 | LLE | Validation tests are used as regression tests, which are run on a continual basis to demonstrate that code results are not changing. | SQA
CT.1.3. 6 | LLE | Code-to-code benchmarking with the subchannel code VIPRE-01. | SQA
CT.1.3. 7 | LLE | Comparison of CTF predicted rod surface temperature with STAR CCM+ predicted rod surface temperature. | SQA |
CT.1.3. 8 | LLE | Details on CTF coverage by code and solution verification are provided in the latest CTF code and solution verification report. There are some gaps in the assessment (grid shear enhancement and grid heat transfer enhancement are not tested). Convergence behavior and numerical errors need to be quantified [16]. | CVER, SVER, Gap
CT.1.3. 9 | LLE | Solution verification tests were conducted [16]. | SVER |
• The first solution verification problem in assembly geometry is a modification of Problem 3 in CASL's Progression Test Suite (Godfrey) for decoupled codes. |||
• The second solution verification test in assembly geometry is a modification of Problem 6 in the Progression Test Suite, which emphasizes coupled CTF and MPACT calculations using VERA—CS. These solution verification tests represent a nearly complete integration of the physics capabilities in assembly geometry. | |||
CT.1.3. 10 | LLE | Solution and code verification of the wall friction model in CTF [18]. | CVER and SVER |
CT.1.3. 11 | LLE | Solution verification on the governing equations for the water faucet problem [55]. | SVER |
CT.1.3. 12 | LLE | Two-phase pressure drop code verification study. | CVER |
Index | Category | Description | Relevance/ Comments |
---|---|---|---|
CT.2.1. 1 | HLE | Lack of separate effect validation. | Gap |
CT.2.1. 2 | HLE | Extensive integral effect validation was done. | |
CT.2.2. 1 | MLE | Testing of component models (correlations). | See Table 14 for details |
CT.2.2. 2 | MLE | Integral-effect test validation. | See Table 15 for details
CT.2.3. 1 | LLE | High-to-low fidelity simulation using STAR CCM+ was used to improve modeling of the grid heat transfer effect in rod bundle geometry. | Accuracy improvement
CT.2.3. 2 | LLE | Development of preliminary VERA—CS Crud induced localized corrosion modeling capability (milestone: L2:PHI.P17.03) [53]. | RGF |
CT.2.3. 3 | LLE | Improvement in representation and geometric fidelity of CTF was shown by the calibration study using measured plant data (Watts Bar Nuclear Plant) and experimental loop data (Westinghouse Advanced Loop Tester, WALT) [55]. | RGF
Phenomenon | Model | Validation test status | Verification test status |
---|---|---|---|
Single-phase convection | Dittus–Boelter | Completed | — |
Subcooled boiling heat transfer | Thom | Completed | — |
Single-phase grid spacer pressure loss | Form loss | Completed | — |
Single-phase wall shear | Darcy–Weisbach | Completed | Completed |
Grid heat transfer enhancement | Yao–Hochreiter–Leech | — | — |
Single-phase turbulent mixing | Mixing-length theory | Completed | Completed |
Pressure-directed cross flow | Transverse momentum equation | — | — |
Effect | Experiments |
---|---|
Pressure drop | BWR full-size fine-mesh bundle test (BFBT), FRIGG test loop |
Void/quality | PWR subchannel bundle test (PSBT), FRIGG test loop |
Single-phase turbulent mixing | General Electric (GE) 3 × 3 bundle tests, Combustion Engineering (CE) 5 × 5 rod bundle tests, RPI
Turbulent mixing/void drift | GE 3 × 3 bundle tests, BFBT |
DNB | Harwell high-pressure loop test, Takahama |
Heat transfer | CE 5 × 5 rod bundle test |
Natural circulation | Pacific Northwest National Laboratory (PNNL) 2 × 6 rod array |
Fuel temperature | Halden test |
Index | Category | Description | Relevance/comments |
---|---|---|---|
MA.1.1. 1 | HLE | As part of the MAMBA3D refactoring, the developers are implementing a unit and regression testing protocol that should result in robust source code verification when the code is completed at the end of PoR15. | Gap
MA.1.1. 2 | HLE | SQA needs some improvement. | Gap |
MA.1.1. 3 | HLE | Low-level code verification was performed. | Gap |
MA.1.1. 4 | HLE | Solution verification is not done for CASL CPs. | Gap |
MA.1.1. 5 | HLE | Some validation work was performed (SET, IET, and plant analysis). | Gap |
MA.1.2. 1 | MLE | Solution verification and code verification using the analytical solutions are in progress [56]. | |
MA.1.2. 2 | MLE | Simulation of Westinghouse WALT Experiment: | |
• Comparison of cladding temperature versus rod power and crud thickness against the WALT data. | |||
MA.1.2. 3 | MLE | An initial CIPS study compared axial offset predicted by coupled MAMBA/CTF/MPACT with plant data for Watts Bar. | Multiple codes |
MA.1.2. 4 | MLE | Plant analysis: | Multiple codes |
• CIPS study by coupled MAMBA (1D)/CTF/MPACT simulations compared with plant data. | |||
• Oxide thickness and morphology compared with an operating plant. | |||
MA.1.3. 1 | LLE | Software quality assurance: | SQA Gap |
• Unit testing (water properties). | |||
• Unit test coverage is good and most of the important routines are tested. | |||
• The automatic test coverage feature reported coverage of ∼98%. | |||
• Source properties and steam generator properties are not tested in the assessed version of MAMBA [56]. |||
MA.1.3. 1 | LLE | Comparisons between the model in FACTSAGE code and MAMBA [57]. | SQA, PMMF |
MA.1.3. 2 | LLE | Comparison to BOA 3.0 code for heat transfer/chimney boiling model, mass evaporation rate versus crud thickness, pin power, and thermochemistry. | Quasi-CVER |
MA.1.3. 3 | LLE | Comparison to MAMBA-BDM to verify cladding temperature and boiling velocity. | Quasi-SVER |
MA.1.3. 4 | LLE | Convergence studies for the main quantities of interest as a function of the radial mesh density are completed. Convergence studies with respect to the internal time-step size are completed [56]. | |
MA.1.3. 5 | LLE | Code verification and solution verification tests conducted [56]: | CVER and SVER |
• The thermal and mass transport solvers were compared to analytical solutions for a simple diffusion problem (no convection or sinks/sources). | |||
• A simplified thermal diffusion problem with a sink term was solved by introducing a few minor code changes and compared to the form of the corresponding analytical solution. | |||
• A simplified convection-diffusion problem was implemented by setting reaction rates for internal chemical reactions to zero and choosing the concentrations of Li and B to avoid precipitation of Li2B4O7. | |||
• The solution to the CRUD growth rate equation was verified by comparison to an analytical solution. | |||
MA.1.3. 6 | LLE | Inference of CRUD model parameters from plant data [55]. | IVAL (partial credit) calibration study |
MA.1.3. 7 | LLE | Improvement in the MAMBA source term model was achieved by calibration using measured plant data and experimental loop data. The calibration process was able to estimate thermophysical and growth rate parameters in MAMBA given experimental evidence in the form of flux maps and thermocouple measurements. | IVAL (partial credit) calibration study
The small-scale WALT loop calibration demonstrated the ability to perform statistical inference of the thermophysical crud parameters present in MAMBA given an experimental data set from a small-scale crud test loop using a Markov Chain Monte Carlo sampler [55]. | |||
MA.1.3. 8 | LLE | Improvement in representation and geometric fidelity of MAMBA was shown by the calibration study using measured plant data (Watts Bar Nuclear Plant) and experimental loop data (Westinghouse Advanced Loop Tester) [55]. | RGF
MA.1.3. 9 | LLE | Development of preliminary VERA—CS Crud induced localized corrosion modeling capability (milestone: L2:PHI.P17.03) [53]. | RGF |
Index | Category | Description | Relevance/ Comments |
---|---|---|---|
BI.2.1. 1 | HLE | IET and SET validation work performed for key physical phenomena related to CASL quantities of interest [58,59]. |
BI.2.2. 1 | MLE | LWR validation (48 Cases): | |
BI.2.2. 2 | MLE | Validation metrics: | |
o Fuel centerline temperature through all phases of fuel life | |||
o Fission gas release | |||
o Clad diameter (PCMI). | |||
BI.2.3. 1 | LLE | LWR fuel benchmark: reasonable prediction of centerline temperature. | |
BI.2.3. 2 | LLE | LWR fuel benchmark: rod diameter prediction with large errors. | Gap |
BI.2.3. 3 | LLE | LWR fuel benchmark: large uncertainty in key models | Gap (need SVER) |
o Relocation (and recovery) | |||
o Fuel (swelling) and clad creep | |||
o Frictional contact | |||
o Gaseous swelling (at high temperature). |||
BI.2.3. 4 | LLE | L3: FMC.CLAD.P13.04 – Cluster dynamics modeling of Hydride precipitation. | UQ (data assessment) |
BI.2.3. 5 | LLE | • SET (Bursting experiments) and IET validation of BISON for LOCA behavior [59]. | SVAL and IVAL |
• Validation of BISON to integral LWR experiment (IET validation) [58]. |
Index | Category | Description | Relevance/comments |
---|---|---|---|
VE.1.1. 1 | HLE | The initial VERA—CS validation efforts with WB Unit 1 and BEAVRS provide sufficient basis to propose metrics that can be used to assess the adequacy of the PWR core follow calculations for addition to the VERA—CS validation base. | |
VE.1.1. 2 | HLE | For every new VERA—CS reactor analyzed, the metrics shown in Table 19 were suggested as an initial proposal. | |
VE.1.2. 1 | MLE | Specific attention/analysis would be expected for any plants/cycles/measurements that fall outside of these metrics (VE.1.1.2). |
VE.1.2. 2 | MLE | A red-flag condition would be automatically generated for results outside this metric (VE.1.1.2) and would require reevaluation and review before the data are admitted to the validation base. |
VE.1.2. 3 | MLE | The TIAMAT code for MPACT-BISON code coupling requires significant V&V work. | References [60–62] Gap |
VE.1.3. 1 | LLE | Godfrey [54,63] successfully demonstrated the ability of VERA—CS to model the operating history of the Watts Bar Unit 1 nuclear plant Cycles 1–12 and Watts Bar Unit 2. A rigorous benchmark was performed using criticality measurements, physics testing results, critical soluble boron concentrations, and measured in-core neutron flux distributions. | PMMF, RGF
VE.1.3. 2 | LLE | The measured data provided for BEAVRS include Cycles 1 and 2 zero-power physics test (ZPPT) results, power escalation and hot full power (HFP) measured flux maps, and HFP critical boron concentration measurements for both cycles. In general, the VERA—CS predictions for Cycle 1 are in good agreement with the plant data. | PMMF, RGF
VE.1.3. 3 | LLE | Cycle 2 of BEAVRS has been completed and similar results were observed (to be documented). | PMMF, RGF
VE.1.3. 4 | LLE | Need to verify the MPACT-CTF coupling for a more general range of applications to include, | Gap |
• Nonsquare cells, complex composition mixtures such as coolant+grid mixtures, and regions with major variation (e.g., above/below the region CTF models). | |||
• The impact of thermal expansion on the verification of the MPACT-CTF coupling. | |||
VE.1.3. 5 | LLE | L2:VMA.P12.01—data assimilation and uncertainty quantification using VERA—CS for a core wide LWR problem with depletion [64]. | UQ |
VE.1.3. 6 | LLE | L2:VMA.VUQ.P11.04—uncertainty quantification analysis using VERA—CS for a PWR fuel assembly with depletion. | UQ |
VE.1.3. 7 | LLE | L2:VMA.P13.03—initial UQ of CIPS [65]. | UQ |
VE.1.3. 8 | LLE | Uncertainty quantification and sensitivity analysis with CASL core simulator VERA—CS [66]. | UQ/SA |
VE.1.3. 9 | LLE | Uncertainty quantification and data assimilation (UQ/DA) study on a VERA core simulator component for CRUD analysis [67]. | UQ |
VE.1.3. 10 | LLE | Improvement in representation and geometric fidelity of VERA—CS (MAMBA and CTF) was shown by the calibration study using measured plant data (Watts Bar Nuclear Plant) and experimental loop data (WALT experiment) [55]. | RGF
VE.1.3. 11 | LLE | Development of preliminary VERA—CS Crud induced localized corrosion modeling capability (milestone: L2:PHI.P17.03) [53]. | RGF (MAMBA, MPACT, and CTF) |
5.6 Step 6: Evidence Classification and Organization.
Each piece of evidence is labeled with an index of the form AB.x.y.z, where AB identifies the code to which the evidence refers, x corresponds to the PCMM attribute or set of attributes for which the evidence was identified, y is a level identifier that indicates the level of detail of the evidence, and z is a counter that differentiates between multiple pieces of evidence. The level of detail of the evidence is represented using three levels:
High-level evidence (HLE): Global statement or activity related to model development and VVUQ of code,
Medium-level evidence (MLE): Specific task to support the high-level evidence,
Low-level evidence (LLE): Reference to performance or test details.
Due to space constraints, it is not possible to include all the details from the CASL milestone reports and documentation in the evidence tables. Therefore, key information is abstracted from the relevant sources and included as the evidence description. The “relevance/comment” column in the evidence tables clarifies or contains the following information:
What is the relevant PCMM attribute for the evidence?
Does the evidence refer to a gap in M & S functionality or assessment (VVUQ)?
Any comment or further detail related to the evidence.
The evidence in Tables 9–18 is classified and organized according to the aforementioned scheme. As an example, consider MP.1.1.1 (in Table 9), which is a high-level piece of evidence stating the presence of comprehensive documentation related to verification and validation of MPACT. However, this statement does not include any specific details related to the type of tests performed or their results. Such information is captured by low-level evidence; e.g., evidence MP.1.3.1 indicates that a code verification exercise was performed using the method of exact solutions, benchmark problem 3.4 in Ganapol [49] was used as a code verification test for MPACT, and MPACT agreed with all cases to within a few per cent mille (pcm) [13].
5.7 Step 7: Map Predictive Capability Maturity Model Attribute to Supporting Evidence.
The seventh step in the assessment process is focused on mapping the evidence to the relevant PCMM attributes. The CASL VERA—CS is developed for the different applications mentioned in Section 2. A common body of evidence is used for the assessment of CASL VERA—CS for the different CPs. However, the significance of the evidence is governed by its relevance to the specific CP (governed by the PIRT) and the corresponding PCMM attribute. Three levels, i.e., high (H), medium (M), and low (L), are used to specify the level of significance. Mapping evidence to PCMM attributes helps in assigning the PCMM score (maturity level) and provides credibility to the assessment of the attributes. Tables 20 and 21 show the results of mapping the evidence for the assessment of different PCMM attributes of VERA for CIPS. The evidence in these tables is graded based on its significance level with respect to the PCMM attributes and CIPS; e.g., evidence MP.1.3.1 and MP.1.3.4 are graded at a high significance level (H) as they directly support code verification of the neutronics component (MPACT) in VERA—CS. As another example, consider evidence MP.3.3.6, MP.3.3.7, MP.3.3.9, and MP.3.3.10 in Table 11. This evidence is related to the validation of MPACT using data from different test facilities and benchmark problems. However, it adds value to the assessment of three different attributes at different significance levels: (1) integral effect test validation of VERA—CS at the low significance level (supporting validation of the neutronics component in CASL VERA—CS) (see Table 21), (2) representation and geometric fidelity of VERA—CS at the medium significance level (see Table 20), and (3) physics and material model fidelity at the low significance level (see Table 20). The significance level for this evidence is medium or low because of scaling issues and assumptions pertaining to the test facilities and benchmark problems.
Significance | |||||
---|---|---|---|---|---|
PCMM attribute | H | M | L | Gap/overall evaluation | |
RGF: Representation and Geometric Fidelity | MP.1.3. 2 | MP.3.3. 1 | Marginal [1.5] | ||
MP.2.3. 1 | MP.3.3. 3 | ||||
MP.2.3. 2 | MP.3.3. 4 | ||||
MP.3.2. 2 | MP.3.3. 5 | ||||
MA.1.3. 8 | MP.3.3. 6 | ||||
MA.1.3. 9 | MP.3.3. 7 | ||||
CT.2.3. 2 | MP.3.3. 9 | ||||
CT.2.3. 3 | MP.3.3. 10 | ||||
VE.1.3. 10 | CT.2.2. 2 | ||||
VE.1.3. 11 | VE.1.3. 1 | ||||
VE.1.3. 2 | |||||
VE.1.3. 3 | |||||
PMMF: Physics and Material Model Fidelity | MA.1.3. 2 | MP.3.3. 1 | MP.3.3. 6 | Marginal [1.5] | |
MP.2.3. 3 | MP.3.3. 3 | MP.3.3. 7 | |||
MP.2.3. 4 | MP.3.3. 4 | MP.3.3. 9 | |||
VE.1.3. 1 | MP.3.3. 5 | MP.3.3. 10 | |||
VE.1.3. 2 | |||||
VE.1.3. 3 | |||||
Significance | |||||
---|---|---|---|---|---|
PCMM attribute | H | M | L | Gap/overall evaluation | |
SQA: software quality assurance (including documentation) | MA.1.3. 2 | MP.1.1. 2 | CT.1.3. 2 | MP.1.1. 1 | CT.1.1. 2 |
MP.1.1. 3 | MP.1.1. 4 | CT.1.3. 5 | MP.1.2. 1 | MA.1.1. 1 | |
CT.1.1. 1 | CT.1.2. 1 | CT.1.3. 6 | MP.1.2. 2 | MA.1.1. 2 | |
MA.1.3. 1 | CT.1.2. 2 | CT.1.3. 7 | MP.1.3. 1 | Marginal [1.5] (MAMBA) | |
CT.1.3. 1 | MP.1.3. 2 | ||||
CVER: code verification | MP.1.2. 2 | MP.1.3. 1 | MP.2.2. 2 | ||
MP.2.3. 4 | MP.1.3. 2 | CT.1.1. 3 | |||
MP.1.3. 3 | CT.1.3. 3 | CT.1.2. 3 | |||
MP.1.3. 4 | |||||
MA.1.1. 3 | |||||
MP.1.2. 3 | |||||
VE.1.3. 4 | |||||
CT.1.2. 3 | |||||
Need improvement [1] | |||||
CT.1.3. 8 | |||||
CT.1.3. 10 | |||||
CT.1.3. 12 | |||||
MA.1.3. 2 | |||||
MA.1.3. 4 | |||||
MA.1.3. 5 | |||||
SVER: solution verification | MP.2.1. 1 | MP.2.1. 2 | MP.2.2. 1 | MP.2.2. 2 | |
MP.2.1. 4 | MP.2.1. 3 | MP.2.3. 1 | CT.1.2. 4 | ||
MP.2.3. 5 | MP.2.3. 3 | MP.2.3. 2 | MA.1.1. 4 | ||
CT.1.1. 4 | MP.2.3. 4 | MP.3.2. 4 | MA.1.2. 1 | ||
CT.1.2. 4 | CT.1.3. 4 | VE.1.3. 4 | |||
CT.1.3. 9 | Need improvement [1] | ||||
CT.1.3. 11 | |||||
MA.1.3. 3 | |||||
MA.1.3. 5 | |||||
SVAL: separate effects validation | MP.3.1. 1 | MP.2.3. 1 | MP.3.2. 1 | MP.3.1. 4 | |
BI.2.3. 5 | MP.3.1. 3 | MP.3.2. 4 | CT.2.1. 1 | ||
CT.2.2. 1 | MP.3.3. 1 | MA.1.1. 5 | |||
MP.3.3. 7 | Need improvement [1] (MAMBA) | ||||
MP.3.3. 8 | |||||
MP.3.3. 9 | |||||
MP.3.3. 10 | |||||
IVAL: integral effects validation | MP.3.1. 1 | MP.3.1. 2 | MP.3.2. 2 | MP.3.1. 4 | |
MA.1.2. 2 | MP.3.1. 3 | MP.3.2. 3 | MA.1.1. 5 | ||
MA.1.2. 3 | CT.2.1. 2 | MP.3.3. 3 | CT.2.3. 1 | ||
MA.1.2. 4 | MP.3.3. 4 | ||||
MP.3.3. 5 | |||||
VE.1.1. 2 | MP.3.3. 6 | ||||
VE.1.2. 1 | CT.2.2. 2 | Marginal [1.5] | |||
VE.1.2. 2 | MA.1.3. 6 | ||||
BI.2.3. 5 | MA.1.3. 7 | ||||
UQSA: uncertainty quantification and sensitivity analysis | VE.1.3. 5 | None [0] | |||
VE.1.3. 6 | |||||
VE.1.3. 7 | |||||
VE.1.3. 8 VE.1.3. 9 |
The assessment of VERA—CS for a CP is an iterative process, and at the end of each iteration information related to gaps in modeling capability (see Fig. 2 for illustration), data needs, and the status of VVUQ is obtained. This information guides the development and assessment process for the subsequent iteration. However, if the target maturity level for all PCMM attributes is achieved with credible evidence, the assessment is complete. The current assessments of VERA—CS (MPACT, MAMBA, and CTF) for CIPS are shown in Table 22. The PCMM scoring scheme is described in the next section.
PCMM attribute | MPACT | CTF | MAMBA |
---|---|---|---|
Representation and geometric fidelity | 3 | 2 | 2 |
Physics and material model fidelity | 3 | 2 | 1.5 |
Software quality assurance | 2 | 2 | 1 |
Code verification | 2 | 2 | 1 |
Solution verification | 2 | 2 | 1.5 |
Separate effects validation | 2 | 1 | 0 |
Integral effects validation | 2 | 2 | 1 |
Uncertainty quantification | 0 | 0 | 0 |
6 Predictive Capability Maturity Model Scoring Technique
This section describes the process for making scoring decisions in the current assessment. PCMM is based on qualitative descriptors or criteria, and there is no quantitative measure that clearly defines the transition from one level to the next. The four PCMM maturity levels span a very wide range, and the provided qualitative descriptors are sufficient to resolve them. In the current assessment, the scores are determined by carefully reviewing the evidence against the qualitative assessment criteria for the attributes at the specified target levels. During the evidence classification and organization, gaps are clearly identified and documented. These gaps help in determining the completeness of a body of evidence for a specific maturity level.
For all PCMM attributes, the decision process is supported by the phenomenology identified in the PIRT for each CP. For representation and geometric fidelity and for physics and material model fidelity, the maturity scoring is based on the ability of the code(s) to address the dependent phenomenology identified for each CP. For example, CRUD formation involves porosity and chimneys that promote boiling, and the current modeling does not resolve these features. For software quality assurance and engineering, the concept of regression test line coverage was used to help support the decision between maturity scores. For code verification, particular attention was paid to the partial differential equations relevant to simulating the phenomena of interest; the code verification evidence was considered in light of which partial differential equations and associated solvers are tested for convergence behavior. For solution verification, the numerical effects on the SRQs relevant to the CPs were analyzed. For both separate and integral effects validation, the phenomena of interest were closely compared to the available validation data and the associated comparisons to modeled results. For every CP, there is insufficient validation data to support the validation of every phenomenon identified; to distinguish between maturity levels, a simple “majority rule” of validated phenomena was utilized. For uncertainty quantification, only the simulation of quantities of interest relating to the particular CP was considered. Future research in this area should involve a quantitative assessment of uncertainties that can further drive phenomena identification.
7 Conclusion
This paper summarizes the process of assessing VERA—CS for the CASL CPs. The classical PIRT methodology is adopted to identify relevant phenomena with respect to a CP application. Based on the identified phenomena, requirements for model development and assessment are defined and mapped to different codes in VERA—CS. Evaluation of VERA—CS is performed by assessing different PCMM attributes related to the VVUQ of codes. Credibility in the assessment is established by mapping relevant evidence obtained from the VVUQ of codes. The approach described herein has been iteratively applied to VERA—CS, and the incremental findings from each report have been utilized to prioritize code development and VVUQ activities within the CASL program. Given the large volume of heterogeneous data (or evidence) from various modeling and VVUQ activities of different codes related to the CPs, the PCMM serves as a convenient tool for predictive capability evaluation. However, it needs a formal structure and the ability to incorporate evidence that can support the claims regarding the maturity levels achieved by a particular code. Evidence forms the basis of any PCMM assessment. The current assessment takes into account the relevance and level of detail of the evidence. However, this is not sufficient: assumptions and justifications related to the evidence and its grading also need to be incorporated. Incorporating such details is difficult without formalizing the PCMM.
The assessment methodology presented here has certain drawbacks. The process of collecting evidence involves a certain degree of subjectivism. PCMM lacks a quantitative basis to measure the evidence, and qualitative classifiers such as “low,” “medium,” “high,” “some,” and “many” are extensively used. Both PCMM and PIRT rely on expert opinion. Therefore, the assessment may be affected by the knowledge and expertise of the people conducting the PIRT and PCMM. PIRT and PCMM are affected by subjective bias and disparity in the experts' opinions. One way to minimize this is to make use of argumentation techniques. However, this is a topic of further research, and the needs and requirements for such a framework could only be understood after performing PCMM at the initial stage. In this work, a systematic scheme for evidence classification and organization is incorporated to support scoring.
An extension of this work involves the formalization of PCMM as a decision model using argumentation theory and Bayesian networks; that work [50] demonstrates a quantitative approach for maturity assessment using a case study of CTF.
The predictive capability maturity model is not just focused on the results of M & S activities but also on the quality and rigor of the different processes (VVUQ) used to enhance confidence in a simulation tool for a specific application. Assessment using PCMM is based on heterogeneous data from different M & S activities; therefore, it is qualitative in nature. The framework presented here for assessing the predictive capability and maturity of CASL VERA—CS can be utilized by researchers and code developers seeking to assess other M & S codes for other problems. In particular, the authors believe that there is a need to formalize such a framework to address the widespread subjectivity and fallacious “appeal to authority” arguments for asserting M & S code adequacy. Of particular importance are the identification and decomposition of an intended problem of interest and the alignment of evidence and decision attributes for asserting maturity. Furthermore, the generation, documentation, and archival of objective evidence of maturity is a critical prerequisite for this process. The alignment of the problem of interest and evidence is accomplished in the present methodology via the PIRT process and the mapping of evidence to PCMM attributes. The PCMM matrix uses high-level criteria for the assessment of each PCMM attribute. However, for a comprehensive in-depth assessment, further details regarding the relevant subattributes need to be incorporated. A hierarchical model may be suitable for this purpose; however, the qualitative nature of PCMM makes it difficult to adopt a hierarchy in PCMM. Thus, the assessment methodology presented here can serve as a starting point for developing such a framework.
Acknowledgment
The work was performed with support by the U.S. Department of Energy (DOE) via the Consortium for Advanced Simulation of Light Water Reactors (CASL) under contract number DEAC05-00OR22725. The authors would like to express their gratitude to the CASL code team, CASL leadership team, and CASL council for providing the necessary information, data, and evidence that form the basis of the CASL VERA assessment. The authors are also grateful to the reviewers for their insightful comments and suggestions for improving the quality of this paper.
Funding Data
U.S. Department of Energy (Grant No.: DEAC05-00OR22725; Funder ID: 10.13039/100000015).
Nomenclature
- CASL = consortium for advanced simulation of light water reactors
- CIPS = CRUD induced power shift
- CP = challenge problem
- CRUD = chalk river unidentified deposits
- CSAU = code scaling, applicability, and uncertainty
- CVER = code verification
- DNB = departure from nucleate boiling
- IVAL = integral effect test validation
- MAMBA = MPO advanced model for boron analysis code
- MPACT = Michigan parallel characteristics transport code
- PCI = pellet clad interactions
- PCMM = predictive capability maturity model
- PIRT = phenomena identification and ranking table
- PMMF = physics and material model fidelity (a PCMM attribute)
- QOI = quantity of interest
- RGF = representation and geometric fidelity (a PCMM attribute)
- SQA = software quality assurance
- SQE = software quality engineering
- SVAL = separate effect test validation
- SVER = solution verification
- U.S. NRC = United States Nuclear Regulatory Commission
- VERA—CS = virtual environment for reactor applications code suite
- VVUQ = verification, validation, and uncertainty quantification