**Jeffrey T. Fong**: search results 1–20 of 31


Proceedings Papers

*Proc. ASME*. PVP2018, Volume 1A: Codes and Standards, V01AT01A007, July 15–20, 2018

Paper No: PVP2018-84771

Abstract

The ASME Boiler & Pressure Vessel Code Section XI Committee is currently developing a new Division 2 nuclear code entitled the “Reliability and Integrity Management (RIM) program,” with which one can arrive at a risk-informed, NDE-based engineering maintenance decision by estimating and managing all uncertainties over the entire life cycle, including design, material selection, degradation processes, operation, and non-destructive examination (NDE). This paper focuses on the uncertainty of the NDE methods employed for preservice and inservice inspections, which arises from a large number of factors such as the NDE equipment type and age, the operator’s level and years of experience, the probe angle, the flaw type, etc. In this paper, we describe three approaches with which uncertainty in NDE-risk-informed decision making can be quantified: (1) a regression-model approach for analyzing round-robin experimental data such as the 1981–82 Piping Inspection Round Robin (PIRR), the 1986 Mini-Round Robin (MRR) on intergranular stress corrosion cracking (IGSCC) detection and sizing, and the 1989–90 international Programme for the Inspection of Steel Components III-Austenitic Steel Testing (PISC-AST); (2) a statistical design-of-experiments approach; and (3) an expert knowledge elicitation approach. Based on a 2003 Pacific Northwest National Laboratory (PNNL) report by Heasler and Doctor (NUREG/CR-6795), we observe that the first approach relied on round-robin studies that characterized NDE uncertainty for the state of the art of the NDE technology employed from the early 1980s to the early 1990s. This approach is very time-consuming and expensive to implement.
The second approach is based on a design of experiments (DEX) of eight field inspection exercises for finding the length of a subsurface crack in a pressure vessel head using ultrasonic testing (UT), where five factors (operator’s service experience, UT machine age, cable length, probe angle, and plastic shim thickness) were chosen to quantify the sizing uncertainty of the UT method. The DEX approach is also time-consuming and costly, but has the advantage that it can be tailored to a specific defect-detection and defect-sizing problem. The third approach, using an expert panel, is the most efficient and least costly. Using the crack-length results of the second approach, we show how the expert-panel approach can be implemented with a software package named the Sheffield Elicitation Framework (SHELF). The crack-length estimates with uncertainty from the three approaches are compared and discussed, and the significance and limitations of the three uncertainty quantification approaches for risk assessment of NDE-based engineering decisions are presented.
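The two-level factorial analysis behind the DEX approach can be sketched in a few lines: build the design matrix, record a response per run, and estimate each factor's main effect. The factor names follow the abstract, but the response function and its coefficients below are illustrative stand-ins, not the paper's field data.

```python
# Sketch of a two-level design-of-experiments (DEX) main-effect analysis for
# five factors affecting UT crack-length sizing. The synthetic response is an
# assumption for illustration, not measured data.
from itertools import product

factors = ["experience", "machine_age", "cable_length", "probe_angle", "shim_thickness"]

# Full 2^5 design: each run is a tuple of -1/+1 factor levels.
design = list(product([-1, +1], repeat=len(factors)))

def response(run):
    # Hypothetical measured crack length (mm): baseline plus small
    # per-factor contributions (stand-in for a field measurement).
    base = 25.0
    coeffs = [0.8, -0.3, 0.1, 1.2, -0.5]
    return base + sum(c * x for c, x in zip(coeffs, run))

y = [response(run) for run in design]

def main_effect(j):
    # Main effect of factor j: mean response at +1 minus mean response at -1.
    hi = [yi for run, yi in zip(design, y) if run[j] == +1]
    lo = [yi for run, yi in zip(design, y) if run[j] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {f: main_effect(j) for j, f in enumerate(factors)}
```

With an orthogonal two-level design, each main effect comes out as twice the underlying coefficient, which is why the probe-angle factor dominates here.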

Proceedings Papers

*Proc. ASME*. PVP2018, Volume 6B: Materials and Fabrication, V06BT06A074, July 15–20, 2018

Paper No: PVP2018-84730

Abstract

In the aerospace industry, open-hole specimens of composite laminates have been used in standardized tests to generate design allowables. Using the finite element method (FEM)-based tool MicMac/FEA with an ABAQUS code interface and statistical design of experiments, Shah et al. in 2010 [11] studied the average-property-based failure envelope, with uncertainty estimates, of an open-hole specimen with a quasi-isotropic carbon fiber-epoxy laminate. However, their underlying FEM model is deterministic, without uncertainty analysis. In this paper, based on Shah’s FEM model, we developed an FEM model of the uni-axial strength test of holed composite laminates using ABAQUS with a series of quadrilateral S4R and triangular S3R shell element designs. The mesh density ranges from the original 8 × 8 (very coarse) to 48 × 48 (very fine). For each of the meshes, we compute the failure strength from the Hashin failure criteria. We then use a 4-parameter logistic-function nonlinear least-squares fit algorithm to obtain an estimate of the failure strength at infinite degrees of freedom (d.o.f.), as well as its uncertainty at 50,000 d.o.f. and relative-error convergence rates. Our results are then compared with Shah’s, with the additional advantage that our results carry an uncertainty quantification that can be compared with experimental data. The significance and limitations of our method for the uncertainty quantification of an FEM model of the uniaxial strength test of holed composite laminates are discussed.
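The extrapolation step described above can be sketched as a 4-parameter logistic fit of a computed quantity against log10(d.o.f.), with the fitted right-hand asymptote read off as the estimate at infinite d.o.f. The mesh sizes, the "true" converged strength, and the synthetic convergence curve below are assumptions for illustration, not the paper's results.

```python
# Sketch: fit a 4-parameter logistic to results at several mesh densities and
# extrapolate to infinite d.o.f. Data are synthetic, not from the paper.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, y0, y1, k, x0):
    # x = log10(d.o.f.); y -> y0 as x -> -inf, y -> y1 as x -> +inf,
    # so y1 is the asymptotic (infinite-d.o.f.) estimate.
    return y1 + (y0 - y1) / (1.0 + np.exp(k * (x - x0)))

dof = np.array([200.0, 800.0, 3200.0, 12800.0, 51200.0, 204800.0])
x = np.log10(dof)
true_inf = 310.0  # pretend converged failure strength (MPa), an assumption
y = true_inf - 40.0 / (1.0 + np.exp(2.0 * (x - 3.0)))  # synthetic converging data

popt, pcov = curve_fit(logistic, x, y, p0=[y[0], y[-1], 2.0, 3.0])
y_inf = popt[1]               # estimated failure strength at infinite d.o.f.
y_err = np.sqrt(pcov[1, 1])   # 1-sigma uncertainty of that estimate
```

The covariance matrix returned by `curve_fit` is what supplies the uncertainty attached to the asymptotic estimate.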

Proceedings Papers

*Proc. ASME*. PVP2018, Volume 6B: Materials and Fabrication, V06BT06A075, July 15–20, 2018

Paper No: PVP2018-84739

Abstract

A large number of fatigue life models for engineering materials such as concrete and steel are simply a linear or nonlinear relationship between the cyclic stress amplitude, σ_a, and the log of the number of cycles to failure, N_f. In the linear case, the relationship is a power-law relation between σ_a and N_f, with two constants determined by a linear least-squares fit algorithm. The disadvantage of this simple linear fit of fatigue test data is that it fails to predict the existence of an endurance limit, which is defined as the cyclic stress amplitude at which the number of cycles to failure is infinite. In this paper, we introduce a nonlinear least-squares fit based on a 4-parameter logistic function, where the curve of the y vs. x plot has two horizontal asymptotes, namely, y_0 at the left infinity and y_1 at the right infinity, with y_1 < y_0 to simulate a fatigue model with a decreasing y for an increasing x. In addition, we need a third parameter, k, to denote the slope of the curve as it traverses from the upper-left horizontal asymptote to the lower-right horizontal asymptote, and a fourth parameter, x_0, to denote the center of the curve, where it crosses a horizontal line half-way between y_0 and y_1. In this paper, the 4-parameter logistic function is simplified to a 3-parameter function as we apply it to model a fatigue stress-life relationship, because in a stress vs. log(life) plot the upper-left horizontal asymptote, y_0, can be assumed to be a constant equal to the static ultimate strength of the material, U_0. This simplification reduces the logistic function to the following form: y = U_0 − (U_0 − y_1) / (1 + exp(−k (x − x_0))), where y = σ_a and x = log(N_f). The fit algorithm allows us to quantify the uncertainty of the model and to estimate the endurance limit, which is the parameter y_1. An application of this nonlinear modeling technique to fatigue data of plain concrete in the literature yields excellent results.
Significance and limitations of this new fit algorithm to the interpretation of fatigue stress-life data are presented and discussed.
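The 3-parameter fit described above can be sketched directly: pin the upper asymptote at the ultimate strength U_0 and fit y_1 (the endurance limit), k, and x_0 by nonlinear least squares. The value of U_0 and the synthetic S-N data below are assumptions for illustration, not the concrete data from the paper.

```python
# Sketch of the paper's 3-parameter logistic stress-life fit: the upper
# asymptote is pinned at the static ultimate strength U0, and the fitted
# lower asymptote y1 is the endurance-limit estimate. Data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

U0 = 40.0  # static ultimate strength (MPa), assumed known

def sn_model(x, y1, k, x0):
    # x = log10(N_f), y = stress amplitude sigma_a
    return U0 - (U0 - y1) / (1.0 + np.exp(-k * (x - x0)))

x = np.linspace(2.0, 7.0, 12)        # log10 of cycles to failure
y = sn_model(x, 12.0, 1.5, 4.0)      # noiseless synthetic S-N data

popt, pcov = curve_fit(sn_model, x, y, p0=[10.0, 1.0, 4.5])
endurance, k_fit, x0_fit = popt
endurance_sd = np.sqrt(pcov[0, 0])   # 1-sigma uncertainty of the endurance limit
```

On real scattered data the same call returns a nonzero `endurance_sd`, which is the uncertainty quantification the abstract refers to.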

Proceedings Papers

*Proc. ASME*. PVP2018, Volume 6B: Materials and Fabrication, V06BT06A076, July 15–20, 2018

Paper No: PVP2018-84784

Abstract

The determinant of the Jacobian matrix is frequently used in the Finite Element Method as a measure of mesh quality. A new metric is defined, called the Standard Error, based on the distribution of the determinants of the Jacobian matrices of all elements of a finite element mesh. Where the Jacobian norm can be used to compare the quality of one element to another of the same type, the Standard Error compares the mesh quality of different versions of a finite element model where each version uses a different element type. To motivate this new Standard Error, we investigate the geometric meaning of the Jacobian norm on 3D finite elements. This mesh quality metric is applied to 8, 20, and 27 node hexahedra, 6 and 15 node prisms, 4 and 10 node tetrahedra, 5 and 13 node pyramids, and 3, 4, 6, 8, and 9 node shell elements. The shape functions for these 14 element types, or more precisely their first partial derivatives, are used to construct the Jacobian matrix. The matrix is normalized to compensate for size. The determinant of the Jacobian is calculated at Gaussian points within each element. Statistics are gathered to form the Standard Error of the mesh. To illustrate the applicability of this a priori metric, we present two simple example problems having exact answers, and two industry-type problems (a pipe elbow with a crack and a magnetic resonance imaging (MRI) birdcage RF coil resonance), neither of which has an analytical solution. Significance and limitations of using this a priori metric to assess the accuracy of finite element simulations of different mesh designs are presented and discussed.
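The per-element building block of this metric can be sketched for the simplest case, a trilinear hex-8: differentiate the shape functions, assemble the Jacobian at each 2×2×2 Gauss point, and take its determinant. The unit-cube element below is an illustrative assumption; statistics over all elements of a mesh would then give the paper's Standard Error.

```python
# Sketch: det(Jacobian) of a trilinear hex-8 element at the 2x2x2 Gauss
# points, the per-element ingredient of the "Standard Error" mesh metric.
import numpy as np
from itertools import product

# Reference-element corner signs (xi_i, eta_i, zeta_i) in {-1, +1}^3
corners = np.array(list(product([-1, 1], repeat=3)), dtype=float)

def dN(xi, eta, zeta):
    # Derivatives of the 8 trilinear shape functions w.r.t. (xi, eta, zeta)
    g = np.empty((8, 3))
    for i, (xi_i, eta_i, zeta_i) in enumerate(corners):
        g[i, 0] = 0.125 * xi_i * (1 + eta * eta_i) * (1 + zeta * zeta_i)
        g[i, 1] = 0.125 * eta_i * (1 + xi * xi_i) * (1 + zeta * zeta_i)
        g[i, 2] = 0.125 * zeta_i * (1 + xi * xi_i) * (1 + eta * eta_i)
    return g

def det_jacobians(nodes):
    # nodes: (8, 3) physical coordinates ordered like `corners`
    gp = 1.0 / np.sqrt(3.0)
    return [np.linalg.det(dN(*p).T @ nodes) for p in product([-gp, gp], repeat=3)]

unit_cube = 0.5 * (corners + 1.0)   # undistorted element: the unit cube
dets = det_jacobians(unit_cube)     # constant 1/8 for this undistorted cube
```

For a distorted element the eight determinants differ, and their spread across a mesh is what the Standard Error statistic summarizes.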

Proceedings Papers

*Proc. ASME*. VVS2018, ASME 2018 Verification and Validation Symposium, V001T12A001, May 16–18, 2018

Paper No: VVS2018-9320

Abstract

Errors and uncertainties in finite element method (FEM) computing can come from the following eight sources, the first four being FEM-method-specific and the last four model-specific: (1) the computing platform, such as ABAQUS, ANSYS, COMSOL, LS-DYNA, etc.; (2) the choice of element types in designing a mesh; (3) the choice of mean element density or degrees of freedom (d.o.f.) in the same mesh design; (4) the choice of a percent relative error (PRE) or the rate of PRE per d.o.f. on a log-log plot to assure solution convergence; (5) uncertainty in the geometric parameters of the model; (6) uncertainty in the physical and material property parameters of the model; (7) uncertainty in the loading parameters of the model; and (8) uncertainty in the choice of the model. By considering every FEM solution as the result of a numerical experiment for a fixed model, a purely mathematical problem, i.e., solution verification, can be addressed by first quantifying the errors and uncertainties due to the first four of the eight sources listed above, and then developing numerical algorithms and easy-to-use metrics to assess the solution accuracy of all candidate solutions. In this paper, we present a new approach to FEM verification by applying three mathematical methods and formulating three metrics for solution accuracy assessment. The three methods are: (1) a 4-parameter logistic function to find an asymptotic solution of FEM simulations; (2) the nonlinear least squares method in combination with the logistic function to find an estimate of the 95% confidence bounds of the asymptotic solution; and (3) the definition of the Jacobian of a single finite element in order to compute the Jacobians of all elements in an FEM mesh. Using those three methods, we develop numerical tools to estimate (a) the uncertainty of an FEM solution at one billion d.o.f., (b) the gain in the rate of PRE per d.o.f.
as the asymptotic solution approaches very large d.o.f.’s, and (c) the estimated mean of the Jacobian distribution (mJ) of a given mesh design. Those three quantities are shown to be useful metrics to assess the accuracy of candidate solutions in order to arrive at a so-called “best” estimate with uncertainty quantification. Our results include calibration of those three metrics using problems of known analytical solutions and the application of the metrics to sample problems, of which no theoretical solution is known to exist.

Proceedings Papers

*Proc. ASME*. ETAM2018, ASME 2018 Symposium on Elevated Temperature Application of Materials for Fossil, Nuclear, and Petrochemical Industries, V001T04A002, April 3–5, 2018

Paper No: ETAM2018-6711

Abstract

Uncertainty in modeling the creep rupture life of a full-scale component using experimental data at the microscopic (Level 1), specimen (Level 2), and full-size (Level 3) scales is addressed by applying the statistical theory of prediction intervals, and that of tolerance intervals based on the concept of coverage, p. Using a nonlinear least-squares fit algorithm and the physical assumption that the one-sided Lower Tolerance Limit (LTL), at a 95% confidence level, of the creep rupture life, i.e., the minimum time-to-failure, minTf, of a full-scale component, cannot be negative as the lack or “Failure” of coverage (Fp), defined as 1 − p, approaches zero, we develop a new creep rupture life model in which the minimum time-to-failure, minTf, at extremely low “Failure” of coverage, Fp, can be estimated. Since the concept of coverage is closely related to that of an inspection strategy, and if one assumes that the predominant cause of failure of a full-size component is the “Failure” of inspection or coverage, it is reasonable to equate the quantity Fp to a failure probability, FP, thereby leading to a new approach to estimating the frequency of in-service inspection of a full-size component. To illustrate this approach, we include a numerical example using the published creep rupture time data of an API 579-1/ASME FFS-1 Grade 91 steel at 571.1 °C (1060 °F) (API-STD-530, 2007), and a linear least-squares fit to generate the necessary uncertainties for ultimately performing a dynamic risk analysis, where a graphical plot of an estimate of risk with uncertainty vs. a predicted most likely date of a high-consequence failure event due to creep rupture becomes available for a risk-informed inspection strategy associated with energy-generation or chemical-processing plant equipment.
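The linear least-squares step that feeds this analysis can be sketched as a straight-line fit of log rupture time against stress, followed by a one-sided 95% lower prediction bound. The stress levels and noise values below are synthetic stand-ins, not the API-STD-530 Grade 91 data.

```python
# Sketch: linear least-squares fit of log10(rupture time) vs. stress with a
# one-sided 95% lower prediction bound. Data are synthetic, not API-STD-530.
import numpy as np
from scipy import stats

stress = np.array([83., 90., 97., 104., 110., 117., 124., 131.])        # MPa
noise = np.array([0.03, -0.02, 0.01, -0.04, 0.02, 0.00, -0.01, 0.03])
log_tf = 10.0 - 0.05 * stress + noise    # log10(hours), assumed relation

b, a = np.polyfit(stress, log_tf, 1)     # slope b, intercept a
resid = log_tf - (a + b * stress)
n = len(stress)
s = np.sqrt(resid @ resid / (n - 2))     # residual standard deviation

def lower_bound_95(s_new):
    # One-sided 95% lower prediction bound for log10(t_f) at stress s_new
    t95 = stats.t.ppf(0.95, n - 2)
    sx2 = np.sum((stress - stress.mean()) ** 2)
    se = s * np.sqrt(1.0 + 1.0 / n + (s_new - stress.mean()) ** 2 / sx2)
    return a + b * s_new - t95 * se
```

The gap between the fitted line and `lower_bound_95` is the uncertainty band that the dynamic risk analysis in the abstract builds on.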

Proceedings Papers

*Proc. ASME*. PVP2017, Volume 2: Computer Technology and Bolted Joints, V002T02A011, July 16–20, 2017

Paper No: PVP2017-65817

Abstract

In most finite-element-analysis codes, accuracy is achieved through the use of hexahedron hexa-20 elements (a node at each of the 8 corners and 12 edges of a brick element). Unfortunately, without an additional node in the center of each of the element’s 6 faces or in the center of the hexa, the hexa-20 elements are not fully quadratic, so their truncation error remains at O(h²), the same as that of a hexa-8 element formulation. To achieve an accuracy with a truncation error of O(h³), we need the fully quadratic hexa-27 formulation. A competitor of the hexa-27 element in the early days was the so-called serendipity cubic hexa-32 solid element (see Ahmad, Irons, and Zienkiewicz, Int. J. Numer. Methods in Eng., 2:419–451 (1970) [1]). The hexa-32 elements, unfortunately, suffer from the same lack-of-accuracy syndrome as the hexa-20’s. In recent work, we have developed methods to test the errors and the rate of convergence in FEA [2,3,4]. In this paper, we propose a new metric for determining the quality of isoparametric elements a priori. The significance of the highly accurate hexa-27 formulation and a comparison of its results with similar solutions using ABAQUS hexa-20 elements are presented and discussed, and guidelines are proposed for the selection of better elements.

Proceedings Papers

*Proc. ASME*. PVP2016, Volume 1B: Codes and Standards, V01BT01A055, July 17–21, 2016

Paper No: PVP2016-63350

Abstract

Recent experimental results on creep-fracture damage, with the minimum time to failure (minTTF) varying as the 9th power of stress, and the theoretical consequence that the coefficient of variation (CV) of minTTF is necessarily 9 times the CV of the stress, created a new engineering requirement that the finite element analysis of pressure vessel and piping systems in power generation and chemical plants be more accurate, with an allowable error of no more than 2%, when dealing with a leak-before-break scenario. This new requirement becomes more critical, for example, when one finds a small leak in the vicinity of a hot steam piping weldment next to an elbow. To illustrate the critical nature of this creep and creep-fatigue interaction problem in engineering design and operational decision-making, we present the analysis of a typical steam piping maintenance problem, where 10 experimental data on creep rupture time vs. stress (83 to 131 MPa) for an API Grade 91 steel at 571.1 °C (1060 °F) are fitted with a straight line using the linear least squares (LLSQ) method. The LLSQ fit yields not only a two-parameter model but also an estimate of the 95% confidence upper and lower limits of the rupture time as a basis for a statistical design against creep and creep-fatigue. In addition, we show that when the error in the stress estimate is 2% or more, the 95% confidence lower limit for the rupture time is reduced from the minimum by as much as 40%.
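The 9-fold amplification of stress uncertainty can be checked in a few lines: for a power law t ∝ σ⁻⁹, first-order error propagation gives CV(minTTF) ≈ 9·CV(σ), and a finite-difference check shows the actual shift for a 2% stress error. The model constant below is an arbitrary illustrative value.

```python
# Quick check of the sensitivity claim: if minTTF scales as stress^(-9),
# first-order error propagation gives CV(minTTF) ~ 9 * CV(stress).
def min_ttf(sigma, c=1.0e20):
    # Power-law creep-rupture model; c is an arbitrary illustrative constant.
    return c * sigma ** (-9.0)

sigma = 100.0      # MPa
cv_sigma = 0.02    # 2 % uncertainty in stress

# First-order propagation: |d ln(minTTF) / d ln(sigma)| = 9
cv_ttf = 9.0 * cv_sigma

# Finite-difference check: a +2 % stress error shifts minTTF by roughly -16 to -18 %
ratio = min_ttf(sigma * (1.0 + cv_sigma)) / min_ttf(sigma)
```

The finite-difference ratio lands close to, but not exactly on, the linearized 18% because the power law is strongly nonlinear; this is why a 2% stress error can move the 95% lower limit so far.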

Proceedings Papers

*Proc. ASME*. PVP2016, Volume 1B: Codes and Standards, V01BT01A058, July 17–21, 2016

Paper No: PVP2016-63715

Abstract

In most finite-element-analysis codes, accuracy is achieved through the use of hexahedron hexa-20 elements (a node at each of the 8 corners and 12 edges of a brick element). Unfortunately, without an additional node in the center of each of the element’s 6 faces or in the center of the hexa, the hexa-20 elements are not fully quadratic, so their truncation error remains at O(h²), the same as that of a hexa-8 element formulation. To achieve an accuracy with a truncation error of O(h³), we need the fully quadratic hexa-27 formulation. A competitor of the hexa-27 element in the early days was the so-called serendipity cubic hexa-32 solid element (see Ahmad, Irons, and Zienkiewicz, Int. J. Numer. Methods in Eng., 2:419–451 (1970) [1]). The hexa-32 elements, unfortunately, suffer from the same lack-of-accuracy syndrome as the hexa-20’s. In this paper, we investigate the accuracy of various elements described in the literature, including the fully quadratic hexa-27 elements, as applied to a shell problem of interest to the pressure vessels and piping community, viz. the shell-element-based analysis of a barrel vault. The significance of the highly accurate hexa-27 formulation and a comparison of its results with similar solutions using ABAQUS hexa-8 and hexa-20 elements are presented and discussed, and guidelines are proposed for the selection of better elements.

Proceedings Papers

*Proc. ASME*. PVP2016, Volume 1B: Codes and Standards, V01BT01A059, July 17–21, 2016

Paper No: PVP2016-63890

Abstract

A finite element method (FEM)-based solution of an industry-grade problem, with complex geometry, partially validated material property databases, incomplete knowledge of prior loading histories, and an increasingly user-friendly human-computer interface, is extremely difficult to verify because of at least five major sources of error or solution uncertainty (SU), namely: (SU-1) the numerical algorithm of approximation for solving a system of partial differential equations with initial and boundary conditions; (SU-2) the choice of the element type in the design of a finite element mesh; (SU-3) the choice of a mesh density; (SU-4) the quality measures of a finite element mesh, such as the mean aspect ratio; and (SU-5) the uncertainty in the geometric parameters, the physical and material property parameters, the loading parameters, and the boundary constraints. To address this problem, a super-parametric approach to FEM is developed, where the uncertainties in all of the known factors are estimated using three classical tools, namely, (a) a nonlinear least-squares logistic fit algorithm, (b) a relative-error convergence plot, and (c) a sensitivity analysis based on a fractional factorial orthogonal design-of-experiments approach. To illustrate our approach, with emphasis on the mesh quality issue, we present a numerical example on the elastic deformation of a cylindrical pipe with a surface crack, subjected to a uniform load along the axis of the pipe.

Proceedings Papers

*Proc. ASME*. PVP2011, Volume 6: Materials and Fabrication, Parts A and B, 1029-1042, July 17–21, 2011

Paper No: PVP2011-57712

Abstract

In this paper, we propose an approach to public-private collaborative research in predictive modeling and control of complex engineered systems. Society depends intimately on complex systems. The behavior of a simple system can be modeled, and the model can be validated by experimental observations, if the behavior of each component and its interface with other components are known and well-defined. In contrast, a complex system cannot be modeled accurately enough to effectively predict and control the behavior of the overall system. One example of an engineered complex system network (CSN) is the electricity power grid, which encompasses power generation, transmission, distribution, and consumption as one giant system that includes electric generators, transformers, substation switchyards, transmission lines, consumer devices, and a multitude of new evolving components. The electricity power grid depends on other complex systems, e.g., climate systems that govern wind currents for wind turbines and river water levels for thermoelectric cooling, and economic systems for service demand, pricing, revenue collection, and business capital supply. Operational robustness, reliability, and efficiency of CSNs are in the interest of all the subsystem owners, end users, and the public welfare of the nation. Conundrum? Who is responsible for the overall CSN’s operational robustness, reliability, and efficiency, when so many parts of the system reside in so many different hands, with the ultimate beneficiaries of the system being the general public? Which entities are responsible for funding critical high-risk research whose ultimate benefits do not reside with any one subset of stakeholders? These questions characterize the challenge of sourcing R&D funds that can be focused on modeling, understanding, and management of CSNs in general.
To address such needs for innovative collaborative research, Congress established the Technology Innovation Program (TIP) at the National Institute of Standards and Technology (NIST) as part of the 2007 America COMPETES Act. Its purpose is to “assist United States businesses and institutions of higher education or other organizations, such as national laboratories and nonprofit research institutions, to support, promote, and accelerate innovation in the United States through high-risk, high-reward research in areas of critical national need.” Ongoing efforts by TIP to identify and qualify societal challenges in the critical national need area of Complex System Networks are introduced.

Proceedings Papers

Jeffrey T. Fong, Stephen R. Gosselin, Pedro V. Marcal, James J. Filliben, N. Alan Heckert, Robert E. Chapman

*Proc. ASME*. PVP2010, ASME 2010 Pressure Vessels and Piping Conference: Volume 6, Parts A and B, 1065-1089, July 18–22, 2010

Paper No: PVP2010-25168

Abstract

This paper is a continuation of a recent ASME Conference paper entitled “Design of a Python-Based Plug-in for Benchmarking Probabilistic Fracture Mechanics Computer Codes with Failure Event Data” (PVP2009-77974). In that paper, which was co-authored by Fong, deWit, Marcal, Filliben, Heckert, and Gosselin, we designed a probability-uncertainty plug-in to automate the estimation of leakage probability with uncertainty bounds due to variability in a large number of factors. The estimation algorithm was based on a two-level full or fractional factorial design of experiments, such that the total number of simulations is small compared to a Monte Carlo method. This feature is attractive when the simulations are based on finite element analysis with a large number of nodes and elements. In this paper, we go one step further to derive a risk-uncertainty formula by computing separately the probability-uncertainty and the consequence-uncertainty of a given failure event, and then using the classical theory of error propagation to compute the risk-uncertainty within the domain of validity of that theory. The estimation of the consequence-uncertainty is accomplished by using a public-domain software package entitled “Cost-Effectiveness Tool for Capital Asset Protection, version 4.0, 2008” (http://www.bfrl.nist.gov/oae/ or NIST Report NISTIR-7524), and is more fully described in a companion paper entitled “An Economics-based Intelligence (EI) Tool for Pressure Vessels & Piping (PVP) Failure Consequence Estimation” (PVP2010-25226, Session MF-23.4 of this conference). A numerical example of an application of the risk-uncertainty formula using a 16-year historical database of probability and consequence of main steam and hot reheat piping systems is presented. The implication of this risk-uncertainty estimation tool for the design of a risk-informed in-service inspection program is discussed.
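The classical error-propagation step for risk can be sketched in a few lines: for R = P·C with independent uncertainties in the probability P and consequence C, the first-order variance formula is var(R) ≈ C²·var(P) + P²·var(C). The numbers below are illustrative assumptions, not values from the paper's 16-year database.

```python
# Sketch of the risk-uncertainty formula via classical error propagation:
# R = P * C, var(R) ~ C^2 var(P) + P^2 var(C) for independent P and C.
# Numbers are illustrative, not from the paper's piping database.
import math

P, sd_P = 1.0e-4, 2.0e-5   # leakage probability and its standard deviation
C, sd_C = 5.0e6, 1.0e6     # consequence (USD) and its standard deviation

R = P * C
sd_R = math.sqrt((C * sd_P) ** 2 + (P * sd_C) ** 2)
cv_R = sd_R / R            # relative uncertainty of the risk
```

In relative terms the formula reduces to CV(R)² ≈ CV(P)² + CV(C)², so here two 20% inputs combine into a roughly 28% risk uncertainty.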

Proceedings Papers

Robert E. Chapman, Jeffrey T. Fong, David T. Butry, Douglas S. Thomas, James J. Filliben, N. Alan Heckert

*Proc. ASME*. PVP2010, ASME 2010 Pressure Vessels and Piping Conference: Volume 6, Parts A and B, 1091-1105, July 18–22, 2010

Paper No: PVP2010-25226

Abstract

This paper is built around ASTM E 2506, Standard Guide for Developing a Cost-Effective Risk Mitigation Plan for New and Existing Constructed Facilities. E 2506 establishes a three-step protocol—perform risk assessment, specify combinations of risk mitigation strategies for evaluation, and perform economic evaluation—to ensure that the decision maker is provided the requisite information to choose the most cost-effective combination of risk mitigation strategies. Because decisions associated with low-probability, high-consequence events involve uncertainty both in terms of appropriate evaluation procedures and event-related measures of likelihood and consequence, NIST developed a Risk Mitigation Toolkit. This paper uses (a) a data center undergoing renovation for improved security, and (b) a PVP-related failure event to illustrate how to perform the E 2506 three-step protocol, with particular emphasis on the third step—perform economic evaluation. The third step is built around the Cost-Effectiveness Tool for Capital Asset Protection (CET), which was developed by NIST. Version 4.0 of CET is used to analyze the security- or failure-related event with a focus on consequence estimation and consequence assessment via Monte Carlo techniques. CET 4.0 includes detailed analysis and reporting features designed to identify key cost drivers, measure their impacts, and deliver estimated consequence parameters with uncertainty bounds. The significance of this economics-based intelligence (EI) tool for security- or failure-consequence estimation in the risk assessment of failure of critical structures or components is presented and discussed.
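The Monte Carlo consequence-estimation idea described above can be sketched as: sample the uncertain cost drivers, total the consequence per trial, and report percentile bounds. The distributions, cost drivers, and values below are illustrative assumptions, not CET 4.0 inputs or outputs.

```python
# Sketch of a CET-style Monte Carlo consequence estimate: sample uncertain
# cost drivers, total them, and report median and 5th/95th percentile bounds.
# All distributions and values are illustrative assumptions.
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def one_trial():
    downtime = random.triangular(2.0, 10.0, 4.0)       # days (low, high, mode)
    repair = random.triangular(0.5e6, 3.0e6, 1.0e6)    # repair cost, USD
    lost_rev = 250_000.0 * downtime                    # assumed USD per day
    return repair + lost_rev

samples = sorted(one_trial() for _ in range(10_000))
median = samples[len(samples) // 2]
lo = samples[int(0.05 * len(samples))]   # 5th-percentile consequence
hi = samples[int(0.95 * len(samples))]   # 95th-percentile consequence
```

The (lo, hi) pair plays the role of the uncertainty bounds on the consequence parameter that feed the companion risk-uncertainty calculation.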

Proceedings Papers

*Proc. ASME*. PVP2010, ASME 2010 Pressure Vessels and Piping Conference: Volume 6, Parts A and B, 771-776, July 18–22, 2010

Paper No: PVP2010-26144

Abstract

In composite structural design, a fundamental requirement is to furnish the designer with a set of elastic constants. For example, to design for a given temperature a laminate consisting of transversely isotropic fiber-reinforced laminae, we need five independent elastic constants of each lamina of interest, namely, E_1, E_2, ν_12, G_12, and ν_23. At present, there exist seven tests, two of the mechanical-lamina, two of the thermal-expansion-lamina, and three of the thermal-expansion-laminate type, to accomplish this task. It is known in the literature that the mechanical tests are capable of measuring E_1, E_2, and ν_12, whereas the two thermal-expansion-lamina tests measure α_1 and α_2, and the three thermal-expansion-laminate tests yield an over-determined system of three simultaneous equations in the remaining two unknown elastic constants, G_12 and ν_23. In this paper, we propose a new approach to determining those five elastic constants with uncertainty bounds using the extra information obtainable from an over-determined system. The approach takes advantage of the classical theory of error propagation, for which variance formulas were derived to estimate the standard deviations of some of the five elastic constants. To illustrate this approach, we apply it to a set of experimental data on a PEEK/IM7 unidirectional lamina. The experiment consists of the following tests: two tensile tests with four samples of unidirectional specimens to measure E_1, E_2, and ν_12; two thermal-expansion-lamina tests for the coefficients α_1 and α_2, each using four [(0)_32]_T unidirectional specimens; and three thermal-expansion-laminate tests on four samples of [(+30/−30)_8]_s laminates. The results of our new approach are compared with those of a similar but more ad hoc approach that has appeared in the literature.
The potential of applying this new methodology to the creation of a composite material elastic property database with uncertainty estimation and to the reliability analysis of composite structure is discussed.
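The over-determined step can be sketched as a least-squares solve of three linear equations in the two remaining unknowns, with the single redundant equation supplying a residual from which parameter uncertainties follow. The coefficient matrix and right-hand side below are illustrative assumptions, not the PEEK/IM7 values.

```python
# Sketch: three thermal-expansion-laminate equations in two unknowns
# (the G12- and nu23-like quantities), solved by least squares; the one
# redundant equation yields an uncertainty estimate. Values are illustrative.
import numpy as np

A = np.array([[1.0, 0.4],
              [0.6, 1.0],
              [1.2, 0.7]])
b = np.array([2.1, 1.9, 2.85])

u, res, rank, _ = np.linalg.lstsq(A, b, rcond=None)
G12_like, nu23_like = u

# Parameter covariance from the single redundant equation (n - p = 1 d.o.f.)
dof = A.shape[0] - A.shape[1]
s2 = float(res[0]) / dof if res.size else 0.0
cov = s2 * np.linalg.inv(A.T @ A)
sd = np.sqrt(np.diag(cov))   # standard deviations of the two unknowns
```

With only one redundant equation the uncertainty estimate is crude, which is why the paper supplements it with error-propagation variance formulas.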

Proceedings Papers

*Proc. ASME*. PVP2010, ASME 2010 Pressure Vessels and Piping Conference: Volume 6, Parts A and B, 777-783, July 18–22, 2010

Paper No: PVP2010-26145

Abstract

For brevity, the class of “composite materials” in this paper refers to one of its subclasses, namely, fiber-reinforced composite materials. In developing composite material property databases, three categories of data are needed. Category 1 consists of all raw test data with detailed information on specimen preparation, test machine description, specimen size and number per test, test loading history including temperature and humidity, and test configuration such as strain gage type and location, grip description, etc. Category 2 is the design allowable derived from the information contained in Category 1 without making further experimental tests. Category 3 is the same design allowable for applications in which new experiments are prescribed by the user to obtain more reliable properties for the purpose at hand. At present, most handbook-based composite material property databases contain incomplete information in Category 1 (raw data), where a user is given only the test-average values of properties such as the longitudinal, transverse, and shear moduli, the major and out-of-plane Poisson’s ratios, the longitudinal tensile and compressive, transverse tensile and compressive, and shear strengths, the inter-laminar shear strength, ply thickness, hygrothermal expansion coefficients, specific gravity, fiber volume fraction, etc. The presentation in Category 1 omits the description of the entire test environment necessary for a user to assess the uncertainty of the raw data. Furthermore, the design allowable listed in Category 2 is deterministically obtained from Category 1, and the user is given an average design allowable without uncertainty estimation. In this paper, we present a case study where average design-allowable failure envelopes of open-hole specimens were obtained numerically for two different quasi-isotropic carbon fiber-epoxy laminates using the appropriate Category 1 data.
Using the method of statistical design of experiments, we then show how the average design allowable can be supplemented with uncertainty estimates if the Category 1 database is complete. Application of this methodology to predicting the reliability of composite structures is discussed.

Proceedings Papers

*Proc. ASME*. PVP2009, Volume 6: Materials and Fabrication, Parts A and B, 1573-1601, July 26–30, 2009

Paper No: PVP2009-77867

Abstract

Over the last thirty years, much research has been done on the development and application of failure event databases, NDE databases, and materials property databases for pressure vessels and piping, as reported in two recent symposia: (1) ASME 2007 PVP Symposium (in honor of the late Dr. Spencer Bush), San Antonio, Texas, on “Engineering Safety, Applied Mechanics, and Nondestructive Evaluation (NDE).” (2) ASME 2008 PVP Symposium, Chicago, Illinois, on “Failure Prevention via Robust Design and Continuous NDE Monitoring.” The two symposia concluded that those three types of databases, if properly documented and maintained on a worldwide basis, could hold the key to the continued safe and reliable operation of numerous aging structures including nuclear power or petro-chemical processing plants. During the 2008 symposium, four categories of uncertainty affecting fatigue life estimates were identified, namely, (1) Uncertainty-1 in failure event databases, (2) Uncertainty-2 in NDE databases, (3) Uncertainty-3 in materials property databases, and (4) Uncertainty-M in crack-growth and damage modeling. In this paper, which is one of a series of four addressing those four uncertainty categories, we address Uncertainty-3 in materials property databases by developing a Dataplot-Python-ANLAP (DPA) plug-in, which automates the uncertainty estimation algorithms for material property test data so that those data can be combined with field NDE data by office engineers to speed up the process of probabilistic damage assessment and remaining life estimation. To illustrate this approach, we describe an example application where several mechanical property data sets of a U.S.-made low-carbon steel (A36) and a proprietary high-strength steel (Class 590 MPa) from Japan are first analyzed with uncertainty estimates and then compared with a traditional deterministic calculation without uncertainty.
Significance of the development of computer plug-ins to facilitate data mining of materials property databases and to assist risk-informed analysis is discussed.
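As a minimal sketch of the kind of uncertainty estimate such a plug-in automates (the function name, data values, and t-critical value below are hypothetical and not part of the DPA plug-in itself), a property data set can be reported as a mean with a two-sided confidence interval rather than a bare average:

```python
import math
import statistics

def mean_with_ci(data, t_crit):
    """Return (mean, half-width of the confidence interval) for a sample.

    t_crit is the two-sided Student-t critical value for n-1 degrees of
    freedom at the chosen confidence level; it is supplied by the caller
    because the Python standard library has no inverse-t function.
    """
    n = len(data)
    mean = statistics.mean(data)
    s = statistics.stdev(data)            # sample standard deviation
    half_width = t_crit * s / math.sqrt(n)
    return mean, half_width

# Hypothetical yield-strength measurements (MPa) for a low-carbon steel;
# illustrative numbers only, not the A36 data discussed in the abstract.
sample = [251.0, 248.5, 253.2, 249.8, 250.6]
m, hw = mean_with_ci(sample, t_crit=2.776)   # t(0.975, df=4)
print(f"yield strength = {m:.1f} \u00b1 {hw:.1f} MPa (95 % CI)")
```

Reporting the half-width alongside the mean is the minimal step from a deterministic property value toward the probabilistic damage assessment the abstract describes.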

Proceedings Papers

*Proc. ASME*. PVP2009, Volume 6: Materials and Fabrication, Parts A and B, 1613-1649, July 26–30, 2009

Paper No: PVP2009-77871

Abstract

Over the last thirty years, much research has been done on the development and application of in-service inspection (ISI) and failure event databases for pressure vessels and piping, as reported in two recent symposia: (1) ASME 2007 PVP Symposium (in honor of the late Dr. Spencer Bush), San Antonio, Texas, on “Engineering Safety, Applied Mechanics, and Nondestructive Evaluation (NDE).” (2) ASME 2008 PVP Symposium, Chicago, Illinois, on “Failure Prevention via Robust Design and Continuous NDE Monitoring.” The two symposia concluded that those databases, if properly documented and maintained on a worldwide basis, could hold the key to the continued safe and profitable operation of numerous aging nuclear power or petro-chemical processing plants. During the 2008 symposium, four categories of uncertainty affecting fatigue life estimates were identified, namely, (1) Uncertainty-1 in failure event databases, (2) Uncertainty-2 in NDE databases, (3) Uncertainty-3 in material property databases, and (4) Uncertainty-M in crack-growth and damage modeling. In this paper, which is one of a series of four addressing those four uncertainty categories, we introduce an automatic natural language abstracting and processing (ANLAP) tool to address Uncertainty-1. Three examples are presented and discussed.

Proceedings Papers

Jeffrey T. Fong, Roland deWit, Pedro V. Marcal, James J. Filliben, N. Alan Heckert, Stephen R. Gosselin

*Proc. ASME*. PVP2009, Volume 6: Materials and Fabrication, Parts A and B, 1651-1693, July 26–30, 2009

Paper No: PVP2009-77974

Abstract

In a 2007 paper entitled “Application of Failure Event Data to Benchmark Probabilistic Fracture Mechanics (PFM) Computer Codes” (Simonen, F. A., Gosselin, S. R., Lydell, B. O. Y., Rudland, D. L., & Wilkowski, G. M., Proc. ASME PVP Conf., San Antonio, TX, Paper PVP2007-26373), it was reported that the two benchmarked PFM models, PRO-LOCA and PRAISE, predicted significantly higher failure probabilities of cracking than those derived from field data in three PWR and one BWR cases, by a factor ranging from 30 to 10,000. To explain the reasons for such a large discrepancy, the authors listed ten sources of uncertainty: (1) Welding residual stresses. (2) Crack initiation predictions. (3) Crack growth rates. (4) Circumferential stress variation. (5) Operating temperatures different from design temperatures. (6) Temperature factor in actual activation energy vs. assumed. (7) Under-reporting of field data due to NDE limitations. (8) Uncertainty in modeling initiation, growth, and linking of multiple cracks around the circumference of a weld. (9) Correlation of crack initiation times and growth rates. (10) Insights from NUREG/CR-6674 (2000) fatigue crack growth models using conservative inputs for cyclic strain rates and environmental parameters such as oxygen content. In this paper, we design a Python-based plug-in that allows a user to address those ten sources of uncertainty. This approach is based on the statistical theory of design of experiments with a 2-level factorial design, where a small number of runs is enough to estimate the uncertainties in the predictions of PFM models due to some combination of the source uncertainties listed by Simonen et al. (PVP2007-26373).
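A 2-level factorial design of the kind the abstract names can be sketched in a few lines of Python. The factor names and response numbers below are illustrative, not the plug-in's actual interface: in practice each run would set a subset of the ten uncertainty sources to "low" or "high" and the response would be a PFM failure-probability prediction.

```python
from itertools import product

def two_level_design(k):
    """All 2**k runs of a full factorial design, coded -1/+1 per factor."""
    return list(product((-1, +1), repeat=k))

def main_effects(runs, responses):
    """Main effect of each factor: mean(response at +1) - mean(at -1)."""
    k = len(runs[0])
    effects = []
    for j in range(k):
        hi = [y for r, y in zip(runs, responses) if r[j] == +1]
        lo = [y for r, y in zip(runs, responses) if r[j] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# Hypothetical example: 3 uncertainty sources (say, residual stress,
# crack growth rate, operating temperature), each set low/high.
runs = two_level_design(3)                 # 8 runs
responses = [1.0, 1.4, 1.1, 1.5, 2.0, 2.4, 2.1, 2.5]
print(main_effects(runs, responses))       # one effect per factor
```

With k factors, 2**k runs suffice to rank the factors by main effect, which is what makes the "small number of runs" claim in the abstract work for a handful of dominant uncertainty sources.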

Proceedings Papers

*Proc. ASME*. PVP2009, Volume 6: Materials and Fabrication, Parts A and B, 1331-1374, July 26–30, 2009

Paper No: PVP2009-77827

Abstract

Over the last thirty years, much research has been done on the development and application of failure event databases, NDE databases, and material property databases for pressure vessels and piping, as reported in two recent symposia: (1) ASME 2007 PVP Symposium (in honor of the late Dr. Spencer Bush), San Antonio, Texas, on “Engineering Safety, Applied Mechanics, and Nondestructive Evaluation (NDE).” (2) ASME 2008 PVP Symposium, Chicago, Illinois, on “Failure Prevention via Robust Design and Continuous NDE Monitoring.” The two symposia concluded that those three types of databases, if properly documented and maintained on a worldwide basis, could hold the key to the continued safe and reliable operation of numerous aging nuclear power or petrochemical processing plants. During the 2008 symposium, four categories of uncertainty affecting fatigue life estimates were identified, namely, (1) Uncertainty-1 in failure event databases, (2) Uncertainty-2 in NDE databases, (3) Uncertainty-3 in material property databases, and (4) Uncertainty-M in crack-growth and damage modeling. In this paper, which is one of a series of four addressing those four uncertainty categories, we address Uncertainty-2 in NDE databases by developing a Web-based Uncertainty Plug-In (WUPI), which automates the uncertainty estimation algorithms for flaw sizing, fracture toughness, and crack growth vs. ΔK data so that NDE data from the field can be acted on by office engineers with a reduced feedback time for maintenance decision making.
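Crack growth rate versus ΔK data of the kind the abstract mentions are commonly reduced by a log-log least-squares fit of the Paris law, da/dN = C·(ΔK)^m. The sketch below uses synthetic data and is not WUPI's actual algorithm; a fuller uncertainty treatment would also report standard errors of log C and m from the regression residuals.

```python
import math

def paris_fit(delta_k, dadn):
    """Least-squares fit of da/dN = C * (dK)**m in log10-log10 space.

    Returns (C, m). Ordinary least squares on the logged data: the
    Paris exponent m is the slope, log10(C) is the intercept.
    """
    x = [math.log10(k) for k in delta_k]
    y = [math.log10(r) for r in dadn]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    m = sxy / sxx
    log_c = ybar - m * xbar
    return 10 ** log_c, m

# Synthetic data generated from C = 1e-11, m = 3 (exact, so the
# fit recovers the generating constants)
dk = [10.0, 20.0, 40.0]
da = [1e-11 * k ** 3 for k in dk]
C, m = paris_fit(dk, da)
print(C, m)
```

On real inspection data the residual scatter about the fitted line, not shown here, is exactly the Uncertainty-2 quantity that a plug-in like the one described would propagate into the maintenance decision.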

Proceedings Papers

*Proc. ASME*. PVP2007, Volume 6: Materials and Fabrication, 199-205, July 22–26, 2007

Paper No: PVP2007-26751

Abstract

Computer models are abstractions of physical reality and are routinely used for solving practical engineering problems. These models are prepared using large, complex computer codes that are widely used in industry. Patran/Thermal is one such finite element computer code, used for solving complex heat transfer problems. Finite element models of complex problems involve making assumptions and simplifications that depend upon the complexity of the problem and upon the judgment of the analyst. The assumptions involve mesh size, solution methods, convergence criteria, material properties, boundary conditions, etc., and could vary from analyst to analyst. All of these assumptions are, in fact, candidates for a purposeful and intended effort to systematically vary each in connection with the others to determine their relative importance or expected overall effect on the modeled outcome. Such studies derive from the methods of statistical science and are based on the principles of experimental design. These, like all computer models, must be validated to make sure that the output from such an abstraction represents reality [1,2]. A new nuclear material packaging design, called 9977, which is undergoing a certification design review, is used to assess the capability of the Patran/Thermal computer model to simulate the 9977 thermal response. The computer model for the 9977 package is validated by comparing its output with the test data collected from an actual thermal test performed on a full-size 9977 package. Inferences are drawn by performing statistical analyses on the residuals (test data – model predictions).
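The residual analysis described above (residuals = test data − model predictions) can be sketched minimally as follows. The temperature values are hypothetical, not the 9977 thermal-test data; the summary statistics shown are the simplest inferences one would draw before any formal hypothesis testing.

```python
import statistics

def residual_summary(test_data, model_pred):
    """Residuals (test - prediction) and simple validation statistics."""
    res = [t - p for t, p in zip(test_data, model_pred)]
    return {
        "mean": statistics.mean(res),     # bias: should be near zero
        "stdev": statistics.stdev(res),   # scatter of the disagreement
        "max_abs": max(abs(r) for r in res),
    }

# Hypothetical temperatures (deg C): thermal-test measurements vs.
# finite element model output at four sensor locations
measured  = [101.2, 135.8, 150.3, 142.1]
predicted = [100.0, 137.0, 149.0, 143.5]
print(residual_summary(measured, predicted))
```

A near-zero residual mean with scatter comparable to the measurement uncertainty is the pattern that supports the validation claim; a systematic bias or a large maximum residual would point back at a modeling assumption worth varying in a designed study.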