While the uncertainties associated with actual pipeline asset condition demand probabilistic methodologies for assessing pipeline integrity, a realistic and validated probabilistic method to demonstrate post-hydrostatic test (PHT) integrity has eluded the pipeline industry. Traditionally, deterministic methods grow a “just-surviving flaw” (JSF) under worst-case pressure cycling to predict the remaining life of the most severe imperfection that could have survived a high-pressure event, such as a hydrostatic test. The deterministic analysis yields a JSF fatigue life but does not quantify the likelihood that such a flaw exists. Furthermore, identifying the most severe flaw is not intuitive, and attempts to probabilistically model material variabilities have failed to match known historical PHT reliability.
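The deterministic JSF calculation can be sketched as a single crack-growth integration: take the deepest flaw that could have just survived the test pressure and grow it under a fixed worst-case stress range until it reaches the critical depth at operating pressure. The sketch below uses a Paris-law model with illustrative placeholder constants, not values from the paper:

```python
import math

# Hypothetical parameters for illustration only (SI-ish units:
# depths in m, stress in MPa, stress intensity in MPa*sqrt(m)).
PARIS_C = 1.0e-11   # Paris-law coefficient (m/cycle)
PARIS_M = 3.0       # Paris-law exponent
Y = 1.12            # geometry factor, assumed constant for the sketch
A_JSF = 0.0045      # deepest flaw that just survives the hydrotest (m)
A_CRIT = 0.0060     # critical flaw depth at operating pressure (m)

def jsf_fatigue_life(delta_sigma, a0=A_JSF, a_crit=A_CRIT):
    """Cycles for the JSF to grow from a0 to a_crit at a fixed
    worst-case stress range delta_sigma (MPa), integrated cycle
    by cycle with the Paris law da/dN = C * (dK)^m."""
    a, cycles = a0, 0
    while a < a_crit:
        delta_k = Y * delta_sigma * math.sqrt(math.pi * a)  # stress intensity range
        a += PARIS_C * delta_k ** PARIS_M                   # per-cycle growth
        cycles += 1
    return cycles
```

The single number this returns is the remaining life of the most severe surviving flaw, with no statement about how likely such a flaw is to exist, which is precisely the gap the probabilistic approach addresses.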

A pipeline operator has now developed a novel approach to quantifying marginal pipeline reliability after hydrostatic tests. Rather than limiting random values to material properties alone, potential defects are assigned sizes and pressure cycling values randomly sampled from validated distributions of defect size and pressure cycling severity (equivalent to downstream location). The number of generated defects is determined by a validated defect density, and defect size remains limited to what could have physically survived the hydrostatic test. The question posed is no longer “what are the possible sizes of a JSF near discharge pressure surviving to a specific time under known load conditions?”, but rather “what proportion of pipeline segments with similar defect populations would survive to a specific time under known load conditions?”. This represents a fundamental paradigm shift away from considering only a worst-case scenario and toward quantifying plausible pipeline health conditions. Monte Carlo simulation time is kept practical by using an equivalent load integral method to project crack growth. The proposed methodology was validated by applying it to a selection of pipeline segments with known historical fatigue failures following hydrostatic tests, quantifying how well each segment’s reliability was predicted at the time of failure. This initial validation showed that the method reasonably predicts the past incidents.
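The segment-level Monte Carlo described above can be sketched as follows. All distributions, densities, and crack-growth constants here are illustrative assumptions, not the operator’s validated inputs, and a closed-form Paris-law integration with a sampled equivalent stress range stands in for the paper’s equivalent load integral method:

```python
import math
import random

# Hypothetical constants for illustration (depths in m, stress in MPa).
PARIS_C, PARIS_M, Y = 1.0e-11, 3.0, 1.12  # Paris law (m/cycle, MPa*sqrt(m))
A_TEST = 0.0045        # deepest flaw that could survive the hydrotest (m)
A_CRIT = 0.0060        # flaw depth causing failure at operating pressure (m)
DEFECT_DENSITY = 0.8   # assumed expected defects per segment
CYCLES_PER_YEAR = 500  # assumed pressure-cycle count per year

def sample_poisson(lam, rng):
    """Knuth's algorithm: number of defects in one segment."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def sample_depth(rng):
    """Lognormal defect depth, truncated to hydrotest survivors."""
    while True:
        a = rng.lognormvariate(math.log(0.002), 0.5)
        if a < A_TEST:
            return a

def fatigue_life_cycles(a0, delta_sigma):
    """Closed-form Paris integration (m = 3) from a0 to A_CRIT."""
    k = PARIS_C * (Y * delta_sigma) ** PARIS_M * math.pi ** 1.5
    return 2.0 * (a0 ** -0.5 - A_CRIT ** -0.5) / k

def proportion_surviving(years, n_segments=10_000, seed=42):
    """Fraction of simulated segments with no fatigue failure by `years`."""
    rng = random.Random(seed)
    target = years * CYCLES_PER_YEAR
    survivors = 0
    for _ in range(n_segments):
        # Equivalent stress range varies with downstream location (assumed).
        delta_sigma = rng.uniform(40.0, 140.0)
        n_defects = sample_poisson(DEFECT_DENSITY, rng)
        lives = [fatigue_life_cycles(sample_depth(rng), delta_sigma)
                 for _ in range(n_defects)]
        if all(life > target for life in lives):
            survivors += 1
    return survivors / n_segments
```

The output is directly the quantity the reframed question asks for: the proportion of segments with similar defect populations surviving to a given time, rather than the life of one worst-case flaw.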

This paper will discuss the methodology, the input parameters and their distributions, methods for assigning defect size distributions and densities based on extrapolations of field nondestructive examination (NDE) and in-line inspection (ILI) data, and a minimum defect density floor established from the PHT fatigue failure of a newly constructed pipeline. While this method originally targets PHT pipeline segments, a similar method for pipelines managed exclusively by ILI data is under development. For ILI-managed assets, the largest potential flaw is dictated by what could have evaded ILI tool detection rather than what could have survived a hydrostatic test. Herein, progress on this development and suggested future research are presented.
