Knowledge of the measurement errors of an in-line inspection tool and a field tool is important (i) for determining corrosion feature severity and the probability of pipeline failure due to that feature, (ii) for verifying the tool vendor's claimed accuracy, (iii) as a component of the tool's development program, (iv) as a reference for other inspection tools, and (v) for probability-of-detection assessments for corrosion and cracks. When in-line inspection tool reports are used in site-specific probability-of-failure analyses or growth-modelling applications, the measurement error of the tool plays a significant role in determining the distributions of penetration and rupture pressure at the time of inspection, or at any time in the future. In-line inspection tools are often compared with data obtained from a reference (field) tool that is usually assumed to be perfect, but in reality no tool is perfect. In the late 1940s a procedure was developed that decomposes the total scatter between the tools being compared and assigns an appropriate scatter, or measurement error, to each tool individually. The procedure rests on a suite of assumptions that sometimes fail. The typical symptom is a negative estimate of measurement error, analogous to a negative variance-component estimate in an analysis of variance, which is clearly wrong and unacceptable. Later researchers have suggested methods that overcome this difficulty, but these estimators also suffer from certain limitations. In this paper a Bayesian methodology that can overcome some of these recognized limitations is presented.
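
To illustrate the decomposition and its failure mode, the following is a minimal sketch, assuming the late-1940s procedure referred to is the classical Grubbs-type paired-measurement estimator (an assumption on our part; the paper's own method and notation are in the full text). All function names and the simulated data are illustrative, not taken from the paper. The sketch assumes additive, independent, zero-mean errors on a shared true value, so the cross-covariance of the two tools estimates the variance of the truth and each tool's error variance falls out by subtraction.

```python
import numpy as np

def paired_tool_error_variances(x, y):
    """Decompose the scatter between two tools measuring the same features.

    Model (assumed): x_i = t_i + e1_i, y_i = t_i + e2_i, with independent,
    zero-mean errors e1, e2 on a common true value t. Then
        Var(x) = Var(t) + sigma1^2,  Var(y) = Var(t) + sigma2^2,
        Cov(x, y) = Var(t),
    so each tool's error variance is its variance minus the cross-covariance.
    Returns (sigma1_sq, sigma2_sq); sampling variability can make either
    estimate negative, which is the failure mode discussed in the abstract.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    cov = np.cov(x, y)                 # 2x2 sample covariance matrix
    var_truth = cov[0, 1]              # Cov(x, y) estimates Var(t)
    sigma1_sq = cov[0, 0] - var_truth  # tool 1 (e.g., ILI) error variance
    sigma2_sq = cov[1, 1] - var_truth  # tool 2 (e.g., field) error variance
    return sigma1_sq, sigma2_sq

# Illustrative simulated data: in small samples the subtraction above can
# return a negative variance estimate, motivating constrained or Bayesian
# treatments that keep the variances positive.
rng = np.random.default_rng(0)
t = rng.normal(50.0, 5.0, size=20)         # hypothetical true depths
x = t + rng.normal(0.0, 2.0, size=t.size)  # in-line inspection readings
y = t + rng.normal(0.0, 1.0, size=t.size)  # field (reference) readings
print(paired_tool_error_variances(x, y))
```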
