Pipeline operators map and quantify corrosion damage along their aging pipeline systems by carrying out periodic in-line metal-loss inspections. Comparing the data sets from successive inspection runs is one of the most reliable techniques for inferring representative corrosion growth rates along the pipeline length over the period between two inspections. At present there are two distinct approaches to inferring corrosion rates from multiple in-line inspections: individual comparison of defective areas detected by more than one inspection, and comparison between defect populations. The former usually requires a laborious matching process between the run data sets, while the drawback of the latter is that it often fails to capture hot-spot areas. The objective of this work is to present a new methodology that allows quick comparison of the data from two runs while still preserving the locally distinct characteristics of corrosion severity. Three procedures must be performed. First, the ILI metal-loss data sets are submitted to a filtering/adjustment process that accounts for reporting-threshold consistency, possible systematic bias, and corrosion-mechanism similarity. Second, the average metal-loss growth rate between inspections is determined from the filtered populations. Third, the corrosion growth rate of each defect reported by the latest inspection is determined individually as a function of the mean depth of the whole population and of the mean depth in the defect's neighborhood. The methodology allows quick and realistic damage-progression estimates, supporting more cost-effective and reliable strategies for the integrity management of aged corroded systems. Model robustness and general feasibility are demonstrated in a real case study.
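The three-step procedure can be sketched in code. The snippet below is a minimal illustration, not the authors' method: the threshold filter, the population-mean growth rate, and the neighborhood-to-population scaling (the `neighborhood_mean / population_mean` weighting) are all assumptions for illustration, and the full filtering described in the abstract (systematic bias, corrosion-mechanism similarity) is omitted.

```python
import statistics

def filter_depths(depths, threshold):
    """Step 1 (sketch): keep only defects at or above a common reporting
    threshold so the two run populations are comparable.  The paper's
    full filtering (systematic bias, mechanism similarity) is omitted."""
    return [d for d in depths if d >= threshold]

def average_growth_rate(run1_depths, run2_depths, years_between, threshold):
    """Step 2 (sketch): average metal-loss growth rate between runs,
    taken from the filtered population means (depth units per year)."""
    d1 = statistics.mean(filter_depths(run1_depths, threshold))
    d2 = statistics.mean(filter_depths(run2_depths, threshold))
    return (d2 - d1) / years_between

def local_growth_rate(neighborhood_mean, population_mean, avg_rate):
    """Step 3 (sketch, assumed scaling): weight the population-average
    rate by how the defect's neighborhood compares with the whole
    population, so hot-spot areas receive proportionally higher rates."""
    return avg_rate * (neighborhood_mean / population_mean)

# Illustrative use with made-up depths (% of wall thickness):
run1 = [10.0, 20.0, 30.0]          # first inspection
run2 = [20.0, 30.0, 40.0]          # second inspection, 5 years later
avg = average_growth_rate(run1, run2, years_between=5.0, threshold=10.0)
rate = local_growth_rate(neighborhood_mean=30.0, population_mean=20.0,
                         avg_rate=avg)
```

In this toy example the filtered population means are 20 and 30, giving an average rate of 2.0 per year, and a defect sitting in a neighborhood 1.5 times deeper than the population average is assigned a proportionally higher rate of 3.0 per year.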
