Well testing is routinely performed to evaluate well performance, which establishes the allocation factor for the lease and, in turn, the tax and royalty basis. Most well testing is done with conventional gravity separators, which separate the produced stream into oil, water, and gas components and measure each component as an individual stream. New multiphase measurement technology improves well test results through better accuracy, greater consistency, and more frequent testing. This paper examines the implications of these improved capabilities for recognizing well problems and optimizing production. A simple economic model is provided that an operator can use to balance the cost of performing periodic well tests against the benefit of more quickly discovering well problems that result in lower-than-expected production. The model relates the cost of decreased production, resulting from unforeseen changes in the well, to the frequency and accuracy of the well tests. It derives an optimum test interval that minimizes the total cost of well testing and deferred production, based on the probability that a higher-than-normal decline in production rate can be detected by well testing. The model is then applied to several field examples to assess the optimum period between well tests and to show how that optimum period can reduce operating cost and improve production.
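The trade-off the abstract describes can be sketched as a simple cost-rate minimization. This is not the paper's model, which is available only in the full text; the quadratic deferred-production loss term and all parameter names and values below are illustrative assumptions.

```python
import math

def total_cost_rate(interval_days, cost_per_test, prob_problem_per_day,
                    loss_rate_usd_per_day):
    """Expected cost per day of a given test interval T.

    Testing cost is spread over the interval (C / T). If an abnormal
    decline can begin on any day with probability p and loses
    `loss_rate_usd_per_day` for each day it goes undetected, the average
    undetected time within an interval is T / 2, giving an expected
    deferred-production loss rate of p * L * T / 2.
    """
    testing = cost_per_test / interval_days
    deferred = prob_problem_per_day * loss_rate_usd_per_day * interval_days / 2
    return testing + deferred

def optimal_interval(cost_per_test, prob_problem_per_day, loss_rate_usd_per_day):
    """Minimizing total_cost_rate over T gives T* = sqrt(2C / (p * L)),
    the familiar square-root form of economic-order-quantity models."""
    return math.sqrt(2 * cost_per_test /
                     (prob_problem_per_day * loss_rate_usd_per_day))

# Assumed numbers: a $5,000 well test, a 1% daily chance of an abnormal
# decline, and $2,000/day of production lost while it goes undetected.
t_star = optimal_interval(5000, 0.01, 2000)
print(round(t_star, 1))  # → 22.4 (optimum interval in days)
```

Under these assumptions, testing more often than about every three weeks spends more on tests than it saves in deferred production, while testing less often leaves problems undetected too long; the actual balance depends on the operator's test cost and decline statistics.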
