This paper presents a practical methodology for propagating and processing the uncertainties associated with random measurement errors (which vary from test to test) and systematic measurement errors (uncertain but essentially the same from test to test) in the inputs and outputs of replicate tests, in order to characterize the response variability of stochastic systems. Also treated are test-to-test variability in the control of test input conditions and sampling uncertainty due to the limited number of replicate tests. Together, these variabilities and uncertainties produce uncertainty in the computed statistics of output response quantities. The methodology is general but was initially developed in the context of processing experimental data for "Real Space" model validation comparisons against model-predicted statistics and their uncertainty. The methodology is flexible and sufficient for many data uncertainty quantification and model validation assessment needs. It handles both interval and probabilistic uncertainty descriptions and can be performed at relatively little computational cost through the use of simple, effective dimension- and order-adaptive polynomial response surfaces in a Monte Carlo uncertainty propagation approach. A key feature of the progressively upgraded response surfaces is that they enable estimation of the propagation error contributed by the surrogate model. Sensitivity analysis of the relative contributions of the various uncertainty sources to the total uncertainty is also presented. The methodology is demonstrated on real experimental data involving all the mentioned sources and types of error and uncertainty in five replicate tests of pressure vessels heated and pressurized to failure. Simple spreadsheet procedures are illustrated for all processing operations.
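To make the propagation step concrete, the following is a minimal sketch (not the paper's implementation) of Monte Carlo propagation of random and systematic input errors through a fitted polynomial response surface. The response function, error magnitudes, and quadratic polynomial basis are illustrative assumptions; the paper's adaptive surrogate construction and spreadsheet procedures are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" system response (stand-in for the physical model),
# a function of a pressure-like input p and a temperature-like input t.
def response(p, t):
    return 0.5 * p + 0.01 * p * t

# Fit a simple quadratic polynomial response surface to a few model runs
# (a fixed-form stand-in for the dimension- and order-adaptive surrogates).
p_train = rng.uniform(80.0, 120.0, 30)
t_train = rng.uniform(300.0, 400.0, 30)
X = np.column_stack([np.ones(30), p_train, t_train,
                     p_train**2, t_train**2, p_train * t_train])
coef, *_ = np.linalg.lstsq(X, response(p_train, t_train), rcond=None)

def surrogate(p, t):
    return np.column_stack([np.ones_like(p), p, t,
                            p**2, t**2, p * t]) @ coef

# Monte Carlo propagation: a systematic error (shared across a test series)
# and a random error (varying test to test) are both sampled per realization.
n_mc = 20000
p_nom, t_nom = 100.0, 350.0
sys_bias = rng.normal(0.0, 1.0, n_mc)   # systematic measurement error
rand_err = rng.normal(0.0, 2.0, n_mc)   # random measurement error
p_samples = p_nom + sys_bias + rand_err
t_samples = np.full(n_mc, t_nom)

y = surrogate(p_samples, t_samples)
print(f"mean = {y.mean():.2f}, std = {y.std(ddof=1):.2f}")
```

Because the surrogate is cheap to evaluate, the Monte Carlo sample size can be made large enough that sampling noise in the computed statistics is negligible relative to the propagated input uncertainty.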