This paper summarizes an emerging process to establish credibility for surrogate models that cover multidimensional, continuous solution spaces. Various features lead to disagreement between the surrogate model's results and results from more precise computational benchmark solutions. In our verification process, this disagreement is quantified using descriptive statistics to support uncertainty quantification, sensitivity analysis, and surrogate model assessments. Our focus is stress-intensity factor (SIF) solutions. SIFs can be evaluated from simulations (e.g., finite element analyses), but these simulations require significant preprocessing, computational resources, and expertise to produce a credible result. It is not tractable (or necessary) to simulate a SIF for every crack front. Instead, most engineering analyses of fatigue crack growth (FCG) employ surrogate SIF solutions based on some combination of mechanics, interpolation, and SIF solutions extracted from earlier analyses. SIF values from surrogate solutions vary with local stress profiles and the nondimensional degrees of freedom that define the geometry. The verification process evaluates the selected stress profiles and sampled geometries using both the surrogate model (the test code) and a benchmark code (Abaqus). The benchmark code employs a Python scripting interface to automate model development, execution, and extraction of key results. The ratio of the test code SIF to the benchmark code SIF measures the credibility of the surrogate solution. Descriptive statistics of these ratios provide convenient measures of relative surrogate quality. Thousands of analyses support visualization of the surrogate model's credibility, e.g., by rank-ordering the credibility measure.
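The ratio-based credibility measure described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the function name `credibility_stats` and the synthetic SIF values are hypothetical, and only the ratio, descriptive statistics, and rank-ordering steps named in the text are shown.

```python
import numpy as np

def credibility_stats(sif_surrogate, sif_benchmark):
    """Quantify surrogate credibility against a benchmark code.

    Both inputs are SIF values for the same sampled geometries and
    stress profiles (hypothetical example data, not from the paper).
    """
    ratios = np.asarray(sif_surrogate, dtype=float) / np.asarray(sif_benchmark, dtype=float)

    # Descriptive statistics of the test-to-benchmark SIF ratios
    stats = {
        "mean": float(np.mean(ratios)),
        "std": float(np.std(ratios, ddof=1)),
        "min": float(np.min(ratios)),
        "max": float(np.max(ratios)),
        "median": float(np.median(ratios)),
    }

    # Rank-order cases by deviation from unity (worst agreement first),
    # supporting visualization of the credibility measure
    worst_first = np.argsort(np.abs(ratios - 1.0))[::-1]
    return ratios, stats, worst_first

# Synthetic example: four sampled crack-front cases
surrogate = [1.02, 0.98, 1.10, 1.00]
benchmark = [1.00, 1.00, 1.00, 1.00]
ratios, stats, worst_first = credibility_stats(surrogate, benchmark)
```

In practice the inputs would be the thousands of automated benchmark analyses mentioned in the abstract; the rank-ordered indices then identify the regions of the solution space where the surrogate agrees least with the benchmark.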