Abstract

Multi-fidelity (MF) models abound in simulation-based engineering fields. Many MF strategies have been proposed to improve efficiency in engineering processes, especially in design optimization. When it comes to assessing the performance of MF optimization techniques, existing practice usually relies on test cases involving contrived MF models of seemingly random math functions, due to limited access to real-world MF models. While it is acceptable to use contrived MF models, these models are often written by hand rather than created systematically. This gives rise to the potential pitfall that the test MF models may not be representative of general scenarios. We propose a framework to generate test MF models systematically and to comprehensively characterize the performance of tested MF optimization methods. In our framework, the MF models are generated from a given high-fidelity (HF) model and come with two parameters that control their fidelity levels and allow model randomization. In our testing process, MF case problems are systematically formulated using our model creation method. Running the given MF optimization technique on these problems produces what we call a "savings curve" that characterizes the method's performance, similarly to how ROC curves characterize machine learning classifiers. Our test results also allow plotting "optimality curves" that serve a similar function to savings curves for certain types of problems. The flexibility of our MF model creation method facilitates the development of testing processes for general MF problem scenarios, and our framework can be easily extended to MF applications beyond optimization.
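The abstract does not specify the exact form of the model generator, so the following is only a minimal sketch of the idea it describes: deriving randomized low-fidelity (LF) models from a given HF model using two parameters, assumed here to be a fidelity level in [0, 1] and a random seed. The function name `make_lf_model` and the additive-cosine discrepancy are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def make_lf_model(hf_model, fidelity, seed, n_terms=5, dim=2):
    """Return a hypothetical low-fidelity surrogate of `hf_model`.

    fidelity in [0, 1]: 1 -> LF matches HF, 0 -> maximally corrupted.
    seed randomizes the discrepancy so many distinct LF models can
    be drawn from the same HF model (one sketch of "model randomization").
    """
    rng = np.random.default_rng(seed)
    # Random smooth discrepancy: a sum of cosines with random
    # frequencies, phases, and weights (an assumed perturbation form).
    freqs = rng.uniform(0.5, 3.0, size=(n_terms, dim))
    phases = rng.uniform(0.0, 2 * np.pi, size=n_terms)
    weights = rng.normal(size=n_terms)

    def lf_model(x):
        x = np.asarray(x, dtype=float)
        discrepancy = sum(
            w * np.cos(f @ x + p)
            for w, f, p in zip(weights, freqs, phases)
        )
        # Higher fidelity -> smaller discrepancy added to the HF output.
        return hf_model(x) + (1.0 - fidelity) * discrepancy

    return lf_model

# Example: draw two randomized LF variants of a simple HF "truth" model.
hf = lambda x: float(np.sum(np.asarray(x) ** 2))
lf_a = make_lf_model(hf, fidelity=0.8, seed=0)
lf_b = make_lf_model(hf, fidelity=0.3, seed=1)
print(hf([1.0, 2.0]), lf_a([1.0, 2.0]), lf_b([1.0, 2.0]))
```

Under this reading, sweeping the fidelity parameter and the seed yields a family of MF test problems from a single HF model, which is what would let a testing process formulate case problems systematically rather than writing each LF model by hand.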
