Abstract

Previous studies have shown the potential of a multiobjective CFD (computational fluid dynamics)-driven machine-learning approach to train both transition and turbulence models in RANS (Reynolds-averaged Navier-Stokes) calculations for improved turbine flow predictions (Akolekar et al., GT2022-81091; Fang et al., GT2023-102902). However, CFD-driven training incurs a high computational cost, as thousands of RANS calculations are required when the starting guesses are taken from an initial population of randomly generated models. This paper, for the first time, adopts a transformer, a model architecture originating in natural language processing, within gene expression programming (GEP) to expedite the training of transition and turbulence models. The efficacy of the transformer is investigated in two scenarios. In the first, previously trained models are added to the randomly generated ones in the initial population of candidate models, so that models with a higher likelihood of achieving low cost function values are available from the outset. In the second, assuming that no suitable information is available from pre-training, a dynamic approach is employed at certain training iterations: models exhibiting large errors are excluded and replaced by models trained on the fly by the transformer that demonstrate smaller errors. Additionally, we incorporate mathematical operators such as minimum, maximum, and exponential functions, along with a rolling-window technique that avoids nested functions in the trained models. This increases the flexibility of model construction while still allowing us to examine the underlying physics and provide recommendations for developing physical models. Finally, we introduce two additional physical features that serve as training inputs for the turbulence model and contribute to smaller errors.
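As an illustration only, the two transformer-assisted strategies above might be sketched as follows. All function names, the placeholder flow-feature symbols, and the flat string representation of candidate models are hypothetical stand-ins, not the paper's actual GEP implementation:

```python
import random

# Hypothetical sketch of the two transformer-assisted population strategies.
OPERATORS = ["+", "-", "*"]
TERMINALS = ["S", "W", "k", "1.0"]  # placeholder flow-feature symbols

def random_model(length=5):
    """Build a random flat expression string as a stand-in for a GEP genome."""
    terms = [random.choice(TERMINALS) for _ in range(length)]
    ops = [random.choice(OPERATORS) for _ in range(length - 1)]
    expr = terms[0]
    for op, term in zip(ops, terms[1:]):
        expr += f" {op} {term}"
    return expr

def seed_population(pretrained, pop_size=20, seed_fraction=0.25):
    """Scenario 1: mix previously trained models into a random initial population."""
    n_seed = min(len(pretrained), int(pop_size * seed_fraction))
    population = list(pretrained[:n_seed])
    population += [random_model() for _ in range(pop_size - n_seed)]
    random.shuffle(population)
    return population

def replace_worst(population, costs, transformer_candidates, n_replace=2):
    """Scenario 2: at a given iteration, drop the highest-cost models and
    swap in candidates generated on the fly by the transformer."""
    order = sorted(range(len(population)), key=lambda i: costs[i])
    kept = [population[i] for i in order[: len(population) - n_replace]]
    return kept + list(transformer_candidates[:n_replace])

pretrained = ["S * k + 1.0", "W - 0.5 * S"]  # e.g. models from earlier training
pop = seed_population(pretrained, pop_size=12)
print(len(pop))
```

In both strategies the evolutionary loop itself is unchanged; only the composition of the candidate population is steered toward lower-cost models, which is what reduces the number of RANS evaluations needed.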
With these enhancements to the previous GEP framework, model training is accelerated considerably, and the resulting models show improved performance for both training cases and previously unseen testing cases.
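To illustrate the kind of construction a rolling window can enforce (a hypothetical sketch under assumed semantics, not the paper's formulation): unary operators such as the exponential are applied to short windows of raw linear terms, never to the output of another function, so the assembled model contains no nested functions.

```python
import math

def rolling_window_model(coeffs, features, window=2):
    """Sum of exp() applied to consecutive windows of linear terms.
    Each exp sees only raw terms, so function-nesting depth stays at one."""
    terms = [c * f for c, f in zip(coeffs, features)]
    total = 0.0
    for i in range(len(terms) - window + 1):
        total += math.exp(sum(terms[i:i + window]))  # no function-of-function
    return total

# Toy evaluation with made-up coefficients and feature values.
value = rolling_window_model([0.1, -0.2, 0.05], [1.0, 2.0, 3.0], window=2)
print(value > 0.0)
```

Keeping the functional depth flat in this way is what preserves the interpretability mentioned above: each operator acts on a readable combination of physical features rather than on an opaque composite.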
