Current turbulent heat flux models fail to predict accurate temperature distributions in film cooling flows. The present paper focuses on a machine learning approach to this problem, in which the Gradient Diffusion Hypothesis (GDH) is used in conjunction with a data-driven prediction of the turbulent diffusivity field αt. An overview of the model is presented, followed by validation against two film cooling datasets. Despite some shortcomings, the model shows improvement in the near-injection region. The present work also attempts to interpret the complex machine learning decision process by analyzing the model features and determining their importance. These results show that the model relies heavily on the distance to the wall d and the eddy viscosity νt, while other features display localized prominence.
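The GDH closure underlying the approach above can be sketched in a few lines: the turbulent heat flux is modeled as the turbulent diffusivity times the negative mean-temperature gradient, with αt supplied either by a classical fixed turbulent Prandtl number (αt = νt/Prt) or by a data-driven prediction. The following is a minimal NumPy illustration; the function names, array shapes, and the fixed-Prandtl baseline are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def alpha_t_baseline(nu_t, pr_t=0.85):
    """Classical baseline: turbulent diffusivity from eddy viscosity
    via a constant turbulent Prandtl number, alpha_t = nu_t / Pr_t.
    The data-driven model replaces this with a learned alpha_t field."""
    return nu_t / pr_t

def gdh_heat_flux(alpha_t, grad_T):
    """Gradient Diffusion Hypothesis: u'_j T' = -alpha_t * dT/dx_j.

    alpha_t : (N,) turbulent diffusivity at N grid points
    grad_T  : (N, 3) mean-temperature gradient at those points
    returns : (N, 3) modeled turbulent heat flux vector
    """
    return -alpha_t[:, None] * grad_T

# Toy example on two grid points:
nu_t = np.array([0.085, 0.17])
grad_T = np.array([[1.0, 0.0, 0.0],
                   [2.0, -1.0, 0.0]])
flux = gdh_heat_flux(alpha_t_baseline(nu_t), grad_T)
```

In a data-driven variant, `alpha_t_baseline` would simply be swapped for a regression model evaluated on local flow features (such as wall distance and eddy viscosity), leaving the GDH flux computation unchanged.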
