Abstract

In this work, artificial neural networks (ANNs) are used to predict nucleate boiling heat flux by learning from a dataset of twelve experimental parameters across 231 independent samples. An approach to reduce the number of parameters involved and to increase model accuracy is proposed and implemented. The proposed approach consists of two steps. In the first step, a feature importance study is performed to determine the most significant parameters; only the important features are retained. In the second step, dimensional analysis is performed on these important parameters, and neural network analysis is then conducted based on the resulting dimensionless parameters. The results indicate that the proposed feature importance study and dimensional analysis can significantly improve ANN performance. They also show that model errors based on the reduced dataset are considerably lower than those based on the initial dataset. A study based on other machine learning models likewise shows that the reduced dataset generates better results. The results conclude that ANNs outperform other machine learning algorithms as well as a well-known boiling correlation equation. Additionally, the feature importance study concludes that wall superheat, gravity, and liquid subcooling are the three most significant parameters in the prediction of heat flux for nucleate boiling. Novel results quantifying parameter significance in the surface tension dominated (SDB) and buoyancy dominated (BDB) boiling regimes are reported. The results show that surface tension and liquid subcooling are the most significant parameters in the SDB regime with a combined contribution percentage of 60%, while wall superheat and gravity are the most significant parameters in the BDB regime with a combined contribution percentage of 70%.

1 Introduction

The interest in nucleate boiling is mainly due to the high heat flux generated at comparatively low wall superheats. This makes nucleate boiling an efficient mode of heat transfer for cooling applications, including cooling in microgravity, where an efficient medium of heat transfer is required to make components more compact.

The typical boiling curve during the nucleate boiling phase generally shows a sharp increase in heat flux with increasing wall superheat up to a maximum value known as the critical heat flux (CHF). Beyond the CHF, a decrease in heat flux is observed with increasing wall superheat [1]. It is difficult to control the heat flux near the CHF because nucleate boiling is a complicated process that comprises multiple mechanisms such as liquid–vapor phase change, bubble dynamics, contact line dynamics, and nucleation site densities [2]. These mechanisms further depend on experimental parameters such as wall superheat, liquid subcooling, gravity, surface roughness, and the thermo-physical properties of the liquid and vapor in question. One of the goals of nucleate boiling research has been to provide a generalized prediction of the heat flux based on other experimental input parameters such that it can be maintained at an optimum value close to the CHF [3]. Empirical correlations have been reported in the literature; however, they lack the accuracy for a general mechanistic prediction [2].

1.1 Empirical Correlations.

Multiple studies related to experiments, numerical simulations, and predictive correlations of nucleate boiling have been reported in the literature [4,5]. Numerical simulation studies are typically performed for specific cases and are computationally expensive; they are also challenging because of complexities arising from phase change and bubble dynamics. Similarly, experiments are performed for specific cases. Predictive correlations have also been reported in the literature, most of which were developed using empirical coefficients. Rohsenow [6] proposed one of the first correlations in nucleate boiling, relating the heat flux q̇ to the wall superheat ΔTw by treating nucleate boiling as a single-phase forced-convection problem. Stephan and Abdelsalam [7] also provided a correlation by determining the important fluid property groups using regression analysis. Their correlation includes the bubble departure diameter (Dd) as a variable, which is difficult to use as an input parameter for predicting heat flux in industrial applications. The correlation by Stephan and Abdelsalam does not consider surface parameters, while Rohsenow's correlation implicitly considers the surface contribution through a proportionality constant Cs, which depends on the heater material and fluid combination. Later, Liaw and Dhir [8] improved Rohsenow's model so that Cs varies linearly with contact angle. Stephan and Abdelsalam's correlation does not consider gravity, whereas Rohsenow's equation does. Other correlations related to nucleate boiling have been proposed by Fritz [9] and Gorenflo et al. [10], among others. However, these correlations address subprocess mechanisms such as bubble dynamics and nucleation site densities, and thus do not aid in predicting the overall heat flux. Rohsenow's equation, as corrected by Liaw and Dhir, is a widely adopted model for the prediction of heat flux.
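For reference, Rohsenow's correlation is commonly written in the following form (reproduced here from standard heat-transfer references as a sketch of the usual notation, not taken from the dataset of this work):

```latex
\frac{c_{pl}\,\Delta T_w}{h_{fg}} =
C_s \left[ \frac{\dot{q}}{\mu_l h_{fg}}
\sqrt{\frac{\sigma}{g\,(\rho_l - \rho_v)}} \right]^{0.33} \mathrm{Pr}_l^{\,s}
```

where Cs is the surface–fluid constant discussed above and Pr_l is the liquid Prandtl number; the exponent s is typically taken as 1 for water and 1.7 for other liquids.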

There have been multiple papers on the effects of surface roughness on nucleate boiling behavior. Wang and Dhir [11] performed research on a range of surface roughness values using three subranges: lower than 3.5 μm, between 3.5 μm and 5.8 μm, and larger than 5.8 μm. They concluded that the density of active nucleation sites is related to the square of the heat flux. Jones et al. [12] also studied the effect of surface roughness for ranges between 0.027 μm and 0.038 μm, as well as from 1.08 μm to 10.0 μm. They reported that, for the same roughness value, different liquids show different trends in heat transfer coefficient. So far, no other work has compared the effects of surface roughness against other variables such as wall superheat, gravity, and density. This work attempts to do that by using a dataset that includes surface roughness data in the range of 0.1 μm to 16.3 μm.

1.2 Machine Learning Techniques.

Multiple studies on nucleate boiling experiments have been performed, generating a significant amount of data. These data can be exploited to identify new features and build better prediction models. Machine learning predictive models have proven effective across multiple fields: computer vision [13], natural language processing [14], software engineering [15], epidemiology [16], and environmental sciences [17] are some of the fields in which they have been used with great success. Machine learning models, including artificial neural networks (ANNs), have also been used for prediction in fluid flow and heat transfer problems. Alizadehdakhel et al. [18] performed experiments, computational fluid dynamics (CFD) simulations, and neural network predictions of multiphase flow pressure drop in an unsteady problem. Jambunathan et al. [19] used back-propagation neural networks to predict the convective heat transfer coefficient in a duct. Ling and Templeton [20] evaluated different machine learning algorithms for turbulence-modeling prediction. Rajendran et al. [21] performed vortex detection on unsteady CFD simulations using recurrent neural networks. Singh and Abbassi [22] combined numerical simulations with ANNs to perform thermal modeling in HVAC systems. Mohan and Gaitonde [23] implemented deep neural networks to build a reduced-order model using dimensionality reduction techniques such as proper orthogonal decomposition. They trained the model with high-fidelity direct numerical simulation (DNS) data, which is significant since DNS is computationally expensive, particularly for turbulent flows [24]. Naphon and Arisariyawong [25] applied ANNs to analyze heat transfer in spirally fluted tubes. Guo and Iorio [26] used convolutional neural networks for steady-flow approximation of flow over vehicles.
They trained the model with steady-state velocity-field data over primitive shapes such as triangles, quadrilaterals, and hexagons; the trained model then predicted the velocity field for multiple car samples provided as images. Wang et al. [27] used the Random Forest algorithm to reconstruct the discrepancy between RANS stress modeling and DNS data.

Machine learning models have also been used for prediction in boiling-related research. Liu et al. [28] predicted heat transfer from near-wall local features using simulation data from four different heat flux values for boiling. They showed that the neural network model works well in both interpolation and extrapolation with respect to the training values provided. Since their training data were generated from simulations, the same method cannot be applied to experimental data: the input parameters Liu et al. used for training, such as the pressure gradient and momentum convection, are hard to determine a priori in an experiment. Hassanpour et al. [29] compared different artificial intelligence (AI) techniques for predicting the boiling heat transfer coefficient of alumina-water nanofluids. Their results confirmed that nanoparticle diameter, its weight concentration in the base fluid, wall superheat, and operating pressure are the best independent variables for estimating the pool boiling heat transfer coefficient of water-alumina nanofluid. Mazzola [30] integrated ANNs and empirical correlations for predicting the CHF of subcooled water, providing ranges for the variables determined from curve-fitting techniques, and reports that the method would likely be suitable for thermal-hydraulic and heat transfer data processing. Alimoradi and Shams [31] used ANNs to study the optimization of subcooled flow boiling in a vertical pipe. They determined that an optimum condition exists that minimizes the surface temperature while maximizing the averaged vapor volume fraction, and report that this optimization technique could be used to avoid burning the walls while maintaining maximum vapor volume fraction. Scalabrin et al. [32] modeled flow boiling heat transfer of pure fluids using ANNs.
They report improvements over conventional flow-boiling correlations using ANNs. Qiu et al. [33] used an ANN model to predict the saturated flow boiling heat transfer coefficient in mini/microchannels. They report that the ANN model performed extremely well when data for a working fluid were included in the training dataset, and poorly when such data were excluded. Zhou et al. [34] compared ANN predictions with other machine learning models for the prediction of flow condensation heat transfer coefficients, and report that the machine learning models outperformed a generalized correlation equation. Bubble images have also been used by Suh et al. [35] as input to predict pool boiling characteristics. None of these works considered variation in gravity, and very few used data from multiple test liquids. McClure and Carey [36] did consider gravity in order to predict heat flux; however, they considered only four input parameters: wall superheat, gravity, surface tension, and pressure. They did not consider liquid subcooling, vapor and liquid densities, thermal conductivity, and other parameters, which will be shown to be significant in the feature importance study presented in this work. This work uses deep learning to create a model for nucleate boiling heat flux with a comprehensive set of all of the significant input parameters, including gravity, for five different test liquids. These input parameters were determined based on four different feature importance techniques. To the best of the authors' knowledge, no other study has done this.

Limited dataset size is another issue that needs to be addressed. Using a high number of input parameters when the number of training samples is small can reduce model accuracy, since the dimensionality the model must learn is high. To address this, this work proposes a combination of feature selection techniques and dimensional analysis to reduce the number of input parameters. This keeps the information of each variable intact while reducing the number of parameters, which aids model convergence, increases model accuracy, and reduces training time [37]. In this work, one algorithm from each of the three families of feature selection methods has been used: Wrapper Methods, Filter Methods, and Embedded Methods. A brief description of each method is provided in Section 1 of the Supplemental Materials on the ASME Digital Collection.

The accuracy of machine learning models depends on the quality and quantity of the dataset. In many cases, as in the study of nucleate boiling, the dataset is very limited while the number of features involved is large, which makes the model difficult to train ([29,32,38]). To address this problem, an approach is proposed to reduce the number of features and improve model prediction accuracy. The approach consists of two steps:

  • Step 1: Reduce number of features from 12 to 8 by using feature selection techniques as mentioned above.

  • Step 2: Using the 8 selected variables from Step 1, dimensional analysis is used to generate four non-dimensional Π terms.

In this way, the dimensionality is reduced for the model. In Step 1, a total of four feature selection techniques are used: one from each of the three methods, namely, the Backward Elimination Technique (Wrapper Method), Pearson Correlation (Filter Method) and LASSO (Embedded Method) and an additional manual-wrapper type method based on results of ANNs model, which was trained by removing one feature at a time. Based on the combined conclusion from all the techniques, eight variables are selected for performing dimensional analysis in Step 2.
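As an illustration of the Embedded Method step, a minimal LASSO feature-selection sketch in scikit-learn is shown below on synthetic data; the column names, the alpha value, and the coefficient threshold are assumptions for illustration, not the paper's actual configuration.

```python
# Sketch of the Embedded Method (LASSO) on synthetic data. Column names,
# the alpha value, and the coefficient threshold are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = pd.DataFrame(
    rng.normal(size=(231, 4)),
    columns=["wall_superheat", "gravity", "subcooling", "roughness"])
# Target depends strongly on the first two columns only.
y = 3 * features["wall_superheat"] + 2 * features["gravity"] \
    + rng.normal(scale=0.1, size=231)

X = StandardScaler().fit_transform(features)
lasso = Lasso(alpha=0.1).fit(X, y)

# Features whose coefficients LASSO shrinks to (near) zero are dropped.
kept = [c for c, w in zip(features.columns, lasso.coef_) if abs(w) > 1e-3]
print(kept)
```

Features with nonzero LASSO coefficients are retained; the L1 penalty drives the coefficients of uninformative inputs to exactly zero, which is what makes the method usable for selection.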

In this work, ANNs are used to predict nucleate boiling heat flux. Comparisons are made between models trained on (1) a consolidated experimental dataset using all 12 parameters as input and (2) the reduced dataset with three nondimensional Π terms as input. The effectiveness of this approach is highlighted by the change in the reported error metric, the mean absolute percentage error (MAPE). The ANNs predictions for both datasets have been compared against other machine learning models, namely the Random Forest-Regression (RFR) and Extreme Gradient Boost (XGBoost or XGB) algorithms. Additionally, the ANN model predictions have been compared with a well-known boiling correlation (Rohsenow's equation [6] as corrected by Liaw and Dhir [8]). The results show that using a feature selection study and dimensional analysis to reduce the number of parameters is a viable option for generating accurate predictions from machine learning models for a limited dataset size. The remainder of the paper is organized as follows: Sec. 2 describes the methods and reports details of the dataset, the feature importance study, and the dimensional analysis. Section 3 provides ANN prediction results using Dataset-B and compares the ANN predictions with other machine learning models for both Dataset-A and Dataset-B; it also discusses the effect of parameters and parameter significance for different regimes and conditions, and compares the ANN heat flux predictions with Rohsenow's correlation. Conclusions are provided in Sec. 4.

2 Method

This section provides details on the methods used. Description of ANNs are provided in Sec. 2.1. Section 2.2 provides a description of the original full dataset used.

2.1 Artificial Neural Networks.

A neural network [39] is a collection of mutually connected units or neurons. Each neuron performs a single task and is connected to multiple neurons in the adjacent layers. A regular neural network with four inputs X1, X2, X3, and X4 (for example, density and viscosity in fluid flow cases) is shown in Fig. 1. The first layer (at the extreme left) is called the input layer, which contains the features.

Fig. 1
Regular neural network

Also shown in Fig. 1 is the output layer at the extreme right, consisting of only one unit, named O, which corresponds to the label (for example, heat flux in this work). The layers in the middle are hidden layers; in Fig. 1, two hidden layers are shown, each with two neurons. For complicated problems, multiple layers are usually employed, and the input and output layers can consist of multiple features and labels. A step-by-step description of the process is provided in Section 2 of the Supplemental Materials on the ASME Digital Collection. In this case, the model is trained for 2500 epochs. The model architecture used in this work for Dataset-A consists of 12 input parameters. The output layer at the extreme right contains one unit, which corresponds to the heat flux. There are five hidden layers between the input and output layers, with 1000, 500, 250, 100, and 50 neurons, respectively. Different combinations of the number of layers, neurons, and epochs, and of the learning rate, were tested during hyper-parameter tuning. The reported model provided the best results, hence this architecture was chosen for the study. All codes were developed in the Python programming language on the TensorFlow [40] and Keras [41] frameworks using the scikit-learn [42] package.
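A minimal sketch of this architecture in Keras is shown below. The layer widths (1000/500/250/100/50), the 12-feature input, the single heat-flux output, and the ReLU activation follow the text; the Adam optimizer and MAE loss are assumptions.

```python
# Sketch of the reported ANN architecture for Dataset-A: 12 inputs, five
# hidden layers of 1000/500/250/100/50 neurons, one heat-flux output.
# ReLU follows the text; the Adam optimizer and MAE loss are assumptions.
from tensorflow import keras

def build_model(n_inputs=12):
    model = keras.Sequential([
        keras.Input(shape=(n_inputs,)),
        keras.layers.Dense(1000, activation="relu"),
        keras.layers.Dense(500, activation="relu"),
        keras.layers.Dense(250, activation="relu"),
        keras.layers.Dense(100, activation="relu"),
        keras.layers.Dense(50, activation="relu"),
        keras.layers.Dense(1),  # single output: heat flux
    ])
    model.compile(optimizer="adam", loss="mae")
    return model

model = build_model()
model.summary()
```

Training would then call `model.fit(X_train, y_train, epochs=2500, validation_split=0.2)`, matching the epoch count and validation split described in the text.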

Additionally, other machine learning models such as Random Forest Regression (RFR) and Extreme Gradient Boosting were used. A brief description of these are available in Section 2 of Supplemental Materials on the ASME Digital Collection.

2.2 Description of Data.

The original 12-parameter dataset used to predict the heat flux is denoted as Dataset-A, and the reduced dataset, which uses three nondimensional Π terms to predict the fourth nondimensional heat-flux term, is denoted as Dataset-B.

For Dataset-A, 231 data points were consolidated from multiple publications, namely: Dhir (2005) [2], Oka et al. [43], Merte et al. [44], Straub [45], Raj et al. [46], and Warrier et al. [47]. Details about the dataset are provided in Table 1. It should be noted that only ranges of features pertinent to experimental conditions are provided in the table. Other features, which include the fluid properties, are not included since they are constant values for each liquid.

Table 1

Range of values from each source

| Range of variables | Dhir [2] | Warrier et al. [47] | Raj et al. [46] | Straub et al. [45] | Merte et al. [44] | Oka et al. [43] | Total |
| Heat flux (W/m²) | 1439–115,895 | 40–39,300 | 3230–391,300 | 106,440–404,000 | 9430–80,700 | 4826–221,171 | 40–404,000 |
| Gravity (m/s²) | 0.098–9.8 | 2.45 × 10⁻⁷–9.8 | 9.8 × 10⁻⁶–9.8 | 9.8 × 10⁻⁴–9.8 | 9.8 × 10⁻⁴–9.8 | 9.8 × 10⁻²–9.8 | 2.45 × 10⁻⁷–9.8 |
| Wall superheat (°C or K) | 6–12 | −15.8–11.7 | 14.1–39.1 | 10.6–40.4 | 11.3–39.8 | 3.04–54.26 | −15.88–54.26 |
| Liquid subcooling (°C or K) | 0 | 10.5 | 5, 11 | 17 | 11 | 3–19 | 0–19 |
| Surface roughness (μm) | 8.5 | 16.3 | 1 | NA (0) | NA (0) | 0.1 | 0–16.3 |
| System pressure (kPa) | 101.32 | 125 | 101 | 102 | 150 | 101.3 | 101–150 |
| Number of samples | 10 | 19 | 17 | 9 | 9 | 167 | 231 |
| Liquid | Water | pfnh/FC72 | pfnh/FC72 | R113 | R113 | n-pentane, R113, Water | Water, pfnh, R113, n-pentane |
These data points cover 12 parameters with a wide range: five different gravity values (ranging from earth gravity to microgravity), ten different liquid subcooling values, five different surface roughness values, 230 different wall superheat values, and four different liquids: water, perfluoro-n-hexane, CFC-113, and n-pentane. Out of the 231 samples, a random 80%/20% split was made into training data and testing data, respectively. On the training set, a further 80%/20% validation split was applied. Data were scaled using the standard scaling equation
z = (x − μ)/s
(1)

where z is the scaled output, x is the input prior to scaling, μ is the mean, and s is the standard deviation. The 12 parameters used for training, along with the prediction variable heat flux, are given in Table 2 with their respective symbols and units. The details pertaining to the highlighted portions and the column named “Contribution” in Table 2 are discussed later in the paper.
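The scaling of Eq. (1) can be sketched with scikit-learn's StandardScaler as follows; the data here are synthetic, and fitting the scaler on the training split only (so test statistics do not leak into training) is assumed standard practice rather than stated in the text.

```python
# Sketch of Eq. (1) with scikit-learn's StandardScaler on synthetic data;
# the scaler is fit on the training split only, then applied to both splits.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.random.default_rng(42).normal(loc=5.0, scale=2.0, size=(231, 12))
X_train, X_test = train_test_split(X, test_size=0.2, random_state=0)

scaler = StandardScaler().fit(X_train)   # learns mu and s per feature
Z_train = scaler.transform(X_train)      # z = (x - mu) / s
Z_test = scaler.transform(X_test)

print(Z_train.mean(axis=0).round(6))     # ~0 for every feature
print(Z_train.std(axis=0).round(6))      # ~1 for every feature
```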

Table 2

12 parameters from the original dataset


A flowchart of the entire parameter reduction procedure is provided in Fig. 2.

Fig. 2
Flowchart of parameter reduction

3 Results and Discussion

Section 3.1 provides details pertaining to the feature selection study. Section 3.2 reports the dimensional analysis description followed by results and analysis of results in the following subsections.

3.1 Feature Importance Study.

A deeper look at Dataset-A shows that 12 different parameters are used to predict one parameter, i.e., the model has twelve input degrees of freedom. Training an ANN model with 12 parameters but fewer than 200 training samples can be problematic. Hence, with the goal of reducing the number of parameters, a feature importance study is performed first. Four different feature selection techniques were used: one from each of the wrapper, filter, and embedded methods, and one additional manual wrapper method based on ANN model prediction results.

The results for the manual wrapper method are reported first. The ANN prediction is used as the basis for this method. Here, the mean absolute percentage error (MAPE) is treated as the error metric; it is calculated as
MAPE = (1/n) Σ_{i=1}^{n} |(A_i − F_i)/A_i|
(2)

where A is the actual value, F is the predicted value, and n is the number of samples predicted. In order to understand the contribution of each parameter, an ablation study was performed by following the steps below:

  • Step 1: Run the ANNs model by training on all 12 parameters of Dataset-A. It reports a MAPE of 25.77% (Details are provided in Sec. 3.4).

  • Step 2: Remove one parameter and train the same model architecture on the remaining 11 parameters. Record the MAPE value as it indicates the significance of the parameter that was left out.

  • Step 3: Add the previously left out parameter back and remove a different parameter and perform Step 2.

  • Step 4: Repeat step 2 and 3 for all parameters.

  • Step 5: Compare the MAPE values for each of the runs.

Based on the degree of change in MAPE, the effect of each parameter can be determined. Figure 3 shows, for each case, the difference between the MAPE obtained when one parameter is left out of the training set and the MAPE obtained with the full Dataset-A, which includes all parameters. This shows the effect of leaving out each parameter on the MAPE. Based on the results in Fig. 3, the percent contribution of each parameter was calculated. The contribution of each parameter, in order of significance, is highlighted in the column named “Contribution” in Table 2.
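The five steps above can be sketched as a leave-one-feature-out loop. A plain linear model stands in for the ANN so the example stays self-contained and fast; the feature names and data are illustrative.

```python
# Sketch of the leave-one-feature-out ablation (Steps 1-5) with MAPE from
# Eq. (2). A linear model stands in for the ANN; names/data are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

def mape(actual, predicted):
    return np.mean(np.abs((actual - predicted) / actual)) * 100.0

rng = np.random.default_rng(1)
names = ["wall_superheat", "gravity", "subcooling"]
X = rng.uniform(1.0, 2.0, size=(231, 3))
y = 5 * X[:, 0] + 3 * X[:, 1] + 0.1 * X[:, 2]     # feature 0 matters most

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
baseline = mape(y_te, LinearRegression().fit(X_tr, y_tr).predict(X_te))

for i, name in enumerate(names):
    cols = [j for j in range(X.shape[1]) if j != i]   # leave feature i out
    err = mape(y_te, LinearRegression().fit(X_tr[:, cols], y_tr)
               .predict(X_te[:, cols]))
    print(f"without {name}: MAPE rises by {err - baseline:.3f} points")
```

The larger the MAPE increase when a feature is dropped, the more significant that feature is, which is exactly how the percent contributions of Fig. 3 would be ranked.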

Fig. 3
Feature importance study: Effect of leaving out one variable on MAPE using ANNs (all inclusive refers to the model being run with all parameters included)
Next, a second feature importance technique called Pearson correlation [48] was performed. This method determines the correlation between each pair of variables by calculating the correlation coefficient, r
r = Σ(x_i − x̄)(y_i − ȳ) / √[Σ(x_i − x̄)² Σ(y_i − ȳ)²]
(3)

Based on the correlation coefficient values for each variable in Dataset-A, a heatmap was generated, which shows the degree of correlation among the variables in Fig. 4.
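On a toy dataset, the Pearson step can be sketched with pandas, whose DataFrame.corr computes r of Eq. (3) pairwise; a heatmap such as Fig. 4 would then be drawn from the resulting matrix (e.g., with seaborn.heatmap). Column names and magnitudes are illustrative.

```python
# Sketch of the Pearson step (Eq. (3)) on a toy frame; DataFrame.corr
# computes r pairwise, and the heat-flux column mirrors the rightmost
# column of Fig. 4. Names and magnitudes are illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
wall_superheat = rng.uniform(5, 50, 231)
gravity = rng.uniform(0, 9.8, 231)
heat_flux = 2000 * wall_superheat + 5000 * gravity + rng.normal(0, 500, 231)

df = pd.DataFrame({"wall_superheat": wall_superheat,
                   "gravity": gravity,
                   "heat_flux": heat_flux})
corr = df.corr(method="pearson")

# The rightmost column ranks each input's correlation with heat flux.
print(corr["heat_flux"].round(3))
```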

Fig. 4
Feature importance study: Correlation factor for Pearson correlation

In Fig. 4, each cell reports the correlation factor between the two variables corresponding to the row and column that the cell belongs to. For the heat flux correlations, one should therefore look at the rightmost column. The high correlation coefficient between density and viscosity is in agreement with the physics, since density is correlated with viscosity. Similarly, the high coefficient values between thermal conductivity and latent heat of vaporization also make sense in the nucleate boiling context, as both variables are related to the total heat coming into the liquid. Regarding the heat flux correlations, the results from the Pearson correlation technique are in good agreement with the ablation study reported earlier: wall superheat, gravity, and liquid subcooling are determined to be the three variables most highly correlated with heat flux.

Out of the five parameters with a 1% contribution, surface tension was selected because of its significance in nucleate boiling heat flux [49] and because its contribution percentage was slightly higher than the others (before being rounded to 1%). Liquid density was the other parameter selected, in order to formulate the ratio of liquid to vapor densities as a dimensionless parameter in the dimensional analysis (described in Sec. 3.2), since the density ratio has been shown to be a significant parameter for nucleate boiling in other studies [2]. Additionally, it should be noted that the results from the Backward Elimination technique were also used in selecting the eight parameters for the dimensional analysis. These eight parameters were chosen based on their significance as determined by both the ANN ablation study and the Backward Elimination technique; liquid density and surface tension were deemed significant by both.

Two further feature importance techniques were also used, namely the Backward Elimination technique and LASSO (Least Absolute Shrinkage and Selection Operator). All methods reach the same conclusion: wall superheat, gravity, and liquid subcooling are the three most significant parameters in predicting nucleate boiling heat flux. Details about the Backward Elimination and LASSO techniques are not included for brevity.

3.2 Effect of Dimensional Analysis.

Based on the results from the feature importance study, the three most significant parameters were determined. The first step is to train the ANNs model with these three parameters, namely, wall superheat, gravity, and liquid subcooling. The MAPE obtained by training the ANNs on only these three parameters was 43.43%, much higher than the 25.7% obtained with all 12 parameters. This shows that three parameters are not enough for the ANNs to construct an accurate functional mapping; the model needs the effect of more parameters while still keeping the number of inputs small. Next, the eight most important parameters, including heat flux, are selected to perform dimensional analysis [50], as highlighted in the black box in Table 2, with the goal of reducing the 8 parameters to 4 nondimensional Π terms. Out of these eight, the parameters selected as repeating variables are highlighted in Table 2 with a violet cell background. The Π terms generated by the dimensional analysis are shown in Table 3. Once the Π terms are generated, the next step is to train the model using Π1, Π2, and Π3 as input to predict the nondimensional heat flux Π4. A log function is used in Π4 to reduce the range of the nondimensional gravity values (microgravity is of the order of 1.0 × 10⁻⁶ m/s², while earth gravity is 9.8 m/s²). From the original Dataset-A, a new dataset, Dataset-B, has been generated, which contains the four nondimensional Π terms and their values for the same 231 samples.

Table 3

Π terms generated by dimensional analysis

| Π1 | Π2 | Π3 | Π4 |
| ΔT_sub/ΔT_w | g^(1/4) σ^(5/4)/(κ_l ΔT_w ρ_v^(1/4)) | ρ_l/ρ_v | log(g^(9/4) ρ_v^(1/4) q̇/σ^(3/4)) |
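Assembling the Π terms of Table 3 from raw parameters can be sketched as below (SI units assumed). The groupings and exponents are transcribed from the table as printed, not re-derived, and the input values are purely illustrative.

```python
# Sketch of assembling Table 3's Pi terms from raw parameters (SI units
# assumed). Groupings and exponents follow the table as printed; the
# input values are purely illustrative.
import numpy as np

def pi_terms(dT_sub, dT_w, g, sigma, k_l, rho_l, rho_v, q):
    pi1 = dT_sub / dT_w                                    # subcooling ratio
    pi2 = g**0.25 * sigma**1.25 / (k_l * dT_w * rho_v**0.25)
    pi3 = rho_l / rho_v                                    # density ratio
    pi4 = np.log(g**2.25 * rho_v**0.25 * q / sigma**0.75)  # log-compressed
    return pi1, pi2, pi3, pi4

# Water-like values at earth gravity (illustrative only)
p = pi_terms(dT_sub=5.0, dT_w=10.0, g=9.8, sigma=0.059,
             k_l=0.68, rho_l=958.0, rho_v=0.6, q=1.0e5)
print([round(float(v), 4) for v in p])
```

Each row of Dataset-A would pass through such a mapping to produce the corresponding row of Dataset-B (Π1, Π2, Π3 as inputs, Π4 as the target).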
The model is trained based on Dataset-B, with only one change to the model reported in Sec. 2. Since the log function used in Π4 generates many negative values, the exponential linear unit (“ELU”) activation function was implemented instead of “ReLU”; unlike ReLU, ELU can produce negative outputs. ELU is defined as
f(x) = x for x > 0; f(x) = α·(exp(x) − 1) for x ≤ 0
(4)

where α is the scale for negative factor and is set to the default value of 1.0 in the code.
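Eq. (4) can be sketched directly in NumPy; in a Keras model the same activation would be requested with `Dense(..., activation="elu")`.

```python
# Eq. (4) sketched in NumPy; in Keras the same activation is selected
# with Dense(..., activation="elu").
import numpy as np

def elu(x, alpha=1.0):
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

out = elu([-2.0, 0.0, 3.0])
print(out)   # negative inputs map into (-alpha, 0) instead of being zeroed
```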

The results for the Π4 predictions of 46 samples using ANNs are provided in Fig. 5. The ground truth values (actual Π4 values) are plotted along the x-axis, and the model predictions are plotted along the y-axis. Points lying closer to the diagonal line through the origin indicate higher accuracy. The MAPE reported for the ANNs model using the nondimensional Π terms is 9.1%, which reduces to 6.4% after removing three outlier predictions.

Fig. 5
Artificial neural networks prediction for Π4: Dataset-B

Next, the results from different models trained on the different datasets are compared. In Sec. 3.3, predictions from ANNs are compared with Extreme Gradient Boost (XGB) and Random Forest-Regression (RFR) predictions using Dataset-B. In Sec. 3.4, the same comparison is made for Dataset-A. In Sec. 3.6, a comparison of the ANNs prediction with Rohsenow's correlation is provided.

3.3 Comparison With Other Machine Learning Models.

Two new models are trained using Dataset-B to predict the nondimensional heat flux Π4. The predictions for Π4 using the XGB and RFR algorithms are provided in Fig. 6. The XGB model has a MAPE of 16.3%, which is higher than the 9.1% from the ANNs model. RFR reports a MAPE of 18.7%, which is higher than those of both the ANNs model and the XGB model.

Fig. 6
XGB (left) and RFR (right) predictions for Π4: Dataset-B

As can be seen from Fig. 6, the XGB and RFR predictions are very similar, with some minor differences.
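The two tree-ensemble baselines can be sketched on a toy regression task as follows. RFR uses scikit-learn directly; scikit-learn's GradientBoostingRegressor stands in for XGBoost here to keep the example dependency-free (the paper's XGB model would use xgboost.XGBRegressor). All data and hyperparameters are illustrative.

```python
# Sketch of the tree-ensemble baselines on a toy regression task. RFR uses
# scikit-learn directly; GradientBoostingRegressor stands in for XGBoost
# (the paper's XGB model would use xgboost.XGBRegressor).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import train_test_split

def mape(a, f):
    return np.mean(np.abs((a - f) / a)) * 100.0

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(231, 3))
y = 5 + 10 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 0.1, 231)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

results = {}
for name, model in [("XGB-like", GradientBoostingRegressor(random_state=0)),
                    ("RFR", RandomForestRegressor(random_state=0))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    results[name] = mape(y_te, pred)
    print(f"{name}: MAPE = {results[name]:.2f}%")
```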

3.4 Comparison With 12 Parameter Dataset Predictions.

To highlight the effectiveness of dimension reduction, results from the ANNs model trained using Dataset-A are compared with results from the ANNs model trained using Dataset-B. Prediction was made over the testing dataset of 46 independent samples for Dataset-A. The results are shown in Fig. 7. In the case of Dataset-A, the MAPE value is 25.7%; after excluding two outlier samples, it reduces to 14.4%. The outliers showed no specific pattern: they correspond to different liquids with different gravity, wall superheat, and liquid subcooling values, indicating that they do not belong to any particular category that the model systematically fails to predict. In comparison, the prediction error for Dataset-B using ANNs was much lower at 9.1%.

Fig. 7: Artificial neural networks prediction for heat flux: Dataset-A in a log–log plot

Next, the predictions from the XGB and RFR models for Dataset-A are provided. Predictions were made over the same testing dataset of 46 independent samples as for Dataset-B. The results for the XGB and RFR predictions are shown in Fig. 8.

Fig. 8: XGB (left) and RFR (right) predictions for heat flux: Dataset-A in a log–log plot

The corresponding MAPE value for the XGB prediction of heat flux for Dataset-A is 42.1%, which is higher than the 25.7% of ANNs. In comparison, the XGB error for Dataset-B was much lower at 16.3%.

For Dataset-A, RFR reports a MAPE of 44.6%, which is higher than the 25.7% of ANNs and the 42.1% of XGB. In comparison, the RFR error for Dataset-B was much lower at 18.7%.

A comparison between Figs. 6 and 8 shows that the predictions for Dataset-B in Fig. 6 are much closer to the centerline and hence more accurate than the predictions for Dataset-A in Fig. 8. This can also be seen from the MAPE value comparisons.

A summary of the MAPE values, along with the standard deviations of the MAPE, for the machine learning models using Dataset-A to predict heat flux and Dataset-B to predict Π4 for the same 46 samples is provided in Table 4. Here, the standard deviation ϕ is calculated as

$\phi = \sqrt{\frac{\sum_i (x_i - \mu)^2}{N}}$
(5)

where N is the number of samples, x_i represents the value from the ith sample, and μ is the mean. The 46 test samples were selected prior to the training process for all cases shown in Table 4. The selection was completely random; however, the same seed value was used for all methods so that exactly the same samples were included in the test set for every model, which maintains a fair comparison of predictive capability across the models. The results in Table 4 make it evident that the ANNs model gives the most accurate predictions for both Dataset-A and Dataset-B. The ANNs prediction using Dataset-B is considerably more accurate than that using Dataset-A, and this trend holds for the XGB and RFR algorithms as well. This suggests that performing the feature importance study and dimensional analysis, which reduced the number of features from 12 to 3, improved the accuracy of the models.
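A minimal sketch of this error metric, applying Eq. (5) to the per-sample absolute percentage errors (population form, dividing by N rather than N−1; the numbers below are made up for illustration):

```python
import numpy as np

def mape_and_std(y_true, y_pred):
    """Return MAPE (%) and the standard deviation of the per-sample
    absolute percentage errors, following Eq. (5)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ape = np.abs((y_true - y_pred) / y_true) * 100.0  # the x_i of Eq. (5)
    mu = ape.mean()                                   # the mean, mu
    phi = np.sqrt(np.mean((ape - mu) ** 2))           # Eq. (5)
    return mu, phi

# Hypothetical illustration: APEs are [10, 15, 0], so mu = 25/3.
m, s = mape_and_std([100.0, 200.0, 400.0], [90.0, 230.0, 400.0])
print(f"MAPE = {m:.2f}%, std = {s:.2f}")
```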

Table 4

MAPE and standard deviation comparison of machine learning models for 46 samples using Dataset-A and Dataset-B

Model | Dataset-A MAPE | Dataset-A std. dev. | Dataset-B MAPE | Dataset-B std. dev.
ANNs  | 25.7%          | 61.0                | 9.1%           | 16.2
XGB   | 42.1%          | 66.5                | 16.3%          | 28.4
RFR   | 44.6%          | 94.1                | 18.7%          | 32.9

3.5 Analysis of Predictions

3.5.1 Effect of Gravity.

The model learns the physics-based behavior from the data. This can be verified by considering the example of gravity and its effect on heat flux. Experiments have typically shown that, with other parameters held constant, a reduction in gravity leads to a reduction in heat flux ([44,46,47]). For the model prediction, two sample points are considered from the unseen test set. The two points are for the same liquid (pfnh) with all other parameters of similar value; the only difference between the two datapoints is the gravity value: one is under microgravity conditions, and the other is under earth gravity. The ground truth values of gravity and heat flux and the ANN model predictions of heat flux for the two points are shown in Table 5.

Table 5

Effect of gravity on heat flux

Gravity (m/s²) | Actual heat flux (W/m²) | Predicted heat flux (W/m²) | Percent difference (MAPE)
9.8            | 215,000                 | 215,755                    | 0.35%
9.8E-6         | 30,840                  | 24,687                     | 19.94%

As can be seen from Table 5, the ANN model predicts the trend of heat flux reducing with gravity. This trend has been verified by multiple experiments ([44,45,47]). The error for the earth gravity case is less than 1%. Although the error for the microgravity case is higher at 19.94%, it is similar to that of the state-of-the-art scaling law model of Raj et al. [46], which provides an error of 20% for heat flux predictions under microgravity conditions. However, it should be noted that the scaling law model of [46] has been tested only for microgravity conditions, whereas the ANN model predictions have been tested for all conditions. Additionally, the higher error for the microgravity case can be explained by the lower number of data points under microgravity conditions. With more experimental data available, the model accuracy could be further improved; this can be considered as part of future work.

3.5.2 Effect of Wall Superheat.

Next, the effect of wall superheat on heat flux is considered. From the unseen test set, sample datapoints have been selected which are for the same test liquid (water) with all other parameters of similar values; the only difference is in the wall superheat values. Experiments have shown that heat flux increases with wall superheat, which can be attributed to the additional supply of heat due to the increased temperature difference. The ground truth values of wall superheat and heat flux and the ANN model predictions of heat flux for these points are shown in Table 6.

Table 6

Effect of wall superheat on heat flux

Wall superheat (°C) | Actual heat flux (W/m²) | Predicted heat flux (W/m²) | Percent difference (MAPE)
10.8                | 31,693                  | 32,111                     | 1.32%
17.7                | 70,102                  | 72,507                     | 3.42%
20.2                | 103,163                 | 108,982                    | 5.64%
21.2                | 126,648                 | 126,623                    | 0.019%

As can be seen from Table 6, the ANN model learns the behavior that heat flux increases with increasing wall superheat. Additionally, the accuracy of the model prediction is high, which is evident from the low MAPE values. This shows that the model learns the behavior of individual parameters with respect to heat flux and creates a universal functional mapping from all the parameters to heat flux.

3.5.3 Critical Heat Flux.

In this section, the error of the model is reported for the prediction of heat flux in cases where the critical heat flux (CHF) has been reached. To explore this, the CHF is first calculated for all datapoints in the dataset using Zuber's [51] equation:

$q_{CHF} = C\, h_{fg}\, \rho_v \left[ \frac{\sigma g (\rho_l - \rho_v)}{\rho_v^2} \right]^{1/4}$
(6)

Here, C is a constant with a value of 0.149 for flat surfaces. The calculated CHF values were compared with the actual heat flux to determine which datapoints had reached CHF. Out of the 231 datapoints, only 27 had reached the CHF value, and 18 of those 27 were for reduced gravity or microgravity conditions. Of those 27 datapoints, 5 were part of the randomly chosen test dataset unseen by the ANN model. The ANN model's prediction accuracy was then checked for these cases; the actual heat flux, predicted heat flux, and MAPE for those 5 datapoints are provided in Table 7.
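Eq. (6) can be sketched as a small helper function; the property values below are approximate textbook figures for saturated water at 1 atm, used only to show the order of magnitude, not values taken from the paper's dataset:

```python
def zuber_chf(h_fg, rho_v, rho_l, sigma, g, C=0.149):
    """Critical heat flux from Zuber's correlation, Eq. (6):
    q_CHF = C * h_fg * rho_v * [sigma * g * (rho_l - rho_v) / rho_v**2]**(1/4),
    with C = 0.149 for flat surfaces as used in the text."""
    return C * h_fg * rho_v * (sigma * g * (rho_l - rho_v) / rho_v**2) ** 0.25

# Approximate properties of saturated water at 1 atm under earth gravity:
# h_fg in J/kg, densities in kg/m^3, sigma in N/m, g in m/s^2.
q = zuber_chf(h_fg=2.257e6, rho_v=0.598, rho_l=958.0, sigma=0.0589, g=9.81)
print(f"q_CHF ≈ {q / 1e6:.2f} MW/m^2")  # on the order of 1 MW/m^2
```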

Table 7

Prediction accuracy for cases where CHF was reached

Actual heat flux (W/m²) | Predicted heat flux (W/m²) | MAPE   | Average MAPE
30,840                  | 24,687                     | 19.94% | 15.36%
44,060                  | 47,285                     | 7.32%  |
215,000                 | 215,755                    | 0.35%  |
146,040                 | 124,243                    | 14.92% |
39,310                  | 25,835                     | 34.27% |

As can be seen from the table, the average MAPE is about 15.36% for cases where CHF was reached; hence the model is able to predict cases of nucleate boiling that are at the border of transition boiling. For comparison, the MAPE of the complete dataset was 25.77%, which shows that the cases where CHF had been reached have a lower error.

3.5.4 Parameter Significance.

One of the key novel contributions of this work is the quantification of the importance of each parameter in heat flux prediction. This quantification was provided in Table 2. To verify the claim that "the three most significant parameters for heat flux prediction in nucleate boiling are wall superheat, gravity, and liquid subcooling," a study was performed in which the ANN model was trained using only these three parameters and its accuracy was compared with that of the model trained on the complete dataset. A second study was performed to verify whether the seven parameters selected for the dimensional analysis can predict heat flux with reasonable accuracy. The comparison is provided in Table 8.

Table 8

MAPE comparison between the 3-parameter dataset (most significant inputs), the 7-parameter dataset (parameters chosen for dimensional analysis), and the complete 12-parameter dataset

MAPE, 3-parameter dataset | MAPE, 7-parameter dataset | MAPE, 12-parameter dataset
43.43%                    | 26.54%                    | 25.77%
The MAPE for the 12-parameter dataset is expected to be lower than those of the 3-parameter and 7-parameter datasets, since it includes more information. The percentage difference between the 12-parameter MAPE and the 3-parameter one is about 16%, which shows that the contribution of the top three parameters is about 84%. Here, the percentage difference (pd) between two quantities x₂ and x₁ is defined as

$pd = \frac{x_2 - x_1}{x_1} \times 100$
(7)

The MAPE for the 7-parameter dataset is closer to that of the 12-parameter dataset than to that of the 3-parameter dataset, since it contains more of the relevant information. The 7-parameter dataset has a percentage difference of 2.9% from the 12-parameter one. This shows that the combined contribution of these seven parameters, namely wall superheat, gravity, liquid subcooling, vapor density, thermal conductivity, surface tension, and liquid density, is about 97% in the prediction of heat flux.
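The percentage difference of Eq. (7) can be checked directly with the Table 8 values:

```python
def percentage_difference(x2, x1):
    """Percentage difference as defined in Eq. (7): pd = ((x2 - x1) / x1) * 100."""
    return (x2 - x1) / x1 * 100.0

# Seven-parameter vs. twelve-parameter MAPE values from Table 8:
pd_7_vs_12 = percentage_difference(26.54, 25.77)
print(f"{pd_7_vs_12:.1f}%")  # close to the reported ~2.9%
```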

3.5.5 Boiling Regimes.

In order to explore the effect of different boiling regimes, the percentage contribution of the variables in both the surface tension dominated boiling (SDB) and buoyancy dominated boiling (BDB) regimes has been quantified. The regimes are separated by the threshold Lh/Lc > 2.1 for BDB and Lh/Lc < 2.1 for SDB, where Lh is the heater size and $L_c = \sqrt{\sigma / (g(\rho_l - \rho_v))}$ is the capillary length, as proposed by Raj and Kim [52]. By this criterion, the dataset includes 60 datapoints in the SDB regime and the remaining 171 datapoints in the BDB regime. The contribution of each parameter was determined by calculating the MAPE difference between (a) the prediction from the ANN model trained using the complete dataset and (b) the prediction from the ANN model trained with that particular parameter removed, which shows the contribution of the specific parameter. The MAPE thus calculated is based on test cases from either the SDB regime only or the BDB regime only. Based on the equation for Lc, surface tension, gravity, and the densities of liquid and vapor are significant terms for regime determination. The contributions of these parameters are provided in Table 9 to see whether the ANN model is able to capture the physics behind the regimes.
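The regime criterion above can be sketched as a small classifier; the heater size and water-like property values below are illustrative assumptions, not datapoints from the paper:

```python
import math

def boiling_regime(heater_size, sigma, g, rho_l, rho_v, threshold=2.1):
    """Classify the boiling regime per the Raj and Kim criterion:
    Lc = sqrt(sigma / (g * (rho_l - rho_v))); BDB if Lh/Lc > 2.1, else SDB."""
    L_c = math.sqrt(sigma / (g * (rho_l - rho_v)))
    return "BDB" if heater_size / L_c > threshold else "SDB"

# Water-like properties on an assumed 10 mm heater: earth gravity vs. microgravity.
earth = boiling_regime(0.010, sigma=0.0589, g=9.81, rho_l=958.0, rho_v=0.598)
micro = boiling_regime(0.010, sigma=0.0589, g=9.8e-6, rho_l=958.0, rho_v=0.598)
print(earth, micro)  # BDB SDB
```

Reducing gravity grows the capillary length Lc, so the same heater falls below the Lh/Lc threshold and the case moves from BDB to SDB, consistent with the discussion below.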

Table 9

Parameter significance for different boiling regimes

Variable                 | Contribution percentage in SDB | Contribution percentage in BDB
Gravity                  | 0.96%                          | 22.78%
Surface tension          | 44.86%                         | 0.57%
Liquid and vapor density | 4.97%                          | 1.62%

The results quantify the effect of gravity and surface tension in the SDB and BDB regimes. The higher percentage contribution of surface tension in the SDB regime and its lower contribution in the BDB regime show that surface tension is a key parameter in the SDB regime; similarly, gravity is a key parameter in the BDB regime. This is consistent with the physics of the problem: as gravity is reduced, buoyancy weakens and the effect of surface tension becomes stronger, and with continued reduction in gravity, surface tension becomes the dominant parameter. The contribution of gravity in the SDB regime is almost nonexistent, as was also reported by Raj et al. [46] and Raj and Kim [52] from their experiments. Additionally, the model predictions highlight that surface tension and liquid subcooling are the two most significant parameters in the SDB regime, with a combined contribution of 60%. Similarly, wall superheat and gravity are the two most significant parameters in the BDB regime, with a combined contribution of 72%.
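The remove-and-retrain procedure used above for quantifying a parameter's contribution can be sketched as follows; this is a schematic on synthetic data with a RandomForestRegressor standing in for the paper's ANN, not the actual study pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def mape(y_true, y_pred):
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

def contribution_by_removal(X, y, feature, seed=42):
    """Contribution of one feature = increase in test MAPE when the model
    is retrained with that feature removed (the procedure described above)."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=seed)
    full = RandomForestRegressor(random_state=seed).fit(X_tr, y_tr)
    keep = [c for c in range(X.shape[1]) if c != feature]
    reduced = RandomForestRegressor(random_state=seed).fit(X_tr[:, keep], y_tr)
    return (mape(y_te, reduced.predict(X_te[:, keep]))
            - mape(y_te, full.predict(X_te)))

# Synthetic stand-in: feature 0 drives the target strongly, feature 2 barely.
rng = np.random.default_rng(0)
X = rng.uniform(0.5, 1.5, size=(231, 3))
y = 5 * X[:, 0] + 0.2 * X[:, 1] + 0.05 * X[:, 2]

c0 = contribution_by_removal(X, y, feature=0)  # large positive
c2 = contribution_by_removal(X, y, feature=2)  # near zero
print(c0, c2)
```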

3.6 Comparison With Rohsenow's Correlation.

Next, the ANNs predictions are compared against predictions from the well-known correlation originally proposed by Rohsenow [6] and later improved by Liaw and Dhir [8]. Out of the 46 samples, contact angle information was available for only 22, so those 22 samples were used to calculate the heat flux using the correlation. The MAPE for ANNs using Dataset-B was also calculated for the same 22 samples in order to have a fair comparison with Rohsenow's correlation. The experimental heat flux, Rohsenow's prediction, and the ANNs predictions are shown in Fig. 9.

Fig. 9: Comparison of heat flux between experimental values, Rohsenow's correlation, and predictions from ANNs based on 22 samples

It is evident in Fig. 9 that the ANNs predictions are much closer to the experimental values than Rohsenow's correlation. To quantify the differences, the MAPE values, along with the standard deviation of the MAPE, for the 22 samples using Dataset-B for ANNs and for Rohsenow's correlation are reported in Table 10.

Table 10

MAPE and standard deviation comparison of ANNs (heat flux calculated from Π4) and Rohsenow's correlation for 22 samples for heat flux prediction

Model    | MAPE    | Standard deviation
ANNs     | 19.07%  | 22.46
Rohsenow | 103.48% | 67.18

The results in Table 10 show that the ANNs prediction with Dataset-B outperforms Rohsenow's correlation to a considerable extent. The overall conclusion from the 22-sample results in Table 10 is similar to that from the 46-sample results reported in Table 4. It confirms that ANNs using Dataset-B: (i) outperform their own predictions using Dataset-A, (ii) outperform the other machine learning models using both Dataset-A and Dataset-B, and (iii) outperform Rohsenow's correlation.

4 Conclusion

The key contributions of the paper are highlighted below:

  1. An accurate data-driven model to predict nucleate boiling heat flux has been developed. The best published model, Liaw and Dhir's modified version of Rohsenow's correlation, provides a MAPE of about 102% for 46 independent sample points. The proposed ANN-based model has a MAPE of only 9%, which is about 10 times more accurate.

  2. The paper quantifies the significance percentage of each parameter on which the nucleate boiling heat flux depends. Earlier work has discussed the roles of many parameters in boiling; however, to the authors' knowledge, the quantification of these contributions has not previously been reported. The current work helps researchers determine which parameters can be rejected with minimum loss in accuracy.

  3. A quantified percentage contribution of the significant variables in the SDB and BDB regimes of nucleate boiling has been provided. The key highlight of this result is that surface tension and liquid subcooling are the most significant parameters in the SDB regime, with a combined contribution percentage of 60%, while wall superheat and gravity are the most significant parameters in the BDB regime, with a combined contribution percentage of 70%.

  4. Most prior studies using deep learning did not consider variation in gravity. One study did consider gravity; however, it considered only four input parameters: wall superheat, gravity, surface tension, and pressure. It did not consider liquid subcooling, vapor and liquid densities, thermal conductivity, and other parameters which are shown to be significant in the feature importance study presented in this work. This work uses deep learning to create a model for nucleate boiling heat flux with a comprehensive set of all the significant input parameters, including gravity, for five different test liquids. These parameters were determined based on four different feature importance techniques.

  5. A new way to handle the challenges of small datasets in deep learning is proposed. Training a model with a large number of features on a small dataset can reduce model accuracy. To address this, a combination of feature selection techniques and dimensional analysis is proposed to reduce the number of input parameters by creating nondimensional Π-terms from the most significant inputs and then using these Π-terms as the model input. This keeps the information of each variable intact while reducing the number of parameters, which aids model convergence and increases model accuracy. The reduction in error using this methodology has been highlighted as well.

References

1. Faghri, A., and Zhang, Y., 2020, "Boiling," Fundamentals of Multiphase Heat Transfer and Flow, Springer, Berlin, pp. 469–534.
2. Dhir, V. K., 2006, "Mechanistic Prediction of Nucleate Boiling Heat Transfer-Achievable or a Hopeless Task?," ASME J. Heat Mass Transfer-Trans. ASME, 128(1), pp. 1–12. 10.1115/1.2136366
3. Dhir, V. K., 2001, "Numerical Simulations of Pool-Boiling Heat Transfer," AIChE J., 47(4), pp. 813–834. 10.1002/aic.690470407
4. Banerjee, S., Lian, Y., Liu, Y., and Sussman, M., 2022, "A New Method for Estimating Bubble Diameter at Different Gravity Levels for Nucleate Pool Boiling," ASME J. Heat Mass Transfer-Trans. ASME, 144(2), p. 021601. 10.1115/1.4053102
5. Banerjee, S., Liu, Y., Sussman, M., and Lian, Y., 2022, "Depletable Micro-Layer for Nucleate Boiling Simulations in Micro-Gravity Conditions: A New Approach," Int. J. Heat Mass Transfer, 190, p. 122642. 10.1016/j.ijheatmasstransfer.2022.122642
6. Rohsenow, W., 1952, "A Method of Correlating Heat Transfer Data for Surface Boiling Liquids," Trans. ASME, 74, p. 966.
7. Stephan, K., and Abdelsalam, M., 1980, "Heat-Transfer Correlations for Natural Convection Boiling," Int. J. Heat Mass Transfer, 23(1), pp. 73–87. 10.1016/0017-9310(80)90140-4
8. Liaw, S.-P., and Dhir, V., 1989, "Void Fraction Measurements During Saturated Pool Boiling of Water on Partially Wetted Vertical Surfaces," ASME J. Heat Mass Transfer-Trans. ASME, 111(3), pp. 731–738. 10.1115/1.3250744
9. Fritz, W., 1935, "Maximum Volume of Vapor Bubbles," Phys. Z, 36, pp. 379–384.
10. Gorenflo, D., Knabe, V., and Bieling, V., 1986, "Bubble Density on Surfaces With Nucleate Boiling-Its Influence on Heat Transfer and Burnout Heat Flux at Elevated Saturation Pressures," International Heat Transfer Conference Digital Library, Begel House Inc., Danbury, CT.
11. Wang, C., and Dhir, V., 1993, "Effect of Surface Wettability on Active Nucleation Site Density During Pool Boiling of Water on a Vertical Surface," ASME J. Heat Transfer, 115(3), pp. 659–669. 10.1115/1.2910737
12. Jones, B. J., McHale, J. P., and Garimella, S. V., 2009, "The Influence of Surface Roughness on Nucleate Pool Boiling Heat Transfer," ASME J. Heat Mass Transfer-Trans. ASME, 131(12), p. 121009. 10.1115/1.3220144
13. Voulodimos, A., Doulamis, N., Doulamis, A., and Protopapadakis, E., 2018, "Deep Learning for Computer Vision: A Brief Review," Comput. Intell. Neurosci., 2018, pp. 1–13. 10.1155/2018/7068349
14. Young, T., Hazarika, D., Poria, S., and Cambria, E., 2018, "Recent Trends in Deep Learning Based Natural Language Processing," IEEE Comput. Intell. Mag., 13(3), pp. 55–75. 10.1109/MCI.2018.2840738
15. Chatterjee, P., Damevski, K., and Pollock, L., 2021, "Automatic Extraction of Opinion-Based Q&A From Online Developer Chats," Proceedings of the 43rd International Conference on Software Engineering (ICSE), Madrid, ES, May 22–30, pp. 1260–1272.
16. Banerjee, S., and Lian, Y., 2022, "Data Driven Covid-19 Spread Prediction Based on Mobility and Mask Mandate Information," Appl. Intell., 52(2), pp. 1969–1978. 10.1007/s10489-021-02381-8
17. Jakaria, A. H., Hossain, M. M., and Rahman, M. A., 2020, "Smart Weather Forecasting Using Machine Learning: A Case Study in Tennessee," preprint arXiv:2008.10789.
18. Alizadehdakhel, A., Rahimi, M., Sanjari, J., and Alsairafi, A. A., 2009, "CFD and Artificial Neural Network Modeling of Two-Phase Flow Pressure Drop," Int. Commun. Heat Mass Transfer, 36(8), pp. 850–856. 10.1016/j.icheatmasstransfer.2009.05.005
19. Jambunathan, K., Hartle, S., Ashforth-Frost, S., and Fontama, V., 1996, "Evaluating Convective Heat Transfer Coefficients Using Neural Networks," Int. J. Heat Mass Transfer, 39(11), pp. 2329–2332. 10.1016/0017-9310(95)00332-0
20. Ling, J., and Templeton, J., 2015, "Evaluation of Machine Learning Algorithms for Prediction of Regions of High Reynolds Averaged Navier Stokes Uncertainty," Phys. Fluids, 27(8), p. 085103. 10.1063/1.4927765
21. Rajendran, V., Kelly, K. Y., Leonardi, E., and Menzies, K., 2018, "Vortex Detection on Unsteady CFD Simulations Using Recurrent Neural Networks," AIAA Paper No. 2018-3724. 10.2514/6.2018-3724
22. Singh, S., and Abbassi, H., 2018, "1D/3D Transient HVAC Thermal Modeling of an Off-Highway Machinery Cabin Using CFD-ANN Hybrid Method," Appl. Therm. Eng., 135, pp. 406–417. 10.1016/j.applthermaleng.2018.02.054
23. Mohan, A. T., and Gaitonde, D. V., 2018, "A Deep Learning Based Approach to Reduced Order Modeling for Turbulent Flow Control Using LSTM Neural Networks," preprint arXiv:1804.09269.
24. Banerjee, S., Ayala, O., and Wang, L.-P., 2020, "Direct Numerical Simulations of Small Particles in Turbulent Flows of Low Dissipation Rates Using Asymptotic Expansion," 5th Thermal and Fluids Engineering Conference (TFEC), New Orleans, LA, Apr. 5–8, pp. 659–668. 10.1615/TFEC2020.tfl.032308
25. Naphon, P., and Arisariyawong, T., 2016, "Heat Transfer Analysis Using Artificial Neural Networks of the Spirally Fluted Tubes," J. Res. Appl. Mech. Eng., 4(2), pp. 135–147.
26. Guo, X., Li, W., and Iorio, F., 2016, "Convolutional Neural Networks for Steady Flow Approximation," Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, Aug. 13–17, pp. 481–490. https://dl.acm.org/doi/10.1145/2939672.2939738
27. Wang, J.-X., Wu, J.-L., and Xiao, H., 2017, "Physics-Informed Machine Learning Approach for Reconstructing Reynolds Stress Modeling Discrepancies Based on DNS Data," Phys. Rev. Fluids, 2(3), p. 034603. 10.1103/PhysRevFluids.2.034603
28. Liu, Y., Dinh, N., Sato, Y., and Niceno, B., 2018, "Data-Driven Modeling for Boiling Heat Transfer: Using Deep Neural Networks and High-Fidelity Simulation Results," Appl. Therm. Eng., 144, pp. 305–320. 10.1016/j.applthermaleng.2018.08.041
29. Hassanpour, M., Vaferi, B., and Masoumi, M. E., 2018, "Estimation of Pool Boiling Heat Transfer Coefficient of Alumina Water-Based Nanofluids by Various Artificial Intelligence (AI) Approaches," Appl. Therm. Eng., 128, pp. 1208–1222. 10.1016/j.applthermaleng.2017.09.066
30. Mazzola, A., 1997, "Integrating Artificial Neural Networks and Empirical Correlations for the Prediction of Water-Subcooled Critical Heat Flux," Revue Générale de Thermique, 36(11), pp. 799–806. 10.1016/S0035-3159(97)87750-1
31. Alimoradi, H., and Shams, M., 2017, "Optimization of Subcooled Flow Boiling in a Vertical Pipe by Using Artificial Neural Network and Multi Objective Genetic Algorithm," Appl. Therm. Eng., 111, pp. 1039–1051. 10.1016/j.applthermaleng.2016.09.114
32. Scalabrin, G., Condosta, M., and Marchi, P., 2006, "Modeling Flow Boiling Heat Transfer of Pure Fluids Through Artificial Neural Networks," Int. J. Therm. Sci., 45(7), pp. 643–663. 10.1016/j.ijthermalsci.2005.09.009
33. Qiu, Y., Garg, D., Zhou, L., Kharangate, C. R., Kim, S.-M., and Mudawar, I., 2020, "An Artificial Neural Network Model to Predict Mini/Micro-Channels Saturated Flow Boiling Heat Transfer Coefficient Based on Universal Consolidated Data," Int. J. Heat Mass Transfer, 149, p. 119211. 10.1016/j.ijheatmasstransfer.2019.119211
34. Zhou, L., Garg, D., Qiu, Y., Kim, S.-M., Mudawar, I., and Kharangate, C. R., 2020, "Machine Learning Algorithms to Predict Flow Condensation Heat Transfer Coefficient in Mini/Micro-Channel Utilizing Universal Data," Int. J. Heat Mass Transfer, 162, p. 120351. 10.1016/j.ijheatmasstransfer.2020.120351
35. Suh, Y., Bostanabad, R., and Won, Y., 2021, "Deep Learning Predicts Boiling Heat Transfer," Sci. Rep., 11(1), pp. 1–10. 10.1038/s41598-021-85150-4
36. McClure, E. R., and Carey, V. P., 2021, "Genetic Algorithm and Deep Learning to Explore Parametric Trends in Nucleate Boiling Heat Transfer Data," ASME J. Heat Mass Transfer-Trans. ASME, 143(12), p. 121602. 10.1115/1.4052435
37. James, G., Witten, D., Hastie, T., and Tibshirani, R., 2013, An Introduction to Statistical Learning, Vol. 112, Springer, New York.
38. Van Der Maaten, L., Postma, E., and Van den Herik, J., 2009, "Dimensionality Reduction: A Comparative Review," J. Mach. Learn Res., 10(66–71), p. 13.
39. McCulloch, W. S., and Pitts, W., 1943, "A Logical Calculus of the Ideas Immanent in Nervous Activity," Bull. Math. Biophys., 5(4), pp. 115–133. 10.1007/BF02478259
40. Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., et al., 2016, "TensorFlow: A System for Large-Scale Machine Learning," Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation (OSDI'16), USENIX Association, Savannah, GA, Nov. 2–4, pp. 265–283. https://www.usenix.org/system/files/conference/osdi16/osdi16-abadi.pdf
41. Chollet, F., et al., 2015, "Keras," accessed Jan. 3, 2023, https://github.com/fchollet/keras
42. Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., et al., 2011, "Scikit-Learn: Machine Learning in Python," J. Mach. Learn. Res., 12, pp. 2825–2830.
43. Oka, T., Abe, Y., Mori, Y. H., and Nagashima, A., 1995, "Pool Boiling of n-Pentane, CFC-113 and Water Under Reduced Gravity: Parabolic Flight Experiments With a Transparent Heater," ASME J. Heat Mass Transfer-Trans. ASME, 117(2), pp. 408–417. 10.1115/1.2822537
44. Merte, H., Jr., Lee, H., and Keller, R., 1996, "Report on Pool Boiling Experiment Flown on STS-47 (PBE-IA), STS-57 (PBE-IB), and STS-60 (PBE-IC)," Final Report, Michigan University, Ann Arbor, MI, Report No. N-96-27393; NASA-CR-198465; E-10154; NAS-1.26:198465; NIPS-96-35673.
45. Straub, J., 2001, "Boiling Heat Transfer and Bubble Dynamics in Microgravity," Adv. Heat Transfer, 35, pp. 57–172. 10.1016/S0065-2717(01)80020-4
46. Raj, R., Kim, J., and McQuillen, J., 2012, "Pool Boiling Heat Transfer on the International Space Station: Experimental Results and Model Verification," ASME J. Heat Mass Transfer-Trans. ASME, 134(10), p. 101504. 10.1115/1.4006846
47. Warrier, G. R., Dhir, V. K., and Chao, D. F., 2015, "Nucleate Pool Boiling eXperiment (NPBX) in Microgravity: International Space Station," Int. J. Heat Mass Transfer, 83, pp. 781–798. 10.1016/j.ijheatmasstransfer.2014.12.054
48. Pearson, K., 1920, "Notes on the History of Correlation," Biometrika, 13(1), pp. 25–45. 10.1093/biomet/13.1.25
49. Morgan, A., Bromley, L., and Wilke, C., 1949, "Effect of Surface Tension on Heat Transfer in Boiling," Ind. Eng. Chem., 41(12), pp. 2767–2769. 10.1021/ie50480a025
50. Baron de Fourier, J. B. J., 1822, Théorie Analytique de la Chaleur, Firmin Didot.
51. Zuber, N., 1961, "The Hydrodynamic Crisis in Pool Boiling of Saturated and Subcooled Liquids," Int. Developments in Heat Transfer, 27, pp. 230–236.
52. Raj, R., and Kim, J., 2010, "Heater Size and Gravity Based Pool Boiling Regime Map: Transition Criteria Between Buoyancy and Surface Tension Dominated Boiling," ASME J. Heat Mass Transfer-Trans. ASME, 132(9), p. 091503. 10.1115/1.4001635
