Abstract
Monte Carlo ray tracing (MCRT) is a prevalent and reliable computational method for simulating light-matter interactions in porous media. However, modeling these interactions is computationally expensive because of the complex structures involved and the large number of variables. Machine learning (ML) models have therefore been used to alleviate this computational burden. In this study, we investigate two distinct frameworks for characterizing the radiative properties of porous media: a pack-free and a pack-based method. We employ a different regression tool for each case, namely Gaussian process (GP) regression for pack-free MCRT and a convolutional neural network (CNN) model for pack-based MCRT, to predict the radiative properties. Our study highlights the importance of selecting a regression method appropriate to the physical model, which can yield significant gains in computational efficiency. Our results show that both models predict the radiative properties with high accuracy (>90%). Furthermore, we demonstrate that combining MCRT with ML inference not only enhances predictive accuracy but also reduces the computational cost of simulation by more than 96% with the GP model and by more than 99% with the CNN model.
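The following is a minimal illustrative sketch (not the authors' code) of the surrogate-modeling idea described above: a GP regressor is fit to radiative properties obtained from MCRT and then queried in place of further simulations. The feature names (porosity, particle diameter), the target (extinction coefficient), the kernel choice, and the placeholder training data are all assumptions made for illustration only.

```python
# Illustrative GP surrogate for MCRT-derived radiative properties.
# All data below are synthetic placeholders standing in for MCRT outputs.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Placeholder training set: X = [porosity, particle diameter (mm)],
# y = extinction coefficient (1/m); a toy analytic stand-in, not real results.
X_train = rng.uniform([0.3, 0.5], [0.9, 5.0], size=(50, 2))
y_train = 1.5 * (1.0 - X_train[:, 0]) / X_train[:, 1]

# GP surrogate with a constant * anisotropic RBF kernel (a common default).
kernel = ConstantKernel(1.0) * RBF(length_scale=[0.1, 1.0])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_train, y_train)

# Predict the radiative property (with uncertainty) for new structures,
# bypassing additional MCRT runs.
X_new = np.array([[0.6, 2.0], [0.75, 1.0]])
mean, std = gp.predict(X_new, return_std=True)
print(mean, std)
```

In this setup, the pack-based counterpart would replace the tabular features with voxel or image representations of the packed structure and the GP with a CNN, following the same train-on-MCRT, infer-with-ML workflow.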