Abstract

Melt pool modeling is critical for model-based uncertainty quantification (UQ) and quality control in metallic additive manufacturing (AM). Finite element (FE) simulation for thermal modeling in metal AM, however, is tedious and time-consuming. This paper presents a multifidelity point-cloud neural network method (MF-PointNN) for surrogate modeling of the melt pool based on FE simulation data. It merges the feature representations of low-fidelity (LF) analytical model data and high-fidelity (HF) FE simulation data through transfer learning (TL). A basic PointNN is first trained using LF data to construct a correlation between the inputs and the thermal field of the analytical model. Then, the basic PointNN is updated and fine-tuned using a small set of HF data to build the MF-PointNN. The trained MF-PointNN allows for efficient mapping from input variables and spatial positions to thermal histories, and thereby efficiently predicts the three-dimensional melt pool. Results of melt pool modeling of electron beam additive manufacturing (EBAM) of Ti-6Al-4V under uncertainty demonstrate the efficacy of the proposed approach.

1 Introduction

Metallic additive manufacturing (AM) is a disruptive manufacturing technology that builds three-dimensional (3D) metal components layer upon layer from computer-aided design (CAD) models [1–3]. Compared with traditional manufacturing processes, AM offers significant advantages in fabricating intricate and customized parts, including high efficiency and cost savings [4,5]. Nevertheless, inferior process consistency and component quality hinder the wide application of metallic AM techniques; these issues are chiefly caused by the propagation and aggregation of various uncertainty sources (e.g., microstructural heterogeneity, variation in powder properties, and fluctuation in the temperature boundary) in the AM process [6]. Therefore, it is necessary to develop effective uncertainty quantification (UQ) methods for quality control of the metallic AM process under uncertainty.

Uncertainty quantification is generally performed by constructing a correlation between the uncertainty sources and the quantities of interest (QoI), followed by process optimization [7]. Although UQ techniques have been widely applied to traditional manufacturing processes, their application to the metallic AM process is still at an early stage, and the available studies of UQ in AM are limited [8–12]. Currently reported UQ approaches in metallic AM can be categorized into experiment-based UQ and model-based UQ [13]. Experiment-based UQ relies heavily on repetitive experiments, which are not only tedious and time-consuming but also demand costly material consumption. Powered by advanced simulation techniques, model-based UQ uses a large volume of computational data to achieve quality control in a cheap yet effective manner. Among the various models (e.g., energy consumption models, melt pool models, solidification models, and so on) adopted in model-based UQ, the melt pool model is one of the decisive models for investigating the effects of the metallic AM process on the microstructure and mechanical properties of the as-fabricated AM parts [14]. The thermal phenomena in melt pools are changeable and complicated, involving convective, conductive, and radiative heat transfer interactions between the part, material, powder, and energy source. These thermal aspects of metallic AM, which govern the thermal field (i.e., temperature distribution) in the components, are in turn a function of the material properties, component design, and process parameters [15]. Hence, geometrical and thermal modeling of melt pools is essential for comprehensive UQ.

Recently, finite element (FE) models have been widely adopted for studying the thermal characteristics of the AM process at the part level [16–18]. Although FE-based models provide high-fidelity (HF) simulations of the AM process, their high computational cost remains a major disadvantage. To substitute FE-based models and reduce the computational cost, surrogate models, including the Kriging model, support vector machine (SVM), and neural networks (NN), have been successfully adopted in UQ [19]. Due to the high dimensionality of the thermal field, only a few examples of surrogates for thermal modeling in AM have been reported in the literature [20–23]. Nath et al. [20] developed a thermal surrogate model using singular value decomposition (SVD) and the Kriging model, which achieves accurate prediction of the original 3D thermal field. Wang et al. [21] further adapted SVD-Kriging to the surrogate modeling of the 3D steady temperature field and microstructure statistical moments under uncertainty in electron beam additive manufacturing (EBAM). Without considering uncertainty, Ren et al. [22] designed a two-dimensional (2D) thermal field model for laser aided additive manufacturing using recurrent neural networks (RNN) and deep neural networks. Zhu et al. [23] proposed a physics-informed neural network (PINN) framework for modeling 3D AM processes, which feeds physical knowledge into the PINN to improve prediction accuracy. Unfortunately, in the presence of uncertainties, since the Kriging model and NN-based models need abundant HF data to achieve desirable accuracy, these methods based solely on FE simulation data remain computationally prohibitive.

Instead, multifidelity (MF) metamodels [24,25] provide a promising solution to the above drawbacks, since they can be trained with low-fidelity (LF) data and HF data simultaneously. During the MF modeling process, a large number of LF samples is first employed to capture the overall trend of the response, and then a few HF samples are used to calibrate the LF model [26]. Popular MF approaches, including multilevel Monte Carlo (MLMC), co-Kriging, and scaling-function-based MF methods, are already well established in the engineering domain [26–28]. However, these methods may fail to deal with high-dimensional problems in which the LF and HF data exhibit stochastic independence and nonlinear correlation [29]. Lately, combining deep learning with the MF scheme has become a topic of intense interest. Liu and Wang [30] first introduced the MF concept into a physics-constrained neural network (MF-PCNN) to solve partial differential equations (PDEs) and perform materials modeling. The MF-PCNN blends two networks, an LF-PCNN and a discrepancy artificial neural network, to perform the final prediction. De et al. [31] presented a deep MF multilayer perceptron (MLP) for uncertainty propagation, which uses transfer learning (TL) and bi-fidelity weighted learning in one MLP to balance accuracy and computational cost. Meng and Karniadakis [32] developed an MF-PINN for inverse PDE problems, which can effectively construct the nonlinear and linear correlations between the LF and HF data. Inspired by Refs. [30] and [31], a TL-based MF-PINN for PDE problems was proposed, which is adapted to construct the mapping between known LF data and unknown HF data [33].

Based on the discussion above, existing NN-based surrogate methods (i.e., deep neural networks, PCNN, PINN) with the MF scheme still have some critical limitations for 3D AM thermal modeling: (1) the governing PDEs used in PINN or PCNN are mainly built on uncertainty-free assumptions and usually fail in the presence of uncertainties; (2) most LF data used in MF approaches are derived from FE-based numerical models with a coarser mesh than the HF data, so the required computational cost is still relatively large; and (3) the PINN adopted for 3D AM process modeling in Ref. [23] is constructed from several over-parameterized fully-connected neural networks (FNN, i.e., MLP), which are notoriously memory-intensive for large inputs. These methods may also perform poorly when the input data are unstructured points from irregular grids.

To overcome the above limitations, this paper proposes a multifidelity point-cloud neural network (MF-PointNN) for 3D thermal modeling in EBAM, which is based on one-dimensional convolutional neural networks (1D CNN) and FNN. Different from previous studies, a 3D AM thermal analytical model is adopted as an efficient LF data generator under uncertainty for the first time. In the MF-PointNN, the spatial coordinates of the mesh grid points or nodes from the HF and LF models are used as one of the inputs, while the corresponding temperatures are the outputs. The other input comprises the uncertainty parameters, including the design variables and the uncertainty source parameters. In this way, the MF-PointNN can learn an end-to-end mapping between uncertainties, spatial positions, and the temperature field to perform 3D thermal modeling. Meanwhile, the fine-tuning strategy of TL is employed for the MF problem, allowing for a significant reduction of computational cost. The results of a numerical example show that the proposed surrogate method achieves the highest accuracy compared with other surrogate methods of similar computational cost.

The rest of the paper is structured as follows: The background of thermal models for EBAM is introduced in Sec. 2. Following that, Sec. 3 briefly reviews current Kriging-based melt pool surrogate modeling methods. Next, Sec. 4 presents the proposed method. The experimental results are presented and discussed in detail in Sec. 5. Finally, the conclusions are given in Sec. 6 along with future works.

2 Thermal Models for Electron Beam Additive Manufacturing

This section briefly reviews the thermal models, including the analytical model (i.e., Rosenthal's model) and numerical model (i.e., finite element-based model), for modeling and simulation of the temperature field during the EBAM process.

2.1 Rosenthal's Model.

Rosenthal's model was first developed for analytical thermal modeling in fusion welding [34]. Rosenthal [35] later adopted the equation to predict the quasi-steady-state temperature distribution of a moving heat source. Benefiting from the many similarities between welding processes and AM, the Rosenthal equation and its variants have been widely extended to AM [35]. The Rosenthal equation can be expressed as follows:
T_r(x, y, z) = T_{pre} + \frac{\eta P}{2 \pi k R} \exp\left[ -\frac{v (R + x)}{2 \alpha} \right], \qquad \alpha = \frac{k}{\rho C_p} \qquad (1)

where Tr(x,y,z) is the temperature at coordinate (x,y,z), Tpre is the preheating temperature (unit: K), η is the absorption efficiency of the beam power, P is the power of the electron beam (unit: W), k is the thermal conductivity (unit: W·m−1·K−1), R is the distance from the center of the electron beam (unit: m), v is the beam velocity (unit: m·s−1), α = k/(ρCp) is the thermal diffusivity (unit: m2·s−1), ρ is the density of the powder material (unit: kg·m−3), and Cp is the specific heat capacity (unit: J·kg−1·K−1).

To use the Rosenthal equation in EBAM, three assumptions are made: (1) the thermal properties are independent of temperature; (2) the temperature predicted at the origin of the beam is infinite (a known limitation of the point-source solution); and (3) the maximum temperature does not exceed the liquidus temperature [36]. In this study, Rosenthal's model is employed to obtain the LF data [37,38].
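For illustration, the short Python sketch below evaluates Eq. (1) as an LF data generator; the function name, the example grid, and the nominal parameter values (taken loosely from Table 1) are illustrative assumptions rather than the exact implementation used in this work.

```python
import numpy as np

def rosenthal_temperature(x, y, z, T_pre, P, v, eta, k, rho, cp):
    """Quasi-steady Rosenthal solution for a moving point source, Eq. (1).

    Coordinates are expressed in the frame moving with the beam (x along the
    scanning direction); all quantities are in SI units.
    """
    alpha = k / (rho * cp)              # thermal diffusivity, m^2/s
    R = np.sqrt(x**2 + y**2 + z**2)     # distance from the beam center, m
    R = np.maximum(R, 1e-6)             # avoid the singularity at the source
    return T_pre + eta * P / (2.0 * np.pi * k * R) * np.exp(-v * (R + x) / (2.0 * alpha))

# Example: LF thermal field on a small grid of nodes around the beam
xs, ys, zs = np.meshgrid(np.linspace(-1e-3, 1e-3, 13),
                         np.linspace(-0.75e-3, 0.75e-3, 14),
                         np.linspace(-0.5e-3, 0.0, 5), indexing="ij")
T_lf = rosenthal_temperature(xs, ys, zs, T_pre=963.0, P=540.0, v=0.4,
                             eta=0.75, k=4.97, rho=4220.0, cp=531.11)
```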

2.2 Finite Element-Based Model.

In this study, an FE-based heat transfer model incorporating a moving heat source is utilized to predict the in-process temperature distribution [21], the results of which are then extracted as HF data. Specifically, the spatiotemporal evolution of the temperature field with external heat input is governed by
\rho c_p \frac{\partial T(x, y, z, t)}{\partial t} = \nabla \cdot \big( k \nabla T(x, y, z, t) \big) + Q_e \qquad (2)
where T(x,y,z,t) is the spatiotemporal temperature field, k is the thermal conductivity, ρ is the density, and cp is the specific heat. Of special importance is the external heat input from the moving electron beam, Qe, which is described by the following Gaussian distribution [39]
(3)

where η is the power absorption efficiency of the powder bed, P is the nominal power of the electron beam, d is the diameter of the electron beam, v is the beam velocity, and ze is the absolute penetration depth of the electron beam associated with the acceleration voltage Ve.

The boundary conditions of the FE-based thermal model, as illustrated in Fig. 1, are summarized as follows: (1) the initial condition of the substrate and deposits is T0 = Tpre, where Tpre is the preheating temperature; (2) heat is transferred at the surface through convection and radiation; (3) in view of the limited thermal conductivity of loose powder, adiabatic conditions are imposed on the sides of the printed part, so the surrounding raw powder is physically ignored in the simulation; and (4) evaporation is not incorporated in the FE-based thermal model, as is normally done in AM thermal models. The above FE-based thermal model is realized in ABAQUS 6.10 using a custom DFLUX user subroutine [40]. The extraction of the temperature field (upon reaching steady state) as HF data is automated by a Python script.

Fig. 1. Illustration of thermal model of electron beam

3 Current Surrogate Modeling Methods of Melt Pool Based on Kriging

In this section, we briefly review two existing Kriging-based methods for melt pool surrogate modeling. These two methods will be compared with the proposed method in the results section.

3.1 Melt Pool Surrogate Modeling Using Kriging and Singular Value Decomposition.

The basic principles of Kriging and SVD for melt pool surrogate modeling are provided in this section. More details are available in Refs. [20,21]. Figure 2 presents the overall flowchart of the thermal surrogate modeling using the Kriging and SVD methods.

Fig. 2. Flowchart of melt pool surrogate modeling using Kriging and SVD
For given controllable variables d (e.g., preheating temperature, power of the electron beam, and beam velocity) and uncontrollable uncertain parameters θ (e.g., power absorption efficiency of the powder bed, thermal conductivity, specific heat capacity, and density of the powder material), the steady temperature field can be expressed as T(d, θ, s), where s denotes the (x, y, z) coordinates of all nodes. Temperature fields under different conditions, T(d(i), θ(i), s), i = 1, 2, …, Nsample, are then generated through the HF FE-based model, where (d(i), θ(i)), i = 1, 2, …, Nsample, are the Nsample training points obtained with the sampling approach. For dimensionality reduction, SVD is first employed to approximate the raw thermal field data matrix. The data matrix T(d, θ, s) is decomposed using SVD as T(d, θ, s) = U M V^T, where U is an Nsample × Nsample orthogonal matrix, V is an np × np orthogonal matrix, np is the number of data points, and M is an Nsample × np rectangular diagonal matrix of singular values. Defining γ = UM, the raw thermal field data matrix can be expressed as follows:
T\big(d^{(i)}, \theta^{(i)}, s\big) \approx \sum_{j=1}^{m} \gamma_j^{(i)} \tau_j(s) \qquad (4)

where γj(i), j = 1, 2, …, m (the elements of the i-th row and j-th column of γ) are the responses in the latent space, m is the number of significant features retained in the SVD, and τj(s), j = 1, 2, …, m (the j-th rows of VT) are the significant features.

After that, the Kriging models [41] are built based on γj(i), j = 1, 2, …, m, and are expressed as:
\gamma_j(d, \theta) \approx \hat{g}_j(d, \theta), \quad j = 1, 2, \ldots, m \qquad (5)

where ĝ(·) is the Kriging model.

In this way, the SVD-based Kriging model is built. For a given test point (dt, θt), the predicted thermal field response of the SVD-based Kriging model is given by:
\hat{T}(d_t, \theta_t, s) = \sum_{j=1}^{m} \mu_{\hat{g}_j}(d_t, \theta_t)\, \tau_j(s) \qquad (6)

where μĝj(dt,θt) is the mean prediction of the j-th latent space Kriging model, and T̂(dt,θt,s) is the predicted thermal field response on test point (dt,θt).
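As a concrete illustration of this workflow, the sketch below computes the latent responses with NumPy's SVD and fits one Gaussian-process regressor per latent dimension; scikit-learn's GaussianProcessRegressor is used here only as a stand-in for the Kriging implementation of Refs. [20,21], and the function names are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def fit_svd_kriging(X_train, T_train, m=10):
    """Build the SVD-based surrogate of Sec. 3.1 (sketch).

    X_train: (N_sample, n_inputs) array of (d, theta) samples.
    T_train: (N_sample, n_p) array of nodal temperatures.
    Returns one GP per latent response and the m retained features tau.
    """
    U, S, Vt = np.linalg.svd(T_train, full_matrices=False)
    gamma = U[:, :m] * S[:m]          # latent responses, Eq. (4), shape (N_sample, m)
    tau = Vt[:m, :]                   # significant features, shape (m, n_p)
    kernel = ConstantKernel() * RBF(length_scale=np.ones(X_train.shape[1]))
    gps = [GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, gamma[:, j])
           for j in range(m)]
    return gps, tau

def predict_svd_kriging(gps, tau, X_test):
    """Reconstruct the thermal field at test inputs, Eq. (6)."""
    gamma_hat = np.column_stack([gp.predict(X_test) for gp in gps])
    return gamma_hat @ tau            # (N_test, n_p) predicted temperatures
```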

3.2 Multifidelity Surrogate Modeling Using Kriging and Singular Value Decomposition.

A standard MF SVD-Kriging model using an additive scaling function [26,42] is illustrated in this section. The scaled LF SVD-Kriging model is first built based on the inputs and responses of the LF data. Then, the discrepancies between the HF model and the scaled LF SVD-based Kriging model are calculated using the inputs and responses of a few HF samples, and the scaling function is constructed from these discrepancies. Finally, the MF SVD-Kriging model, which combines the scaled LF SVD-Kriging model and the scaling function, is used to predict the thermal field. The details of the standard MF SVD-Kriging for melt pool surrogate modeling can be summarized as follows.

Firstly, LF training points (d(li), θ(li)), li = 1, 2, …, NLFtrain, and HF training points (d(hi), θ(hi)), hi = 1, 2, …, NHFtrain, are collected, where NHFtrain and NLFtrain are the numbers of training points in the HF dataset and LF dataset, respectively. Then, the HF thermal responses THF(d(hi), θ(hi), s), hi = 1, 2, …, NHFtrain, and the LF thermal responses TLF(d(li), θ(li), s), li = 1, 2, …, NLFtrain, are generated by the HF and LF models, respectively.

Secondly, the thermal field data matrices of the LF and HF training data are approximated using SVD as follows:
T_{HF}\big(d^{(hi)}, \theta^{(hi)}, s\big) \approx \sum_{j=1}^{m} \gamma_{HF,j}^{(hi)} \tau_{HF,j}(s) \qquad (7)
T_{LF}\big(d^{(li)}, \theta^{(li)}, s\big) \approx \sum_{j=1}^{m} \gamma_{LF,j}^{(li)} \tau_{LF,j}(s) \qquad (8)

where γHF,j(hi), j = 1, 2, …, m, and γLF,j(li), j = 1, 2, …, m, are the responses in the latent spaces of the HF data and LF data, respectively; m is the number of significant features retained in the SVD; and τHF,j(s), j = 1, 2, …, m, and τLF,j(s), j = 1, 2, …, m, are the significant features of the HF data and LF data, respectively.

Thirdly, the LF SVD-Kriging model is built using γLF,j(li), j = 1, 2, …, m, as expressed by
\gamma_{LF,j}(d, \theta) \approx \hat{g}_{LF,j}(d, \theta), \quad j = 1, 2, \ldots, m \qquad (9)
\hat{T}_{LF}(d_t, \theta_t, s) = \sum_{j=1}^{m} \mu_{\hat{g}_{LF,j}}(d_t, \theta_t)\, \tau_{LF,j}(s) \qquad (10)

where ĝLF(·) is the Kriging model of the LF data, μĝLF,j is the mean prediction of the j-th latent space LF SVD-Kriging model, and T̂LF(dt,θt,s) is the predicted thermal field response of the LF SVD-Kriging model at the test point (dt,θt).

Based on that, the discrepancies between the HF model and the scaled LF SVD-based Kriging model are calculated as
T_D\big(d^{(hi)}, \theta^{(hi)}, s\big) = T_{HF}\big(d^{(hi)}, \theta^{(hi)}, s\big) - \omega\, \hat{T}_{LF}\big(d^{(hi)}, \theta^{(hi)}, s\big) \approx \sum_{j=1}^{m} \gamma_{D,j}^{(hi)} \tau_{D,j}(s) \qquad (11)

where TD(d(hi),θ(hi),s) is the discrepancy between the HF model and the LF model at the HF training point hi, ω is the LF scale factor, γD,j(hi), j = 1, 2, …, m, are the responses in the latent space of the discrepancies, and τD,j(s), j = 1, 2, …, m, are the significant features of the discrepancies. For the standard MF metamodel, the LF scale factor is set to 1.

Then, the scaling function is constructed using γD,j(hi), j = 1, 2, …, m, which is given as
\gamma_{D,j}(d, \theta) \approx \hat{g}_{D,j}(d, \theta), \quad j = 1, 2, \ldots, m \qquad (12)
\hat{T}_D(d_t, \theta_t, s) = \sum_{j=1}^{m} \mu_{\hat{g}_{D,j}}(d_t, \theta_t)\, \tau_{D,j}(s) \qquad (13)

where ĝD(·) is the Kriging model of the discrepancies, μĝD,j is the mean prediction of the j-th latent space of the scaling function, and T̂D(dt,θt,s) is the predicted thermal field response of the scaling function at the test point (dt,θt).

Finally, the standard MF SVD-Kriging model, combining the scaled LF SVD-Kriging model and the scaling function, is constructed as follows:
\hat{T}_{MF}(d_t, \theta_t, s) = \omega\, \hat{T}_{LF}(d_t, \theta_t, s) + \hat{T}_D(d_t, \theta_t, s) \qquad (14)

where T̂MF(dt,θt,s) is the predicted thermal field response of the standard MF SVD-Kriging model on test point (dt,θt).
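Building on the previous sketch, the snippet below assembles the additive MF correction of Eqs. (11)–(14); it reuses the illustrative fit_svd_kriging and predict_svd_kriging helpers and is again only a sketch of the procedure, not the original MATLAB implementation.

```python
def fit_mf_svd_kriging(X_lf, T_lf, X_hf, T_hf, m=10, omega=1.0):
    """Standard MF SVD-Kriging with an additive scaling function (sketch).

    Requires fit_svd_kriging / predict_svd_kriging from the previous sketch;
    omega is the LF scale factor (1 for the standard MF metamodel).
    """
    gps_lf, tau_lf = fit_svd_kriging(X_lf, T_lf, m)          # LF model, Eqs. (8)-(10)
    # Discrepancies between HF data and the scaled LF prediction at HF points, Eq. (11)
    T_disc = T_hf - omega * predict_svd_kriging(gps_lf, tau_lf, X_hf)
    gps_d, tau_d = fit_svd_kriging(X_hf, T_disc, m)          # scaling function, Eqs. (12)-(13)
    return gps_lf, tau_lf, gps_d, tau_d, omega

def predict_mf_svd_kriging(model, X_test):
    """MF prediction, Eq. (14): scaled LF prediction plus the correction."""
    gps_lf, tau_lf, gps_d, tau_d, omega = model
    return (omega * predict_svd_kriging(gps_lf, tau_lf, X_test)
            + predict_svd_kriging(gps_d, tau_d, X_test))
```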

4 Proposed Method

This section describes the proposed method for constructing multifidelity surrogates for predicting the thermal field in AM. The dataset generation under uncertainty is discussed in Sec. 4.1; Secs. 4.2 and 4.3 present the structure of the PointNN and the details of the proposed MF-PointNN, respectively.

4.1 Dataset Generation Under Uncertainty.

In this study, the controllable variables d and the uncontrollable uncertain parameters θ of the melt pool are similar to those in Ref. [21]. The controllable variables d include the preheating temperature Tpre, the power of the electron beam P, and the beam velocity v. The uncontrollable uncertain parameters θ include the absorption efficiency of the beam power η, the thermal conductivity k, the specific heat capacity Cp, and the density of the Ti-6Al-4V powder ρ. Table 1 shows the distributions of the controllable variables and uncontrollable uncertain parameters.

Table 1. Distribution of controllable variables and uncontrollable uncertain parameters

| Category | Variable/parameter | Unit | Range | Mean | Standard deviation | Distribution type |
|---|---|---|---|---|---|---|
| Controllable variables | Tpre | K | 923–1003 | / | / | Uniform |
| Controllable variables | P | W | 360–720 | / | / | Uniform |
| Controllable variables | v | m·s−1 | 0.188–0.608 | / | / | Uniform |
| Uncontrollable uncertain parameters | η | / | 0.6–0.9 | / | / | Uniform |
| Uncontrollable uncertain parameters | k | W·m−1·K−1 | / | 4.97 | 0.20 | Gaussian |
| Uncontrollable uncertain parameters | Cp | J·kg−1·K−1 | / | 531.11 | 10 | Gaussian |
| Uncontrollable uncertain parameters | ρ | kg·m−3 | / | 4220 | 40 | Gaussian |

For given d and θ (i.e., inputs), the steady thermal field (i.e., output or response) is defined as T(d, θ, s), where s ∈ Ωxyz represents the spatial coordinates of all nodes. In this research, the Latin hypercube sampling approach [43] is used to generate 280 sample points for d and θ. After that, HF thermal simulations using the FE-based model (i.e., Sec. 2.2) are performed for all sample points to obtain the steady thermal fields, denoted as THF(d(hi), θ(hi), s), hi = 1, 2, …, 280. Similarly, the LF dataset TLF(d(li), θ(li), s), li = 1, 2, …, 280, is generated using Eq. (1). Generating the entire LF dataset requires only a few seconds (on an Intel Core i7-9700K), which demonstrates the high efficiency of the analytical model. Notably, the EBAM of a cuboid-shaped test part is simulated on top of a build plate, with dimensions of 1.5 × 1.5 × 12 mm and layers along the z-axis. The whole part is discretized into a set of 18,746 (13 × 14 × 103) nodes, and the spatial coordinates of these nodes are saved as s ∈ Ωxyz. Figures 3 and 4 show comparisons of the steady thermal field predictions from the LF and HF models for two samples. Following that, Fig. 5 presents the absolute differences between the HF and LF predictions for the two samples depicted in Figs. 3 and 4.

Fig. 3. Comparison of the steady thermal fields from LF and HF datasets (sample 1)
Fig. 4. Comparison of the steady thermal fields from LF and HF datasets (sample 2)
Fig. 5. Absolute differences between HF and LF predictions for the above two samples

For the present problem, the thermal field around the melt pool and in the active region of the electron beam is critical, while the far-field data are relatively unimportant. In this study, a set of 4,096 discrete nodes around the QoI is randomly sampled according to the temperature gradient. For the sampled data, the grid points lie in the ranges x ∈ [0, 0.62 mm], y ∈ [0.983 mm, 1.5 mm], and z ∈ [0.824 mm, 10.2 mm]. Therefore, the LF dataset (TLF(d(li), θ(li), sL(li)), li = 1, 2, …, 280, sL(li) ⊂ s) and the HF dataset (THF(d(hi), θ(hi), sH(hi)), hi = 1, 2, …, 280, sH(hi) ⊂ s) are successfully built. Figures 6 and 7 show comparisons of the critical steady thermal fields from the HF and LF datasets for two samples. Figure 8 depicts the absolute differences of the critical thermal fields between the HF and LF predictions for the two samples.
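For reference, the snippet below sketches how such a design of experiments can be generated with Latin hypercube sampling [43]; mapping the unit hypercube to the distributions of Table 1 through uniform scaling and the Gaussian inverse CDF is one common choice and is an assumption here, not a description of the original implementation.

```python
import numpy as np
from scipy.stats import qmc, norm

N_SAMPLE = 280
sampler = qmc.LatinHypercube(d=7, seed=0)
u = sampler.random(N_SAMPLE)                       # uniform samples in [0, 1]^7

# Uniform variables of Table 1: T_pre [K], P [W], v [m/s], eta [-]
lo = np.array([923.0, 360.0, 0.188, 0.6])
hi = np.array([1003.0, 720.0, 0.608, 0.9])
inputs_uniform = lo + u[:, :4] * (hi - lo)

# Gaussian parameters of Table 1: k, Cp, rho, mapped through the inverse CDF
means = np.array([4.97, 531.11, 4220.0])
stds = np.array([0.20, 10.0, 40.0])
inputs_gaussian = norm.ppf(u[:, 4:], loc=means, scale=stds)

design = np.hstack([inputs_uniform, inputs_gaussian])  # (280, 7) sample points (d, theta)
```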

Fig. 6. Comparison of critical thermal fields from LF and HF datasets (sample 1)
Fig. 7. Comparison of critical thermal fields from LF and HF datasets (sample 2)
Fig. 8. Absolute differences between HF and LF predictions for the critical thermal fields

4.2 Point-Cloud Neural Network.

As illustrated in Fig. 9, a PointNN is developed to predict the 3D thermal field under uncertainty. The inputs of the model are the unstructured grid points (Npoints × 3), which encode the geometry information, and the uncertainty parameters (1 × Npara). The output of the model is the computed thermal field (Npoints × 4), in which each unstructured grid point is concatenated with its corresponding temperature.

Fig. 9. Proposed PointNN structure for predicting the 3D thermal field

Indeed, the PointNN is a modified architecture that uses multiple transformation networks (T-Net) and shared MLPs, which are derived from PointNet [44]. PointNet provides an end-to-end solution for classification and scene segmentation of point clouds and has advantages in efficiency and ease of implementation compared with 3D CNNs or other multiview-based methods [45]. Recently, Kashefi et al. [46] first introduced PointNet into surrogate modeling for computational fluid dynamics (CFD), achieving efficient prediction of 2D fluid flow fields on irregular geometries. To the best of our knowledge, PointNet-based methods have not previously been used in the AM domain.

The PointNN has an encoder-decoder structure. The encoder branch consists of two T-Nets, five shared MLPs, and a max-pooling layer. The shared MLPs are applied interchangeably with the T-Nets and are employed for local feature aggregation with transformation-invariant characteristics. After the max-pooling layer, the global features of the geometry at the QoI are obtained. To merge local and global knowledge, the per-point local features (Npoints × 64) after the second T-Net are concatenated with the global features, which generates the combined point features (Npoints × 1088). In addition, to obtain new point features (Npoints × (1088 + Npara)) under uncertainty, the parameters (1 × Npara) are concatenated with the combined features of each point. In the decoder branch, the features (Npoints × (1088 + Npara)) are fed into three shared MLPs with output sizes (512, 256, 128). Then, the extracted features are fed into one Conv1D layer and concatenated with the input points (Npoints × 3) to produce the predicted thermal field. The Sigmoid activation function is used in the final Conv1D layer, which limits the outputs to the range [0, 1].

The details of the shared MLP and T-Net are shown in Fig. 10. The shared MLP contains only one one-dimensional convolution (Conv1D) layer with batch normalization [47] and a rectified linear unit (ReLU) [48] activation function; the Conv1D layer has dout filters with a kernel size of din × 1 and a stride of 1 × 1, where din and dout are the dimensionalities of the input and output features, respectively. For up-sampling from input features (Npoints × din) to output features (Npoints × dout) with dout > din, the number of trainable parameters of the Conv1D layer is din × dout, while that of a fully connected layer acting on the flattened features would be (din × Npoints) × (dout × Npoints). Hence, the shared MLP shares its parameters across all points, which has the advantage of reducing the computational cost.

Fig. 10. Details of the shared MLP and T-Net

Furthermore, the T-Net is composed of three shared MLP modules, one max-pooling layer, and three FNNs, and serves as a mini PointNet that predicts a transformation matrix. The transformation matrix is then applied to the input features for joint alignment. In this way, the representation learned from the point cloud is invariant to geometric transformations, which is beneficial for predicting the 3D thermal field of the melt pool.
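As an illustration of the shared MLP block described above, the following PyTorch sketch applies the same Conv1D + batch normalization + ReLU mapping to every point; the class and tensor shapes are illustrative. The T-Net can be assembled from the same building block together with a max-pooling layer and three fully connected layers.

```python
import torch
import torch.nn as nn

class SharedMLP(nn.Module):
    """Shared MLP block of the PointNN (sketch): Conv1D + BatchNorm + ReLU.

    Acting on a tensor of shape (batch, d_in, N_points), the kernel-size-1
    convolution applies the same d_in -> d_out mapping to every point, so the
    number of weights is d_in * d_out regardless of N_points.
    """
    def __init__(self, d_in, d_out):
        super().__init__()
        self.conv = nn.Conv1d(d_in, d_out, kernel_size=1, stride=1)
        self.bn = nn.BatchNorm1d(d_out)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

# Example: lift 4096 points from 3 coordinates to 64 local features
points = torch.rand(8, 3, 4096)          # (batch, d_in, N_points)
features = SharedMLP(3, 64)(points)      # (8, 64, 4096)
```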

4.3 Multifidelity-Point-Cloud Neural Network With Transfer Learning.

After the dataset is built, the problem of approximating the 3D thermal field is solved by constructing surrogates using training data from both the LF dataset and the HF dataset. Based on the concept of TL [49], an LF PointNN is first optimized and built from the inputs and responses of the LF training samples. Then, the parameters of the LF PointNN are partly transferred to a new PointNN. After that, the new PointNN is updated and fine-tuned using the smaller HF dataset to obtain the optimal MF-PointNN. During the training of the new PointNN, the parameters transferred from the LF PointNN are frozen. Thus, the features learned from the LF model are preserved in the new PointNN, which promotes convergence from the start of training and reduces the computational cost. Finally, the optimized MF-PointNN is used to predict the 3D thermal field. The general procedure of the proposed method can be summarized as follows:

  • Step 1. The MF dataset is built under uncertainty, which includes the LF dataset and HF dataset (i.e., Sec. 4.1).

  • Step 2. Randomly split the MF dataset into two subsets of training and testing.

    THF(d(hi), θ(hi), sH(hi)), hi = 1, 2, …, NHFtrain, and TLF(d(li), θ(li), sL(li)), li = 1, 2, …, NLFtrain, are first selected, where NHFtrain and NLFtrain are the sample sizes of the training subsets of the HF dataset and LF dataset, respectively. Then, THF(d(hi), θ(hi), sH(hi)), hi = 1, 2, …, Ntest, is selected, where Ntest is the sample size of the test subset.

  • Step 3. Optimize the LF PointNN based on TLF(d(li), θ(li), sL(li)), li = 1, 2, …, NLFtrain.

    The optimization problem can be formulated as Eq. (15), which can be solved by a gradient descent-based algorithm.
    \{\omega_L, b_L\} = \arg\min_{\omega, b} J_L = \arg\min_{\omega, b} \frac{1}{N_{LF}^{train}} \sum_{li=1}^{N_{LF}^{train}} L_{loss}\Big( T_{LF}\big(d^{(li)}, \theta^{(li)}, s_L^{(li)}\big), \hat{T}_{LF}\big(d^{(li)}, \theta^{(li)}, s_L^{(li)}\big) \Big) \qquad (15)
    where {ωL, bL} are the optimal parameters of the PointNN, Lloss(·) is the loss function, JL is the total loss to be minimized, and T̂LF(d(li), θ(li), sL(li)) is the predicted thermal field response of the LF PointNN model at the training sample li.
    The LF PointNN with the optimal parameters {ωL, bL} is then constructed as follows:
    \hat{T}_{LF}(d, \theta, s) = \hat{M}_{LF}\big( (d, \theta, s); \{\omega_L, b_L\} \big) \qquad (16)
    where MLF̂(·) is the optimal LF PointNN and T̂LF(d, θ, s) is the predicted thermal field response of the LF PointNN model.
  • Step 4. A new PointNN MNeŵ((d, θ, s); {ωLpart + ωnew, bLpart + bnew}) is constructed, where MNeŵ(·) is the new PointNN, {ωLpart, bLpart} are the parameters transferred from the LF PointNN, and {ωnew, bnew} are the parameters to be optimized.

    Based on the above, the new PointNN is updated and fine-tuned using the smaller HF dataset to obtain the optimal MF-PointNN (a minimal PyTorch sketch of this transfer, freeze, and fine-tune step is given after this list), which can be expressed as
    \{\omega_M, b_M\} = \arg\min_{\omega_{new}, b_{new}} J_M = \arg\min_{\omega_{new}, b_{new}} \frac{1}{N_{HF}^{train}} \sum_{hi=1}^{N_{HF}^{train}} L_{loss}\Big( T_{HF}\big(d^{(hi)}, \theta^{(hi)}, s_H^{(hi)}\big), \hat{T}_{MF}\big(d^{(hi)}, \theta^{(hi)}, s_H^{(hi)}\big) \Big) \qquad (17)
    Then, the MF-PointNN with the optimal parameters {ωM, bM} is constructed as follows:
    \hat{T}_{MF}(d, \theta, s) = \hat{M}_{MF}\big( (d, \theta, s); \{\omega_M, b_M\} \big) \qquad (18)
    where MMF̂(·) is the optimal MF-PointNN and T̂MF(d, θ, s) is the predicted thermal field response of the MF-PointNN model.
  • Step 5. At the testing stage, the testing samples THF(d(hi), θ(hi), sH(hi)), hi = 1, 2, …, Ntest, are fed into the MF-PointNN to obtain the final predicted 3D thermal field.
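The sketch below shows one way to realize the transfer-and-freeze operation of Step 4 in PyTorch; the parameter-name prefix used to select the transferred layers, and the use of Adam in place of the RAdam/LookAhead combination of Sec. 5.1, are assumptions made for illustration.

```python
import copy
import torch

def build_mf_pointnn(lf_model, frozen_prefixes=("encoder",), lr=3e-2):
    """Transfer the trained LF PointNN, freeze part of it, and prepare fine-tuning.

    lf_model: trained LF PointNN (torch.nn.Module).
    frozen_prefixes: assumed naming convention for the transferred layers
    whose weights stay fixed during fine-tuning on the HF data.
    """
    mf_model = copy.deepcopy(lf_model)              # new PointNN starts from LF weights
    for name, param in mf_model.named_parameters():
        # Transferred parameters are frozen so the LF features are preserved
        param.requires_grad = not name.startswith(frozen_prefixes)
    trainable = [p for p in mf_model.parameters() if p.requires_grad]
    optimizer = torch.optim.Adam(trainable, lr=lr)  # stand-in for RAdam + LookAhead
    return mf_model, optimizer
```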

Figure 11 summarizes the overall procedure for the training of MF-PointNN with transfer learning.

Fig. 11. The overall procedure for the training of MF-PointNN with TL

5 Results and Discussion

In this section, the steady temperature field developed during electron beam scanning of a long track (i.e., Sec. 4.1) is used as a case study, to validate the efficiency and effectiveness of the proposed method. All experiments were performed on Windows 10 with 32GB RAM, Intel Core i7-9700K processor, and an NVIDIA GeForce RTX 2080 GPU. The proposed method is implemented in Python 3.8.5 with PyTorch 1.7.1.

5.1 Training Details.

The main goal of the proposed method is to predict the temperature at each (x,y,z) coordinate. Hence, the mean square error is defined as the loss function Lmse to be optimized, which can be formulated as:
L_{mse} = \frac{1}{N_s N_p} \sum_{j=1}^{N_s} \sum_{i=1}^{N_p} \big( T_{j,i} - \hat{T}_{j,i} \big)^2 \qquad (19)

where Ns is the number of training samples in a mini-batch, Np is the number of (x,y,z) coordinates, and Tj,i and T̂j,i are, respectively, the true and predicted thermal histories.

During the training process, the Rectified Adam (RAdam) optimizer [50] combined with the LookAhead scheme [51] is used to minimize the loss function Lmse; this combination has better learning stability than the Adam optimizer [52]. The default parameters of RAdam and LookAhead are selected, with an initial learning rate (LR) of 3×10−2 and a batch size of 32. To improve the performance of the PointNN, an LR scheduler with cosine annealing [53] is utilized to decay the LR for each batch (shown in Fig. 12), as follows:
\eta_t = \eta_{min} + \frac{1}{2} \left( \eta_{initial} - \eta_{min} \right) \left( 1 + \cos\left( \frac{T_{cur}}{T_{max}} \pi \right) \right) \qquad (20)

where ηt is the current LR, ηinitial is the initial LR, ηmin is the minimum LR, Tmax is the number of epochs in each restart, and Tcur is the number of current epochs. In this research, Tmax and ηmin are fixed as 100 and 3×10−6, respectively.
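For reference, the following PyTorch sketch sets up the MSE loss and a cosine-annealing schedule with warm restarts corresponding to Eqs. (19) and (20); Adam is used as a stand-in for RAdam [50] wrapped by LookAhead [51] (both available from third-party packages), and the placeholder model and data are illustrative only.

```python
import torch

model = torch.nn.Linear(7, 1)                         # placeholder for the PointNN
criterion = torch.nn.MSELoss()                        # L_mse of Eq. (19)
optimizer = torch.optim.Adam(model.parameters(), lr=3e-2)
# Cosine annealing with warm restarts every T_0 = 100 epochs, Eq. (20)
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(
    optimizer, T_0=100, eta_min=3e-6)

x, y = torch.rand(32, 7), torch.rand(32, 1)           # dummy mini-batch
for epoch in range(5000):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step()                                  # decay the learning rate
```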

Fig. 12. LR scheduler with cosine annealing for each batch
In addition, the temperature of each point is normalized and scaled to the range [0, 1], which is given by
(21)

where Tnormal denotes the normalized temperature histories and T the original temperature histories.

In this case, the sample size of the test subset Ntest is set to 28. Moreover, the mini-batch sizes for training and testing are fixed at 32 samples and 1 sample, respectively. The total number of training epochs is set to 5000 in our experiments. To avoid randomness in the testing process, all experiments are conducted 10 times and the average values are reported as the final results for analysis.

5.1.1 Evaluation Metrics.

As mentioned in Sec. 5.1, the proposed method aims to predict the thermal field in the component. Therefore, the max absolute error (MAE) and root mean square error (RMSE) are defined as effectiveness metrics, which are given by
(22)
(23)

where Ntest is the number of testing samples, and Ti and T̂i are, respectively, the true thermal history and predicted thermal history of test sample i.

The MAE reflects the local accuracy of the model, and the RMSE evaluates the global accuracy. The lower the values of RMSE and MAE are, the more accurate the metamodel is. In addition, the efficiency is measured by computational budget and computation (i.e., testing) time. The computational budget is defined as the number of HF samples used during the training process [27]. Computation time is defined as the average CPU time per sample during the test process.
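The snippet below gives one plausible realization of these metrics for thermal fields stored as (N_test, N_points) arrays; averaging the per-sample maximum and root-mean-square errors over the test set is an assumption about Eqs. (22) and (23), not a verbatim transcription.

```python
import numpy as np

def max_absolute_error(T_true, T_pred):
    """Per-sample maximum absolute error, averaged over the test set (assumed form)."""
    return np.mean(np.max(np.abs(T_true - T_pred), axis=1))

def root_mean_square_error(T_true, T_pred):
    """Per-sample RMSE, averaged over the test set (assumed form)."""
    return np.mean(np.sqrt(np.mean((T_true - T_pred) ** 2, axis=1)))

# Example with dummy data: T_true, T_pred have shape (N_test, N_points)
T_true, T_pred = np.random.rand(28, 4096), np.random.rand(28, 4096)
print(max_absolute_error(T_true, T_pred), root_mean_square_error(T_true, T_pred))
```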

5.2 Comparison of Different Methods.

In this section, the proposed method is compared with FE analysis, SVD-Kriging [20] with HF data (i.e., Sec. 3.1), and the standard MF SVD-Kriging (i.e., LF scale factor fixed at 1 [26], reviewed in Sec. 3.2). These methods are implemented in MATLAB R2020b [54] with the Statistics & Machine Learning Toolbox. In this case, 252 LF and HF samples are randomly chosen to train the surrogates, and the number of significant features used in SVD-Kriging is set to 10. Notably, random sampling without replacement is used, so the testing data are unseen in the training data.

As discussed in Sec. 4.1, the temperatures in the data are randomly sampled according to the gradient and are therefore randomly scattered and unsorted. The Kriging-based methods may fail when constructed with this kind of data. For a fair comparison, sorted data are also used in the experiment. The comparison results of the different methods for 3D thermal modeling of the test part are listed in Table 2. As shown in Table 2, although the SVD-Kriging-based methods achieve superior performance with sorted data, they fail to predict the thermal field with unsorted data. However, in practical applications the FE data usually come in the form of randomly scattered points or complex grids, which is a challenge for conventional surrogate models.

Table 2. Comparison results of different methods for 3D thermal modeling

| Data used | Method | Computational budget | MAE | RMSE | Computation/testing time (ms) |
|---|---|---|---|---|---|
| Use only HF | FE analysis (ABAQUS) | – | – | – | ≈ 3,000,000 |
| Use only HF | SVD-Kriging with HF data (sorted) | 252H | 151.57 | 13.56 | 0.51 |
| Use only HF | SVD-Kriging with HF data (unsorted) | 252H | 481.75 | 4889.54 | 0.51 |
| Use only HF | PointNN (unsorted) | 252H | 129.47 | 13.01 | 3.0 |
| Use only HF | PointNN (unsorted) | 140H | 287.96 | 43.55 | 3.4 |
| Use both LF and HF | Standard MF SVD-Kriging [26] (sorted) | 140H + 252L | 392.76 | 49.25 | 1.06 |
| Use both LF and HF | Standard MF SVD-Kriging [26] (unsorted) | 140H + 252L | 491.83 | 4800.23 | 1.03 |
| Use both LF and HF | MF-PointNN (unsorted) | 140H + 252L | 167.23 | 14.38 | 3.6 |

In contrast, considering unsorted data, the proposed PointNN has the best performance in both global and local accuracy, with an MAE of 129.47 and an RMSE of 13.01 using 252 HF data. The proposed MF-PointNN also achieves superior performance, with an MAE of 167.23 and an RMSE of 14.38 using 140 HF data and 252 LF data, whereas the PointNN trained with only 140 HF data achieves an MAE of 287.96 and an RMSE of 43.55.

Additionally, as shown in Table 2, the computation time of the proposed MF-PointNN is only 3.6 ms, while those of the SVD-Kriging and standard MF SVD-Kriging are 0.51 ms and 1.06 ms, respectively. Thus, the proposed method remains comparable to the conventional surrogate models in computation time. Furthermore, the computation time for the 3D thermal field of a surrogate model is within 4 ms, as opposed to over 3,000,000 ms for the FE analysis.

A testing example comparing the surrogate model predictions and the FE thermal model is shown in Fig. 13. The input parameters of this example are {Tpre=993.357, P=438.756, v=0.484, η=0.720, k=4.789, Cp=530.050, ρ=4299.37}. As evident from Fig. 13, the proposed method has superior performance in predicting the temperature field; the temperature concentration and distribution around the melt pool are clearly reproduced. Figure 14 shows the absolute errors of the surrogate model predictions compared with the FE-based thermal model. As seen in Fig. 14, the PointNN and MF-PointNN achieve low absolute errors even near the electron beam focus, while the Kriging-based methods yield relatively large prediction errors. Figure 15 shows the probability density function (PDF) of the absolute prediction errors of the different methods. It indicates that the absolute prediction errors of the PointNN are much smaller than their counterparts from the SVD-Kriging-based methods. It should be noted that the accuracy of the PointNN with 252 HF data is similar to that of the MF-PointNN, but the MF-PointNN uses only 140 HF samples. This demonstrates the effectiveness of the proposed MF-PointNN in reducing the number of HF samples required for surrogate modeling. Consequently, the proposed method can be an effective and efficient substitute for the FE-based thermal model.

Fig. 13. Comparison between surrogate model prediction and FE thermal model: (a) FE-based thermal model and (b) surrogate model predictions. From top to bottom: SVD-Kriging with HF data (sorted), standard MF SVD-Kriging (sorted), PointNN using 252 HF data, and MF-PointNN.
Fig. 14. Absolute errors of surrogate model predictions compared with the FE-based thermal model: (a) SVD-Kriging with HF data (sorted), (b) standard MF SVD-Kriging (sorted), (c) PointNN using 252 HF data, and (d) MF-PointNN
Fig. 15. PDF of absolute errors in surrogate model predictions

6 Conclusions

This paper proposes a novel MF-PointNN method that combines a PointNN with a TL-based MF scheme for efficient and accurate surrogate modeling of the 3D thermal field in metallic AM. The PointNN provides powerful high-dimensional modeling ability, while the TL-based MF scheme reduces the computational cost. First, Rosenthal's analytical model is adopted to generate a cheap LF dataset under uncertainty, and the FE-based numerical model is used as the HF dataset generator. After dataset generation, an LF PointNN is trained and optimized on the LF dataset; the LF PointNN is then fine-tuned and updated using the smaller HF dataset through the TL-based MF scheme. In this way, the MF-PointNN is constructed and employed as an automatic feature extractor that fuses the information from both the LF and HF data. Finally, the proposed method has been validated by predicting the 3D thermal field around the melt pool in EBAM of Ti-6Al-4V for a cuboid geometry. Numerical results suggest that the proposed method not only improves the prediction performance of the 3D thermal field but also effectively reduces the computational cost.

In our future work, we will focus on investigating different analytical models [55], aiming to generate a more reliable LF dataset. Meanwhile, an advanced point-cloud learning method [48–50] will be adapted into the proposed method to make 3D thermal field modeling more efficient and more accurate. We are also interested in merging the proposed model with comprehensive UQ and process optimization [56–58].

Funding Data

  • Michigan Institute of Data Science (MIDAS).

  • National Science Foundation (NSF) (Grant No. CMMI-1662864; Funder ID: 10.13039/100000001).

References

1. Standard, A., and others, 2012, "Standard Terminology for Additive Manufacturing Technologies," ASTM International, F2792-12a. https://www.astm.org/DATABASE.CART/WITHDRAWN/F2792.htm
2. Laureijs, R. E., Roca, J. B., Narra, S. P., Montgomery, C., Beuth, J. L., and Fuchs, E. R., 2017, "Metal Additive Manufacturing: Cost Competitive Beyond Low Volumes," ASME J. Manuf. Sci. Eng., 139(8), p. 081010. 10.1115/1.4035420
3. Herzog, D., Seyda, V., Wycisk, E., and Emmelmann, C., 2016, "Additive Manufacturing of Metals," Acta Mater., 117, pp. 371–392. 10.1016/j.actamat.2016.07.019
4. Körner, C., 2016, "Additive Manufacturing of Metallic Components by Selective Electron Beam Melting—a Review," Int. Mater. Rev., 61(5), pp. 361–377. 10.1080/09506608.2016.1176289
5. Wang, C., Tan, X. P., Tor, S. B., and Lim, C. S., 2020, "Machine Learning in Additive Manufacturing: State-of-the-Art and Perspectives," Addit. Manuf., 36, p. 101538. 10.1016/j.addma.2020.101538
6. Qi, X., Chen, G., Li, Y., Cheng, X., and Li, C., 2019, "Applying Neural-Network-Based Machine Learning to Additive Manufacturing: Current Applications, Challenges, and Future Perspectives," Engineering, 5(4), pp. 721–729. 10.1016/j.eng.2019.04.012
7. Hu, Z., and Mahadevan, S., 2017, "Uncertainty Quantification and Management in Additive Manufacturing: Current Status, Needs, and Opportunities," Int. J. Adv. Manuf. Technol., 93(5–8), pp. 2855–2874. 10.1007/s00170-017-0703-5
8. Hu, Z., Mahadevan, S., and Du, X., 2016, "Uncertainty Quantification of Time-Dependent Reliability Analysis in the Presence of Parametric Uncertainty," ASCE-ASME J. Risk Uncertainty Eng. Syst., Part B: Mech. Eng., 2(3), p. 031005. 10.1115/1.4032307
9. Hu, Z., and Mahadevan, S., 2017, "Uncertainty Quantification in Prediction of Material Properties During Additive Manufacturing," Scr. Mater., 135, pp. 135–140. 10.1016/j.scriptamat.2016.10.014
10. Garcia, D., Wu, Z., Kim, J. Y., Hang, Z. Y., and Zhu, Y., 2019, "Heterogeneous Materials Design in Additive Manufacturing: Model Calibration and Uncertainty-Guided Model Selection," Addit. Manuf., 27, pp. 61–71. 10.1016/j.addma.2019.02.014
11. Moser, D., Cullinan, M., and Murthy, J., 2019, "Multi-Scale Computational Modeling of Residual Stress in Selective Laser Melting With Uncertainty Quantification," Addit. Manuf., 29, p. 100770. 10.1016/j.addma.2019.06.021
12. Wang, Z., Liu, P., Ji, Y., Mahadevan, S., Horstemeyer, M. F., Hu, Z., Chen, L., and Chen, L.-Q., 2019, "Uncertainty Quantification in Metallic Additive Manufacturing Through Physics-Informed Data-Driven Modeling," JOM, 71(8), pp. 2625–2634. 10.1007/s11837-019-03555-z
13. Chan, S., and Elsheikh, A. H., 2018, "A Machine Learning Approach for Efficient Uncertainty Quantification Using Multiscale Methods," J. Comput. Phys., 354, pp. 493–511. 10.1016/j.jcp.2017.10.034
14. Wang, Z., Jiang, C., Liu, P., Yang, W., Zhao, Y., Horstemeyer, M. F., Chen, L.-Q., Hu, Z., and Chen, L., 2020, "Uncertainty Quantification and Reduction in Metal Additive Manufacturing," NPJ Comput. Mater., 6(1), p. 175. 10.1038/s41524-020-00444-x
15. Yavari, M. R., Cole, K. D., and Rao, P., 2019, "Thermal Modeling in Metal Additive Manufacturing Using Graph Theory," J. Manuf. Sci. Eng., 141(7), p. 071007. 10.1115/1.4043648
16. Bikas, H., Stavropoulos, P., and Chryssolouris, G., 2016, "Additive Manufacturing Methods and Modelling Approaches: A Critical Review," Int. J. Adv. Manuf. Technol., 83(1–4), pp. 389–405. 10.1007/s00170-015-7576-2
17. Foteinopoulos, P., Papacharalampopoulos, A., and Stavropoulos, P., 2018, "On Thermal Modeling of Additive Manufacturing Processes," CIRP J. Manuf. Sci. Technol., 20, pp. 66–83. 10.1016/j.cirpj.2017.09.007
18. Luo, Z., and Zhao, Y., 2018, "A Survey of Finite Element Analysis of Temperature and Thermal Stress Fields in Powder Bed Fusion Additive Manufacturing," Addit. Manuf., 21, pp. 318–332. 10.1016/j.addma.2018.03.022
19. Bhosekar, A., and Ierapetritou, M., 2018, "Advances in Surrogate Based Modeling, Feasibility Analysis, and Optimization: A Review," Comput. Chem. Eng., 108, pp. 250–267. 10.1016/j.compchemeng.2017.09.017
20. Nath, P., Hu, Z., and Mahadevan, S., 2017, "Multi-Level Uncertainty Quantification in Additive Manufacturing," Proceedings of the 28th Annual International Solid Freeform Fabrication Symposium–an Additive Manufacturing Conference, Austin, TX, Aug. 7–9, pp. 922–937. https://utw10945.utweb.utexas.edu/sites/default/files/2017/Manuscripts/MultiLevelUncertaintyQuantificationinAdditive.pdf
21. Wang, Z., Liu, P., Xiao, Y., Cui, X., Hu, Z., and Chen, L., 2019, "A Data-Driven Approach for Process Optimization of Metallic Additive Manufacturing Under Uncertainty," J. Manuf. Sci. Eng., 141(8), p. 081004. 10.1115/1.4043798
22. Ren, K., Chew, Y., Zhang, Y., Fuh, J., and Bi, G., 2020, "Thermal Field Prediction for Laser Scanning Paths in Laser Aided Additive Manufacturing by Physics-Based Machine Learning," Comput. Methods Appl. Mech. Eng., 362, p. 112734. 10.1016/j.cma.2019.112734
23. Zhu, Q., Liu, Z., and Yan, J., 2021, "Machine Learning for Metal Additive Manufacturing: Predicting Temperature and Melt Pool Fluid Dynamics Using Physics-Informed Neural Networks," Comput. Mech., 67(2), pp. 619–635. 10.1007/s00466-020-01952-9
24. Viana, F. A., Simpson, T. W., Balabanov, V., and Toropov, V., 2014, "Special Section on Multidisciplinary Design Optimization: Metamodeling in Multidisciplinary Design Optimization: How Far Have We Really Come?," AIAA J., 52(4), pp. 670–690. 10.2514/1.J052375
25. Giselle Fernández-Godino, M., Park, C., Kim, N. H., and Haftka, R. T., 2019, "Issues in Deciding Whether to Use Multifidelity Surrogates," AIAA J., 57(5), pp. 2039–2054. 10.2514/1.J057750
26. Shu, L., Jiang, P., Zhou, Q., Shao, X., Hu, J., and Meng, X., 2018, "An On-Line Variable Fidelity Metamodel Assisted Multi-Objective Genetic Algorithm for Engineering Design Optimization," Appl. Soft Comput., 66, pp. 438–448. 10.1016/j.asoc.2018.02.033
27. Zhou, Q., Wang, Y., Choi, S.-K., Jiang, P., Shao, X., and Hu, J., 2017, "A Sequential Multi-Fidelity Metamodeling Approach for Data Regression," Knowl.-Based Syst., 134, pp. 199–212. 10.1016/j.knosys.2017.07.033
28. Shu, L., Jiang, P., Song, X., and Zhou, Q., 2019, "Novel Approach for Selecting Low-Fidelity Scale Factor in Multifidelity Metamodeling," AIAA J., 57(12), pp. 5320–5330. 10.2514/1.J057989
29. Gao, H., Zhu, X., and Wang, J.-X., 2020, "A Bi-Fidelity Surrogate Modeling Approach for Uncertainty Propagation in Three-Dimensional Hemodynamic Simulations," Comput. Methods Appl. Mech. Eng., 366, p. 113047. 10.1016/j.cma.2020.113047
30. Liu, D., and Wang, Y., 2019, "Multi-Fidelity Physics-Constrained Neural Network and Its Application in Materials Modeling," J. Mech. Des., 141(12), p. 121403. 10.1115/1.4044400
31. De, S., Britton, J., Reynolds, M., Skinner, R., Jansen, K., and Doostan, A., 2020, "On Transfer Learning of Neural Networks Using Bi-Fidelity Data for Uncertainty Propagation," Int. J. Uncertainty Quantif., 10(6), pp. 543–573. 10.1615/Int.J.UncertaintyQuantification.2020033267
32. Meng, X., and Karniadakis, G. E., 2020, "A Composite Neural Network That Learns From Multi-Fidelity Data: Application to Function Approximation and Inverse PDE Problems," J. Comput. Phys., 401, p. 109020. 10.1016/j.jcp.2019.109020
33. Chakraborty, S., 2021, "Transfer Learning Based Multi-Fidelity Physics Informed Deep Neural Network," J. Comput. Phys., 426, p. 109942. 10.1016/j.jcp.2020.109942
34. Rosenthal, D., 1941, "Mathematical Theory of Heat Distribution During Welding and Cutting," Weld. J., 20, pp. 220–234. https://ci.nii.ac.jp/naid/10014566598/
35. Rosenthal, D., 1946, "The Theory of Moving Sources of Heat and Its Application of Metal Treatments," Trans. ASME, 68(8), pp. 849–866. https://ci.nii.ac.jp/naid/10004812806/en/
36. Al-Bermani, S. S., Blackmore, M. L., Zhang, W., and Todd, I., 2010, "The Origin of Microstructural Diversity, Texture, and Mechanical Properties in Electron Beam Melted Ti-6Al-4V," Metall. Mater. Trans. A, 41(13), pp. 3422–3434. 10.1007/s11661-010-0397-x
37. Promoppatum, P., Yao, S.-C., Pistorius, P. C., and Rollett, A. D., 2017, "A Comprehensive Comparison of the Analytical and Numerical Prediction of the Thermal History and Solidification Microstructure of Inconel 718 Products Made by Laser Powder-Bed Fusion," Engineering, 3(5), pp. 685–694. 10.1016/J.ENG.2017.05.023
38. Li, J., Wang, Q., and (Pan) Michaleris, P., 2018, "An Analytical Computation of Temperature Field Evolved in Directed Energy Deposition," J. Manuf. Sci. Eng., 140(10), p. 101004. 10.1115/1.4040621
39. Raghavan, N., Dehoff, R., Pannala, S., Simunovic, S., Kirka, M., Turner, J., Carlson, N., and Babu, S. S., 2016, "Numerical Modeling of Heat-Transfer and the Influence of Process Parameters on Tailoring the Grain Morphology of IN718 in Electron Beam Additive Manufacturing," Acta Mater., 112, pp. 303–314. 10.1016/j.actamat.2016.03.063
40. ABAQUS, V., 2010, User Subroutines Reference Manual, Dassault Systemes Simulia Corp., Johnston, RI. http://130.149.89.49:2080/v6.10ef/
41. Hu, Z., and Mahadevan, S., 2016, "Global Sensitivity Analysis-Enhanced Surrogate (GSAS) Modeling for Reliability Analysis," Struct. Multidiscip. Optim., 53(3), pp. 501–521. 10.1007/s00158-015-1347-4
42. Zhou, Q., Yang, Y., Jiang, P., Shao, X., Cao, L., Hu, J., Gao, Z., and Wang, C., 2017, "A Multi-Fidelity Information Fusion Metamodeling Assisted Laser Beam Welding Process Parameter Optimization Approach," Adv. Eng. Software, 110, pp. 85–97. 10.1016/j.advengsoft.2017.04.001
43. Helton, J. C., and Davis, F. J., 2003, "Latin Hypercube Sampling and the Propagation of Uncertainty in Analyses of Complex Systems," Reliab. Eng. Syst. Saf., 81(1), pp. 23–69. 10.1016/S0951-8320(03)00058-9
44. Qi, C. R., Su, H., Mo, K., and Guibas, L. J., 2017, "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation," 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, July 21–26, pp. 77–85. 10.1109/CVPR.2017.16
45. Bello, S. A., Yu, S., Wang, C., Adam, J. M., and Li, J., 2020, "Deep Learning on 3D Point Clouds," Remote Sensing, 12(11), p. 1729. 10.3390/rs12111729
46. Kashefi, A., Rempe, D., and Guibas, L. J., 2020, "A Point-Cloud Deep Learning Framework for Prediction of Fluid Flow Fields on Irregular Geometries," Phys. Fluids, 33(2), p. 027104. 10.1063/5.0033376
47. Ioffe, S., and Szegedy, C., 2015, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift," Proceedings of the 32nd International Conference on Machine Learning, Lille, France, July 7–9, pp. 448–456. http://proceedings.mlr.press/v37/ioffe15.html
48. Glorot, X., Bordes, A., and Bengio, Y., 2011, "Deep Sparse Rectifier Neural Networks," Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, FL, Apr. 11–13, pp. 315–323.
49. Tan, C., Sun, F., Kong, T., Zhang, W., Yang, C., and Liu, C., 2018, "A Survey on Deep Transfer Learning," International Conference on Artificial Neural Networks, Springer, Rhodes, Greece, Oct. 4–7, pp. 270–279. 10.1007/978-3-030-01424-7_27
50. Liu, L., Jiang, H., He, P., Chen, W., Liu, X., Gao, J., and Han, J., 2020, "On the Variance of the Adaptive Learning Rate and Beyond," Proceedings of the Eighth International Conference on Learning Representations (ICLR 2020), Addis Ababa, Ethiopia, Apr. 26–May 1. https://openreview.net/forum?id=rkgz2aEKDr
51. Zhang, M., Lucas, J., Ba, J., and Hinton, G. E., 2019, "Lookahead Optimizer: k Steps Forward, 1 Step Back," Adv. Neural Inf. Process. Syst., 32, pp. 9593–9604. https://arxiv.org/abs/1907.08610
52. Kingma, D. P., and Ba, J., 2014, "Adam: A Method for Stochastic Optimization," 3rd International Conference for Learning Representations, San Diego, CA, May 7–9.
53. Loshchilov, I., and Hutter, F., 2016, "SGDR: Stochastic Gradient Descent With Warm Restarts," 5th International Conference on Learning Representations (ICLR 2017), Toulon, France, Apr. 24–26. https://arxiv.org/abs/1608.03983
54. MATLAB, 2020, 9.9.0.1495850 (R2020b), The MathWorks Inc., Natick, MA.
55. Ning, J., Sievers, D. E., Garmestani, H., and Liang, S. Y., 2019, "Analytical Modeling of In-Process Temperature in Powder Bed Additive Manufacturing Considering Laser Power Absorption, Latent Heat, Scanning Strategy, and Powder Packing," Materials, 12(5), p. 808. 10.3390/ma12050808
56. Guo, Y., Wang, H., Hu, Q., Liu, H., Liu, L., and Bennamoun, M., 2020, "Deep Learning for 3D Point Clouds: A Survey," IEEE Trans. Pattern Anal. Mach. Intell., in press. 10.1109/TPAMI.2020.3005434
57. Hu, Q., Yang, B., Xie, L., Rosa, S., Guo, Y., Wang, Z., Trigoni, N., and Markham, A., 2020, "RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds," 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, June 16–18, pp. 11108–11117. 10.1109/CVPR42600.2020.01112
58. Xu, Q., Sun, X., Wu, C.-Y., Wang, P., and Neumann, U., 2020, "Grid-GCN for Fast and Scalable Point Cloud Learning," 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, June 16–18, pp. 5661–5670. 10.1109/CVPR42600.2020.00570