## Abstract

We applied machine learning models to predict the relationship between the yield stress and the stacking fault energy landscape in high entropy alloys. The data for learning in this work were taken from phase-field dislocation dynamics simulations of partial dislocations in face-centered-cubic metals. This study was motivated by the intensive computation required for phase-field simulations. We adopted three different ways to describe the variations of the stacking fault energy (SFE) landscape as inputs to the machine learning models. Our study showed that the best machine learning model was able to predict the yield stress to approximately 2% error. In addition, our unsupervised learning study produced a principal component that showed the same trend as a physically meaningful quantity with respect to the critical yield stress.

## 1 Introduction

High entropy alloys (HEAs) are a new class of materials with many favorable mechanical and thermal properties. They exhibit high yield stress [1,2], low coupling between ductility and temperature [3], excellent specific strength, superior mechanical performance at high temperatures, exceptional ductility and fracture toughness at cryogenic temperatures, superparamagnetism, and superconductivity [4]. In addition, their high hardness, wear resistance, resistance to high-temperature softening, and corrosion resistance make HEAs excellent candidates for structural applications in the transportation and energy industries [5,6].

High entropy alloys are generally classified as alloys composed of five or more alloying elements. The crystal structure of HEAs can be complex, with heterogeneous phases [7]. Several studies have examined the variability of the stacking fault energy (SFE) landscape in a face-centered cubic (FCC) HEA resulting from the presence of heterogeneous phases [8–10]. The phase-field dislocation method (PFDM) studies conducted by Zeng et al. [8] showed that the yield stress of an FCC HEA increases with larger fluctuations of the SFE, and that the maximum strength increase is attained when the characteristic length scale of the SFE fluctuations is close to the average equilibrium stacking fault width. While their simulation results illuminate the effect of SFE fluctuations on the strength of FCC HEAs, the computational cost of their simulations is extremely high. In this study, we apply machine learning (ML) models to learn the relationship between the SFE fluctuations and the yield stress of an FCC HEA using the published simulation results from Zeng et al. We show that our ML models can predict the yield stress of HEAs with varying SFE landscapes to around 2% error.

## 2 Data

The data used in this study contain the SFE landscape and the resulting critical yield stress. The yield stress is obtained from PFDM simulations of dislocations moving in this energy landscape under an externally applied shear stress. The details of the simulations can be found in Ref. [8], and a summary of the model is described in the following sections.

### 2.1 Phase-Field Dislocation Model.

The PFDM studies the evolution of a single dislocation under an externally applied stress. In the PFDM, any displacement in a slip system is described by a sum of scalar-valued phase fields $\xi^{\alpha}(\mathbf{x})$:
$\Delta(\mathbf{x}) = b\sum_{\alpha=1}^{3}\xi^{\alpha}(\mathbf{x})\,\mathbf{s}^{\alpha}$
(1)
where $\mathbf{s}^{1}=\frac{\sqrt{2}}{2}[0\bar{1}1]$, $\mathbf{s}^{2}=\frac{\sqrt{2}}{2}[10\bar{1}]$, and $\mathbf{s}^{3}=\frac{\sqrt{2}}{2}[\bar{1}10]$ are the directions of the three Burgers vectors in the (111) slip plane in FCC materials [11,12].

The evolution of the phase fields is obtained by minimizing the total energy of the dislocation ensemble [12,13]. This energy consists of two contributions: the strain energy, E<sub>e</sub>, and the misfit energy, E<sub>m</sub>. The misfit energy accounts for stacking fault formation through a parametrization of the gamma-surface [11,14].

The plastic distortion $βijp$ can be written as follows:
$\beta_{ij}^{p}(\mathbf{x})=\frac{1}{d}\sum_{\alpha=1}^{N}b^{\alpha}\xi^{\alpha}(\mathbf{x})\,m_{i}^{\alpha}s_{j}^{\alpha}$
(2)
where N is the total number of slip systems, d is the distance between slip planes, and $m^{\alpha}$ is the normal to slip plane α. The total distortion can be obtained using the elastic Green’s function $G_{ij}$ [15,16] as follows:
$\beta_{ij}(\mathbf{x})=-G_{ik,l}\star\left(C_{klmn}\,\beta_{mn}^{p}(\mathbf{x})\right)_{,j}$
(3)
where $C_{klmn}$ is the tensor of elastic constants and $\star$ represents the convolution operator. The strain energy can be calculated as follows:
$E_{e}=\frac{1}{2}\int_{\Omega}C_{ijkl}\,(\epsilon_{ij}-\epsilon_{ij}^{p})(\epsilon_{kl}-\epsilon_{kl}^{p})\,d^{3}x-\sigma_{ij}^{\mathrm{app}}\,\bar{\epsilon}_{ij}^{p}$
(4)
where Ω is the domain, the strain $\epsilon_{ij}=\mathrm{sym}(\beta_{ij})$ is the symmetric part of the distortion, $\epsilon_{ij}^{p}=\mathrm{sym}(\beta_{ij}^{p})$ is the plastic strain, and
$\bar{\epsilon}_{ij}^{p}=\frac{1}{\Omega}\int_{\Omega}\epsilon_{ij}^{p}\,d^{3}x$
(5)
The gamma-surface in the PFDM [11,14] has allowed the prediction of the equilibrium stacking fault width for several FCC metals [11,12,17] in close agreement with atomistic computations and experiments. The gamma-surface is introduced as follows [11]:
$E_{m}=\sum_{\alpha=1}^{N}\int_{S^{\alpha}}\phi[\xi]\,d^{2}x$
(6)
where the integral is over the slip planes $S^{\alpha}$. The misfit energy can be written as an explicit function of the intrinsic stacking fault energy γ and the unstable stacking fault energy $\gamma_{u}$ [11,14,18,19]:
$\phi(\xi)=\gamma\sin^{2}(\pi\xi)+\left(\gamma_{u}-\frac{\gamma}{2}\right)\sin^{2}(2\pi\xi)$
(7)
The advantage of this formulation of the misfit energy is that it depends explicitly on γ and γu. However, the limitation of Eq. (7) is that it can be used only for displacements from a single phase field.
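As a concrete check, Eq. (7) can be evaluated directly. The sketch below uses hypothetical values of γ and γ<sub>u</sub> (the actual Ni parameters of Table 1 are not reproduced here). At ξ = 1/2, corresponding to a full intrinsic stacking fault, φ reduces to γ, as the parametrization is designed to ensure.

```python
import numpy as np

def misfit_energy_density(xi, gamma, gamma_u):
    # Eq. (7): phi(xi) = gamma sin^2(pi xi) + (gamma_u - gamma/2) sin^2(2 pi xi)
    return (gamma * np.sin(np.pi * xi) ** 2
            + (gamma_u - gamma / 2.0) * np.sin(2.0 * np.pi * xi) ** 2)

# Hypothetical values in mJ/m^2 (not the Table 1 Ni parameters)
gamma, gamma_u = 120.0, 280.0
phi_half = misfit_energy_density(0.5, gamma, gamma_u)  # xi = 1/2 recovers gamma
```

Note that φ vanishes at ξ = 0 and ξ = 1 (perfect lattice positions), so the energy penalty applies only to partial displacements.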
The equilibrium configuration of the dislocation ensemble is obtained by minimizing the total energy. This results in a system of coupled equations, one for each phase field $\xi^{\alpha}$:
$\frac{\delta\left(E_{e}[\xi(\mathbf{x})]+E_{m}[\xi(\mathbf{x})]\right)}{\delta\xi^{\alpha}(\mathbf{x})}=0$
(8)
Materials parameters for Ni, listed in Table 1, are used in all the simulations.

In HEAs, the stacking fault energy varies locally with the local composition of the alloy. In different regions of the slip plane, we allocated different values of the intrinsic stacking fault energy. All remaining material properties are left unchanged. Two straight extended dislocations with Burgers vector in the [110] direction are introduced in the slip plane. At zero applied stress, each dislocation splits into two partials. Subsequently, an external stress is applied. The yield stress is defined as the minimum stress required for the dislocation to slide. All the data used in this article are obtained from the study by Zeng et al. [8].

## 3 Methods

### 3.1 Machine Learning Models.

There are two major classes of ML models: supervised and unsupervised learning. Unsupervised learning looks for patterns in the training data without using target outputs. Supervised learning uses the target outputs to train the ML models.

In this study, we applied six different supervised ML models and one unsupervised ML model to our data to find the optimal ML model. All the ML models used in this study are implemented in the Scikit-learn machine learning library in Python [21]. We used the GridSearchCV feature to iteratively select the optimal parameters for each ML model from a set of predefined hyperparameters. Hyperparameters are model-specific parameters used to tune the learning process. The hyperparameters for each ML model were chosen using a tenfold cross-validation scheme. We briefly describe each ML model used in this study and its hyperparameters below.
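As an illustration of this selection workflow, the sketch below runs GridSearchCV with tenfold cross-validation on synthetic stand-in data (the actual PFDM features and yield stresses are not reproduced here); the K-neighbors regressor and its candidate K values are chosen for illustration only.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(83, 3))                    # 83 samples, 3 features, as in Sec. 3.2
y = 1.0 - X[:, 0] + 0.1 * rng.normal(size=83)    # synthetic stand-in target

# Tenfold cross-validation over a predefined set of hyperparameters
search = GridSearchCV(
    KNeighborsRegressor(),
    param_grid={"n_neighbors": [1, 2, 3, 5, 8]},
    cv=10,
    scoring="neg_mean_absolute_error",
)
search.fit(X, y)
best_k = search.best_params_["n_neighbors"]      # hyperparameter chosen by CV
```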

• K-Neighbors Regressor. The K-neighbors model uses the similarity of the data attributes to predict the value of the test data. This similarity is computed as a distance in the m-dimensional feature space x through the Euclidean distance shown in Eq. (9), where $x_{i}$ denotes a training input and $\hat{x}_{i}$ denotes a test input. The predicted outcome is the mean of the nearest neighbors’ outputs as shown in Eq. (10) [22], where K is the number of nearest neighbors, $y_{i}$ denotes the output from the training data in the neighborhood, and $\hat{y}$ denotes the predicted output for test input $\hat{x}_{i}$.
$d=\sqrt{\sum_{i=1}^{m}(\hat{x}_{i}-x_{i})^{2}}$
(9)
$\hat{y}=\frac{1}{K}\sum_{i=1}^{K}y_{i}$
(10)
The hyperparameter of the K-neighbors model is the parameter K, the number of nearest neighbors.
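A minimal sketch of Eqs. (9) and (10), using made-up training data rather than the PFDM features:

```python
import numpy as np

def knn_predict(X_train, y_train, x_test, K):
    # Eq. (9): Euclidean distance from the test input to every training input
    d = np.sqrt(np.sum((X_train - x_test) ** 2, axis=1))
    # Eq. (10): average the outputs of the K nearest neighbors
    nearest = np.argsort(d)[:K]
    return y_train[nearest].mean()

X_train = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
y_train = np.array([1.0, 2.0, 3.0, 10.0])
pred = knn_predict(X_train, y_train, np.array([0.1, 0.1]), K=3)  # mean of 1, 2, 3
```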
• Bayesian Ridge Regression. Bayesian ridge regression is a probabilistic approach to linear regression. It assumes that the output of the data is normally distributed as shown in Eq. (11), where $\beta^{T}$ are the weights of the linear regression and σ is the standard deviation. Using Bayes’ theorem and a prior distribution for the parameters β and σ, a posterior distribution for the output at an unknown input can be found given the input data [23].
$\hat{y}\sim\mathcal{N}(\beta^{T}X,\sigma^{2}I)$
(11)
The hyperparameters of the Bayesian ridge regression model are as follows: σ, the standard deviation of the normal distribution about the linear fit $\beta^{T}X$, and λ, the standard deviation of the distribution of the weights β.
• Decision Tree Regression. Decision tree regression breaks the training data into branches, where each break point is a decision node. Each node has two or more branches where an attribute of the input can be tested. A prediction is made when a leaf (terminal) node is reached. The tree is built using a top–down greedy search through the branches with no backtracking [24]. The hyperparameters used in this model are the split criterion (e.g., mean absolute error (MAE) or mean squared error), which determines the locations where the tree splits, and the maximum depth of the tree, which is the length of the longest path from the root of the tree to a leaf.

• Gradient Boosting Regression (GBR). Gradient boosting regression combines a set of M weak learners, added sequentially (i.e., boosting) one at a time so that each new learner reduces a predefined loss function by fitting its gradient-descent residual. The most commonly used weak learners are decision trees with a small number of leaves [25]. The hyperparameters used in this model are the maximum depth of the decision trees; the loss function, which is used to compute the residual for each decision tree; the learning rate, which scales the contribution from each decision tree; and the maximum number of decision trees allowed.

• Kernel Ridge Regression. Kernel ridge regression is a generalization of ordinary least-squares models to a nonlinear, possibly infinite-dimensional feature space using the kernel trick. The difference between kernel ridge regression and support vector regression lies in the loss function: kernel ridge regression minimizes the squared loss [26]. The hyperparameters used in this model are the type of kernel (e.g., linear, squared exponential, and polynomial) and the regularization parameter α, which prevents the model from overfitting.

• Gaussian Process Regression (GPR). Gaussian process regression assumes the output to be a Gaussian process: the joint probability distribution of any subset of the outputs forms a multivariate Gaussian distribution. The covariance kernel k(x, x′) is assumed to be a function only of the inputs x and x′. A commonly used covariance kernel is the radial basis function shown in Eq. (12), where l is a hyperparameter indicating a characteristic length scale of the data. Using the definition of conditional probability, a posterior distribution for the target y at an unknown input is computed as follows [27].
$k(x,x')=\exp\left(-\frac{\|x-x'\|^{2}}{2l^{2}}\right)$
(12)
The hyperparameters used in this model are the type of kernel (e.g., squared exponential, rational quadratic, and Matérn); each kernel is controlled by a length-scale hyperparameter.
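A brief sketch of the GPR workflow with a squared-exponential kernel, assuming Scikit-learn's GaussianProcessRegressor and synthetic one-dimensional data (not the PFDM features):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 10.0, size=(40, 1))   # synthetic 1-D inputs
y = np.sin(X).ravel()                      # smooth synthetic target

# Squared-exponential (RBF) kernel; the length scale l is fitted to the data
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6)
gpr.fit(X, y)
mean, std = gpr.predict(np.array([[5.0]]), return_std=True)
```

Besides the posterior mean, GPR returns a standard deviation, which quantifies the prediction uncertainty away from the training data.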
• Principal Component Analysis (PCA). Principal component analysis is an unsupervised ML model that computes the principal components of the data. The principal components transform the set of correlated input variables into a set of linearly uncorrelated variables. The components are chosen by picking the directions with the largest variances in the data [28]. In our study, we use PCA to linearly decorrelate the input features. The only hyperparameter for PCA is the number of components selected for the analysis.
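A short sketch of PCA on synthetic correlated inputs, assuming Scikit-learn's PCA; the data are hypothetical and only illustrate that a few components can capture most of the variance when the inputs are correlated:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
# Correlated 3-D inputs: the third column is (almost) a linear combination
# of the first two, so two principal components capture nearly all variance
base = rng.normal(size=(200, 2))
third = base @ np.array([0.5, -0.3]) + 0.01 * rng.normal(size=200)
X = np.column_stack([base, third])

pca = PCA(n_components=2)
Z = pca.fit_transform(X)                        # decorrelated, reduced inputs
explained = pca.explained_variance_ratio_.sum() # fraction of variance retained
```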

### 3.2 Features for Learning.

For our study, we used 83 PFDM simulations of the Ni-Co-Fe-Cr-Mn family of high entropy alloys with different SFE distributions from Zeng et al. [8] to train our ML models. In the PFDM simulations, the regions with constant SFE are produced by a Voronoi tessellation algorithm with a mean region size, d, between 0.25 nm and 12 nm. The SFEs are distributed uniformly and are randomly assigned to each region. The mean SFEs $\bar{\gamma}$ used in the PFDM simulations are $\bar{\gamma}\in\{72.0, 84.7, 127.1\}\,\mathrm{mJ/m^2}$ with a standard deviation of $\sigma = 39\,\mathrm{mJ/m^2}$, and $\bar{\gamma}=35\,\mathrm{mJ/m^2}$ with a standard deviation of $\sigma = 12\,\mathrm{mJ/m^2}$. An example of an input to the PFDM simulations is plotted in Fig. 1.

To determine which features best describe the PFDM inputs, we proposed three different candidates for the input features. Each machine learning model was trained separately using each of the three candidate feature types. The dimensions of each candidate feature type are listed in Table 2.

1. Feature Type 1. We used the prescribed mean region size d, mean SFE $\bar{\gamma}$, and standard deviation of the SFE σ as input features to the ML models. We refer to the triplet $(\bar{\gamma},\sigma,d)$ as the prescribed statistical features for the remainder of the article.

2. Feature Type 2. We numerically estimated the mean region size $\hat{d}$, mean SFE $\hat{\gamma}$, and standard deviation $\hat{\sigma}$ of the SFE from the pixel values of each image as shown in Fig. 1. This feature type was motivated by SFE landscapes produced by atomistic simulations, where the distribution of the SFE landscape is not known a priori. We refer to this triplet, $(\hat{\gamma},\hat{\sigma},\hat{d})$, as the estimated statistical features for the remainder of the article.

3. Feature Type 3. We used a 256 × 256 grid to sample the SFEs at each grid point of the PFDM inputs as shown in Fig. 1. We refer to this feature as the SFE grid features for the remainder of the article.
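As an illustration of how the estimated statistical features (Feature Type 2) could be computed from pixel values, the sketch below uses a hypothetical random SFE grid rather than an actual PFDM input; the region-size estimate $\hat{d}$ is omitted, since its estimator is not specified here.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical stand-in for a 256 x 256 PFDM input image of SFE values (mJ/m^2);
# the actual landscapes come from a Voronoi tessellation, not reproduced here
sfe_grid = rng.uniform(30.0, 120.0, size=(256, 256))

# Feature Type 2: statistics estimated directly from the pixel values
gamma_hat = sfe_grid.mean()   # estimated mean SFE
sigma_hat = sfe_grid.std()    # estimated standard deviation of the SFE
# Estimating the mean region size d_hat would additionally require, e.g.,
# a correlation-length computation over the grid (not sketched here)
features = np.array([gamma_hat, sigma_hat])
```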

### 3.3 Evaluation Metrics.

We used a number of metrics to measure the efficacy of the ML models to predict the yield stress. For the following equations, y denotes the actual output for the test data, $y^$ is the ML predicted output for the test data, and n is the number of test data points.

• Coefficient of Determination (R2). Also known as the multiple correlation coefficient, R2 is a measure of the variability of the dependent variable explained by a model [29]. This coefficient can take negative values. The metric is shown in Eq. (13):
$R^{2}(y,\hat{y})=1-\frac{\sum_{i=0}^{n-1}(y_{i}-\hat{y}_{i})^{2}}{\sum_{i=0}^{n-1}(y_{i}-\bar{y})^{2}}$
(13)
where $y¯$ is expressed as follows:
$\bar{y}=\frac{1}{n}\sum_{i=0}^{n-1}y_{i}$
(14)
• Mean Absolute Percentage Error (MAPE):
$\mathrm{MAPE}=\frac{100}{n}\sum_{i=0}^{n-1}\left|\frac{y_{i}-\hat{y}_{i}}{y_{i}}\right|$
(15)
• Mean Absolute Error:

$\mathrm{MAE}(y,\hat{y})=\frac{1}{n}\sum_{i=0}^{n-1}|y_{i}-\hat{y}_{i}|$
(16)
• Root-Mean-Square Error (RMSE):

$\mathrm{RMSE}(y,\hat{y})=\sqrt{\frac{1}{n}\sum_{i=0}^{n-1}(y_{i}-\hat{y}_{i})^{2}}$
(17)
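The four metrics above can be implemented directly; the sketch below uses hypothetical values for y and $\hat{y}$:

```python
import numpy as np

def r2(y, y_hat):
    y_bar = y.mean()                                                   # Eq. (14)
    return 1.0 - ((y - y_hat) ** 2).sum() / ((y - y_bar) ** 2).sum()   # Eq. (13)

def mape(y, y_hat):
    return 100.0 / len(y) * np.abs((y - y_hat) / y).sum()              # Eq. (15)

def mae(y, y_hat):
    return np.abs(y - y_hat).mean()                                    # Eq. (16)

def rmse(y, y_hat):
    return np.sqrt(((y - y_hat) ** 2).mean())                          # Eq. (17)

y = np.array([100.0, 200.0, 300.0])      # hypothetical exact yield stresses
y_hat = np.array([110.0, 190.0, 300.0])  # hypothetical ML predictions
```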

### 3.4 K-Fold Cross-Validation.

K-fold cross-validation is an evaluation method in which the training data are split into K parts and the ML model is trained K times. In each round, one of the K parts is left out as validation data, and the remaining data are used for training. The validation error of the ML model is averaged over all K rounds. Cross-validation provides a measure of how well the model will generalize to an independent data set [30].
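A minimal sketch of this scheme with K = 10, assuming Scikit-learn's KFold and a synthetic data set of 83 samples (matching the data size in Sec. 3.2):

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(4)
X = rng.uniform(size=(83, 3))          # synthetic stand-in features
y = X @ np.array([1.0, -2.0, 0.5])     # synthetic stand-in target

# Each of the K = 10 folds is held out once as validation data
fold_errors = []
for train_idx, val_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = KNeighborsRegressor(n_neighbors=3).fit(X[train_idx], y[train_idx])
    fold_errors.append(mean_absolute_error(y[val_idx], model.predict(X[val_idx])))

cv_error = np.mean(fold_errors)        # validation error averaged over the K rounds
```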

### 3.5 Training and Test Data Partition.

The data used in this ML study were split randomly into 75% training and 25% test data. A tenfold cross-validation technique was used on the training data to select the hyperparameters for each of the ML models. The test data are completely unseen by the ML models and are only used to compute the test errors of the ML models reported in Sec. 4.

## 4 Results and Discussion

We trained our machine learning models using the three different types of features and compared their efficacy at learning the phase-field data. We discuss the results using both supervised and unsupervised models in this section.

### 4.1 Training the Machine Learning Models.

In this section, we applied the different ML models listed in Sec. 3 to the training data for each feature type. The training-testing procedure for an ML model is as follows: the available data are randomly partitioned into 75% training data and 25% test data. The training data are used to train the ML models, and the test data are unseen by the ML models. A tenfold cross-validation scheme is applied to the training data to select the hyperparameters of each ML model. The hyperparameters are selected using the GridSearchCV routine in the Scikit-learn Python machine learning library [21]. After the training of an ML model is finished, the ML model is used to make predictions on the unseen test data, and errors are computed from the measured difference between the predicted yield stress and the exact yield stress of the test data. The error measures are listed in Sec. 3.3.

To ensure that the test errors for a given ML model represent a good generalization of the errors on unseen data, we repeated the aforementioned training-testing procedure for the ML model ten times. The test errors reported in the following results are averaged over ten different rounds of training-test procedures for every ML model and for every input feature type.

For the first two types of input, i.e., the prescribed and estimated statistical features, the input data consist of only three dimensions, as presented in Table 2. The third type of input, i.e., the SFE grid, consists of 65,536 dimensions. A direct application of the SFE grid input to an ML model would suffer from the “curse of dimensionality,” where an extremely large number of training data points would be necessary to avoid overfitting the ML model, leading to high test errors. The reason why a high-dimensional input requires more training data can be seen from a simple heuristic: to fit a line in one dimension, two data points are necessary; to fit a plane in two dimensions, three data points are necessary; and to fit a linear function of a 65,536-dimensional input, a minimum of 65,537 data points would be necessary. Due to the limited number of available PFDM data points, we reduced the dimension of the SFE grid input using PCA as described in Sec. 3. By using PCA, we reduced the SFE grid input to three dimensions to provide a fair comparison to the first two input types.

For each input type, the training-testing procedure was performed ten times on each of the six ML models in Sec. 3. The ML model that yielded the smallest averaged mean absolute error (MAE) on the test data was chosen as the ML model for that input type. The averaged test errors for each of the input types are reported in Table 3. We observed that the feature types that use statistical descriptors yielded slightly better accuracy than the SFE grid.

To provide an alternative visual form of the test results from the ML models, we plotted the exact yield stress versus the ML-predicted yield stress, using a randomly selected 25% of the available data as test data, in Fig. 2. Since the test data for each of the subplots in Fig. 2 were selected randomly from the available data, different data points are shown in each subplot. Note that the actual yield stresses of the test data are lumped together in roughly four groups according to the four different mean SFE values. It can be seen that there is very little spread from the diagonal line, which indicates that the predictions are robust.

#### 4.1.1 Dimension Reduction Using Principal Component Analysis.

Recall that, so far, we have only applied PCA to the SFE grid input feature because its number of dimensions was too large to be used directly. Conversely, we used the prescribed and estimated statistical inputs directly to train the ML models. We investigated whether PCA could also be applied to the first two input types: the prescribed and estimated statistical features. The goal of applying PCA to the first two input types was to reduce their dimensions to fewer than three. Using the PCA-reduced components from the first two input types, we applied the training-testing procedure ten times and computed the averaged test error. Figure 3 shows the averaged test error, measured as the mean absolute percentage error, as a function of an increasing number of PCA components (i.e., from one to three components). As shown in Fig. 3, after applying PCA, only two components are sufficient to achieve test errors comparable to those of the three-dimensional raw input data. The reduction from three to two dimensions enables visualization of the relationship between the yield stress and the two PCA components. The reason why we can reduce the input data dimension using PCA is justified by the explained variance of each PCA dimension for all three input types, shown in Table 4. The first two PCA dimensions capture more than 90% of the variance in the input data.

Despite the efficacy of PCA in reducing the dimensions of the input data, what is lost in the dimension-reduction procedure is the physical interpretability of the PCA dimensions. However, not all is lost in our case. The first PCA component from the SFE grid coincidentally correlates with the estimated mean SFE. Figure 4 shows that the first PCA component extracted from the SFE grid data captured the same trend for the yield stress as the prescribed and estimated mean SFE, even though it was obtained by an unsupervised process. In other words, the first principal component is equivalent to a scalar multiple of the estimated mean SFE from the grid data.

### 4.2 Predictions Using Machine Learning Model.

The availability of a trained ML model allowed us to create plots that elucidate the relationship between the critical yield stress and the different statistical quantities that describe the distribution of the SFE landscape. We applied GBR and GPR to predict the yield stress curve as a function of the mean region size for different mean SFE values. Both the predicted yield stress and the PFDM yield stress used for training are plotted in Figs. 5 and 6. We see that the GBR model provided nearly piecewise-constant predictions in regions where there are training data. It is important to emphasize that GBR is not simply using a piecewise-constant fit at the training data to predict the yield stress. It can be seen in the two lower curves in Figs. 5(a) and 5(b), respectively, that in the region between 4.0 nm and 6.0 nm there is a step in the predicted yield stress even though there are no training data in that region. The piecewise nature of the predicted curves from GBR is a result of the different stages of decision trees used in the model. The GPR model produced smoother predictions than the GBR model. Both ML models suggest that, for a fixed mean SFE, there is a slight increase in the yield stress for region sizes between 1 nm and 4 nm.

Next, we applied the GPR model to predict the critical yield stress as a function of the mean SFE for several SFE region sizes d. The results are shown in Fig. 7. It is important to note that, due to the sparsity of the available training and test data for a given region size, the ML-predicted yield stresses for varying mean SFE are not as robust as those for varying mean region sizes. In particular, for the predictions using a standard deviation of 12.0 mJ/m², we only have a few data points at a mean SFE of 35 mJ/m². Nevertheless, we filled in the rest of the curve using our ML prediction for completeness. The ML models predict that an increase in the mean SFE leads to a decrease in the critical yield stress for all region sizes. In addition, the models show that the critical yield stress decreases more rapidly at higher values of the mean SFE. At low values of the mean SFE, the increase in the critical yield stress tapers off, with a peak between mean SFE values of 40 mJ/m² and 50 mJ/m².

## 5 Summary and Conclusions

This study demonstrated the capability of ML models to learn the relationship between the yield stress and the variation in the SFE landscape using results from PFDM simulations. The principal component obtained from the SFE grid data using PCA shows the same trend in its relationship to the yield stress as the mean SFE. In addition, by employing PCA, we were able to reduce the dimension of the statistical features to two dimensions, which is necessary for data visualization. Of the three different feature types used to train the ML models, the input types based on statistical descriptors of the SFE variations produced the lowest errors. The ML models can be used as surrogate models for the PFDM simulations at a small fraction of their computational cost.

## Acknowledgment

The authors thank the Balsell’s Foundation for providing the scholarship for Pau Cutrina Vilalta’s undergraduate thesis at the University of Colorado, Colorado Springs. The authors also thank the Women in Mathematics of Materials (WIMM) organization and the Michigan Center for Applied and Interdisciplinary Mathematics at the University of Michigan for providing travel and lodging funds for collaborative work on this project.

## Conflict of Interest

There are no conflicts of interest.

## Data Availability Statement

The raw data required to reproduce these findings are available to download from a GitHub repository. The processed data required to reproduce these findings cannot be shared at this time due to technical limitations.

## References

1. Wu, Y., Si, J., Lin, D., Wang, T., Wang, W. Y., Wang, Y., Liu, Z., and Hui, X., 2018, “Phase Stability and Mechanical Properties of AlHfNbTiZr High-Entropy Alloys,” Mater. Sci. Eng. A, 724, pp. 249–259. 10.1016/j.msea.2018.03.071
2. Senkov, O. N., Wilks, G. B., Scott, J. M., and Miracle, D. B., 2011, “Mechanical Properties of Nb25Mo25Ta25W25 and V20Nb20Mo20Ta20W20 Refractory High Entropy Alloys,” Intermetallics, 19(5), pp. 698–706. 10.1016/j.intermet.2011.01.004
3. Gali, A., and George, E. P., 2013, “Tensile Properties of High- and Medium-Entropy Alloys,” Intermetallics, 39(8), pp. 74–78. 10.1016/j.intermet.2013.03.018
4. Yifan, Y., Wang, Q., Lu, J., Liu, C. T., and Yang, Y., 2015, “High-Entropy Alloy: Challenges and Prospects,” Mater. Today, 19(12), pp. 349–362.
5. Miracle, D. B., Miller, J. D., Senkov, O. N., Woodward, C., Uchic, M. D., and Tiley, J., 2014, “Exploration and Development of High Entropy Alloys for Structural Applications,” Entropy, 16(1), pp. 494–525. 10.3390/e16010494
6. Yeh, J.-W., 2006, “Recent Progress in High-Entropy Alloys,” Eur. J. Control, 31(6), pp. 633–648.
7. Gludovatz, B., Hohenwarter, A., Catoor, D., Chang, E. H., George, E. P., and Ritchie, R. O., 2014, “A Fracture-Resistant High-Entropy Alloy for Cryogenic Applications,” Science, 345(6201), pp. 1153–1158.
8. Zeng, Y., Cai, X., and Koslowski, M., 2019, “Effects of the Stacking Fault Energy Fluctuations on the Strengthening of Alloys,” Acta Mater., 164, pp. 1–11.
9. Rao, S. I., Woodward, C., Parthasarathy, T. A., and Senkov, O., 2017, “Atomistic Simulations of Dislocation Behavior in a Model FCC Multicomponent Concentrated Solid Solution Alloy,” Acta Mater., 134, pp. 188–194.
10. Varvenne, C., Luque, A., and Curtin, W. A., 2016, “Theory of Strengthening in FCC High Entropy Alloys,” Acta Mater., 118, pp. 164–176. 10.1016/j.actamat.2016.07.040
11. Hunter, A., Beyerlein, I. J., Germann, T. C., and Koslowski, M., 2011, “Influence of the Stacking Fault Energy Surface on Partial Dislocations in FCC Metals With a Three-Dimensional Phase Field Dislocations Dynamics Model,” Phys. Rev. B, 84(14), p. 144108. 10.1103/PhysRevB.84.144108
12. Cao, L., Hunter, A., Beyerlein, I. J., and Koslowski, M., 2015, “The Role of Partial Mediated Slip During Quasi-Static Deformation of 3D Nanocrystalline Metals,” J. Mech. Phys. Solids, 78, pp. 415–426. 10.1016/j.jmps.2015.02.019
13. Hunter, A., Kavuri, H., and Koslowski, M., 2010, “A Continuum Plasticity Model That Accounts for Hardening and Size Effects in Thin Films,” Modell. Simul. Mater. Sci. Eng., 18(4), p. 045012. 10.1088/0965-0393/18/4/045012
14. Lee, D. W., Kim, H., Strachan, A., and Koslowski, M., 2011, “Effect of Core Energy on Mobility in a Continuum Dislocation Model,” Phys. Rev. B, 83(10), p. 104101. 10.1103/PhysRevB.83.104101
15. Mura, T., 2013, Micromechanics of Defects in Solids, New York.
16. Koslowski, M., Cuitiño, A., and Ortiz, M., 2002, “A Phase-Field Theory of Dislocations Dynamics, Strain Hardening and Hysteresis in Ductile Single Crystals,” J. Mech. Phys. Solids, 50(12), pp. 2597–2635. 10.1016/S0022-5096(02)00037-6
17. Hunter, A., Zhang, R., Beyerlein, I. J., Germann, T. C., and Koslowski, M., 2013, “Dependence of Equilibrium Stacking Fault Width in FCC Metals on the γ-Surface,” Modell. Simul. Mater. Sci. Eng., 21(2), p. 025015. 10.1088/0965-0393/21/2/025015
18. Douin, J., Pettinari-Sturmel, F., and Coujou, A., 2007, “Dissociated Dislocations in Confined Plasticity,” Acta Mater., 55(19), pp. 6453–6458. 10.1016/j.actamat.2007.08.006
19. Martinez, E., Marian, J., Arsenlis, A., Victoria, M. P., and J. M., 2008, “Atomistically Informed Dislocation Dynamics in FCC Crystals,” J. Mech. Phys. Solids, 56(3), pp. 869–895. 10.1016/j.jmps.2007.06.014
20. Hirth, J. P., and Lothe, J., 1968, Theory of Dislocations, McGraw-Hill, New York.
21. Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., and Duchesnay, E., 2011, “Scikit-Learn: Machine Learning in Python,” J. Mach. Learn. Res., 12(85), pp. 2825–2830.
22. Zhang, Z., 2016, “Introduction to Machine Learning: K-Nearest Neighbors,” Ann. Trans. Med., 4(11).
23. Box, G. E. P., and Tiao, G. C., 2011, Bayesian Inference in Statistical Analysis, Wiley Classics Library, New York, pp. 1–608.
24. Quinlan, J. R., 1986, “Induction of Decision Trees,” Mach. Learn., 1(1), pp. 81–106.
25. Mason, L., Baxter, J., Bartlett, P., and Frean, M., 1999, Proceedings of the 12th International Conference on Neural Information Processing Systems, NIPS’99, Cambridge, MA, Oct. 30–Nov. 2, pp. 512–518.
26. Vovk, V., 2013, Kernel Ridge Regression, B. Schoelkopf, Z. Luo, and V. Vovk, eds., Springer, Berlin/Heidelberg, pp. 105–116.
27. Rasmussen, C. E., and Williams, C. K. I., 2005, Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning), The MIT Press, Cambridge, MA.
28. Abdi, H., and Williams, L. J., 2010, “Principal Component Analysis,” Wiley Interdiscip. Rev. Comput. Stat., 2(4), pp. 433–459. 10.1002/wics.101
29. Nagelkerke, N. J. D., 1991, “A Note on a General Definition of the Coefficient of Determination,” Biometrika, 78(3), pp. 691–692. 10.1093/biomet/78.3.691
30. Kohavi, R., 1995, “A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection,” Proceedings of the 14th International Joint Conference on Artificial Intelligence, Vol. 2, IJCAI’95, San Francisco, CA, Dec. 10–14, pp. 1137–1143.