Abstract

Designing a new heterostructure electrode has many challenges associated with interface engineering. Demanding simulation resources and the lack of heterostructure databases continue to be barriers to understanding the chemistry and mechanics of complex interfaces through simulations. Mixed-dimensional heterostructures composed of two-dimensional (2D) and three-dimensional (3D) materials are regarded as next-generation materials for engineered devices due to their tunable properties. The present work computationally investigates the interface between 2D graphene and 3D tin (Sn) systems with the density functional theory (DFT) method. This computationally demanding simulation data is further used to develop machine learning (ML)-based potential energy surfaces (PES). The approach to developing PES for complex interface systems in light of limited data, and the transferability of such models, is discussed. To develop PES for graphene-tin interface systems, high-dimensional neural networks (HDNN) are used that rely on atom-centered symmetry functions to represent structural information. The HDNN are modified to train on the total energies of the interface system rather than atomic energies. The performance of the modified HDNN trained on 5789 interface structures of graphene|Sn is tested on new interfaces of the same material pair with varying levels of structural deviation from the training dataset. The root-mean-squared error (RMSE) for test interfaces falls in the range of 0.01–0.45 eV/atom, depending on the structural deviation from the reference training dataset. By avoiding incorrect decomposition of total energy into atomic energies, the modified HDNN model is shown to obtain higher accuracy and transferability despite a limited dataset. Improved accuracy in the ML-based modeling approach promises a cost-effective means of designing interfaces in heterostructure energy storage systems with higher cycle life and stability.

1 Introduction

In recent years, intricately engineered device designs have frequently been put forth under the broad umbrella of advanced technology to target sustainability and enhanced efficiency. The complexity of such device designs calls for the discovery of new multicomponent systems or new materials altogether. The concept of materials engineering has been applied broadly in multicomponent systems to enhance functionality based on the structure-function relationship [1]. For instance, in energy storage alone, electrode designs such as core-shell, layered, or coated architectures are continuously developed to enhance electrode capacity and cycle life [2–6]. Such arrangements of different materials in engineered devices create unique interfaces whose equilibrium and properties remain of intense research interest.

Two-dimensional (2D) material-based heterostructures (2D + nD, n = 0,1,2,3) have attracted enormous interest in various fields [7–9], including batteries [10,11]. The discovery of graphene in 2004 [12] opened a new space for mixed-dimensional materials where heterostructures based on graphene-like 2D materials layered with three-dimensional (3D) bulk materials have exhibited intriguing properties [7,13]. The mechanical, electrical, optical, magnetic, and thermal properties exhibited by these heterostructures have applications in batteries, optics, solar technologies, and electronics [14–22].

Arrangement of 2D–3D materials can be achieved either during epitaxial growth of one material over the other or with post-growth modification techniques [13]. Despite the development of multiple synthesis modes, complete characterization of these complex nanoscale interfaces remains a challenge. Scanning tunneling microscopy (STM) and atomic force microscopy (AFM) have enabled studying the interface between ordered 2D material heterostructures [23]. Yet, their scope does not extend to polymorphing interfaces where the 3D bulk undergoes lattice distortions and phase transitions to stabilize itself over a 2D substrate.

A fundamental understanding of interfacial properties in these systems is critical for sustaining the desired electro-chemo-mechanical behavior. Quantitative computational modeling efforts with ab-initio methods such as density functional theory (DFT) indicate the influence of structural polymorphism in the 3D bulk on interfacial properties such as interfacial strength, electron transfer, and conductivity [24,25]. However, studies in this domain remain underexplored due to the high computational cost of ab-initio methods. Moreover, molecular simulations of interfacial properties need reliable interatomic potentials, which are available for only a few elements. Due to these restrictions, there has been no significant advancement in interface studies at the atomic/molecular level. These limitations are expected to be overcome with the emergence of artificial intelligence (AI)-based models in materials science.

Machine learning (ML)-based potential energy surfaces (PES) can describe complex systems at low computational cost and with close to first-principles accuracy. These methods rely on a large amount of DFT data for materials (structures, energies, and forces) to efficiently explore the chemical space with respect to the target properties during training. Atomic structures of materials are represented by appropriate descriptors and fed to the neural network (NN) algorithm to generate a PES that is invariant to translation, rotation, and permutation of homonuclear atoms [26,27]. Such PES are independent of any physical parameters and approximations, unlike empirical force fields, and hence are more accurate if the descriptors describe the atomic local environments efficiently. Several ML techniques have been employed for PES development: linear regression [28,29], Gaussian approximation [30,31], high-dimensional neural networks (HDNN) [32], and graph neural networks (GNN) [33,34]. Of particular interest are HDNN using atom-centered symmetry functions (ACSF or SF) as input descriptors [26], which have been successful for a wide range of materials due to their generality [35–38]. Consequently, considerable effort has been spent on refining the original HDNN introduced by Behler and Parrinello [32] to develop more efficient approaches [39–41].

The development of ML-based PES is still in its nascent stage and has barely explored the materials domain beyond small organic molecules [42–44], single-component condensed systems [32,45,46], bicomponent systems [47–49], and crystalline ordered motifs [50]. ML research in heterostructure territory is mostly limited to searching for and predicting new ordered heterostructures with targeted properties [51–53]. Recent attempts to model 3D–3D interface grain boundaries demonstrate the scope of ML models in capturing mechanics at the nanoscale when trained on high-accuracy data [54]. However, the primary bottleneck for scaling ML to complex materials is the tradeoff between the quantity and quality of training data. As the complexity of the problem increases, acquiring ample data either computationally or empirically becomes a challenge.

The present work explores a mixed-dimensional heterostructure (2D–3D) interface formed by graphene (Gr) and tin (Sn), focusing on its lattice distortions, by utilizing DFT. Geometry optimization of Sn bulk over Gr results in lattice rearrangements in the interfacial Sn atomic layers. The extensive computation undertaken to study these interfaces with DFT is further used to develop an ML-based PES for Gr and Sn interfaces. Here, the development of ML models could reinforce the atomic-level understanding of these complex interfaces. This is one of the first attempts at modeling complex polymorphing 2D–3D interfaces with ML. We use a modified approach to the HDNN algorithm to develop a PES for Gr|Sn interfaces that exhibits good transferability to new Gr|Sn interface systems despite limited training data. The design and utilization of ML-based potentials can be extended to expedite interface simulations in the future. Their scope for overcoming existing battery electrode design issues is presented as a comprehensive discussion in the last section.

2 Methodology and Computational Details

2.1 High-Dimensional Neural Networks.

The present work uses the second-generation HDNN generalized by Behler and Parrinello [32]. The limitations of first-generation feedforward neural networks in addressing the full dimensionality of large condensed-phase systems are overcome by assuming that the total energy of the system Etotal can be decomposed into energy contributions from each individual atom (Ei) based on its local chemical environment
E_{\mathrm{total}} = \sum_{i=1}^{N} E_{i}     (1)

A large part of this assumption relies on the accuracy of describing atomic interactions in the local chemical neighborhoods. Appropriate descriptors that can convert Cartesian coordinates into numeric vectors describing atomic interactions with translational, rotational, and permutational invariance are critical for PES development. The descriptor defines the chemical environment of each atom within a specific cutoff radius (Rc). A large Rc value ensures that all energetically relevant interactions such as covalent, electrostatic, dispersion, and van der Waals interactions are included. Once the descriptor obtains fingerprints of the atomic local space, separate feedforward neural networks are used for each atom in the system to express the atomic energy contributions Ei. These Ei are then added to yield the total energy of the system Etotal.

The high-dimensional neural network model in the present work (Fig. 1(b)) is a modified version of the second-generation HDNN for bicomponent systems (also called BPNN, for Behler-Parrinello Neural Network, Fig. 1(a)). Figure 1 differentiates both HDNN models for a system of six atoms, three Sn and three C. In the BPNN model, chemical species are separated by a distinct set of weights. Atomic input features defined by descriptors (Gi) are fed to atomic neural networks (ann), and two sets of ann weights are fitted corresponding to the two chemical species in the system. Weights corresponding to Sn atoms (set-a, red-ann) are identical, as are the weights corresponding to C atoms (set-b, yellow-ann). This ensures the invariance of the total energy against interchanging two identical atoms within set-a and set-b. In addition, this permits easy size extrapolation of the model: if another atom is added to the system, an additional ann corresponding to its species can be appended to the architecture and added to the total energy expression (represented in Eq. (1)). Complete details of the second-generation HDNN (BPNN) for multicomponent systems can be found in Ref. [55]. In contrast, the modified HDNN in Fig. 1(b) identifies chemical species with an added input feature rather than relying on separately trained weights. Thus, the weights of all ann (set-k) are trained to be identical for the Sn–C system. All ann architectures and weights are suited to the bicomponent Sn–C system rather than an individual species. This approach likewise permits easy system size extrapolation.
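The species-as-an-input-feature idea of Fig. 1(b) can be condensed into a short code sketch. The snippet below is a minimal illustration in PyTorch, not the implementation used in this work; the class name is hypothetical, and the hidden-layer widths simply mirror the 100–50–10 ann architecture detailed later in Sec. 3.2.

import torch
import torch.nn as nn

class SharedAtomicNet(nn.Module):
    # Sketch of the modified HDNN of Fig. 1(b): a single atomic network (set-k)
    # is applied to every atom of the Sn-C system, and the species enters only
    # through the input feature vector (atomic number appended to the SF vector).
    def __init__(self, n_features, hidden=(100, 50, 10)):
        super().__init__()
        layers, prev = [], n_features
        for width in hidden:
            layers += [nn.Linear(prev, width), nn.Tanh()]
            prev = width
        layers.append(nn.Linear(prev, 1))   # atomic energy contribution E_i
        self.ann = nn.Sequential(*layers)

    def forward(self, g):
        # g: (n_atoms, n_features) descriptor matrix of one interface structure
        e_i = self.ann(g).squeeze(-1)       # one E_i per atom, identical weights
        return e_i.sum()                    # E_total = sum of E_i, as in Eq. (1)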

Fig. 1
Comparative schematics of HDNN for bicomponent (Sn|C) system: (a) HDNN by Behler and Parrinello (BPNN) for bicomponent systems where weights and architecture of atomic neural networks (ann) are the same for single chemical species. Red-ann in set-a corresponds to Sn atoms and yellow-ann in set-b corresponds to C atoms and (b) Modified HDNN in the present study for bicomponent systems. Weights and architecture of all atomic neural networks (ann) are the same and correspond to the Sn|C system rather than single species. Atomic species are differentiated by the added feature of atomic number.

2.2 Atom-Centered Symmetry Functions.

There have been several attempts to use Cartesian coordinates as structural inputs for ML-based PES [39,56]. These are recognized to be a poor choice because Cartesian coordinates are not independent of molecular translation and rotation, and since neural networks are mathematical fitting methods, the output is sensitive to the absolute values of the input features. To overcome these limitations in describing complex chemical structures to HDNN, Behler and Parrinello [32] introduced ACSF, which describe the chemical neighborhood of each atom with the help of radial and angular symmetry functions. Two types of ACSF are commonly used, defining the radial (G2) and angular (G4) information of the central atom's neighborhood within the sphere defined by the cutoff radius (Rc). The cutoff function for all the neighboring atoms within Rc is defined as
f_c(R_{ij}) = \begin{cases} 0.5\left[\cos\left(\dfrac{\pi R_{ij}}{R_c}\right) + 1\right], & R_{ij} \le R_c \\ 0, & R_{ij} > R_c \end{cases}     (2)
where Rij is the distance between the central atom i and its neighboring atom j. fc(Rij) is a continuous and differentiable function whose value goes to 0 when Rij > Rc. The radial and angular SF for the central atom i are defined with the help of the cutoff function as two-body and three-body sums
G_i^{2} = \sum_{j \ne i} e^{-\eta \left(R_{ij} - R_s\right)^{2}} \, f_c(R_{ij})     (3)

G_i^{4} = 2^{1-\zeta} \sum_{j \ne i} \sum_{k \ne i,j} \left(1 + \lambda \cos\theta_{ijk}\right)^{\zeta} \, e^{-\eta \left(R_{ij}^{2} + R_{ik}^{2} + R_{jk}^{2}\right)} \, f_c(R_{ij}) \, f_c(R_{ik}) \, f_c(R_{jk})     (4)

Here, G_i^2 is a sum of Gaussians multiplied by the cutoff function. The width and the center of the Gaussian are defined by the parameters η and Rs. A non-zero Rs value shifts the center of the Gaussian away from the reference atom; therefore, it is preferably set to 0. The parameter η is a Gaussian exponent responsible for indicating reduced interaction strength with increasing distance between the two atoms. The parameters ζ and λ in the function G_i^4 control the angular resolution and the cosine function, respectively. λ takes a value of either +1 or −1, inverting the cosine function maxima from θijk = 0 deg to θijk = 180 deg [26]. The most preferred value for ζ is 1, as it provides sufficient coverage centered at 0 deg (when λ = 1). Higher values increase angular resolution close to the center at a reduced coverage cost [57]. Multiple parameter sets for each symmetry function (SF) type are needed to cover different portions of the chemical environment. Values of these parameters define the high-dimensional input vector representing the local environment of each atom in the material system. It is advisable that 100–150 Gi be used for a bicomponent system, such as the interface systems, to capture the full dimensionality of the system. Redundancy of the information that can arise from a large number of Gi has been recognized to be unproblematic for HDNN [26]. SF are uniquely beneficial as input vectors for HDNN because the input vector size is independent of the actual number of neighboring atoms within the set cutoff radius Rc [36]. We used the DScribe package in Python to calculate SF for the interface systems with the parameters defined in Table 1 [58].

Table 1

Parameters used to compute the ACSF in the study

Descriptors   Parameters   Values
G2            Rc (Å)       8.9
              Rs (Å)       0
              η (Å−1)      0.003214, 0.035711, 0.071421, 0.124987, 0.214264, 0.357106, 0.714213, 1.428426
G4            Rc (Å)       8.9
              λ            −1, 1
              ζ            1, 2, 4
              η (Å−1)      0.003214, 0.035711, 0.071421, 0.124987, 0.214264, 0.357106, 0.714213, 1.428426

Note: Several values of Rc were tested, and the value of 8.9 Å was found to give optimum results for the presented 2D–3D interface systems.

We use a range of η values to capture the full dimensionality of the structures. The presented parameter set is chosen based on benchmarking studies of descriptors for bicomponent bulk systems with an elemental makeup similar to our structures [35,59,60]. λ takes both values, +1 and −1, corresponding to both centers in the SF set. To attain high angular resolution as well as complete coverage for the intended interface systems, we use higher values of ζ in addition to 1 (ζ = 1, 2, 4). This SF set yielded the best results in our comparative evaluation and served as the foundation for further assessment of our model. As described in Sec. 2.1, we preferred to use a large Rc for sufficient atomic interaction coverage. Here, Rc = 8.9 Å is found to be sufficient for optimum coverage of each atom's local environment (validation presented in supplemental information section I available in the Supplemental Materials on the ASME Digital Collection). With the SF parameter set in Table 1, the chemical environment of each atom is represented by 162 input features. It is important to note that the DScribe package used for the conversion of Cartesian coordinates to SF considers the atomic numbers (Z) of the chemical species in the environment of a central atom by appending the value to the SF (G2 and G4). However, it does not consider the atomic number of the central species in any way. To overcome this drawback, an additional input feature, the atom's own atomic number, was appended to the SF input vector of each atom. Thus, each atom is represented by 163 input features in our study.
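As an illustration of how this descriptor set can be assembled, the snippet below sketches the ACSF computation with DScribe for an ASE structure and appends the atom's own atomic number as the 163rd feature. It is a sketch under the assumption of DScribe's ACSF interface (the cutoff keyword is r_cut or rcut depending on the release), and the structure file name is hypothetical.

import numpy as np
from ase.io import read
from dscribe.descriptors import ACSF

etas = [0.003214, 0.035711, 0.071421, 0.124987,
        0.214264, 0.357106, 0.714213, 1.428426]

acsf = ACSF(
    species=["Sn", "C"],
    r_cut=8.9,                                    # "rcut" in older DScribe releases
    g2_params=[[eta, 0.0] for eta in etas],       # (eta, Rs) pairs with Rs = 0
    g4_params=[[eta, zeta, lam] for eta in etas
               for zeta in (1, 2, 4) for lam in (-1, 1)],
)

structure = read("gr_sn_interface.vasp")          # hypothetical interface geometry
g = acsf.create(structure)                        # shape: (n_atoms, 162)

# Append each atom's own atomic number, giving 163 input features per atom.
z = structure.get_atomic_numbers().reshape(-1, 1)
features = np.hstack([g, z])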

2.3 Density Functional Theory Computation Details.

Coherent interface models were created between ordered single-layer Gr and Sn allotropes with an optimum interfacial gap of 3.5 Å. Gr contains 60 sp2-hybridized carbon atoms in all the interface structures, whereas the Sn bulk size varies from 16 to 64 atoms. Sn is known for having many energetically similar allotropes, with alpha (α-Sn) and beta (β-Sn) being the prominent ones. The interfaces are set in the periodic xy plane across the Sn (100) Miller plane. These interface structures were modeled with a vacuum of 15 Å in the z dimension to circumvent periodic influences, followed by DFT optimization to obtain relaxed strain-free interface configurations. All crystalline Sn bulks were derived from the Materials Project database [61]. Amorphous Sn was created by computational quenching of α-Sn64 following the methodology discussed in our earlier works [24,62,63]. Both Gr and Sn structures were DFT optimized individually before interfacing. All DFT calculations are done using the Vienna Ab initio Simulation Package (VASP) [64]. Projector-augmented-wave (PAW) potentials are used to mimic inert core electrons, while the valence electrons are represented by a plane-wave basis set [65,66]. The plane-wave energy cutoff and convergence tolerance for all relaxations are 550 eV and 10−6 eV, respectively. The generalized gradient approximation (GGA) with the Perdew–Burke–Ernzerhof (PBE) exchange-correlation functional is used [67], with a vdW correction included to incorporate the effect of weak long-range van der Waals (vdW) forces [68]. The energy minimizations are done by the conjugate gradient method until the Hellmann-Feynman forces are less than 0.02 eV/Å. Considering the vacuum slab structure of all interfaces, a gamma-centered 4 × 4 × 1 k-mesh is used for good convergence.
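For orientation, the relaxation settings listed above map onto an input roughly as sketched below with the ASE interface to VASP. This is only a sketch under stated assumptions: the geometry file name and the maximum number of ionic steps are placeholders, and the exact vdW flag (IVDW) depends on the correction scheme of Ref. [68].

from ase.io import read
from ase.calculators.vasp import Vasp

slab = read("POSCAR_gr_sn")          # hypothetical Gr|Sn slab with 15 A vacuum along z

calc = Vasp(
    xc="pbe",                        # GGA-PBE exchange-correlation functional
    encut=550,                       # plane-wave cutoff (eV)
    ediff=1e-6,                      # electronic convergence tolerance (eV)
    ibrion=2,                        # conjugate-gradient ionic relaxation
    nsw=1000,                        # upper bound on ionic steps (placeholder)
    ediffg=-0.02,                    # relax until forces < 0.02 eV/A
    kpts=(4, 4, 1),                  # gamma-centered mesh for the vacuum slab
    gamma=True,
    ivdw=12,                         # vdW correction flag; scheme follows Ref. [68]
)
slab.calc = calc
energy = slab.get_potential_energy() # triggers the VASP relaxation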

2.4 Training and Testing Dataset.

Data for training and testing are systematically selected to meet the primary objective of the study, which is developing a PES for Gr|Sn interfaces with the least computation necessary and the best possible transferability. While material databases are a reliable source of structural data for most materials in learning-based workflows, they are significantly lacking in the domain of interfacial configurations. Tracing the equilibration of interfaces where prominent lattice distortions are possible proved to be computationally expensive. It took approximately 700 h with 72 CPU cores to finish the complete relaxation of a single Gr|α-Sn interface system. To enable 2D–3D interface-based structural analysis in the future, there is a need to develop cheaper and faster machine-learning-centered workflows.

We optimized multiple unequilibrated Sn and Gr interface structures and divided them into training and testing datasets. The training dataset consists of five Gr|Sn interfaces: crystalline α-Sn (32 and 64), β-Sn (16 and 18), and amorphous Sn64 interfaced with Gr (shown in supplemental information section II available in the Supplemental Materials on the ASME Digital Collection). The training dataset is built from the convergence iterations of the DFT simulations, which cover the trajectory of the minimum search for the five interfaces starting from the initial non-equilibrated structures. This scheme ensured that non-equilibrated and intermediate interface structures were as much a part of the learning process as the relaxed structures. The intermediate DFT iterations of the five Gr|Sn interface structures provided 5789 structures with their reference energies for training.
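A sketch of how such a dataset can be collected from the relaxation trajectories is shown below, assuming VASP's vasprun.xml output is retained for each interface; the directory names are hypothetical.

from ase.io import read

runs = ["gr_alpha_Sn32/vasprun.xml", "gr_alpha_Sn64/vasprun.xml",
        "gr_beta_Sn16/vasprun.xml", "gr_beta_Sn18/vasprun.xml",
        "gr_amorphous_Sn64/vasprun.xml"]

structures, energies = [], []
for run in runs:
    for atoms in read(run, index=":"):        # every ionic step of the relaxation
        structures.append(atoms)
        energies.append(atoms.get_potential_energy())

# In the present work this procedure yields 5789 reference structures and energies.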

To analyze the performance of our model and test the transferability of the PES in a sequential order of unfamiliarity, we use four carefully chosen test interface structures (Fig. 2). The first test structure (T1) is the same Gr|β-Sn16 interface used in the training dataset, except that the orientation of the Sn bulk is slightly shifted over the Gr surface in T1. It is an example of a known interface structure with an unknown orientation. The second test structure (T2) is again a familiar interface with an increased Sn bulk size (Gr|β-Sn16 → Gr|β-Sn32). The objective of T2 is to test the system size extrapolation capabilities of the PES (see Fig. S2 available in the Supplemental Materials on the ASME Digital Collection).

Fig. 2
Test interface structures (T1–T4) labeled in the order of unfamiliarity

In contrast to T1 and T2, the third test structure (T3) is a completely unfamiliar interface: a new Sn bulk (mp-949028 from the Materials Project database) is interfaced with Gr. The fourth and final test interface (T4) is derived from T3 by creating divacancy defects in the interfacing Gr. This change adds the complexities of defects and surface adsorption to the interface structure, which are not present in the earlier test interfaces. Differences between the test structures and the training dataset are summarized in Table 2. The initial test interface structures were created like the training interfaces and subjected to DFT optimization. Since the current machine learning scheme does not include automated equilibration, the ability of the PES to predict energies of intermediate configurations as structures search for the global minimum is assessed by predicting energies of test interfaces between the initial and final configurations. The variations in the test interfaces during DFT optimization can be noted in the iteration snapshots presented in supplemental information section III available in the Supplemental Materials on the ASME Digital Collection. While minimal Sn alignment changes are observed between the initial and final T2 and T3 structures, major structural transformations and surface defects are seen in T1 and T4, respectively.

Table 2

Notable differences in test structures from training dataset

Notable features of test structures   T1   T2   T3   T4
Familiar interface                    O    O    ×    ×
Familiar interface orientation        ×    O    ×    ×
Similar Sn bulk size                  O    ×    ×    ×
Familiar Sn allotrope bulk            O    O    ×    ×
Familiar Gr substrate                 O    O    O    ×

3 Results and Discussion

3.1 Phase Changes at Graphene|Sn Interface.

This section discusses the atomic specifications of the Gr|Sn interface in the optimized structures, which predominantly set these interfaces apart from Gr-based interfaces reported previously [7,13]. Discussion of interface phenomena is important for designing interfaces and controlling heterostructure properties in applied technologies. The Gr|Sn interface systems are optimized by DFT to obtain relaxed strain-free interface configurations. The presented interfaces structurally resemble interfaces assembled in devices post-growth rather than interfaces originating during direct epitaxial growth of Gr on a substrate. Final interfacial configurations of Gr|Sn are depicted in Fig. 3 for the two prominent Sn allotropes, α-Sn and β-Sn. α-Sn is a diamond cubic crystal and β-Sn is a body-centered tetragonal crystal (Fig. 3(b)); these are the two solid allotropes of Sn commonly in use. At temperatures below room temperature (286 K), α-Sn is the stable phase, which transitions to the β configuration rather quickly as temperature rises [69]. This temperature sensitivity underscores the significance of the solid-state α ↔ β Sn transition for practical applications. The sensitivity is elevated in interfacial conditions with a large lattice mismatch.

Fig. 3
DFT relaxed Graphene|Sn interface systems: (a) Side view of relaxed graphene|α-Sn64 interface system. Phase transformations of α-Sn to β-Sn noted in the Sn surface layers due to the presence of graphene substrate, (b) Unit cell representations of α-Sn and β-Sn with lattice constants 4.7 Å and 4.48 Å, derived from materials project database (mp-117 and mp-84) and used for construction of Sn bulks, (c) Side view of relaxed Graphene|β-Sn32 interface system, (d) Side view of relaxed Graphene|α-Sn32 interface system. Phase transformation of α-Sn to β-Sn noted in the entire Sn bulk with modified lattice constant of 4.52 Å, and (e) Side view of relaxed Graphene|β-Sn16 interface with Sn rearranged over Gr surface as a single atomic layer of modified β′-Sn.

Differences in materials and lattice constants imply strained conditions in the buffer layer of Sn at the Gr|Sn interface, which condition the Sn bulk toward a possible phase change. Consequently, the observed lattice constant of interfacial Sn (c = 4.5 Å) is different from that of the rest of the α-Sn bulk (c = 4.7 Å) in Fig. 3(a). This indicates a possible phase transformation from α-Sn to β-Sn in the buffer layer at the Gr|α-Sn interface. However, these structural transformations of Sn are limited to a few layers at the surface and do not propagate to the central regions of the bulk, where the α conformation is retained (Fig. 4). Surface relaxation of an independent α-Sn slab did not show any distortions, which rules out the possibility that this structural reconstruction at the Gr|α-Sn interface is a mere surface defect. We repeated the simulation with increased vacuum in the z dimension to ensure the structural distortions in the top layer are not due to periodic influences (shown in supplemental information section IV available in the Supplemental Materials on the ASME Digital Collection). Our observations indicate that this surface hardening of α-Sn is nucleated by the presence of the Gr interface. In contrast to the Gr|α-Sn interface, no structural distortions are noted in the relaxed Gr|β-Sn in Fig. 3(c), indicating the preference of the Gr interface toward β-Sn.

Fig. 4
Intermediate graphene|α-Sn64 interface structures between initial and equilibrium interface configurations. Structural configuration in the first 250 DFT iterations is presented depicting quick structural transformations in early DFT stages. Simulation took approximately 1100 iterations to completely optimize. No major structural rearrangements were noted in the later iterations.

Structural changes in Sn at the Gr interface are also significantly impacted by the Sn bulk size. Figures 3(d) and 3(e) exhibit Gr|Sn interfaces with smaller α-Sn and β-Sn bulks. Complete distortion of the α-Sn32 bulk is noted in Fig. 3(d) with an increased density (see supporting information section V available in the Supplemental Materials on the ASME Digital Collection). Likewise, β-Sn16 rearranged over the Gr surface as a single atomic layer in Fig. 3(e) with nearest atomic distances of 3.15 Å. A drop in the Sn bulk size reduces the dimensions of the Sn bulk and brings all the Sn atoms to the surface, much like a 2D material. These observations in small Sn crystals fall in line with prior experimental evidence of Sn nanocrystals becoming denser over graphene surfaces [70]. The relative differences in the material surfaces and the charge analysis of the Gr|Sn interfaces strongly indicate weak van der Waals forces as the foundation of the formed interfaces. Charge analysis was performed on these interfaces using the Bader charge scripts by the Henkelman group [71]. The net electron exchange across the Gr|Sn interface is less than 1 e− for all interfaces, denoting negligible covalent interaction.

Sn is a well-known high-capacity anode for lithium-ion batteries (LIB) and sodium-ion batteries (NIB), with prominent shortcomings arising from phase transitions (β ↔ α) and volumetric strains. The presence of a Gr substrate for a Sn anode has been shown to scale down the volume expansion associated with mechanical failures [62,70]. Our analysis suggests it can also minimize frequent β ↔ α phase transitions due to its preference for β-Sn. With our computational study, we attempt to closely understand the swift structural transformations in Gr|Sn interfaces. However, there is scope for further experimental X-ray diffraction analysis of Sn crystals interfaced with Gr, which could validate the presented observations.

3.2 Model Performance.

HDNN are traditionally trained on reference energies per atom. However, electronic structure simulations do not provide actual per-atom energies for reference, and the energy per atom is deduced from the total energy of the system [26]. Atomic energies are obtained by dividing the total energy of the system by the number of atoms. This scheme gives equivalent atomic energies for all the atoms in a system. Such an approximation is suitable for a single-component condensed system where atoms are present in a single phase. However, as observed in Sec. 3.1, the Gr|Sn interface structures have multiple phases with strained interfacial Sn atoms. Assuming interfacial Sn atoms have higher atomic energies than sub-interfacial Sn atoms, training distinct atomic chemical environments in interface systems on uniform atomic energies is not considered a suitable approach.

To validate this assumption, we use two different training approaches: loss calculation with atomic energies (Ei) and loss calculation with total system energies (Etotal). In the first approach, the model is trained to learn from reference atomic energies derived from the total system energies, as per earlier reports [26]. We use a uniform ann architecture throughout the study. Each ann consists of three hidden layers having 100–50–10 nodes. The input feature count is 163, which defines an individual atom's species and chemical environment, as described in Sec. 2.2. The hyperbolic tangent (tanh) activation function is used in the hidden layers, while the output layer giving the atomic energy contribution is linear (ann: 163-100-50-10-1). Weights and biases are optimized through a supervised learning process using the Adam optimizer [72] with a learning rate of 0.00001. The loss function after each epoch was determined by the mean squared error of atomic energies from the reference DFT energies
\mathrm{Loss} = \frac{1}{\mathrm{Size}} \sum_{n=1}^{\mathrm{Size}} \left(E_{n}^{\mathrm{predict}} - E_{n}^{\mathrm{DFT}}\right)^{2}     (5)

The batch size for the training is kept at 10 (Size), and the accuracy of the energy prediction after each epoch is measured in terms of the root-mean-square error (RMSE). Models are trained until an RMSE of 0.002 eV/atom is achieved, which amounted to 5000 epochs. The performance of the trained model is validated (validation split = 10%) and then tested on test structures T1 and T2. The performance of the trained model on the 10% validation split results in an RMSE of 0.0042 eV/atom. This result is comparable to earlier reported deep learning studies on condensed-phase systems [47,73], indicating that this approach effectively develops a PES for interfaces if the target interfaces are similar to the training data. Next, the trained PES is used to predict atomic energies of the test structures T1 and T2. The results are summarized in Table 3. While the model performs well on the validation split (test data randomly separated out of the training data), its performance for new interfaces is poor, indicating overfitting. A potential reason for this performance is the training process, in which different atomic environments in a single system are assigned the same atomic energies. This renders the model unable to differentiate the atomic energies of atoms with different chemical neighborhoods.
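For concreteness, the first training route can be written as the minimal PyTorch sketch below (an assumed implementation, not the authors' code): the 163-100-50-10-1 tanh ann is fitted with Adam to uniform per-atom reference energies, with the batch-averaged squared error playing the role of Eq. (5) for Size = 10.

import torch
import torch.nn as nn

ann = nn.Sequential(nn.Linear(163, 100), nn.Tanh(),
                    nn.Linear(100, 50), nn.Tanh(),
                    nn.Linear(50, 10), nn.Tanh(),
                    nn.Linear(10, 1))
optimizer = torch.optim.Adam(ann.parameters(), lr=1e-5)

def train_step(g_batch, e_ref_batch):
    # g_batch: (10, 163) descriptors of 10 atoms; e_ref_batch: (10,) reference
    # atomic energies obtained by dividing each structure's E_DFT by its atom count.
    optimizer.zero_grad()
    e_pred = ann(g_batch).squeeze(-1)
    loss = ((e_pred - e_ref_batch) ** 2).mean()   # MSE over the batch, Eq. (5)
    loss.backward()
    optimizer.step()
    return loss.item()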

Table 3

Performance of PES on validation set and test interface structures

Performance      RMSE (eV/atom)
Validation set   0.0042
T1               0.2235
T2               0.9496

The second training approach considers the total energy of the system as the final output of the modified HDNN. In the last layer of the HDNN, the atomic energies Ei from all ann are added to yield Etotal. This required fixing the system size (number of atoms in each system) in the training data at 300. Input features for any system having fewer than 300 atoms are extended by zero padding. The inputs (Gi) to the anns for each interface system are shuffled during each epoch. Gi is a one-dimensional array that encodes information about the central atom species and its local chemical environment; hence, permutations at this stage change neither the atomic energies nor the combined total energy. The loss is calculated from the total energies of the system. This allows the model to assign atomic energies based on the chemical space of each atom. Each nested ann in the modified HDNN has three hidden layers with 100–50–10 nodes. The hyperparameters of the ann (nodes, activation function) remain the same as described before. Weights and biases are optimized through a supervised learning process using the Adam optimizer and a learning rate of 0.00001. The loss function after each epoch was determined by the mean squared error of the predicted total energies from the reference DFT energies as per Eq. (5). The batch size for the training was kept at one. The accuracy of the energy prediction after each epoch is measured in terms of RMSE, and the model is trained on 5789 Gr|Sn interface structures until a total-energy RMSE of 0.2 eV is achieved.
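The total-energy route can be sketched in the same spirit; the snippet below is again an assumed implementation. Zero-padded descriptor slots are masked out of the atomic-energy sum here, which is one way to keep the padding from contributing to Etotal; the original implementation may handle padding differently.

import torch
import torch.nn as nn

MAX_ATOMS = 300                      # all systems are zero padded to this fixed size

ann = nn.Sequential(nn.Linear(163, 100), nn.Tanh(),
                    nn.Linear(100, 50), nn.Tanh(),
                    nn.Linear(50, 10), nn.Tanh(),
                    nn.Linear(10, 1))
optimizer = torch.optim.Adam(ann.parameters(), lr=1e-5)

def train_step(g_system, n_atoms, e_dft_total):
    # g_system: (MAX_ATOMS, 163) descriptors, zero padded beyond n_atoms;
    # e_dft_total: reference DFT total energy of this single interface (batch size 1).
    optimizer.zero_grad()
    e_atomic = ann(g_system).squeeze(-1)          # one E_i per (padded) slot
    mask = torch.zeros(MAX_ATOMS)
    mask[:n_atoms] = 1.0                          # ignore the padded entries
    e_total = (e_atomic * mask).sum()             # modified HDNN output, Etotal
    loss = (e_total - e_dft_total) ** 2           # loss on the total energy
    loss.backward()
    optimizer.step()
    return loss.item()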

Once trained, the new PES is used to predict energies of test structures obtained from T1 (familiar interface, different orientation). Between the unequilibrated and equilibrated T1 structures, there are approximately 260 structural configurations, all of which were used as test data. Figure 5 compares the system energies of the T1 structures obtained from DFT (EDFT) with the energies predicted by the new PES (Epredict). Both system energies match closely, with an RMSE of 0.0901 eV. A slight error is noted for non-equilibrated structures (the first 50 DFT iterations), which further reduces as the structure stabilizes. Because the HDNN is fitted to the total energies of the structure, Epredict closely borders the EDFT values but does not reproduce the exact values, even though the T1 interface is very close to the trained structural configurations.

Fig. 5
Total energies of the test structures T1 predicted by trained HDNN model. Epredict and EDFT are total energies predicted by HDNN and DFT, respectively. The dashed sphere with cutoff radius Rc = 8.9 Å represents the chemical neighborhood that was observed for all the atoms in the system.

3.3 Transferability of Potential Energy Surfaces.

The primary objective of the targeted potentials (PES) is the ability to predict energies of new Gr|Sn interfaces while requiring the least computation necessary during development. The HDNN model trained on structures from five Gr|Sn interfaces has been tested on the new interfaces T1–T4 described in Sec. 2.4. The results are summarized in Table 4. Between the unequilibrated and equilibrated system configurations, there are approximately 260–400 structural configurations for each test structure used for testing. Energies of the new interfaces predicted with the modified HDNN weights have small RMSE values (eV/atom). The RMSE values for T1 and T2 in Table 4 are significantly lower than the RMSE values noted in Table 3. This clearly indicates that a deep neural network model designed for such multiphasic interfaces should train on total energies rather than atomic energies to gather complete system information. The difference noted in energies of the completely unfamiliar interfaces T3 and T4 is also relatively small. In the absence of accurate empirical potentials in the literature, the developed PES demonstrates acceptable performance for new Gr|Sn interfaces constituting structural distortions.
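The eV/atom test metric reported in Table 4 can be computed from the predicted and reference total energies with a short helper; the sketch below assumes the total-energy error is normalized by the atom count of the test system.

import numpy as np

def rmse_per_atom(e_predict, e_dft, n_atoms):
    # e_predict, e_dft: total energies (eV) over the DFT trajectory of one test
    # interface; n_atoms: number of atoms in that system.
    err = (np.asarray(e_predict) - np.asarray(e_dft)) / n_atoms
    return float(np.sqrt(np.mean(err ** 2)))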

Table 4

Performance of PES on unfamiliar test interfaces

Test interface   RMSE (eV/atom)
T1               0.016
T2               0.222
T3               0.360
T4               0.458

3.4 Discussion.

Mathematical models like machine learning or deep learning can have powerful predictive accuracy when trained on large datasets. Data requirements are a major barrier delaying the adoption of ML/DL techniques in artificial intelligence-assisted material development and discovery. The lack of a sizable dataset can be compensated for with advanced sampling techniques such as heterogeneous datasets, random sampling, and stochastic surface walking [35,74]. Each of these methods concentrates on a different aspect of PES development. In this work, we use the geometry optimization arc of the interfaces as the dataset, which includes unequilibrated, metastable, and equilibrated structures. The advisable course of using ab-initio molecular dynamics (AIMD) simulations to explore new equilibria for interfaces prior to DFT relaxations is skipped due to the computational demand of the undertaking. The time to run a single AIMD simulation with 500 steps and a 2 fs timestep on a Gr|Sn interface ranged between 400 and 700 h on 72 CPU cores during our initial tests. Despite the limited dataset, our model predicted energies of new interface structures with lower RMSE than the traditional approach. This holds especially for interface structures such as T1 and T2, where the Sn phase is familiar from the reference training data. By avoiding incorrect decomposition of the total energy into atomic energies, the model can attain a high degree of accuracy and transferability even with a limited dataset.

To overcome the limitations of data, a heterogeneous dataset was also created, consisting of individual Gr and Sn structures and a distribution of Sn clusters adsorbed on the surface of Gr (2D–1D interface). This new heterogeneous dataset consisted of 9646 structures (see supporting information Fig. S6 available in the Supplemental Materials on the ASME Digital Collection). However, the modified HDNN model trained on the heterogeneous dataset performed poorly for the test interface structures (T1–T4). Since the RMSE values from the heterogeneous dataset approach were higher than the values in Table 4, these results are not shown in the present work. The failure of the heterogeneous dataset approach to capture 2D–3D interface structures accurately is primarily on account of the unique microstructural characteristics of the 2D–3D interface relative to its individual 2D–1D counterparts. While the present study emphasizes a correct description of atomic energies in neural networks, there is further scope to develop successful data sampling approaches for 2D–3D interfaces.

The current state of existing ML models is still far from completely replacing DFT for complex interface systems. However, for the purpose of molecular dynamics simulations, modified HDNN models trained with an appropriate atomic energy decomposition could be sufficient. Applications of ML-based PES depend on the dataset sampling approach adopted. While a model trained on an AIMD-generated dataset could be used for simulations where more than one equilibrium is searched for, a DFT-based dataset could be sufficient to explore a single global minimum, with a certain tradeoff between accuracy and computation time. Ongoing advancements in training and sampling techniques can possibly overcome the challenges associated with ML-assisted interface studies.

4 2D–3D Interfaces in Electrochemical Energy Storage

Conventional bulk materials-based batteries (Fig. 6(a)) face practical issues in meeting the ever-increasing energy demand [75]. The two most critical chemo-mechanical failure modes in batteries are (i) interface failure leading to electrical isolation of active electrode particles [76] and (ii) mechanical failure of active materials [77–79]. The active electrode particles (e.g., Si) have to be in contact with the metal current collector, such as Ni, to ensure a uniform electron exchange (Fig. 6(b1)) [80]. During Li intercalation, active particles undergo substantial volume expansion [81]. For example, Si undergoes 300% volume expansion upon lithiation [82]. Since the metal current collectors (e.g., Cu and Ni) act as non-slippery surfaces, the volume expansion/contraction of active particles generates excessive interfacial stress (Fig. 6(b1)), leading to fracture of active particles and causing battery failures [62].

Fig. 6
(a) Schematic of anode, (b) failure at the interface of binder/active materials and current-collector/active-materials, and (c) failure of active materials

Another critical interface is between the polymer binder and active particles (Fig. 6(b2)) [83]. The polymer binder network keeps active particles adhered together and ensures continued electrical contact throughout the electrode [84]. However, volume expansion/contraction of active particles causes excessive interfacial stress at the polymer-active particle interface, leading to detachment and electrical isolation of active particles (Fig. 6(b2)) [83]. Besides interface failure, another prominent reason for battery failure is fracture of the active particles (Fig. 6(c)) [85]. Again, volume expansion/contraction of Si anode particles upon lithiation/delithiation causes crack formation [86], leading to battery failure [87]. These practical problems in battery electrodes need to be addressed by strategizing the electrode design.

Two-dimensional material-based heterostructures are promising candidates to solve these pressing issues [88–90]. Electrical isolation of active particles can be avoided by replacing polymer binders with conductive and flexible 2D materials such as MXenes, which can form an omnipresent electron-conducting network within the electrode [91] (Fig. 7(a1)). Next, the issue of current collector and active particle interface failure (Fig. 7(b1)) can be addressed with two options: (i) Fig. 7(a2): addition of a 2D material such as graphene as a "coating" on the current collector to make it a "slippery" interface [24,62], or (ii) Fig. 7(a3): completely replacing the current collector with a 2D material such as MXenes [92,93]. On the other hand, the problem of active material failure (Fig. 6(c)) can be solved by using a 2D materials-based anode (Fig. 7(b1)) [10] or 3D active materials integrated with 2D materials (Fig. 7(b2)) [7,94].

Fig. 7
(a) 2D materials as a current collector and binder and (b) 2D layers and 2D composite as anode materials

In all the 2D materials-based cases discussed, the interface plays a critical role in electrochemical performance, i.e., energy and power density, volumetric capacity, and so on. The mechanical integrity of these interfaces dictates the long-term performance of energy storage systems [95–97]. Computational modeling methods such as DFT and MD simulations have been good alternatives to expensive experimental characterization for developing a deeper understanding of complex interfacial characteristics in batteries. Work by Basu et al. [62] on the benefits of a slippery graphene surface at the current collector in combating stresses in Si anodes upon lithiation is an example of the comprehensive scope of simulation studies. However, MD-based methodologies for studying interfacial stress cannot be extended to new and heavy metal electrodes such as Sn, Se, and others, due to the lack of appropriate forcefields. The presented work lays the foundation for developing AI-based potentials to study a wide variety of 2D materials-based interfaces. Although the present study considers only graphene-based interfaces, the presented approach can be implemented in other 2D materials-based systems.

5 Conclusions

In summary, we performed optimization of different graphene and Sn-based 2D–3D interfaces, resulting in a unique dataset of a kind lacking in existing databases. Our DFT simulation results show, in atomic detail, the lattice distortions in Sn interfacing with graphene and highlight the preferred stability of β-Sn over graphene as opposed to α-Sn. One of the best application cases for Gr|Sn interface systems is in sodium-ion batteries, where the presence of a graphene interface can alleviate mechanical stresses upon Na intercalation in the otherwise high-capacity but low-stability Sn anode. The usage of graphene-based heterostructures is vast, and modeling the structure-function aspects of such interfaces is an emergent need. We present the development of an ML-based PES that can predict the energies of complex graphene-Sn 2D–3D interface systems with good accuracy and that could replace expensive ab-initio methods in future modeling efforts. The applicability of the high-dimensional neural networks (HDNN) developed by Behler and Parrinello, which utilize atom-centered symmetry functions as structural descriptors, has been shown. The widely used approach of calculating the loss function on atomic energies showed good performance on the validation split but failed to predict energies of the new interface systems. To overcome this, we modified the HDNN model to enable training on the total energies of the system rather than atomic energies. This latter approach significantly improves the performance of the PES in predicting the total energies of the new Gr|Sn interface systems that constitute the test set. The primary reason for this enhanced performance is the freedom the model gains to assign atomic energies based on each atom's chemical environment. This opens the possibility of more accurate evaluation of atomic energies and forces from ML models, allowing scope for automated equilibration.

Acknowledgment

The work is supported by National Science Foundation (NSF) Civil, Mechanical and Manufacturing Innovation (CMMI) Program, Award Number # 1911900. The authors acknowledge the Extreme Science and Engineering Discovery Environment (XSEDE) for the computational facilities (Award Number—DMR180013). V.S. acknowledges Mr. Joy Datta for fruitful discussion.

Conflict of Interest

There are no conflicts of interest.

Data Availability Statement

The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request.

References

1.
Abdullahi
,
H.
,
Burcham
,
C. L.
, and
Vetter
,
T.
,
2020
, “
A Mechanistic Model to Predict Droplet Drying History and Particle Shell Formation in Multicomponent Systems
,”
Chem. Eng. Sci.
,
224
, p.
115713
.
2.
Chatterjee
,
K.
,
Sarkar
,
S.
,
Rao
,
K. J.
, and
Paria
,
S.
,
2014
, “
Core/Shell Nanoparticles in Biomedical Applications
,”
Adv. Colloid Interface Sci.
,
209
, pp.
8
39
.
3.
Wei
,
S.
,
Wang
,
Q.
,
Zhu
,
J.
,
Sun
,
L.
,
Lin
,
H.
, and
Guo
,
Z.
,
2011
, “
Multifunctional Composite Core–Shell Nanoparticles
,”
Nanoscale
,
3
(
11
), pp.
4474
4502
.
4.
Yan
,
D.
, and
Wei
,
M.
,
2015
,
Photofunctional Layered Materials
,
Springer
,
New York
.
5.
Di
,
J.
,
Xia
,
J.
,
Li
,
H.
,
Guo
,
S.
, and
Dai
,
S.
,
2017
, “
Bismuth Oxyhalide Layered Materials for Energy and Environmental Applications
,”
Nano Energy
,
41
, pp.
172
192
.
6.
Alanazi
,
A.
,
Nojiri
,
C.
,
Kido
,
T.
,
Noguchi
,
J.
,
Ohgoe
,
Y.
,
Matsuda
,
T.
,
Hirakuri
,
K.
,
Funakubo
,
A.
,
Sakai
,
K.
, and
Fukui
,
Y.
,
2000
, “
Engineering Analysis of Diamond-Like Carbon Coated Polymeric Materials for Biomedical Applications
,”
Artif. Organs
,
24
(
8
), pp.
624
627
.
7.
Bae
,
S.-H.
,
Kum
,
H.
,
Kong
,
W.
,
Kim
,
Y.
,
Choi
,
C.
,
Lee
,
B.
,
Lin
,
P.
,
Park
,
Y.
, and
Kim
,
J.
,
2019
, “
Integration of Bulk Materials With Two-Dimensional Materials for Physical Coupling and Applications
,”
Nat. Mater.
,
18
(
6
), pp.
550
560
.
8.
Schulman
,
D. S.
,
Arnold
,
A. J.
, and
Das
,
S.
,
2018
, “
Contact Engineering for 2D Materials and Devices
,”
Chem. Soc. Rev.
,
47
(
9
), pp.
3037
3058
.
9.
Oakes
,
L.
,
Carter
,
R.
,
Hanken
,
T.
,
Cohn
,
A. P.
,
Share
,
K.
,
Schmidt
,
B.
, and
Pint
,
C. L.
,
2016
, “
Interface Strain in Vertically Stacked Two-Dimensional Heterostructured Carbon-MoS2 Nanosheets Controls Electrochemical Reactivity
,”
Nat. Commun.
,
7
(
1
), pp.
1
7
.
10.
Chen
,
K.-S.
,
Balla
,
I.
,
Luu
,
N. S.
, and
Hersam
,
M. C.
,
2017
, “
Emerging Opportunities for Two-Dimensional Materials in Lithium-Ion Batteries
,”
ACS Energy Lett.
,
2
(
9
), pp.
2026
2034
.
11.
Hu
,
Z.
,
Liu
,
Q.
,
Chou
,
S.-L.
, and
Dou
,
S.-X.
,
2021
, “
Two-Dimensional Material-Based Heterostructures for Rechargeable Batteries
,”
Cell Rep. Phys. Sci.
,
2
(
1
), p.
100286
.
12.
Novoselov
,
K. S.
,
Jiang
,
D.
,
Schedin
,
F.
,
Booth
,
T.
,
Khotkevich
,
V.
,
Morozov
,
S.
, and
Geim
,
A. K.
,
2005
, “
Two-Dimensional Atomic Crystals
,”
Proc. Natl. Acad. Sci. U. S. A.
,
102
(
30
), pp.
10451
10453
.
13.
Hong
,
H.
,
Liu
,
C.
,
Cao
,
T.
,
Jin
,
C.
,
Wang
,
S.
,
Wang
,
F.
, and
Liu
,
K.
,
2017
, “
Interfacial Engineering of van der Waals Coupled 2D Layered Materials
,”
Adv. Mater. Interfaces
,
4
(
9
), p.
1601054
.
14.
Zhang
,
Q.
,
Fiori
,
G.
, and
Iannaccone
,
G.
,
2014
, “
On Transport in Vertical Graphene Heterostructures
,”
IEEE Electron Device Lett.
,
35
(
9
), pp.
966
968
.
15.
Huang
,
B.
,
Xiang
,
H.
,
Yu
,
J.
, and
Wei
,
S.-H.
,
2012
, “
Effective Control of the Charge and Magnetic States of Transition-Metal Atoms on Single-Layer Boron Nitride
,”
Phys. Rev. Lett.
,
108
(
20
), p.
206802
.
16.
Konstantatos
,
G.
,
Badioli
,
M.
,
Gaudreau
,
L.
,
Osmond
,
J.
,
Bernechea
,
M.
,
De Arquer
,
F. P. G.
,
Gatti
,
F.
, and
Koppens
,
F. H.
,
2012
, “
Hybrid Graphene–Quantum Dot Phototransistors With Ultrahigh Gain
,”
Nat. Nanotechnol.
,
7
(
6
), pp.
363
368
.
17.
Al Balushi
,
Z. Y.
,
Wang
,
K.
,
Ghosh
,
R. K.
,
Vilá
,
R. A.
,
Eichfeld
,
S. M.
,
Caldwell
,
J. D.
,
Qin
,
X.
,
Lin
,
Y.-C.
,
DeSario
,
P. A.
, and
Stone
,
G.
,
2016
, “
Two-Dimensional Gallium Nitride Realized via Graphene Encapsulation
,”
Nat. Mater.
,
15
(
11
), pp.
1166
1171
.
18.
Journot
,
T.
,
Bouchiat
,
V.
,
Gayral
,
B.
,
Dijon
,
J.
, and
Hyot
,
B.
,
2018
, “
Self-Assembled UV Photodetector Made by Direct Epitaxial GaN Growth on Graphene
,”
ACS Appl. Mater. Interfaces
,
10
(
22
), pp.
18857
18862
.
19.
Dutta
,
M.
,
Sarkar
,
S.
,
Ghosh
,
T.
, and
Basak
,
D.
,
2012
, “
ZnO/Graphene Quantum Dot Solid-State Solar Cell
,”
J. Phys. Chem. C
,
116
(
38
), pp.
20127
20131
.
20.
Sun
,
S.
,
Gao
,
L.
, and
Liu
,
Y.
,
2010
, “
Enhanced Dye-Sensitized Solar Cell Using Graphene-TiO 2 Photoanode Prepared by Heterogeneous Coagulation
,”
Appl. Phys. Lett.
,
96
(
8
), p.
083113
.
21.
Chou
,
C.-Y.
, and
Hwang
,
G. S.
,
2013
, “
Role of Interface in the Lithiation of Silicon-Graphene Composites: A First Principles Study
,”
J. Phys. Chem. C
,
117
(
19
), pp.
9598
9604
.
22.
Chou
,
S.-L.
,
Wang
,
J.-Z.
,
Choucair
,
M.
,
Liu
,
H.-K.
,
Stride
,
J. A.
, and
Dou
,
S.-X.
,
2010
, “
Enhanced Reversible Lithium Storage in a Nanosize Silicon/Graphene Composite
,”
Electrochem. Commun.
,
12
(
2
), pp.
303
306
.
23.
Li
,
Y.
,
Huang
,
S.
,
Wei
,
C.
,
Zhou
,
D.
,
Li
,
B.
,
Wu
,
C.
, and
Mochalin
,
V. N.
,
2021
, “
Adhesion Between MXenes and Other 2D Materials
,”
ACS Appl. Mater. Interfaces
,
13
(
3
), pp.
4682
4691
.
24.
Sharma
,
V.
,
Mitlin
,
D.
, and
Datta
,
D.
,
2021
, “
Understanding the Strength of the Selenium–Graphene Interfaces for Energy Storage Systems
,”
Langmuir
,
37
(
6
), pp.
2029
2039
.
25.
Khomyakov
,
P.
,
Giovannetti
,
G.
,
Rusu
,
P.
,
Brocks
,
G. V.
,
Van den Brink
,
J.
, and
Kelly
,
P. J.
,
2009
, “
First-Principles Study of the Interaction and Charge Transfer Between Graphene and Metals
,”
Phys. Rev. B
,
79
(
19
), p.
195425
.
26.
Behler
,
J.
,
2011
, “
Atom-Centered Symmetry Functions for Constructing High-Dimensional Neural Network Potentials
,”
J. Chem. Phys.
,
134
(
7
), p.
074106
.
27.
Zuo
,
Y.
,
Chen
,
C.
,
Li
,
X.
,
Deng
,
Z.
,
Chen
,
Y.
,
Behler
,
J. R.
,
Csányi
,
G.
,
Shapeev
,
A. V.
,
Thompson
,
A. P.
, and
Wood
,
M. A.
,
2020
, “
Performance and Cost Assessment of Machine Learning Interatomic Potentials
,”
J. Phys. Chem. A
,
124
(
4
), pp.
731
745
.
28.
Thompson
,
A. P.
,
Swiler
,
L. P.
,
Trott
,
C. R.
,
Foiles
,
S. M.
, and
Tucker
,
G. J.
,
2015
, “
Spectral Neighbor Analysis Method for Automated Generation of Quantum-Accurate Interatomic Potentials
,”
J. Comput. Phys.
,
285
, pp.
316
330
.
29.
Shapeev
,
A. V.
,
2016
, “
Moment Tensor Potentials: A Class of Systematically Improvable Interatomic Potentials
,”
Multiscale Model. Simul.
,
14
(
3
), pp.
1153
1173
.
30.
Bartók
,
A. P.
,
Payne
,
M. C.
,
Kondor
,
R.
, and
Csányi
,
G.
,
2010
, “
Gaussian Approximation Potentials: The Accuracy of Quantum Mechanics, Without the Electrons
,”
Phys. Rev. Lett.
,
104
(
13
), p.
136403
.
31.
Fujikake
,
S.
,
Deringer
,
V. L.
,
Lee
,
T. H.
,
Krynski
,
M.
,
Elliott
,
S. R.
, and
Csányi
,
G.
,
2018
, “
Gaussian Approximation Potential Modeling of Lithium Intercalation in Carbon Nanostructures
,”
J. Chem. Phys.
,
148
(
24
), p.
241714
.
32.
Behler
,
J.
, and
Parrinello
,
M.
,
2007
, “
Generalized Neural-Network Representation of High-Dimensional Potential-Energy Surfaces
,”
Phys. Rev. Lett.
,
98
(
14
), p.
146401
.
33.
Battaglia
,
P. W.
,
Hamrick
,
J. B.
,
Bapst
,
V.
,
Sanchez-Gonzalez
,
A.
,
Zambaldi
,
V.
,
Malinowski
,
M.
,
Tacchetti
,
A.
,
Raposo
,
D.
,
Santoro
,
A.
, and
Faulkner
,
R.
,
2018
, “
Relational Inductive Biases, Deep Learning, and Graph Networks
,”
arXiv preprint
https://arxiv.org/abs/1806.01261.
34.
Schütt
,
K. T.
,
Kindermans
,
P.-J.
,
Sauceda
,
H. E.
,
Chmiela
,
S.
,
Tkatchenko
,
A.
, and
Müller
,
K.-R.
,
2017
, “
Schnet: A Continuous-Filter Convolutional Neural Network for Modeling Quantum Interactions
,” Adv. Neur. Inform. Pocess. Sys., 30
arXiv preprint
. https://arxiv.org/abs/1706.08566
35.
Yanxon
,
H.
,
Zagaceta
,
D.
,
Wood
,
B. C.
, and
Zhu
,
Q.
,
2020
, “
Neural Network Potential From Bispectrum Components: A Case Study on Crystalline Silicon
,”
J. Chem. Phys.
,
153
(
5
), p.
054118
.
36. Kondati Natarajan, S., and Behler, J. R., 2017, “Self-Diffusion of Surface Defects at Copper–Water Interfaces,” J. Phys. Chem. C, 121(8), pp. 4368–4383.
37. Artrith, N., and Kolpak, A. M., 2014, “Understanding the Composition and Activity of Electrocatalytic Nanoalloys in Aqueous Solvents: A Combination of DFT and Accurate Neural Network Potentials,” Nano Lett., 14(5), pp. 2670–2676.
38. Behler, J., Martoňák, R., Donadio, D., and Parrinello, M., 2008, “Metadynamics Simulations of the High-Pressure Phases of Silicon Employing a High-Dimensional Neural Network Potential,” Phys. Rev. Lett., 100(18), p. 185501.
39. Zhang, L., Han, J., Wang, H., Car, R., and Weinan, E., 2018, “Deep Potential Molecular Dynamics: A Scalable Model With the Accuracy of Quantum Mechanics,” Phys. Rev. Lett., 120(14), p. 143001.
40. Smith, J. S., Isayev, O., and Roitberg, A. E., 2017, “ANI-1: An Extensible Neural Network Potential With DFT Accuracy at Force Field Computational Cost,” Chem. Sci., 8(4), pp. 3192–3203.
41. Schütt, K. T., Arbabzadah, F., Chmiela, S., Müller, K. R., and Tkatchenko, A., 2017, “Quantum-Chemical Insights From Deep Tensor Neural Networks,” Nat. Commun., 8(1), pp. 1–8.
42. Yao, K., Herr, J. E., Toth, D. W., Mckintyre, R., and Parkhill, J., 2018, “The TensorMol-0.1 Model Chemistry: A Neural Network Augmented With Long-Range Physics,” Chem. Sci., 9(8), pp. 2261–2269.
43. Yao, K., Herr, J. E., Brown, S. N., and Parkhill, J., 2017, “Intrinsic Bond Energies From a Bonds-in-Molecules Neural Network,” J. Phys. Chem. Lett., 8(12), pp. 2689–2694.
44. Bereau, T., Andrienko, D., and Von Lilienfeld, O. A., 2015, “Transferable Atomic Multipole Machine Learning Models for Small Organic Molecules,” J. Chem. Theory Comput., 11(7), pp. 3225–3233.
45. Bartók, A. P., Kermode, J., Bernstein, N., and Csányi, G., 2018, “Machine Learning a General-Purpose Interatomic Potential for Silicon,” Phys. Rev. X, 8(4), p. 041048.
46. Deringer, V. L., Bernstein, N., Bartók, A. P., Cliffe, M. J., Kerber, R. N., Marbella, L. E., Grey, C. P., Elliott, S. R., and Csányi, G., 2018, “Realistic Atomistic Structure of Amorphous Silicon From Machine-Learning-Driven Molecular Dynamics,” J. Phys. Chem. Lett., 9(11), pp. 2879–2885.
47. Xu, N., Shi, Y., He, Y., and Shao, Q., 2020, “A Deep-Learning Potential for Crystalline and Amorphous Li–Si Alloys,” J. Phys. Chem. C, 124(30), pp. 16278–16288.
48. Andolina, C. M., Williamson, P., and Saidi, W. A., 2020, “Optimization and Validation of a Deep Learning CuZr Atomistic Potential: Robust Applications for Crystalline and Amorphous Phases With Near-DFT Accuracy,” J. Chem. Phys., 152(15), p. 154701.
49. Tang, L., Yang, Z., Wen, T., Ho, K.-M., Kramer, M. J., and Wang, C.-Z., 2020, “Development of Interatomic Potential for Al–Tb Alloys Using a Deep Neural Network Learning Method,” Phys. Chem. Chem. Phys., 22(33), pp. 18467–18479.
50. Banjade, H. R., Hauri, S., Zhang, S., Ricci, F., Gong, W., Hautier, G., Vucetic, S., and Yan, Q., 2021, “Structure Motif–Centric Learning Framework for Inorganic Crystalline Systems,” Sci. Adv., 7(17), p. eabf1754.
51. Frey, N. C., Akinwande, D., Jariwala, D., and Shenoy, V. B., 2020, “Machine Learning-Enabled Design of Point Defects in 2D Materials for Quantum and Neuromorphic Information Processing,” ACS Nano, 14(10), pp. 13406–13417.
52. Tanaka, K., Hachiya, K., Zhang, W., Matsuda, K., and Miyauchi, Y., 2019, “Machine-Learning Analysis to Predict the Exciton Valley Polarization Landscape of 2D Semiconductors,” ACS Nano, 13(11), pp. 12687–12693.
53. Shin, Y. J., Shin, W., Taniguchi, T., Watanabe, K., Kim, P., and Bae, S.-H., 2020, “Fast and Accurate Robotic Optical Detection of Exfoliated Graphene and Hexagonal Boron Nitride by Deep Neural Networks,” 2D Mater., 8(3), p. 035017.
54. Fernández, M., Rezaei, S., Mianroodi, J. R., Fritzen, F., and Reese, S., 2020, “Application of Artificial Neural Networks for the Prediction of Interface Mechanics: A Study on Grain Boundary Constitutive Behavior,” Adv. Model. Simul. Eng. Sci., 7(1), pp. 1–27.
55. Behler, J. R., 2021, “Four Generations of High-Dimensional Neural Network Potentials,” Chem. Rev., 121(16), pp. 10037–10072.
56. Han, J., Zhang, L., Car, R., and Weinan, E., 2018, “Deep Potential: A General Representation of a Many-Body Potential Energy Surface,” Commun. Comput. Phys., 23(3), pp. 629–639.
57. Gastegger, M., Schwiedrzik, L., Bittermann, M., Berzsenyi, F., and Marquetand, P., 2018, “wACSF—Weighted Atom-Centered Symmetry Functions as Descriptors in Machine Learning Potentials,” J. Chem. Phys., 148(24), p. 241709.
58. Himanen, L., Jäger, M. O., Morooka, E. V., Canova, F. F., Ranawat, Y. S., Gao, D. Z., Rinke, P., and Foster, A. S., 2020, “DScribe: Library of Descriptors for Machine Learning in Materials Science,” Comput. Phys. Commun., 247, p. 106949.
59. Gao, H., Wang, J., and Sun, J., 2019, “Improve the Performance of Machine-Learning Potentials by Optimizing Descriptors,” J. Chem. Phys., 150(24), p. 244110.
60. Yanxon, H., Zagaceta, D., Tang, B., Matteson, D. S., and Zhu, Q., 2020, “PyXtal_FF: A Python Library for Automated Force Field Generation,” Mach. Learn.: Sci. Technol., 2(2), p. 027001.
61. Ceder, G., and Persson, K., 2010, “The Materials Project: A Materials Genome Approach,” DOE Data Explorer, http://www.osti.gov/dataexplorer/biblio/1077798, Accessed August 28, 2016.
62. Basu, S., Suresh, S., Ghatak, K., Bartolucci, S. F., Gupta, T., Hundekar, P., Kumar, R., Lu, T.-M., Datta, D., and Shi, Y., 2018, “Utilizing van der Waals Slippery Interfaces to Enhance the Electrochemical Stability of Silicon Film Anodes in Lithium-Ion Batteries,” ACS Appl. Mater. Interfaces, 10(16), pp. 13442–13451.
63. Sharma, V., Ghatak, K., and Datta, D., 2018, “Amorphous Germanium as a Promising Anode Material for Sodium Ion Batteries: A First Principle Study,” J. Mater. Sci., 53(20), pp. 14423–14434.
64. Kresse, G., and Furthmüller, J., 1996, “Efficient Iterative Schemes for Ab Initio Total-Energy Calculations Using a Plane-Wave Basis Set,” Phys. Rev. B, 54(16), pp. 11169–11186.
65. Kresse, G., and Joubert, D., 1999, “From Ultrasoft Pseudopotentials to the Projector Augmented-Wave Method,” Phys. Rev. B, 59(3), pp. 1758–1775.
66. Blöchl, P. E., 1994, “Projector Augmented-Wave Method,” Phys. Rev. B, 50(24), pp. 17953–17979.
67. Perdew, J. P., Burke, K., and Ernzerhof, M., 1996, “Generalized Gradient Approximation Made Simple,” Phys. Rev. Lett., 77(18), pp. 3865–3868.
68. Dion, M., Rydberg, H., Schröder, E., Langreth, D. C., and Lundqvist, B. I., 2004, “Van der Waals Density Functional for General Geometries,” Phys. Rev. Lett., 92(24), p. 246401.
69. Legrain, F., and Manzhos, S., 2016, “Understanding the Difference in Cohesive Energies Between Alpha and Beta Tin in DFT Calculations,” AIP Adv., 6(4), p. 045116.
70. Luo, B., Qiu, T., Ye, D., Wang, L., and Zhi, L., 2016, “Tin Nanoparticles Encapsulated in Graphene Backboned Carbonaceous Foams as High-Performance Anodes for Lithium-Ion and Sodium-Ion Storage,” Nano Energy, 22, pp. 232–240.
71. Sanville, E., Kenny, S. D., Smith, R., and Henkelman, G., 2007, “Improved Grid-Based Algorithm for Bader Charge Allocation,” J. Comput. Chem., 28(5), pp. 899–908.
72. Zhang, Z., 2018, “Improved Adam Optimizer for Deep Neural Networks,” Proceedings of the 2018 IEEE/ACM 26th International Symposium on Quality of Service (IWQoS), Banff, AB, Canada, June 4–6, IEEE, pp. 1–2.
73. Comin, M., and Lewis, L. J., 2019, “Deep-Learning Approach to the Structure of Amorphous Silicon,” Phys. Rev. B, 100(9), p. 094107.
74. Huang, S.-D., Shang, C., Kang, P.-L., and Liu, Z.-P., 2018, “Atomic Structure of Boron Resolved Using Machine Learning and Global Sampling,” Chem. Sci., 9(46), pp. 8644–8655.
75. Chu, S., and Majumdar, A., 2012, “Opportunities and Challenges for a Sustainable Energy Future,” Nature, 488(7411), pp. 294–303.
76. Liang, B., Liu, Y., and Xu, Y., 2014, “Silicon-Based Materials as High Capacity Anodes for Next Generation Lithium Ion Batteries,” J. Power Sources, 267, pp. 469–490.
77. Liu, X. H., Zhong, L., Huang, S., Mao, S. X., Zhu, T., and Huang, J. Y., 2012, “Size-Dependent Fracture of Silicon Nanoparticles During Lithiation,” ACS Nano, 6(2), pp. 1522–1531.
78. Jin, Y., Zhu, B., Lu, Z., Liu, N., and Zhu, J., 2017, “Challenges and Recent Progress in the Development of Si Anodes for Lithium-Ion Battery,” Adv. Energy Mater., 7(23), p. 1700715.
79. Lee, S. W., Ryu, I., Nix, W. D., and Cui, Y., 2015, “Fracture of Crystalline Germanium During Electrochemical Lithium Insertion,” Extreme Mech. Lett., 2, pp. 15–19.
80. Zhang, L., and Gong, H., 2015, “Partial Conversion of Current Collectors Into Nickel Copper Oxide Electrode Materials for High-Performance Energy Storage Devices,” ACS Appl. Mater. Interfaces, 7(28), pp. 15277–15284.
81. Jerliu, B., Hüger, E., Dorrer, L., Seidlhofer, B.-K., Steitz, R., Oberst, V., Geckle, U., Bruns, M., and Schmidt, H., 2014, “Volume Expansion During Lithiation of Amorphous Silicon Thin Film Electrodes Studied by In-Operando Neutron Reflectometry,” J. Phys. Chem. C, 118(18), pp. 9395–9399.
82. Ko, M., Chae, S., and Cho, J., 2015, “Challenges in Accommodating Volume Change of Si Anodes for Li-Ion Batteries,” ChemElectroChem, 2(11), pp. 1645–1651.
83. Santimetaneedol, A., Tripuraneni, R., Chester, S. A., and Nadimpalli, S. P., 2016, “Time-Dependent Deformation Behavior of Polyvinylidene Fluoride Binder: Implications on the Mechanics of Composite Electrodes,” J. Power Sources, 332, pp. 118–128.
84. Zeng, W., Wang, L., Peng, X., Liu, T., Jiang, Y., Qin, F., Hu, L., Chu, P. K., Huo, K., and Zhou, Y., 2018, “Enhanced Ion Conductivity in Conducting Polymer Binder for High-Performance Silicon Anodes in Advanced Lithium-Ion Batteries,” Adv. Energy Mater., 8(11), p. 1702314.
85. Kalnaus, S., Rhodes, K., and Daniel, C., 2011, “A Study of Lithium Ion Intercalation Induced Fracture of Silicon Particles Used as Anode Material in Li-Ion Battery,” J. Power Sources, 196(19), pp. 8116–8124.
86. Fan, F., Huang, S., Yang, H., Raju, M., Datta, D., Shenoy, V. B., Van Duin, A. C., Zhang, S., and Zhu, T., 2013, “Mechanical Properties of Amorphous LixSi Alloys: A Reactive Force Field Study,” Modell. Simul. Mater. Sci. Eng., 21(7), p. 074002.
87. Lee, S. W., McDowell, M. T., Berla, L. A., Nix, W. D., and Cui, Y., 2012, “Fracture of Crystalline Silicon Nanopillars During Electrochemical Lithium Insertion,” Proc. Natl. Acad. Sci. U. S. A., 109(11), pp. 4080–4085.
88. Shi, L., and Zhao, T., 2017, “Recent Advances in Inorganic 2D Materials and Their Applications in Lithium and Sodium Batteries,” J. Mater. Chem. A, 5(8), pp. 3735–3758.
89. Dong, Y., Wu, Z.-S., Ren, W., Cheng, H.-M., and Bao, X., 2017, “Graphene: A Promising 2D Material for Electrochemical Energy Storage,” Sci. Bull., 62(10), pp. 724–740.
90. Pomerantseva, E., and Gogotsi, Y., 2017, “Two-Dimensional Heterostructures for Energy Storage,” Nat. Energy, 2(7), pp. 1–6.
91. Zhang, C. J., Park, S.-H., Seral-Ascaso, A., Barwich, S., McEvoy, N., Boland, C. S., Coleman, J. N., Gogotsi, Y., and Nicolosi, V., 2019, “High Capacity Silicon Anodes Enabled by MXene Viscous Aqueous Ink,” Nat. Commun., 10(1), pp. 1–9.
92. Wang, C.-H., Kurra, N., Alhabeb, M., Chang, J.-K., Alshareef, H. N., and Gogotsi, Y., 2018, “Titanium Carbide (MXene) as a Current Collector for Lithium-Ion Batteries,” ACS Omega, 3(10), pp. 12489–12494.
93. Sharma, V., and Datta, D., 2021, “Variation in the Interface Strength of Silicon With Surface Engineered Ti3C2 MXenes,” Phys. Chem. Chem. Phys., 23(9), pp. 5540–5550.
94. Raccichini, R., Varzi, A., Passerini, S., and Scrosati, B., 2015, “The Role of Graphene for Electrochemical Energy Storage,” Nat. Mater., 14(3), pp. 271–279.
95. Zhao, K., and Cui, Y., 2016, “Understanding the Role of Mechanics in Energy Materials: A Perspective,” Extreme Mech. Lett., 9, pp. 347–352.
96. McMeeking, R. M., and Purkayastha, R., 2014, “The Role of Solid Mechanics in Electrochemical Energy Systems Such as Lithium-Ion Batteries,” Procedia IUTAM, 10, pp. 294–306.
97. Butler, S. Z., Hollen, S. M., Cao, L., Cui, Y., Gupta, J. A., Gutiérrez, H. R., Heinz, T. F., Hong, S. S., Huang, J., and Ismach, A. F., 2013, “Progress, Challenges, and Opportunities in Two-Dimensional Materials Beyond Graphene,” ACS Nano, 7(4), pp. 2898–2926.

Supplementary data