Abstract

This work describes neural network surrogate models for calculating the effective mechanical properties of periodic composites. The models achieve good accuracy even when provided only with training data sampling a small portion of the design space. As an example, the surrogate models are applied to solving the inverse design problem of finding structures with optimal mechanical properties. The surrogate models are sufficiently accurate to recover optimal solutions in general agreement with established topology optimization methods. However, improvements will be required to develop robust, efficient neural network-based surrogate models, and several directions for future research are highlighted here.

1 Introduction

Convolutional neural networks (CNNs) have been widely and successfully applied to image recognition problems, identifying or categorizing images based on raw pixel data [1–5]. This work applies CNNs to the homogenization of periodic microstructures. The goal is a fast surrogate model for calculating the effective properties of periodic structures. Formally, this work poses the identification of the effective mechanical properties of periodic structures as an image regression problem—given a description of the structure as a binary bitmap image, the model returns the mechanical properties of the homogenized, periodic system.

Neural networks are composite functions organized into a logical sequence of layers. At the start of the sequence, a neural network applies a layer of functions to the set of input variables, here a description of a periodic cell structure. Subsequent layers take as input the output of some of the functions in the previous layer. Finally, the last layer in the network maps to the output data, here the effective mechanical properties of the periodic cell. The type of function and the connections between layer outputs and inputs define the neural network topology. One reason for the increased popularity of neural networks in recent years is that modern GPUs can efficiently fit them to data using stochastic gradient descent algorithms. Similarly, once trained, GPUs can quickly evaluate the resulting neural network model. This paper describes a deep neural network surrogate model consisting of many interconnected layers of neurons. For more details on deep neural networks, see Ref. [6], among many other recent surveys.

The current work uses CNNs to represent the effective, homogenized response of a periodic mechanical system. Similar past work includes Papadrakakis et al. [7], who used neural networks as surrogate models for simple, parameterized structures. This work was later extended to the optimization of frame structures [8–11]. These early studies severely limited the number of input variables by heavily parameterizing the structural geometry and loading conditions, likely reflecting the limited computational resources available for training models at the time. Unger and Könke [12] used neural networks to homogenize the response of a mechanical system to a higher scale. However, they did not take advantage of the structure of the governing equations by using a convolutional network topology. Le et al. [13] used neural networks to homogenize nonlinear elastic composites; however, they used a simplified analysis on the mesoscale that did not allow arbitrary topologies.

In addition to quantifying the accuracy of the surrogate modeling approach, this work applies the trained surrogate models to solve an example inverse problem. Design and optimization by surrogate modeling is a well-known technique [14,15]. The CNN surrogate approach, given sufficient training data, produces optimal structures with similar mechanical properties to those produced using SIMP topology optimization methods [16]. While sufficiently accurate to solve this sample inverse problem, additional work will be required to develop deep neural networks that can accurately represent the results of mechanics problems from sparse training data. Several directions for future research are highlighted in the conclusions below.

2 Data Set, Convolutional Neural Network Topology, and Training

2.1 The Example Problem.

The example problem considered in this work is the homogenization of a 2D, plane stress, periodic structure with square symmetry. Figure 1 shows a single unit cell. The structure of interest is the infinite periodic tiling of the single cell.

Fig. 1
Problem geometry. The one-eighth section marked out with bold lines describes the entire, square-symmetric, periodic structure.

The methods described here consider a discretized unit cell that divides the cell into regular, square regions of materials that this work refers to as pixels. Each pixel can either be isotropic solid material with elastic properties Young’s modulus E = 1000 and Poisson’s ratio ν = 0.3—colored black in Fig. 1 and subsequent diagrams—or be void—colored white in the diagrams. The number of pixels along each edge of the square unit cell, 2N, describes the size of the discretization. The unit cell is then a binary bitmap image with black pixels representing material and white pixels representing void.

The periodic structures considered here have square symmetry. The minimal representation of a unit cell is then only one-eighth of the complete square cell shown in Fig. 1. The figure outlines this minimal region. The remainder of the square cell is the three square symmetry reflections applied to this generating region. This work uses this minimal region as the parameter space describing all possible structures with square symmetry. In particular, a vector of boolean values p represents each structure, where p has length

n_diag = N(N + 1)/2    (1)

and represents the flattened lower triangle of the upper-left quadrant of the full square unit cell (see Fig. 1). If p_i = 1, the corresponding pixel is solid material; if p_i = 0, it is void. The operation Q takes a vector p to the N × N boolean matrix representing the upper-left quadrant of the square unit cell. The operation F takes a vector p to the 2N × 2N boolean matrix representing the full, periodic structure.
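
As a concrete illustration, the operations Q and F amount to simple index bookkeeping. The following NumPy sketch assumes p stores the lower triangle of the quadrant row by row; the exact index convention is not specified in the text, so treat it as one possible choice.

```python
import numpy as np

def Q(p, N):
    """Unflatten p (length N*(N+1)/2) into the N x N upper-left quadrant,
    assuming p stores the lower triangle (including the diagonal) row by row."""
    quad = np.zeros((N, N), dtype=bool)
    rows, cols = np.tril_indices(N)
    quad[rows, cols] = p
    return quad | quad.T  # mirror across the diagonal

def F(p, N):
    """Reflect the quadrant into the full 2N x 2N square-symmetric unit cell."""
    quad = Q(p, N)
    top = np.hstack([quad, np.fliplr(quad)])
    return np.vstack([top, np.flipud(top)])
```

For N = 10, len(p) must equal 10 · 11/2 = 55, matching Eq. (1).
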
The structure is in plane stress, i.e., the physical thickness of the 2D structure is much smaller than the physical length of a unit cell. Beyond this condition, the physical dimensions of the cell do not affect the homogenized properties of the periodic structure. This work considers two effective properties of the homogenized infinite, periodic tiled structure: its relative density ρ and its effective Young’s modulus E. The first property is a geometric relation:

ρ(p) = (1/(2N)^2) Σ_{i,j} F(p)_{ij}    (2)

This work considers two methods for evaluating the stiffness of a particular structure p: a finite element simulation E(p) and a surrogate model E′(p). The surrogate model is a convolutional neural network trained on a large collection of data from finite element simulations. The finite element method takes the full unit cell F(p) and creates a 2D mesh representing the cell geometry. The FE model uses a regular mesh of the cell, including the void regions, with the bulk material properties for the solid regions and soft material properties E = 10^-3 and ν = 0.3 for the void regions. This regularization prevents singular systems, so the FE model can return effective properties for any arbitrary structure p. The Young’s modulus of the effective structure can be deduced by applying a normal stress to the cell, calculating the resulting normal strain, and taking the ratio.

This study considers this problem with two discretizations: a small grid using N = 10 (n_diag = 55) and a large grid using N = 50 (n_diag = 1275). For each case, the full training database consists of 2,000,000 finite element evaluations of random structures uniformly sampling the design space. For the large grid, the total size of the finite design space is 2^1275 ≈ 10^383, and so the database of 2 × 10^6 simulations samples only a very small fraction of the complete design space.
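
The sampling scheme, described later for the GA initial population and stated in the text to match the training set generation, first draws a number of solid pixels r and then places them at random. A minimal sketch, assuming r is drawn uniformly from 0 to n_diag:

```python
import numpy as np

rng = np.random.default_rng()

def random_structure(n_diag):
    """Draw r, the number of solid pixels, then choose r random
    entries of p to be solid (True)."""
    p = np.zeros(n_diag, dtype=bool)
    r = rng.integers(0, n_diag + 1)
    p[rng.choice(n_diag, size=r, replace=False)] = True
    return p

# e.g., designs for the large grid (N = 50, n_diag = 1275)
designs = [random_structure(1275) for _ in range(10)]
```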

Each finite element simulation was completed using WARP3D, an open-source FEA package, in serial on a single core of an Intel® Xeon® E5-2698 CPU. Each individual simulation is independent of the others, so the 20 cores of the processor generated the database by running individual simulations in parallel. It took just over 115 h of wall time to generate the complete database for the large grid.

2.2 The Convolutional Neural Network Surrogate Model.

The network topology was determined by grid search hyperparameter optimization. Figure 2 shows the final topology arrived at through this process and illustrates the general framework used in the hyperparameter study. The form of the network is standard for image recognition problems [1,2]: alternating stages of convolutional filters, convolving the structure over the indicated window with a stride of one and applying a ReLU activation function, each followed by a max pool layer to reduce the spatial dimension of the data. Before applying each convolution, the images are periodically padded to handle the part of the convolution extending outside the image at the boundaries. After the convolutional filters, the model flattens the image and applies a fully dense layer, again with ReLU activation functions. During training, a dropout filter is applied to help prevent overfitting. A final fully dense layer reduces the data to the single output, the effective modulus of the structure. This network was implemented in the Keras [17] framework using the TensorFlow [18] backend to train and evaluate the model on a single NVIDIA® Quadro® M6000 GPU with 24 GB of memory.
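
A sketch of this architecture in Keras follows. The periodic padding is written with tf.concat because tf.pad has no wrap mode; the dense layer width and the per-stage filter counts shown here are placeholders, not the paper's grid-search results (see Table 1 and Fig. 2 for the searched options).

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def periodic_pad(k):
    """Wrap-around padding so boundary pixels see their periodic neighbors."""
    def pad(x):
        x = tf.concat([x[:, -k:], x, x[:, :k]], axis=1)        # pad rows
        x = tf.concat([x[:, :, -k:], x, x[:, :, :k]], axis=2)  # pad columns
        return x
    return layers.Lambda(pad)

def build_model(n=100, window=5, depth=32, dropout=0.5):
    """Three conv/ReLU/max-pool stages, then flatten, dense, dropout,
    and a single linear output for the effective modulus."""
    inp = layers.Input(shape=(n, n, 1))
    x = inp
    for _ in range(3):
        x = periodic_pad(window // 2)(x)
        x = layers.Conv2D(depth, window, strides=1, padding="valid",
                          activation="relu")(x)
        x = layers.MaxPooling2D(2)(x)
    x = layers.Dense(64, activation="relu")(layers.Flatten()(x))
    x = layers.Dropout(dropout)(x)
    out = layers.Dense(1)(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model
```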

Fig. 2
The CNN topology

Within this general network framework, specific layer parameters were determined by brute force hyperparameter optimization. The complete set of 2,000,000 data points was divided into three categories: test (20% of the total), validation (20% of the remaining 1.6 million), and training (the remainder). Each potential model in the hyperparameter study trains against the training set and its accuracy is assessed against the validation set. Table 1 lists the different parameters and features considered in the hyperparameter study. All combinations of the different options in the table were tested with a grid search. Each network was trained using the Adam stochastic optimizer [19] over 25 epochs with a batch size of 1024. The different hyperparameter options were compared using the validation data set and the best selected for the final network topology shown in Fig. 2.
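
A minimal sketch of the grid search loop, reusing the build_model sketch above; for brevity it varies one shared window size and depth rather than the per-layer values in Table 1, and x_train/x_val denote the training and validation splits described in the text:

```python
from itertools import product

best_model, best_loss = None, float("inf")
for window, depth, dropout in product([5, 3], [16, 32], [0.25, 0.5, 0.75]):
    model = build_model(window=window, depth=depth, dropout=dropout)
    model.fit(x_train, y_train, epochs=25, batch_size=1024, verbose=0)
    loss = model.evaluate(x_val, y_val, verbose=0)  # validation accuracy
    if loss < best_loss:
        best_model, best_loss = model, loss
```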

Table 1

Different network topology choices and hyperparameters considered in the grid search

Option                                  Choices
Padding                                 Symmetric, zero
First convolution layer window size     5, 3
Second convolution layer window size    5, 3
Third convolution layer window size     5, 3
First convolution layer depth           16, 32
Second convolution layer depth          16, 32
Third convolution layer depth           16, 32
Dropout parameter                       0.25, 0.5, 0.75

The discussion below studies the effect of reducing the size of the training database on the accuracy of the surrogate model. Using the large grid case as a timing study, the cost of one forward evaluation of the trained CNN is 7.5 × 10^-5 s, while the cost of one forward evaluation of the FEA model is 0.207 s. So, once trained, the CNN is three orders of magnitude faster than the direct evaluation. The time required to train the CNN over the full data set is about 3 min. Therefore, the cost of developing the surrogate model lies essentially entirely in generating the training data (115 h).

As the model trains using a stochastic optimization process, each run through the fit process will produce a slightly different model. The fitting process was repeated 10 times for both the N = 10 and N = 50 cases to assess the reproducibility of the approach.

3 Results

3.1 Surrogate Model Accuracy.

Figure 3 plots the accuracy of the surrogate model as a function of the training database size: (a) the mean squared error R = (1/n_test) Σ_{i=1}^{n_test} ((E_i − Ê_i)/E_0)^2 and (b) the maximum absolute error A = max_i |E_i − Ê_i|/E_0, where E_i are model predictions, Ê_i are finite element evaluations, and E_0 is the modulus of the solid material. About 20% of the total database (400,000 simulations) was reserved for testing. A subset of the remaining 80% of the database was used to train and validate the CNN. For each reduced database, 80% of the data was used for training and 20% for validation. Increasing the size of the database decreases the error between the surrogate and the direct simulation results, measured either globally or locally.
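
Both metrics are direct to compute from the test-set predictions; a short sketch:

```python
import numpy as np

def error_metrics(E_pred, E_fea, E0):
    """Mean squared error R and maximum absolute error A, both
    normalized by the solid material modulus E0."""
    rel = (np.asarray(E_pred) - np.asarray(E_fea)) / E0
    return np.mean(rel**2), np.max(np.abs(rel))
```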

Fig. 3
(a) Mean squared error between the test FEA data and the surrogate model plotted as a function of the number of FEA simulations used to train the model. For both the N = 10 and N = 50 cases, the plot shows both the average error (solid line) for 10 repetitions of the model fitting process, as described in the text, and the minimum and maximum errors over those 10 repetitions (dotted lines). (b) Similar plot but now showing the maximum absolute error.

The figure shows the mean error for 10 models constructed using 10 repetitions of the training process, discussed previously, as well as the error for the worst (maximum error) and best (minimum error) model for each case. There is very little difference between the average response of 10 models, the responses of the most and least accurate models, and the response of a randomly selected model, which suggests that the training process is reproducible.

The surrogate model trained over the smaller N = 10 domain was somewhat less accurate than the one trained over the larger N = 50 domain for smaller training databases. This somewhat counterintuitive result is discussed below.

3.2 Optimization Through the Surrogate Model.

The surrogate models are sufficiently accurate to solve inverse problems. As an example, consider the optimization problem

max_p E′(p)  subject to  ρ(p) = ρ_0,  P(p) ≤ P_0    (3)
where ρ is the relative density (area of solid material divided by total cell area) and P is the perimeter of the solid part of the structure. Here, E′ is evaluated using a randomly selected trained surrogate model from the group of 10 models trained over the largest database. The perimeter constraint is required to regularize the problem. The specific example here imposes a relative density constraint of ρ_0 = 0.5 (50%) and a perimeter constraint of P_0 = 25 for N = 10 and P_0 = 250 for N = 50, in units of pixel edge lengths.
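
The perimeter P can be computed by counting solid/void transitions between neighboring pixels in the full cell. A sketch, assuming the natural convention that neighbor pairs wrap periodically:

```python
import numpy as np

def perimeter(cell):
    """Perimeter of the solid phase, in pixel edge lengths, for a
    2N x 2N boolean cell. Counts solid/void edges between horizontal
    and vertical neighbors, with periodic wrap-around."""
    c = np.asarray(cell, dtype=bool)
    horiz = c ^ np.roll(c, 1, axis=1)  # transitions across vertical edges
    vert = c ^ np.roll(c, 1, axis=0)   # transitions across horizontal edges
    return int(horiz.sum() + vert.sum())
```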

This work solves the optimization problem with a genetic algorithm (GA). The GA uses standard tournament selection with tournament size t_size, two-point crossover applied to sequential pairs of individuals with probability p_cx, and a binary bit-flip mutation operator, applied to each individual with probability p_mut and, if an individual is chosen for mutation, flipping each bit in p with probability p_bit. As with the training data set, each individual in the initial population is generated by first randomly selecting a number of bits to be true, r, and then randomly choosing r entries in p to set to true.
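
The paper does not name a GA implementation; the DEAP library happens to provide exactly these operators (tournament selection, two-point crossover applied pairwise, per-bit flip mutation), so a hedged sketch using it is shown below. surrogate_E and the two penalty helpers are assumed to be defined elsewhere.

```python
import numpy as np
from deap import algorithms, base, creator, tools

N_DIAG = 1275  # design vector length for the N = 50 grid

creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Individual", list, fitness=creator.FitnessMax)

def init_individual():
    """Draw r, then set r random bits to true (see text)."""
    p = np.zeros(N_DIAG, dtype=int)
    r = np.random.randint(0, N_DIAG + 1)
    p[np.random.choice(N_DIAG, size=r, replace=False)] = 1
    return creator.Individual(p.tolist())

def fitness(ind):
    """Surrogate stiffness minus the density and perimeter penalties
    (surrogate_E, density_penalty, and perimeter_penalty are assumed)."""
    p = np.array(ind)
    return (surrogate_E(p) - density_penalty(p) - perimeter_penalty(p),)

toolbox = base.Toolbox()
toolbox.register("individual", init_individual)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("evaluate", fitness)
toolbox.register("select", tools.selTournament, tournsize=3)   # t_size
toolbox.register("mate", tools.cxTwoPoint)
toolbox.register("mutate", tools.mutFlipBit, indpb=0.05)       # p_bit

pop = toolbox.population(n=1000)                               # n_pop
pop, log = algorithms.eaSimple(pop, toolbox, cxpb=0.5,         # p_cx
                               mutpb=0.25, ngen=50,            # p_mut, n_gen
                               verbose=False)
```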

Table 2 lists the optimization parameters for the examples below. These parameters were selected to achieve convergence to a stable structure in less than 50 generations. Beyond that criterion, the GA was not tuned to achieve optimal performance. A simple study was performed to test the GA: over the 10 × 10 domain, when run using direct FE evaluations of the objective function, the GA converges to the correct solution within the prescribed 50 generations. This structure is the same one found using the surrogate model in Fig. 4(a) for N = 10. This test does not directly demonstrate the effectiveness of the GA over the larger 50 × 50 domain but at least suggests that the optimization errors over this larger domain, discussed below, are caused by the surrogate model and not the GA.

Fig. 4
(a) Surrogate model GA optimization: results for the E/E_0 optimization problem for both the N = 10 and N = 50 design spaces, training the model with the full simulation database (Ê_50/E_0 = 0.315, Ê_10/E_0 = 0.296). The result is a Vigdergauz [20] structure, with a few defective pixels for the larger optimization. The missing pixel in the smaller structure is not necessarily a defect: one pixel from the lattice must be removed to meet the relative density criterion. (b) SIMP optimization: the result of applying the SIMP optimization method to both problems with a final 0/1 filter (Ê_50/E_0 = 0.322, Ê_10/E_0 = 0.262).
Table 2

Hyperparameters for the GA optimization

Parameter   Description             Value
n_pop       Population size         1000
n_gen       Number of generations   50
t_size      Tournament size         3
p_cx        Crossover probability   0.5
p_mut       Mutation probability    0.25
p_bit       Bit flip probability    0.05

The GA scheme enforces the constraints as penalties to the objective function with penalty parameters selected to avoid constraint violations. The specific form of the penalty function is
(4)
where c(p) is the constraint function, c_0 is the constraint value, and b and k are the penalty parameters. This penalty is subtracted from the objective function in defining the GA fitness. Separate penalty parameters were selected for each of the two constraints (b = 10 and k = 1000 for the density constraint; b = 10 and k = 1000 for the perimeter constraint).

Figure 4(a) shows the resulting optimal structure for both the small and large optimization domains. It is a square lattice, which is known to be the correct result [20]. To validate the GA/surrogate method, Fig. 4(b) shows the results of applying the SIMP optimization method [16,21] to the same problems. The SIMP problem was solved in a custom Python code based on the standard 88-line MATLAB code [22], implemented in open-source Python by Ref. [23], and modified to handle the symmetry constraint and periodic boundary conditions. The two solutions are in reasonable agreement, allowing for the randomness introduced into the surrogate model optimization by the stochastic training and optimization methods and for differences in the optimization techniques.

4 Discussion and Conclusions

4.1 Convergence.

Figure 3 demonstrates that the accuracy of the surrogate models increases with the size of the training database. However, there are diminishing returns to adding more training data—eventually large increases in the database size produce only small increases in the surrogate model accuracy. In the limit of providing a complete training database covering each configuration in the design space, the CNN should be perfectly accurate—all it would have to do is index a design to the corresponding mechanical property. However, even for the N = 10 case, 2 × 10^6 training simulations sample only a tiny fraction of the design space (2 × 10^6/2^55 ≈ 6 × 10^-11). Viewed in this context, the CNN surrogate models are remarkably accurate given the relatively small amount of training data provided.

Figure 3 shows that the N = 50 model is actually more accurate than the N = 10 model for small training database sizes, though both models are comparably accurate for larger training databases. The amount of data in the training database scales linearly with the number of design variables, but the number of neural network weights and biases does not—the N = 10 model has 21,505 network parameters whereas the N = 50 model has 93,185 parameters. So, relative to the number of design variables, the N = 50 case actually has fewer parameters to fit to the data than the N = 10 case (p_10 = 21505/55 ≈ 391; p_50 = 93185/1275 ≈ 73.1), which may explain why this model is more accurate for smaller training data sets.

4.2 Choice of Network Topology.

CNNs were originally inspired by the structures and methods of image recognition in the visual cortex [24] and have been successfully applied to a wide range of image recognition problems (cf. Refs. [1,2]). The convolutional topology combines output from small regions of adjacent structures—in the final network topology used here, 5 × 5 and 3 × 3 regions of adjacent pixels. The idea underlying such networks is to train the model to recognize features: local patterns with support over small regions of the image. This idea of local support is fundamental because solid mechanics problems are also local. For example, solutions to classical elasticity problems depend only on the fourth derivatives of a stress function and, more generally, the stress field depends only on the gradient of the displacements, i.e., the strains, and not on the displacement field directly.

The structural mechanics problem underlying the training database is local, but the database itself consists of homogenized information describing an average property of the complete unit cell. This kind of problem is non-local. For example, a homogenized quantity, such as the average cell Young’s modulus, can be described as a weighted integral over the cell volume. A fully connected dense layer of neurons can describe this kind of volume integral over the whole structure. In a dense layer, every neuron is connected to every neuron in the adjacent layers. Strictly, this layer is over-connected for homogenization alone: homogenization requires only local-to-global communication, filtering the results or features of local regions into a global average. However, the dense layers in the final network topology serve two purposes: homogenization and feature recognition. The dense layers not only average properties but also learn which features from the convolutional layers should be combined to represent the simulation results.

However, the surrogate models, particularly models fit with smaller training databases, can conflate a structure and its inverse, i.e., the structure where the solid material in the original structure is void and the void material is solid. A potential cause of this failure is the CNN topology itself, which is designed to recognize features based on contrast. CNNs can, for example, be fooled by optical illusions that also fool the human visual system [25]. This surrogate model failure points to the need for network topologies designed specifically for mechanics problems. One promising area of future work is developing network topologies that respect physical constraints. In this example, the network structure was developed to respect the physical square symmetry of the problem. Other work has developed CNNs that are generically SO(3) invariant [26], which could be directly relevant to mechanics problems. Physics-based networks have been applied successfully in other areas of science [27,28]. Potentially, specialized network topologies could reduce the required size of the training database and make the surrogate modeling method more generally applicable to problems with sparse data. Additionally, physically constrained topologies would at least guarantee that the heuristic surrogate model produces physically reasonable results.

4.3 Optimization.

Figure 4 demonstrates that the surrogate models have sufficient predictive power to approximate known solutions when solving inverse problems. The choice of problem definition and the selection of an optimization method are orthogonal to the development of the deep neural network surrogate model. Different optimizers and problem definitions could be explored while retaining the basic idea of optimization through a CNN surrogate model. The example here uses a hard 0/1 design representation and a genetic algorithm solver. Potentially, the results obtained here could be improved by an alternate representation of the problem or by a different optimization technique. Deep neural networks are, by construction, continuous functions, and so gradient-based optimizers could be applied. An efficient method for computing the appropriate sensitivities would need to be derived, possibly based on the backpropagation algorithm used to train the network parameters.
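
As an illustration of the last point, automatic differentiation already gives the sensitivity of the trained surrogate with respect to its input; a TensorFlow sketch follows (what remains open is mapping this continuous gradient back onto the binary design vector p):

```python
import tensorflow as tf

def surrogate_sensitivity(model, cell):
    """Gradient of the surrogate modulus with respect to the input image,
    for a (1, n, n, 1) float tensor representing the unit cell."""
    x = tf.convert_to_tensor(cell, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        E = model(x)
    return tape.gradient(E, x)
```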

4.4 Costs and Benefits of Surrogate Modeling.

The computational cost of generating the surrogate model is dominated by the cost of building the training database, here running a large number of FE simulations. Once the model is trained, the forward surrogate model is much faster than the finite element simulation. The cost of generating the training data can be amortized if the same surrogate model can be used in multiple applications. The surrogate approach is therefore best suited to problems of wide interest or problems common in engineering practice, particularly problems that can be easily parameterized. Any surrogate modeling approach will remain a heuristic, as the exact representation of the forward problem is replaced by the inexact surrogate. Additional work on quantifying the influence of the surrogate model accuracy on the final solution to the problem of interest is required to develop robust methods.

4.5 Summary and Future Work.

The key results of this study are as follows:

  1. A convolutional neural network can serve as a fast surrogate model for slower, fully resolved simulations that predict the mechanical properties of periodic composites.

  2. The resulting surrogate models are sufficiently accurate to solve inverse problems.

The main barrier to the wider use of neural network surrogate models is collecting or generating sufficient training data.

This study considered a simple mechanical problem and improvements to the approach will be required to develop a robust surrogate modeling method. The previous discussion highlights three areas where additional work is required:

  1. Improved network topologies specialized for mechanics problems that could improve surrogate model accuracy and reduce the size of the required training database.

  2. Improved methods for interpreting the results and quantifying and controlling the accuracy of the heuristic surrogate models.

  3. Improved optimization methods, for example, an efficient scheme to calculate the gradient of the surrogate model so that gradient-based optimization methods could be applied.

Acknowledgment

This research was sponsored by the U.S. Department of Energy, under Contract No. DE-AC02-06CH11357 with Argonne National Laboratory, managed and operated by UChicago Argonne LLC. The author thanks Prasanna Balaprakash and Sam Sham for providing feedback on early drafts of the manuscript.

References

1. Krizhevsky, A., Sutskever, I., and Hinton, G. E., 2012, “ImageNet Classification With Deep Convolutional Neural Networks,” Proceedings of the 25th International Conference on Neural Information Processing Systems, Lake Tahoe, NV, pp. 1–9.
2. Lawrence, S., Giles, C., Tsoi, A. C., and Back, A., 1997, “Face Recognition: A Convolutional Neural-Network Approach,” IEEE Trans. Neural Networks, 8(1), pp. 98–113. 10.1109/72.554195
3. Ciresan, D. C., Meier, U., Masci, J., Gambardella, L. M., and Schmidhuber, J., 2011, “Flexible, High Performance Convolutional Neural Networks for Image Classification,” IJCAI International Joint Conference on Artificial Intelligence, Barcelona, Spain, pp. 1237–1242.
4. Matsugu, M., Mori, K., Mitari, Y., and Kaneda, Y., 2003, “Subject Independent Facial Expression Recognition With Robust Face Detection Using a Convolutional Neural Network,” Neural Networks, 16(5–6), pp. 555–559. 10.1016/S0893-6080(03)00115-1
5. Pinheiro, P., and Collobert, R., 2014, “Recurrent Convolutional Neural Networks for Scene Labeling,” Proceedings of the 31st International Conference on Machine Learning, Beijing, China, Vol. 32, pp. 82–90.
6. Goodfellow, I., Bengio, Y., and Courville, A., 2016, Deep Learning, MIT Press, Cambridge, MA.
7. Papadrakakis, M., Lagaros, N. D., and Tsompanakis, Y., 1998, “Structural Optimization Using Evolution Strategies and Neural Networks,” Comput. Meth. Appl. Mech. Eng., 156(1–4), pp. 309–333. 10.1016/S0045-7825(97)00215-6
8. Papadrakakis, M., and Lagaros, N. D., 2003, “Soft Computing Methodologies for Structural Optimization,” Appl. Soft Comput. J., 3(3), pp. 283–300. 10.1016/S1568-4946(03)00040-1
9. Srinivas, V., and Ramanjaneyulu, K., 2007, “An Integrated Approach for Optimum Design of Bridge Decks Using Genetic Algorithms and Artificial Neural Networks,” Adv. Eng. Software, 38(7), pp. 475–487. 10.1016/j.advengsoft.2006.09.016
10. Papadrakakis, M., Lagaros, N. D., and Tsompanakis, Y., 1999, “Optimization of Large-Scale 3-D Trusses Using Evolution Strategies and Neural Networks,” Int. J. Space Struct., 14(3), pp. 211–223. 10.1260/0266351991494830
11. Hasancebi, O., Bahcecioglu, T., Kurc, Ö., and Saka, M. P., 2011, “Optimum Design of High-Rise Steel Buildings Using an Evolution Strategy Integrated Parallel Algorithm,” Comput. Struct., 89(21–22), pp. 2037–2051. 10.1016/j.compstruc.2011.05.019
12. Unger, J. F., and Könke, C., 2008, “Coupling of Scales in a Multiscale Simulation Using Neural Networks,” Comput. Struct., 86(21–22), pp. 1994–2003. 10.1016/j.compstruc.2008.05.004
13. Le, B. A., Yvonnet, J., and He, Q.-C., 2015, “Computational Homogenization of Nonlinear Elastic Materials Using Neural Networks,” Int. J. Numerical Methods Eng., 104(12), pp. 1061–1084. 10.1002/nme.4953
14. Wang, G. G., and Shan, S., 2006, “Review of Metamodeling Techniques in Support of Engineering Design Optimization,” ASME J. Mech. Des., 129(4), pp. 370–380. 10.1115/1.2429697
15. Demirkan, O., Olceroglu, E., Basdogan, I., and Ozguven, H. N., 2013, “A New Design Approach for Rapid Evaluation of Structural Modifications Using Neural Networks,” ASME J. Mech. Des., 135(2), p. 021004. 10.1115/1.4023156
16. Bendsoe, M. P., 1989, “Optimal Shape Design As a Material Distribution Problem,” Struct. Optim., 1(4), pp. 193–202. 10.1007/BF01650949
17. Chollet, F., 2015, Keras. https://keras.io/getting-started/faq/
18. Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jozefowicz, R., Jia, Y., Kaiser, L., Kudlur, M., Levenberg, J., Mané, D., Schuster, M., Monga, R., Moore, S., Murray, D., Olah, C., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viégas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., and Zheng, X., 2015, “TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems,” Technical Report. https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/45166.pdf
19. Kingma, D. P., and Ba, J. L., 2015, “Adam: A Method for Stochastic Optimization,” International Conference on Learning Representations 2015, San Diego, CA, pp. 1–15.
20. Vigdergauz, S., 1976, “Integral Equation of the Inverse Problem of the Plane Theory of Elasticity,” J. Appl. Math. Mech., 40(3), pp. 518–522. 10.1016/0021-8928(76)90046-0
21. Rozvany, G. I. N., 2001, “Aims, Scope, Methods, History and Unified Terminology of Computer-Aided Topology Optimization in Structural Mechanics,” Struct. Multidisciplinary Optim., 21(2), pp. 90–108. 10.1007/s001580050174
22. Andreassen, E., Clausen, A., Schevenels, M., Lazarov, B. S., and Sigmund, O., 2011, “Efficient Topology Optimization in MATLAB Using 88 Lines of Code,” Struct. Multidisciplinary Optim., 43(1), pp. 1–16. 10.1007/s00158-010-0594-7
23.
24. Hubel, D. H., and Wiesel, T. N., 1962, “Receptive Fields, Binocular Interaction, and Functional Architecture in the Cat’s Visual Cortex,” J. Physiol., 160(1), pp. 106–154. 10.1113/jphysiol.1962.sp006837
25. Gomez-Villa, A., Martín, A., Vazquez-Corral, J., and Bertalmío, M., 2018, “Convolutional Neural Networks Deceived by Visual Illusions,” arXiv preprint, pp. 1–15.
26. Esteves, C., Allen-Blanchette, C., Makadia, A., and Daniilidis, K., 2018, “Learning SO(3) Equivariant Representations With Spherical CNNs,” arXiv preprint, pp. 1–17.
27. Ba, Y., Chen, R., Wang, Y., Yan, L., Shi, B., and Kadambi, A., 2019, “Physics-Based Neural Networks for Shape From Polarization,” arXiv preprint, pp. 1–10.
28. Karpatne, A., Watkins, W., Read, J., and Kumar, V., 2017, “Physics-Guided Neural Networks (PGNN): An Application in Lake Temperature Modeling,” arXiv preprint, pp. 1–11.