Abstract
We present an application of convolutional neural networks to the calibration of a tensile plasticity (TePla) damage model simulating spallation in copper under high-explosive shock loading. Using a high-fidelity, multi-physics simulation developed by the Advanced Simulation and Computing program at Los Alamos National Laboratory (LANL), we simulate hundreds of variations of a high-explosive shock experiment involving a copper coupon. From this synthetic data, we train neural networks to learn the inverse mapping between the coupon’s late-time density field, or an associated synthetic radiograph, and the simulation’s TePla damage parameters. We demonstrate that, using a simple convolutional architecture, networks can be trained to infer damage parameters accurately from density fields. Neural network inference directly from synthetic radiographs is significantly more challenging. Application of machine-learning methods must be accompanied by an analysis of how inferences are made, in order to build confidence in predictions and to identify likely shortcomings of the technique. To understand what the model is learning, individual layer outputs are extracted and examined. Each layer in the network identifies multiple features; however, not all of these features are of equal importance in the network’s final prediction of a given damage parameter. By examining the features overlaid on the input hydrodynamic fields, we assess whether the model’s accuracy can be attributed to human-recognizable characteristics. In this work we give a detailed description of our data-generation methods and the learning problem we address. We then outline our neural architecture trained for damage calibration and discuss considerations made during training and evaluation of accuracy.
Methods for human interpretation of the network’s inference process are then put forward, including extraction of learned features from the trained network and techniques to assess sensitivity of inferences to the learned features.
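As a concrete illustration of the kind of layer-output inspection described above, the following minimal sketch (not the authors' code; the density field, filter kernels, and all names are hypothetical stand-ins) applies a single hand-rolled convolutional "layer" to a synthetic density field and collects one feature map per filter for visual examination:

```python
# Minimal sketch of extracting per-filter feature maps from a
# convolutional layer, for human inspection of what each filter
# responds to. All inputs and filters here are illustrative stand-ins.
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
density = rng.random((32, 32))  # stand-in for a late-time density field

# Two simple edge-detecting filters, standing in for learned kernels.
filters = {
    "vertical_edge": np.array([[1.0, 0.0, -1.0]] * 3),
    "horizontal_edge": np.array([[1.0, 0.0, -1.0]] * 3).T,
}

# "Layer outputs": one rectified feature map per filter, which could then
# be overlaid on the input field for visual inspection.
feature_maps = {
    name: np.maximum(conv2d(density, k), 0.0)  # ReLU activation
    for name, k in filters.items()
}

for name, fmap in feature_maps.items():
    print(name, fmap.shape)
```

In a trained network the kernels would be the learned weights of each convolutional layer, and the resulting maps could be ranked by their influence on a given damage-parameter output.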
This material is declared a work of the U.S. Government and is not subject to Copyright protection in the United States.