The goal of this work is to monitor the laser powder bed fusion (LPBF) process using an array of sensors so that a record may be made of those temporal and spatial build locations where there is a high probability of defect formation. In pursuit of this goal, a commercial LPBF machine at the National Institute of Standards and Technology (NIST) was integrated with three types of sensors, namely, a photodetector, high-speed visible camera, and short wave infrared (SWIR) thermal camera with the following objectives: (1) to develop and apply a spectral graph theoretic approach to monitor the LPBF build condition from the data acquired by the three sensors; (2) to compare results from the three different sensors in terms of their statistical fidelity in distinguishing between different build conditions. The first objective will lead to early identification of incipient defects from in-process sensor data. The second objective will ascertain the monitoring fidelity tradeoff involved in replacing an expensive sensor, such as a thermal camera, with a relatively inexpensive, low-resolution sensor, e.g., a photodetector. As a first step toward detection of defects and process irregularities that occur in practical LPBF scenarios, this work focuses on capturing and differentiating the distinctive thermal signatures that manifest in parts with overhang features. Overhang features can significantly decrease the ability of laser heat to diffuse from the heat source. This constrained heat flux may lead to issues such as poor surface finish, distortion, and microstructure inhomogeneity. In this work, experimental sensor data are acquired during LPBF of a simple test part having an overhang angle of 40.5 deg. Extracting and detecting the difference in sensor signatures for such a simple case is the first step toward in situ defect detection in additive manufacturing (AM).
The proposed approach uses the Eigen spectrum of the spectral graph Laplacian matrix as a derived signature from the three different sensors to discriminate the thermal history of overhang features from that of the bulk areas of the part. The statistical accuracy for isolating the thermal patterns belonging to bulk and overhang features, in terms of the F-score, is as follows: (a) an F-score of 95% from the SWIR thermal camera signatures; (b) 83% with the high-speed visible camera; and (c) 79% with the photodetector. In comparison, conventional signal analysis techniques, e.g., neural networks, support vector machines, and linear discriminant analysis, yielded F-scores in the range of 40–60%.

## Introduction

### Motivation.

Powder bed fusion (PBF) refers to a family of additive manufacturing (AM) processes in which thermal energy selectively fuses regions of a powder bed [1]. A schematic of the PBF process is shown in Fig. 1. A layer of powder material is spread across a build plate. Certain areas of this layer of powder are then selectively melted (fused) with an energy source, such as a laser or electron beam. The bed is lowered, and another layer of powder is spread over it and melted [2]. This cycle continues until the part is built. The schematic of the PBF process shown in Fig. 1 embodies a laser power source for melting the material, and accordingly, the convention is to refer to the process as laser powder bed fusion (LPBF). A mirror galvanometer scans the laser across the powder bed. The laser is focused on the bed with a spot size on the order of 50–100 *μ*m, and the linear scan speed of the laser is typically varied in the 10^{2}–10^{3} mm/s range [2].

Close to 50 parameters are involved in the melting and solidification process in LPBF [3]. The defects in LPBF are multiscaled and linked to distinctive process phenomena. The following types of defects have garnered the most attention: porosity, surface finish, cracking, layer delamination, and geometric distortion [4,5]. Several empirical studies have mapped the effect of three process parameters on defects, namely, laser power, hatch spacing (viz., the distance between adjacent scan tracks within a layer), and laser scan velocity [6–10]. Defects in LPBF are traced to the following four root causes [4,5,11]:

- (1) poor part design, such as inadequately supported features,
- (2) machine and environmental factors, such as poor calibration of the bed and optics,
- (3) inconsistencies in the input powder material, such as contamination and deviations in particle distributions, and
- (4) improper process parameters, for example, inordinately high laser power causes vaporization of the material leading to keyhole collapse porosity, or insufficient overlap of adjacent scan tracks due to large hatch spacing results in the so-called lack of fusion porosity [4,12].

A major gap in the current research is in the lack of mapping of the process conditions to defects based on in situ sensor data. This knowledge of the process signatures that are symptomatic of impending defects is the key tenet for future in-process monitoring and control of part quality in LPBF and serves as the motivation for this work.

### Goal and Objectives.

The goal of this work is to monitor the LPBF process using in-process sensor signatures so that a record may be made of those temporal and spatial build locations where there is a high probability of defect formation. This goal is termed as build condition monitoring. In pursuit of this goal, a commercial LPBF machine was integrated with three sensors, namely, a photodetector (spectral response 300–1200 nm), high-speed visible spectrum video camera (4000 frames per second, spectral response 300–950 nm), and short wave infrared (SWIR) thermal camera (1800 frames per second, spectral response 1350 nm to 1600 nm, thermally calibrated from 500 °C to 1025 °C) with the following twofold objectives.

*Objective 1:* Develop and apply a spectral graph theoretic approach to monitor the build condition in LPBF from the data gathered by the aforementioned three sensors. The intent is to detect the onset of deleterious phenomena such as unexpected variations in the thermal history (cooling rate) which would lead to inconsistent properties [13–15]. In the worst case, these may ultimately result in build failures. The proposed approach is extensible to other AM processes and sensor systems.

*Objective 2:* Assess the statistical fidelity of the three different sensors, namely, high-speed camera, infrared thermal camera, and photodetector in monitoring the LPBF build condition by capturing the differences in the thermal signature of the part as it is being built. The intent is to ascertain the monitoring fidelity tradeoffs when replacing a relatively expensive, high-fidelity sensor such as a thermal camera with an inexpensive, low-fidelity sensor, e.g., photodetector.

Realizing these objectives will lead to the following consequential impacts:

- (1) *In-Process Quality Monitoring in Laser Powder Bed Fusion*: Unfortunately, even with the high level of process automation in commercial equipment, print defects are common in LPBF, which hinders the use of LPBF parts in mission-critical applications, such as aerospace and defense [16,17]. While there is an abundance of pioneering literature on sensor integration and hardware aspects for monitoring AM processes, there is a persistent research gap in seamlessly integrating the in-process sensor data with approaches for online signal analytics [18,19]. This gap has been pointed out in roadmap reports published by federal agencies and national labs [16,20–23]. Addressing this need for online data analytics is critical to mitigate the poor repeatability and reliability in LPBF and, more generally, in AM.
- (2) *Layer-Wise Analysis of Sensor Data to Reduce Expensive Testing*: To ensure compliance, the norm is to subject LPBF parts to X-ray computed tomography or destructive materials testing. This is prohibitively expensive and time consuming [24,25]. However, if a layer-by-layer sensor data record is available, then these data, instead of destructive testing or X-ray computed tomography scanning, can be used to rapidly qualify the part quality, leading to considerable cost savings [26,27].

Furthermore, because AM phenomena and concomitant defects occur at multiple scales, there is also the need to combine data from multiple sensors. The challenge with this concept of using sensor data for layer-wise quality assurance in AM—termed as *certify-as-you-build* by Professor Jyoti Mazumder [28]—is that sensors may differ in resolution, sensitivity, or bandwidth appropriate to detect particular process signatures. The limited fidelity of a single sensor limits the variety of defects that it may be able to detect, if any at all.

In closing this section, we note that researchers in the AM area prefer the term *qualify-as-you-build* over certify-as-you-build, based on the reasoning that certification is typically done by a third party in the quality assurance paradigm. In the same vein, Sigma Labs, Inc., Santa Fe, NM, has trademarked the term in-process quality assurance in reference to their PrintRite3D software, which combines process monitoring, data analysis, and feedback control in AM [29,30].

### Scientific Rationale and Hypothesis.

Each type of build defect in LPBF relates to a specific process phenomenon. The onset of such defect-causing phenomena may manifest in statistically distinctive signatures from appropriately designed and utilized sensors [31–33]. Hence, by tracking the signatures from in-process sensor data, it is hypothesized that defects in the LPBF process can be discriminated. The hypothesis tested in Sec. 5 is that the spectral graph theoretic approach forwarded in this work leads to higher statistical accuracy for distinguishing the build condition compared to popular machine learning approaches, such as neural networks and support vector machines. The statistical accuracy is measured in terms of the statistical F-score, which combines both the type I (false alarm) and type II (failing to detect) statistical errors.
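The F-score computation referred to above can be made concrete with a short sketch. This illustrates the standard F1 definition, not code from this work, and the example counts are hypothetical:

```python
def f_score(tp, fp, fn):
    """F-score: harmonic mean of precision and recall.

    Precision penalizes type I errors (false alarms, fp);
    recall penalizes type II errors (missed detections, fn).
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 95 correct detections, 5 false alarms, 5 misses
print(round(f_score(95, 5, 5), 2))  # → 0.95
```

Because it folds both error types into a single number, the F-score is a stricter summary than accuracy alone when the two build conditions are imbalanced.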

The applicability of the different sensors and the proposed analysis methodology is tested by building an overhang part. While not a defect per se, the LPBF of overhang features is a challenging proposition for the following reason. As the thermal conductivity of the powder is roughly one-third that of a solid part, heat tends to accumulate within the overhang area, i.e., the thermal flux through an overhang is restricted [14]. Constriction of heat to a relatively small area leads to inconsistent thermal gradients within the overhang features compared to the bulk material, which ultimately manifests in distorted builds, poor surface finish, or heterogeneous microstructure [5,34]. In this work, the distinctive thermal signature representative of overhang features is used as a means to discriminate the build condition. Furthermore, this work provides an avenue for online monitoring of in-process signals through analysis in the spectral graph domain.

The understanding of thermal aspects of overhang geometries is also consequential in the related context of design for additive manufacturing; for instance, recent studies emphasize the need for an evolved approach to support design depending upon the severity of the overhang feature [35]. This need is exemplified through the following experimental observation from Fig. 2, which shows a biomedical titanium knee implant built by the authors using the LPBF process. This part has a severe overhang feature. To prevent the part from collapsing under its own weight, supports were automatically built under the overhang section by the native software supplied by the machine manufacturer. After the build, the overhang area was found to have coarse-grained microstructure and poor surface finish, which renders this implant potentially unsafe for clinical use. Such defects in overhang geometries, also reported by other researchers, are primarily due to the heat being constrained in a small area in the overhang section, owing to the overly thin cross section of the supports, i.e., due to poor heat conduction [34,36–39]. To avert such part inconsistencies, there is a need for a formal framework, based on a fundamental understanding of the thermal physics of the process, to guide the design of AM parts. This work provides a means to distinguish the thermal-related signatures that are symptomatic of undesirable build quality in the LPBF process through a simple test artifact. This understanding of the thermal behavior during melting of overhangs will play a foundational role in the future development of design rules for AM parts with complex geometries.

The rest of this paper is organized as follows: Sec. 2 summarizes the recent developments in sensing and monitoring in LPBF. Section 3 describes the experimental LPBF studies carried out at NIST. Section 4 elucidates the spectral graph theoretic approach and illustrates its application to a synthetic signal. Section 5 discusses the results from application of the spectral graph-theoretic approach to analyze the thermal imaging, high-speed videography, and photodetector signals acquired during the build process. In closure, the conclusions from this work and avenues for further research are discussed in Sec. 6.

## Sensor-Based Monitoring in Powder Bed Fusion

Tapia and Elwany [40] have conducted a comprehensive review of sensor-based process monitoring approaches, specifically focused on metal AM processes. More recently, Foster et al. [15], Purtonen et al. [41], Mani et al. [22], Everton et al. [42], and Grasso and Colosimo [4] provided excellent reviews of the status quo of sensing and monitoring focused on metal AM. However, there is a persistent gap in analytical approaches to synthesize these data and extract patterns that correlate with specific process conditions (build status) and defects [43]. Chua et al., in a recent paper, have placed emphasis on the need for (a) data mining, (b) data processing, and (c) data analysis to monitor and subsequently translate the sensor signatures into actionable feedback control [18].

From the hardware perspective, two methods are predominantly used in the literature for monitoring PBF, namely, meltpool monitoring systems and layer-wise imaging (staring) systems. The relevant works under these respective headings are summarized in the following two sections, Secs. 2.1 and 2.2, respectively.

### Meltpool Monitoring Systems in Powder Bed Fusion.

The AM group at the Catholic University of Leuven, Belgium has published several influential articles in the area of quality monitoring and control in LPBF, as well as in the general area of AM; a select few of these are cited herewith [32,33,44–47]. The common leitmotif in these prior works is in extracting features from the data of one sensor at a time, typically in terms of a statistical moment (mean, variance) of image-based gray scale values, and correlating these features with controlled flaws based on offline analysis. However, to take these pioneering works in sensing forward into the domain of real-time closed-loop process control, and further to defect correction, there is a need to translate the signals into decisions in real-time. In turn, this work addresses a necessary and critical step to realize real-time decision-making by translating the AM process signatures into a form tractable for build condition monitoring.

Craeghs et al. [47] explained the need for a meltpool imaging system, which is also coupled with sensors capable of monitoring status of process inputs. Although meltpool imaging is valuable for monitoring the local thermal aspects, it is difficult to translate the meltpool information quickly into a corrective action since process dynamics are relatively faster than current technologies for sensor acquisition, processing, and feedback control. In other words, Craeghs et al. [47] recommended that a heterogeneous sensor suite be used for process monitoring PBF processes. The work reported in this paper assesses the fidelity of using different sensors for process monitoring.

For monitoring the meltpool, a photodiode and a complementary metal oxide semiconductor (CMOS) camera, coaxial with the laser and equipped with infrared (IR) filters, are used by Craeghs et al. [47]. This constrains the wavelength of light to the region of 780–950 nm. The upper limit is set at around 1000 nm to block the laser wavelength from entering the detectors. The sampling rate is 10 kHz, which translates to a sample every 100 *μ*m at a scan speed of 1000 mm/s. Using image processing techniques, the authors ascertain the meltpool area and the length-to-width ratio of the meltpool, and use these for tracking the process. They found that these meltpool features are related to defects such as balling; however, the statistical significance of these studies has not been reported [48,49].
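The cited sampling arithmetic (one sample per 100 *μ*m at 10 kHz and 1000 mm/s) can be verified with a one-line helper. This is purely illustrative; the function name is ours:

```python
def sample_spacing_um(scan_speed_mm_s, sampling_rate_hz):
    """Distance (in micrometers) the laser travels between
    successive samples: spacing = v / f_s, converted mm → μm."""
    return scan_speed_mm_s / sampling_rate_hz * 1000.0

# At 1000 mm/s and 10 kHz sampling, one sample per 100 μm, as cited
print(sample_spacing_um(1000, 10_000))  # → 100.0
```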

Chivel and Smurov [50] implemented a coaxial charge coupled device (CCD) camera (perpendicular to the powder bed, through the optical track of the machine) and a two-color pyrometer (900 nm and 1700 nm) setup to monitor the meltpool morphology (100 *μ*m local focal diameter) and temperature in the powder bed fusion process. The temperature distribution and intensity of the meltpool (from processing the CCD camera data) are correlated with the laser power. A linear trend between laser power at three levels (50 W, 100 W, 150 W) and meltpool surface temperature is observed (viz., between approximately 1800 °C and 2000 °C). In work predating Chivel and Smurov [50], Veiko et al. [51] used a similar setup with an IR camera along with a pyrometer with an active wavelength of 1000–1500 nm mounted on a laser powder bed fusion machine. Pyrometer readings are obtained over time for different layer thickness and hatch spacing settings. The IR camera is used to monitor the dynamics of meltpool particles and spatter patterns as they interact with the laser beam.

Two recent reports by Sigma Labs describe a heterogeneous sensing system to relate the thermal aspects of the LPBF process to physical properties of the part, namely, the part density (porosity) [29,30]. One of these reports describes a hardware system incorporating four in situ sensors, consisting of two photodetectors, one pyrometer, and one position sensor, to map the sensor signatures vis-à-vis the density of titanium alloy samples made under different laser power and velocity conditions [30]. The connection between the sensor signatures and part density is made via a trademarked proprietary metric called thermal emission density (TED™). The TED metric is reported to have a nearly one-to-one correlation with the part density. While this work demonstrates the efficacy of and need for combining data from multiple sensors for online monitoring, the mathematical details of the data fusion process are not revealed, and the statistical error is not assessed.

### Layer-Wise Imaging or Staring Configuration Systems in Powder Bed Fusion.

Jacobsmuhlen et al. [13] implemented an image-based monitoring approach specifically for detecting super-elevation effects in builds. Builds are said to be super-elevated if the prior solidified layers protrude out of a freshly deposited powder bed due to distortion. Super-elevated builds will cause the recoater to make contact with the part as the powder is raked across the bed, leading to damage to the part and/or the recoater. To detect this condition, Jacobsmuhlen et al. coupled a CCD camera with a tilt shift lens and mounted the camera assembly on a geared head. This setup has the ability to traverse the camera in three axes, and the tilt shift lens allows correction of perspective distortions and enables the camera to maintain focus on the powder bed.

The central theme of the work of Jacobsmuhlen et al. is to visually detect these super-elevated regions and compare the results with a reference, which will eventually allow adjustment of process parameters, such as laser power and hatch spacing. The experimental results of Jacobsmuhlen et al. indicate that super-elevations can be reduced by decreasing laser power and increasing hatch distance. By detecting the occurrence of super-elevation at an early stage, the layer height can be corrected, or the build can be canceled. The drawback of the cited work is that the analysis relies on image processing techniques, namely, the Hough transform and areal operations on images (connectivity thresholding), which are exceedingly sensitive to image processing-related parameters. The ability to translate these image processing techniques to different build geometries and defects remains to be ascertained.

In a recent work, Cheng et al. used a near infrared thermal camera to correlate the effect of laser scan speed and layer height on the meltpool dimensions during LPBF of Inconel 718 material [52]. The intent is to use these meltpool measurements to monitor the build condition. While the meltpool length and width are reported to change with the laser scan velocity (at three levels: 400 mm/s, 600 mm/s, and 800 mm/s), the effect of layer height on meltpool dimensions is reported to be negligible. While very valuable and foundational toward understanding the effect of process conditions on meltpool dynamics in LPBF, this study by Cheng et al. uses a rectangular test coupon devoid of specific features as the test artifact. Furthermore, the test artifact is not examined for defects, such as porosity, which may result from changes in the scan velocity. This is because the energy density (called the Andrew number) is inversely proportional to the laser velocity, and at low energy density levels the powder particles may fail to fuse together and consequently lead to porosity.

Krauss et al. [53,54] incorporated a microbolometer-type infrared camera operating in the long wave infrared region, specifically in the 8000–14,000 nm range. The IR camera is mounted on the outside of the build chamber and looks down on the powder chamber at an angle of 45 deg through a germanium window. This setup allows measurement of a larger area of the powder bed, as opposed to the small local areas covered by coaxial measurement systems. The central theme of the authors' work is to obtain the area and morphology of the heat affected zone. They correlate changes in process parameters, such as laser power, scan velocity, hatch distance, and layer thickness, with the meltpool area and aspect ratio (length-to-width ratio). These correlations serve as the basis on which build quality can be monitored. For instance, the authors deliberately induced large flaws (voids) in the build, as opposed to pores that typically occur in the 20–100 *μ*m range. The measured meltpool morphology during the defective build with induced voids is compared with an ideal state. A significant difference is reported in the irradiance profile recorded for the ideal build versus the defective build.

To reiterate, the practical applicability of these pioneering and early works is overshadowed by the offline analysis of data from a single sensor. To realize the qualify-as-you-build paradigm in AM, these foregoing studies should be coupled with emerging machine learning techniques from the big data analytics domain that can combine data from multiple sensors.

## Experimental Setup and Data Acquisition

### Measurement System and Test Artifact.

This section describes the sensor suite instrumented on a commercial LPBF machine (EOS M270) at NIST. The machine was integrated with three types of sensors, namely, a short wave infrared thermal camera, a high-speed visible camera, and a photodetector. Table 1 summarizes the location and relevant specifications of the sensors. The SWIR thermal camera and photodetector capture the thermal aspects of the meltpool, whereas the high-speed video camera captures its shape and surrounding spatter pattern. Photodetector data were acquired at a sampling rate of 1 MHz, in addition to frame pulses from each camera indicating the time each frame is acquired. Figures 3 and 4 show the schematic and actual implementation of the setup, respectively. A detailed explanation of the setup is available in Refs. [55] and [56].

The test artifact, which is made from nickel alloy 625 (tradename Inconel 625, UNS designation N06625), has an overhang of 40.5 deg and does not include a support structure. In this work, sensor information is analyzed at three example build heights, namely, 6.06 mm, 7.90 mm, and 9.70 mm. These example layers include formation of the overhang structure. The process parameters are shown in Table 2. The overarching aim is to distinguish the thermal patterns that emerge during melting of the overhang.

The overhang here is specifically defined as being the last two scan vectors prior to or just after forming the edge, not including the pre- or postcontour scan as shown in Fig. 5. The rest of the scans, apart from the pre- or postcontour scans, are considered to belong to the bulk volume of the part. A stripe pattern scan strategy is used and shown in Figs. 5(c) and 5(d); hence the laser scans along the overhang four times (four stripes) for each layer past 4 mm build height. The stripe orientation shifts 90 deg between layers, and the three example layers demonstrate vertical stripe pattern such that each scan vector within a stripe is horizontally aligned with the thermal camera field of view.

Admittedly, the part design studied herein is a simple unsupported overhang geometry, bereft of the complex geometrical features that can be created with LPBF. The test artifact shown in Fig. 5 was chosen by researchers at NIST to study the physical aspects of the meltpool when building overhang geometries, so that the thermal phenomena can be explained using physical modeling. The relatively compact dimensions and tractable geometry of this test artifact allow researchers at NIST to avoid defocusing concerns with the infrared camera; the precision of the thermal measurements would be deleteriously affected if a large object were observed, given that the field of view of the thermal camera is limited. In other words, the sensors used in this study are not coaxial with the laser but are in the staring configuration; hence, if a bigger and more complex object were monitored, the details of the meltpool shape would be occluded due to blurring as the field of view is increased.

We reiterate that this work takes the first step in a series of forthcoming research efforts that will focus on sensor-based monitoring of defects in AM using spectral graph theory. At the time of this writing, one paper that uses the photodetector sensor data to detect material cross-contamination in LPBF has been accepted in this journal. A second paper using gray-scale static imaging of the powder bed to detect porosity in LPBF is currently under review. Both these articles apply spectral graph theoretic data analysis techniques. A more concise version of these papers has been accepted for publication in the proceedings of the 2018 ASME Manufacturing Science and Engineering Conference (MSEC) [57,58].

### Visualization of the Representative Data Acquired.

This section describes the qualitative differences in the three types of sensor data acquired while scanning the overhang and bulk features.

#### Thermal Camera Images.

Thermal video files were captured as raw 14-bit digitized data. These images are preprocessed and converted to radiance temperature values through a calibration procedure described in Ref. [59]. Radiance temperature, not to be confused with true temperature, is the equivalent temperature measured if the emitting surface has an emissivity of *ε* = 1. The image pixel values are multiplied by a factor of 10 and then stored as unsigned 16-bit integers to reduce the file size; hence, the stored values have a numerical precision of 0.1 °C. Each thermal frame is a two-dimensional (2D) matrix of 128 pixels × 360 pixels. The data captured in a frame are an average over 40 *μ*s of data, which is related to the integration time (or shutter speed) of the camera. In this work, analysis is conducted on a binary transformation of the thermal images, because the temperature recorded by the thermal camera is a radiance temperature, which has not been corrected using emissivity values to obtain the true thermodynamic surface temperature. However, this does not inhibit the described analysis techniques from observing the relative effect of build conditions on the thermal video signal.
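Under the storage scheme just described, recovering radiance temperature and forming the binary image used in the analysis might look like the following sketch. The threshold value is a hypothetical placeholder; the paper does not state the one used:

```python
import numpy as np

def decode_thermal_frame(raw_counts, threshold_c=550.0):
    """Decode a stored SWIR frame and binarize it.

    raw_counts : (128, 360) uint16 array of radiance temperature × 10,
                 per the storage scheme described in the text.
    threshold_c: radiance-temperature threshold in °C (hypothetical
                 value, for illustration only).
    """
    temp_c = raw_counts.astype(np.float64) / 10.0  # undo the ×10 scaling
    return temp_c, (temp_c >= threshold_c)

# Synthetic frame: every pixel stored as 5500 counts, i.e., 550.0 °C
frame = np.full((128, 360), 5500, dtype=np.uint16)
temp, mask = decode_thermal_frame(frame)
print(temp[0, 0], mask.all())  # → 550.0 True
```

Dividing by 10 recovers the 0.1 °C precision of the stored values; the binary mask is what the downstream graph construction would operate on.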

For example, the meltpool images taken with the SWIR thermal camera while scanning the bulk and overhang sections of the test artifact used in this work are shown in Figs. 6(a) and 6(b), respectively. Figure 6(b) reveals that melting of the overhang section manifests in distinctive meltpool shapes [13,14]; the meltpool for the overhang features is roughly 1.5 times longer than its bulk counterpart. This is likely due to the residual heat in the overhang section stemming from the poor heat flux therein. Consequently, it is posited that correlating the meltpool signature with the build condition facilitates the isolation of process variation.

#### High-Speed Visible Camera Imaging.

The high-speed visible camera images are windowed to 256 pixels × 256 pixels. Images are acquired at 1000 frames per second. Representative images for the overhang and bulk build features are shown in Figs. 7(a) and 7(b), respectively. The difference in the meltpool characteristics between overhang and bulk features in high-speed visible camera images, although discernible, is not as prominent as in the corresponding thermal images shown in Fig. 6.

#### Photodetector Signal (Time Series Data).

The photodetector signal is acquired as a time series sampled at 1 MHz; the response is in voltage. To ensure photodetector and both thermal and visible camera signals can be synchronized during analysis, both photodetector raw signal and frame pulses (a 5 V square pulse indicating when a frame is captured) from the camera are collected on the same data acquisition system.

Furthermore, in analysis of the photodetector signal, the number of data points corresponding to the framerate of the thermal camera must be taken into consideration. This is obtained by dividing the sampling rate of the photodetector (1 MHz) by the framerate of the thermal camera (1800 frames per second). This equates to 555 data points (roughly 555 *μ*s) measured by the photodetector within one frame period of the thermal camera. A representative trace juxtaposing the photodetector signal for the overhang and bulk build features is shown in Fig. 8(a). A spike in the photodetector signal for the overhang condition is observed. Some typical difficulties with using existing statistical signal processing approaches in the context of the LPBF photodetector sensor data from this work are exemplified in Fig. 8.
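A minimal sketch of this bookkeeping, assuming the frame pulses have already been reduced to sample indices (the function and variable names are ours), is:

```python
import numpy as np

PD_RATE_HZ = 1_000_000  # photodetector sampling rate (1 MHz)
SWIR_FPS = 1800         # thermal camera frame rate

def segment_by_frame(pd_signal, frame_starts, samples_per_frame=None):
    """Group photodetector samples by thermal-camera frame.

    frame_starts: sample indices where a camera frame pulse was seen
                  (hypothetical preprocessing of the 5 V pulse train).
    Returns a list of 1D arrays, one per frame.
    """
    if samples_per_frame is None:
        samples_per_frame = PD_RATE_HZ // SWIR_FPS  # 555 samples/frame
    return [pd_signal[s:s + samples_per_frame] for s in frame_starts]

# Synthetic trace and three evenly spaced frame pulses
pd = np.arange(2000, dtype=float)
chunks = segment_by_frame(pd, frame_starts=[0, 555, 1110])
print(len(chunks), len(chunks[0]))  # → 3 555
```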

Figure 8(b) shows the Fourier transform of the photodetector time series for the overhang and bulk features described in Fig. 8(a). The difference in the spectral profile of the signal for the two build conditions, i.e., overhang and bulk, is scarcely distinguishable; only one clear peak was observed despite the high sampling rate (1 MHz). Analysis of the power spectrum revealed that the two build states were not statistically distinguishable.
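For reference, a one-sided power spectrum of the kind compared in Fig. 8(b) can be computed with NumPy as sketched below. A synthetic tone stands in for the actual photodetector data, so the tone frequency and noise level are our assumptions:

```python
import numpy as np

def power_spectrum(x, fs=1_000_000):
    """One-sided power spectrum of a photodetector trace."""
    x = np.asarray(x, dtype=float)
    X = np.fft.rfft(x - x.mean())  # remove the DC offset first
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, np.abs(X) ** 2 / len(x)

# Synthetic 555-sample trace: one tone at 50 kHz plus mild noise
rng = np.random.default_rng(0)
t = np.arange(555) / 1_000_000
x = np.sin(2 * np.pi * 50_000 * t) + 0.1 * rng.standard_normal(555)
freqs, p = power_spectrum(x)
print(freqs[np.argmax(p)])  # dominant peak near 50 kHz
```

With only 555 samples per frame, the frequency resolution is about 1.8 kHz, which helps explain why subtle differences between build states are hard to resolve in the spectral domain.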

The cumulative probability distribution of the photodetector trace for the overhang and bulk features over several frames (of 555 data points each) is mapped in Fig. 8(c). The large shifts in the distribution shape and spread over different frames, evocative of the inherent nonstationarity of the LPBF process, curtail any attempt to fit a fixed parametric statistical distribution to the data.
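Frame-wise distribution comparisons of the kind shown in Fig. 8(c) can be reproduced with an empirical cumulative distribution function. The sketch below uses synthetic data; the normal parameters standing in for the bulk and overhang frames are hypothetical:

```python
import numpy as np

def ecdf(x):
    """Empirical CDF of one frame of photodetector samples."""
    xs = np.sort(np.asarray(x, dtype=float))
    ps = np.arange(1, len(xs) + 1) / len(xs)
    return xs, ps

# Two hypothetical 555-sample frames with different spread
rng = np.random.default_rng(1)
bulk = rng.normal(1.0, 0.05, 555)
overhang = rng.normal(1.2, 0.20, 555)
(_, p1), (_, p2) = ecdf(bulk), ecdf(overhang)
print(p1[-1] == 1.0 and p2[-1] == 1.0)  # → True (both CDFs reach 1)
```

Because the ECDF makes no parametric assumption, it remains usable even when the shape and spread drift from frame to frame, as the text observes.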

## Proposed Methodology

The aim of this section is to develop a spectral graph theoretic approach for analysis of multidimensional signals. This approach is used later to capture the differences in the thermal signatures during the melting of the overhang and bulk features of the test artifact shown in Fig. 5. Application of graph theoretic approaches for signal processing is a nascent domain with recent notable review articles by Hammond et al. [60], Sandryhaila and Moura [61], and Shuman et al. [62,63]. Niyogi et al. in a series of seminal articles proposed embedding high dimensional data as an undirected graph and subsequently projecting the data into the Eigenvector space of the graph Laplacian [64–66].

### Previous Work in Spectral Graph Theory by the Authors.

This work builds upon the authors' previous research in spectral graph theory for manufacturing applications [67–72]. These previous works are enumerated below:

- (1) The authors used spectral graph theory to differentiate between different types of surfaces in the ultraprecision semiconductor chemical mechanical planarization process [67]. The spectral graph theoretic invariant Fiedler number ($\lambda_2$), viz., the second Eigenvalue of the spectral graph Laplacian matrix (described later in Sec. 4.2, Eqs. (9) and (10)), was used as a discriminant to track changes in the surface that were not detected using statistical surface roughness parameters [67].
- (2) The preceding work was extended to online monitoring of surface finish in conventional machining. A CCD camera was used to take images of a rotating shaft as it was being machined. The machined surface images were analyzed online, and the Fiedler number ($\lambda_2$) was correlated with the surface roughness [68].
- (3) The spectral graph theoretic approach was used for detection of change points from sensor data. The Fiedler number ($\lambda_2$) from different types of planar graphs was monitored using a multivariate control chart to capture the onset of anomalous process conditions in ultraprecision machining and chemical mechanical planarization processes [70].
- (4) The Fiedler number was used to differentiate the geometric integrity of fused filament fabrication AM parts made using different materials [71] based on laser-scanned point cloud data. This work was further extended to parts made under different fused filament fabrication conditions using several spectral graph Laplacian Eigenvalues, not just the Fiedler number [69,72].
This work differs from the authors' previous works in the following manner. It is the first to report the application of Laplacian Eigenvectors for diagnosis of process conditions in AM. The approach is integrated within a learning framework for online monitoring of process conditions; none of the previous studies by the authors had an online learning capability for state detection from sensor signals. This is not a trivial extension, because the Laplacian Eigenvectors present a multidimensional challenge to classification. Furthermore, the previous works were based on converting a signal into an unweighted and undirected graph. This required thresholding functions, which in turn lead to loss of information. In this work, such a threshold is not required, as the graph constructed is of the weighted and undirected type. A brief overview of the approach is provided in the forthcoming Sec. 4.2.

### Overview of the Approach.

Before describing the mathematical intricacies of the approach, a high-level overview is provided. The mathematical convention is to denote matrices and vectors with bold typesets. Suppose a sequence of sensor data, $\mathbf{X}$ (time series or images), is gathered from a process. Further, consider that the process manifests in $n$ different known process conditions or build states labeled $s_1, s_2, \ldots, s_i, \ldots, s_n$. In LPBF, these states could refer to different process conditions, such as melting of bulk, overhang, or thin sections. This allows the sensor data $\mathbf{X}$ associated with each condition $s_i$ to be represented with the symbol $\mathbf{x}_i$. The aim is to identify the system state $s_i$ from which an unlabeled signal $\mathbf{y}$ is observed; i.e., if a signal $\mathbf{y}$ is observed, the purpose is to find the process condition $i$ to which it belongs. From the LPBF perspective, for instance, the intent is to conclude from one frame of the high-speed video camera whether there is an impending build failure, or, given a photodetector signal sample, to infer whether the onset of distortion is imminent. The signal $\mathbf{x}_i$ can take various forms depending on the type of sensor data acquired.

- Temporal data $\mathbf{x}_i^{m \times d}$: Each column of $\mathbf{x}_i$ corresponds to one sensor, and each row is a measurement at time $t = 1 \ldots m$ for the $d$ sensors; each element $a_{tj}$ is the data point for sensor $j = 1 \ldots d$ at time instant $t$. In the context of LPBF, this matrix could represent multiple photodetector signals acquired simultaneously, where each column of $\mathbf{x}_i$ is the data from one photodetector. It is restated that $\mathbf{x}_i$ is associated with a specific process state $s_i$:
$$\mathbf{x}_i = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1d} \\ \vdots & \ddots & & \vdots \\ a_{t1} & & \ddots & a_{td} \\ \vdots & & & \vdots \\ a_{m1} & \cdots & \cdots & a_{md} \end{bmatrix} \tag{1}$$
- Spatiotemporal data, such as from a high-speed visible camera or thermal camera, where each $\mathbf{x}_i^t$ is an image frame captured at time instant $t$ for a state $i$. The matrix $\mathbf{x}_i$ must be further qualified with a time index $t$ because data are acquired in discrete frames. Thermal and video camera data are in such a format; the signal in this instance is a three-dimensional array. Each $\mathbf{x}_i^t$ is an array of image pixels. For a frame of a thermal camera image, each pixel corresponds to the intensity of light converted to a radiant temperature value using the thermal calibration; for the high-speed video camera, each pixel records the intensity of light.
- Purely spatial point cloud data, where $\mathbf{x}_i$ contains information of the coordinate locations. An example is 3D point cloud data, such as those obtained from a laser or structured light scanner. This information is obtained as spatial coordinate-indexed information [69].

The approach involves the following three broad steps (see Fig. 9); the detailed steps and mathematics are explained later.

*Step 1*: Transform the signals $\mathbf{x}_i$ corresponding to each prelabeled state $s_i$ into an undirected, weighted network graph $G_i(V, E, W)$, where $V$ and $E$ are the vertices and edges of the graph, and $W$ is the set of edge weights.

*Step 2:* The spectral graph Laplacian matrix $L_i$ is computed from the graph $G_i$. The first $n$ nonzero graph Laplacian Eigenvectors $v_i$ are used as an orthogonal basis set corresponding to the process state $s_i$.

*Step 3:* Each $\mathbf{x}_i$ is decomposed by taking an inner product $\mathbf{x}_i^T \cdot v_i$, akin to a Fourier transform, into a set of coefficients $c_i$ called graph Fourier coefficients.

The graph Fourier coefficients are written in block matrix form as $C = [c_1^T\ c_2^T \cdots c_i^T \cdots c_n^T]$, corresponding to the different states $s_1, s_2, \ldots, s_i, \ldots, s_n$. The matrix $C$ is called the *dictionary*.

Given an unlabeled signal $\mathbf{y}$, an inner product $p_i = \mathbf{y}^T \cdot v_i$ is taken with each of the $n$ basis vector sets one at a time, where $n$ is the number of different states. The matrices $p_i^T$ are called the candidate coefficients. Each $p_i^T$ is compared with the corresponding $c_i^T$ in the dictionary $C$ in terms of the squared error $e_i$. The comparison resulting in the least error designates the state of $\mathbf{y}$.

The advantages of the approach are as follows:

- (1)
The graph Fourier transform eschews intermediate signal filtering steps and accommodates multidimensional signals. It does not require mining statistical features, such as mean, standard deviation, etc., from the data. Hence, the presented approach is feature-free. Given an unlabeled signal $y$ belonging to an unknown state $si$, a computationally simple inner product is needed for classification. This is apt for online monitoring applications.

- (2)
The approach does not require a priori defined basis functions akin to the sinusoidal basis for the Fourier transform; nor does it rely on a predefined probability distribution as in typical stochastic modeling schemas; and finally, the need for a rigid model structure is eliminated, e.g., number of hidden layers and nodes in a neural network.

The disadvantages of this approach are:

- (1)
As with all supervised classification models, a prelabeled data set is needed.

- (2)
All the sensor data $\mathbf{x}_i^{m \times d}$, if they are from temporal sensors, must have the same sampling rate. This assumption can be relaxed by signal smoothing steps.

Frequently used symbols and notations are listed in Table 3. Each of the three steps of the approach is next described in detail.

#### Step 1: Converting a Signal Into a Network Graph.

The aim of this step is to represent a sequence $\mathbf{X}$ of sensor data (time series, images) as a weighted, undirected network graph $G$, i.e., achieve the mapping $\mathbf{X} \mapsto G(V, E, W)$ with nodes (vertices) $V$, edges (links) $E$, and edge weights $W$. The graph $G(V, E, W)$ is a lower-dimensional representation of $\mathbf{X}$. Consider an *m*-data-point-long signal $\mathbf{x}_i$ corresponding to a known state $s_i$, $i = 1 \ldots n$, as per the matrix shown in Eq. (1).

Suppose $\mathbf{x}_q$ and $\mathbf{x}_r$ are two rows of the signal window $\mathbf{x}_i^p$; the weight connecting a node $q$ with another node $r$ is $w_{qr}$. It is apparent that the topology of the graph $G$ depends on the kernel $\Omega$. In this work, the Mahalanobis kernel, Eq. (5), with $C$ as the variance–covariance matrix, is used exclusively. The mathematical implication of using the Mahalanobis kernel is as follows:

In other words, given two data points $\mathbf{x}_q$ and $\mathbf{x}_r$, the more similar $\mathbf{x}_q$ and $\mathbf{x}_r$ are, the *weaker* is the connection between the two. The symmetric *similarity matrix* $S_{k \times k} = [w_{qr}]$ represents a weighted and undirected network graph $G$; each row and column of $S_{k \times k}$ is a vertex $V$ (or node) of the graph, and the relationship between two nodes is indexed by edges, in terms of the connection status $E$ and weight $W$. The graph is then represented as $G \equiv (V, E, W)$. The following notational additions are made: $S_{\mathbf{x}_i^p}$ and $G_{\mathbf{x}_i^p}$, where $\mathbf{x}_i^p$ refers to a specific window $p$ of the signal $\mathbf{x}_i$.

An analogy can be drawn between a network graph and an electrical circuit with resistors; indeed, there is an equivalence in the literature between the Laplacian matrix and the Kirchhoff matrix of electrical circuits [73]. The node $V$ of a graph corresponds to a node, or common point, in the circuit; the edge $E$ of the graph is a branch in the circuit; and the resistance on the branch is the weight $W$. The smaller the weight of the edge connecting two nodes, the smaller is the resistance between them.

Knowing that electric current takes the path of least resistance, an electrical network can be characterized in terms of the path taken by the current; if the resistance along a branch changes, the path taken may also change. Hence, by tracking changes in the path taken by the current, drastic changes that may have occurred in the circuit can be detected. This very idea carries over to the presented approach: a signal is redrawn as a graph, and the different paths on the graph are tracked in terms of the Eigenvectors of the Laplacian matrix.
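The circuit analogy can be made concrete with a toy graph whose edge weights play the role of branch resistances. In this illustrative sketch (not part of the paper's method), perturbing a single branch changes the least-resistance path between two nodes, which is the kind of structural change the Laplacian Eigenspectrum is meant to register:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

# Toy 4-node graph: weights play the role of branch resistances.
# Zero entries denote the absence of an edge.
W = np.array([[0., 1., 4., 0.],
              [1., 0., 1., 5.],
              [4., 1., 0., 1.],
              [0., 5., 1., 0.]])

d1 = shortest_path(W)
print(d1[0, 3])   # least-resistance path 0-1-2-3: 1 + 1 + 1 = 3.0

# Raise the "resistance" of the 1-2 branch; the preferred path changes.
W2 = W.copy()
W2[1, 2] = W2[2, 1] = 10.0
d2 = shortest_path(W2)
print(d2[0, 3])   # path 0-2-3 now wins: 4 + 1 = 5.0
```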

#### Step 2: Extracting Topological Information From the Graph.

First, the *degree* $d_q$ of a node $q$, $q \in \{1 \ldots k\}$, is computed, which is a count of the number of edges incident upon the node. The node degree is the sum of each row of the similarity matrix $S_{k \times k}$, and the diagonal *degree matrix* $D$ structured from $d_q$ is obtained as follows:

In other words, the information in the signal $\mathbf{X}$ is captured in the form of the Eigenvectors ($v$) and Eigenvalues ($\lambda^*$) of the Laplacian matrix.
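The degree matrix and Laplacian construction above can be sketched directly. The similarity matrix below is a made-up 3-node example; the decomposition shows the standard property that the smallest Laplacian Eigenvalue is zero:

```python
import numpy as np

# A small, made-up symmetric similarity matrix S (weights w_qr), zero diagonal.
S = np.array([[0., 2., 1.],
              [2., 0., 3.],
              [1., 3., 0.]])

d = S.sum(axis=1)           # node degrees d_q = row sums of S
D = np.diag(d)              # diagonal degree matrix
L = D - S                   # combinatorial graph Laplacian
lam, v = np.linalg.eigh(L)  # Eigenvalues lambda*, Eigenvectors v (ascending)

print(np.round(lam, 6))     # the smallest Eigenvalue is 0
```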

#### Step 3: Classification of Process States.

The aim of this step is to classify the process state $s_i$, given a signal $\mathbf{y}$. For instance, given a frame of the thermal image, the intent is to ascertain if there is an impending build fault. This is a type of supervised classification approach, where a set of labeled data is assumed to exist a priori. This presumption of labeled data is one of the disadvantages of this approach, and it will be relaxed with new graph theoretic unsupervised learning approaches in the authors' future work.

*Step 3.1:* This step applies the graph transform from Eq. (12) to the signal $\mathbf{x}_i$ corresponding to a state $s_i$, as follows, where $h$ is the number of windows in the signal.

This means that the signal $\mathbf{x}_i^p$ corresponding to a state $s_i$, at window $p$, is associated with a Laplacian Eigenvector basis $v_{\mathbf{x}_i^p}$ through the spectral graph transform. Each $v_{\mathbf{x}_i^p}$ is a *k*-long column vector.

*Step 3.2:* Next, the aim is to learn a single universal basis $V_{s_i}$ for a state $s_i$ as the data are continuously acquired (consider that the signal $\mathbf{x}_i$ arrives in discrete chunks, one window at a time). This is done through a simple update schema, akin to the delta update rule frequently used in machine learning [74]. For each window, the basis vectors are updated as follows:

The update is initialized with $V_{\mathbf{x}_i^1} = v_{\mathbf{x}_i^1}$, with $\Delta$ set to a small value ($\Delta = 0.01$ in this work). To make the process computationally simpler, only a smaller subset of the Laplacian Eigenvectors is updated; typically, the first ten nonzero Eigenvectors of the Laplacian $L_{\mathbf{x}_i^p}$ were found to be adequate. Hence, the universal basis $V_{s_i}^{k \times n}$ is the matrix obtained when $V_{\mathbf{x}_i}$ converges, that is, $V_{s_i} = V_{\mathbf{x}_i^h}$, where $n$ is the number of nonzero Eigenvectors updated.
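A delta-style basis update of this kind can be sketched as follows. The exact update form and the QR re-orthonormalization step are assumptions for illustration (the paper gives its own update equation); random orthonormal matrices stand in for the per-window Eigenvector bases:

```python
import numpy as np

def update_basis(V, v_new, delta=0.01):
    """Delta-rule update of the universal basis V_si (illustrative form only;
    the paper's exact update equation may differ). The basis drifts a small
    step toward the eigenvectors of each newly arrived window."""
    V = V + delta * (v_new - V)
    # Re-orthonormalize so V remains a valid orthogonal basis (QR is one
    # simple choice; this step is an assumption, not from the paper).
    Q, _ = np.linalg.qr(V)
    return Q

# Usage: initialize with the first window's Eigenvectors, then stream updates.
k, n = 100, 10                               # window length, basis size
rng = np.random.default_rng(0)
V = np.linalg.qr(rng.random((k, n)))[0]      # stand-in for v_{x_i^1}
for _ in range(5):                           # five incoming windows
    v_p = np.linalg.qr(rng.random((k, n)))[0]
    V = update_basis(V, v_p)
print(V.shape)                               # (100, 10)
```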

*Step 3.3:* The spectral graph Fourier transform, which is analogous to the discrete Fourier transform, is now defined. A spectral graph Fourier transform $\hat{G}(\cdot)$ on a signal $\mathbf{X}_{N \times 1}$ (consider $d = 1$ for simplicity) can be defined, assuming that the Laplacian matrix ($L$) is not defective, i.e., the graph has no isolated nodes, as follows:

*Step 3.4:* If this procedure is repeated for each of the $n$ systems $s_1 \cdots s_n$, then a dictionary of coefficients can be formed, written in block matrix form $C_{h \times n}$ and partitioned by $c_{1,s_i}^T$, each of which has dimensions $n \times d$.

*Step 3.5:* The next step is to compare each of the candidate block matrices $p_{s_i}$ with the dictionary of coefficients $c_{p,s_i}$ in Eq. (17) having the corresponding label $s_i$. In other words, find the error between $p_{s_i}$ and the corresponding $c_{p,s_i}\ \forall p$. This is done by taking the sum of squared errors.

The label assigned to $\mathbf{y}$ is the one with the minimum sum of squared errors, i.e., $\arg\min_{s_i} e_{s_i}$. Having described the mechanics of the approach, the underlying mathematical intuition is elucidated in Sec. 4.3.
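The minimum-squared-error labeling rule of Step 3.5 reduces to a few lines. The coefficient vectors below are made-up toy values, not dictionary entries from the paper:

```python
import numpy as np

def assign_label(p_candidates, dictionary):
    """Step 3.5: squared-error match of candidate coefficients against the
    dictionary; the state with minimum SSE is the assigned label."""
    errors = [np.sum((p - c) ** 2) for p, c in zip(p_candidates, dictionary)]
    return int(np.argmin(errors)), errors

# Toy dictionary entries for two states, and a candidate near state s2's entry.
c1 = np.array([1.0, 0.0, 0.0])
c2 = np.array([0.0, 1.0, 1.0])
p = np.array([0.1, 0.9, 1.05])

label, errs = assign_label([p, p], [c1, c2])
print(label)   # 1  (zero-indexed: state s2)
```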

### Mathematical Rationale for the Spectral Graph Theoretic Approach.

Two mathematical justifications as to why the Laplacian Eigenvectors are appropriate quantifiers for monitoring the process states are tendered:

- (a) An analogy with the Fourier transform from statistical signal processing is proffered.
- (b) An explanation is given from the network topology perspective.

#### A Justification From the Signal Processing Viewpoint.

The following properties of the normalized Laplacian matrix $L_n$ are important. Because $L_n$ is a diagonally dominant symmetric matrix with nonpositive off-diagonal elements (called a Stieltjes matrix) [75], the following properties hold:

- (1)
$L$ is symmetric positive semi-definite, i.e., $L\u22650$.

- (2)
The Eigenvectors of $L$ are orthonormal to each other, i.e., $v_1 \perp v_2 \cdots \perp \cdots v_k$; $\langle v_p, v_q \rangle = 0$; $\langle v_p, v_p \rangle = 1$, where $v_p$ is an individual Eigenvector. The first Eigenvector $v_1$ is a vector of ones (up to scaling).

In other words, the so-called graph Fourier coefficients $c_i$ are multiples of the Eigenvalues $\lambda^*$ of the Laplacian. In summary, a mapping $\mathbf{X} \mapsto L_X(\lambda^*, v)$ can be achieved whose dynamics are characterized using the Laplacian Eigenvectors ($v$). Instead of tracking statistical features of the signal in the time and frequency domains, the proposed graph theoretic approach entails monitoring the topology of the network graph ($G$) in terms of the Laplacian Eigenvectors ($v$).
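The stated properties can be verified numerically on any small graph. The check below uses the combinatorial Laplacian $L = D - S$ of a random fully connected graph for simplicity (the section discusses the normalized variant, whose positive semi-definiteness and orthonormal Eigenvectors are the same):

```python
import numpy as np

# Random symmetric similarity matrix -> Laplacian -> Eigendecomposition.
rng = np.random.default_rng(0)
A = rng.random((6, 6))
S = (A + A.T) / 2
np.fill_diagonal(S, 0)
L = np.diag(S.sum(axis=1)) - S
lam, v = np.linalg.eigh(L)

# Property (1): L is positive semi-definite (all Eigenvalues >= 0).
print(bool(np.all(lam > -1e-10)))
# Property (2): the Eigenvectors are orthonormal, <v_p, v_q> = delta_pq.
print(bool(np.allclose(v.T @ v, np.eye(6))))
# The first Eigenvector is constant (proportional to the all-ones vector).
print(bool(np.allclose(v[:, 0], v[0, 0])))
```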

#### A Justification From the Network Topology Viewpoint.

This section provides the mathematical rationale, from the geometric topology viewpoint, for using the Laplacian Eigenspectrum $(\lambda^*, v)$. The first justification in the literature is due to Belkin and Niyogi [64,65], who substantiated the intuition that the graph Laplacian indeed captures the complex spatiotemporal dynamics of high dimensional data in a low dimensional space, namely, the graph $G(V, E, W)$, based on the theory of Laplace–Beltrami operators on Riemannian manifolds. Elucidating this justification is beyond the scope of this work.

The second justification is motivated from the spectral graph segmentation area. It is based on the normalized Laplacian, and was proved by Shi and Malik [77], who showed that the Laplacian Eigenvector $v_2$ (the Fiedler vector) is the most efficient means to partition a graph $G \equiv (V, E, W)$. The cost of partitioning a graph is measured by the number of edges that must be severed to cut it into two. The Eigenvector $v_2$ gives the shortest way to partition a graph (severing the least number of edges); the Eigenvector $v_3$ gives a longer cut, and so on ($v_1$ is merely a vector of ones, and corresponds to an Eigenvalue of 0). In other words, the Laplacian Eigenvectors and Eigenvalues are not merely statistics, but are *topological invariants* that are representative of the signal structure in the graph space.

Therefore, the Fiedler vector ($v2$) solves the graph segmentation (cutting) problem, with Fiedler number ($\lambda 2$) as the minimum attained [77]. The highest eigenvalue ($\lambda k$) is the maxima. Thus, the Laplacian eigenvectors are linked to the inherent structure in the signal.
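The graph-cutting role of the Fiedler vector can be seen on a toy example. The sketch below uses similarity-style weights (larger weight = stronger tie, the usual spectral-cut convention, which differs from the distance-style weights of Eq. (5)): two tight three-node clusters joined by one weak edge, where the sign pattern of $v_2$ recovers the natural two-way cut:

```python
import numpy as np

# Two dense clusters {0,1,2} and {3,4,5} joined by a single weak bridge.
S = np.zeros((6, 6))
for i in range(3):
    for j in range(3):
        if i != j:
            S[i, j] = S[i + 3, j + 3] = 1.0   # strong intra-cluster edges
S[2, 3] = S[3, 2] = 0.05                      # weak bridge between clusters

L = np.diag(S.sum(axis=1)) - S
lam, v = np.linalg.eigh(L)
fiedler = v[:, 1]                 # Fiedler vector v2 (Eigenvalue lambda_2)
partition = fiedler > 0
print(partition)                  # nodes 0-2 on one side, 3-5 on the other
```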

### Application of the Approach to Synthetically Generated Signals.

The following procedure is used: four different levels of Gaussian white noise (*η*) are added to the system, *η* = {0, 5%, 10%, 20%}. From each of the four systems, 125 samples, each 20,000 data points long, are selected. Referring to Eq. (1), the dimensions are *d* = 3 for the Rossler system and *m* = 20,000. Three different window sizes of length *k* = 500, 750, and 1000 data points are evaluated. The classification fidelity on applying the graph theoretic approach is recorded in terms of the F-score. The F-score is an aggregate measure of the statistical type I (false alarm) and type II (failure to detect) errors. The higher the value of the F-score, the higher the prediction accuracy, i.e., a high value of the F-score is desirable. The process is repeated five times, i.e., a fivefold replication study. The result from this analysis, in terms of the F-score contingent on the noise level (*η*) and window size, is presented in Table 4.

From Table 4, it is evident that the window size *k* = 750 gives a consistently higher *F*-score. Remarkably, addition of noise to the system does not lead to significant changes in the *F*-score, which underscores the robustness of the proposed approach to noise. The reason a window size of *k* = 750 leads to the best results is that it is neither too short to be afflicted with temporal correlation, nor too large to be affected by noise. The so-called confusion matrix for *k* = 750 is shown in Table 5, along with a sample calculation for the *F*-score. The approach is compared against seven other popular classifiers in Table 6.
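The F-score referred to throughout is the harmonic mean of precision and recall, which aggregates the type I (false alarm) and type II (missed detection) error counts of a confusion matrix. A minimal sketch, with made-up counts rather than Table 5's values:

```python
def f_score(tp, fp, fn):
    """F-score = harmonic mean of precision and recall. fp counts type I
    (false alarm) errors; fn counts type II (failing to detect) errors."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical confusion-matrix counts for illustration only (not Table 5's):
# 90 true positives, 10 false alarms, 10 missed detections.
print(round(f_score(tp=90, fp=10, fn=10), 3))   # 0.9
```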

The inputs to these classifiers are eight statistical moments: mean, median, standard deviation, skewness, kurtosis, minimum, maximum, and interquartile range. These features are extracted for each of the three components, $xt$, $yt$, and $zt$, of the Rossler system, and principal components (capturing 99% of the variation) are used within the seven different machine learning approaches. The results, presented in Table 6, indicate that the proposed approach with Laplacian Eigenvectors outperforms these other approaches.

## Results and Discussion—Application of Spectral Graph Theory to Laser Powder Bed Fusion

The aim of this section is to apply the spectral graph approach described in Sec. 4 to discriminate between the overhang and bulk build conditions. Data from each of the three types of signals, namely, thermal images, high-speed video frames, and photodetector time traces, are analyzed, and their ability to distinguish between the two build conditions (overhang and bulk) is statistically assessed in terms of the *F*-score. A critical parameter that must be determined a priori is the window length *k*. For the thermal and visible camera images, the window size is 1 frame; for the photodetector, the window size was selected to be 555 data points (acquired over a time interval of 555 *μ*s) to correspond to one thermal image frame, as explained earlier.

For the thermal and video images, each pixel row corresponds to a row of the matrix $\mathbf{x}_i$ shown in Eq. (1), whereas the photodetector signal is a column vector. Using Eq. (5), the weight matrix $[w_{qr}]$ is obtained, and the steps in Eqs. (7)–(9) are followed. This gives the Eigen spectrum $(\lambda^*, v)$. The Eigenvalues $\lambda^*$ are plotted to illustrate visually the manner in which the signals for the different build conditions, namely, melting of overhang and bulk features, are distinguishable in the spectral graph domain. These plots are shown in Fig. 11, based on which the following inferences are drawn:

Figure 11(a) traces the second Eigenvalue ($\lambda_2$), also called the Fiedler number, across 5000 thermal camera frames for one layer (9.70 mm layer height) of the process. Distinctive peaks are evident in the plot where the overhang sections are built. The smoothed trend line in the figure was obtained using a seventh-order Savitzky–Golay filter taken over a window of 101 data points to accentuate the patterns in the data.
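Smoothing of this kind is available off the shelf; the sketch below applies a seventh-order Savitzky–Golay filter over a 101-point window, as described for the Fiedler-number trend line, to a synthetic noisy trace (stand-in data, not the actual $\lambda_2$ sequence):

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic stand-in for a noisy per-frame Fiedler-number trace.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 5000)
signal = np.sin(2 * np.pi * 3 * t) + 0.3 * rng.standard_normal(5000)

# Seventh-order polynomial fit over a sliding 101-point window.
trend = savgol_filter(signal, window_length=101, polyorder=7)
print(trend.shape)   # (5000,)
```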

Corresponding to the same 5000 frames in Fig. 11(a), Fig. 11(b) shows the $L_2$ norm of the Eigenvalues ($\lambda^*$), given by $\sqrt{\lambda_2^2 + \lambda_3^2 + \cdots + \lambda_k^2}$, for the photodetector signal; this norm is used because the Fiedler number alone failed to show any clear peaks. The trends are not as visually prominent as those obtained from the thermal camera; indeed, some of the peaks in the photodetector signal do not seem to align with those of the thermal camera. This is most likely due to the sensitivity of the photodetector to the direction of the scan: as the laser melts material nearer to the photodetector, higher amplitude peaks are observed compared to the instances where the laser is farther away. A count of the (periodic) peaks in Fig. 11(b) reveals that they correspond to the number of hatches. Given this variation in the signal characteristics, it is reasonable to expect a lower detection fidelity for the photodetector signal compared to the thermal camera.

Continuing with the analysis, the approach is applied to the data acquired by the three sensors for distinguishing between the overhang and bulk build conditions. The approach is compared against seven other popular machine learning approaches following the procedure described in Sec. 4.4. For brevity, the parameter settings are encapsulated in the Appendix.

Note that for the photodetector signal, the random walk Laplacian of Eq. (11) is used. Table 7 presents the performance of the spectral analysis algorithm for all three types of sensor signals in terms of the *F*-score. Based on Tables 7 and 8, the following inferences are tendered:

- (1) The proposed spectral graph theoretic approach outperforms all the other approaches tested; this holds for all sensing scenarios (Table 7). An *F*-score in the range of 80–95% is possible with the proposed approach, while it is at best 60% with the other approaches, i.e., little better than a random guess.
- (2) The prediction results from the photodetector signal are inferior for the spectral graph theoretic approach compared to the same approach applied to the other sensor signals. Nonetheless, the *F*-score results are within 20% of those of the highest resolution sensor, i.e., the thermal camera. The confusion matrix, based on 250 randomly selected samples—a sample is a frame for the thermal and video images and 555 *μ*s of data for the photodetector—is shown in Table 8.
- (3) The detection fidelity is contingent on the analytical approach used. Even a sensor with the highest spatial resolution, such as a thermal camera, when integrated with an ill-suited analytical approach will lead to poor results. For instance, the thermal camera combined with a linear discriminant classifier has a poor F-score (36%) compared to the visible camera (58%) and photodetector (59%).

## Conclusions and Future Work

This work proposed a spectral graph theoretic approach for monitoring the build condition in LPBF AM process via a sensing array consisting of a photodetector, SWIR thermal camera, and high-speed video camera. The central idea of the approach is to convert the sensor data into a lower dimensional manifold, specifically, a weighted and undirected network graph. Specific conclusions are as follows:

- (1) An LPBF part with a steep overhang feature (40.5 deg) was built without supports. The build was monitored continuously with the aforementioned sensor suite with the intent to detect the difference in signal patterns when the bulk and overhang sections are melted. Extracting and detecting the difference in sensor signatures for such a simple case is the first step toward in situ defect detection in AM. The analysis was extended to more sophisticated machine learning approaches, such as neural networks and support vector machines, among others (Sec. 5). These approaches had a fidelity (*F*-score) for distinguishing between the overhang and bulk states in the vicinity of 40–60%.
- (2) The proposed graph theoretic approach was applied to the sensor data with the intent to distinguish between the overhang and bulk build states; the F-score obtained is in the region of 80–95%, contingent on the type of sensor: *F*-score ∼95% from the short wave infrared thermal camera; *F*-score ∼83% for the high-speed video camera; and *F*-score ∼79% for the photodetector sensor.

These results lead to the following inferences:

To monitor the LPBF process, in-process sensing must be integrated with new and advanced analytical approaches capable of combining data from multiple sensors. Existing approaches, such as neural networks, are ineffective, probably due to their inability to discern the subtle and short-lived indications of an incipient fault, and due to limitations in accommodating heterogeneous sensors.

Although a low-fidelity sensor, such as a photodetector, is not as capable of discriminating between build conditions as a high-fidelity sensor, its detection capability is still within 20% of that of the thermal camera. This limitation may be overcome by using multiple photodetector sensors together.

This work exposes the following unanswered, open research questions, which the authors will endeavor to address in their forthcoming work:

- (1)
What other, more relevant microstructure-level defect types, such as powder contamination, poor fusion, porosity, delamination, etc., may be detected?

- (2)
What is the link between specific defects and sensor signal patterns? In other words, is there a one-to-one link between a type of defect and its severity, and the sensor signature it manifests?

- (3)
What is the detection lag; does the detection accuracy improve with sensor redundancy? What is the effect of sensor noise and position on the detection accuracy?

- (4)
How does the approach translate into more complicated geometries and different types of defects, and eventually design rules in AM?

In closure, while this research proposes an approach for monitoring of process states and the detection of incipient defects, thus laying the conceptual groundwork for a qualify-as-you-build paradigm in AM, it does not provide an avenue to repair or correct impending defects through closed-loop feedback control. Prompt defect correction is important in AM because, once a defect is created in a layer, it is liable to be permanently sealed in by subsequent layers. Accordingly, the next step for the authors, apart from addressing the four questions posed heretofore, is to build a mechanism for defect correction within the AM process. To realize this process correction, the authors have access to three hybrid additive-subtractive AM systems at their home institution, the University of Nebraska-Lincoln, namely, two Matsuura Lumex Avance 25 hybrid LPBF systems and one Optomec hybrid directed energy deposition system. These hybrid AM systems have a subtractive machining head inside the machine, which can be used to remove an entire defect-prone layer. Moreover, these machines allow complete control over the process parameters; this freedom to alter parameters, which is absent in most commercial AM systems, engenders a means to correct defects. For instance, if porosity formation due to lack of fusion is detected from in-process sensor data, the laser power may be increased to an appropriate level to fuse the un-melted powder particles. On the other hand, if pinhole porosity due to overly high laser power were to occur in a particular layer, the subtractive machining head may be used to remove that layer, and the process recommenced with changed parameters (e.g., lowering the laser power). Hence, this work is the critical first step toward transcending the qualify-as-you-build concept and ushering in a new *correct-as-you-build* paradigm in AM, leading to parts with zero defects.

## Acknowledgment

The experimental data for this work came from Dr. Brandon Lane and Dr. Jarred Heigel of the Intelligent Systems Division, National Institute of Standards and Technology (NIST), Gaithersburg, MD. The authors thank Dr. Lane and Dr. Heigel for their critical insights, time, edits, and constructive comments, which have gone a long way in reinforcing the rigor of this work.

One of the authors (PKR) thanks the National Science Foundation for funding his work. Specifically, the spectral graph theoretic approach for monitoring of complex signals was first proposed through the NSF grant CMMI-1719388. The further development and application of spectral graph theoretic approaches to realize in-process defect isolation and identification eventually leading to *correct-as-you-build* paradigm in AM was conceptualized and funded through CMMI-1752069 (CAREER). The authors also thank the three anonymous reviewers, and the associate editor of the journal – Dr. Z. J. Pei; their diligence and recommendations have doubtlessly impacted the quality of this work in a positive way.

## Funding Data

National Science Foundation, Directorate for Engineering (Grant Nos. 1719388, 1739696, and 1752069).