## Abstract

For a mobile robot, localization is essential for navigation and for spatially correlating the data it collects. However, localization in Global Positioning System-denied environments such as underwater has been challenging. Light-emitting diode (LED)-based optical localization has been proposed in the literature, where the bearing angles extracted from the line-of-sight of the robot viewed from a pair of base nodes (also known as beacon nodes) are used to triangulate the position of the robot. The state-of-the-art in this approach uses a stop-and-go motion for the robot in order to ensure an accurate position measurement, which severely limits the mobility of the robot. This work presents an LED-based optical localization scheme for a mobile robot undergoing continuous motion, despite the two angles in each measurement cycle being captured at different locations of the robot. In particular, the bearing angle measurements are captured by the robot one at a time and are properly correlated with respect to the base nodes by utilizing the velocity prediction from Kalman filtering. The proposed system is evaluated in simulation and experiments, with its performance compared to the traditional state-of-the-art approach where the two angle measurements in each cycle are used directly to compute the position of the robot. In particular, the experimental results show that the average position and velocity estimation errors are reduced by 55% and 38%, respectively, when comparing the proposed method to the state-of-the-art.

## 1 Introduction

The operation of autonomous mobile robots relies on accurate positioning data in order to navigate, collect data, and maintain situational awareness [1–3]. Global Positioning System (GPS), which is arguably the most common tool for localization, is not available in all environments, such as underwater and indoors [3–5]. In Sec. 1.1, general techniques for handling localization in GPS-denied environments are reviewed, followed by a discussion of related work on light-emitting diode (LED)-based localization in Sec. 1.2. Section 1.3 summarizes the contribution of this paper in the context of the reviewed literature.

### 1.1 General Background on Global Positioning System-Denied Localization.

There are multiple solutions to localization in GPS-denied environments and they vary based on factors such as the type of data used, how the data is captured, and the algorithm that converts the measured data into position estimates. For example, some of the varieties of observed data include distance, bearing angle, and signal strength, which can be captured by sensors such as sonar transducers, RF antennas, inertial sensors, and optical-based sensors, and then processed with techniques like simultaneous localization and mapping (SLAM), dead-reckoning, triangulation, and trilateration [6–8].

Several of the aforementioned localization techniques involve collaboration of multiple agents or nodes. Triangulation is one such approach, where the angles relative to several neighbors with known positions, often referred to as beacons (or base nodes), are used to localize the individual robot [9,10]. However, due to the mobility of a robot, triangulation alone is not enough to effectively track its position [11,12]. Consequently, it is common to use an estimation technique to determine a more refined location of the robot, by filtering measurements up to that moment [11,13–15]. There are several estimation tools that could be used; however, Kalman filtering-based methods are often preferred due to their simplicity, minimal storage requirements, and low computation costs [15–17]. There are many examples and variations of Kalman filtering being used for position estimation in the literature; for instance, Rana et al. [18] used Kalman filtering to estimate the position and velocity of a 2D moving object for video surveillance, and Feng et al. [19] used a Kalman filter to predict the future location of a vehicle.

Of the handful of localization techniques that can be used underwater, many are implemented with the use of acoustic signals. However, acoustic approaches tend to complicate or constrain the localization algorithm due to the inherent limited bandwidth, long propagation delays, and multipath effect, which result in low data rates and low signal reception reliability [20–22]. Moreover, acoustic-based methods typically require bulky and power-hungry hardware, making them unsuitable for small underwater robots with limited resources [23].

### 1.2 Relevant Work on Light-Emitting Diode-Based Localization.

Optical communication systems based on LEDs are becoming a popular alternative to acoustic-based methods. In recent years, LED systems have shown promise in high-rate, low-power underwater communication over short-to-medium distances [24–27]. However, a downside of LED-based communication is the requirement of near line-of-sight (LOS) between the transmitter and the receiver. This challenge has been addressed in several ways, including the use of redundant transmitters/receivers [28–31] and active alignment [25,32,33].

Indoor LED-based optical localization and communication systems have been developed by using visible light communication (VLC) systems, in which the overhead lights used to illuminate the room can also be used as the transmission medium for both data and localization purposes [5,34–37]. Nguyen et al. [38] developed a VLC localization approach that integrates the angle of arrival and received signal strength of the light to compute the location, achieving a minimum simulated error of 10 cm. Qiu et al. [34] achieved a localization accuracy of 0.56 m using a fingerprint matching approach, where fingerprints are a mapping of position and the light intensities of each light in the environment, and each light transmits a unique beacon pattern allowing the localizing robot to associate a light intensity with a particular overhead fixture. In Ref. [39], Liang et al. presented a visual light positioning approach for localizing resource-constrained platforms, which used both modulating and nonmodulating LED light sources with a rolling-shutter camera. While VLC-based localization approaches are an alternative to radio frequency methods indoors and can work underwater in theory [40,41], they are not practical for a typical aquatic environment due to the difficulty in illuminating the significantly larger and more complex environment.

An alternative form of LED-based optical localization is through the use of cameras as the means for capturing the light. For instance, while Nguyen et al. [38] used an array of photodiodes in their VLC approach, Zachár et al. [5] and Liang et al. [35] used cameras to identify the light in their works. Cameras can also be used for LED-based optical localization in other ways. For instance, Giguere et al. [42] used cameras mounted on several robots that interacted in a cooperative manner to derive the position and orientation relative to each other based on the LED landmarks mounted on the robots. Suh et al. [43] proposed a similar approach where a group of robots with cameras localized themselves by splitting the robots into alternating groups of stationary and moving robots. The stationary robots would track the LED markers on the mobile robots using multiview geometry. However, implementing camera-based techniques underwater is challenging, due to various degradation problems associated with obtaining the images, such as light absorption and scattering [44,45]. While there are techniques for enhancing the imaging quality (for example, histogram equalization), they involve additional processing that is simply not needed when using photodiodes.

Our prior works [46–49] presented an approach to LED-based simultaneous localization and communication by taking advantage of the LOS requirement in LED-based communication to extract the relative bearing between a mobile robot and two nodes with known positions (referred to as base nodes). This approach used a blue-light LED and a blue-light-sensitive photodiode, as the transmitter and receiver, respectively, mounted on a rotary platform for the LED-based communication. Blue light is known to experience the least attenuation underwater within the visible light spectrum, and the feasibility of our optical transceiver for underwater communication has been demonstrated in the remote control of an underwater robot [50]. The use of such optical transceivers for localization allows us to avoid the image processing complications of utilizing cameras, as well as eliminate the need to illuminate an entire area as is the case with VLC approaches. While it is possible in principle to relate the received intensity of the LED light to distance, the measured intensity is a function of both the distance and the deviation from the line-of-sight, which makes it impossible to directly map the light intensity to the distance of its source. The bearing angles were therefore used to triangulate the position of the mobile robot. A Kalman filter was implemented to mitigate measurement noise and to enable robot position prediction, which facilitates the light scan for bearing measurement. However, this approach came with the assumption that the angles with respect to the two base nodes within each measurement cycle were captured when the mobile robot was at a single location. Consequently, because scanning for the light intensity with a rotating receiver cannot capture both angles simultaneously, our implementation used a stop-and-go motion to ensure that the robot was at a single location; namely, the robot would alternate between moving and pausing for the angle measurement. This significantly slowed the movement of the robot, making it unsuitable for time-sensitive tasks.

### 1.3 Contribution.

In this work, we propose a novel solution for LED-based localization of a mobile robot while it is continuously moving. In particular, the proposed approach takes advantage of the estimated velocity from the Kalman filter, to properly correlate the two consecutive measurements of bearing angles with respect to the two base nodes for the position computation. Extensive simulations and experiments have been conducted to evaluate the proposed approach with a comparison to an alternative approach (which we call the traditional approach) that uses the two angles obtained in each measurement cycle to directly compute the measured position. Results from both simulation and experiments show that the proposed dynamic-prediction method performs consistently better than the traditional method, with an error reduction of 55% and 38% for the average position and velocity errors, respectively.

Preliminary results for the proposed method were reported at the 2020 ASME Dynamic Systems and Control Conference [51]. Aside from the enhancement in presentation throughout the paper, the major improvements of this work over [51] include the following:

- In Ref. [51], only rudimentary simulation was used to evaluate the robustness of the proposed dynamic-prediction measurement method against varying levels of error in body orientation measurement. The simulation presented in this work has been significantly enhanced from the version in Ref. [51], with the continuous nature of the robot's motion better characterized, allowing the simulation to capture how the robot's ever-changing position affects not only the angles obtained from scanning the light but also the LOS needed for the optical communication between the base nodes and the mobile node. The new simulation also analyzes the robustness of the proposed approach under different velocity settings of the robot.
- While a point mass model was used in Ref. [51] for the robot, in this work a rigid body model is adopted to more accurately estimate the robot movement.
- In Ref. [51] there were no experimental results. In this paper, we have implemented the algorithms in physical hardware and demonstrated the efficacy of the proposed method with localization experiments for a mobile robot.

The organization of the remainder of this paper is as follows: Section 2 presents an overview of the basic concept of the LED-based localization scheme and outlines the Kalman filter used in robot state prediction. Section 3 describes the proposed approach. Section 4 details the simulation setup and results, followed by the presentation of experimental setup and results in Sec. 5. Finally, concluding remarks are provided in Sec. 6.

## 2 Overview of the Basic Optical Localization Approach

This section provides an overview of the basic localization approach. This approach is not the main contribution of this work; it is reviewed here to set the stage for the contribution presented in Sec. 3.

### 2.1 “Traditional” Position Measurement System.

To simplify the discussion, the localization approach is discussed in the 2D space. Each node is assumed to be equipped with an optical transceiver, consisting of an LED transmitter and photodiode receiver. In addition, each node is able to rotate and monitor the orientation of its transceiver within the horizontal plane using a stepper motor, which limits the rate at which the transceiver can rotate; see Fig. 1 for the actual physical hardware used in this work. We consider a network of three nodes, which includes a pair of base nodes (with known and fixed locations), $BN1$ and $BN2$, and a mobile node, $MN$, to be localized. See Fig. 2 for illustration.

During each measurement cycle, the mobile node scans the light shone by the base nodes to extract the bearing angles, denoted $\theta_1$ and $\theta_2$ in Fig. 2. With the captured bearing angles $\theta_1$ and $\theta_2$, and the known positions of the base nodes $BN1$ and $BN2$, the position of the mobile robot, $MN$, can be computed as

$$\begin{bmatrix} n_x \\ n_y \end{bmatrix} = \begin{bmatrix} BN1_x \\ BN1_y \end{bmatrix} + |D_1| \begin{bmatrix} \cos\theta_1 \\ \sin\theta_1 \end{bmatrix} \tag{1}$$

where $[n_x,\ n_y]^T$ and $[BN1_x,\ BN1_y]^T$ are the (*x*, *y*) coordinate vectors for the $MN$ and $BN1$, respectively, and $|D_1|$ is the magnitude of the vector $D_1$ shown in Fig. 2, which is obtained via the Law of Sines

$$|D_1| = \frac{d \sin\bar{\theta}_2}{\sin\theta_n} \tag{2}$$

Here $\theta_n$ is the angle corresponding to the side $BN1$-$BN2$ within the $MN$-$BN1$-$BN2$ triangle, $\theta_n = \theta_2 - \theta_1$, $\bar{\theta}_2$ is the supplement of $\theta_2$, $\bar{\theta}_2 = 180\ \mathrm{deg} - \theta_2$, and *d* is the distance between $BN1$ and $BN2$.
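The triangulation described above can be sketched as follows. This is a minimal 2D illustration, assuming (as one possible convention) that the two bearing angles are the global-frame angles of the vectors from the base nodes toward the $MN$; the function name and angle convention are illustrative, not taken from the paper's implementation.

```python
import math

def triangulate(bn1, bn2, theta1, theta2):
    """Triangulate the mobile node from two bearing angles (radians).

    Assumes theta1 and theta2 are the global-frame angles of the vectors
    BN1->MN and BN2->MN (a hypothetical convention for this sketch).
    """
    d = math.dist(bn1, bn2)            # distance between the base nodes
    theta_n = theta2 - theta1          # angle at MN, opposite side BN1-BN2
    theta2_bar = math.pi - theta2      # supplement of theta2
    D1 = d * math.sin(theta2_bar) / math.sin(theta_n)  # Law of Sines
    # Walk |D1| from BN1 along direction theta1 to reach MN
    return (bn1[0] + D1 * math.cos(theta1),
            bn1[1] + D1 * math.sin(theta1))
```

For example, with $BN1 = (-3, 0)$, $BN2 = (3, 0)$, and the robot at $(0, -6)$, the recovered position matches the ground truth under this convention.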

Although this localization process seems simple, the task is involved because the target is mobile. The challenge comes from the need for sufficient coordination among all three nodes to produce proper angle measurements. For instance, if a base node's transceiver is not pointing in the general "correct" direction, the mobile node's photodetector will not be able to pick up any light from the base node's LED due to the latter's directionality. Another challenge results from the error in the measured $\theta_1$ and $\theta_2$: purely relying on the algebraic calculation (1) will lead to highly variable (instead of smooth) estimated trajectories for the mobile node $MN$.

To help address both challenges, Kalman filtering (see Sec. 2.2) is proposed for estimating and predicting the location of the mobile node $MN$, based on the sequence of measured locations computed via (1). In particular, the predicted position of the $MN$ allows each base node to orient its transceiver, to shine its LED light in the anticipated direction of the $MN$. In turn, the predicted position of the $MN$ allows the mobile node itself to calculate the anticipated relative position of each base node and thus determine its proper scanning range for detecting the LOS. Specifically, the scanning range of the $MN$ transceiver during each measurement cycle is set to be the span between the anticipated directions from the $MN$ to the two base nodes plus an additional 30 deg on either side of the span, to ensure that the peak light intensity from either base node is not cut off. The design of the Kalman filtering algorithm is presented in Sec. 2.2.
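As a concrete illustration of the scan-range rule above, the following sketch computes the scanning span from the predicted $MN$ position and the known base-node positions; the helper name and tuple conventions are assumptions, not the paper's code.

```python
import math

def scan_range(pred_pos, bn1, bn2, margin_deg=30.0):
    """Return (start, end) scan angles in degrees for the MN transceiver.

    The span covers the two anticipated directions from the predicted MN
    position toward the base nodes, plus a margin on either side so that a
    peak near the edge of the span is not cut off.
    """
    a1 = math.degrees(math.atan2(bn1[1] - pred_pos[1], bn1[0] - pred_pos[0]))
    a2 = math.degrees(math.atan2(bn2[1] - pred_pos[1], bn2[0] - pred_pos[0]))
    return min(a1, a2) - margin_deg, max(a1, a2) + margin_deg
```

Note that this simple min/max form does not handle the ±180 deg wraparound; a full implementation would normalize the angular difference before taking the span.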

### 2.2 Kalman Filtering Algorithm.

The motion of the mobile node is modeled as the discrete-time linear system

$$n_{x,k+1} = n_{x,k} + \Delta_k v_{x,k} \tag{3}$$
$$n_{y,k+1} = n_{y,k} + \Delta_k v_{y,k} \tag{4}$$
$$v_{x,k+1} = v_{x,k} + w_{1,k} \tag{5}$$
$$v_{y,k+1} = v_{y,k} + w_{2,k} \tag{6}$$
$$\psi_{k+1} = \psi_k + \Delta_k \omega_k + w_{3,k} \tag{7}$$
$$\omega_{k+1} = \omega_k + w_{4,k} \tag{8}$$

where $n_{x,k}$ and $n_{y,k}$ are the mobile node's *x* and *y* coordinates at the *k*th time instance, respectively, $v_{x,k}$ and $v_{y,k}$ are the corresponding velocity components, $\psi_k$ and $\omega_k$ are the body orientation angle and the body's angular velocity, respectively, $\Delta_k$ is the *k*th sampling interval, and $w_{1,k},\ w_{2,k},\ w_{3,k}$, and $w_{4,k}$ are independent, zero-mean, white Gaussian noises. The observations $z_k$ and $\zeta_k$ are the noise-corrupted location and orientation measurements, respectively,

$$z_k = [n_{x,k},\ n_{y,k}]^T + w_{5,k} \tag{9}$$
$$\zeta_k = \psi_k + w_{6,k} \tag{10}$$

where $w_{5,k}$ and $w_{6,k}$ are assumed to be white, zero-mean Gaussian, and independent of each other and of the process noises $w_{1,k},\ w_{2,k},\ w_{3,k}$, and $w_{4,k}$.

The measurement $zk$ is computed from (1) and (2), which is only possible in physical implementation when the bearing angles, *θ*_{1} and *θ*_{2}, are measured by the $MN$ at a single fixed position. The main focus of this work addresses how $zk$ can be computed properly when the bearing angles are captured by the mobile node at different positions due to the robot's movement. The measurement $\zeta k$ is obtained from an orientation sensor such as a magnetic compass. Body orientation estimation is needed for the mobile robot to compute the required rotation for the transceiver to establish the LOS, by properly accommodating the rotation of the robot itself.

The state estimate vectors are denoted $\hat{x}_k = [\hat{n}_x,\ \hat{n}_y,\ \hat{v}_x,\ \hat{v}_y]^T$ and $\hat{b}_k = [\hat{\psi},\ \hat{\omega}]^T$, where $[\hat{n}_x,\ \hat{n}_y]^T$, $[\hat{v}_x,\ \hat{v}_y]^T$, $\hat{\psi}$, and $\hat{\omega}$ are the estimated position, velocity, body orientation angle, and angular velocity of the mobile node at the *k*th time instance, respectively. The Kalman filter is then implemented to estimate $\hat{x}_k$ and $\hat{b}_k$ based upon Eqs. (3)–(8). The details of the Kalman filter implementation, which are standard [52], are omitted here for brevity. In this work, the estimated position and velocity, $\hat{x}_k$, which are computed by the mobile node and shared with the base nodes via optical communication, are used by both the mobile node and the base nodes for orienting their respective transceivers toward each other and for setting the scan range of the mobile node, and the estimated angular orientation $\hat{\psi}_k$ is used by the mobile node to compensate the orientation of the transceiver against any underlying rotation of the robot's body. In addition, the estimated linear velocity, along with the bearing angle measurements, is also used to determine the measured position of the robot, as explained in Sec. 3.
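The omitted predict/update recursion can be sketched for one translational axis as follows. This is a generic constant-velocity Kalman filter with illustrative noise variances `q` and `r`, not the paper's tuned implementation; under a constant-velocity model the two axes decouple, so the same routine serves both.

```python
# Minimal constant-velocity Kalman filter for one axis (position, velocity).
# P is the 2x2 covariance stored as a flat tuple (p11, p12, p21, p22).

def kf_predict(x, v, P, dt, q):
    # State transition: position integrates velocity; velocity is a random walk.
    x, v = x + dt * v, v
    p11, p12, p21, p22 = P
    # P <- F P F^T + Q, with F = [[1, dt], [0, 1]] and Q = diag(0, q)
    p11 = p11 + dt * (p12 + p21) + dt * dt * p22
    p12 = p12 + dt * p22
    p21 = p21 + dt * p22
    p22 = p22 + q
    return x, v, (p11, p12, p21, p22)

def kf_update(x, v, P, z, r):
    # Position-only measurement: H = [1, 0], measurement variance r.
    p11, p12, p21, p22 = P
    s = p11 + r                    # innovation variance
    k1, k2 = p11 / s, p21 / s      # Kalman gain
    y = z - x                      # innovation
    x, v = x + k1 * y, v + k2 * y
    # P <- (I - K H) P
    P = ((1 - k1) * p11, (1 - k1) * p12,
         p21 - k2 * p11, p22 - k2 * p12)
    return x, v, P
```

Running `kf_predict` followed by `kf_update` once per measurement cycle yields the smoothed position together with the velocity estimate that Sec. 3 reuses for correlating the two bearing captures.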

## 3 Proposed Approach to Localization of a Continuously Moving Robot

In the state-of-the-art LED-based optical localization approaches mentioned in the literature, such as those in our previous works, the measurement system (1) and (2) was a static process, with the mobile robot assumed to be at the same single position when both bearing angles were captured. However, the physical angle scanning process takes time, making it impossible to instantaneously capture both angles with a rotating transceiver. Therefore, a stop-and-go scheme for the robot had to be used in order to ensure that the robot was at the same position for both angle captures. While alternative transceiver schemes with multiple transmitters and/or receivers (see, for example, Ref. [30]) could potentially alleviate the scanning requirement, they require more complex hardware and a larger footprint on the robot and could perform poorly in localization. In particular, for a (nonrotating) transceiver fixed to a robot, the resolution of bearing angle measurement depends on the density of photodiodes and is thus low unless a very large number of photodiodes are used.

The stop-and-go implementation of the localization scheme is time-consuming and thus limits how quickly the robot can traverse its environment, making it unsuitable for time-sensitive tasks. In this work, we propose an approach that allows the robot to localize while moving continuously in its environment. In particular, we propose an algorithm that can compute a proper measured position despite the pair of bearing angles ($\theta_1$ and $\theta_2$) within each measurement cycle being captured at different times and positions. Let these positions be labeled as $P_a$ and $P_b$, where $P_a$ is the position whose *x*-coordinate is smaller and not necessarily the position where the first bearing angle is captured. Localization of the robot is enabled by determining the coordinates of these spotting positions and treating one of these positions ($P_a$ in this work) as the observed location $z_k$ that is used in the Kalman filter.

The concept for calculating these positions is considerably more involved than the traditional approach described in Eqs. (1) and (2). To better contrast their differences, Fig. 3 illustrates how the two approaches would determine a position given the same measured bearing angles. In particular, the diagram shows that the traditional approach would use the two angles to find a converging point at $P_f$, which is significantly distant from the two ground-truth locations, $P_a$ and $P_b$, where the angles were actually captured by the robot. Moreover, with access to only the bearing angles, the coordinates for $P_a$ and $P_b$ could be any of the points along the two edges of the triangle formed by $P_f$, $BN1$, and $BN2$. To determine an estimate of the positions for $P_a$ or $P_b$, this work exploits the robot's most recently estimated velocity to properly combine the two measured angles.

### 3.1 Measurement Equations.

The positions $P_a$ and $P_b$, where the mobile node captures the bearing angles, can be determined by solving for the *x* and *y* distances between each location and the base node of the corresponding captured angle, by using these angles along with the estimated velocity of the mobile node. For instance, in Fig. 3, $BN1$ and $P_a$ are separated from each other by $x_a$ and $y_a$. Similarly, $BN2$ and $P_b$ are related by $x_b$ and $y_b$. These distances can be expressed in generalized relationships as

$$P_{ax} = BN1_x + \mathbf{A}\, x_a \tag{11}$$
$$P_{ay} = BN1_y + \mathbf{C}\, y_a \tag{12}$$
$$P_{bx} = BN2_x + \mathbf{B}\, x_b \tag{13}$$
$$P_{by} = BN2_y + \mathbf{D}\, y_b \tag{14}$$

where $P_{ax}$, $P_{bx}$ and $P_{ay}$, $P_{by}$ are the *x* and *y* coordinates of $P_a$ and $P_b$, respectively, $BN1_x$, $BN2_x$ and $BN1_y$, $BN2_y$ are the *x* and *y* coordinates of $BN1$ and $BN2$, respectively, and $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$, and $\mathbf{D}$ are the sign values of the distances $x_a$, $x_b$, $y_a$, and $y_b$, respectively.

$\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$, and $\mathbf{D}$ reflect where the spotting locations are relative to the base nodes and can be determined by inspecting the properties of the captured bearing angles. In particular, $\mathbf{A}$ and $\mathbf{B}$ take on the sign value of $\cos\theta_1$ and $\cos\theta_2$, respectively, and $\mathbf{C}$ and $\mathbf{D}$ take on the sign value of $\sin\theta_1$ and $\sin\theta_2$, respectively.

The distances $x_a$, $x_b$, $y_a$, and $y_b$ can be derived, as functions of the bearing angles and the predicted velocity of the robot, through Eqs. (15)–(18).

In these equations, *d* is the distance between the base nodes, $\eta$ is the *x*-distance between the spotted points, i.e., the distance from $P_a$ to $P_b$, and $\lambda$ is the *y*-distance between the spotted points, with $\mathbf{E}$ being its associated sign value, which is determined from a combination of the slope, $\rho$, and the sine values of the bearing angles. The operators $\wedge$ and $\vee$ are the logical operators "and" and "or", respectively. The variables $\alpha$ and $\beta$ represent the inner angles of the triangles that each base node makes with its corresponding spotting point and are computed from $\theta_1$ and $\theta_2$, respectively, and $\gamma$ and $\phi$ are the magnitude and angle, respectively, of the Kalman filter-predicted velocity of the mobile node's movement (see Fig. 3). To simplify the discussion, it is assumed, without loss of generality, that the base nodes are separated only along the *x*-axis. By using the two sets of relationships (11)–(14) and (15)–(18), the position of $P_a$ (or $P_b$) can be computed and then used as the position measurement in the Kalman filter's state estimate update.

The above relationships, Eqs. (11)–(18), were developed from the situation shown in Fig. 3, where $\theta_1$ and $\theta_2$ are captured at spots $P_a$ and $P_b$, respectively. In the case where $\theta_1$ and $\theta_2$ are instead captured at spots $P_b$ and $P_a$, respectively, as illustrated in Fig. 4, Eqs. (11)–(18) would need minor adjustments, changing several of the variables to reflect the new association between the angles, the spotting positions, the base nodes, and the respective distances. The changes to these equations are shown in the Appendix. Equations (19)–(23) would remain unchanged since they are independent of these relationships. For the cases when $P_a$ and $P_b$ are positioned lower than the base nodes along the *y*-axis, the system of equations would remain the same, since the values $\mathbf{C}$ and $\mathbf{D}$ automatically account for changes in the *y* position.
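The sign-value logic for $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$, and $\mathbf{D}$ can be sketched as follows; the helper names and the treatment of exactly-zero values are assumptions made for illustration.

```python
import math

def sign_value(u):
    # Sign value in {+1, -1}; zero is treated as +1 in this sketch.
    return 1.0 if u >= 0.0 else -1.0

def spotting_position(bn, theta, x_dist, y_dist):
    """Place a spotting point relative to its base node bn.

    theta is the captured bearing angle (radians); x_dist and y_dist are
    the unsigned x- and y-distances to the base node (Eqs. (15)-(18) in
    the text). The signs follow cos(theta) and sin(theta), as with the
    pairs A/B and C/D.
    """
    return (bn[0] + sign_value(math.cos(theta)) * x_dist,
            bn[1] + sign_value(math.sin(theta)) * y_dist)
```

For instance, a bearing angle pointing down and to the right of $BN1$ yields a positive *x* offset and a negative *y* offset, placing the spotting point below and to the right of the base node.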

## 4 Simulation

Simulation of the proposed dynamic-prediction approach was conducted, with its performance compared to the traditional approach of computing the measured position. In particular, the robustness of both approaches was tested against different levels of velocity for the mobile robot.

### 4.1 Simulation Setup.

The robot was evaluated on a straight line trajectory starting at $[\u22127,\u22126]T$ and ending at $[9,\u22126]T$, as shown in Figs. 5 and 6. The ground-truth positions of the robot along this trajectory were determined by using its constant ground-truth velocity to generate a large number of position points between the starting and ending locations. Base nodes $BN1$ and $BN2$ were positioned at $[\u22123,0]T$ and $[3,0]T$, respectively. The area of the simulated environment was defined in grid units to mimic the physical space of the experiment, where a grid unit is equivalent to approximately $23\u2009cm$.
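The ground-truth generation described above can be sketched as follows; the function name and the fixed-interval sampling convention are illustrative assumptions.

```python
import math

def ground_truth_points(start, end, speed, dt):
    """Sample a straight-line trajectory at constant speed.

    Returns the positions (in grid units) occupied at multiples of dt,
    from start toward end, mimicking the simulation's ground truth.
    """
    dx, dy = end[0] - start[0], end[1] - start[1]
    length = math.hypot(dx, dy)
    steps = int(length / (speed * dt))   # number of whole dt intervals
    return [(start[0] + dx * i * speed * dt / length,
             start[1] + dy * i * speed * dt / length)
            for i in range(steps + 1)]
```

With the trajectory and speeds of Sec. 4, e.g., from $[-7, -6]^T$ to $[9, -6]^T$ at 0.27 grid units/s, this yields a dense sequence of points along the 16-grid-unit line.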

### 4.2 Simulation Measurements.

The simulated robot body orientation measurement, $s_k$, was generated by adding zero-mean Gaussian noise to the ground-truth orientation value. The amount of error in the orientation measurement was adjusted by varying the standard deviation of the Gaussian noise. When the robot's body orientation changes, say, by $\Delta \psi$ as part of its locomotion, the physical direction in which the robot's transceiver is oriented would be changed by the same amount. Therefore, to cancel the effect of rotation of the underlying robotic platform, the Kalman filter-estimated body orientation was used to adjust the robot's transceiver angle via the stepper motor that controlled the scanning of the transceiver. In particular, the stepper motor would rotate the transceiver by $-\Delta \psi$ to counter the effect of the robot rotation. Consequently, an error in the estimated body orientation contributes directly to the error in bearing angle measurement. This is because the mobile node's transceiver keeps track of its absolute orientation based on the counted steps of the stepper motor, assuming a perfect cancelation of the body rotation effect.
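The rotation-cancelation step can be sketched as below; 0.225 deg per step matches the quarter-step resolution used in the hardware, while the function itself is a hypothetical helper.

```python
def compensation_steps(delta_psi_deg, step_deg=0.225):
    """Stepper-motor steps that rotate the transceiver by -delta_psi_deg,
    countering a body rotation of delta_psi_deg (rounded to whole steps)."""
    return round(-delta_psi_deg / step_deg)
```

The rounding to whole steps illustrates why residual body-orientation error feeds directly into the bearing angle measurement: the transceiver's absolute orientation is tracked only through counted steps.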

Bearing angle measurements were generated by simulating the process of the mobile node scanning the light intensities produced by the base nodes. The range of the mobile node's scan was the angular distance between the two anticipated directions from the mobile node to the two base nodes plus an additional 30 deg on each side of this angular range. The scan resolution was set to a step size of 0.225 deg, to mimic the rotation resolution of the stepper motor used in the hardware implementation. The amount of time that elapsed between the steps of the scan was determined by averaging the amount of time that elapsed between steps in hardware trials.

The simulated light intensity was based on the degree of LOS achieved between the transceivers of the mobile node and the base nodes at each step of the mobile node's transceiver rotation. This degree of LOS, which ranged over $[0.0,\u20091.0]$ with a value of 1.0 representing direct LOS, was first scaled by 7.3 to mimic the range of voltages measured by the photodiode and was then injected with zero-mean Gaussian noise with a standard deviation of 0.5 V to represent the inherent error associated with the light measuring process. The bearing angles were extracted from the simulated light intensities by determining the angular position of the mobile node's transceiver at the center point of each peak in the intensity scan.
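The peak-center extraction can be sketched as follows, assuming a simple threshold rule for delimiting a peak; the threshold value and function shape are illustrative, and the actual implementation may differ.

```python
def bearings_from_scan(angles_deg, intensities, threshold=3.0):
    """Return one bearing per intensity peak: the angular midpoint of each
    contiguous run of samples at or above the threshold."""
    bearings, start = [], None
    for i, v in enumerate(intensities):
        if v >= threshold and start is None:
            start = i                                   # peak begins
        elif v < threshold and start is not None:
            bearings.append(0.5 * (angles_deg[start] + angles_deg[i - 1]))
            start = None                                # peak ends
    if start is not None:                               # peak reaches scan edge
        bearings.append(0.5 * (angles_deg[start] + angles_deg[-1]))
    return bearings
```

A scan covering both base nodes would normally return two bearings, one per peak, which are then assigned to $BN1$ and $BN2$ by their anticipated directions.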

### 4.3 Simulation Results.

The simulation examined the system's performance under different velocity settings of the robot. In particular, the robot's velocity ranged from $0.17\u2009(grid\u2009units/s)$ to $0.37\u2009(grid\u2009units/s)$ in increments of $0.05\u2009(grid\u2009units/s)$, with each velocity setting being used for 100 trials with each trial associated with a unique random seed. The standard deviation of the Gaussian noise applied to the mobile node's body orientation angle measurement during these trials was kept at 1.0 deg.

Figures 5 and 6 show the comparison between the ground-truth positions and the Kalman filtering-based estimated positions of the robot corresponding to the same timestamps for one of the simulated trials using the proposed dynamic-prediction approach and the traditional measurement approach, respectively, where the robot's velocity was $0.27\u2009(grid\u2009units/s)$. Figures 7 and 8 show the means and standard deviations of the estimated position and velocity errors, respectively, among all of the trials for (a) the traditional measurement approach and (b) the proposed dynamic-prediction measurement approach, under each of the different velocity settings. The estimated position error is the magnitude of the difference between the ground-truth position and the position components $[\hat{n}_x,\ \hat{n}_y]^T$ of the Kalman filter's state estimate $\hat{x}_k$ after processing the observed position $z_k$ that corresponds to the same timestamp. Similarly, the estimated velocity error is the magnitude of the difference between the ground-truth velocity and the velocity components $[\hat{v}_x,\ \hat{v}_y]^T$ of $\hat{x}_k$ after processing the observed position $z_k$.

Figures 5 and 6 show that the proposed dynamic-prediction approach results in estimates that are tightly aligned with the ground-truth positions, whereas the estimates from the traditional approach weave about the ground-truth positions. From Figs. 7 and 8, it can be seen that the proposed dynamic-prediction approach is able to maintain a relatively consistent level of estimated position and velocity accuracy as the robot's velocity increases. In comparison, the position and velocity estimation performance of the traditional measurement approach deteriorates as the speed increases.

## 5 Experiment

In this section, we first describe the experiment setup and then present the results of the experiments for evaluating the proposed algorithm.

### 5.1 Setup.

The hardware used in our experiments was similar to what was used in our previous works [48,49]. In particular, each node's transceiver was composed of a CREE XRE 1 W Blue LED as the transmitter and a Blue Enhanced photodiode as the receiver. The transceiver and the printed circuit board (PCB) circuitry were connected to the shaft of a stepper motor to enable rotation. A slip ring was used to allow the wiring to move freely with rotation. All of these components were mounted together via a 3D-printed base structure as shown in Fig. 1.

The stepper motor was controlled by a dedicated driver, which converted step pulses sent from the embedded controller into motor rotation. These experiments used a Sparkfun^{®} Big Easy Driver (Boulder, CO), set to the quarter-step mode (i.e., each step rotated the shaft 0.225 deg). The orientation was tracked by having the embedded controller keep count of the step pulses sent.

The embedded controller for each node was an Intel^{®} Edison Board with an Arduino^{®} Expansion Board. The board had a 500 MHz Intel Atom dual-core processor with 1 GB of DDR3 RAM, and a built-in dual-band 2.4 GHz and 5 GHz Broadcom^{®} 43340 802.11 a/b/g/n Wi-Fi adapter. The Intel Edison Board managed stepper motor rotation, LED signal transmission and reception, and Kalman filter processing.

A Lynxmotion^{®} Aluminum 4WD1 Rover Kit and an 80/20^{®} metal beam were used to mount the 3D-printed bases of the mobile robot and base nodes, respectively. Figure 9 shows the grid used for conducting the experiments, which was laid out with blue painters tape and followed the grout lines in the floor tiles. Each square in the grid had a side length of approximately 23 cm and represented 1 grid unit, which was used as a generic unit of length to measure motion and position. Due to the limited space as well as the physical limits on how quickly the hardware could collect data points, the practical speed of the robot was limited.

Orientation data for the mobile node was captured by NaturalPoint^{®}'s OptiTrack motion tracking system. Strategically placed infrared cameras captured the location and orientation of the robot by cross-referencing the positions of reflective markers attached to the robot, as shown in Fig. 10. The mobile node sent and received User Datagram Protocol (UDP) packets over Wi-Fi to obtain position and heading data from the PC running the motion tracking software. The received position data were used only as the ground truth in postprocessing of the experimental results.
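As a rough illustration of this exchange, the sketch below parses and receives one pose datagram over a loopback socket; the comma-separated "x,y,heading" payload format and the helper names are assumptions for illustration, not the actual protocol of the motion tracking software:

```python
# Illustrative UDP pose exchange, assuming a hypothetical "x,y,heading"
# comma-separated payload (the real packet format is not specified here).
import socket

def parse_pose(payload: bytes):
    """Parse a hypothetical 'x,y,heading_deg' payload into floats."""
    x, y, heading = (float(v) for v in payload.decode().split(","))
    return x, y, heading

def recv_pose(sock: socket.socket):
    """Block until one pose datagram arrives, then parse it."""
    data, _addr = sock.recvfrom(1024)  # one pose per datagram
    return parse_pose(data)

# Loopback demonstration standing in for the motion-capture PC:
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"1.25,-0.50,90.0", rx.getsockname())
print(recv_pose(rx))  # (1.25, -0.5, 90.0)
tx.close()
rx.close()
```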

In this work, the mobile node was responsible for measuring the bearing angles from a single scanning sweep of the light shone by the base nodes, $BN1$ and $BN2$. Consequently, a second Intel Edison Board was installed on the robot's chassis so that its two main tasks of collecting pose data from the motion tracking system and optical localization could be processed in parallel. The two Edison boards, as shown in Fig. 10, would periodically communicate with each other so that the localization algorithm could get access to the measured orientation data from the motion tracking system.
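The core idea behind correlating the two bearing measurements can be sketched as follows: the bearing obtained at time $t_1$ is translated by the predicted displacement $v(t_2-t_1)$ from the Kalman filter, so that both bearings constrain the robot's position at $t_2$, which is then recovered as a ray intersection. The minimal 2D sketch below uses illustrative geometry and names, not the paper's exact formulation:

```python
# Velocity-compensated triangulation sketch (illustrative, 2D).
# Shifting BN1's bearing origin by v*dt makes both bearing rays pass
# through the robot's position at the second measurement time t2.
import math

def ray_intersect(o1, th1, o2, th2):
    """Intersect rays o1 + s*(cos th1, sin th1) and o2 + t*(cos th2, sin th2)."""
    d1 = (math.cos(th1), math.sin(th1))
    d2 = (math.cos(th2), math.sin(th2))
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]           # Cramer's rule
    s = ((o2[0] - o1[0]) * (-d2[1]) - (-d2[0]) * (o2[1] - o1[1])) / det
    return (o1[0] + s * d1[0], o1[1] + s * d1[1])

def localize(bn1, th1, t1, bn2, th2, t2, v):
    """Shift BN1's bearing origin by v*(t2 - t1), then triangulate at t2."""
    dt = t2 - t1
    bn1_shifted = (bn1[0] + v[0] * dt, bn1[1] + v[1] * dt)
    return ray_intersect(bn1_shifted, th1, bn2, th2)

# Robot at (3, 4) at t1 = 0, moving with v = (1, 0); at t2 = 1 it is at (4, 4).
th1 = math.atan2(4, 3)    # bearing from BN1 = (0, 0) to robot at t1
th2 = math.atan2(4, -6)   # bearing from BN2 = (10, 0) to robot at t2
x, y = localize((0, 0), th1, 0.0, (10, 0), th2, 1.0, v=(1.0, 0.0))
print(round(x, 6), round(y, 6))  # 4.0 4.0
```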

### 5.2 Results.

Experimental trials using the proposed dynamic-prediction approach and the traditional measurement approach were conducted, with nine trials for each approach. Table 1 summarizes the performance across each set of trials. In particular, it shows the means and standard deviations of the estimated position and estimated velocity error magnitudes for each algorithm. From the table, it can be seen that the localization error under the dynamic-prediction approach is 54.9% lower than the error under the traditional approach, and the velocity estimation error under the dynamic-prediction approach is 38.1% lower than that under the traditional approach. The table also indicates that the dynamic-prediction approach is much more consistent than the traditional approach, given its smaller standard deviations for both localization and velocity estimation errors.

| Approach | Estimated position error, mean | Estimated position error, standard deviation | Estimated velocity error, mean | Estimated velocity error, standard deviation |
|---|---|---|---|---|
| Dynamic prediction | 0.3601 | 0.0677 | 0.0574 | 0.0066 |
| Traditional | 0.7985 | 0.2144 | 0.0928 | 0.0364 |

The results include the mean and standard deviation of the estimated position error magnitude and the estimated velocity error magnitude.
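As a quick sanity check, the percentage reductions quoted above follow directly from the table's mean values (expressed in the grid units described in Sec. 5.1):

```python
# Verifying the reported error reductions from the Table 1 means.
dyn  = {"pos_mean": 0.3601, "vel_mean": 0.0574}  # dynamic prediction
trad = {"pos_mean": 0.7985, "vel_mean": 0.0928}  # traditional

def reduction_pct(old, new):
    """Relative reduction of `new` with respect to `old`, in percent."""
    return 100.0 * (old - new) / old

pos_red = reduction_pct(trad["pos_mean"], dyn["pos_mean"])
vel_red = reduction_pct(trad["vel_mean"], dyn["vel_mean"])
print(f"position: {pos_red:.1f}%  velocity: {vel_red:.1f}%")
# position: 54.9%  velocity: 38.1%
```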

Figures 11 and 12 show the estimated position and estimated velocity, respectively, against the corresponding ground truth over time for one of the trials using the proposed dynamic-prediction measurement approach. There are a total of seven data points, which was the number of localization steps completed before the robot reached the end-point. Similarly, Figs. 13 and 14 show the estimated position and estimated velocity, respectively, against the corresponding ground truth over time for one of the trials using the traditional measurement approach. Note that only four data points are plotted, which was the largest number of completed localization steps among all nine trials for the traditional approach; beyond that point, the robot was no longer able to establish the LOS with the base nodes. The ground-truth positions shown in Figs. 11 and 13 were extracted from the datalog of the motion tracking system for each trial. These ground-truth positions were then down-sampled to compute the ground-truth velocities shown in Figs. 12 and 14. Figures 15 and 16 compare the estimated positions with the ground-truth positions for one of the trials using the dynamic-prediction approach and the traditional measurement approach, respectively. Together, Table 1 and Figs. 11–16 show that the dynamic-prediction approach is able to sufficiently localize the moving robot, whereas the traditional approach struggles.
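The down-sampling-plus-differencing used to obtain the ground-truth velocity can be sketched as a simple backward difference over the localization-step timestamps (the timestamps and positions below are illustrative, not logged data):

```python
# Ground-truth velocity from down-sampled positions via backward difference:
# v_k = (p_k - p_{k-1}) / (t_k - t_{k-1}), for each localization step k.
def finite_difference_velocity(times, positions):
    """Return 2D velocities between consecutive down-sampled positions."""
    vels = []
    for k in range(1, len(times)):
        dt = times[k] - times[k - 1]
        vels.append(tuple((positions[k][i] - positions[k - 1][i]) / dt
                          for i in range(2)))
    return vels

t = [0.0, 2.0, 4.0]                       # localization-step times (s)
p = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]  # down-sampled positions (grid units)
print(finite_difference_velocity(t, p))   # [(0.5, 0.25), (0.5, 0.25)]
```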

## 6 Conclusions

In this paper, we presented an approach to LED-based localization of a continuously moving robot. By utilizing the estimated velocity of the mobile robot, we were able to address the main challenge of measuring the robot's position despite the bearing angles being measured at different times and positions. It was shown in simulation and experiments that the proposed dynamic-prediction approach was capable of successfully localizing the mobile robot and consistently outperformed the approach based on the traditional method for computing the measurement, with reductions of 55% and 38% in the average position and velocity estimation errors, respectively, observed in the experiments. Note that although in this work the proposed method was only evaluated with a straight-line trajectory for the mobile robot, the algorithm itself is applicable to more general motions, as long as the trajectory does not undergo abrupt changes and can be approximately treated as piecewise-linear at a time scale commensurate with the duration of each measurement cycle.

The presented method is applicable in general GPS-denied environments, including indoors, outdoor areas with heavy canopy cover such as rain forests, and underwater environments. Although blue LEDs were used in this work, motivated by the underwater applications, the algorithm can be used with LEDs of other colors. The experimental, and likewise the simulated, evaluations presented in this work were conducted in a two-dimensional terrestrial setting, so as to validate the proposed design without the numerous overhead concerns associated with underwater experimentation, including but not limited to waterproofing and light refraction. Future efforts will include extending the algorithm to the 3D scenario and developing an underwater solution that addresses both the waterproofing and light refraction issues. The end goal is an experimental system where a mobile submersible robot can be localized and tracked in 3D space underwater using LED transceivers.

## Funding Data

National Science Foundation (Grant No. IIS 1734272; Funder ID: 10.13039/100000001).

## Footnotes

A YouTube video of the proposed approach is available at https://youtu.be/0IyIJrozOuk

### Appendix

$P_a$ and $P_b$, respectively, as illustrated in Fig. 4, are as follows: