## Abstract

For mobile robots, localization is essential for navigation and spatial correlation of its collected data. However, localization in Global Positioning System-denied environments such as underwater has been challenging. Light-emitting diode (LED)-based optical localization has been proposed in the literature, where the bearing angles extracted from the line-of-sight of the robot viewed from a pair of base nodes (also known as beacon nodes) are used to triangulate the position of the robot. The state-of-the-art in this approach uses a stop-and-go motion for the robot in order to ensure an accurate position measurement, which severely limits the mobility of the robot. This work presents an LED-based optical localization scheme for a mobile robot undergoing continuous motion, despite the two angles in each measurement cycle being captured at different locations of the robot. In particular, the bearing angle measurements are captured by the robot one at a time and are properly correlated with respect to the base nodes by utilizing the velocity prediction from Kalman filtering. The proposed system is evaluated in simulation and experiments, with its performance compared to the traditional state-of-the-art approach where the two angle measurements in each cycle are used directly to compute the position of the robot. In particular, the experimental results show that the average position and velocity estimation errors are reduced by 55% and 38%, respectively, when comparing the proposed method to the state-of-the-art.

## 1 Introduction

The operation of autonomous mobile robots relies on accurate positioning data in order to navigate, collect data, and maintain situational awareness [1–3]. Global Positioning System (GPS), which is arguably the most common tool for localization, is not available in all environments, such as underwater and indoors [3–5]. In Sec. 1.1, general techniques for handling localization in GPS-denied environments are reviewed, followed by a discussion of related work on light-emitting diode (LED)-based localization in Sec. 1.2. Section 1.3 summarizes the contribution of this paper in the context of the reviewed literature.

### 1.1 General Background on Global Positioning System-Denied Localization.

There are multiple solutions to localization in GPS-denied environments and they vary based on factors such as the type of data used, how the data is captured, and the algorithm that converts the measured data into position estimates. For example, some of the varieties of observed data include distance, bearing angle, and signal strength, which can be captured by sensors such as sonar transducers, RF antennas, inertial sensors, and optical-based sensors, and then processed with techniques like simultaneous localization and mapping (SLAM), dead-reckoning, triangulation, and trilateration [6–8].

Several of the aforementioned localization techniques involve collaboration of multiple agents or nodes. Triangulation is one such approach, where the angles relative to several neighbors with known positions, often referred to as beacons (or base nodes), are used to localize the individual robot [9,10]. However, due to the mobility of a robot, triangulation alone is not enough to effectively track its position [11,12]. Consequently, it is common to use an estimation technique to determine a more refined location of the robot, by filtering measurements up to that moment [11,13–15]. There are several estimation tools that could be used; however, Kalman filtering-based methods are often preferred due to their simplicity, minimal storage requirements, and low computation costs [15–17]. There are many examples and variations of Kalman filtering being used for position estimation in the literature; for instance, Rana et al. [18] used Kalman filtering to estimate the position and velocity of a 2D moving object for video surveillance, and Feng et al. [19] used a Kalman filter to predict the future location of a vehicle.

Of the handful of localization techniques that can be used underwater, many are implemented with the use of acoustic signals. However, acoustic approaches tend to complicate or constrain the localization algorithm due to the inherent limited bandwidth, long propagation delays, and multipath effect, which result in low data rates and low signal reception reliability [20–22]. Moreover, acoustic-based methods typically require bulky and power-hungry hardware, making them unsuitable for small underwater robots with limited resources [23].

### 1.2 Relevant Work on Light-Emitting Diode-Based Localization.

Optical communication systems based on LEDs are becoming a popular alternative to acoustic-based methods. In recent years, LED systems have shown promise in high-rate, low-power underwater communication over short-to-medium distances [24–27]. However, a downside of LED-based communication is the requirement of near line-of-sight (LOS) between the transmitter and the receiver. This challenge has been addressed in several ways, including the use of redundant transmitters/receivers [28–31] and active alignment [25,32,33].

Indoor LED-based optical localization and communication systems have been developed by using visible light communication (VLC) systems, in which the overhead lights used to illuminate the room can also be used as the transmission medium for both data and localization purposes [5,34–37]. Nguyen et al. [38] developed a VLC localization approach that integrates the angle of arrival and received signal strength of the light to compute the location, achieving a minimum simulated error of 10 cm. Qiu et al. [34] achieved a localization accuracy of 0.56 m using a fingerprint matching approach, where fingerprints are a mapping of position and the light intensities of each light in the environment, and each light transmits a unique beacon pattern allowing the localizing robot to associate a light intensity with a particular overhead fixture. In Ref. [39], Liang et al. presented a visual light positioning approach for localizing resource-constrained platforms, which used both modulating and nonmodulating LED light sources with a rolling-shutter camera. While VLC-based localization approaches are an alternative to radio frequency methods indoors and can work underwater in theory [40,41], they are not practical for a typical aquatic environment due to the difficulty in illuminating the significantly larger and more complex environment.

An alternative form of LED-based optical localization is through the use of cameras as the means for capturing the light. For instance, while Nguyen et al. [38] used an array of photodiodes in their VLC approach, Zachár et al. [5] and Liang et al. [35] used cameras to identify the light in their works. Cameras can also be used for LED-based optical localization in other ways. For instance, Giguere et al. [42] used cameras mounted on several robots that interacted in a cooperative manner to derive the position and orientation relative to each other based on the LED landmarks mounted on the robots. Suh et al. [43] proposed a similar approach where a group of robots with cameras localized themselves by splitting the robots into alternating groups of stationary and moving robots. The stationary robots would track the LED markers on the mobile robots using multiview geometry. However, implementing camera-based techniques underwater is challenging, due to various degradation problems associated with obtaining the images, such as light absorption and scattering [44,45]. While there are techniques for enhancing the imaging quality (for example, histogram equalization), they involve additional processing that is simply not needed when using photodiodes.

Our prior works [46–49] presented an approach to LED-based simultaneous localization and communication by taking advantage of the LOS requirement in LED-based communication to extract the relative bearing between a mobile robot and two nodes with known positions (referred to as base nodes). This approach used a pair consisting of a blue-light LED and a blue-light-sensitive photodiode, as the transmitter and receiver, respectively, on a rotary platform for the LED-based communication. Blue light is known to experience the least attenuation underwater within the visible light spectrum, and the feasibility of our optical transceiver for underwater communication has been demonstrated in the remote control of an underwater robot [50]. The use of such optical transceivers for localization allows us to avoid the image processing complications of utilizing cameras, as well as eliminate the need to illuminate an entire area as is the case with VLC approaches. While it is possible to extract the distance from the intensity of the LED light, the measured intensity is a function of both the distance and the deviation from the line-of-sight, which makes it impossible to directly map the light intensity to the distance of its source. The bearing angles were then used to triangulate the position of the mobile robot. A Kalman filter was implemented to combat the challenge of measurement noises and to allow robot position prediction to facilitate the light scan for bearing measurement. However, this approach came with the assumption that the angles with respect to the two base nodes within each measurement cycle were captured when the mobile robot was at a single location. Consequently, because scanning for the light intensity with a rotating receiver cannot capture both angles simultaneously, our implementation used a stop-and-go motion in order to ensure that the robot was at a single location. Namely, the robot would alternate between moving and pausing for the angle measurement. However, this significantly slowed the movement of the robot, making it unsuitable for time-sensitive tasks.

### 1.3 Contribution.

In this work, we propose a novel solution for LED-based localization of a mobile robot while it is continuously moving. In particular, the proposed approach takes advantage of the estimated velocity from the Kalman filter, to properly correlate the two consecutive measurements of bearing angles with respect to the two base nodes for the position computation. Extensive simulations and experiments have been conducted to evaluate the proposed approach with a comparison to an alternative approach (which we call the traditional approach) that uses the two angles obtained in each measurement cycle to directly compute the measured position. Results from both simulation and experiments show that the proposed dynamic-prediction method performs consistently better than the traditional method, with an error reduction of 55% and 38% for the average position and velocity errors, respectively.

Preliminary results for the proposed method were reported at the 2020 ASME Dynamic Systems and Control Conference [51]. Aside from the enhancement in presentation throughout the paper, the major improvements of this work over [51] include the following:

• In Ref. [51], only rudimentary simulation was used to evaluate the robustness of the proposed dynamic-prediction measurement method against varying levels of error in body orientation measurement. The simulation presented in this work has been significantly enhanced from the version in Ref. [51], with the continuous nature of the robot's motion better characterized, allowing for the simulation to capture how the robot's ever-changing position affects not only the angles obtained from scanning the light but also the LOS needed for the optical communication between the base nodes and the mobile node. The new simulation also analyzes the robustness of the proposed approach under different velocity settings of the robot.

• While a point mass model was used in Ref. [51] for the robot, in this work a rigid body model is adopted to more accurately estimate the robot movement.

• In Ref. [51] there were no experimental results. In this paper, we have implemented the algorithms in physical hardware and demonstrated the efficacy of the proposed method with localization experiments for a mobile robot.

The organization of the remainder of this paper is as follows: Section 2 presents an overview of the basic concept of the LED-based localization scheme and outlines the Kalman filter used in robot state prediction. Section 3 describes the proposed approach. Section 4 details the simulation setup and results, followed by the presentation of experimental setup and results in Sec. 5. Finally, concluding remarks are provided in Sec. 6.

## 2 Overview of the Basic Optical Localization Approach

This section provides an overview of the basic localization approach. This material is not the main contribution of this work; rather, it provides the background needed for the contribution presented later.

### 2.1 “Traditional” Position Measurement System.

To simplify the discussion, the localization approach is discussed in the 2D space. Each node is assumed to be equipped with an optical transceiver, consisting of an LED transmitter and photodiode receiver. In addition, each node is able to rotate and monitor the orientation of its transceiver within the horizontal plane using a stepper motor, which limits the rate at which the transceiver can rotate; see Fig. 1 for the actual physical hardware used in this work. We consider a network of three nodes, which includes a pair of base nodes (with known and fixed locations), $BN1$ and $BN2$, and a mobile node, $MN$, to be localized. See Fig. 2 for illustration.

Fig. 1

Fig. 2
For localization, during every measurement cycle, each base node will orient its transceiver toward the mobile node and turn on its LED. The mobile node will then scan its transceiver and monitor its photodiode output, which peaks when the mobile node's transceiver is pointing directly at the light of a given base node. The mobile node will note the orientation of its own transceiver when its photodiode output peaks, which can be converted to its bearing angle with respect to the corresponding base node, as illustrated by θ1 and θ2 in Fig. 2. With the captured bearing angles θ1 and θ2, and the known positions of the base nodes $BN1$ and $BN2$, the position of the mobile robot, $MN$, can be computed
$\begin{bmatrix} n_x \\ n_y \end{bmatrix} = \begin{bmatrix} B_{1x} + |D_1|\cos\theta_1 \\ B_{1y} + |D_1|\sin\theta_1 \end{bmatrix}$
(1)
where $[n_x, n_y]^T$ and $[B_{1x}, B_{1y}]^T$ are the (x, y) coordinate vectors for the $MN$ and $BN_1$, respectively, and $|D_1|$ is the magnitude of the vector $D_1$ shown in Fig. 2, which is obtained via the Law of Sines
$|D_1| = \dfrac{d\,\sin(\bar{\theta}_2)}{\sin(\theta_n)}$
(2)

Here $\theta_n$ is the angle corresponding to the side $BN_1$-$BN_2$ within the $MN$-$BN_1$-$BN_2$ triangle, $\theta_n = \theta_2 - \theta_1$; $\bar{\theta}_2$ is the supplement of $\theta_2$, $\bar{\theta}_2 = 180\deg - \theta_2$; and d is the distance between $BN_1$ and $BN_2$.
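To make the computation concrete, the triangulation of Eqs. (1) and (2) can be sketched as below. The angle convention is an assumption for illustration (each bearing is taken as the angle of the vector pointing from the base node to the mobile node); the paper defines the angles via Fig. 2.

```python
import math

def triangulate(bn1, bn2, theta1, theta2):
    """Position of the mobile node MN from two bearing angles, per Eqs. (1)-(2).

    bn1, bn2: (x, y) positions of BN1 and BN2.
    theta1, theta2: bearing angles in degrees (angle of the vector from each
    base node to MN -- an assumed convention for this sketch).
    """
    d = math.hypot(bn2[0] - bn1[0], bn2[1] - bn1[1])  # distance between base nodes
    theta_n = math.radians(theta2 - theta1)            # angle at MN in the triangle
    theta2_bar = math.radians(180.0 - theta2)          # supplement of theta2
    D1 = d * math.sin(theta2_bar) / math.sin(theta_n)  # Eq. (2), Law of Sines
    # Eq. (1): offset BN1 by |D1| along the bearing direction
    return (bn1[0] + D1 * math.cos(math.radians(theta1)),
            bn1[1] + D1 * math.sin(math.radians(theta1)))
```

As a sanity check, placing the base nodes at $(-3, 0)$ and $(3, 0)$ and the robot at $(0, -6)$ recovers the robot's position from the two exact bearings.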

Although this localization process seems simple, the task is involved because the target is mobile. The challenge comes from the need to have sufficient coordination among all three nodes to produce proper angle measurements. For instance, if a base node's transceiver is not pointing in the general “correct” direction, the mobile node's photodetector will not be able to pick up any light from the base node's LED due to the latter's directionality. Another challenge results from the error in the measured θ1 and θ2 – purely relying on the algebraic calculation (1) will lead to highly variable (instead of smooth) estimated trajectories for the mobile node $MN$.

To help address both challenges, Kalman filtering (see Sec. 2.2) is used for estimating and predicting the location of the mobile node $MN$, based on the sequence of measured locations computed via (1). In particular, the predicted position of the $MN$ allows each base node to orient its transceiver so as to shine its LED light in the anticipated direction of the $MN$. The predicted position also allows the mobile node itself to calculate the anticipated relative position of each base node and thus determine the proper scanning range for detecting the LOS. Specifically, the scanning range of the $MN$ transceiver during each measurement cycle is set to the span between the anticipated directions from the $MN$ to the two base nodes, plus an additional 30 deg on either side of the span, to ensure that the peak light intensity from either base node is not cut off. The design of the Kalman filtering algorithm is presented in Sec. 2.2.
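The scan-range selection described above can be sketched as follows; the angle bookkeeping (and the absence of wrap-around handling at ±180 deg) is a simplifying assumption, as the paper specifies only the 30 deg margin.

```python
import math

def scan_range(mn_pred, bn1, bn2, margin_deg=30.0):
    """Scanning span for the MN transceiver in one measurement cycle (Sec. 2.1).

    mn_pred: Kalman-predicted (x, y) position of the mobile node.
    Returns (start, end) in degrees: the span between the anticipated
    directions toward the two base nodes, widened by margin_deg on each
    side so that neither intensity peak is cut off.
    """
    a1 = math.degrees(math.atan2(bn1[1] - mn_pred[1], bn1[0] - mn_pred[0]))
    a2 = math.degrees(math.atan2(bn2[1] - mn_pred[1], bn2[0] - mn_pred[0]))
    lo, hi = min(a1, a2), max(a1, a2)
    return lo - margin_deg, hi + margin_deg
```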

### 2.2 Kalman Filtering Algorithm.

As mentioned in Sec. 2.1, the purpose of using Kalman filtering is to help establish the LOS between the mobile node and the base nodes, by predicting the future state of the mobile node, thus enabling proper orienting of the base node transceivers and setting the scanning range of the mobile node. Unlike our prior works [4649], which modeled the mobile node as a point mass, this work adopts a rigid-body model for the mobile robot, allowing both the body's orientation and position to be tracked. A constant linear (angular, respectively) velocity model corrupted with Gaussian noise is used for the mobile node's position (orientation, respectively) dynamics, since in general the precise knowledge of its movement is not available. These dynamics can be represented as
$n_{k+1} = n_k + v_k\,\Delta_k + w_{1,k}$
(3)
$v_{k+1} = v_k + w_{2,k}$
(4)
$\psi_{k+1} = \psi_k + \omega_k\,\Delta_k + w_{3,k}$
(5)
$\omega_{k+1} = \omega_k + w_{4,k}$
(6)
where $n_k=[n_{x,k}, n_{y,k}]^T$ and $v_k=[v_{x,k}, v_{y,k}]^T$ are the position and velocity vectors of the mobile robot in terms of the x and y coordinates at the kth time instance, respectively, $\psi_k$ and $\omega_k$ are the body orientation angle and the body's angular velocity, respectively, $\Delta_k$ is the kth sampling interval, and $w_{1,k}, w_{2,k}, w_{3,k}$, and $w_{4,k}$ are independent, zero-mean, white Gaussian noises. The observations $z_k$ and $\zeta_k$ are the noise-corrupted location and orientation measurements, respectively,
$z_k = n_k + w_{5,k}$
(7)
$\zeta_k = \psi_k + w_{6,k}$
(8)

where $w_{5,k}$ and $w_{6,k}$ are assumed to be white, zero-mean, Gaussian, and independent of each other and of the process noises $w_{1,k}, w_{2,k}, w_{3,k}$, and $w_{4,k}$.

The measurement $z_k$ is computed from (1) and (2), which is only possible in physical implementation when the bearing angles, θ1 and θ2, are measured by the $MN$ at a single fixed position. The main focus of this work is how $z_k$ can be computed properly when the bearing angles are captured by the mobile node at different positions due to the robot's movement. The measurement $\zeta_k$ is obtained from an orientation sensor such as a magnetic compass. Body orientation estimation is needed for the mobile robot to compute the transceiver rotation required to establish the LOS, by properly accommodating the rotation of the robot itself.

Two state vectors are used for Kalman filtering in this work. The first state vector, $x̂k$, maintains the estimate of the position and velocity, whereas the second state vector, $b̂k$, tracks the estimate of the body orientation angle and the angular velocity. The two state vectors are defined as
$\hat{x}_k = [\hat{n}_x, \hat{n}_y, \hat{v}_x, \hat{v}_y]^T$
(9)
$\hat{b}_k = [\hat{\psi}, \hat{\omega}]^T$
(10)

where $[\hat{n}_x, \hat{n}_y]^T$, $[\hat{v}_x, \hat{v}_y]^T$, $\hat{\psi}$, and $\hat{\omega}$ are the estimated position, velocity, body orientation angle, and angular velocity of the mobile node at the kth time instance, respectively. The Kalman filter is then implemented to estimate $\hat{x}_k$ and $\hat{b}_k$ based upon Eqs. (3)–(8). The details of the Kalman filter implementation, which are standard [52], are omitted here for brevity. In this work, the estimated position and velocity in $\hat{x}_k$, computed by the mobile node and shared with the base nodes via optical communication, are used by both the mobile node and the base nodes to orient their respective transceivers toward each other and to set the scan range of the mobile node, while the estimated body orientation $\hat{\psi}_k$ is used by the mobile node to compensate the orientation of the transceiver for any underlying rotation of the robot's body. In addition, the estimated linear velocity, along with the bearing angle measurements, is used to determine the measured position of the robot, as explained in Sec. 3.
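Since the standard filter details are omitted above, the following is a generic sketch of one predict/update step for the position/velocity state governed by Eqs. (3), (4), and (7); the noise covariances built from `q` and `r` are illustrative placeholders, not values tuned in this work.

```python
import numpy as np

def kf_step(x, P, z, dt, q=0.01, r=0.1):
    """One predict/update cycle for the constant-velocity state
    x = [nx, ny, vx, vy]^T with position-only measurements z = [zx, zy].

    q and r are hypothetical process/measurement noise intensities.
    """
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)   # Eqs. (3)-(4)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)    # Eq. (7): only position observed
    Q, R = q * np.eye(4), r * np.eye(2)
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the measured position z
    y = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new
```

Feeding noise-free positions along a constant-velocity trajectory, the velocity estimate converges to the true velocity, which is the quantity the proposed method in Sec. 3 relies on.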

## 3 Proposed Approach to Localization of a Continuously Moving Robot

In the state-of-the-art LED-based optical localization approaches mentioned in the literature, such as those in our previous works, the measurement system of Eqs. (1) and (2) was a static process, with the mobile robot assumed to be at the same single position when both bearing angles were captured. However, the physical angle scanning process takes time, making it impossible to instantaneously capture both angles with a rotating transceiver. Therefore, a stop-and-go scheme for the robot had to be used in order to ensure that the robot was at the same position for both angle captures. While alternative transceiver schemes with multiple transmitters and/or receivers (see, for example, Ref. [30]) could potentially alleviate the scanning requirement, they require more complex hardware and a larger footprint on the robot, and could perform poorly in localization. In particular, for a (nonrotating) transceiver fixed to a robot, the resolution of bearing angle measurement will depend on the density of photodiodes and thus be low unless a very large number of photodiodes are used.

The stop-and-go implementation of the localization scheme is time-consuming and thus limits how quickly the robot can traverse its environment, making it unsuitable for time-sensitive tasks. In this work, we propose an approach that allows the robot to localize while moving continuously in its environment. In particular, we propose an algorithm that can compute a proper measured position despite the pair of bearing angles (θ1 and θ2) within each measurement cycle being captured at different times and positions. Let these positions be labeled as Pa and Pb, where Pa is the position whose x-coordinate is smaller and not necessarily the position where the first bearing angle is captured. Localization of the robot is enabled by determining the coordinates of these spotting positions and treating one of these positions (Pa in this work) as the observed location $zk$ that is used in the Kalman filter.

The concept for calculating these positions is considerably more involved than the traditional approach described in Eqs. (1) and (2). To better contrast their differences, Fig. 3 illustrates how the two approaches would determine a position given the same measured bearing angles. In particular, the diagram shows that the traditional approach would use the two angles to find a converging point at Pf, which is significantly distant from the two ground-truth locations, Pa and Pb, where the angles were actually captured by the robot. Moreover, with access to only the bearing angles, the coordinates for Pa and Pb could be any of the points along the two edges of the triangle formed by Pf, $BN1$, and $BN2$. To determine an estimate of the positions for Pa or Pb, this work exploits the robot's most recently estimated velocity to properly combine the two measured angles.

Fig. 3

### 3.1 Measurement Equations.

The locations of the mobile robot, Pa and Pb, where it captures the bearing angles, can be determined by solving for the x and y distances between each location and the base node of the corresponding captured angle, by using these angles along with the estimated velocity of the mobile node. For instance, in Fig. 3, $BN1$ and Pa are separated from each other by xa and ya. Similarly, $BN2$ and Pb are related by xb and yb. These distances can be expressed in generalized relationships as
$P_{ax} = BN_{1x} + A\,x_a$
(11)
$P_{bx} = BN_{2x} + B\,x_b$
(12)
$P_{ay} = BN_{1y} + C\,y_a$
(13)
$P_{by} = BN_{2y} + D\,y_b$
(14)

where $P_{ax}, P_{bx}$ and $P_{ay}, P_{by}$ are the x and y coordinates of Pa and Pb, respectively, $BN_{1x}, BN_{2x}$ and $BN_{1y}, BN_{2y}$ are the x and y coordinates of $BN_1$ and $BN_2$, respectively, and A, B, C, and D are the sign values of the distances xa, xb, ya, and yb, respectively. A, B, C, and D reflect where the spotting locations are relative to the base nodes and can be determined by inspecting the properties of the captured bearing angles. In particular, A and B take on the sign value of $\cos\theta_1$ and $\cos\theta_2$, respectively, and C and D take on the sign value of $\sin\theta_1$ and $\sin\theta_2$, respectively.

From the relationships in Eqs. (11)–(14), expressions for the distances xa, xb, ya, and yb can be derived as
$x_a = \dfrac{d - \eta + BE\gamma\sin\varphi\tan\beta}{A - B\tan\alpha\tan\beta}$
(15)
$y_a = x_a\tan\alpha$
(16)
$y_b = y_a + E\lambda$
(17)
$x_b = y_b\tan\beta$
(18)
where
$d = BN_{2x} - BN_{1x}$
(19)
$\eta = \gamma\cos\varphi$
(20)
$\lambda = \gamma\sin\varphi$
(21)
$\rho = \begin{cases} +1, & 0\deg \le \varphi < 90\deg \\ -1, & -90\deg < \varphi < 0\deg \end{cases}$
(22)
$E = \begin{cases} +1, & \big[(\rho=+1)\wedge(C=+1)\wedge(D=+1)\big] \vee \big[(\rho=-1)\wedge(C=-1)\wedge(D=-1)\big] \\ -1, & \big[(\rho=-1)\wedge(C=+1)\wedge(D=+1)\big] \vee \big[(\rho=+1)\wedge(C=-1)\wedge(D=-1)\big] \end{cases}$
(23)

In these equations, d is the distance between the base nodes, η and λ are the x- and y-distances, respectively, between the spotting points Pa and Pb, and E is the sign value associated with λ, which is determined from a combination of the slope sign, ρ, and the sine values of the bearing angles. The operators $∧$ and $∨$ are the logical operators "and" and "or", respectively. The variables α and β represent the inner angles of the triangles that each base node makes with its corresponding spotting point and are computed from θ1 and θ2, respectively, and γ and $φ$ are the magnitude and angle, respectively, of the Kalman filter-predicted velocity of the mobile node's movement (see Fig. 3). To simplify the discussion, it is assumed, without loss of generality, that the base nodes are separated only along the x-axis. By using the two sets of relationships (11)–(14) and (15)–(18), the position of Pa (or Pb) can be computed and then used as the position measurement in the Kalman filter's state estimate update.
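A sketch of this measurement computation for the Fig. 3 case is given below. The inner angles `alpha` and `beta` are taken as inputs because their derivation from θ1 and θ2 is specified via the figure, and the collapsed sign logic for E assumes C = D, the only case Eq. (23) defines; returning −1 otherwise is an arbitrary choice of this sketch.

```python
import math

def sign(v):
    return 1.0 if v >= 0 else -1.0

def spotting_point(bn1, bn2, theta1, theta2, alpha, beta, gamma, phi):
    """Coordinates of spotting point Pa per Eqs. (11)-(23), Fig. 3 case
    (theta1 captured at Pa, theta2 at Pb; base nodes separated along x).

    theta1, theta2: bearing angles (degrees); alpha, beta: inner triangle
    angles (degrees) derived from theta1, theta2; gamma, phi: magnitude and
    angle (degrees) of the Kalman filter-predicted velocity.
    """
    A, C = sign(math.cos(math.radians(theta1))), sign(math.sin(math.radians(theta1)))
    B, D = sign(math.cos(math.radians(theta2))), sign(math.sin(math.radians(theta2)))
    rho = 1.0 if 0 <= phi < 90 else -1.0        # Eq. (22)
    E = 1.0 if rho == C == D else -1.0          # Eq. (23), collapsed; needs C == D
    d = bn2[0] - bn1[0]                         # Eq. (19)
    eta = gamma * math.cos(math.radians(phi))   # Eq. (20)
    lam = gamma * math.sin(math.radians(phi))   # Eq. (21)
    ta, tb = math.tan(math.radians(alpha)), math.tan(math.radians(beta))
    xa = (d - eta + B * E * lam * tb) / (A - B * ta * tb)  # Eq. (15)
    ya = xa * ta                                           # Eq. (16)
    return bn1[0] + A * xa, bn1[1] + C * ya                # Eqs. (11), (13)
```

As a sanity check, with zero predicted velocity (γ = 0) the computation reduces to the static triangulation case and recovers the single true position.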

The above relationships, Eqs. (11)–(18), were developed for the situation shown in Fig. 3, where $θ1$ and $θ2$ are captured at spots Pa and Pb, respectively. In the case where $θ1$ and $θ2$ are instead captured at spots Pb and Pa, respectively, as illustrated in Fig. 4, Eqs. (11)–(18) need minor adjustments, changing several of the variables to reflect the new association among the angles, the spotting positions, the base nodes, and the respective distances. The changes to these equations are shown in the Appendix. Equations (19)–(23) remain unchanged since they are independent of these associations. For the cases when Pa and Pb are positioned lower than the base nodes along the y-axis, the system of equations remains the same, since the values C and D automatically account for changes in the y position.

Fig. 4

## 4 Simulation

Simulation of the proposed dynamic-prediction approach was conducted, with its performance compared to the traditional approach of computing the measured position. In particular, the robustness of both approaches was tested against different levels of velocity for the mobile robot.

### 4.1 Simulation Setup.

The robot was evaluated on a straight-line trajectory starting at $[-7, -6]^T$ and ending at $[9, -6]^T$, as shown in Figs. 5 and 6. The ground-truth positions of the robot along this trajectory were determined by using its constant ground-truth velocity to generate a large number of position points between the starting and ending locations. Base nodes $BN_1$ and $BN_2$ were positioned at $[-3, 0]^T$ and $[3, 0]^T$, respectively. The area of the simulated environment was defined in grid units to mimic the physical space of the experiment, where a grid unit is equivalent to approximately 23 cm.

Fig. 5

Fig. 6

### 4.2 Simulation Measurements.

The simulated robot body orientation measurement, $s_k$, was generated by adding zero-mean Gaussian noise to the ground-truth orientation value. The amount of error in the orientation measurement was adjusted by varying the standard deviation of the Gaussian noise. When the robot's body orientation changes, say, by $Δψ$ as part of its locomotion, the physical direction in which the robot's transceiver is oriented changes by the same amount. Therefore, to cancel the effect of rotation of the underlying robotic platform, the Kalman filter-estimated body orientation was used to adjust the robot's transceiver angle via the stepper motor that controlled the scanning of the transceiver. In particular, the stepper motor would rotate the transceiver by $−Δψ$ to counter the effect of the robot rotation. Consequently, an error in the estimated body orientation contributes directly to the error in bearing angle measurement, because the mobile node's transceiver keeps track of its absolute orientation based on the counted steps of the stepper motor, assuming a perfect cancelation of the body rotation effect.
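The counter-rotation described above amounts to a simple step-count conversion. The function below is a sketch, with the 0.225 deg step resolution taken from the quarter-step driver setting described in Sec. 5.

```python
def steps_to_counter_rotation(delta_psi_deg, step_deg=0.225):
    """Stepper steps that rotate the transceiver by -delta_psi, canceling
    a body rotation of delta_psi so the transceiver's absolute orientation
    is preserved. step_deg is the quarter-step resolution (0.225 deg).
    """
    return round(-delta_psi_deg / step_deg)
```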

Bearing angle measurements were generated by simulating the process of the mobile node scanning the light intensities produced by the base nodes. The range of the mobile node's scan was the angular distance between the two anticipated directions from the mobile node to the two base nodes plus an additional 30 deg on each side of this angular range. The scan resolution was set to a step size of 0.225 deg, to mimic the rotation resolution of the stepper motor used in the hardware implementation. The amount of time that elapsed between the steps of the scan was determined by averaging the amount of time that elapsed between steps in hardware trials.

The simulated light intensity was based on the degree of LOS achieved between the transceivers of the mobile node and the base nodes at each step of the mobile node's transceiver rotation. This degree of LOS, which ranged over [0.0, 1.0] with a value of 1.0 representing direct LOS, was first scaled by 7.3 to mimic the range of voltages measured by the photodiode and was then injected with zero-mean Gaussian noise with a standard deviation of 0.5 V to represent the inherent error associated with the light measuring process. The bearing angles were extracted from the simulated light intensities by determining the angular position of the mobile node's transceiver at the center point of each peak in the intensity scan.
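The peak-center extraction can be sketched as follows; the threshold used to delimit the peak region is a hypothetical parameter, since the text does not state how the peak boundaries are detected.

```python
def peak_center_angle(angles, intensities, threshold):
    """Bearing angle at the center point of an intensity peak (Sec. 4.2).

    angles: transceiver angle (degrees) at each scan step.
    intensities: photodiode reading at each step.
    threshold: hypothetical cutoff delimiting the peak region.
    Returns the angle at the center of the above-threshold run, or None
    if no sample exceeds the threshold.
    """
    above = [i for i, v in enumerate(intensities) if v > threshold]
    if not above:
        return None
    first, last = above[0], above[-1]
    return 0.5 * (angles[first] + angles[last])
```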

### 4.3 Simulation Results.

The simulation examined the system's performance under different velocity settings of the robot. In particular, the robot's velocity ranged from 0.17 grid units/s to 0.37 grid units/s in increments of 0.05 grid units/s, with each velocity setting used for 100 trials and each trial associated with a unique random seed. The standard deviation of the Gaussian noise applied to the mobile node's body orientation angle measurement during these trials was kept at 1.0 deg.

Figures 5 and 6 show the comparison between the ground-truth positions and the Kalman filtering-based estimated positions of the robot at the same timestamps for one simulated trial, using the proposed dynamic-prediction approach and the traditional measurement approach, respectively, with the robot's velocity set to 0.27 grid units/s. Figures 7 and 8 show the means and standard deviations of the estimated position and velocity errors, respectively, over all trials for (a) the traditional measurement approach and (b) the proposed dynamic-prediction measurement approach, under each velocity setting. The estimated position error is the magnitude of the difference between the ground-truth position and the position components $[\hat{n}_x, \hat{n}_y]^T$ of the Kalman filter's state vector $\hat{x}_k$ after processing the observed position $z_k$ corresponding to the same timestamp. Similarly, the estimated velocity error is the magnitude of the difference between the ground-truth velocity and the velocity components $[\hat{v}_x, \hat{v}_y]^T$ of $\hat{x}_k$ after processing $z_k$.

Fig. 7

Fig. 8

Figures 5 and 6 show that the proposed dynamic-prediction approach produces estimates that are more tightly aligned with the ground-truth positions, whereas the estimates from the traditional approach weave about the ground-truth positions. From Figs. 7 and 8, it can be seen that the proposed dynamic-prediction approach maintains a relatively consistent level of position and velocity estimation accuracy as the robot's velocity increases. In comparison, the position and velocity estimation performance of the traditional measurement approach deteriorates as the speed increases.

## 5 Experiment

In this section, we first describe the experiment setup and then present the results of the experiments for evaluating the proposed algorithm.

### 5.1 Setup.

The hardware used in our experiments was similar to that used in our previous works [48,49]. In particular, each node's transceiver was composed of a CREE XRE 1 W Blue LED as the transmitter and a Blue-Enhanced photodiode as the receiver. The transceiver and the printed circuit board (PCB) circuitry were connected to the shaft of a stepper motor to enable rotation, with a slip ring allowing the wiring to move freely as the assembly rotated. All of these components were mounted together via a 3D-printed base structure, as shown in Fig. 1.

The stepper motor was controlled via a dedicated stepper motor driver, which converted step pulses sent from the embedded controller into motor rotation. These experiments used a SparkFun® Big Easy Driver (Boulder, CO) set to quarter-step mode (i.e., each step rotated the shaft 0.225 deg). The orientation was maintained by having the embedded controller keep count of the step pulses sent.
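Because there is no encoder, the transceiver's heading follows directly from the step count and the 0.225 deg/step resolution stated above; a minimal sketch (the function name is ours):

```python
STEP_DEG = 0.225  # quarter-step resolution reported for the driver setup

def orientation_deg(step_count: int) -> float:
    """Transceiver heading maintained purely by counting step pulses."""
    return (step_count * STEP_DEG) % 360.0
```

For example, 400 pulses correspond to a 90 deg rotation, and 1600 pulses bring the shaft back through a full revolution.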

The embedded controller for each node was an Intel® Edison Board with an Arduino® Expansion Board. The board had a 500 MHz Intel Atom dual-core processor with 1 GB of DDR3 RAM, and a built-in dual-band 2.4 GHz and 5 GHz Broadcom® 43340 802.11 a/b/g/n Wi-Fi adapter. The Intel Edison Board managed stepper motor rotation, LED signal transmission and reception, and Kalman filter processing.

A Lynxmotion® Aluminum 4WD1 Rover Kit and an 80/20® metal beam were used to mount the 3D-printed bases of the mobile robot and the base nodes, respectively. Figure 9 shows the grid used for conducting the experiments, which was laid out with blue painter's tape and followed the grout lines in the floor tiles. Each square in the grid had a side length of approximately 23 cm and represented 1 grid unit, a generic unit of length used to measure motion and position. Due to the limited space, as well as the physical limitation on how quickly the hardware could collect data points, the practical speed of the robot was limited.
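Since the grid unit is simply a generic length tied to the roughly 23 cm tile squares, converting between grid units and metric lengths is a single scale factor; a hypothetical helper (names are ours, and the 23 cm figure is approximate):

```python
GRID_UNIT_CM = 23.0  # approximate side length of one grid square

def grid_to_cm(units: float) -> float:
    """Convert grid units to centimeters."""
    return units * GRID_UNIT_CM

def cm_to_grid(cm: float) -> float:
    """Convert centimeters to grid units."""
    return cm / GRID_UNIT_CM
```

Under this scale, a simulated speed of 0.27 grid units/s corresponds to roughly 6.2 cm/s.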

Fig. 9

Orientation data for the mobile node was captured by NaturalPoint®'s OptiTrack motion tracking system. Strategically placed infrared cameras captured the location and orientation of the robot by cross-referencing the positions of reflective markers attached to the robot, as shown in Fig. 10. The mobile node sent and received User Datagram Protocol (UDP) packets over Wi-Fi to obtain position and heading data from the PC running the motion tracking software. The received position data was used only as the ground truth in postprocessing of the experimental results.

Fig. 10

In this work, the mobile node was responsible for measuring the bearing angles from a single scanning sweep of the light shone by the base nodes, $BN_1$ and $BN_2$. Consequently, a second Intel Edison Board was installed on the robot's chassis so that its two main tasks, collecting pose data from the motion tracking system and optical localization, could be processed in parallel. The two Edison boards, shown in Fig. 10, periodically communicated with each other so that the localization algorithm could access the measured orientation data from the motion tracking system.

### 5.2 Results.

Experimental trials were conducted using both the proposed dynamic-prediction approach and the traditional measurement approach, with nine trials for each. Table 1 summarizes the performance across each set of trials; in particular, it lists the means and standard deviations of the estimated position and velocity error magnitudes for each algorithm. From the table, it can be seen that the localization error under the dynamic-prediction approach is 54.9% lower than that under the traditional approach, and the velocity estimation error under the dynamic-prediction approach is 38.1% lower than that under the traditional approach. The table also indicates that the dynamic-prediction approach is much more consistent than the traditional approach, given its smaller standard deviations for both localization and velocity estimation errors.

Table 1: Summarized experimental results from the trials of each algorithm

| | Estimated position error | | Estimated velocity error | |
| --- | --- | --- | --- | --- |
| | Mean | Standard deviation | Mean | Standard deviation |
| Dynamic prediction | 0.3601 | 0.0677 | 0.0574 | 0.0066 |
| Traditional | 0.7985 | 0.2144 | 0.0928 | 0.0364 |

The results include the mean and standard deviation of the estimated position error magnitude and the estimated velocity error magnitude.
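The percentage reductions quoted in the text follow directly from the Table 1 means; a quick recomputation (the helper name is ours):

```python
# Mean error magnitudes copied from Table 1.
pos_mean = {"dynamic": 0.3601, "traditional": 0.7985}
vel_mean = {"dynamic": 0.0574, "traditional": 0.0928}

def reduction(dyn: float, trad: float) -> float:
    """Relative error reduction (%) of dynamic prediction vs. traditional."""
    return 100.0 * (trad - dyn) / trad

pos_red = reduction(pos_mean["dynamic"], pos_mean["traditional"])  # ~54.9%
vel_red = reduction(vel_mean["dynamic"], vel_mean["traditional"])  # ~38.1%
```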

Figures 11 and 12 show the estimated position and estimated velocity, respectively, against the corresponding ground truth over time for one of the trials that used the proposed dynamic-prediction measurement approach. There are a total of seven data points, which was the number of localization steps completed before the robot reached the end-point. Similarly, Figs. 13 and 14 show the estimated position and estimated velocity, respectively, against the corresponding ground truth over time for one of the trials that used the traditional measurement approach. Note that only four data points are plotted, which was the largest number of completed localization steps among all nine trials for the traditional approach, after which the robot was no longer able to establish the LOS with the base nodes. The ground-truth positions shown in Figs. 11 and 13 were extracted from the datalog of the motion tracking system for each trial; these positions were then down-sampled to compute the ground-truth velocities shown in Figs. 12 and 14. Figures 15 and 16 compare the estimated positions with the ground-truth positions for one of the trials using the dynamic-prediction approach and the traditional measurement approach, respectively. Together, Table 1 and Figs. 11–16 show that the dynamic-prediction approach is able to sufficiently localize the moving robot, whereas the traditional approach struggles.

Fig. 11
Fig. 12
Fig. 13
Fig. 14
Fig. 15
Fig. 16

## 6 Conclusions

In this paper, we presented an approach to LED-based localization of a continuously moving robot. By utilizing the estimated velocity of the mobile robot, we were able to address the main challenge of measuring the robot's position despite the bearing angles being measured at different times and positions. It was shown in simulation and experiments that the proposed dynamic-prediction approach was capable of successfully localizing the mobile robot and consistently outperformed the approach based on the traditional method for computing the measurement, with a reduction of 55% and 38% for the average position and velocity estimation errors, respectively, observed in the experiments. Note that although in this work the proposed method was only evaluated with a straight-line trajectory for the mobile robot, the algorithm itself is applicable to more general motions, as long as the trajectory does not undergo abrupt changes and can be approximately treated as piecewise-linear at a time scale commensurate with the duration of each measurement cycle.

The presented method is applicable in general GPS-denied environments, including indoors, outdoor areas with heavy canopy cover such as rain forests, and underwater environments. Although blue LEDs were used in this work, motivated by the underwater applications, the algorithm can be used with LEDs of other colors. The experimental, and likewise the simulated, evaluations presented in this work were conducted in a two-dimensional terrestrial setting, so as to validate the proposed design without the numerous overhead concerns associated with underwater experimentation, including but not limited to waterproofing and light refraction. Future efforts will include extending the algorithm to the 3D scenario and developing an underwater solution that addresses both the waterproofing and any light refraction issues. The end goal is an experimental system where a mobile submersible robot can be localized and tracked in 3D space underwater using LED transceivers.

## Funding Data

• National Science Foundation (Grant No. IIS 1734272; Funder ID: 10.13039/100000001).

## Footnotes

1. A YouTube video of the proposed approach is available at https://youtu.be/0IyIJrozOuk

### Appendix

The changes to Eqs. (11)–(18) for the case where $θ_1$ and $θ_2$ are alternatively captured at spots $P_b$ and $P_a$, respectively, as illustrated in Fig. 4, are as follows:
$P_{a_x} = BN_{2_x} + A x_a$
(A1)
$P_{b_x} = BN_{1_x} + B x_b$
(A2)
$P_{a_y} = BN_{2_y} + C y_a$
(A3)
$P_{b_y} = BN_{1_y} + D y_b$
(A4)
$x_a = \dfrac{d + \eta - A E \gamma \sin\varphi \tan\alpha}{-B + A \tan\beta \tan\alpha}$
(A5)
$y_a = x_a \tan\beta$
(A6)
$y_b = y_a + E\lambda$
(A7)
$x_b = y_b \tan\alpha$
(A8)

## References

1. Kim, M., and Chong, N. Y., 2007, "RFID-Based Mobile Robot Guidance to a Stationary Target," Mechatronics, 17(4–5), pp. 217–229. 10.1016/j.mechatronics.2007.01.005
2. Wanasinghe, T. R., Mann, G. K. I., and Gosine, R. G., 2014, "Decentralized Cooperative Localization for Heterogeneous Multi-Robot System Using Split Covariance Intersection Filter," Canadian Conference on Computer and Robot Vision (CRV), Montreal, QC, Canada, May 6–9, pp. 167–174. 10.1109/CRV.2014.30
3. Wang, K., Liu, Y., and Li, L., 2014, "A Simple and Parallel Algorithm for Real-Time Robot Localization by Fusing Monocular Vision and Odometry/AHRS Sensors," IEEE/ASME Trans. Mechatronics, 19(4), pp. 1447–1457. 10.1109/TMECH.2014.2298247
4. Kim, A., and Eustice, R., 2009, "Pose-Graph Visual Slam With Geometric Model Selection for Autonomous Underwater Ship Hull Inspection," IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, Oct. 10–15, pp. 1559–1565. 10.1109/IROS.2009.5354132
5. Zachár, G., Vakulya, G., and Simon, G., 2017, "Design of a VLC-Based Beaconing Infrastructure for Indoor Localization Applications," IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Turin, Italy, May 22–25, pp. 1–6. 10.1109/I2MTC.2017.7969837
6. Bergen, M. H., Jin, X., Guerrero, D., Chaves, H. A. L. F., Fredeen, N. V., and Holzman, J. F., 2017, "Design and Implementation of an Optical Receiver for Angle-of-Arrival-Based Positioning," J. Lightwave Technol., 35(18), pp. 3877–3885. 10.1109/JLT.2017.2723978
7. Bergen, M. H., Schaal, F. S., Klukas, R., Cheng, J., and Holzman, J. F., 2018, "Toward the Implementation of a Universal Angle-Based Optical Indoor Positioning System," Front. Optoelectron., 11(2), pp. 116–127. 10.1007/s12200-018-0806-0
8. Browne, A. F., and Padgett, S. T., 2018, "Novel Method of Determining Vehicle Cartesian Location Using Dual Active Optical Beacons and a Rotating Photosensor," IEEE Sens. Lett., 2(4), pp. 1–4. 10.1109/LSENS.2018.2873841
9. Peula, J. M., Urdiales, C., and Sandoval, F., 2010, "Explicit Coordinated Localization Using Common Visual Objects," IEEE International Conference on Robotics and Automation, Anchorage, AK, May 3–7, pp. 4889–4894. 10.1109/ROBOT.2010.5509398
10. Easton, A., and Cameron, S., 2006, "A Gaussian Error Model for Triangulation-Based Pose Estimation Using Noisy Landmarks," IEEE Conference on Robotics, Automation and Mechatronics, Bangkok, Thailand, June 1–3, pp. 1–6. 10.1109/RAMECH.2006.252663
11. Font-Llagunes, J. M., and Batlle, J. A., 2009, "Consistent Triangulation for Mobile Robot Localization Using Discontinuous Angular Measurements," Rob. Auton. Syst., 57(9), pp. 931–942. 10.1016/j.robot.2009.06.001
12. Font, J. M., and Batlle, J. A., 2006, "Mobile Robot Localization. Revisiting the Triangulation Methods," IFAC Proc. Vols., 39(15), pp. 340–345. 10.3182/20060906-3-IT-2910.00058
13. Olsen, C. F., 2000, "Probabilistic Self-Localization for Mobile Robots," IEEE Trans. Rob. Autom., 16(1), pp. 55–66. 10.1109/70.833191
14. Giuffrida, F., Morasso, P., Vercelli, G., and Zaccaria, R., 1996, "Active Localization Techniques for Mobile Robots in the Real World," Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. 3, Osaka, Japan, Nov. 8, pp. 1312–1318. 10.1109/IROS.1996.568986
15. Rahul Sharma, K., Honc, D., and Dusek, F., 2014, "Sensor Fusion for Prediction of Orientation and Position From Obstacle Using Multiple IR Sensors an Approach Based on Kalman Filter," International Conference on Applied Electronics, Pilsen, Czech Republic, Sept. 9–10, pp. 263–266. 10.1109/AE.2014.7011716
16. Prabha, C., Supriya, M. H., and Pillai, P. R. S., 2009, "Improving the Localization Estimates Using Kalman Filters," International Symposium on Ocean Electronics (SYMPOL 2009), Cochin, India, Nov. 18–20, pp. 190–195. 10.1109/SYMPOL.2009.5664196
17. Xu, S., Ou, Y., and Wu, X., 2019, "Learning-Based Adaptive Estimation for AOA Target Tracking With Non-Gaussian White Noise," IEEE International Conference on Robotics and Biomimetics (ROBIO), Dali, China, Dec. 6–8, pp. 2233–2238. 10.1109/ROBIO49542.2019.8961815
18. Rana, M. M., Halim, N., Rahamna, M. M., and Abdelhadi, A., 2020, "Position and Velocity Estimations of 2D-Moving Object Using Kalman Filter: Literature Review," 22nd International Conference on Advanced Communication Technology (ICACT), Phoenix Park, South Korea, Feb. 16–19, pp. 541–544. 10.23919/ICACT48636.2020.9061241
19. Feng, H., Liu, C., Shu, Y., and Yang, O. W., 2015, "Location Prediction of Vehicles in VANETs Using a Kalman Filter," Wireless Pers. Commun., 80(2), pp. 543–559. 10.1007/s11277-014-2025-3
20. Rui, G., and Chitre, M., 2016, "Cooperative Multi-AUV Localization Using Distributed Extended Information Filter," IEEE/OES Autonomous Underwater Vehicles (AUV), Tokyo, Japan, Nov. 6–9, pp. 206–212. 10.1109/AUV.2016.7778673
21. Emokpae, L. E., DiBenedetto, S., Potteiger, B., and Younis, M., 2014, "UREAL: Underwater Reflection-Enabled Acoustic-Based Localization," IEEE Sens. J., 14(11), pp. 3915–3925. 10.1109/JSEN.2014.2357331
22. Akyildiz, I. F., Wang, P., and Sun, Z., 2015, "Realizing Underwater Communication Through Magnetic Induction," IEEE Commun. Mag., 53(11), pp. 42–48. 10.1109/MCOM.2015.7321970
23. Tan, X., 2011, "Autonomous Robotic Fish as Mobile Sensor Platforms: Challenges and Potential Solutions," Mar. Technol. Soc. J., 45(4), pp. 31–40. 10.4031/MTSJ.45.4.2
24. Tian, B., Zhang, F., and Tan, X., 2013, "Design and Development of an LED-Based Optical Communication System for Autonomous Underwater Robots," IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Wollongong, NSW, Australia, July 9–12, pp. 1558–1563. 10.1109/AIM.2013.6584317
25. Solanki, P. B., Al-Rubaiai, M., and Tan, X., 2016, "Extended Kalman Filter-Aided Alignment Control for Maintaining Line of Sight in Optical Communication," American Control Conference, Boston, MA, July 6–8, pp. 4520–4525. 10.1109/ACC.2016.7526064
26. Brundage, H., 2010, "Designing a Wireless Underwater Optical Communication System," Master's thesis, Massachusetts Institute of Technology, Boston, MA.
27. Doniec, M., 2013, "Autonomous Underwater Data Muling Using Wireless Optical Communication and Agile AUV Control," Ph.D. thesis, Massachusetts Institute of Technology, Cambridge, MA.
28. Anguita, D., Brizzolara, D., and Parodi, G., 2009, "Building an Underwater Wireless Sensor Network Based on Optical Communication: Research Challenges and Current Results," Third International Conference on Sensor Technologies and Applications, Athens, Greece, June 18–23, pp. 476–479. 10.1109/SENSORCOMM.2009.79
29. Anguita, D., Brizzolara, D., and Parodi, G., 2010, "Optical Wireless Communication for Underwater Wireless Sensor Networks: Hardware Modules and Circuits Design and Implementation," OCEANS MTS/IEEE SEATTLE, Seattle, WA, Sept. 20–23, pp. 1–8. 10.1109/OCEANS.2010.5664321
30. Rust, I. C., and Asada, H. H., 2012, "A Dual-Use Visible Light Approach to Integrated Communication and Localization of Underwater Robots With Application to Non-Destructive Nuclear Reactor Inspection," IEEE International Conference on Robotics and Automation (ICRA), Saint Paul, MN, May 14–18, pp. 2445–2450. 10.1109/ICRA.2012.6224718
31. Simpson, J. A., Hughes, B. L., and Muth, J. F., 2012, "Smart Transmitters and Receivers for Underwater Free-Space Optical Communication," IEEE J. Sel. Areas Commun., 30(5), pp. 964–974. 10.1109/JSAC.2012.120611
32. Al-Rubaiai, M., 2015, "Design and Development of an LED-Based Optical Communication System," Master's thesis, Michigan State University, East Lansing, MI.
33. Solanki, P. B., Al-Rubaiai, M., and Tan, X., 2018, "Extended Kalman Filter-Based Active Alignment Control for LED Optical Communication," IEEE/ASME Trans. Mechatronics, 23(4), pp. 1501–1511. 10.1109/TMECH.2018.2841643
34. Qiu, K., Zhang, F., and Liu, M., 2016, "Let the Light Guide Us: VLC-Based Localization," IEEE Rob. Autom. Mag., 23(4), pp. 174–183. 10.1109/MRA.2016.2591833
35. Liang, Q., Lin, J., and Liu, M., 2019, "Towards Robust Visible Light Positioning Under LED Shortage by Visual-Inertial Fusion," International Conference on Indoor Positioning and Indoor Navigation (IPIN), Pisa, Italy, Sept. 30–Oct. 3, pp. 1–8. 10.1109/IPIN.2019.8911760
36. Armstrong, J., Sekercioglu, Y. A., and Neild, A., 2013, "Visible Light Positioning: A Roadmap for International Standardization," IEEE Commun. Mag., 51(12), pp. 68–73. 10.1109/MCOM.2013.6685759
37. Li, L., Hu, P., Peng, C., Shen, G., and Zhao, F., 2014, "Epsilon: A Visible Light Based Positioning System," 11th USENIX Symposium on Networked Systems Design and Implementation (NSDI 14), Seattle, WA, pp. 331–334.
38. Nguyen, N. T., Nguyen, N. H., Nguyen, V. H., Sripimanwat, K., and Suebsomran, A., 2014, "Improvement of the VLC Localization Method Using the Extended Kalman Filter," TENCON 2014–2014 IEEE Region 10 Conference, Bangkok, Thailand, Oct. 22–25, pp. 1–6. 10.1109/TENCON.2014.7022416
39. Liang, Q., Sun, Y., Wang, L., and Liu, M., 2021, "A Novel Inertial-Aided Visible Light Positioning System Using Modulated LEDs and Unmodulated Lights as Landmarks," IEEE Trans. Autom. Sci. Eng., pp. 1–19. 10.1109/TASE.2021.3105700
40. Keskin, M. F., Sezer, A. D., and Gezici, S., 2018, "Localization Via Visible Light Systems," Proc. IEEE, 106(6), pp. 1063–1088. 10.1109/JPROC.2018.2823500
41. Bai, L., Yang, Y., Guo, C., Feng, C., and Xu, X., 2019, "Camera Assisted Received Signal Strength Ratio Algorithm for Indoor Visible Light Positioning," IEEE Commun. Lett., 23(11), pp. 2022–2025. 10.1109/LCOMM.2019.2935713
42. Giguere, P., Rekleitis, I., and Latulippe, M., 2012, "I See You, You See Me: Cooperative Localization Through Bearing-Only Mutually Observing Robots," IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, Oct. 7–12, pp. 863–869. 10.1109/IROS.2012.6385965
43. Suh, J., You, S., Choi, S., and Oh, S., 2016, "Vision-Based Coordinated Localization for Mobile Sensor Networks," IEEE Trans. Autom. Sci. Eng., 13(2), pp. 611–620. 10.1109/TASE.2014.2362933
44. Concha, A., Drews, P., Jr., Campos, M., and Civera, J., 2015, "Real-Time Localization and Dense Mapping in Underwater Environments From a Monocular Sequence," OCEANS 2015–Genova, Genova, Italy, May 18–21, pp. 1–5. 10.1109/OCEANSGenova.2015.7271476
45. Liu, J., Gong, S., Guan, W., Li, B., Li, H., and Liu, J., 2020, "Tracking and Localization Based on Multi-Angle Vision for Underwater Target," Electronics, 9(11), p. 1871. 10.3390/electronics9111871
46. Greenberg, J. N., and Tan, X., 2016, "Efficient Optical Localization for Mobile Robots Via Kalman Filtering-Based Location Prediction," ASME Paper No. DSCC2016-9917. 10.1115/DSCC2016-9917
47. Greenberg, J. N., and Tan, X., 2017, "Kalman Filtering-Aided Optical Localization of Mobile Robots: System Design and Experimental Validation," ASME Paper No. DSCC2017-5368. 10.1115/DSCC2017-5368
48. Greenberg, J. N., and Tan, X., 2020, "Dynamic Optical Localization of a Mobile Robot Using Kalman Filtering-Based Position Prediction," IEEE/ASME Trans. Mechatronics, 25(5), pp. 2483–2492. 10.1109/TMECH.2020.2980434
49. Greenberg, J. N., and Tan, X., 2021, "Sensitivity-Based Data Fusion for Optical Localization of a Mobile Robot," Mechatronics, 73, p. 102488. 10.1016/j.mechatronics.2021.102488
50. Solanki, P. B., Bopardikar, S. D., and Tan, X., 2020, "Active Alignment Control-Based LED Communication for Underwater Robots," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, Oct. 24–Jan. 24, pp. 1692–1698. 10.1109/IROS45743.2020.9341442
51. Greenberg, J. N., and Tan, X., 2020, "Dynamic Prediction-Based Optical Localization of a Robot During Continuous Movement," ASME Paper No. DSCC2020-3288. 10.1115/DSCC2020-3288
52. Kalman, R. E., 1960, "A New Approach to Linear Filtering and Prediction Problems," ASME J. Basic Eng., 82(1), pp. 35–45. 10.1115/1.3662552