## Abstract

Safe trajectory planning for high-performance automated vehicles in an environment with both static and moving obstacles is a challenging problem. Part of the challenge is developing a formulation that can be solved in real-time while including the following set of specifications: minimum time-to-goal, a dynamic vehicle model, minimum control effort, both static and moving obstacle avoidance, simultaneous optimization of speed and steering, and a short execution horizon. This paper presents a nonlinear model predictive control-based trajectory planning formulation, tailored for a large, high-speed unmanned ground vehicle, that includes the above set of specifications. The ability to solve this formulation in real-time is evaluated using NLOptControl, an open-source, direct-collocation-based optimal control problem solver in conjunction with the KNITRO nonlinear programming problem solver. The formulation is tested with various sets of specifications. A parametric study relating execution horizon and obstacle speed indicates that the moving obstacle avoidance specification is not needed for safety when the planner has a small execution horizon and the obstacles are moving slowly. However, a moving obstacle avoidance specification is needed when the obstacles are moving faster, and this specification improves the overall safety without, in most cases, increasing the solve-times. The results indicate that (i) safe trajectory planners for high-performance automated vehicles should include the entire set of specifications mentioned above, unless a static or low-speed environment permits a less comprehensive planner and (ii) the resulting formulation can be solved in real-time.

## 1 Introduction

In many high-performance automated vehicle applications for unmanned air vehicles (UAVs), unmanned ground vehicles (UGVs), and spacecraft, it is both desirable and challenging to plan safe trajectories in a dynamic environment. Part of this challenge is incorporating the set of specifications listed in Table 1 into a real-time planner, where real-time planning demands that the planner’s solve-times are all less than the execution horizon. While trajectory planning systems that include subsets of the specifications listed in Table 1 exist, a planner that consists of all of them has not yet been developed as reviewed in detail next. This paper addresses this gap.

| Specification | Description |
|---|---|
| *S*_{1} | Static obstacle avoidance |
| *S*_{2} | Minimum time-to-goal |
| *S*_{3} | Dynamic vehicle model |
| *S*_{4} | Minimum control effort |
| *S*_{5} | Simultaneously optimize speed and steering |
| *S*_{6} | Moving obstacle avoidance |
| *S*_{7} | Small execution horizon |


### 1.1 Relation to State-of-the-Art.

Figure 1 shows a conceptual scheme for comparing and developing trajectory planners. This scheme illustrates the conceptual performance and safety of a vehicle controlled using trajectory planners with different sets of specifications, operating either in a static environment with a stationary obstacle (top four traces) or a dynamic environment with a moving obstacle (bottom two traces). In all cases, the planning and execution horizons are the same.

Static obstacle avoidance (*S*_{1}; Table 1) is a baseline specification in many trajectory planning systems but is not, by itself, sufficient for either performance or safety. Regarding safety, if the trajectory planner does not use a dynamic vehicle model (*S*_{3}; Table 1), a trajectory that the vehicle cannot follow may be determined [1]. Such a trajectory may result in either a collision [1,2] (case A; Fig. 1) or some other catastrophic event, such as rollover in the case of a ground vehicle [3]. Despite this, some planners designed to avoid static obstacles for UAV applications [4] utilize a kinematic vehicle model (case A; Fig. 1). By utilizing a dynamic vehicle model in trajectory planning, the actual vehicle can follow the prescribed trajectory more accurately. Planners designed to avoid static obstacles with a dynamic vehicle model (case B; Fig. 1) exist for UGV applications [5]. However, the planner in case B does not have a minimum time-to-goal specification (*S*_{2}; Table 1), which may result in mission failure in certain applications. For instance, in racing applications [6,7], planning without this specification will likely result in a lost race. In these applications, the planner should include at least *S*_{1}–*S*_{3} (case C; Fig. 1), such that it can arrive at the goal in less time than a planner with only the static obstacle avoidance and dynamic vehicle model specifications. If minimizing fuel consumption and mechanical wear are additional concerns, then the minimum control effort specification (*S*_{4}; Table 1) needs to be included in the planner as well. Planners with *S*_{1}–*S*_{4} exist in applications for UGVs [8] and UAVs [9–11]. A limitation of these planners is that they do not optimize both speed and steering (*S*_{5}; Table 1). Optimizing both allows the vehicle to both slow down more quickly and turn more tightly (shown in case D; Fig. 1), thereby improving both performance and safety [12].

In a dynamic environment, while the trajectory planning specifications *S*_{1}–*S*_{5} are necessary for both performance and safety, they are not sufficient (see case E; Fig. 1). To improve collision avoidance (i.e., safety) in a dynamic environment, there are three possibilities: including a moving obstacle avoidance specification (*S*_{6}; Table 1), including a small execution horizon specification (*S*_{7}; Table 1), or including both.

A moving obstacle avoidance specification accounts for the motion of the obstacle over the planning horizon, which increases safety (see case F; Fig. 1). This specification has been implemented for applications in UGVs [13], UAVs [14], and spacecraft [15]. These developments, however, have a limitation: they use a kinematic vehicle model as opposed to a dynamic vehicle model; case A, Fig. 1 depicts the potential outcome of using a kinematic vehicle model. Our preliminary work [16] developed a planner with *S*_{1}–*S*_{6} for a UGV application. This work, however, has several limitations, three of which are (i) it does not investigate closed-loop performance and safety, (ii) it assumes that the goal is within the LiDAR’s sensing range, and (iii) the planner’s solve-times are, at best, nearly two orders of magnitude above real-time (assuming an execution horizon of 0.5 s). This paper addresses these three limitations as one of its original contributions.

A small execution horizon specification engenders a more reactive planner with better obstacle avoidance capabilities. An execution horizon is described as “small” when reducing it does not improve safety within a given environment. For instance, to avoid the collision in case E (Fig. 1), a smaller execution horizon can be used. Previous research [12] includes a small execution horizon as well as *S*_{1}–*S*_{5}. While there is a reason to expect that such a planner may operate safely around slowly moving obstacles, this hypothesis has not yet been tested. Therefore, this paper also investigates, for the first time, whether a system with *S*_{1}–*S*_{5} and a small execution horizon can operate safely in a dynamic environment for a range of obstacle speeds.

This work uses a nonlinear model predictive control (NMPC) based trajectory planner; this approach is also used in Refs. [4,5,9–15,17]. Unfortunately, it is very difficult to solve the proposed planning formulation in real-time with a short execution horizon. For instance, the literature shows that GPOCS, GPOPS-II, and our custom software [12], all written in MATLAB, are not fast enough for NMPC applications in aircraft [18], robot [19], and UGV [12,16] systems, respectively. Therefore, in this work, NLOptControl [20] is used instead to evaluate whether the proposed formulation can be solved in real-time with a short execution horizon.

### 1.2 Summary of Original Contributions.

The original contributions of this paper are summarized as follows:

Introduction of an NMPC-based trajectory planner with

*S*_{1}–*S*_{7}, tailored for a UGV application.

Investigation of the effect that different sets of specifications have on safety, performance, and solve-time.

Investigation of the need to include a moving obstacle avoidance specification for a range of execution horizons and obstacle speeds.

Investigation of the ability to solve the proposed formulation in real-time with a short execution horizon.

As an example application, the trajectory planning formulation developed in this work is tailored for a High Mobility Multipurpose Wheeled Vehicle (HMMWV) but can be adapted to other vehicles as well. Together, this planner and vehicle are referred to as a UGV.

### 1.3 Paper Structure.

The remainder of this paper is organized as follows. Section 2 describes the NMPC framework developed to consider non-negligible trajectory planning problem solve-times and the underlying optimal control problem (OCP) formulation developed to include *S*_{1}–*S*_{6}. Section 3 describes the test conditions under which the proposed planner is evaluated. In Sec. 4, the effect that adding different specifications to trajectory planners has on safety, performance, and solve-time is tested in a variety of environments. The results of these tests are discussed in Sec. 5. Section 6 summarizes the paper and draws conclusions.

## 2 Mathematical Formulation

The mathematical formulation presented below is based on the following assumptions:

Both the goal and obstacle information are known.

The vehicle state is known.

The terrain is flat.

### 2.1 NMPC Framework.

At the heart of an NMPC formulation lies an OCP. In NMPC simulation studies, OCP solve-times are often neglected [12,21]. In such a case, first the plant simulation is paused, and the OCP is initialized at the current time *t*_{0} with the current plant state **X0**. Next, the OCP is solved to produce an optimal control trajectory **ζ***. With this control trajectory, the plant is then simulated starting at *t*_{0} with **X0** until *t*_{0} + *t*_{ex}; this yields a new initial state, which is then used to initialize the next OCP. However, most practical OCPs take a non-negligible amount of time to solve, after which, in a more realistic simulation, the plant will have evolved from its current state, where the OCP was initialized, to a new state [21]. This computational delay renders the control signal sub-optimal and potentially infeasible or unsafe. To achieve optimal safety and performance, non-negligible OCP solve-times must be taken into account. The NMPC framework used in this work, shown in Fig. 2, accounts for these non-negligible OCP solve-times.
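The delay-compensation scheme described above can be sketched as a receding-horizon loop. This is a minimal illustration, not the NLOptControl API: `solve_ocp`, `simulate`, `predict`, and `goal_reached` are hypothetical helper functions standing in for the OCP solver, plant model, state prediction function, and goal check.

```python
def nmpc_loop(x0, u0, t_ex, solve_ocp, simulate, predict, goal_reached,
              max_cycles=100):
    """Receding-horizon loop that accounts for non-negligible OCP
    solve-times: each OCP is initialized at the state *predicted* one
    execution horizon ahead, not at the current plant state."""
    t0, x, control = 0.0, x0, u0          # first horizon uses the known signal U0
    for _ in range(max_cycles):
        x0_p = predict(x, control, t_ex)  # state prediction X0_p for t0 + t_ex
        zeta_star = solve_ocp(x0_p, t0 + t_ex)  # solved while the plant evolves
        x = simulate(x, control, t_ex)    # plant advances over [t0, t0 + t_ex]
        t0 += t_ex
        control = zeta_star               # hand over the optimized trajectory
        if goal_reached(x):
            break
    return x
```

For real-time feasibility, `solve_ocp` must return within `t_ex`; otherwise the vehicle is left without an up-to-date trajectory to follow.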

This framework has three main components: the OCP, the plant model, and the state prediction function. The OCP is provided with goal and environment information defined as follows:

*Goal information* $G$ includes the goal position (*x*_{g}, *y*_{g}), the desired vehicle orientation at the goal *ψ*_{g}, and the radial tolerance for attaining the goal *σ*.

*Environment information* $E$ includes the sizes, initial positions, and velocities of the obstacles.

The obstacles are assumed to be enclosed in ellipses, where **a**_{obs} and **b**_{obs} are arrays that describe the ellipses’ semi-major and semi-minor axes, respectively; **x0**_{obs} and **y0**_{obs} are arrays of the obstacles’ initial *x* and *y* positions, respectively; and **v**_{x} and **v**_{y} are arrays of the obstacles’ speeds in the *x* and *y* directions, respectively.

During the first execution horizon, the OCP has not produced a control signal for the vehicle to follow. Therefore, the vehicle is sent a known control signal **U0** (see Fig. 2), set such that the vehicle will drive straight at a constant speed. To account for the evolution of the plant state during the execution horizon, a state prediction **X0**_{p}, which is made for *t*_{0} + *t*_{ex}, is used to initialize the OCP. The inputs of the state prediction function are the current state of the vehicle and the current control signal, which is **U0** during the first execution horizon and **ζ*** afterward. Then, the plant model is simulated from the initial time *t*_{0} to *t*_{0} + *t*_{ex} and the first OCP is solved. Real-time feasibility of this framework requires that the OCP solve-times all be less than the execution horizon.

Model mismatch between the plant model, the vehicle dynamics model in the OCP, and the state prediction function can induce biases. Addressing these biases is acknowledged as an important research problem but is not one of the goals of this paper. Therefore, to avoid these biases while focusing on our research tasks, in this paper, the plant model, the vehicle model in the OCP, and the state prediction function all use the same set of differential equations, which is presented in detail later in this section.

This framework runs until the UGV either reaches the goal or fails the test. An algorithm is run after each execution horizon to determine if the vehicle has reached the goal within the radial goal tolerance *σ*. The test fails if

the vehicle crashes into an obstacle,

the vertical tire load in the plant model goes below 100 N for any of the four tires,

any of the solve-times exceeds 300 s, or

the solution to the nonlinear programming problem (NLP) is not considered to be optimal based on the tolerances and the Karush–Kuhn–Tucker conditions.

### 2.2 Optimal Control Problem.

*S*_{1}–*S*_{6} are incorporated into the OCP. At a high level, these specifications all enter the single-phase, continuous-time OCP defined in (1)–(4), where *t*_{f} is the free final time, *t* ∈ [*t*_{0} + *t*_{ex}, *t*_{f}] is the time, $\xi(t) \in \mathbb{R}^{n_{st}}$ is the state, and $\zeta(t) \in \mathbb{R}^{n_{ctr}}$ is the control, with *n*_{st} defined as the number of states and *n*_{ctr} defined as the number of controls. The Mayer term is $\mathcal{M}: \mathbb{R}^{n_{st}} \times \mathbb{R} \times \mathbb{R}^{n_{st}} \times \mathbb{R} \to \mathbb{R}$ and the Lagrange term is $\mathcal{L}: \mathbb{R}^{n_{st}} \times \mathbb{R}^{n_{ctr}} \times \mathbb{R} \to \mathbb{R}$. The dynamic constraints are given by $f: \mathbb{R}^{n_{st}} \times \mathbb{R}^{n_{ctr}} \to \mathbb{R}^{n_{st}}$. The path constraints are captured by $C: \mathbb{R}^{n_{st}} \times \mathbb{R}^{n_{ctr}} \times \mathbb{R}^{a} \times \mathbb{R} \to \mathbb{R}^{p}$, which bound (i) the state and control based on the vehicle’s dynamic limits and any additional information, denoted as $A(t) \in \mathbb{R}^{a}$, and (ii) *t*_{f} based on a maximum final time $t_{f_{max}}$. Finally, the event constraints are expressed with $\varphi: \mathbb{R}^{n_{st}} \times \mathbb{R} \times \mathbb{R}^{n_{st}} \times \mathbb{R} \to \mathbb{R}^{q}$.

#### 2.2.1 Cost Functional.

In (5), *w*_{t}, *w*_{g}, *w*_{cf}, $w_{\delta_f}$, *w*_{γ}, *w*_{J}, $w_{F_z}$, and *w*_{haf} are weight terms; *x*(*t*) and *y*(*t*) represent the vehicle’s global position coordinates on a flat plane; *ϵ* is a small number set to 0.01 to avoid singularities; *δ*_{f}(*t*) is the steering angle at the front of the vehicle; *γ*(*t*) is the steering rate; *J*_{x}(*t*) is the longitudinal jerk; *a* and *b* are parameters to prevent $F_{z_{rl}}$ and $F_{z_{rr}}$ from being close to the minimum vertical tire load limit, denoted as $F_{z_{min}}$, as described in [12]; and *ϱ* is a term for penalizing the slack variables on the initial and terminal conditions.

There are six terms in (5). The first term minimizes the final time *t*_{f} to establish a minimum-time-to-goal specification. The second term establishes soft constraints on the initial and terminal conditions. This term is described in greater detail later in this section. The third term helps the vehicle reach the goal when the goal is not within the LiDAR range, denoted as *L*_{range}. If the goal is within a distance of *L*_{range}, then *w*_{g} is set to zero, and the vehicle is constrained to reach the goal. This constraint is described in greater detail later in this section. The fourth term minimizes the control effort, which encourages smooth control signals. The fifth term dissuades the controller from generating solutions near the minimum vertical tire load limit. This is done to prevent vehicle rollover and infeasible initializations in the next OCP. Finally, to help the vehicle pass the goal location through the desired direction *ψ*_{g}, the sixth term is added, which minimizes the area between a line in the (*x*, *y*) plane going through the goal in the desired direction *ψ*_{g} and the vehicle’s trajectory in the (*x*, *y*) plane [12].
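Assembled from the six term descriptions above, the cost functional has the schematic form below. This is a hedged sketch, not the verbatim Eq. (5): the tire-load penalty $\Phi$ and the heading-area term $A_{haf}$ are abbreviated placeholders for the expressions described in the text, and the placement of $\epsilon$ is an assumption.

$$
\begin{aligned}
\mathcal{M} + \int_{t_0+t_{ex}}^{t_f} \mathcal{L}\,\mathrm{d}t \approx{}& \underbrace{w_t\, t_f}_{\text{1: min. time}} + \underbrace{\varrho}_{\text{2: slack penalty}} + \underbrace{w_g\,\frac{(x(t_f)-x_g)^2 + (y(t_f)-y_g)^2}{(x(t_0{+}t_{ex})-x_g)^2 + (y(t_0{+}t_{ex})-y_g)^2 + \epsilon}}_{\text{3: goal attraction}} \\
&+ \int_{t_0+t_{ex}}^{t_f} \Big[\underbrace{w_{cf}\big(w_{\delta_f}\,\delta_f^2 + w_{\gamma}\,\gamma^2 + w_{J}\,J_x^2\big)}_{\text{4: control effort}} + \underbrace{w_{F_z}\,\Phi\big(F_{z_{rl}},F_{z_{rr}};a,b\big)}_{\text{5: tire load}}\Big]\,\mathrm{d}t + \underbrace{w_{haf}\,A_{haf}}_{\text{6: heading}}
\end{aligned}
$$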

#### 2.2.2 Dynamic Vehicle Model.

The states of the dynamic vehicle model include the lateral speed *V*(*t*), the yaw rate *ω*_{z}(*t*), the heading angle Ψ(*t*), the steering angle *δ*_{f}(*t*), the longitudinal speed *U*(*t*), and the longitudinal acceleration *a*_{x}(*t*). The second term, $B\zeta(t)$, relates state variable rates to their respective control variables, i.e., (d*δ*_{f}/d*t*)(*t*) to the steering rate *γ*(*t*), and (d*a*_{x}/d*t*)(*t*) to the longitudinal jerk *J*_{x}(*t*). Finally, *L*_{f} and *L*_{r} are the distances from the front and rear axles to the center-of-mass (COM), *I*_{zz} is the moment of inertia about the COM, *F*_{yf} and *F*_{yr} are the front and rear lateral tire forces, and *M*_{t} is the total vehicle mass. Table 3, which is in the Appendix, contains all of the vehicle parameters used in this paper.

#### 2.2.3 State and Control Limits.

No explicit lateral speed or yaw rate constraints exist.

#### 2.2.4 Obstacle Avoidance.

Two possible approaches for incorporating the static and moving obstacle avoidance specifications into the OCP include soft constraints (or artificial potential fields) and time-varying hard constraints [23]. There are two limitations to the soft constraints approach: (i) a trajectory may be generated that is deemed feasible according to the formulation but actually goes through an obstacle and (ii) the NLP solve-times are known to be large when compared with the time-varying hard constraints approach [16]. Therefore, time-varying hard constraints for static and moving obstacle avoidance are incorporated into the path constraints in (3).

In these constraints, *sm*(*t*) = *sm*_{1} + (*sm*_{2} − *sm*_{1})*t*/*t*_{f} describes the time-varying safety margin, which forces the vehicle to operate further from the obstacles as *t* increases. While this safety margin reduces the feasible space in which the vehicle can travel, it helps keep the vehicle further away from distant obstacles.

*Q* is the total number of obstacles. The notation **x0**_{obs}[*i*] refers to the *i*th element of the **x0**_{obs} vector.
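Under this time-varying hard-constraint approach, each collision-avoidance inequality keeps the vehicle outside an inflated, moving ellipse. A sketch of the constraint test follows; the `(a, b, x0, y0, vx, vy)` tuple layout per obstacle is illustrative, not the paper's notation.

```python
def obstacle_clear(x, y, t, t_f, obs, sm1, sm2):
    """Return True when the point (x, y) at time t lies strictly outside
    every obstacle ellipse, grown by the time-varying safety margin
    sm(t) = sm1 + (sm2 - sm1) * t / t_f."""
    sm = sm1 + (sm2 - sm1) * t / t_f
    for a, b, x0, y0, vx, vy in obs:
        # obstacle center propagated to time t (moving obstacle avoidance)
        cx, cy = x0 + vx * t, y0 + vy * t
        if ((x - cx) / (a + sm))**2 + ((y - cy) / (b + sm))**2 <= 1.0:
            return False  # inside (or on) an inflated ellipse
    return True
```

Setting `vx = vy = 0` for every obstacle recovers the static-obstacle version of the constraint.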

#### 2.2.5 LiDAR Region Constraints.

#### 2.2.6 Initial and Terminal State Constraints.

**x**_{0_{s}} is the *n*_{st}-dimensional vector of slack variables for the initial conditions, and **x**_{f_{s}} is the two-dimensional vector of slack variables for the terminal conditions. Both the initial and terminal slack variable vectors are constrained to be greater than zero. **w**_{s0} is a 1 × *n*_{st} dimensional vector of individual weight terms on the slack variables for the initial state constraints, and **w**_{sf} is a 1 × 2 dimensional vector of individual weight terms on the slack variables for the final state constraints. Hard constraints ensure that the initial state matches the predicted state **X0**_{p} within a specified tolerance **X0**_{tol}. Equation (16) establishes these constraints.

If the distance from the vehicle to the goal is greater than the vehicle’s planning range *L*_{range}, then both the soft constraints (9)–(14) and the hard constraints (17) and (18) on the final conditions are relaxed. Setting the elements in **w**_{sf} to zero relaxes the terminal soft constraints, and setting *σ* to 10^{6} m relaxes the terminal hard constraints.

The remaining parameter modifications and additional constraints needed to relax the assumption that the goal is within the vehicle’s planning range [16] are now presented.

#### 2.2.7 LiDAR Range Constraints.

If the goal is not within a distance *L*_{range} of the vehicle, then the vehicle is constrained to arrive at the edge of the LiDAR region within a distance of *κ* at *t*_{f}. This is accomplished using (19) and (20), where *p*1 and *p*2 are set to (*L*_{range} + *κ*)^{2} and (*L*_{range} − *κ*)^{2}, respectively.

To avoid creating an infeasible problem while continuing to drive the UGV towards the goal, the goal constraints described in the previous section (i.e., (9)–(18)) are relaxed and a new soft constraint is used. This soft constraint minimizes the squared distance from the vehicle to the goal at *t*_{f}, normalized by squared distance from the vehicle to the goal at *t*_{0} + *t*_{ex} [12]. Setting the goal weight *w*_{g} in (5) to a non-zero value enforces this constraint.
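A sketch of this normalized soft goal term; the function and argument names are illustrative, not from the paper.

```python
def goal_soft_cost(xf, yf, x0, y0, goal, w_g):
    """Soft goal-attraction term: squared distance to the goal at t_f,
    normalized by the squared distance at t0 + t_ex [12]."""
    gx, gy = goal
    d_f = (xf - gx)**2 + (yf - gy)**2   # distance^2 at the end of the plan
    d_0 = (x0 - gx)**2 + (y0 - gy)**2   # distance^2 at the start of the plan
    return w_g * d_f / d_0
```

The normalization makes the term dimensionless and roughly 1 at the start of the plan, so a plan that moves toward the goal drives it below 1 regardless of how far away the goal is.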

In the case that the goal is within a distance of *L*_{range} to the vehicle, *w*_{g} is set to zero and the vehicle is constrained to reach the goal using (9)–(18). This is done by setting *σ* to a much smaller goal tolerance, which enforces the hard constraints on reaching the goal, and setting the elements in the weight vector **w**_{sf} to large positive weights, which establishes soft constraints on reaching the goal through slack variables. Then, to avoid creating an infeasible problem, the hard constraints for reaching the edge of the LiDAR region at *t*_{f} are relaxed. To do this, *p*1 and *p*2 are set to 10^{−6} and −10^{−6}, respectively.

The above specifies the details of the NMPC-based trajectory planning formulation with specifications *S*_{1}–*S*_{6}. The evaluation of this formulation as a function of its specifications follows.

## 3 Description of Evaluation

The next section presents comparisons among four planners within three different test environments and evaluates the proposed planner’s ability to improve both safety and performance without increasing solve-times. This section describes these planners and their test environments. Afterwards, the computer hardware platform used to produce the results presented in this paper and the software configuration under which NLOptControl is utilized are described.

### 3.1 Planners.

Comparisons are made among four planners denoted as *P*_{A}–*P*_{D}. The specifications of these planners are listed in Table 2, where *P*_{A} is used as the baseline planner. Note that *P*_{A} already includes the specifications of a dynamic vehicle model and simultaneous optimization of speed and steering, since previous work already illustrated the need to include them; see Refs. [1,8,9] for the former and Ref. [12] for the latter.

| Features | *P*_{A} | *P*_{B} | *P*_{C} | *P*_{D} |
|---|---|---|---|---|
| Static obstacle avoidance | ✓ | ✓ | ✓ | ✓ |
| Minimum time-to-goal | | ✓ | ✓ | ✓ |
| Dynamic vehicle model | ✓ | ✓ | ✓ | ✓ |
| Minimum control effort | | | ✓ | ✓ |
| Simultaneously optimize speed and steering | ✓ | ✓ | ✓ | ✓ |
| Moving obstacle avoidance | | | | ✓ |
| Small execution horizon | ✓ | ✓ | ✓ | ✓ |


Table 4 lists *P*_{A}’s parameters. All of the weight terms used in this work are obtained either from previous research [12] or manual tuning. In addition, setting the moving obstacle avoidance constraint in (7) to *false* means that (7) is modified to (21), where the obstacle position arrays in (21) hold the current *x* and *y* positions of the obstacles. These arrays are updated to reflect the obstacles’ current positions after each execution horizon.

Next, the values of *P*_{A}’s parameters are modified to define *P*_{B}–*P*_{D}. The difference between *P*_{A} and *P*_{B} is that the weight on the final time *w*_{t} is set to 100 for *P*_{B}. To allow *P*_{A} to reach the goal or the edge of the LiDAR region at will, the final time is left as a design variable, but the weight on it (i.e., *w*_{t}) is set to zero. Next, the difference between *P*_{B} and *P*_{C} is that *w*_{ce} is set to 1 for the latter. Finally, the difference between *P*_{C} and *P*_{D} is that *P*_{D} has a moving obstacle avoidance specification, while *P*_{C} does not. *P*_{D} establishes this specification with (7). *P*_{C} assumes that the obstacles will be static over each prediction horizon, while *P*_{D} incorporates the movement of the obstacles into the position constraints over the prediction horizon.

Closed-loop comparisons are made among four different vehicles denoted as *V*_{A}–*V*_{D}. *P*_{A}, *P*_{B}, *P*_{C}, and *P*_{D} control *V*_{A}, *V*_{B}, *V*_{C}, and *V*_{D}, respectively. Unless otherwise noted, the execution horizon is set to a value of 0.5 s for all comparisons.
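The planner definitions above amount to incremental parameter changes from the baseline. A minimal sketch of these configurations, with the dictionary layout and field names chosen for illustration (the weight values for *w*_{t} and *w*_{ce} follow Sec. 3.1; the baseline zeros are implied by the text):

```python
# P_A: baseline — final time free but unweighted, no control-effort
# weight, obstacles treated as static over the prediction horizon.
base = dict(w_t=0.0, w_ce=0.0, moving_obstacles=False)        # P_A
P_B = {**base, "w_t": 100.0}            # adds minimum time-to-goal
P_C = {**P_B, "w_ce": 1.0}              # adds minimum control effort
P_D = {**P_C, "moving_obstacles": True} # adds moving obstacle avoidance
```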

### 3.2 Environment Categories.

To evaluate the proposed planner, this work uses four distinct environmental categories: unknown versus known, unstructured versus structured, dynamic versus static, and challenging versus simple. A description of each follows.

#### 3.2.1 Unknown Versus Known.

In an unknown environment [12], sensors collect data from the environment for algorithms that estimate factors including the obstacles’ sizes, positions, and velocities. In addition to assuming that the obstacle information is known, this paper assumes that the environment is known.

#### 3.2.2 Unstructured Versus Structured.

In an unstructured environment [12,26–28], there are no roads to follow or traffic rules to obey. In a structured environment [29,30], some combination of these factors needs to be considered. In this paper, the distinction between these environment categories is that in an unstructured environment, there are no lanes to follow, while in a structured environment there is a lane to follow.

#### 3.2.3 Dynamic Versus Static.

In a dynamic environment [31], at least one obstacle is moving. In a static environment, all of the obstacles are stationary. This paper uses both dynamic and static environments.

#### 3.2.4 Challenging Versus Simple.

In planning problems, the number of obstacles directly affects the computational load [31]. As such, the environment becomes more challenging as the number of obstacles increases. This paper uses an environment with 38 obstacles as a challenging example and an environment with three obstacles as a simple one.

### 3.3 Environments.

UGV safety, performance, and solve-times are evaluated in three different environments denoted as *E*_{A}–*E*_{C}. Each of these environments consists of some combination of the above environment categories, which is now described in detail.

#### 3.3.1 E_{A}: Simple, Static, Unstructured Environment.

Both the increase in performance and solve-times, consequent to including the minimum time-to-goal and minimum control effort specifications, are evaluated in a simple, static, unstructured environment denoted as *E*_{A}. *E*_{A} has three static obstacles denoted as O_{1}, O_{2}, and O_{3}, and Table 5, which is in the Appendix, lists *E*_{A}’s parameters. The right trace of Fig. 4 shows the obstacle field and goal location of *E*_{A}.

#### 3.3.2 E_{B}: Simple, Dynamic, Unstructured Environment.

Increases in safety and solve-times from including a moving obstacle avoidance specification are evaluated in a simple, dynamic, unstructured environment denoted as *E*_{B}. *E*_{B} has three dynamic obstacles, also denoted as O_{1}, O_{2}, and O_{3}. *E*_{B} is the same environment as *E*_{A}, except that the obstacles are given non-zero velocities to test the planner’s ability to avoid collisions with moving obstacles. The respective velocities of O_{1}, O_{2}, and O_{3} are as follows:

Figure 5 shows the movement of these obstacles, which can be seen in the right trace by following the obstacles’ position at the indicated times.

#### 3.3.3 E_{C}: Challenging, Dynamic, Structured Environment.

Increases in safety and solve-times from including a moving obstacle avoidance specification are further evaluated within a challenging, dynamic, structured environment denoted as *E*_{C}. *E*_{C} is a double lane change scenario, which was originally developed to test a HMMWV within a static environment [32]. Figure 6 shows *E*_{C}, which has two large obstacles that need to be avoided, labeled O_{1} and O_{2}. In this test, the vehicle is started at the bottom of Fig. 6 traveling in the left lane at a speed of 17 m/s. From this point, it is restricted to perform a double lane change maneuver. To constrain the vehicle to perform this maneuver, first, minimum and maximum constraints on the vehicle’s *x* position are imposed. This restricted region is shaded, where the lower and upper limits on the vehicle’s *x* position are *x*_{min} = 0 m and *x*_{max} = 24 m, respectively. Next, to ensure that the vehicle stays in the left lane during the first part of the maneuver, at the start of the track, until after *y* = 175 m, a series of 36 cones are placed at the edge of the lane boundary. If these cones are not present or there are not enough cones, then the vehicle will change lanes earlier to minimize the sixth term in (5). The first large obstacle, O_{1}, is static and is located in the left lane. The second large obstacle, O_{2}, starts at the back of the track in the right lane near the goal and moves towards the front of the track. Table 6 lists the parameters for *E*_{C}.

To improve safety and performance within *E*_{C}, several planner parameters, which are listed in Table 4, are modified. Specifically, for all of the *E*_{C} simulations shown in this paper, the *L*_{range}, *N*, and *κ* planner parameters in Table 4 are modified. *L*_{range} is modified because, with an *L*_{range} of 50 m, the vehicle crashes into O_{2} over a large range of O_{2} obstacle speeds. To avoid limiting the UGV based on its sensing range, rather than its dynamic limits, *L*_{range} is increased to 90 m for *E*_{C}. To accommodate this extended planning range, the number of points in the discretization *N* is increased from 10 to 15 and the LiDAR relaxation range *κ* is increased from 5 m to 10 m.

### 3.4 Hardware Platform and Software Stack.

The results in this paper are produced using a single machine running Ubuntu 16.04 with an Intel Core i7-4910MQ CPU @ 2.90 GHz × 8 and 31.3 GB of RAM. The OCPs are solved using NLOptControl, an open-source, direct-collocation-based OCP solver. In particular, NLOptControl 0.1.6 [20] is used with the *KNITRO* 10.3 NLP solver, where the default *KNITRO* settings are used, except for the maximum CPU-time (i.e., solve-time), which is set to 300 s. The trapezoidal method [33,34] is used to approximate both the cost functional (1) and the dynamics (2). To more closely simulate practice, where time can typically be allocated to initialize trajectory planners, the optimizations are warm-started.
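For reference, the trapezoidal method turns the dynamics (2) into per-interval defect constraints that the NLP drives to zero. A minimal sketch, assuming arrays with one row per collocation node and one column per state (this is not NLOptControl code):

```python
import numpy as np

def trapezoidal_defects(x, f, t):
    """Trapezoidal-collocation defects: for a feasible NLP,
    x[k+1] - x[k] - (h_k / 2) * (f[k] + f[k+1]) must be zero, where
    h_k = t[k+1] - t[k] and f holds the dynamics at each node.
    x and f are (N, n_st) arrays; t is the (N,) vector of node times."""
    x = np.asarray(x, float)
    f = np.asarray(f, float)
    h = np.diff(np.asarray(t, float))[:, None]  # interval lengths, column
    return x[1:] - x[:-1] - 0.5 * h * (f[1:] + f[:-1])
```

The Lagrange cost (1) is approximated with the same trapezoidal weights applied to the integrand samples.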

## 4 Results

### 4.1 Performance and Solve-Times Within *E*_{A}.

Planning with a minimum time-to-goal specification can reduce the time-to-goal without increasing the solve-times in a simple, static, unstructured environment. In particular, using either *P*_{B} or *P*_{C} in lieu of *P*_{A} within *E*_{A} reduces the time-to-goal from 9.0 s to 7.0 s (see Fig. 4). This is because, until about *t* = 5.5 s, both *V*_{B} and *V*_{C} accelerate while *V*_{A} decelerates; this results in higher speeds for both *V*_{B} and *V*_{C}. Next, while both *P*_{B} and *P*_{C} run in real-time in *E*_{A}, *P*_{A} does not. This can be seen in the top left trace of Fig. 4, where the solve-times for both *P*_{B} and *P*_{C} are all less than *t*_{ex}, but several of the solve-times obtained using *P*_{A} go above *t*_{ex}. This creates a safety issue because, in practice, if the solve-time is greater than *t*_{ex}, then the vehicle will not have an up-to-date trajectory to follow.

In *E*_{A}, compared to *P*_{B}, *P*_{C} reduces the control effort without increasing either the time-to-goal or the solve-times. More specifically, even though *V*_{B} and *V*_{C} both arrive at the goal in 7.0 s, *V*_{C} uses less control effort for all three control effort terms. The fourth term in (5) comprises the control effort terms for the steering angle, steering rate, and longitudinal jerk. The overall values of each of these control effort terms, along with their percentage decrease, are in Table 7, which is in the Appendix. Next, in the top left trace in Fig. 4, it can be seen that the solve-times for *P*_{B} and *P*_{C} are below the real-time threshold of 0.5 s.

### 4.2 Safety and Solve-Times Within *E*_{B}.

Planning with a moving obstacle avoidance specification can increase safety without increasing the solve-times in a simple, dynamic, unstructured environment. This is shown in the comparison between *P*_{C} and *P*_{D} within *E*_{B} (see Fig. 5). At the start of this test, both vehicles accelerate and then turn in opposite directions: *V*_{C} to the left and *V*_{D} to the right. *P*_{C} tries to avoid O_{1} to the left, which results in a crash at *t* = 3.5 s. On the other hand, by taking the obstacles’ motion over the prediction horizon into account, *P*_{D} turns *V*_{D} to the right. This allows *V*_{D} to arrive safely at the goal at *t* = 6.5 s. Lastly, as seen in the top left trace in Fig. 5, the solve-times for both *P*_{C} and *P*_{D} are below the real-time threshold of 0.5 s.

### 4.3 Safety and Solve-Times Within *E*_{C}.

Similarly, planning with a moving obstacle avoidance specification can increase safety without significantly increasing the solve-times in a challenging, dynamic, structured environment. This is demonstrated by testing *P*_{C} and *P*_{D} within *E*_{C} (see Figs. 6–8).

At the start of the test, the first lane change maneuver is performed successfully by both *V*_{C} and *V*_{D}. During this time, both vehicles accelerate aggressively to increase their speed from 17 m/s at *t* = 0 s to 26.5 m/s at *t* = 19.5 s. At this time, *V*_{C} crashes into O_{2} (see Fig. 7 for a zoomed-in view of the crash), while *V*_{D} avoids O_{2} and eventually attains the goal. Additionally, *P*_{D} avoids this collision with a solve-time that is only slightly higher than *P*_{C}’s just before *P*_{C} causes *V*_{C} to crash. The next section discusses the larger solve-times encountered at 19.0 s. Finally, the solve-times for *P*_{D} are less than the real-time threshold of 0.5 s, despite the fact that this is a challenging environment with 38 obstacles instead of 3.

### 4.4 Execution Horizon and Obstacle Speed Analysis Within *E*_{C}.

Including a moving obstacle avoidance specification increases safety over a range of execution horizons and obstacle speeds. To show this, *V*_{C} and *V*_{D} are tested within *E*_{C} for a range of execution horizons (*t*_{ex} = [0.01, 0.0621, …, 1] s) and obstacle velocities (*v*_{y2} = [0, −2.11, …, −20] m/s). The data from this parameter sweep are shown in Fig. 9, where a plotted point indicates a successful simulation. For instance, when the execution horizon is 0.01 s and the velocity of O_{2} in the *y*-direction is −2.11 m/s, both *V*_{C} and *V*_{D} attain the goal.

The data follow the expected trend; i.e., *V*_{D} is safer than *V*_{C}, and the results are statistically significant (*p* = 2.2 × 10^{−16}), as shown by a Fisher test (see Table 8 in the Appendix). *V*_{D} accounts for the majority (87.1%) of the successful trials, and *V*_{C} accounts for the majority (60.0%) of the trials that failed.
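As a sketch of this statistical check, a one-sided Fisher exact test on a 2×2 success/failure table can be computed directly from the hypergeometric distribution. The function below is a minimal illustration; the counts used in the usage note are hypothetical and are not the paper's Table 8 data.

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    P(first row has >= a successes) under the hypergeometric null.
    Rows: planner variant; columns: success/failure counts (hypothetical)."""
    n = a + b + c + d
    row1, col1, row2 = a + b, a + c, c + d
    denom = comb(n, col1)  # number of ways to place col1 successes
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        if 0 <= col1 - k <= row2:  # remaining successes must fit in row 2
            p += comb(row1, k) * comb(row2, col1 - k) / denom
    return p
```

For example, a table with 5 successes and 0 failures for one planner and 0 successes and 5 failures for the other gives p = 1/252 ≈ 0.004, i.e., significant at the 5% level even for these small hypothetical counts.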

While making the execution horizon small creates a more reactive planner, which can more reliably avoid collisions with fast moving obstacles, it makes it more difficult to obtain the planning solutions in real-time. Figure 10 depicts this issue, where the real-time factor (RTF) and probability-of-safety (POS) are defined as follows.

*RTF:* RTF = *solve-times*_{max}/*t*_{ex}. To calculate *solve-times*_{max}, the maximum value in a vector of solve-times for each test case (i.e., obstacle speed and execution horizon) is averaged across obstacle speeds.

*POS:* The probability that the vehicle will attain the goal, calculated over the range of obstacle velocities for each execution horizon.

For both *P*_{C} and *P*_{D}, the RTF is very high at small execution horizons and drops for larger execution horizons, as shown in Fig. 10. For *P*_{C}, the POS is very low across the entire range of execution horizons. In contrast, *P*_{D}’s POS is higher for smaller execution horizons and lower for larger execution horizons. Additionally, when using *P*_{D} there are two cases where the RTF is less than 1, namely when the execution horizon is either 0.687 or 0.790 s. In these cases the POS is 0.45 and 0.25, respectively.
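The two metrics defined above can be sketched in a few lines; the array layout and function names below are assumptions made for illustration, not the study's analysis code.

```python
import numpy as np

# Sketch of the RTF and POS metrics (illustrative layout). solve_times[i][j]
# holds the vector of solve-times for obstacle speed i and execution horizon j;
# success[i][j] marks whether the vehicle attained the goal in that test case.

def real_time_factor(solve_times, t_ex):
    # Max solve-time per test case, averaged across obstacle speeds (rows),
    # then divided by the corresponding execution horizon.
    per_case_max = np.array([[max(v) for v in row] for row in solve_times])
    return per_case_max.mean(axis=0) / np.asarray(t_ex)

def probability_of_safety(success):
    # Fraction of obstacle speeds that end in goal attainment, per horizon.
    return np.asarray(success, dtype=float).mean(axis=0)
```

An RTF below 1 then means the averaged worst-case solve-time fits within the execution horizon, which is the real-time condition discussed above.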

## 5 Discussion

This work was motivated by the assumption that including the set of specifications *S*_{1}–*S*_{7} into a planner will improve both performance and safety, compared with less comprehensive sets. The results presented support this assumption. In particular, the results show that including (i) minimum time-to-goal, (ii) minimum control effort, and (iii) moving obstacle avoidance specifications improves the closed-loop performance and safety for a UGV application.

Contrary to our expectations, adding several key planner specifications does not lead to larger solve-times. Specifically, the results show that adding (i) minimum time-to-goal, (ii) minimum control effort, and (iii) moving obstacle avoidance specifications does not lead to an increase in NLP solve-times. In fact, adding a minimum time-to-goal specification actually reduces the solve-times within the simple, static, unstructured environment (see the top left trace in Fig. 4). The minimum time-to-goal specification helps balance the sixth term in (5), which minimizes the area between the vehicle’s position trajectory and a line that runs through the goal in the *y*-direction. To see this balancing effect, compare the position trajectories of *V*_{A} to those of *V*_{B} and *V*_{C} in Fig. 4. The baseline planner, i.e., *P*_{A}, more effectively minimizes the area mentioned above for *V*_{A} than either *P*_{B} or *P*_{C} does for *V*_{B} and *V*_{C}, respectively. *V*_{B} and *V*_{C} have a larger area because both *P*_{B} and *P*_{C} have, in addition to the sixth term in (5), a minimum time-to-goal specification. To reduce this area more effectively, *V*_{A} aggressively decelerates over the entire test and operates at lower speeds; these lower speeds allow *V*_{A} to return to the line that runs through the goal in the *y*-direction sooner than either *V*_{B} or *V*_{C}. These differences between the vehicles’ trajectories may have led to the differences in the planners’ solve-times, where *P*_{A} has longer solve-times than either *P*_{B} or *P*_{C}. Notice that at around 6 s, *V*_{A}’s steering angle *δ*_{f}(*t*) and longitudinal deceleration *a*_{x}(*t*) are large, and *P*_{A}’s solve-time increases sharply. On the basis of such observations, it can be speculated that planning aggressive deceleration and steering trajectories at low speeds may be more computationally expensive than planning less aggressive deceleration and steering trajectories at high speeds.

The results show that the developed formulation with all the desired specifications included can be solved in real-time. In particular, using a direct-collocation method implemented in NLOptControl [20] in conjunction with the *KNITRO* NLP solver makes real-time solutions feasible. Similar solve-times are obtained with the open-source *IPOPT* NLP solver but are not reported here for brevity.

As the number of obstacles increases, the environment becomes more challenging, because the number of obstacles directly affects the computational load [31]. In the formulation developed in this work, the NLP dimensions grow linearly with the number of obstacles. Thus, increasing the number of obstacles from 3 (in *E*_{A} and *E*_{B}) to 38 (in *E*_{C}) enlarges the NLP and increases the computational load. It is reasonable to assume that this increase is a major factor in the corresponding increase in solve-times; to see this increase, compare Figs. 4 and 6–8. Increasing the number of obstacles may eventually result in a loss of real-time solutions. However, several approaches may be taken to use the formulation presented in this paper in an environment with many obstacles: considering a smaller number of obstacles at a time, grouping several small, closely packed obstacles into a single obstacle, increasing computational power, or some combination of these.
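The linear growth can be seen by counting constraints: each obstacle contributes one avoidance inequality per collocation point. The elliptical constraint form and the margin `sm` below are illustrative assumptions (the paper's exact constraint (7) is not reproduced here).

```python
# Schematic of obstacle-avoidance constraint scaling: one ellipse inequality
# per obstacle per collocation point, so the constraint count (and hence the
# NLP size) grows linearly in the number of obstacles. The ellipse form and
# safety margin `sm` are illustrative assumptions.

def avoidance_constraints(xs, ys, obstacles, sm):
    g = []
    for (xc, yc, a, b) in obstacles:  # center and semi-axes of each obstacle
        for x, y in zip(xs, ys):
            # feasible iff g >= 0 (vehicle outside the inflated ellipse)
            g.append(((x - xc) / (a + sm)) ** 2
                     + ((y - yc) / (b + sm)) ** 2 - 1.0)
    return g
```

With 11 collocation points, 3 obstacles yield 33 inequalities while 38 obstacles yield 418, which is consistent with the linear growth noted above.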

For a given UGV, as obstacle speed increases, the environment becomes more challenging, because the vehicle is put in an increasingly difficult situation. The data plotted in Fig. 9 support this claim; even with a moving obstacle avoidance specification, it is not possible to reliably avoid the oncoming obstacle O_{2} in *E*_{C} when it is moving faster than 21.1 m/s.

A moving obstacle avoidance specification may be unnecessary if the planner is updating quickly and the obstacles are moving slowly. The data plotted in Fig. 9 also support this claim. The data reveal that if the obstacle is moving directly toward the vehicle at a speed less than 2.11 m/s, and if the execution horizon is less than 0.375 s, then a planner without a moving obstacle avoidance specification will safely attain the goal. Removing the moving obstacle avoidance specification will also eliminate the need for an algorithm to predict the speed of the obstacles. This simplification may be appropriate for some industrial applications, where the obstacles are known to move slowly.

In addition to a moving obstacle avoidance specification, as obstacle speed increases, a small execution horizon becomes increasingly crucial for safety. The data plotted in Fig. 9 support this claim as well. Over a range of obstacle speeds, planning with a smaller execution horizon makes it more likely that the vehicle attains the goal. It is therefore desirable to make the execution horizon as small as possible to create a more reactive and safer planner. Having a small execution horizon, however, makes it more difficult for the planner to obtain solutions in real-time.

To ensure that the planning solutions are obtained in real-time while maintaining safety, it may be necessary to operate the UGV within environments where the obstacles are traveling from low to moderate speeds. Particular sets of data plotted in Fig. 9 support this claim as well. Specifically, when disregarding the cases where the obstacle is traveling faster than 16.8 m/s, the RTF decreases to 0.933 and the POS increases to 0.889. Similarly, when disregarding the data where the obstacle is traveling faster than 4.21 m/s, the RTF is further reduced to 0.920 and the POS increases to 1.

In addition to reducing the execution horizon, planning in a dangerous situation can increase the RTF. This results from the fact that planning in a dangerous situation can lead to less feasible or even infeasible NLP constraints, which make it more challenging or even impossible for the NLP solver to obtain a solution. The top trace of Fig. 8 supports this claim; it shows that solve-times increase sharply just before *V*_{D} avoids a collision with 0_{2}. It is important to consider these situations in terms of solve-time. If the planner cannot obtain a trajectory within the real-time limit, then the vehicle will not have an updated trajectory to follow and the situation status may go from dangerous to disastrous.

## 6 Conclusion

This paper develops and evaluates an NMPC-based trajectory planner for a UGV that incorporates all planner specifications *S*_{1}–*S*_{7} listed in Table 1. The evaluation includes comparisons against three other planner formulations that contain only a subset of the specifications *S*_{1}–*S*_{7} as state-of-the-art approaches.

Simulation results show that the developed formulation yields higher UGV safety and performance than planners with only subsets of the specifications *S*_{1}–*S*_{7}. Despite being more comprehensive, the formulation can be solved in real-time, in the sense that the solve-times are all less than the chosen execution horizon of 0.5 s.

Comparisons also show that if the planner is updating quickly, then a slowly moving obstacle can be safely avoided without a moving obstacle avoidance specification. However, to avoid faster obstacles, both the moving obstacle avoidance and small execution horizon specifications are necessary. A small execution horizon improves safety but decreases the feasibility of obtaining trajectories in real-time. Planning in an environment with more obstacles increases OCP solve-times.

The finding that adding the minimum-time-to-goal, minimum control effort, and moving obstacle avoidance specifications to the formulation does not lead to larger solve-times is contrary to our expectations. In fact, it is observed that adding a minimum-time-to-goal specification reduces planning solve-times in the simple, static, unstructured environment.

While this work tailors the NMPC-based trajectory planner for a UGV application, a variety of automated vehicle systems, e.g., UAVs and spacecraft, can also make use of the approach detailed here.

Future efforts may focus on reducing the solve-times below 0.5 s to make the planner even more reactive. To this end, parallel computing solutions using GPU hardware may be pursued. It is also of interest to evaluate the formulation experimentally on a large, high-speed vehicle.

## Footnote

To conduct the parametric study relating a reduced execution horizon to improved safety, in which solve-times are not the focus, solve-times are allowed to exceed the baseline 0.5 s execution horizon.

## Acknowledgment

The authors wish to acknowledge the financial support of the Automotive Research Center (ARC) in accordance with Cooperative Agreement W56HZV-14-2-0001 U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) Warren, MI. DISTRIBUTION A. Approved for public release; distribution unlimited, #28420.

## Conflict of Interest

There are no conflicts of interest.

## Data Availability Statement

The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request.

### Appendix: Supplementary Data

| Variable | Value | Units |
|---|---|---|
| M_{t} | 2689 | kg |
| I_{zz} | 4110 | kg m^{2} |
| L_{f}, L_{r} | 1.58, 1.72 | m |
| K_{zx}, K_{zyr}, K_{zyf} | 806, 1076, 675 | N/(m/s^{2}) |
| F_{z,min} | 1000 | N |
| a, b | 1300, 100 | – |
| ψ_{min}, ψ_{max} | [−2π, 2π] | rad |
| δ_{f,min}, δ_{f,max} | [−30, 30] | deg |
| γ_{f,min}, γ_{f,max} | [−5, 5] | deg/s |
| J_{x,min}, J_{x,max} | [−5, 5] | m/s^{3} |
| U_{min}, U_{max} | [0.01, 29] | m/s |


| Variable or conditions | Value and units |
|---|---|
| t_{ex}, N, L_{range}, κ | 0.5 s, 10, 50.0 m, 5.0 m |
| sm_{1}, sm_{2} | 2.5 m, 4 m |
| X0 | [200 m, 0 m, 0, 0, 1.57 rad, 0, 17 m/s, 0] |
| X0_{tol} | [0.5 m, 0.5 m, 0.5, 0.005, 0.5, 0.25, 0.5 m/s, 0.5] |
| XF_{tol} | [5.0 m, 5.0 m, NaN, NaN, NaN, NaN, NaN, NaN] |
| w_{ic}, w_{x0}, w_{y0}, w_{v0}, w_{r0}, w_{ψ0}, w_{sa0}, w_{ux0}, w_{ax0}, w_{xf} | 100, 1, 1, 10, 10, 10, 2, 0.1, 0.1, 100 |
| w_{g}, w_{t}, w_{haf}, w_{Fz}, w_{ce}, w_{sa}, w_{sr}, w_{ax}, w_{jx} | 10, 0, 1, 0.5, 0, 0.1, 1, 0.1, 0.01 |
| Moving obstacle avoidance constraint in (7) | False |


| Variable | Description | Value | Unit |
|---|---|---|---|
| a_{obs} | Array of the obstacles’ semi-major axes | [5, 4, 2] | m |
| b_{obs} | Array of the obstacles’ semi-minor axes | [5, 4, 2] | m |
| x0_{obs} | Array of the obstacles’ initial x positions | [205, 180, 200] | m |
| y0_{obs} | Array of the obstacles’ initial y positions | [57, 75, 63] | m |
| v_{x} | Array of the obstacles’ speeds in the x direction | [0, 0, 0] | m/s |
| v_{y} | Array of the obstacles’ speeds in the y direction | [0, 0, 0] | m/s |
| x_{g} | x position of the goal location | 200 | m |
| y_{g} | y position of the goal location | 125 | m |
| σ | Tolerance on the goal location | 15 | m |
| ψ_{g} | Desired orientation at the goal | π/2 | rad |


| Variable | Value | Unit |
|---|---|---|
| a_{obs} | [6, 6, 0.387, 0.387, …, 0.387] (entries 3–38 are all 0.387) | m |
| b_{obs} | Same as a_{obs} | m |
| x0_{obs} | [6, 18, 12, 12, …, 12] (entries 3–38 are all 12) | m |
| y0_{obs} | [281, 650, 0, 5, 10, 15, …, 175] (entries 3–38 increase in steps of 5) | m |
| v_{x} | [0, 0, …, 0] (all 38 entries are 0) | m/s |
| v_{y} | [0, −10, 0, …, 0] (all entries are 0 except the second) | m/s |
| x_{g} | 18 | m |
| y_{g} | 700 | m |
| σ | 25 | m |
| ψ_{g} | π/2 | rad |


Table 7 Control effort values for *P*_{B} and *P*_{C} within *E*_{A}

| Effort term | P_{B} | P_{C} | Decrease |
|---|---|---|---|
| Steering angle | 0.000586 | 0.000422 | 28.0% |
| Steering rate | 0.00129 | 0.000922 | 28.5% |
| Longitudinal jerk | 0.530 | 0.420 | 20.8% |
| Total | 0.532 | 0.421 | 20.9% |
