## Abstract

Safe trajectory planning for high-performance automated vehicles in an environment with both static and moving obstacles is a challenging problem. Part of the challenge is developing a formulation that can be solved in real-time while including the following set of specifications: minimum time-to-goal, a dynamic vehicle model, minimum control effort, both static and moving obstacle avoidance, simultaneous optimization of speed and steering, and a short execution horizon. This paper presents a nonlinear model predictive control-based trajectory planning formulation, tailored for a large, high-speed unmanned ground vehicle, that includes the above set of specifications. The ability to solve this formulation in real-time is evaluated using NLOptControl, an open-source, direct-collocation-based optimal control problem solver in conjunction with the KNITRO nonlinear programming problem solver. The formulation is tested with various sets of specifications. A parametric study relating execution horizon and obstacle speed indicates that the moving obstacle avoidance specification is not needed for safety when the planner has a small execution horizon and the obstacles are moving slowly. However, a moving obstacle avoidance specification is needed when the obstacles are moving faster, and this specification improves the overall safety without, in most cases, increasing the solve-times. The results indicate that (i) safe trajectory planners for high-performance automated vehicles should include the entire set of specifications mentioned above, unless a static or low-speed environment permits a less comprehensive planner and (ii) the resulting formulation can be solved in real-time.

## 1 Introduction

In many high-performance automated vehicle applications for unmanned air vehicles (UAVs), unmanned ground vehicles (UGVs), and spacecraft, it is both desirable and challenging to plan safe trajectories in a dynamic environment. Part of this challenge is incorporating the set of specifications listed in Table 1 into a real-time planner, where real-time planning demands that the planner’s solve-times are all less than the execution horizon. While trajectory planning systems that include subsets of the specifications listed in Table 1 exist, a planner that consists of all of them has not yet been developed as reviewed in detail next. This paper addresses this gap.

Table 1

Planner specifications

| Specification | Description |
| --- | --- |
| S1 | Static obstacle avoidance |
| S2 | Minimum time-to-goal |
| S3 | Dynamic vehicle model |
| S4 | Minimum control effort |
| S5 | Simultaneously optimize speed and steering |
| S6 | Moving obstacle avoidance |
| S7 | Small execution horizon |

### 1.1 Relation to State-of-the-Art.

Figure 1 shows a conceptual scheme for comparing and developing trajectory planners. This scheme illustrates the conceptual performance and safety of a vehicle controlled using trajectory planners with different sets of specifications, operating either in a static environment with a stationary obstacle (top four traces) or a dynamic environment with a moving obstacle (bottom two traces). In all cases, the planning and execution horizons are the same.

Fig. 1

Static obstacle avoidance (S1; Table 1) is a baseline specification in many trajectory planning systems but is not, by itself, sufficient for either performance or safety. Regarding safety, if the trajectory planner does not use a dynamic vehicle model (S3; Table 1), a trajectory that the vehicle cannot follow may be determined [1]. Such a trajectory may result in either a collision [1,2] (case A; Fig. 1) or some other catastrophic event, such as rollover in the case of a ground vehicle [3]. Despite this, some planners designed to avoid static obstacles for UAV applications [4] utilize a kinematic vehicle model (case A; Fig. 1). By utilizing a dynamic vehicle model in trajectory planning, the actual vehicle can follow the prescribed trajectory more accurately. Planners designed to avoid static obstacles with a dynamic vehicle model (case B; Fig. 1) exist for UGV applications [5]. However, the planner in case B does not have a minimum time-to-goal specification (S2; Table 1), which may result in mission failure in certain applications. For instance, in racing applications [6,7], planning without this specification will likely result in a lost race. In these applications, the planner should include at least S1–S3 (case C; Fig. 1), such that it can arrive at the goal in less time than a planner with only the static obstacle avoidance and dynamic vehicle model specifications. If minimizing fuel consumption and mechanical wear are additional concerns, then the minimum control effort specification (S4; Table 1) needs to be included in the planner as well. Planners with S1–S4 exist in applications for UGVs [8] and UAVs [9–11]. A limitation of these planners is that they do not optimize both speed and steering (S5; Table 1). Optimizing both allows the vehicle to both slow down more quickly and turn more tightly (shown in case D; Fig. 1), thereby improving both performance and safety [12].

In a dynamic environment, while the trajectory planning specifications S1–S5 are necessary for both performance and safety, they are not sufficient (see case E; Fig. 1). To improve collision avoidance (i.e., safety) in a dynamic environment, there are three possibilities: including a moving obstacle avoidance specification (S6; Table 1), including a small execution horizon specification (S7; Table 1), or including both.

A moving obstacle avoidance specification accounts for the motion of the obstacle over the planning horizon, which increases safety (see case F; Fig. 1). This specification has been implemented for applications in UGVs [13], UAVs [14], and spacecraft [15]. These developments, however, have a limitation: they use a kinematic vehicle model as opposed to a dynamic vehicle model; case A in Fig. 1 depicts the potential outcome of using a kinematic vehicle model. Our preliminary work [16] developed a planner with S1–S6 for a UGV application. This work, however, has several limitations, three of which are (i) it does not investigate closed-loop performance and safety, (ii) it assumes that the goal is within the LiDAR’s sensing range, and (iii) the planner’s solve-times are, at best, nearly two orders of magnitude above real-time (assuming an execution horizon of 0.5 s). This paper addresses these three limitations as one of its original contributions.

A small execution horizon specification engenders a more reactive planner with better obstacle avoidance capabilities. An execution horizon is described as “small” when reducing it does not improve safety within a given environment. For instance, to avoid the collision in case E (Fig. 1), a smaller execution horizon can be used. Previous research [12] includes a small execution horizon as well as S1–S5. While there is a reason to expect that such a planner may operate safely around slowly moving obstacles, this hypothesis has not yet been tested. Therefore, this paper also investigates, for the first time, whether a system with S1–S5 and a small execution horizon can operate safely in a dynamic environment for a range of obstacle speeds.

This work uses a nonlinear model predictive control (NMPC) based trajectory planner; this approach is also used in Refs. [4,5,9–15,17]. Unfortunately, it is very difficult to solve the proposed planning formulation in real-time with a short execution horizon. For instance, the literature shows that GPOCS, GPOPS-II, and our custom software [12], all written in MATLAB, are not fast enough for NMPC applications in aircraft [18], robot [19], and UGV [12,16] systems, respectively. Therefore, in this work, NLOptControl [20] is used instead to evaluate if the proposed formulation can be solved in real-time with a short execution horizon.

### 1.2 Summary of Original Contributions.

The original contributions of this paper are summarized as follows:

1. Introduction of an NMPC-based trajectory planner with S1–S7, tailored for a UGV application.

2. Investigation of the effect that different sets of specifications have on safety, performance, and solve-time.

3. Investigation of the need to include a moving obstacle avoidance specification for a range of execution horizons and obstacle speeds.

4. Investigation of the ability to solve the proposed formulation in real-time with a short execution horizon.

As an example application, the trajectory planning formulation developed in this work is tailored for a High Mobility Multipurpose Wheeled Vehicle (HMMWV) but can be adapted to other vehicles as well. Together, this planner and vehicle are referred to as a UGV.

### 1.3 Paper Structure.

The remainder of this paper is organized as follows. Section 2 describes the NMPC framework developed to consider non-negligible trajectory planning problem solve-times and the underlying optimal control problem (OCP) formulation developed to include S1–S6. Section 3 describes the test conditions under which the proposed planner is evaluated. In Sec. 4, the effect that adding different specifications to trajectory planners has on safety, performance, and solve-time is tested in a variety of environments. The results of these tests are discussed in Sec. 5. Section 6 summarizes the paper and draws conclusions.

## 2 Mathematical Formulation

The mathematical formulation presented below is based on the following assumptions:

1. Both the goal and obstacle information are known.

2. The vehicle state is known.

3. The terrain is flat.

### 2.1 NMPC Framework.

At the heart of an NMPC formulation lies an OCP. In NMPC simulation studies, OCP solve-times are often neglected [12,21]. In such a case, first the plant simulation is paused, and the OCP is initialized at the current time t0 with the current plant state X0. Next, the OCP is solved to produce an optimal control trajectory ζ*. With this control trajectory, the plant is then simulated starting at t0 with X0 until t0 + tex; this yields a new initial state, which is then used to initialize the next OCP. However, most practical OCPs take a non-negligible amount of time to solve, after which, in a more realistic simulation, the plant will have evolved from its current state, where the OCP was initialized, to a new state [21]. This computational delay renders the control signal sub-optimal and potentially infeasible or unsafe. To achieve optimal safety and performance, non-negligible OCP solve-times must be taken into account. The NMPC framework used in this work, shown in Fig. 2, accounts for these non-negligible OCP solve-times.

Fig. 2

This framework has three main components: the OCP, the plant model, and the state prediction function. The OCP is provided with goal and environment information defined as follows:

Definition 1

Goal information $G$ includes the goal position $(x_g, y_g)$, the desired vehicle orientation at the goal $\psi_g$, and the radial tolerance for attaining the goal $\sigma$.

Definition 2

Environment information $E$ includes the sizes, initial positions, and velocities of the obstacles.

The obstacles are assumed to be enclosed in ellipses, where $a_{obs}$ and $b_{obs}$ are arrays that describe the ellipses’ semi-major and semi-minor axes, respectively; $x_{0obs}$ and $y_{0obs}$ are arrays of the obstacles’ initial $x$ and $y$ positions, respectively; and $v_x$ and $v_y$ are arrays of the obstacles’ speeds in the $x$ and $y$ directions, respectively.

During the first execution horizon, the OCP has not yet produced a control signal for the vehicle to follow. Therefore, the vehicle is sent a known control signal U0 (see Fig. 2), set such that the vehicle will drive straight at a constant speed. To account for the evolution of the plant state during the execution horizon, a state prediction X0p, which is made for t0 + tex, is used to initialize the OCP. The inputs of the state prediction function are the current state of the vehicle and the current control signal, which is U0 during the first execution horizon and ζ* afterward. Then, the plant model is simulated from the initial time t0 to t0 + tex and the first OCP is solved. Real-time feasibility of this framework requires that the OCP solve-times all be less than the execution horizon.
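To make this loop concrete, it can be sketched as follows. The single-integrator plant, the `solve_ocp` stand-in, and all numeric values are illustrative assumptions, not the planner’s actual models; the point is how the state prediction absorbs a non-negligible solve-time:

```python
import numpy as np

def simulate(x, u, dt, steps=10):
    """Forward-simulate a toy plant dx/dt = u over dt (stand-in for the 3DoF model)."""
    h = dt / steps
    for _ in range(steps):
        x = x + h * u
    return x

def nmpc_step(x_now, u_current, t_ex, solve_ocp):
    """One execution horizon: predict the state at t0 + t_ex under the current
    control, then solve the OCP from that predicted state X0^p, so the plant's
    evolution during the (non-negligible) solve is accounted for."""
    x0_pred = simulate(x_now, u_current, t_ex)   # state prediction X0^p
    zeta_star = solve_ocp(x0_pred)               # control for the next horizon
    return x0_pred, zeta_star

# toy "OCP" that steers the state toward the origin
x = np.array([4.0, 2.0])
u = np.array([0.0, 0.0])          # U0: known control during the first horizon
for _ in range(3):
    x0_pred, u = nmpc_step(x, u, t_ex=0.5, solve_ocp=lambda x0: -0.5 * x0)
    x = simulate(x, u, 0.5)       # plant evolves while the next OCP is "solving"
```

Because the toy plant is linear and the control is held constant over each horizon, the predicted state `x0_pred` matches the plant state exactly; with the real 3DoF model, the same structure applies but the prediction is an integration of (6).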

Model mismatch between the plant model, the vehicle dynamics model in the OCP, and the state prediction function can induce biases. Addressing these biases is acknowledged as an important research problem but is not one of the goals of this paper. Therefore, to avoid these biases while focusing on our research tasks, in this paper, the plant model, the vehicle model in the OCP, and the state prediction function all use the same set of differential equations, which is presented in detail later in this section.

This framework runs until the UGV either reaches the goal or fails the test. An algorithm is run after each execution horizon to determine if the vehicle has reached the goal within the radial goal tolerance σ. The test fails if

1. the vehicle crashes into an obstacle,

2. the vertical tire load in the plant model goes below 100 N for any of the four tires,

3. any of the solve-times exceeds 300 s, or if

4. the solution to the nonlinear programming problem (NLP) is not considered to be optimal based on the tolerances and the Karush–Kuhn–Tucker conditions.

### 2.2 Optimal Control Problem.

This section describes how the set of planner specifications S1–S6 is incorporated into the OCP. At a high level, these specifications are all incorporated into the single-phase, continuous-time OCP defined in (1)–(4) as
$\underset{\xi(t),\,\zeta(t),\,t_f}{\operatorname{minimize}} \;\; \mathcal{M}\big(\xi(t_0+t_{ex}),\, t_0+t_{ex},\, \xi(t_f),\, t_f\big) + \int_{t_0+t_{ex}}^{t_f} \mathcal{L}\big(\xi(t),\zeta(t),t\big)\,dt$
(1)
$\text{subject to} \quad \frac{d\xi}{dt}(t) - f\big(\xi(t),\zeta(t),t\big) = 0$
(2)
$C\big(\xi(t),\zeta(t),A(t),t_f\big) \leq 0$
(3)
$\phi\big(\xi(t_0+t_{ex}),\, t_0+t_{ex},\, \xi(t_f),\, t_f\big) = 0$
(4)
where $t_f$ is the free final time, $t \in [t_0+t_{ex},\, t_f]$ is the time, $\xi(t) \in \mathbb{R}^{n_{st}}$ is the state, and $\zeta(t) \in \mathbb{R}^{n_{ctr}}$ is the control, with $n_{st}$ defined as the number of states and $n_{ctr}$ defined as the number of controls. The Mayer term is $\mathcal{M}: \mathbb{R}^{n_{st}} \times \mathbb{R} \times \mathbb{R}^{n_{st}} \times \mathbb{R} \rightarrow \mathbb{R}$ and the Lagrange term is $\mathcal{L}: \mathbb{R}^{n_{st}} \times \mathbb{R}^{n_{ctr}} \times \mathbb{R} \rightarrow \mathbb{R}$. The dynamic constraints are given by $f: \mathbb{R}^{n_{st}} \times \mathbb{R}^{n_{ctr}} \rightarrow \mathbb{R}^{n_{st}}$. The path constraints are captured by $C: \mathbb{R}^{n_{st}} \times \mathbb{R}^{n_{ctr}} \times \mathbb{R}^{a} \times \mathbb{R} \rightarrow \mathbb{R}^{p}$, which bound (i) the state and control based on the vehicle’s dynamic limits and any additional information, denoted as $A(t) \in \mathbb{R}^{a}$, and (ii) $t_f$ based on a maximum final time $t_{f_{max}}$. Finally, the event constraints are expressed with $\phi: \mathbb{R}^{n_{st}} \times \mathbb{R} \times \mathbb{R}^{n_{st}} \times \mathbb{R} \rightarrow \mathbb{R}^{q}$.

The remainder of this section describes how S1–S6 are incorporated into (1)–(4).

#### 2.2.1 Cost Functional.

The cost functional in (1) is set to (5) as
$J = w_t\, t_f + \varrho + w_g\, \dfrac{(x(t_f)-x_g)^2 + (y(t_f)-y_g)^2}{(x(t_0+t_{ex})-x_g)^2 + (y(t_0+t_{ex})-y_g)^2 + \epsilon} + w_{cf} \int_{t_0+t_{ex}}^{t_f} \big(w_{\delta_f}\,\delta_f(t)^2 + w_{\gamma}\,\gamma(t)^2 + w_J\,J_x(t)^2\big)\,dt + w_{F_z} \int_{t_0+t_{ex}}^{t_f} \Big(\tanh\big(\tfrac{-F_{z_{rl}}-a}{b}\big) + \tanh\big(\tfrac{-F_{z_{rr}}-a}{b}\big)\Big)\,dt + w_{haf} \int_{t_0+t_{ex}}^{t_f} \big(\sin(\psi_g)(x-x_g) - \cos(\psi_g)(y-y_g)\big)^2\,dt$
(5)
where $w_t$, $w_g$, $w_{cf}$, $w_{\delta_f}$, $w_{\gamma}$, $w_J$, $w_{F_z}$, and $w_{haf}$ are weight terms; $x(t)$ and $y(t)$ represent the vehicle’s global position coordinates on a flat plane; $\epsilon$ is a small number set to 0.01 to avoid singularities; $\delta_f(t)$ is the steering angle at the front of the vehicle; $\gamma(t)$ is the steering rate; $J_x(t)$ is the longitudinal jerk; $a$ and $b$ are parameters to prevent $F_{z_{rl}}$ and $F_{z_{rr}}$ from being close to the minimum vertical tire load limit, denoted as $F_{z_{min}}$, as described in Ref. [12]; and $\varrho$ is a term for penalizing the slack variables on the initial and terminal conditions.

There are six terms in (5). The first term minimizes the final time tf to establish a minimum-time-to-goal specification. The second term establishes soft constraints on the initial and terminal conditions. This term is described in greater detail later in this section. The third term helps the vehicle reach the goal when the goal is not within the LiDAR range, denoted as Lrange. If the goal is within a distance of Lrange, then wg is set to zero, and the vehicle is constrained to reach the goal. This constraint is described in greater detail later in this section. The fourth term minimizes the control effort, which encourages smooth control signals. The fifth term dissuades the controller from generating solutions near the minimum vertical tire load limit. This is done to prevent vehicle rollover and infeasible initializations in the next OCP. Finally, to help the vehicle pass the goal location through the desired direction ψg, the sixth term is added, which minimizes the area between a line in the (x, y) plane going through the goal in the desired direction ψg and the vehicle’s trajectory in the (x, y) plane [12].
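As an illustration of the sixth term, the following sketch evaluates its integrand at two sample points; the coordinates, goal, and desired heading are hypothetical values chosen only to show the term’s behavior:

```python
import math

def heading_alignment(x, y, xg, yg, psi_g):
    """Integrand of the sixth term in (5): the squared offset of (x, y) from the
    line through the goal (xg, yg) in the desired direction psi_g."""
    return (math.sin(psi_g) * (x - xg) - math.cos(psi_g) * (y - yg)) ** 2

# desired direction psi_g = pi/2 (along +y), goal at (0, 20):
# a point on the goal line incurs (numerically) no penalty...
on_line = heading_alignment(0.0, 5.0, 0.0, 20.0, math.pi / 2)
# ...while a point 3 m off the line is penalized by its squared offset
off_line = heading_alignment(3.0, 5.0, 0.0, 20.0, math.pi / 2)
```

Integrating this quantity along the trajectory penalizes the area between the trajectory and the goal line, which pulls the vehicle toward passing the goal in the desired direction.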

#### 2.2.2 Dynamic Vehicle Model.

When including both the dynamic vehicle model and minimum time-to-goal specifications, it is important to consider that the vehicle may need to operate at its dynamic limits. Thus, this work leverages the 3DoF vehicle model developed in Ref. [8] (shown in Fig. 3). This model is designed to plan trajectories that operate a HMMWV safely at its dynamic limits [8]. To achieve this, it has eight states, two controls, uses a pure-slip Pacejka tire model [22], and considers the longitudinal and lateral load transfer effects when calculating the vertical tire forces. The differential equations that are used to model (i) the plant, (ii) the vehicle dynamics in the OCP, and (iii) the state prediction function are shown in (6) as
$f(\xi(t), \zeta(t), t) = D(\xi(t)) + B\,\zeta(t)$
(6)
where
$D(\xi(t)) = \begin{bmatrix} U(t)\cos\Psi(t) - (V(t) + L_f\,\omega_z(t))\sin\Psi(t) \\ U(t)\sin\Psi(t) + (V(t) + L_f\,\omega_z(t))\cos\Psi(t) \\ (F_{yf}(t) + F_{yr}(t))/M_t - U(t)\,\omega_z(t) \\ (F_{yf}(t)\,L_f - F_{yr}(t)\,L_r)/I_{zz} \\ \omega_z(t) \\ 0 \\ a_x(t) \\ 0 \end{bmatrix}, \quad \xi(t) = \begin{bmatrix} x(t) \\ y(t) \\ V(t) \\ \omega_z(t) \\ \Psi(t) \\ \delta_f(t) \\ U(t) \\ a_x(t) \end{bmatrix}, \quad B^{T} = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}, \quad \text{and} \quad \zeta(t) = \begin{bmatrix} \gamma(t) \\ J_x(t) \end{bmatrix}$
Equation (6) breaks the dynamics constraints in (2) into two terms. The first of these terms, $D(\xi(t))$, establishes the state dynamics for the global position of the vehicle, the lateral speed $V(t)$, the yaw rate $\omega_z(t)$, the heading angle $\Psi(t)$, the steering angle $\delta_f(t)$, the longitudinal speed $U(t)$, and the longitudinal acceleration $a_x(t)$. The second term, $B\,\zeta(t)$, relates state variable rates to their respective control variables, i.e., $(d\delta_f/dt)(t)$ to the steering rate $\gamma(t)$, and $(da_x/dt)(t)$ to the longitudinal jerk $J_x(t)$. Finally, $L_f$ and $L_r$ are the distances from the front and rear axles to the center-of-mass (COM), $I_{zz}$ is the moment of inertia about the COM, $F_{yf}$ and $F_{yr}$ are the front and rear lateral tire forces, and $M_t$ is the total vehicle mass. Table 3, which is in the Appendix, contains all of the vehicle parameters used in this paper.
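A minimal sketch of evaluating (6) follows. The parameter values and the linear tire-force law are stand-in assumptions: the paper’s HMMWV parameters live in its Appendix, and it uses a pure-slip Pacejka tire model rather than the linear cornering-stiffness model used here:

```python
import numpy as np

# illustrative parameters (the paper's HMMWV values are in its Appendix, Table 3)
Lf, Lr, Mt, Izz = 1.6, 1.6, 2500.0, 4000.0
Caf, Car = 8.0e4, 8.0e4   # linear cornering stiffness, a stand-in for Pacejka

def dynamics(xi, zeta):
    """f(xi, zeta) = D(xi) + B @ zeta for the 3DoF model in (6).
    State: [x, y, V, wz, Psi, delta_f, U, ax]; control: [gamma, Jx]."""
    x, y, V, wz, Psi, df, U, ax = xi
    # linear lateral tire forces from the front/rear slip angles (assumption)
    Fyf = -Caf * ((V + Lf * wz) / U - df)
    Fyr = -Car * ((V - Lr * wz) / U)
    D = np.array([
        U * np.cos(Psi) - (V + Lf * wz) * np.sin(Psi),   # global x rate
        U * np.sin(Psi) + (V + Lf * wz) * np.cos(Psi),   # global y rate
        (Fyf + Fyr) / Mt - U * wz,                       # lateral speed rate
        (Fyf * Lf - Fyr * Lr) / Izz,                     # yaw rate rate
        wz,                                              # heading rate
        0.0,                                             # d(delta_f)/dt from B
        ax,                                              # longitudinal speed rate
        0.0,                                             # d(ax)/dt from B
    ])
    B = np.zeros((8, 2))
    B[5, 0] = 1.0   # steering rate gamma drives delta_f
    B[7, 1] = 1.0   # longitudinal jerk Jx drives ax
    return D + B @ zeta

# straight driving at 15 m/s with a small steering rate and jerk command
xi_dot = dynamics(np.array([0, 0, 0, 0, 0, 0, 15.0, 0.0]),
                  np.array([0.1, 0.5]))
```

Note the division by $U(t)$ in the slip angles, which is why this sketch (like many bicycle models) assumes a nonzero longitudinal speed.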
Fig. 3
The vertical tire load on each of the four tires is constrained to be above the minimum vertical tire load limit $F_{z_{min}}$. These tire load constraints help prevent vehicle rollover and are incorporated into (3). To calculate the vertical loads on the tires, this work uses a vertical load transfer model [12], where the vertical tire forces are approximated as
$F_{z_{rl}} = \frac{1}{2}\big(F_{z_{r0}} + K_{zx}(a_x(t) - V(t)\,\omega_z(t))\big) - K_{zyr}\,\frac{F_{yf} + F_{yr}}{M_t}$
$F_{z_{rr}} = \frac{1}{2}\big(F_{z_{r0}} + K_{zx}(a_x(t) - V(t)\,\omega_z(t))\big) + K_{zyr}\,\frac{F_{yf} + F_{yr}}{M_t}$
$F_{z_{fl}} = \frac{1}{2}\big(F_{z_{f0}} - K_{zx}(a_x(t) - V(t)\,\omega_z(t))\big) - K_{zyf}\,\frac{F_{yf} + F_{yr}}{M_t}$
$F_{z_{fr}} = \frac{1}{2}\big(F_{z_{f0}} - K_{zx}(a_x(t) - V(t)\,\omega_z(t))\big) + K_{zyf}\,\frac{F_{yf} + F_{yr}}{M_t}$
where $F_{z_{rl}}$ and $F_{z_{rr}}$ are the rear left and rear right vertical tire loads, $F_{z_{fl}}$ and $F_{z_{fr}}$ are the front left and front right vertical tire loads, $F_{z_{r0}} = M_t L_f g/(L_f + L_r)$ is the static rear axle load, $F_{z_{f0}} = M_t L_r g/(L_f + L_r)$ is the static front axle load, $K_{zx}$ is the longitudinal load transfer coefficient, and $K_{zyf}$ and $K_{zyr}$ are the front and rear lateral load transfer coefficients calculated as in Ref. [12].
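The load transfer approximation above can be sketched as follows; the parameter values are illustrative assumptions (the paper’s values are in its Appendix), and a useful sanity check is that at rest the four loads reduce to the static axle loads split evenly left/right:

```python
import numpy as np

# illustrative HMMWV-like parameters (assumptions, not the paper's values)
Lf, Lr, Mt, g = 1.6, 1.6, 2500.0, 9.81
Kzx, Kzyf, Kzyr = 800.0, 500.0, 450.0

def vertical_loads(ax, V, wz, Fyf, Fyr):
    """Approximate the four vertical tire loads with the load transfer model."""
    Fzr0 = Mt * Lf * g / (Lf + Lr)      # static rear axle load
    Fzf0 = Mt * Lr * g / (Lf + Lr)      # static front axle load
    long_transfer = Kzx * (ax - V * wz)  # longitudinal load transfer
    lat_accel = (Fyf + Fyr) / Mt         # lateral acceleration term
    Fzrl = 0.5 * (Fzr0 + long_transfer) - Kzyr * lat_accel
    Fzrr = 0.5 * (Fzr0 + long_transfer) + Kzyr * lat_accel
    Fzfl = 0.5 * (Fzf0 - long_transfer) - Kzyf * lat_accel
    Fzfr = 0.5 * (Fzf0 - long_transfer) + Kzyf * lat_accel
    return Fzrl, Fzrr, Fzfl, Fzfr

# at rest: no longitudinal or lateral transfer, loads are purely static
static = vertical_loads(ax=0.0, V=0.0, wz=0.0, Fyf=0.0, Fyr=0.0)
```

Each of the four returned loads would then be constrained above $F_{z_{min}}$ in (3).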

#### 2.2.3 State and Control Limits.

Actuator and other physical plant limits help establish the state and control bounds, which are added to (3). Specifically, five of the states and both controls are bounded with constant upper and lower bounds as
$x_{min} \leq x(t) \leq x_{max}$
$y_{min} \leq y(t) \leq y_{max}$
$\Psi_{min} \leq \Psi(t) \leq \Psi_{max}$
$\delta_{f,min} \leq \delta_f(t) \leq \delta_{f,max}$
$U_{min} \leq U(t) \leq U_{max}$
$\gamma_{min} \leq \gamma(t) \leq \gamma_{max}$
$J_{x,min} \leq J_x(t) \leq J_{x,max}$
Nonlinear functions of the vehicle’s speed bound the vehicle’s acceleration as
$a_{x,min}[U(t)] \leq a_x(t) \leq a_{x,max}[U(t)]$
Maximum deceleration/acceleration data collected from a 14 DoF HMMWV model are used to establish these nonlinear bounding functions [12].
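One simple way to realize such speed-dependent bounds is tabulated data with linear interpolation, sketched below; the sample points are hypothetical placeholders, not the 14 DoF HMMWV data:

```python
import numpy as np

# hypothetical (speed, acceleration-limit) samples; acceleration authority
# typically shrinks with speed, while braking authority changes more slowly
U_pts      = np.array([0.0,  5.0, 10.0, 15.0, 20.0, 25.0])   # m/s
ax_max_pts = np.array([3.0,  2.8,  2.4,  1.9,  1.4,  1.0])   # m/s^2
ax_min_pts = np.array([-8.0, -8.0, -7.5, -7.0, -6.5, -6.0])  # m/s^2

def accel_bounds(U):
    """Speed-dependent bounds a_x,min[U] <= a_x <= a_x,max[U] by interpolation."""
    return np.interp(U, U_pts, ax_min_pts), np.interp(U, U_pts, ax_max_pts)

lo, hi = accel_bounds(12.5)   # midway between the 10 and 15 m/s samples
```

In the OCP these bounds enter (3) as path constraints evaluated at the collocation points, so a smooth (e.g., polynomial) fit of the data may be preferred over piecewise-linear interpolation for NLP differentiability.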

No explicit lateral speed or yaw rate constraints exist.

#### 2.2.4 Obstacle Avoidance.

Two possible approaches for incorporating the static and moving obstacle avoidance specifications into the OCP are soft constraints (or artificial potential fields) and time-varying hard constraints [23]. There are two limitations to the soft constraints approach: (i) a trajectory may be generated that is deemed feasible according to the formulation but actually goes through an obstacle and (ii) the NLP solve-times are known to be large compared with the time-varying hard constraints approach [16]. Therefore, time-varying hard constraints for the avoidance of static and moving obstacles are incorporated into the path constraints in (3).

Time-varying hard constraints enforce the vehicle’s trajectory to avoid intersecting with the obstacles’ trajectories, while accounting for the obstacles’ shapes and sizes. Because this OCP will be transcribed into an NLP, the obstacles’ shapes should be represented with twice continuously differentiable functions, e.g., a circle or an ellipse. As such, similar to planners tailored for spacecraft [15] and UGV [16] applications, this work establishes a moving obstacle avoidance specification using time-varying, elliptical hard constraints. Equation (7) defines these constraints as
$\left(\dfrac{x(t) - (x_{0obs}[i] + v_x[i]\,t)}{a_{obs}[i] + sm(t)}\right)^2 + \left(\dfrac{y(t) - (y_{0obs}[i] + v_y[i]\,t)}{b_{obs}[i] + sm(t)}\right)^2 > 1 \quad \text{for } i \in 1{:}Q$
(7)
where $sm(t) = sm_1 + (sm_2 - sm_1)\,t/t_f$ describes the time-varying safety margin, which forces the vehicle to operate further from the obstacles as $t$ increases. While this safety margin reduces the feasible space through which the vehicle can travel, it helps keep the vehicle further away from distant obstacles. $Q$ is the total number of obstacles. The notation $x_{0obs}[i]$ refers to the $i$th element of the $x_{0obs}$ vector.
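A point-wise check of (7) can be sketched as follows; the obstacle geometry, velocities, and safety-margin values are hypothetical, and in the actual OCP this inequality is imposed at the collocation points rather than checked after the fact:

```python
import numpy as np

def obstacle_clear(x, y, t, tf, x0obs, y0obs, vx, vy, aobs, bobs,
                   sm1=0.2, sm2=0.8):
    """Evaluate the time-varying elliptical constraints (7) for all obstacles;
    True when (x, y) at time t lies outside every inflated ellipse."""
    sm = sm1 + (sm2 - sm1) * t / tf      # time-varying safety margin sm(t)
    xc = x0obs + vx * t                  # obstacle centers propagated to time t
    yc = y0obs + vy * t
    lhs = ((x - xc) / (aobs + sm)) ** 2 + ((y - yc) / (bobs + sm)) ** 2
    return bool(np.all(lhs > 1.0))

# one 2 m x 1 m elliptical obstacle drifting in +x (hypothetical numbers)
args = dict(x0obs=np.array([10.0]), y0obs=np.array([0.0]),
            vx=np.array([1.0]), vy=np.array([0.0]),
            aobs=np.array([2.0]), bobs=np.array([1.0]), tf=10.0)
far = obstacle_clear(0.0, 0.0, t=0.0, **args)    # well outside the ellipse
hit = obstacle_clear(12.0, 0.0, t=2.0, **args)   # at the obstacle's center at t=2
```

Setting `vx` and `vy` to zero recovers the static version of the constraint used when the moving obstacle avoidance specification is disabled.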

#### 2.2.5 LiDAR Region Constraints.

To ensure that the vehicle’s trajectory does not go beyond the LiDAR region, an additional path constraint is incorporated into (3). This constraint is defined in (8) as
$\big(x(t) - x(t_0+t_{ex})\big)^2 + \big(y(t) - y(t_0+t_{ex})\big)^2 - (L_{range} + \kappa)^2 \leq 0$
(8)
where κ is the LiDAR relaxation range [12].

#### 2.2.6 Initial and Terminal State Constraints.

In a low-tolerance hard constraints approach, if the plant is driven into an infeasible state space, a feasible control signal cannot be computed [24]. To mitigate infeasible problems created using low-tolerance hard constraints on the initial and terminal conditions, slack variables are introduced into this formulation, where the size of the slack variable corresponds to the respective constraint violation [25]. These slack constraints are shown in (9)(14) as
$X_0^p - \xi(t_0+t_{ex}) \leq x_{0s}$
(9)
$X_0^p - \xi(t_0+t_{ex}) \geq -x_{0s}$
(10)
$x_g - x(t_f) \leq x_{fs}[1]$
(11)
$x_g - x(t_f) \geq -x_{fs}[1]$
(12)
$y_g - y(t_f) \leq x_{fs}[2]$
(13)
$y_g - y(t_f) \geq -x_{fs}[2]$
(14)
where $x_{0s}$ is the $n_{st}$-dimensional vector of slack variables for the initial conditions, and $x_{fs}$ is the two-dimensional vector of slack variables for the terminal conditions. Both the initial and terminal slack variable vectors are constrained to be greater than zero.
Adding slack variables to the cost functional reduces the size of the slack constraint violations. The weight for these slack variables is chosen to be large enough to keep the slack constraint close to zero. The ϱ term in (1) is now defined in (15) as
$\varrho = w_{s0}\,x_{0s} + w_{sf}\,x_{fs}$
(15)
where $w_{s0}$ is a $1 \times n_{st}$ dimensional vector of individual weight terms on the slack variables for the initial state constraints, and $w_{sf}$ is a $1 \times 2$ dimensional vector of individual weight terms on the slack variables for the final state constraints.
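A sketch of evaluating (15) follows; the weight magnitudes are illustrative assumptions, chosen only to show that the penalty scales linearly with the constraint violations absorbed by the slacks:

```python
import numpy as np

def slack_penalty(x0s, xfs, ws0, wsf):
    """The term rho in (15): weighted sum of the (non-negative) slack variables
    on the initial conditions (x0s) and terminal conditions (xfs)."""
    return float(ws0 @ x0s + wsf @ xfs)

# hypothetical weights; large values keep the slack constraints near zero
ws0 = np.full(8, 1.0e3)           # one weight per state (n_st = 8)
wsf = np.array([1.0e3, 1.0e3])    # weights on the final x and y slacks

# no initial-condition violation, 5 cm terminal violation in x
rho = slack_penalty(np.zeros(8), np.array([0.05, 0.0]), ws0, wsf)
```

Because the slacks are constrained non-negative and enter the cost linearly, the optimizer drives them to zero whenever the hard constraints can be satisfied, and pays a proportional price otherwise.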
When using only soft constraints, “optimal” trajectories are found that have initial and terminal states that are too far from their desired values. Adding high-tolerance hard constraints on the initial and terminal state conditions mitigates this issue. Thus, high-tolerance hard constraints are added to (4), where the entire initial state is constrained to match $X_0^p$ within a specified tolerance $X_0^{tol}$. Equation (16) establishes these constraints as
$X_0^p - X_0^{tol} \leq \xi(t_0+t_{ex}) \leq X_0^p + X_0^{tol}$
(16)
Additionally, the vehicle’s final x and y positions are constrained to be within the goal tolerance σ using (17) and (18) as
$x_g - \sigma \leq x(t_f) \leq x_g + \sigma$
(17)
$y_g - \sigma \leq y(t_f) \leq y_g + \sigma$
(18)

If the distance from the vehicle to the goal is greater than the vehicle’s planning range $L_{range}$, then the soft constraints (9)–(14) and hard constraints (17) and (18) on the final conditions are relaxed. Setting the elements in $w_{sf}$ to zero relaxes the terminal soft constraints, and setting $\sigma$ to $10^6$ m relaxes the terminal hard constraints.

The remaining parameter modifications and additional constraints needed to relax the assumption that the goal is within the vehicle’s planning range [16] are now presented.

#### 2.2.7 LiDAR Range Constraints.

If the goal is not within Lrange of the vehicle, then the vehicle is constrained to arrive at the edge of the LiDAR region within a distance of κ at tf. This is accomplished using (19) and (20) as
$\big(x(t_f) - x(t_0+t_{ex})\big)^2 + \big(y(t_f) - y(t_0+t_{ex})\big)^2 - p_1 \leq 0$
(19)
$-\big(x(t_f) - x(t_0+t_{ex})\big)^2 - \big(y(t_f) - y(t_0+t_{ex})\big)^2 + p_2 \leq 0$
(20)
where $p_1$ and $p_2$ are set to $(L_{range} + \kappa)^2$ and $(L_{range} - \kappa)^2$, respectively.

To avoid creating an infeasible problem while continuing to drive the UGV towards the goal, the goal constraints described in the previous section (i.e., (9)–(18)) are relaxed and a new soft constraint is used. This soft constraint minimizes the squared distance from the vehicle to the goal at $t_f$, normalized by the squared distance from the vehicle to the goal at $t_0 + t_{ex}$ [12]. Setting the goal weight $w_g$ in (5) to a non-zero value enforces this constraint.

In the case that the goal is within a distance of $L_{range}$ to the vehicle, $w_g$ is set to zero and the vehicle is constrained to reach the goal using (9)–(18). This is done by setting $\sigma$ to a much smaller goal tolerance, which enforces the hard constraints on reaching the goal, and setting the elements in the weight vector $w_{sf}$ to large positive weights, which establishes the soft constraints on reaching the goal through slack variables. Then, to avoid creating an infeasible problem, the hard constraints for reaching the edge of the LiDAR region at $t_f$ are relaxed. To do this, $p_1$ and $p_2$ are set to $10^{-6}$ and $-10^{-6}$, respectively.
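The parameter switching described in this section can be sketched as follows; the function name and the specific values chosen for $\kappa$ and the tight goal tolerance are assumptions for illustration:

```python
import math

def goal_mode(x, y, xg, yg, L_range, kappa=5.0, sigma_goal=1.0):
    """Pick the relaxation parameters based on whether the goal is within the
    planning range; kappa and sigma_goal are illustrative values."""
    in_range = math.hypot(xg - x, yg - y) <= L_range
    if in_range:
        # enforce the goal constraints; relax the LiDAR-edge constraints
        return dict(wg=0.0, sigma=sigma_goal, p1=1e-6, p2=-1e-6)
    # relax the goal constraints (sigma = 1e6 m, wg nonzero);
    # enforce arrival at the edge of the LiDAR region at t_f
    return dict(wg=1.0, sigma=1e6,
                p1=(L_range + kappa) ** 2, p2=(L_range - kappa) ** 2)

near = goal_mode(0.0, 0.0, 30.0, 40.0, L_range=100.0)    # goal 50 m away
far  = goal_mode(0.0, 0.0, 300.0, 400.0, L_range=100.0)  # goal 500 m away
```

Only one of the two constraint families is active at a time, which is what keeps the OCP feasible in both regimes.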

The above specifies the details of the NMPC-based trajectory planning formulation with specifications S1–S6. The evaluation of this formulation as a function of its specifications follows.

## 3 Description of Evaluation

The next section presents comparisons among four planners within three different test environments and evaluates the proposed planner’s ability to improve both safety and performance without increasing solve-times. This section describes these planners and their test environments. Afterwards, it describes the computer hardware platform used to produce the results presented in this paper and the software configuration under which NLOptControl is utilized.

### 3.1 Planners.

Comparisons are made among four planners denoted as PA–PD. The specifications of these planners are listed in Table 2, where PA is used as the baseline planner. Note that PA already includes the specifications of a dynamic vehicle model and simultaneous optimization of speed and steering, since previous work already illustrated the need to include them; see Refs. [1,8,9] for the former and Ref. [12] for the latter.

Table 2

Planners compared

| Features | PA | PB | PC | PD |
| --- | --- | --- | --- | --- |
| Static obstacle avoidance | ✓ | ✓ | ✓ | ✓ |
| Minimum time-to-goal | | ✓ | ✓ | ✓ |
| Dynamic vehicle model | ✓ | ✓ | ✓ | ✓ |
| Minimum control effort | | | ✓ | ✓ |
| Simultaneously optimize speed and steering | ✓ | ✓ | ✓ | ✓ |
| Moving obstacle avoidance | | | | ✓ |
| Small execution horizon | ✓ | ✓ | ✓ | ✓ |
The set of parameters in the left-hand column of Table 4, which is in the Appendix, defines the planners. The right-hand column of Table 4 defines the values of PA’s parameters. All of the weight terms used in this work are obtained either from previous research [12] or manual tuning. In addition, setting the moving obstacle avoidance specification to false means that (7) is modified to (21) as
$\left(\dfrac{x(t) - x_{0obs}^{\star}[i]}{a_{obs}[i] + sm(t)}\right)^2 + \left(\dfrac{y(t) - y_{0obs}^{\star}[i]}{b_{obs}[i] + sm(t)}\right)^2 > 1 \quad \text{for } i \in 1{:}Q$
(21)
where $x_{0obs}^{\star}$ and $y_{0obs}^{\star}$ are arrays that describe the initial $x$ and $y$ positions of the obstacles, respectively. These arrays are updated to reflect the obstacles’ current positions after each execution horizon.

Next, the values of PA’s parameters are modified to define PB–PD. The difference between PA and PB is that the weight on the final time $w_t$ is set to 100 for PB. To allow PA to reach the goal or the edge of the LiDAR region at will, the final time is left as a design variable, but the weight on it (i.e., $w_t$) is set to zero. Next, the difference between PB and PC is that the control effort weight $w_{cf}$ is set to 1 for the latter. Finally, the difference between PC and PD is that PD has a moving obstacle avoidance specification, while PC does not. PD establishes this specification with (7). PC assumes that the obstacles will be static over each prediction horizon, while PD incorporates the movement of the obstacles into the position constraints over the prediction horizon.

Closed-loop comparisons are made among four different vehicles denoted as VA–VD. PA, PB, PC, and PD control VA, VB, VC, and VD, respectively. Unless otherwise noted, the execution horizon is set to a value of 0.5 s for all comparisons.

### 3.2 Environment Categories.

To evaluate the proposed planner, this work uses four distinct environmental categories: unknown versus known, unstructured versus structured, dynamic versus static, and challenging versus simple. A description of each follows.

#### 3.2.1 Unknown Versus Known.

In an unknown environment [12], sensors collect data from the environment for algorithms that estimate factors including the obstacles’ sizes, positions, and velocities. In addition to assuming that the obstacle information is known, this paper assumes that the environment is known.

#### 3.2.2 Unstructured Versus Structured.

In an unstructured environment [12,26–28], there are no roads to follow or traffic rules to obey. In a structured environment [29,30], some combination of these factors needs to be considered. In this paper, the distinction between these environment categories is that in an unstructured environment, there are no lanes to follow, while in a structured environment there is a lane to follow.

#### 3.2.3 Dynamic Versus Static.

In a dynamic environment [31], at least one obstacle is moving. In a static environment, all of the obstacles are stationary. This paper uses both dynamic and static environments.

#### 3.2.4 Challenging Versus Simple.

In planning problems, the number of obstacles directly affects the computational load [31]. As such, the environment becomes more challenging as the number of obstacles increases. This paper uses an environment with 38 obstacles as a challenging example and an environment with three obstacles as a simple one.

### 3.3 Environments.

UGV safety, performance, and solve-times are evaluated in three different environments denoted as EA–EC. Each of these environments consists of some combination of the above environment categories, as now described in detail.

#### 3.3.1 EA: Simple, Static, Unstructured Environment.

The increases in both performance and solve-times, consequent to including the minimum time-to-goal and minimum control effort specifications, are evaluated in a simple, static, unstructured environment denoted as EA. EA has three static obstacles denoted as 01, 02, and 03, and Table 5, which is in the Appendix, lists EA’s parameters. The right trace of Fig. 4 shows the obstacle field and goal location of EA.

Fig. 4

#### 3.3.2 EB: Simple, Dynamic, Unstructured Environment.

The increases in both safety and solve-times, consequent to including a moving obstacle avoidance specification, are evaluated in a simple, dynamic, unstructured environment denoted as EB. EB has three dynamic obstacles, also denoted as O1, O2, and O3. EB is the same environment as EA, except that the obstacles are given non-zero velocities to test the planner’s ability to avoid collisions with moving obstacles. The respective velocities of O1, O2, and O3 are as follows:
$v_x = [-2, -1, -0.5]\,\mathrm{m/s} \quad \text{and} \quad v_y = [0, 1, 6]\,\mathrm{m/s}$

Figure 5 shows the movement of these obstacles, which can be seen in the right trace by following the obstacles’ position at the indicated times.
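The planner anticipates these obstacles by propagating them forward under a constant-velocity assumption over the prediction horizon. A minimal sketch of that propagation, together with an elliptical keep-out test of the kind used for obstacle avoidance (the function names and the `margin` parameter are illustrative, not from the paper’s implementation):

```python
def obstacle_position(x0, y0, vx, vy, t):
    """Constant-velocity prediction of an obstacle's center at time t."""
    return x0 + vx * t, y0 + vy * t

def outside_ellipse(px, py, ox, oy, a, b, margin=0.0):
    """True if the point (px, py) lies on or outside the elliptical obstacle
    centered at (ox, oy) with semi-axes a and b, grown by a safety margin."""
    return ((px - ox) / (a + margin)) ** 2 + ((py - oy) / (b + margin)) ** 2 >= 1.0

# O3 in EB: starts at (200, 63) m with (vx, vy) = (-0.5, 6) m/s
ox, oy = obstacle_position(200.0, 63.0, -0.5, 6.0, t=2.0)
print((ox, oy))                                                        # (199.0, 75.0)
print(outside_ellipse(200.0, 70.0, ox, oy, a=2.0, b=2.0, margin=2.5))  # True
```

Growing the ellipse by a safety margin is one common way to keep the vehicle a safe distance from the obstacle boundary; imposing such a test at every collocation point over the prediction horizon is the essence of a moving obstacle avoidance constraint.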

Fig. 5

#### 3.3.3 EC: Challenging, Dynamic, Structured Environment.

Increases in safety and solve-times from including a moving obstacle avoidance specification are further evaluated within a challenging, dynamic, structured environment denoted as EC. EC is a double lane change scenario, which was originally developed to test an HMMWV within a static environment [32]. Figure 6 shows EC, which has two large obstacles that need to be avoided, labeled O1 and O2. In this test, the vehicle starts at the bottom of Fig. 6 traveling in the left lane at a speed of 17 m/s. From this point, it is restricted to performing a double lane change maneuver. To constrain the vehicle to this maneuver, first, minimum and maximum constraints on the vehicle’s x position are imposed. This restricted region is shaded, where the lower and upper limits on the vehicle’s x position are xmin = 0 m and xmax = 24 m, respectively. Next, to ensure that the vehicle stays in the left lane during the first part of the maneuver, a series of 36 cones is placed at the edge of the lane boundary from the start of the track until after y = 175 m. If these cones are not present, or there are not enough cones, then the vehicle changes lanes earlier to minimize the sixth term in (5). The first large obstacle, O1, is static and is located in the left lane. The second large obstacle, O2, starts at the back of the track in the right lane near the goal and moves toward the front of the track. Table 6 lists the parameters for EC.

Fig. 6

To improve safety and performance within EC, several planner parameters, which are listed in Table 4, are modified. Specifically, for all of the EC simulations shown in this paper, the Lrange, N, and κ planner parameters in Table 4 are modified. Lrange is modified because, with an Lrange of 50 m, the vehicle crashes into O2 over a large range of O2 speeds. To avoid limiting the UGV by its sensing range rather than its dynamic limits, Lrange is increased to 90 m for EC. To accommodate this extended planning range, the number of points in the discretization N is increased from 10 to 15 and the LiDAR relaxation range κ is increased from 5 m to 10 m.
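The need for a longer Lrange against a fast oncoming obstacle can be sanity-checked with a rough worst-case closing-speed calculation. The following back-of-the-envelope sketch uses an illustrative reaction time and braking deceleration (both assumed values, not parameters from the paper):

```python
def min_sensing_range(v_vehicle, v_obstacle, t_react, a_brake):
    """Rough lower bound on the sensing range needed to stop before a
    head-on obstacle: distance covered while reacting plus braking
    distance, with the obstacle closing throughout both phases."""
    v_close = v_vehicle + v_obstacle       # head-on closing speed
    t_brake = v_vehicle / a_brake          # time to brake to a stop
    d_react = v_close * t_react
    d_brake = v_vehicle ** 2 / (2 * a_brake) + v_obstacle * t_brake
    return d_react + d_brake

# Vehicle at 26.5 m/s meeting O2 head-on at 10 m/s, as in EC
print(min_sensing_range(26.5, 10.0, t_react=0.5, a_brake=8.0))  # ~95.3 m
```

With these assumed values the bound is roughly 95 m, illustrating why a 50 m planning range can be insufficient against a fast oncoming obstacle.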

### 3.4 Hardware Platform and Software Stack.

The results in this paper are produced using a single machine running Ubuntu 16.04 with an Intel Core i7-4910MQ CPU @ 2.90 GHz × 8 and 31.3 GB of RAM. The OCPs are solved using NLOptControl, an open-source, direct-collocation-based OCP solver. In particular, NLOptControl 0.1.6 [20] is used with the KNITRO 10.3 NLP solver, where the default KNITRO settings are used, except for the maximum CPU-time (i.e., solve-time), which is set to 300 s. The trapezoidal method [33,34] is used to approximate both the cost functional (1) and the dynamics (2). To more closely simulate practice, where time can typically be allocated to initialize trajectory planners, the optimizations are warm-started.
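In trapezoidal collocation, the dynamics (2) are enforced through defect constraints that the NLP solver drives to zero. A schematic single-state sketch of those defects (NLOptControl applies the same rule to the full vehicle state vector; this simplified form is for illustration only):

```python
def trapezoidal_defects(x, u, h, f):
    """Defect constraints for trapezoidal collocation on a uniform grid:
    zeta_k = x[k+1] - x[k] - (h/2) * (f(x[k], u[k]) + f(x[k+1], u[k+1])),
    which the NLP solver must drive to zero at a feasible solution."""
    return [x[k + 1] - x[k] - 0.5 * h * (f(x[k], u[k]) + f(x[k + 1], u[k + 1]))
            for k in range(len(x) - 1)]

# For the linear dynamics xdot = u with constant u, the trapezoidal rule
# is exact, so the defects vanish on the true trajectory x(t) = t.
f = lambda x, u: u
print(trapezoidal_defects([0.0, 0.5, 1.0], [1.0, 1.0, 1.0], h=0.5, f=f))  # [0.0, 0.0]
```

The cost functional (1) is approximated with the same trapezoidal quadrature applied to the running cost.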

## 4 Results

### 4.1 Performance and Solve-Times Within EA.

Planning with a minimum time-to-goal specification can reduce the time-to-goal without increasing the solve-times in a simple, static, unstructured environment. In particular, using either PB or PC in lieu of PA within EA reduces the time-to-goal from 9.0 s to 7.0 s (see Fig. 4). This is because, until about t = 5.5 s, both VB and VC accelerate while VA decelerates, which results in higher speeds for VB and VC. Next, while both PB and PC run in real-time in EA, PA does not. This can be seen in the top left trace of Fig. 4, where the solve-times for both PB and PC are all less than tex, but several of the solve-times obtained using PA exceed tex. This creates a safety issue because, in practice, if the solve-time is greater than tex, then the vehicle will not have an up-to-date trajectory to follow.

In EA, compared to PB, PC reduces the control effort without increasing either the time-to-goal or the solve-times. More specifically, even though VB and VC both arrive at the goal in 7.0 s, VC uses less control effort in all three control effort terms. The third term in (5) penalizes the control effort in the steering angle, steering rate, and longitudinal jerk. The overall values of each of these control effort terms, along with their percentage decreases, are given in Table 7, which is in the Appendix. Next, in the top left trace in Fig. 4, it can be seen that the solve-times for PB and PC are below the real-time threshold of 0.5 s.

### 4.2 Safety and Solve-Times Within EB.

Planning with a moving obstacle avoidance specification can increase safety without increasing the solve-times in a simple, dynamic, unstructured environment. This is shown in the comparison between PC and PD within EB (see Fig. 5). At the start of this test, both vehicles accelerate and then turn in opposite directions: VC to the left and VD to the right. PC tries to avoid O1 to the left, which results in a crash at t = 3.5 s. On the other hand, by taking the obstacles’ motion over the prediction horizon into account, PD turns VD to the right. This allows VD to arrive safely at the goal at t = 6.5 s. Lastly, as seen in the top left trace in Fig. 5, the solve-times for both PC and PD are below the real-time threshold of 0.5 s.

### 4.3 Safety and Solve-Times Within EC.

Similarly, planning with a moving obstacle avoidance specification can increase safety without significantly increasing the solve-times in a challenging, dynamic, structured environment. This is demonstrated by testing PC and PD within EC (see Figs. 6–8).

Fig. 7

Fig. 8

At the start of the test, the first lane change maneuver is performed successfully by both VC and VD. During this time, both vehicles accelerate aggressively to increase their speed from 17 m/s at t = 0 s to 26.5 m/s at t = 19.5 s. At this time, VC crashes into O2 (see Fig. 7 for a zoomed-in view of the crash), while VD avoids O2 and eventually attains the goal. Additionally, PD is able to avoid this collision with a solve-time that is only slightly higher than PC’s solve-time just before PC causes VC to crash. The next section discusses the larger solve-times encountered at 19.0 s. Finally, the solve-times for PD are less than the real-time threshold of 0.5 s, despite the fact that this is a challenging environment with 38 obstacles instead of three.

### 4.4 Execution Horizon and Obstacle Speed Analysis Within EC.

Including a moving obstacle avoidance specification increases safety over a range of execution horizons and obstacle speeds. To show this, VC and VD are tested within EC for a range of execution horizons (tex = [0.01, 0.0621, …, 1] s) and O2 velocities (vy = [0, −2.11, …, −20] m/s). The data from this parameter sweep are shown in Fig. 9, where a plotted point indicates a successful simulation. For instance, when the execution horizon is 0.01 s and the velocity of O2 is −2.11 m/s, both VC and VD attain the goal.

Fig. 9

The data follow the expected trend; i.e., VD is safer than VC, and the results are statistically significant (p < 2.2 × 10⁻¹⁶), as shown by Fisher’s exact test; see Table 8 in the Appendix. VD accounts for the majority (87.1%) of the successful trials, and VC accounts for the majority (60.0%) of the failed trials.
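Fisher’s exact test in Table 8 can be reproduced from the pass/fail counts alone. A sketch of the two-sided test, computed in log space to keep the large binomial coefficients tractable (the helper names are illustrative):

```python
from math import lgamma, exp

def log_comb(n, k):
    # log of the binomial coefficient C(n, k)
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of every table with the same
    margins that is no more likely than the observed one."""
    row1, col1, n = a + b, a + c, a + b + c + d
    def log_p(k):  # log P(first cell = k) under the null hypothesis
        return (log_comb(col1, k) + log_comb(n - col1, row1 - k)
                - log_comb(n, row1))
    log_p_obs = log_p(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(exp(log_p(k)) for k in range(lo, hi + 1)
               if log_p(k) <= log_p_obs + 1e-9)

# Table 8 counts -- rows PC/PD, columns pass/fail
p = fisher_exact_two_sided(22, 378, 148, 252)
print(p < 2.2e-16)  # True
```

The resulting p-value is far below the 2.2 × 10⁻¹⁶ reporting floor, consistent with the strong association between the planner formulation and goal attainment.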

While making the execution horizon small creates a more reactive planner, which can more reliably avoid collisions with fast moving obstacles, it makes it more difficult to obtain the planning solutions in real-time. Figure 10 depicts this issue, where the real-time factor (RTF) and probability-of-safety (POS) are defined as follows.

Fig. 10
Definition 3

RTF: RTF = solve-times_max/tex. To calculate solve-times_max, the maximum value in the vector of solve-times for each test case (i.e., each obstacle speed and execution horizon pair) is taken and then averaged across obstacle speeds.

Definition 4

POS: The probability that the vehicle will attain the goal, which is calculated over the range of obstacle velocities for each execution horizon.

For both PC and PD, the RTF is very high at small execution horizons and drops for larger execution horizons, as shown in Fig. 10. For PC, the POS is very low across the entire range of execution horizons. In contrast, PD’s POS is higher for smaller execution horizons and lower for larger execution horizons. Additionally, when using PD there are two cases where the RTF is less than 1, namely when the execution horizon is either 0.687 or 0.790 s. In these cases the POS is 0.45 and 0.25, respectively.
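Definitions 3 and 4 translate directly into code. A sketch using made-up solve-time and outcome data (the numbers below are illustrative only, not the paper’s data):

```python
def real_time_factor(solve_times_by_speed, t_ex):
    """RTF (Definition 3): the maximum solve-time of each obstacle-speed
    case, averaged across speeds and normalized by the execution horizon."""
    maxima = [max(times) for times in solve_times_by_speed]
    return sum(maxima) / len(maxima) / t_ex

def probability_of_safety(outcomes):
    """POS (Definition 4): the fraction of obstacle-speed cases in which
    the vehicle attained the goal at this execution horizon."""
    return sum(outcomes) / len(outcomes)

# One execution horizon, three obstacle speeds (illustrative numbers)
solve_times = [[0.20, 0.35, 0.41], [0.18, 0.52, 0.30], [0.25, 0.47, 0.33]]
reached_goal = [True, True, False]
print(real_time_factor(solve_times, t_ex=0.5))  # (0.41 + 0.52 + 0.47) / 3 / 0.5
print(probability_of_safety(reached_goal))      # 2/3
```

An RTF below 1 means even the worst-case solve-time fits within the execution horizon on average, which is the real-time condition used throughout the paper.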

## 5 Discussion

This work was motivated by the assumption that including the set of specifications S1–S7 into a planner will improve both performance and safety, compared with less comprehensive sets. The results presented support this assumption. In particular, the results show that including (i) minimum time-to-goal, (ii) minimum control effort, and (iii) moving obstacle avoidance specifications improves the closed-loop performance and safety for a UGV application.

Contrary to our expectations, adding several key planner specifications does not lead to larger solve-times. Specifically, the results show that adding (i) minimum time-to-goal, (ii) minimum control effort, and (iii) moving obstacle avoidance specifications does not lead to an increase in NLP solve-times. In fact, adding a minimum time-to-goal specification actually reduces the solve-times within the simple, static, unstructured environment (see the top left trace in Fig. 4). The minimum time-to-goal specification helps balance the sixth term in (5), which minimizes the area between the vehicle’s position trajectory and a line that runs through the goal in the y-direction. To see this balancing effect, compare the position trajectories of VA to those of VB and VC in Fig. 4. The baseline planner, i.e., PA, more effectively minimizes the area mentioned above for VA than either PB or PC does for VB and VC, respectively. VB and VC have a larger area because both PB and PC have, in addition to the sixth term in (5), a minimum time-to-goal specification. To reduce this area more effectively, VA aggressively decelerates over the entire test and operates at lower speeds; these lower speeds allow VA to return to the line that runs through the goal in the y-direction sooner than either VB or VC. These differences between the vehicles’ trajectories may have led to the differences in the planners’ solve-times, where PA has longer solve-times than either PB or PC. Notice that at around 6 s, VA’s steering angle δf(t) and longitudinal deceleration ax(t) are large, and PA’s solve-time increases sharply. On the basis of such observations, it can be speculated that planning aggressive deceleration and steering trajectories at low speeds may be more computationally expensive than planning less aggressive deceleration and steering trajectories at high speeds.

The results show that the developed formulation with all the desired specifications included can be solved in real-time. In particular, using a direct-collocation method implemented in NLOptControl [20] in conjunction with the KNITRO NLP solver makes real-time solutions feasible. Similar solve-times are obtained with the open-source IPOPT NLP solver but are not reported here for brevity.

As the number of obstacles increases, the environment becomes more challenging, because the number of obstacles directly affects the computational load [31]. In the formulation developed in this work, the NLP dimensions grow linearly with the number of obstacles. Thus, increasing the number of obstacles from 3 (in EA and EB) to 38 (in EC) enlarges the size of the NLP and the computational load. It is reasonable to assume that this increase is a major factor in the corresponding increase in solve-times. To see this increase in solve-times, compare Figs. 4 and 6–8. Increasing the number of obstacles may result in a loss of real-time solutions. However, several approaches may be taken to use the formulation presented in this paper in an environment with many obstacles. These approaches include developing a strategy that considers a smaller number of obstacles at a time, grouping several small, closely packed obstacles into a single obstacle, increasing computational power, or some combination of these.

For a given UGV, as obstacle speed increases, the environment becomes more challenging, because the vehicle is put in an increasingly difficult situation. The data plotted in Fig. 9 support this claim; even with a moving obstacle avoidance specification, it is not possible to reliably avoid the oncoming obstacle O2 in EC when it is moving faster than 21.1 m/s.

A moving obstacle avoidance specification may be unnecessary if the planner is updating quickly and the obstacles are moving slowly. The data plotted in Fig. 9 also support this claim. The data reveal that if the obstacle is moving directly toward the vehicle at a speed less than 2.11 m/s, and if the execution horizon is less than 0.375 s, then a planner without a moving obstacle avoidance specification will safely attain the goal. Removing the moving obstacle avoidance specification will also eliminate the need for an algorithm to predict the speed of the obstacles. This simplification may be appropriate for some industrial applications, where the obstacles are known to move slowly.

In addition to a moving obstacle avoidance specification, as obstacle speed increases, a small execution horizon becomes increasingly crucial for safety. The data plotted in Fig. 9 support this claim as well. Over a range of obstacle speeds, planning with a smaller execution horizon makes it more likely that the vehicle attains the goal. It is therefore desirable to make the execution horizon as small as possible to create a more reactive and safer planner. Having a small execution horizon, however, makes it more difficult for the planner to obtain solutions in real-time.

To ensure that the planning solutions are obtained in real-time while maintaining safety, it may be necessary to operate the UGV within environments where the obstacles travel at low to moderate speeds. Particular sets of data plotted in Fig. 9 support this claim as well. Specifically, when disregarding the cases where the obstacle is traveling faster than 16.8 m/s, the RTF decreases to 0.933 and the POS increases to 0.889. Similarly, when disregarding the data where the obstacle is traveling faster than 4.21 m/s, the RTF is further reduced to 0.920 and the POS increases to 1.

In addition to reducing the execution horizon, planning in a dangerous situation can increase the RTF. This results from the fact that planning in a dangerous situation can lead to less feasible or even infeasible NLP constraints, which make it more challenging or even impossible for the NLP solver to obtain a solution. The top trace of Fig. 8 supports this claim; it shows that the solve-times increase sharply just before VD avoids a collision with O2. It is important to consider these situations in terms of solve-time. If the planner cannot obtain a trajectory within the real-time limit, then the vehicle will not have an updated trajectory to follow, and the situation may go from dangerous to disastrous.

## 6 Conclusion

This paper develops and evaluates an NMPC-based trajectory planner for a UGV that incorporates all planner specifications S1–S7 listed in Table 1. The evaluation includes comparisons against three other planner formulations that contain only a subset of the specifications S1–S7 as state-of-the-art approaches.

Simulation results show that the developed formulation yields higher UGV safety and performance compared to planners with less comprehensive sets of specifications than S1–S7. Despite being more comprehensive, the formulation can be solved in real-time, in the sense that the solve-times are all less than the chosen execution horizon of 0.5 s.

Comparisons also show that if the planner is updating quickly, then a slowly moving obstacle can be safely avoided without a moving obstacle avoidance specification. However, to avoid faster obstacles, both the moving obstacle avoidance and small execution horizon specifications are necessary. A small execution horizon improves safety but makes it more difficult to obtain trajectories in real-time. Planning in an environment with more obstacles increases OCP solve-times.

The finding that adding the minimum-time-to-goal, minimum control effort, and moving obstacle avoidance specifications to the formulation does not lead to larger solve-times is contrary to our expectations. In fact, it is observed that adding a minimum-time-to-goal specification reduces planning solve-times in the simple, static, unstructured environment.

While this work tailors the NMPC-based trajectory planner for a UGV application, a variety of automated vehicle systems, e.g., UAVs and spacecraft, can also make use of the approach detailed here.

Future efforts may focus on reducing the solve-times below 0.5 s to make the planner even more reactive. To this end, parallel computing solutions using GPU hardware may be pursued. It is also of interest to evaluate the formulation experimentally on a large, high-speed vehicle.

## Footnote

2. To conduct a parametric study relating execution horizon and safety, for which solve-times are not important, solve-times are allowed to exceed the baseline 0.5 s execution horizon.

## Acknowledgment

The authors wish to acknowledge the financial support of the Automotive Research Center (ARC) in accordance with Cooperative Agreement W56HZV-14-2-0001 U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) Warren, MI. DISTRIBUTION A. Approved for public release; distribution unlimited, #28420.

## Conflict of Interest

There are no conflicts of interest.

## Data Availability Statement

The datasets generated and supporting the findings of this article are available from the corresponding author upon reasonable request.

### Appendix: Supplementary Data

Table 3

Vehicle parameters

| Variable | Value | Units |
| --- | --- | --- |
| Mt | 2689 | kg |
| Izz | 4110 | kg m² |
| Lf, Lr | 1.58, 1.72 | m |
| $K_{zx}$, $K_{zyr}$, $K_{zyf}$ | 806, 1076, 675 | N/(m/s²) |
| $F_{z,min}$ | 1000 | N |
| a, b | 1300, 100 | |
| ψmin, ψmax | [−2π, 2π] | deg |
| δf,min, δf,max | [−30, 30] | deg |
| γf,min, γf,max | [−5, 5] | deg/s |
| Jx,min, Jx,max | [−5, 5] | m/s³ |
| Umin, Umax | [0.01, 29] | m/s |
Table 4

Simulation parameters for PA

| Variable or condition | Value and units |
| --- | --- |
| tex, N, Lrange, κ | 0.5 s, 10, 50.0 m, 5.0 m |
| sm1, sm2 | 2.5 m, 4 m |
| X0 | [200 m, 0 m, 0, 0, 1.57 rad, 0, 17 m/s, 0] |
| X0tol | [0.5 m, 0.5 m, 0.5, 0.005, 0.5, 0.25, 0.5 m/s, 0.5] |
| XFtol | [5.0 m, 5.0 m, NaN, NaN, NaN, NaN, NaN, NaN] |
| wic, wx0, wy0, wv0, wr0, wψ0, wsa0, wux0, wax0, wxf | 100, 1, 1, 10, 10, 10, 2, 0.1, 0.1, 100 |
| wg, wt, whaf, $w_{Fz}$, wce, wsa, wsr, wax, wjx | 10, 0, 1, 0.5, 0, 0.1, 1, 0.1, 0.01 |
| Moving obstacle avoidance constraint in (7) | False |
Table 5

Environment parameters for EA

| Variable | Description | Value | Unit |
| --- | --- | --- | --- |
| aobs | Array of the obstacles’ semi-major axes | [5, 4, 2] | m |
| bobs | Array of the obstacles’ semi-minor axes | [5, 4, 2] | m |
| x0obs | Array of the obstacles’ initial x positions | [205, 180, 200] | m |
| y0obs | Array of the obstacles’ initial y positions | [57, 75, 63] | m |
| vx | Array of obstacle speeds in x direction | [0, 0, 0] | m/s |
| vy | Array of obstacle speeds in y direction | [0, 0, 0] | m/s |
| xg | x position of goal location | 200 | m |
| yg | y position of goal location | 125 | m |
| σ | Tolerance on goal location | 15 | m |
Table 6

Environment parameters for EC (see Table 5 for variable descriptions)

| Variable | Value | Unit |
| --- | --- | --- |
| aobs | [6, 6, 0.387, 0.387, …, 0.387] (entries 3–38 are all 0.387) | m |
| bobs | Same as aobs | m |
| x0obs | [6, 18, 12, 12, …, 12] (entries 3–38 are all 12) | m |
| y0obs | [281, 650, 0, 5, 10, …, 175] (entries 3–38 run from 0 to 175 in steps of 5) | m |
| vx | [0, 0, …, 0] (all 38 entries are 0) | m/s |
| vy | [0, −10, 0, 0, …, 0] (entries 3–38 are all 0) | m/s |
| xg | 18 | m |
| yg | 700 | m |
| σ | 25 | m |
Table 7

Control effort

| Effort term | PB | PC | Decrease |
| --- | --- | --- | --- |
| Steering angle | 0.000586 | 0.000422 | 28.0% |
| Steering rate | 0.00129 | 0.000922 | 28.5% |
| Longitudinal jerk | 0.530 | 0.420 | 20.8% |
| Total | 0.532 | 0.421 | 20.9% |
Table 8

Fisher’s exact test for attaining the goal for PC and PD in EC (p < 2.2 × 10⁻¹⁶)

| | Fail | Pass | Total |
| --- | --- | --- | --- |
| PC | 378 (60.0%) | 22 (12.9%) | 400 |
| PD | 252 (40.0%) | 148 (87.1%) | 400 |
| Total | 630 (100%) | 170 (100%) | 800 |

## References

1. Frasch, J. V., Gray, A., Zanon, M., Ferreau, H. J., Sager, S., Borrelli, F., and Diehl, M., 2013, “An Auto-Generated Nonlinear MPC Algorithm for Real-Time Obstacle Avoidance of Ground Vehicles,” European Control Conference, Zurich, Switzerland, July 17–19, IEEE, pp. 4136–4141.
2. Frazzoli, E., Dahleh, M. A., and Feron, E., 2002, “Real-Time Motion Planning for Agile Autonomous Vehicles,” J. Guidance Control Dyn., 25(1), pp. 116–129.
3. Lo, M. C., Giffin, R. P., Pakulski, K. A., Davis, W. S., Bernstein, S. A., and Wise, D. V., 2017, “High-Mobility Multipurpose Wheeled Vehicle Rollover Accidents and Injuries to US Army Soldiers by Reported Occupant Restraint Use, 1992–2013,” Military Med., 182(5–6), pp. e1782–e1791.
4. Greatwood, C. M., and Richards, A. G., 2013, “Implementation of Fast MPC With a Quadrotor for Obstacle Avoidance,” AIAA Guidance, Navigation, and Control Conference, Boston, MA, Aug. 19–22, p. 4790.
5. Yoon, Y., Shin, J., Kim, H. J., Park, Y., and Sastry, S., 2009, “Model-Predictive Active Steering and Obstacle Avoidance for Autonomous Ground Vehicles,” Control Eng. Pract., 17(7), pp. 741–750.
6. Kritayakirana, K., and Gerdes, J. C., 2012, “Using the Centre of Percussion to Design a Steering Controller for an Autonomous Race Car,” Veh. Syst. Dyn., 50(Suppl. 1), pp. 33–51.
7. Velenis, E., Tsiotras, P., and Lu, J., 2007, “Aggressive Maneuvers on Loose Surfaces: Data Analysis and Input Parametrization,” Mediterranean Conference on Control and Automation, Athens, Greece, July 27–29, IEEE, pp. 1–6.
8. Liu, J., Jayakumar, P., Stein, J. L., and Ersal, T., 2016, “A Study on Model Fidelity for Model Predictive Control-Based Obstacle Avoidance in High-Speed Autonomous Ground Vehicles,” Veh. Syst. Dyn., 54(11), pp. 1629–1650.
9. Liu, S., Atanasov, N., Mohta, K., and Kumar, V., 2017, “…,” International Conference on Intelligent Robots and Systems, Sept. 24–28, IEEE, pp. 2872–2879.
10. Mohta, K., Sun, K., Liu, S., Watterson, M., Pfrommer, B., Svacha, J., Mulgaonkar, Y., Taylor, C. J., and Kumar, V., 2018, “Experiments in Fast, Autonomous, GPS-Denied Quadrotor Flight,” International Conference on Robotics and Automation, Brisbane, Australia, May 21–26.
11. Liu, S., Mohta, K., Atanasov, N., and Kumar, V., 2018, “Search-Based Motion Planning for Aggressive Flight in SE(3),” IEEE Rob. Autom. Lett., 3(3), pp. 2439–2446.
12. Liu, J., Jayakumar, P., Stein, J. L., and Ersal, T., 2017, “Combined Speed and Steering Control in High-Speed Autonomous Ground Vehicles for Obstacle Avoidance Using Model Predictive Control,” IEEE Trans. Veh. Technol., 66(10), pp. 8746–8763.
13. Nishio, Y., Nonaka, K., and Sekiguchi, K., 2017, “Moving Obstacle Avoidance Control by Fuzzy Potential Method and Model Predictive Control,” Asian Control Conference, Gold Coast, Australia, Dec. 17–20, IEEE, pp. 1298–1303.
14. Yao, P., Wang, H., and Su, Z., 2015, “Real-Time Path Planning of Unmanned Aerial Vehicle for Target Tracking and Obstacle Avoidance in Complex Dynamic Environment,” Aerosp. Sci. Technol., 47, pp. 269–279.
15. Jewison, C., Erwin, R. S., and Saenz-Otero, A., 2015, “Model Predictive Control With Ellipsoid Obstacle Constraints for Spacecraft Rendezvous,” IFAC-PapersOnLine, 48(9), pp. 257–262.
16. Febbo, H., Liu, J., Jayakumar, P., Stein, J. L., and Ersal, T., 2017, “Moving Obstacle Avoidance for Large, High-Speed Autonomous Ground Vehicles,” American Control Conference, Seattle, WA, May 24–26, pp. 5568–5573.
17. Geisert, M., and Mansard, N., 2016, “Trajectory Generation for Quadrotor Based Systems Using Numerical Optimal Control,” International Conference on Robotics and Automation, Stockholm, Sweden, May 16–20, IEEE, pp. 2958–2964.
18. Basset, G., Xu, Y., and Yakimenko, O., 2010, “Computing Short-Time Aircraft Maneuvers Using Direct Methods,” J. Comput. Syst. Sci. Int., 49(3), pp. 481–513.
19. …, B., Mehrabi, N., Razavian, R. S., and McPhee, J., 2017, “Nonlinear Model Predictive Control of an Upper Extremity Rehabilitation Robot Using a Two-Dimensional Human–Robot Interaction Model,” International Conference on Intelligent Robots and Systems, Sept. 24–28, IEEE, pp. 502–507.
20. Febbo, H., Jayakumar, P., Stein, J. L., and Ersal, T., 2020, “NLOptControl: A Modeling Language for Solving Optimal Control Problems,” preprint arXiv:2003.00142.
21. Simon, L. L., Nagy, Z. K., and Hungerbuehler, K., 2009, “Swelling Constrained Control of an Industrial Batch Reactor Using a Dedicated NMPC Environment: OptCon,” Nonlinear Model Predictive Control, L. Magni, D. M. Raimondo, and F. Allgöwer, eds., Springer, Berlin/Heidelberg, pp. 531–539.
22. Pacejka, H., 2005, Tire and Vehicle Dynamics, Elsevier, Rotterdam.
23. Zhang, X., Liniger, A., and Borrelli, F., 2021, “Optimization-Based Collision Avoidance,” IEEE Trans. Control Syst. Technol., 29(3), pp. 972–983.
24. Scokaert, P. O., and Rawlings, J. B., 1999, “Feasibility Issues in Linear Model Predictive Control,” AIChE J., 45(8), pp. 1649–1659.
25. Kerrigan, E. C., and Maciejowski, J. M., 2000, “Soft Constraints and Exact Penalty Functions in Model Predictive Control,” International Conference on Control, Cambridge, UK, Sept. 4–7.
26. Thrun, S., Montemerlo, M., Dahlkamp, H., Stavens, D., Aron, A., Diebel, J., Fong, P., Gale, J., Halpenny, M., Hoffmann, G., Lau, K., Oakley, C., Palatucci, M., Pratt, V., Stang, P., Strohband, S., Dupont, C., Jendrossek, L.-E., Koelen, C., Markey, C., Rummel, C., van Niekerk, J., Jensen, E., Alessandrini, P., …, G., Davies, B., Ettinger, S., Kaehler, A., Nefian, A., and Mahoney, P., 2006, “Stanley: The Robot that Won the DARPA Grand Challenge,” J. Field Rob., 23(9), pp. 661–692.
27. Dahlkamp, H., Kaehler, A., Stavens, D., Thrun, S., and …, G. R., 2006, “Self-Supervised Monocular Road Detection in Desert Terrain,” Robotics: Science and Systems, Aug. 16–19.
28. Krüsi, P., Furgale, P., Bosse, M., and Siegwart, R., 2017, “Driving on Point Clouds: Motion Planning, Trajectory Optimization, and Terrain Assessment in Generic Nonplanar Environments,” J. Field Rob., 34(5), pp. 940–984.
29. Katrakazas, C., Quddus, M., Chen, W.-H., and Deka, L., 2015, “Real-Time Motion Planning Methods for Autonomous On-Road Driving: State-of-the-Art and Future Research Directions,” Transp. Res. Part C: Emerg. Technol., 60, pp. 416–442.
30. …, B., Čáp, M., Yong, S. Z., Yershov, D., and Frazzoli, E., 2016, “A Survey of Motion Planning and Control Techniques for Self-Driving Urban Vehicles,” IEEE Trans. Intell. Veh., 1(1), pp. 33–55.
31. Chiang, H.-T. L., HomChaudhuri, B., Smith, L., and Tapia, L., 2020, “Safety, Challenges, and Performance of Motion Planners in Dynamic Environments,” Robotics Research, Vol. 10, N. Amato, G. Hager, S. Thomas, and M. Torres-Torriti, eds., Springer, Cham, pp. 793–808.
32. …, C. C., 1988, “Development of Driver/Vehicle Steering Interaction Models for Dynamic Analysis,” Tech. Rep. UMTRI-86-41, University of Michigan Transportation Research Institute, Ann Arbor, MI.
33. Betts, J. T., 2010, Practical Methods for Optimal Control and Estimation Using Nonlinear Programming, Vol. 19, SIAM.
34. Kelly, M., 2017, “An Introduction to Trajectory Optimization: How to Do Your Own Direct Collocation,” SIAM Rev., 59(4), pp. 849–904.