In this paper, we demonstrate the application of a discrete control Lyapunov function (DCLF) for exponential orbital stabilization of the simplest walking model supplemented with an actuator between the legs. The Lyapunov function is defined as the square of the difference between the actual and nominal velocity of the unactuated stance leg at the midstance position (stance leg normal to the ramp). The foot placement is controlled to ensure an exponential decay in the Lyapunov function. In essence, the DCLF performs foot placement control to regulate the midstance walking velocity between successive steps. The DCLF is able to enlarge the basin of attraction by an order of magnitude and to increase the average number of steps to failure by 2 orders of magnitude over passive dynamic walking. We compare the DCLF with a one-step dead-beat controller (full correction of a disturbance in a single step) and find that both controllers have similar robustness. The one-step dead-beat controller provides the fastest convergence to the limit cycle while using the least amount of energy per step. However, the one-step dead-beat controller is more sensitive to modeling errors. We also compare the DCLF with an eigenvalue-based controller for the same rate of convergence. Both controllers yield identical robustness, but the DCLF is more energy-efficient and requires a lower maximum torque. Our results suggest that the DCLF controller with a moderate rate of convergence provides a good compromise between robustness, energy-efficiency, and sensitivity to modeling errors.

## Introduction

Passive dynamic robots walk downhill powered only by gravity, relying on their mass distribution and geometry rather than external control or power [1–3]. Consequently, such robots are highly energy-efficient. However, the absence of control means that these robots cannot correct their motion when disturbed, so even the slightest disturbance makes them fall [4,5]. This lack of stability limits their practical use.

The compass gait model is a well-studied model of walking [6]. The model consists of a point mass torso and two legs connected to each other by a pin joint. One leg pivots about the ground, while the other leg, actuated by a hip actuator, swings about the grounded leg. The legs exchange their roles after the swinging leg contacts the ground. The model is challenging to control because of nonlinearity and underactuation (more degrees-of-freedom than actuators). Iida and Tedrake [7] controlled the compass gait model by creating an open-loop hip actuator profile that effectively “phase locks” the swing leg to the gait cycle. Manchester et al. [8] used a closed-loop controller in which the swing leg is coupled to the motion of the stance leg using a high-gain feedback controller. In this paper, we consider the simplest walking model, which is a special case of the compass gait model: the ratio of leg mass to torso mass is zero. This feature decouples the swing leg from the stance leg. Consequently, the hip actuator is not able to influence the motion of the stance leg during the swing phase (that is, the stance leg *cannot be stabilized within a step*). Thus, a controller that does step-to-step or orbital stabilization (as opposed to local stabilization) is more appropriate for this model. Here, the hip actuator controls the step length in order to modulate the stance leg velocity between successive steps.

We develop a stabilizing controller using a discrete control Lyapunov function (DCLF). The Lyapunov function is chosen at the Poincaré section at midstance. The midstance is defined as the position when the stance leg (grounded leg) is normal to the ramp. Next, a control law is chosen that results in exponential stabilization of the system between steps at the Poincaré section. The key idea is to use the actuated degree-of-freedom (the swing leg) to control the unactuated degree-of-freedom (the stance leg) between steps. Although the continuous control Lyapunov function (CCLF) [9] has been used for gait stabilization, the use of a DCLF is novel in this area. The major difference between the two is that the DCLF does orbital or step-to-step stabilization, while the CCLF does local stabilization (see Ref. [10] for the difference between orbital and local stabilization).

The organization of the paper is as follows: The background and related work is presented in Sec. 2. The details about the simplest model including equations of motion are discussed in Sec. 3. The details about the DCLF technique are provided in Sec. 4. The results and the discussion are in Secs. 5 and 6, respectively.

## Background and Related Work

There are broadly two notions of bipedal robot stability: (1) the ability to not fall down and (2) the ability to follow a given reference trajectory. Next, we discuss some metrics associated with these notions of stability.

Viability, mean first-passage time, and gait sensitivity norm (GSN) are some metrics that quantify the robot's ability to not fall down. Viability is the set of all states from which the robot can avoid falling down [11]. This metric is computationally expensive and almost intractable for high-dimensional systems. The mean first-passage time is the number of steps the robot can take before falling down [12]. This definition is probabilistic; it is evaluated by simulating the robot under a variety of disturbance conditions and tends to be computationally expensive as the system dimension increases. The GSN is the 2-norm of the ratio of a gait indicator (e.g., step time and velocity) to a disturbance (e.g., terrain height and push) [5]. This is easy to compute but is sensitive to the choice of a good gait indicator that correlates with falling. Moreover, as the GSN is based on linearization, it works well only for small disturbances.

Basin of attraction, the largest eigenvalue of the limit cycle, and the Lyapunov function are some metrics that quantify the robot's ability to follow a given reference trajectory. The basin of attraction is the set of all initial states that will converge to the given reference trajectory [4,13]. This measure is computationally expensive even for the simplest system. The magnitude of the largest eigenvalue of the Poincaré map indicates how fast a perturbation in the state converges back to the reference trajectory [1]. The closer the value is to zero, the faster the convergence, while a value greater than 1 indicates divergence and instability. This is a computationally simple measure but is only valid for small perturbations. A Lyapunov function is a positive definite function whose time derivative along any state trajectory of the system decreases with time. It is nontrivial to find a Lyapunov function, but the recent use of sum-of-squares optimization provides a generalizable numerical technique [14]. The issue with all these metrics is that they are used to check stability after the controller has been designed.

The eigenvalues of the Poincaré map can be controlled using feedback. We call this an eigenvalue-based controller. The key idea is to linearize the Poincaré map with respect to the state and a suitable control action (e.g., step length and push-off). Then, pole placement is used to set the eigenvalues of the linearized equation. For example, McGeer [15] used the instantaneous push-off as the control action to stabilize a two-dimensional model, while Kuo [16] used lateral foot placement to control lateral stability of a three-dimensional model of walking. We describe the eigenvalue-based controller in Sec. 4.5 and also compare it with our controller.

The control Lyapunov function provides a generalizable method to design a stable control law [17]. For example, Ames et al. [9] used CCLF to guarantee exponential stabilization of the hybrid zero dynamics of the system. A conceptualization of their methodology is shown in Fig. 1(a). The thick line shows the reference trajectory, and the thin line shows a perturbed trajectory. While the CCLF is chosen to keep the trajectory within the gray tube, the discontinuous foot-strike event at the section shown by the rectangle tends to push the system out of the gray tube. However, by choosing suitable tuning parameters, it is possible to keep the perturbed trajectory within the gray tube in spite of the discontinuous foot-strike event.

We take an alternate approach by using a DCLF. This is shown in Fig. 1(b). Here, we try to keep the perturbed trajectory of the unactuated degree-of-freedom (e.g., the stance leg of a bipedal robot) within the gray region *only* at the Poincaré section. This is done by using the actuated degrees-of-freedom, which are fully controllable within a step, to affect the unactuated degree-of-freedom over a complete step (e.g., using foot placement). In this view, the DCLF does not treat the foot-strike event as a disturbance but rather as an essential means to modulate the unactuated degree-of-freedom. Another difference is that the DCLF provides exponential orbital stability, while the CCLF provides only asymptotic orbital stability (although the CCLF does provide exponential stabilization of the hybrid zero dynamics).

In this paper, we illustrate an application of the DCLF using the simplest walking model [18], but with the addition of an actuator between the two legs. The system is nonlinear and underactuated. Since the point mass torso is heavy compared to the legs, the swing leg motion cannot affect the stance leg motion in the swing phase, and the system is thus decoupled. However, the swing leg (actuated degree-of-freedom) motion can influence the stance leg (unactuated degree-of-freedom) motion through appropriate foot placement (e.g., a bigger step reduces the stance leg velocity [19]), which is exploited in the DCLF approach.

## Model

### Model Description.

Figure 2 shows a cartoon of the simplest walker [18]. The model has mass *M* at the hip and point mass *m* at each of the feet. Each leg has length $\ell$. Gravity *g* points downward. The leg in contact with the ramp is called the stance leg, while the other leg is called the swing leg. The angle made by the stance leg with the normal to the ramp is *θ*, and the angle made by the swing leg with the stance leg is $\varphi$. The hip torque is *τ*, and the ramp slope is *γ*.

A single step consists of two phases: a single stance phase, where the swing leg pivots about a stationary stance foot, and a foot-strike phase, where support is transferred and the legs exchange their roles. These phases are connected through two switching events: a midstance event, where the stance leg is normal to the ramp, and a collision event, where the leading leg touches the ground. Note that we have chosen a single step to start and end at midstance, unlike the usual convention of using the instant just after foot-strike. The reason for using midstance rather than foot-strike for the Poincaré section will become clear in Sec. 4. Next, we present the equations that govern these phases and events.

#### Midstance Event.

#### Single Stance Phase (Continuous Dynamics).

*τ* is the nondimensional torque obtained by dividing the dimensional torque by $Mg\ell$. The equations are

#### Collision Event.

*h* at the collision. This is taken to be zero, except for testing the robustness of the control approach. The collision event is given by

#### Foot-Strike Phase (Discontinuous Dynamics).

### Failure Modes.

There are two failure modes for the simplest walker, and they are described below. These lead to two conditions on the state of the system and are checked at each integration step. Violation of any of these conditions is interpreted as system failure.

- (1) *Falling backward:* Falling backward is detected when the angular velocity of the stance leg is positive (note that forward velocity is indicated by a negative angular velocity). Thus, the condition for failure is $\dot{\theta} \ge 0$.
- (2) *Flight phase:* Flight is detected when the stance foot loses contact with the ramp, that is, when the ground reaction force normal to the ramp drops to zero.

## Methods

### Overview of Control Technique.

*within a step*. However, we can find a function, *F*, that maps the midstance state between consecutive steps and is indexed by step number, *k*. Thus

where $\varphi^-$ is the swing leg angle at foot-strike and is related to the step length. Given the measurement at step *k*, $\dot{\theta}_k$, the hip torque can be used to modulate $\varphi^-$ to control $\dot{\theta}_{k+1}$. Thus, the stance leg is fully controllable *between steps*. Based on these observations, we use a hierarchical control approach: *the stance leg velocity is controlled between steps using foot placement obtained from the DCLF, while the swing leg is controlled using a trajectory tracking controller based on the foot placement angle.*

### Midstance to Midstance Map for Stance Leg.

In this section, we present equations that can be used to numerically solve for the midstance to midstance map, *F*, given by Eq. (11).

To get Eq. (13), we have used the foot-strike conditions given by Eqs. (7) and (8). We have also assumed that the controller is not aware of the step-down disturbance, so we set *h* = 0 (see Eq. (5)). This leads to the condition $\varphi^- = 2\theta^-$, which is used to write the cosine expression on the right side of Eq. (13) in terms of $\varphi^-$.

### DCLF.

DCLF was explained in the last paragraph of Sec. 2 and illustrated in Fig. 1. We provide mathematical details next.

*θ* = 0. For the system to be asymptotically stable, the following condition needs to be satisfied:

*c* is a user-chosen positive constant such that $0 < c < 1$. Thus, the condition for exponential stability can be written as

The stance leg velocity at midstance, $\dot{\theta}_k$, is measured, and for given *c* and $\dot{\theta}_0$, the swing leg angle just before foot-strike, $\varphi^-$, is solved using the above equation. The choice of *c* determines the rate of decay of a perturbation in the midstance velocity; a larger value of *c* gives faster convergence. When *c* = 1, there is full correction of disturbances in a single step, known as one-step dead-beat control [20]. Dead-beat control gives the fastest possible convergence [21].
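At each step, the control law above reduces to a one-dimensional root-finding problem: choose $\varphi^-$ so that the next midstance velocity lands on the target $\dot{\theta}_0 + \sqrt{1-c}\,(\dot{\theta}_k - \dot{\theta}_0)$. A minimal sketch, with a toy surrogate step-to-step map standing in for the paper's Eq. (13) (the `step_map` form, `PHI0`, and the bisection bracket below are illustrative assumptions, not the model's values):

```python
import math

GAMMA = 0.009   # ramp slope used in the paper
PHI0 = 0.4      # nominal step angle (illustrative value)

def step_map(v_k, phi):
    """Toy midstance-to-midstance map: speed is gained on the slope and
    then reduced by a collision loss that grows with step size."""
    return math.cos(phi) ** 2 * math.sqrt(v_k ** 2 + 2.0 * GAMMA * phi)

# Nominal midstance speed: the fixed point satisfying step_map(V0, PHI0) = V0
_c4 = math.cos(PHI0) ** 4
V0 = math.sqrt(_c4 * 2.0 * GAMMA * PHI0 / (1.0 - _c4))

def dclf_foot_placement(v_k, c, lo=0.05, hi=1.2):
    """Choose phi so that V_{k+1} = (1 - c) V_k, i.e., the next midstance
    speed hits V0 + sqrt(1-c)*(v_k - V0).  Solved by bisection:
    step_map(v_k, .) - target changes sign exactly once on [lo, hi]."""
    target = V0 + math.sqrt(1.0 - c) * (v_k - V0)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if step_map(v_k, mid) > target:
            lo = mid   # next speed still too high -> lengthen the step
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Perturb the midstance speed and walk 20 steps under the DCLF law
v, c = 0.2, 0.5
for _ in range(20):
    v = step_map(v, dclf_foot_placement(v, c))
```

A bracketing root-finder is used because the surrogate map is not monotone near zero step length; the same solve applies to the simulated map, where each evaluation of `step_map` would be one forward integration from midstance to midstance.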

The time-based trajectory is open-loop, but we add a proportional-derivative controller on the position at the end of the time-based trajectory. This controller comes into effect only when the time from midstance to foot-strike exceeds the time predicted using Eq. (19) (e.g., during a step-down disturbance).

### Why Use Midstance Position for Poincaré Map.

We use the midstance position, defined as *θ* = 0, for the Poincaré section. In general, any section $\theta = \mathrm{constant}$ would work just as well. This particular choice of Poincaré section enables us to choose *V* to be a function of only the stance leg velocity (e.g., $V(\Delta\dot{\theta}_k) = \Delta\dot{\theta}_k^2$). This allows us to do hierarchical control: the swing leg is controlled in the continuous sense to affect the stance leg velocity in the discrete sense, that is, between steps. On the other hand, a section before or after foot-strike would be a function of the stance and swing leg angles, $\cos(\varphi - \theta) - \cos(\theta) = 0$. In this case, *V* needs to be a function of the stance and swing leg positions and velocities, that is, $V(\Delta\theta_k, \Delta\dot{\theta}_k, \Delta\varphi_k, \Delta\dot{\varphi}_k)$. This complicates the controller design. Moreover, in a practical sense, the instant just before or after foot-strike is most prone to modeling errors (e.g., misestimation of the ground height) and noise (e.g., two sensors, for the stance and swing leg angles, are needed, each with its own noise level). Perhaps the best choice for the Poincaré section is the instant after foot-strike but following the decay of foot-strike vibrations, because that instant gives the swing leg the entire step to adjust the step length.

### Eigenvalue-Based Controller.

The eigenvalue-based controller imparts orbital stability in the linearized sense. The Poincaré map is linearized based on the state and suitable control action. Then, the eigenvalues are modulated/placed using the control action. We describe the mathematical details next.

where $A = \partial\dot{\theta}_{k+1}/\partial\dot{\theta}_k$ and $B = \partial\dot{\theta}_{k+1}/\partial\varphi^-$.

This allows us to compare the DCLF controller with the eigenvalue-based controller in an objective way. Note that *A*, *B*, and *K* are scalars in our problem, and we have used the positive root from Eq. (18) (that is, $\sqrt{1-c}$) to ensure monotonic decay in the perturbed state, $\Delta\dot{\theta}_{k+1}$.
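Because the linearized map is scalar, pole placement is a one-line computation. A sketch using the values of *A* and *B* identified later in Sec. 5 (the perturbation magnitude is arbitrary):

```python
import math

# Linearized midstance-to-midstance map (A, B identified in the paper):
#   dv[k+1] = A * dv[k] + B * dphi[k]
A, B = 0.3711, -0.4026
c = 0.9

lam = math.sqrt(1.0 - c)   # desired closed-loop eigenvalue (~0.316)
K = (A - lam) / B          # scalar pole placement so that A - B*K = lam

dv = -0.05                 # perturbation in the midstance velocity
history = [dv]
for _ in range(10):
    dphi = -K * dv             # step-length correction about the nominal
    dv = A * dv + B * dphi     # closed loop: dv[k+1] = (A - B*K) * dv[k]
    history.append(dv)
```

With *c* = 0.9 this gives an eigenvalue of about 0.32, matching the comparison case in Sec. 5; the perturbation then shrinks by that factor every step.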

## Results

All computations are done using MATLAB. We used *ode113* for integrating the equations of motion and *fsolve* to find the limit cycle. To find the passive limit cycle, we chose $\gamma = 0.009$ and turned the controller off (*τ* = 0). Our passive limit cycle has the fixed point: $\theta_0 = 0$, $\dot{\theta}_0 = -0.0593$, $\varphi_0 = -0.0532$, and $\dot{\varphi}_0 = -0.3397$. The swing leg position and velocity just before foot-strike are $\varphi^- = -0.4006$ and $\dot{\varphi}^- = 3.5\times10^{-4} \approx 0$. We used central differencing with a step size of $10^{-5}$ to compute *A* = 0.3711 and $B = -0.4026$. We also found the Jacobian of the Poincaré map using central differences. The largest eigenvalue of the Jacobian is 0.5891, which is less than 1, indicating that the passive limit cycle is stable.
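The central-difference estimates above take only a few lines. Since the paper's one-step map requires simulating the full dynamics, a stand-in analytic map is used below purely to validate the differencing scheme (the map `f` is an assumption for illustration; only the step size and nominal velocity come from the paper):

```python
import math

def central_diff(f, x, h=1e-5):
    """Second-order central difference, as used to estimate A, B, and the
    Jacobian of the Poincare map from the one-step map."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Stand-in analytic map (illustrative only; the paper differentiates the
# simulated midstance-to-midstance map, which has no closed form)
f = lambda v: v * math.cos(2.0 * v)
df_exact = lambda v: math.cos(2.0 * v) - 2.0 * v * math.sin(2.0 * v)

v0 = -0.0593                      # nominal midstance velocity from the paper
A_est = central_diff(f, v0)       # agrees with df_exact(v0) to ~1e-10
```

For the full Jacobian, the same formula is applied once per state coordinate, perturbing one component at a time.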

### Stability.

We use the basin of attraction metric to compare the stability of the uncontrolled case (passive dynamic walking) with our control approach. The basin of attraction is defined as the set of initial conditions that converge to the limit cycle as time goes to infinity [13]. To compute the basin of attraction, we proceed as follows. We perturb the midstance velocity from its nominal value and do forward simulations of the walker for 50 steps. We find the upper and lower limits of the midstance velocity that allow the walker to complete 50 steps without falling down. These limits give the boundary of the basin of attraction. Note that 50 steps is large enough to see the effect of the perturbation but small enough that results can be obtained quickly. We repeat the above procedure for different ramp slopes, with and without control.
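The limit search can be sketched as a bisection between a surviving and a failing initial velocity. The one-dimensional map and the failure thresholds below are toy stand-ins (assumptions for illustration) for the full 50-step walker simulation and the failure modes of Sec. 3:

```python
import math

V_MIN, V_FLIGHT = 0.02, 0.35   # toy thresholds: fall backward / flight phase

def survives(v, steps=50):
    """Simulate 50 steps of a toy midstance-to-midstance map, checking the
    failure conditions at every step."""
    for _ in range(steps):
        if v <= V_MIN or v >= V_FLIGHT:
            return False
        v = 0.85 * math.sqrt(v * v + 0.006)   # toy passive step map
    return True

def boundary(inside, outside):
    """Bisect between a surviving and a failing midstance velocity."""
    for _ in range(60):
        mid = 0.5 * (inside + outside)
        if survives(mid):
            inside = mid
        else:
            outside = mid
    return inside

v_nominal = 0.125                   # fixed point of the toy map
lower = boundary(v_nominal, 0.0)    # lower edge of the basin
upper = boundary(v_nominal, 1.0)    # upper edge of the basin
```

The same search applies per ramp slope and per controller setting, with `survives` replaced by a 50-step forward simulation of the walker.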

Figure 3 shows the basin of attraction for the simplest walker with and without control as a function of ramp slope. The white region shows the basin of attraction for the passive or uncontrolled case. The gray region (including the white region) shows the basin of attraction for three values of *c*. The basin of attraction is substantially enlarged by our control approach. We compute the area of the basin of attraction for the uncontrolled case and the three control cases with different *c* values using numerical quadrature. Then, we find the ratio of the area for a specific *c* value to that of the uncontrolled case. We found ratios of 19.56, 18.60, and 16.4 for *c* = 0.25, *c* = 0.5, and *c* = 1, respectively. This indicates that our controller was able to increase the basin of attraction by an order of magnitude. Further, we note that *c* = 1 has the smallest basin of attraction, and the basin grows as *c* decreases. This can be explained as follows: a larger *c* value corresponds to a faster convergence to the limit cycle. For example, when *c* = 1, the convergence is in a single step. This is achieved by taking a bigger-than-nominal step, since the reduction in velocity between steps is directly proportional to the step length [19,22]. But a bigger step leads to a flight phase at a lower velocity (see Eq. (10)). As *c* decreases, the controller chooses shorter steps, leading to higher velocities before the flight phase occurs. Consequently, as *c* decreases, the basin of attraction increases.

Next, we demonstrate the exponential stabilization provided by the DCLF. We perturb the midstance velocity to $\dot{\theta}_k = -0.5$ at a slope of $\gamma = 0.009$ and plot the midstance velocity as a function of step number to obtain Fig. 4. For $0 < c < 1$, there is exponential stabilization (dashed and dashed-dotted lines), and a larger value of *c* gives faster convergence to the nominal midstance velocity. In contrast, *c* = 1 (dotted line) leads to the condition $\dot{\theta}_{k+1} = \dot{\theta}_0$ (see Eqs. (15) and (17)), that is, dead-beat convergence in a single step, which is faster than exponential convergence.

### Robustness.

We evaluate robustness by computing the average number of steps that the controller can take without entering a flight phase or falling backward on uneven terrain, where the terrain is drawn from a random distribution with maximum height *σ* [12]. Note that *σ* can be interpreted as a maximum step down normalized by the leg length. We create ten terrains with 400 steps each, selected from a random distribution with maximum height *σ*. For each terrain, we do forward simulations of the system for a given value of *c*. We evaluate the average number of steps and the average energy used per step over the ten terrains for different values of *σ* and *c*.
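The terrain protocol can be sketched as follows. The paper specifies only the maximum height *σ*, so the uniform distribution and the fixed seed below are assumptions made for reproducibility:

```python
import random

def make_terrains(sigma, n_terrains=10, n_steps=400, seed=0):
    """Ten random terrains of 400 step-down heights each, drawn from a
    uniform distribution on [0, sigma] (distribution is an assumption;
    the paper states only the maximum height sigma)."""
    rng = random.Random(seed)
    return [[rng.uniform(0.0, sigma) for _ in range(n_steps)]
            for _ in range(n_terrains)]

def mean_steps_to_failure(steps_per_terrain):
    """Average number of steps completed over the terrains."""
    return sum(steps_per_terrain) / len(steps_per_terrain)

terrains = make_terrains(sigma=0.05)
```

Each terrain is then fed to the forward simulation, and the per-terrain step counts are averaged with `mean_steps_to_failure`.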

Figure 5(a) shows results for robustness as a function of *σ* for different values of *c* for a slope of $\gamma =0.009$. The average number of steps is infinity for *σ* = 0 which corresponds to no disturbance. As *σ* increases, the average number of steps decreases steadily as shown in the figure. The average number of steps to failure is almost the same for each value of *c*. The average number of steps to failure is 393 at $\sigma =0.005$ and decreases to 40 at $\sigma =0.05$. We found that for very low values of *c*, $0<c\u22640.005$, the robustness dropped appreciably with increase in step height. We also did the robustness test without any control. We observed that the average number of steps to failure is 4 at $\sigma =0.005$ and 0 at $\sigma =0.05$.

Figure 5(b) demonstrates the average energy used per step as a function of *σ*. We obtain the energy usage by integrating the absolute value of the mechanical work done by the hip actuator as the robot walks on the terrain and dividing it by the total number of successful steps. For a given controller specified by *c*, the average energy per step increases with *σ*. This is because a larger *σ* leads to a larger deviation from the limit cycle, and consequently more energy is needed to get back to the nominal trajectory. For a given *σ*, the average energy per step is lowest for *c* = 1 and increases as *c* decreases. This can be explained as follows: a larger value of *c* implies faster convergence to the limit cycle and hence lower energy usage, since the limit cycle is passive (zero energy usage).
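The energy metric can be computed from sampled torque and swing-leg velocity signals. The trapezoid-rule helper below is a sketch; the constant test signals are illustrative, not simulation output:

```python
def avg_energy_per_step(tau, phidot, dt, n_steps):
    """Average actuator energy per successful step: the integral of the
    absolute mechanical power |tau * phidot| (trapezoid rule), divided by
    the number of successful steps."""
    power = [abs(t * w) for t, w in zip(tau, phidot)]
    work = sum(0.5 * (power[i] + power[i + 1]) * dt
               for i in range(len(power) - 1))
    return work / n_steps

# Illustrative signals: constant torque and swing-leg velocity over 1 s,
# spanning two successful steps
energy = avg_energy_per_step([1.0] * 11, [2.0] * 11, dt=0.1, n_steps=2)
```

Taking the absolute value before integrating means both positive and negative actuator work count as energy spent, consistent with the metric described above.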

Figure 5(c) illustrates the average step length, i.e., the control strategy, as a function of *σ*. Each data point for a particular *σ* and *c* was obtained by averaging the step length over the ten simulation runs. For a given maximum step down, *σ*, the largest average step length occurs for *c* = 1 and decreases as *c* decreases. This is because larger *c* values correspond to a greater decrease of the DCLF, which is achieved by taking a bigger step. Also, the average step length increases as the step down, *σ*, increases. Since a larger step down leads to a larger deviation in the midstance velocity, the average step length must consequently increase to regulate the midstance velocity at the subsequent step.

Finally, Fig. 5(d) depicts the maximum torque needed as a function of *σ*. This was obtained by searching for the maximum torque across the ten simulation runs for a given *σ*. The maximum torque is almost the same for *c* = 0.5 and *c* = 1 but is greater for *c* = 0.25 for a given *σ*. This indicates that low values of *c* require more torque, and consequently a larger actuator, for stabilization. The maximum torque increases as the maximum step down increases for a given *c*, because a bigger step down requires a bigger step length and, hence, a higher torque.

Figure 6 compares the sensitivity of the DCLF controller to modeling errors. To generate these simulations, we provided the DCLF controller with a slope of $\gamma = 0.01$, which is 10% greater than the actual value of $\gamma = 0.009$: a modeling error. The robustness measured by the average number of steps to failure for *c* = 0.25 and *c* = 0.5 is virtually unchanged (compare Fig. 5(a) with Fig. 6(a)). However, the robustness decreases substantially for *c* = 1. This can be explained as follows: a larger-than-actual *γ* in the model leads to a bigger step and a greater energy loss at the collision, and consequently the model falls backward. The average number of steps walked before failure for *c* = 1 is about 9 at $\sigma = 0.005$, and it increases to about 40 at $\sigma = 0.025$. This is because an increase in the step-down disturbance nullifies the effect of this particular modeling error: a bigger-than-actual *γ* in the model leads to a bigger-than-actual step, but this regulates the walking speed better as the step-down disturbance increases. Figure 6(b) shows that *c* = 1 is the most energy-efficient, followed by *c* = 0.5 and *c* = 0.25. The maximum torques for *c* = 1 and *c* = 0.5 are the same, while those for *c* = 0.25 are much higher.

Figure 7 compares the DCLF controller with the eigenvalue-based controller for *c* = 0.9, which corresponds to an eigenvalue of 0.32. Figure 7(a) demonstrates that both controllers have similar robustness. However, the DCLF controller is more energy-efficient (see Fig. 7(b)) and requires a lower maximum torque (Fig. 7(d)). The main difference between the two controllers is that the eigenvalue-based controller uses a linearization, while the DCLF is based on the actual, nonlinear model. Here, the linearization causes the eigenvalue-based controller to overcorrect by taking longer-than-ideal steps, as shown in Fig. 7(c).

## Discussion

We have presented the DCLF to exponentially stabilize the simplest walking model with a hip actuator. Our control approach is able to enlarge the basin of attraction by an order of magnitude and increase the average number of steps to failure by 2 orders of magnitude over passive dynamic walking. We compared the DCLF controller with a one-step dead-beat controller and an eigenvalue-based controller. We found that all three controllers had similar robustness. However, the dead-beat controller was the most energy-efficient and required the lowest maximum torque, followed by the DCLF controller and finally the eigenvalue-based controller.

Theoretically, one-step dead-beat stabilization is better than exponential stabilization in terms of convergence rate and final value achieved. While one-step dead-beat stabilization converges fully to the nominal value in a single step, there will always be some finite (usually very small) error between the actual and nominal state under exponential stabilization. Also, one-step dead-beat stabilization leads to lower energy usage and lower maximum torque than exponential stabilization, suggesting superior performance. However, the robustness of one-step dead-beat stabilization degrades quickly in the presence of modeling errors. Thus, in practice, an exponentially stabilizing controller is preferred.

The most common stability metric used in legged locomotion is the maximum eigenvalue of the Jacobian of the Poincaré map [1]. This is a linear stability measure that holds for small state perturbations only. In contrast, our measure is a nonlinear measure that works for large state perturbations. Some authors have optimized the maximum eigenvalue to create stable walking gaits [23,24]. The issue with this approach is that the maximum eigenvalue is sometimes a nonsmooth function of the robot and control parameters [6], which leads to numerical issues when used in conjunction with gradient-based optimization methods. In contrast, our stability measure is a function of the velocity of the underactuated degree-of-freedom only. Although the velocity is discontinuous at foot-strike, the optimization can be made smooth by using the multiple shooting method [25]. This makes it possible to apply gradient-based methods, which converge faster to the optimal solution for smooth problems. Note that the multiple shooting method allows one to handle discontinuities in the physics of the problem (e.g., due to foot-strike), not in the control parameters.

The eigenvalues of the Jacobian of the Poincaré map can be manipulated using a feedback controller as was done in this paper. We used pole placement to place the eigenvalues but one can also use a discrete linear quadratic regulator (e.g., Ref. [26]). In the feedback approach, eigenvalues of the closed-loop system are manipulated using pole placement or discrete linear quadratic regulator, while in the optimization approach discussed in the previous paragraph, the eigenvalues of the open-loop system are modified. The feedback control method does not require smoothness of the eigenvalues and is computationally faster than the optimization-based method. However, since the eigenvalue-based feedback controller is based on linearization of the system, it usually performs poorly in the presence of large perturbations (disturbances, sensor noise, and modeling errors) as we have seen in this paper.

One advantage of the DCLF is that we are able to construct a controller for a given stability bound during the design phase. This is in contrast to the more common approach of checking stability after control design has taken place, which is time consuming and provides limited stability guarantees. Specifically, in our approach, Eq. (17) gives the stability condition and can be incorporated easily into the control design framework (e.g., optimization). One can incorporate stability in the controller design by maximizing the mean first-passage time using a stochastic optimization [12]. Another approach is to find a heuristic control strategy that imparts gait stability such as swing leg retraction (swing leg moves backward just before foot-strike) [27]. But recent results have shown that under certain conditions (big slopes and dead-beat stabilization), swing leg protraction can also impart gait stability [28].

The DCLF approach uses a single measurement, the stance leg velocity at midstance, to adjust the foot placement. A practical way to achieve this would be to place an inertial measurement unit on the torso. The inertial measurement unit serves two purposes: first, it detects the midstance event using the accelerometer; and second, it measures the forward velocity using the gyroscope. Another feature of the DCLF is that it is predictive, that is, the control technique is able to calculate the desired foot placement at midstance, giving the hip actuator ample time to perform the required control action.

Our control approach has several limitations. We use a single measurement at midstance that determines the foot placement. Thus, our method is sensitive to measurement noise and model parameters. Our method works best when the sequence of events is: disturbance, followed by measurement (at midstance), followed by control action (at foot-strike). However, our approach might show poor performance when the sequence of events is: measurement, followed by disturbance, followed by control action, because in this case the measurement cannot act on the disturbance immediately but must wait one complete step before any correction can be made. The sensitivity can be tackled by incorporating a robust control law under bounded uncertainty [29]. This model is incapable of walking on level ground because energy lost to the collision during foot placement cannot be recovered. Recently, we have extended these results to walking on level ground by adding ankle push-off control to the model [30]. The DCLF can also be extended to three dimensions using a decoupled approach, where the step length and step width are independently controlled to stabilize the walking motion in the fore-aft and side-to-side directions, respectively. Additional degrees-of-freedom (e.g., torso and knees) provide additional control actions that can be exploited for stabilization (e.g., Ref. [16]). We ignore actuator limits in our calculations, which could substantially shrink the basin of attraction of the controller. We also limit ourselves to walking solutions and assume controller failure when there is a flight phase. The latter two limitations, actuator limits and running solutions, can easily be incorporated and will be the basis for future work.

## Acknowledgment

The authors would like to thank Ahmad F. Taha for helpful discussions.

## Funding Data

Division of Information and Intelligent Systems (Grant No. 1566463).