This article demonstrates a multidisciplinary approach that proposes to augment future caregiving by prolonging the independence of older adults. The human–robot system allows the elderly to cooperate with small flying robots through an appropriate interface. ASPIRE provides a platform where high-level controllers can be designed to provide a layer of abstraction between the high-level task requests, the perceptual needs of the users, and the physical demands of the robotic platforms. With a robust framework that has the capability to account for human perception and comfort level, one can provide perceived safety for older adults and, further, add expressivity that facilitates communication and interaction between the user and the robotic team. The proposed framework relies on an iterative process of low-level controller design driven by experimental data collected from psychological trials. Future work includes the exploration of multiple carebots that cooperatively assist in caregiving tasks based on a human-centered design approach.
A recent study from the U.S. Department of Housing and Urban Development showed that most seniors would prefer to age in place, and a survey from the AARP concluded that nearly 90% of citizens 65+ wished to remain in their homes for as long as they can. However, prolonging independence is a challenging task for today's healthcare system due to increased costs and limited technical feasibility. Although many advances in mobile technology, robotics, and wearable devices have been made, family members and professional caregivers are still the main options older adults have to further extend their capability of aging safely and comfortably at home. According to the U.S. Census Bureau, the population of U.S. adults who are 65 and older is projected to be twice as large in 2030 as it was in 2000, increasing from 35 million to 71.5 million and representing nearly 20 percent of the total U.S. population. The current caregiving model will drive health care costs to unrealistic levels and cause disruptive changes to how individuals and families manage late-in-life decisions.
Although embedded with smart features and innovations, appliances such as washing machines, microwaves, vacuum cleaners, and other household machines are still rigid when it comes to multiple functionalities. Technology that is flexible enough to help us at home by carrying out multiple distinctive tasks has been a driving motivation for the robotics community. Efforts with bio-inspired ground robots have shown how complicated and large such robots can become and how challenging it can be to replace caregivers with such cumbersome machines. Furthermore, finding home assistive robots that have elderly independence as the core development goal is difficult. To engineer an acceptable and useful technology, a framework must prioritize flexibility of operation and usability as the foundation for safe and trusted interaction; user acceptance lies almost solely within the purview of the elderly.
Our human-robot system allows the elderly to cooperate with small flying robots through an appropriate interface. This team of small robots is more affordable, more agile, smaller, and can reach higher floors and tighter spaces. We use experimental data to study older adults’ perceptions of non-humanoid robots to design a platform that is acceptable and does not interfere with their comfort of living at home.
ASPIRE: Automation Supporting Prolonged Independent Residence for the Elderly
In order to achieve this flexible and adaptable framework, the ASPIRE project is redesigning the way small unmanned aerial vehicles (UAVs) interact with humans. The mechanical simplicity of multi-rotors makes them an affordable platform with the potential to perform precision flight maneuvers using onboard grasping solutions and sensing hardware, which can be employed in activities that are not attainable by traditional humanoid or ground vehicles, e.g., reaching up high to grab objects. A human-centered design of a flying robotic platform is being developed to achieve the goal of providing caregiving with multi-rotor UAVs.
ASPIRE provides a platform where High-Level Controllers (HLC) can be designed in order to provide a layer of abstraction between the high-level task requests, the perceptual needs of the users, and the physical demands of the robotic platforms, shown in Figure 1.
With a robust framework that has the capability to account for human perception and comfort level, we can provide perceived safety for older adults, and further, add expressivity that facilitates communication and interaction between the user and the robotic team. To achieve these demanding levels of comfort, predictable and safe navigation must be guaranteed by the underlying Low-Level Controller (LLC), which is fundamental for indoor locomotion of any robotic form. Embedding human perception into the LLC and HLC is done by collecting experimental data from human subjects to understand how humans behave in the presence of these UAVs.
Virtual Reality (VR) is a desirable tool because it offers a safe and flexible environment in which the investigator has extremely granular control over the user experience. For example, aspects of the robot's physical appearance (e.g. shape, ergonomics, or durability), behavior (e.g. movement, manipulation, or sensing), and environmental context (e.g. low light, noise, or verticality), all constrain a robot's flight path and can be manipulated in isolation to reveal their unique effect on human observers. The human observer, however, also brings with them personal experiences that can influence the perception of a robot's agency and therefore its trustworthiness, safety, and so on.
Human Perception and Performance
The ASPIRE project addresses two fundamental issues that arise when deploying robotic systems in real-life, human-populated environments: (1) How do we characterize human behavior in the presence of co-located mobile robots? (2) How do we design and control mobile robots to maximize comfort and perceived safety for co-located others? Using current-generation VR devices, in combination with physiological recordings, self-report data, and behavioral measures, we are able to generate a detailed model of human behavior in these situations. For details see “Why Virtual Reality?”
For example, our ongoing research with older adults and college-aged students explores the role of uncertainty and perceived safety in non-cooperative multi-rotor UAV interactions. In one version of this experiment, participants wear a VR head-mounted display (HMD) and enter a high-fidelity simulation of an urban scene (see Figure 2.). During the simulation they experience several unanticipated UAV flybys, the nature of which is manipulated experimentally across subjects (e.g., distance to user, speed, or loudness). Biometric data are sampled continuously throughout the simulation, including galvanic skin response (GSR), photoplethysmography (PPG), and head rotation (e.g., angular acceleration in rad/s² and linear acceleration in m/s²), as a function of time-to-collision. Other demographic information related to athletic activity, UAV experience, and video game playing history is collected offline and included in subsequent analysis. It is imperative to consider these types of continuous, indirect measures in order to acquire an unmitigated response from the human observer.
While it would be possible to simply ask participants to indicate their attitudes about UAV interaction, these types of deliberative questions can often result in an “I don’t know” or “I’m not sure” response; this is often because the participant does not want to report the answer, does not understand the question, or legitimately does not know. Indirect measures, like those mentioned above, circumvent these types of self-report issues. For example, in response to the UAV’s presence, increased arousal can lead to the sweat glands becoming more active, increasing moisture content on the surface of the skin, and allowing electrical current to pass more easily. This effect is known as skin conductance and is often cited as a measure of arousal or state anxiety . Similarly, using an optical pulse sensor placed near soft tissue (e.g. fingertip or earlobe) a PPG signal can be acquired and processed to determine the heart rate, which is known to increase with arousal . Together these two signals help provide a comprehensive description of a participant’s emotional response during a given UAV interaction. While this methodology provides an indirect assessment of an individual’s automatic response to an unanticipated drone interaction, it is equally important to acquire direct measures of behavior and to do so for circumstances in which a UAV interaction is not only expected, but also expected to be cooperative.
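As an illustration of how such an indirect measure can be processed, a heart rate can be recovered from a raw PPG trace by detecting pulse peaks and averaging the inter-beat intervals. The sketch below is a minimal stand-in, not the project's actual signal pipeline; the threshold rule and refractory period are illustrative assumptions.

```python
import numpy as np

def heart_rate_from_ppg(ppg, fs, min_gap_s=0.4):
    """Estimate heart rate (bpm) from a raw PPG trace by peak counting.

    ppg       : 1-D array of optical pulse samples
    fs        : sampling rate in Hz
    min_gap_s : assumed minimum spacing between beats (refractory period)
    """
    ppg = np.asarray(ppg, dtype=float)
    thresh = ppg.mean() + 0.5 * ppg.std()      # simple adaptive threshold
    min_gap = int(min_gap_s * fs)
    peaks, last = [], -min_gap
    for i in range(1, len(ppg) - 1):
        # local maximum above threshold, outside the refractory window
        if (ppg[i] > thresh and ppg[i] >= ppg[i - 1]
                and ppg[i] > ppg[i + 1] and i - last >= min_gap):
            peaks.append(i)
            last = i
    if len(peaks) < 2:
        return None                            # too few beats to estimate
    ibi = np.diff(peaks) / fs                  # inter-beat intervals (s)
    return 60.0 / ibi.mean()
```

In practice a library peak detector (e.g., `scipy.signal.find_peaks`) with band-pass filtering would replace the hand-rolled loop; the point is only that a continuous arousal-related signal reduces to a simple time series the HLC can consume.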
The design of a small robot suitable for indoor flight, one capable of minimizing disruption and discomfort, incorporates the findings from the experimental research with humans. Flying multi-rotors pose the hardest design constraints because the desired functionalities (such as flight endurance, payload capability, and precision manipulation) are at odds with the characteristics needed for human acceptance (compact, quiet, and lightweight).
To achieve the highest possible thrust-to-weight ratio while also satisfying payload constraints with commercially available parts, the multi-rotor is 208 mm in diameter with a maximum total thrust of over 1.6 kg.
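To make the sizing concrete: only the 1.6 kg maximum thrust figure comes from the text above; the 0.8 kg all-up mass below is an assumed, illustrative value, not the platform's actual weight.

```python
# Illustrative thrust-to-weight check. MAX_THRUST_KG is from the text;
# the all-up mass is an assumption for the sake of the arithmetic.
MAX_THRUST_KG = 1.6            # combined rotor thrust, expressed in kgf
all_up_mass_kg = 0.8           # assumed airframe + battery + payload

thrust_to_weight = MAX_THRUST_KG / all_up_mass_kg        # = 2.0
payload_margin_kg = MAX_THRUST_KG - all_up_mass_kg       # thrust headroom
```

A thrust-to-weight ratio comfortably above 1 is what leaves authority for precise, unhurried maneuvers rather than flight at the edge of the envelope.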
A two-degree-of-freedom serial manipulator was designed with an open truss structure in order to further decrease weight. Although the flight controllers are capable of preventing collisions, a protective enclosure is added on the outer part of the robot, see Fig. S2. This fully protective enclosure augments the safety of the system, but most importantly, communicates a message of safety to the user.
To this end, another series of experiments focuses on assessing boundaries of perceived safety by measuring the proximal distance in which users feel comfortable interacting with a co-located UAV. These experiments recruit both college-aged students and older adults, but extend the findings of our previous research by enabling users to locomote freely in a small area and provide commands to the UAV using a hand-tracked controller. This additional layer of complexity helps to better simulate a real-world scenario in which an older adult must cooperate with a UAV in performing a given task. By collapsing across responses from many trials under different conditions, we can generate a boundary of an interaction space within which a human observer feels comfortable interacting with the UAV. Because we expect carebots to be operating in assisted living communities or elderly care facilities, we can use this data to design parameters for the LLC, such as proximity bounds, velocity profiles, and obstacle avoidance thresholds (see Figure 3.).
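One simple way to turn pooled trial responses into an LLC parameter is to take a conservative percentile of the closest comfortable approach distances. The sketch below is a hypothetical reduction, with made-up numbers, not our experimental data or analysis code.

```python
import numpy as np

def proximity_bound(comfort_distances_m, percentile=90):
    """Conservative proximity bound for the LLC from pooled trial data.

    comfort_distances_m : closest approach (m) each participant still
    rated as comfortable, pooled across trials and conditions.
    Returns the distance that covers `percentile` percent of users.
    """
    d = np.asarray(comfort_distances_m, dtype=float)
    return float(np.percentile(d, percentile))

# Hypothetical trial data in metres; not real experimental results.
trials = [0.9, 1.2, 1.5, 1.1, 2.0, 1.3, 1.8, 1.0]
bound = proximity_bound(trials)   # keep-out radius handed to the LLC
```

Raising the percentile trades mission efficiency for comfort coverage, which is exactly the kind of knob the VR data lets us set empirically rather than by guesswork.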
Choreography of Expressive Robot Trajectories
The user’s perceived safety of the robot will also depend on their ability to determine the state of the current robot team. This requires communication via expressive robotic movement. For example, if the user requests that a task be accomplished with urgency, they will expect a more aggressive action from the system, and thereby the parameters of perceived safety will change. The HLC will combine user requests with expressive actions and communicate new appropriate parameters to the LLC. The team is using Laban/Bartenieff Movement Studies (LBMS) to choreograph these expressive pathways. This system of movement analysis and notation provides a taxonomy with which to describe the robot motion. Principles of body organization and movement quality, mapped to a quantitative measure of platform expressivity and task complexity, will guide the design of the HLC and the commands the user interface supports.
A Human-Centered Approach
It is important that we not only design our system to be acceptable to users, but that it helps them in their goal of aging in place. We are motivated by recent research on the factors most important in driving elders out of independent living. In that work, literature searches and focus groups found that the factors most threatening to independence are related to problems with mobility, self-care, and social isolation. Thus, our goal is to mitigate some of those underlying factors through useful applications. Using UAVs to manipulate objects in the home is an example of how we can assist older adults when they are limited by mobility or reach. If they have difficulty with medication adherence, the correct medicine can be delivered directly to them, along with water or food. It is also important to use such technology to forestall further decreases in the user’s own capabilities: the robots could be utilized as personal dance partners to help keep users physically and mentally fit and socially fulfilled. The embodied cognition of the UAVs makes them suitable and compelling to users as partners in their everyday lives, as opposed to an app on a phone; they can work together toward the goal of extending the user’s sphere of functionality.
Our current interface concept, realized as a tablet computer and shown in Figure 4., draws on an extensive literature on designing interfaces according to the preferences and abilities of older adults, especially in light of physical and cognitive declines. For example, we eliminate anything that is not crucial, simplify navigation, and highlight important information to accommodate diminished working memory and attention.
Safe Autonomous Flight
Safe indoor navigation for UAVs is a multi-pronged problem consisting of trajectory generation, collision avoidance, path following and state estimation while also acknowledging the safety concerns of humans in proximity.
To incorporate a human-centered approach to the navigation problem, we propose an optimal control formulation, Equation (1), which includes the perceived safety of humans as a constraint to generate trajectories, together with collision avoidance methods to navigate around obstacles in the environment. Since quad-rotor dynamics are differentially flat, the optimization can be devised in the output space, allowing us to easily add constraints such as velocity, acceleration, distance to obstacles, and human perceived safety directly to the problem formulation. The inferences derived from the psychological experiments in VR provide an expedited method to weigh the importance of these quantities against perceived safety and to design constraints for the optimal control problem accordingly.
Safe optimal trajectories for the UAVs can be generated by minimizing the following cost functional:

    J(x, u) = ∫_{t0}^{tf} E(x(t), u(t), t) dt,    (1)

subject to the vehicle dynamics, actuator limits, obstacle clearance, and the perceived safety constraints, where E(x(t), u(t), t) is the state-, input-, and/or time-dependent running cost.
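To illustrate how such a problem can be solved numerically, the cost functional can be transcribed into a finite-dimensional program over sampled positions. The sketch below is not the project's solver: the horizon, the human's position, and the 1.0 m comfort radius are assumed values, and finite-difference accelerations stand in for the running cost E.

```python
import numpy as np
from scipy.optimize import minimize

# Direct-transcription sketch of a safety-constrained trajectory.
# All numbers here are illustrative assumptions.
N, dt = 20, 0.1
start, goal = np.array([0.0, 0.0]), np.array([2.0, 0.0])
human, d_min = np.array([1.0, 0.4]), 1.0    # perceived-safety radius (m)

def cost(z):
    p = z.reshape(N, 2)
    acc = (p[2:] - 2 * p[1:-1] + p[:-2]) / dt**2   # finite-diff accel
    return np.sum(acc**2) * dt                     # discretized running cost

def safety(z):                                     # >= 0 outside the radius
    p = z.reshape(N, 2)
    return np.linalg.norm(p - human, axis=1) - d_min

cons = [
    {"type": "eq", "fun": lambda z: z.reshape(N, 2)[0] - start},
    {"type": "eq", "fun": lambda z: z.reshape(N, 2)[-1] - goal},
    {"type": "ineq", "fun": safety},
]
z0 = np.linspace(start, goal, N).ravel()           # straight-line guess
res = minimize(cost, z0, constraints=cons,
               method="SLSQP", options={"maxiter": 200})
path = res.x.reshape(N, 2)                         # smooth detour around human
```

The straight-line initial guess violates the comfort radius; the optimizer bends the path around the keep-out disc while keeping accelerations, and thus perceived aggressiveness, small.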
In order to provide a flight that does not adversely affect perceived safety, we must precisely follow the trajectory solved by the optimal control problem. Large deviations from the optimal solutions will disrupt the human’s level of comfort. We use nested feedback loops in order to guarantee tight performance bounds. Rate and attitude controllers provide stable flight and can also achieve behavior-specific goals, e.g., “friendly” and less aggressive movements, or bounds on velocity and acceleration. To fly UAVs through the unstructured and cluttered environments of households, we use a trajectory-tracking controller, which guarantees precise locomotion in space. To ensure a predictable flight and guarantee minimal deviation from the designed level of comfort, we use robust and adaptive control for the inner-loop controller design. The L1 adaptive control architecture is designed for such safety-critical systems and provides the required robustness and performance.
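The nesting idea can be shown with a toy one-dimensional cascade: an outer position loop commands a velocity, which is clamped to a comfort bound before an inner loop tracks it. The gains and the 0.5 m/s bound below are illustrative assumptions; this is a didactic stand-in, not the L1 adaptive design used on the actual platform.

```python
# Toy 1-D cascade. Gains and V_MAX are assumed, illustrative values.
KP_POS, KP_VEL = 1.5, 4.0
V_MAX = 0.5                      # "friendly" velocity bound (m/s)

def step(x, v, x_ref, dt=0.02):
    v_cmd = KP_POS * (x_ref - x)               # outer position loop
    v_cmd = max(-V_MAX, min(V_MAX, v_cmd))     # comfort bound on speed
    a = KP_VEL * (v_cmd - v)                   # inner velocity loop
    v += a * dt                                # integrate acceleration
    x += v * dt                                # integrate velocity
    return x, v

x, v = 0.0, 0.0
for _ in range(1000):            # 20 s of simulated motion toward 1 m
    x, v = step(x, v, 1.0)
```

Because the bound is applied to the commanded velocity between the loops, aggressive position errors never translate into motion faster than the experimentally derived comfort limit.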
Conclusion and Future Work
This paper demonstrates a multidisciplinary approach that proposes to augment future caregiving by prolonging independence for older adults. The proposed framework relies on an iterative process of LLC design through experimental data collected from psychological trials. We also present an intuitive tablet-based interface design for effective communication with the carebots.
Future work includes the exploration of multiple carebots to cooperatively assist in caregiving tasks based on our human-centered design approach. We are also investigating more lightweight and natural voice- and gesture-based interaction methods between humans and the carebots.
This work has been supported by the National Science Foundation through the National Robotics Initiative grant number 1528036. The authors would like to thank the students Venanzio Cichella, Bentic Sebastian, Ishaan Pakrasi and Aidan Jones and also the collaborators Camille Goudeseune and Ronald Carbonari from the Illinois Simulation Lab (ISL).