This article discusses the concept of human assistance systems (HAS) and research on designing their interfaces. It also addresses how humans and HAS collaborate with each other during such interactions. HAS are expected to detect and compensate for human errors; when a machine is part of the team completing an operation, it is highly desirable that the HAS collaborate with humans effectively. Advances in HAS have been made in application areas including vehicle driving, pilot–flight interfaces, healthcare and rehabilitation, robotics, etc. One important enabler for studying driver assistance systems (DAS) is the availability of a powerful research tool, the driving simulator, which can generate real-world traffic scenarios without putting drivers in any real danger. A control strategy for HAS, especially for DAS, has been investigated; its goal is to provide a warning message and/or an intervention to the driver when necessary to avoid hitting objects on the road while not frustrating the user.
Humans create devices, including structures, machines, etc., to help them cope with nearly all kinds of socio-technical systems (e.g., manufacturing, servicing). In fact, devices have never left humans alone; that is, full automation has never taken place. Humans therefore ubiquitously interact with devices to make sure that jobs are carried out effectively. In particular, humans serve as masters while devices serve as slaves. In this context, devices are inherently identified as human assistance systems (HAS). There are naturally two issues in constructing HAS. The first is how to design the interface of HAS. The second is how humans and HAS collaborate with each other during such interactions.
With regard to the first issue, interfaces are of two types: (i) devices used by humans to communicate with machines, e.g., keyboard and mouse, and (ii) devices used by machines to communicate with humans, e.g., display screens and audio systems. Type (i) devices are called human-to-machine interfaces, and Type (ii) devices are called machine-to-human interfaces. Both types are responsible for the effective and efficient operation of HAS. Interfaces have a soft part and a hard part. The soft part refers to “What, Where, When (WWW)”, i.e., what right information and/or action is communicated to humans from HAS, where, and when; the hard part refers to how that information and/or action is realized with devices.
With regard to the second issue, humans may make errors or act improperly in human-machine interactions, and HAS are expected to detect and compensate for such errors. When a machine is part of the team completing an operation, it is highly desirable that the HAS collaborate with humans effectively. Indeed, HAS possess a certain level of human intelligence, as described below.
HAS Intelligence Levels and Problem Dimensions
There are several levels of intelligence with HAS:
Level 1: HAS that can follow a pre-defined procedure of a human's operations. Many displays in process plants, flight displays, and vehicle displays fall into this level of intelligence. In essence, the system intelligence at this level is built in by interface designers. HAS with this level of intelligence may also be said to possess passive intelligence.
Level 2: HAS that can understand human action, cognition, and/or emotion. One example is the smart steering wheel, where an array of sensors (measuring heart rate, galvanic skin response, etc.) is constructed on the surface of the steering wheel to capture driver states. HAS at this level do not change the machine's behavior, so they still fall into the category of passive intelligence.
Level 3: HAS that possess Level 2 intelligence and can also perform cognitive tasks and change the machine to respond to a new situation arising on the human side. HAS at this level change the machine's behavior, whether through (physical) action or (non-physical) communication; such intelligence may be called active intelligence.
Level 4: HAS that possess Level 3 intelligence and can further exhibit intelligence emotionally; that is, emotion plays an important role in the system's decisions and actions. For instance, the HAS may intervene more aggressively in the braking operation when it detects that the driver is in an angry state. This type of intelligence is also called emotional intelligence.
Level 5: HAS that have Level 4 intelligence and can express emotions known to humans. For instance, a driver assistance system (DAS) may use a particular soft voice to remind a particular driver of a hazard ahead.
Level 6: HAS that have Level 5 intelligence and can express emotions based on the machine's state in a physical and/or cognitive sense. For instance, a DAS would give an emotional message to a particular driver based on the state of the braking system during a braking operation.
Remark (1): Levels 4, 5, and 6 of intelligence with HAS all fall into the category of active intelligence and can be further called emotional intelligence I, II, and III.
Remark (2): In the case of automation, machines are controlled by computers, and human-machine interactions thus become human-computer interactions. However, the nature of the human-machine interaction is not changed, as the computer in this case is part of the machine and part of the machine-to-human interface.
Remark (3): When the machine is purely computer software, with no physical machine of interest behind it, the software itself exhibits and operates the intelligence; persuasive technology is such a case and is a type of HAS at Level 5.
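For reference, the six levels and the groupings in the remarks can be encoded as a small taxonomy. The sketch below is purely illustrative; the class and function names are our own invention, not part of any published HAS implementation.

```python
from enum import IntEnum

# Illustrative encoding of the six HAS intelligence levels; the member
# names are hypothetical labels chosen for this sketch.
class HASLevel(IntEnum):
    PROCEDURAL = 1      # follows a pre-defined procedure (Level 1)
    STATE_SENSING = 2   # understands human action/cognition/emotion (Level 2)
    ACTIVE = 3          # changes the machine's behavior (Level 3)
    EMOTIONAL_I = 4     # emotion shapes decisions and actions (Level 4)
    EMOTIONAL_II = 5    # expresses emotions known to humans (Level 5)
    EMOTIONAL_III = 6   # expresses emotions from the machine's own state (Level 6)

def is_active(level: HASLevel) -> bool:
    """Levels 3-6 change the machine's behavior (active intelligence)."""
    return level >= HASLevel.ACTIVE

def is_emotional(level: HASLevel) -> bool:
    """Levels 4-6 correspond to emotional intelligence I-III (Remark 1)."""
    return level >= HASLevel.EMOTIONAL_I
```

Because the levels are cumulative (each includes the capabilities of the one below), an ordered `IntEnum` captures the "Level N and above" conditions used in the dimensions that follow.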
Basic problems of HAS may be described in the following dimensions:
Dimension 1: interface design and interaction (for all the levels of intelligence), particularly the problem of determining “What, Where, When” for a piece of information relayed to humans and how to exhibit this piece of information.
Dimension 2: development of human-to-machine interacting devices, e.g., joystick, keyboard, etc. (for all the levels of intelligence).
Dimension 3: development of machine-to-human interacting devices, e.g., audio, display screen, etc. (for all the levels).
Dimension 4: development of sensors that are built on or worn by machines for HAS to infer and predict human states (for Level 2 of intelligence and above).
Dimension 5: design of software for HAS to provide assistance, including soft message and/or hard intervention, to humans based on the analysis of information regarding a human's action and cognition (for Level 3 and above).
Dimension 6: design of software for HAS to provide assistance, including soft message and/or hard intervention, to humans based on analysis of information regarding human's action and cognition as well as emotion (for Level 4 and above).
Dimension 7: development of HAS that can exhibit human emotions on appearance (for Level 5 of intelligence).
Dimension 8: understanding of the relationship between machine's states and human's emotions (for Level 6).
HAS in Driving Applications
Advances in HAS have been made within application areas including vehicle driving, pilot–flight interfaces, healthcare and rehabilitation, robotics, etc. In the following, research efforts in HAS for vehicle driving, also called driver assistance systems (DAS), are described.
Advanced Driving Simulator
One important enabler for studying DAS is the availability of a powerful research tool, the driving simulator, as a simulator is an effective means to generate real-world traffic scenarios without putting drivers in any real danger [3,4]. An advanced driving simulator has been under development for the past decade at the IHMS laboratory. Its purpose is to facilitate the research and development of DAS, with an eye on more generalized findings for HAS in other application areas, and to facilitate the training and assessment of drivers in essential driving skills, e.g., reaction to hazards.
Specifically, the driving simulator serves two primary functions (see Figure 1). The first is to test sensors and algorithms that let the HAS understand a driver's states in action, cognition, and emotion. The second is to test operation management systems (both hardware and software) for driver assistance. With these functions, the driving simulator can support research across all of the problem dimensions, Dimensions 1-8, mentioned previously.
The quality of the driving simulator lies in its fidelity. Figures 2 and 3 show various road situations the driving simulator can simulate, and how the simulator facilitates the development of an operation management system for drivers’ reactions to hazardous situations.
The driving simulators have also been developed into a networked platform (Figure 4), upon which scenarios with multiple drivers on the road can be constructed [5,6,7]. Specifically, the networked simulator enables (a) the simulation of single- and multi-driver immersive driving, (b) the visualization of interactive surrounding traffic, (c) the specification and creation of reproducible traffic scenarios, (d) the capture of drivers’ behavioral and physiological data, and (e) real-time information communication between vehicles.
Driver Hazard Perception
The primary goal of DAS in this case is first to assess hazards (including the driver's state of hazard perception, intent to react, and reaction) and then to take action (or no action) accordingly. Clearly, understanding the driver's hazard perception is most crucial. A driver's hazard perception behavior is found to be sensitive to the driver's physiological state, especially as reflected in electroencephalography (EEG). This provides an avenue for developing a real-time marker, or indicator, of the driver's hazard perception behavior. A project was carried out to develop such a marker; its objective is to build a map between the physiological signals and the hazard perception behavior derived from a standard test available in the literature. In this pilot study, about 50 participants were asked to view images of two categories: non-hazardous situations and hazardous situations (see Figure 5). The participants were required to respond to the situation in each image (Yes or No for hazardous-situation identification). During the task, physiological signals of the participants were measured, including EEG and skin conductance (SC) signals. Figure 6 shows the experiment scene, in particular the display of hazardous images and the measurement of the participants' EEG and SC. Data analysis establishes the mapping among the behavioral score, the physiological signals, and the risk/no-risk category. It also reveals that the physiological signals are more sensitive to the risk/no-risk category than the behavioral score is, which strongly suggests that a driver's physiological responses are potentially reliable objective measures for driver licensing tests.
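The mapping just described, from physiological measurements to the hazard/no-hazard category, can be sketched as a simple classifier. The following is an illustrative sketch on synthetic data; the two features (e.g., an SC amplitude and an EEG band-power value), the sample sizes, and all numbers are assumptions, not the study's actual data or method.

```python
import numpy as np

def train_logistic(X, y, lr=0.1, steps=2000):
    """Fit logistic-regression weights (last entry is the bias)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))   # predicted hazard probability
        w -= lr * Xb.T @ (p - y) / len(y)   # mean-gradient descent step
    return w

def predict_hazard(X, w):
    """Label each trial 1 (hazardous) or 0 (non-hazardous)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (1.0 / (1.0 + np.exp(-Xb @ w)) > 0.5).astype(int)

# Synthetic demo: hazardous trials show elevated responses on both features.
rng = np.random.default_rng(0)
safe = rng.normal([0.2, 0.3], 0.1, size=(40, 2))
hazard = rng.normal([0.8, 0.7], 0.1, size=(40, 2))
X = np.vstack([safe, hazard])
y = np.array([0] * 40 + [1] * 40)
w = train_logistic(X, y)
accuracy = (predict_hazard(X, w) == y).mean()
```

Any standard classifier would serve here; logistic regression is used only because its decision probability gives a natural "hazard marker" between 0 and 1.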
Driver Road Rage
Driving anger, called “road rage”, is a unique emotion caused by pressure or frustration from daily life, from bad traffic situations, or from the discourteous behavior of surrounding drivers. First, the anger emotion was induced by elicitation events. Then, anger intensity was labeled using self-reported anger levels and associated with EEG spectral features under the different driving anger states. In particular, the relative energy spectra of the δ, θ, α, and β bands of the EEG signal were obtained for the different anger levels (see Figure 7). As shown, the relative energy spectrum of the β band (β%) is lowest in the neutral state (anger level = 0) and highest at anger level 5, and β% markedly increases with anger level, while the relative energy spectrum of the θ band (θ%) markedly decreases. Additionally, the relative energy spectrum of the δ band (δ%) in the anger states (anger levels 1, 3, and 5) is smaller than in the neutral state, and the relative energy spectrum of the α band (α%) at anger levels 1 and 3 is smaller than in the neutral state. However, no consistent trend with increasing anger level was found for δ% or α% [10,2].
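The relative band energies above (δ%, θ%, α%, β%) can be computed from an EEG segment as each band's share of total spectral power. A minimal sketch, assuming the conventional band edges and an FFT power spectrum; the synthetic signal is an assumption for demonstration, not recorded EEG.

```python
import numpy as np

# Conventional EEG band edges in Hz (delta, theta, alpha, beta).
BANDS = {"delta": (0.5, 4.0), "theta": (4.0, 8.0),
         "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

def relative_band_power(eeg, fs):
    """Return each band's share of the total 0.5-30 Hz spectral power."""
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    power = np.abs(np.fft.rfft(eeg)) ** 2
    total = power[(freqs >= 0.5) & (freqs < 30.0)].sum()
    return {name: power[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}

# Demo: a signal dominated by a 20 Hz (beta-band) oscillation should yield
# a beta% far larger than theta%.
fs = 256
t = np.arange(fs * 4) / fs
eeg = np.sin(2 * np.pi * 20 * t) + 0.2 * np.sin(2 * np.pi * 6 * t)
rel = relative_band_power(eeg, fs)
```

Because the four bands partition the 0.5-30 Hz range, the four shares sum to one, which is what makes β% and θ% directly comparable across anger levels.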
Sensing is a fundamental problem in human-machine systems; see the previous discussion on the dimensions of problems (Dimension 4) and on the levels of intelligence of human-machine systems (Level 2 and above). Sensors play three roles: understanding the scene, the machine, and the human. Sensors for the scene and the machine are not a focus of this paper; sensing the machine is the business of machine manufacturers. The IHMS laboratory focuses on sensors for the human. Ample evidence shows that human physiological signals are sensitive to human states, and the essential criterion for sensors that measure them is non-intrusiveness. We have focused on the so-called “natural contact sensor”, which makes a machine “wear” a wrapper, or engineers a “skin” on the machine, with sensors embedded in that wrapper or skin. The natural contact sensor paradigm for human physiological signals is complementary to the wearable sensor paradigm, in which humans must “wear” sensors in order to have their physiological signals measured.
There are two challenges with the natural contact sensor paradigm: (i) how to install the suite of sensors in a machine, and (ii) how to predict which points on the machine subjects will contact during operation. The concept of a flexible thin-film sensing array was therefore proposed; such an array can be easily wrapped around and retrofitted to machine surfaces, addressing both challenges. On this basis, two such non-intrusive sensors have been developed. (1) Skin temperature sensor. Changes in temperature and pressure from humans are the main factors that induce cross-interference. To perform temperature compensation, the temperature sensors should be robust thermometers with stable performance that suffer little from cross-interference; specific skin temperature sensors have been developed and their performance verified. (2) Heart rate sensor. For heart rate variability, heart rate can be measured by observing the amount of infrared light from a light source that is reflected by the skin, i.e., by measuring the Blood Volume Pulse (BVP). By combining quantum dots with the conductive polymers used to make organic LEDs, a thin, flexible film that can measure BVP can be developed.
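The BVP-based heart rate measurement in (2) can be sketched as peak counting on the reflected-light waveform. The naive peak detector and the synthetic 72-bpm pulse below are illustrative assumptions, not the sensor's actual signal-processing chain.

```python
import numpy as np

def heart_rate_bpm(bvp, fs):
    """Estimate beats per minute from the mean interval between BVP peaks."""
    x = bvp - bvp.mean()
    # A peak is a positive local maximum of the zero-centered waveform.
    peaks = [i for i in range(1, len(x) - 1)
             if x[i] > 0 and x[i] > x[i - 1] and x[i] >= x[i + 1]]
    if len(peaks) < 2:
        return 0.0
    intervals = np.diff(peaks) / fs   # seconds between successive beats
    return 60.0 / intervals.mean()

# Synthetic demo: a clean 1.2 Hz pulse corresponds to about 72 bpm.
fs = 64
t = np.arange(fs * 10) / fs
bvp = np.sin(2 * np.pi * 1.2 * t)
bpm = heart_rate_bpm(bvp, fs)
```

A real BVP trace would need band-pass filtering and artifact rejection before peak detection; the sketch shows only the core interval-to-rate conversion.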
Control Strategy of HAS
A control strategy for HAS has been investigated, especially for DAS. The goal is to provide a warning message and/or an intervention to the driver when necessary to avoid hitting objects on the road (e.g., lead cars) while not frustrating the user. The DAS makes use of the program for assessing the driver's hazard perception and assesses the risk level of the driving per se. For the middle and high risk levels, one example is to provide the warning and/or intervention based on a discrete PID control law. Intuitively, adjusting one's physical and mental states to achieve optimal performance is the operator's own responsibility. In reality, having the operator as the only controller is not sufficient, because human (controlling) behavior is inherently uncertain. An idea that deserves more exploration is to develop an intelligent system that collaborates with the human operator in controlling the machine's executive unit. This kind of intelligent HAS is referred to as an operator assistance system (OAS). All human-machine cooperative systems can thus be simplified to a structure with operators, HAS or OAS, and peripheral executive mechanisms. In simpler terms, the HAS becomes the brain of the machine, and the remaining executive mechanisms are its actuators. The general function of HAS is to maintain a nominal situation during human-machine cooperation. This role is performed through three basic functions: (1) perceiving cues of the human operator's states, (2) inferring the operator's mental state, and (3) making decisions and adjustments aimed at recovering or maintaining the nominal state. HAS leave the execution functions (e.g., steering wheel turning, gas pedal control, and brake pedal control in driving) to the actuators. From the viewpoint of HAS, the target system is a human-in-the-loop system; from the machine side, HAS collaborate with the human operator to jointly control the actuators.
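The discrete PID control law mentioned above can be sketched as follows. The gains, the risk-error signal, and the clipping of the output to a 0..1 intervention strength are illustrative assumptions, not the actual DAS design.

```python
# Minimal sketch of a discrete PID law of the kind a DAS could use to
# grade its warning/intervention against the assessed risk level.
class DiscretePID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        """u[k] = Kp*e[k] + Ki*sum(e)*dt + Kd*(e[k] - e[k-1])/dt, clipped."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(0.0, min(1.0, u))  # 0 = no action, 1 = full intervention

# Demo: error is the assessed risk minus an acceptable-risk setpoint, so a
# low-risk state yields no intervention and rising risk yields a stronger one.
pid = DiscretePID(kp=0.8, ki=0.2, kd=0.05, dt=0.1)
outputs = [pid.step(risk - 0.3) for risk in (0.2, 0.5, 0.9)]
```

Clipping the output keeps the intervention bounded, matching the stated goal of assisting without frustrating the driver.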
Conclusion, Limitations and Future Direction
A technical system should be viewed as a human-machine system, as the premise of any technical system is that the human serves as the master while the machine serves as the slave. HAS are a generic notion: a part of the machine system, attachable to and detachable from machines, intended to improve the machine's level of intelligence and ultimately the mission accomplishment of the human-machine system. This paper presented six levels of intelligence with HAS to improve machine intelligence in working with humans, and eight dimensions of problems in developing HAS at these levels. A summary of some related research projects has also been given. However, our current work has a few limitations. Take driver hazard perception, for example: further work is planned to have participants perform the test in more realistic situations, i.e., in a driving simulator or in real road tests. Based on the above analysis, research on sensors (i.e., on Dimension 4 and below) is still needed to further improve the accuracy of inferring and predicting human cognitive and emotional states, especially human intent to take actions; this includes research on both sensors and information fusion algorithms. Among these, human intent may be one of the most challenging research problems, but it is probably well worth pursuing given the benefits it would bring to human-machine interactions. Second, research needs to be conducted to build the emotional intelligence of HAS (i.e., on Dimension 6 and beyond). The key challenge is to develop the associations between the machine's state (cognitive and physical) and human emotion, and the principles behind those associations.
Despite its great technical and social significance, the modeling of human states and behaviors remains one of the greatest challenges in science and technology. Human states and behaviors are highly nonlinear, uncertain, and random, which challenges many scientific disciplines. This line of research truly calls for interdisciplinary and transdisciplinary collaboration among experts from all related fields to produce groundbreaking discoveries in the new era of human-machine interactions.
Various aspects of this work have been supported by the National Science Foundation (NSF) through grants #0954579 and #1333524.