Abstract
Researchers at the University of Central Florida have developed a method of using dynamic, realistic simulations to facilitate real human awareness and trust in an autonomous control system like a self-driving vehicle. The method employs online virtual humans that dynamically perform behaviors, such as gestures, poses, and verbal interactions based on the real-time input and output data of the autonomous system. Other examples of autonomous systems can include financial transaction applications and medical systems.
Most computerized or automated systems appear as “black boxes” to people, providing only quantitative information about the system’s state or environmental conditions via words or symbols. For example, an autonomous car may use electronic gauges, alert indicators, or diagrams of nearby vehicles. Studies have shown that this kind of feedback causes real humans to have negative feelings toward autonomous systems, such as uncertainty, concern, stress, or anxiety.
However, a study by the UCF researchers found that individuals who heard an autonomous agent respond to a command, such as turning on a light, experienced an increased sense of presence and trust after also seeing a visual representation of that agent perform the action. As a result, the researchers designed a method in which a dynamic virtual human not only appears to control an autonomous system but continually exhibits awareness of the system’s state by reacting to situational input and output data in synchronization with the system.
Technical Details
The invention comprises a method that uses virtual humans to foster trust in autonomous control systems. It includes using a computer processor, sensors and a display device to generate dynamic online and real-time virtual 2D or 3D characters. Computer-readable instructions cause the processor to evaluate input data and logically modify the output of the display device in response.
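The evaluate-input, modify-output loop described above can be sketched in simplified form. This is a hypothetical illustration only; all function and state names below are invented for clarity and are not part of the patented system.

```python
# Hypothetical sketch: the processor evaluates the autonomous system's
# real-time input data and selects a virtual-human behavior to render.

def select_behavior(sensor_state):
    """Map the autonomous system's current state to a virtual-human behavior."""
    if sensor_state.get("braking"):
        return "move_foot_to_brake_pedal"
    if sensor_state.get("turn_signal") == "right":
        return "activate_right_blinker"
    return "hands_on_steering_wheel"

def update_display(sensor_state):
    """One loop iteration: evaluate input data, modify the display output."""
    behavior = select_behavior(sensor_state)
    # A real renderer would animate the 2D/3D character here;
    # this sketch just returns the chosen pose.
    return {"render": behavior}
```

In a deployed system this loop would run continuously, so the virtual human stays synchronized with the autonomous system rather than reacting after the fact.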
Preferably, the method uses some form of augmented reality (AR). In such cases, the virtual human and virtual controls appear via a head-worn stereo display, a fixed autostereoscopic display, or a monocular display that positions the virtual human in a specific location.
In one example application, an autonomous vehicle slows down to make a right turn. The real human passenger sees the virtual human driver’s movements, which correlate with the actions made by the autonomous vehicle. For instance, the virtual human uses her hands to manipulate simulated objects such as the steering wheel and the right-turn blinker. Additionally, the virtual human turns her head left and right, appearing to visually scan the area before the turn, synchronizing with the actions of the camera and sensors of the vehicle. She then moves her rendered foot from the accelerator pedal to the brake pedal as the control system applies the actual brakes. Finally, as the system turns the wheels of the vehicle, the virtual human appears to turn the steering wheel.
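The right-turn example above pairs each vehicle control event with a synchronized virtual-human action. A minimal sketch of such an event-to-animation mapping follows; the event and animation names are hypothetical, chosen only to mirror the sequence in the example.

```python
# Hypothetical pairing of vehicle control events (in order) with the
# virtual-human animations that synchronize with them.

TURN_SEQUENCE = [
    ("scan_environment", "turn_head_left_and_right"),
    ("signal_right",     "reach_for_turn_blinker"),
    ("apply_brakes",     "move_foot_from_accelerator_to_brake"),
    ("steer_right",      "rotate_steering_wheel_right"),
]

def animate_for_event(vehicle_event):
    """Return the virtual-human animation synchronized with a vehicle event."""
    mapping = dict(TURN_SEQUENCE)
    return mapping.get(vehicle_event, "idle_pose")
```

The key design point from the source is synchronization: each animation is driven by the same real-time control data the vehicle acts on, so the passenger sees the virtual driver “perform” each maneuver as it happens.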
Partnering Opportunity
The research team is looking for partners to develop the technology further for commercialization.
Benefit
- Reduces negative feelings that real humans have toward autonomous systems: uncertainty, stress or anxiety
- May increase the confidence of vulnerable nearby individuals (for example, pedestrians or bystanders) that the system is safe
- Applicable to medical, financial and other autonomous systems
Market Application
- Autonomous cars and other vehicles
- Artificial intelligence and data-driven systems