Research Terms: Computer Simulation and Modeling
Industries: Modeling, Simulation, & Training (MST)
Researchers at the University of Central Florida have invented an innovative, multi-sensory, interactive training system that realistically mimics wounds and provides constant, dynamic feedback to medical trainees as they treat the wounds. Almost like a real-life video game, the Tactile-Visual Wound (TVW) Simulation Unit portrays the look, feel, and even the smell of different types of human wounds (such as a puncture, stab, slice, or tear). It also tracks and analyzes a trainee's treatment responses and provides corrective instructions.
The TVW invention is a multi-sensory wound simulation unit. By combining several technologies, the invention provides an immersive experience for trainees. A TVW unit can include augmented reality software and a headset; sensors; actuators and markers integrated into a medical manikin; and a computer processor. An alternative configuration uses interactive moulage components affixed to a real person instead of a manikin. When activated, the unit's AR system continuously tracks the TVW, estimates the deformation of the wound over time, and monitors its response to treatment. For example, a trainee might see (via the AR glasses or headset) a projection that shows blood flowing out of the manikin's wound and vital signs "dropping." When the trainee applies pressure to the wound, sensors detect the action and wirelessly relay the data to the AR system. In response, the AR system renders (via computer graphics) an appropriate dynamic view of the blood loss slowing, and the physiological simulation reflects stabilized vitals. Real-time depth data or other models of the trainee's hands, medical devices, and similar objects can also affect the simulated visuals that the AR rendering system generates.
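To make the feedback loop concrete, here is a minimal sketch of how pressure readings from a wound sensor might drive the simulated bleeding and vitals. The names (WoundState, read_pressure_kpa, draw_wound) and the toy physiological model are illustrative assumptions, not the actual TVW implementation.

```python
import time

class WoundState:
    """Toy model of simulated blood loss and vital signs for one wound."""

    def __init__(self, base_bleed_rate: float = 2.0):  # mL/s with no treatment
        self.base_bleed_rate = base_bleed_rate
        self.blood_lost_ml = 0.0

    def update(self, applied_pressure_kpa: float, dt: float) -> float:
        """Advance the simulation; pressure applied to the wound slows the bleed."""
        rate = max(0.0, self.base_bleed_rate - 0.5 * applied_pressure_kpa)
        self.blood_lost_ml += rate * dt
        return rate

    def systolic_bp(self) -> float:
        """Vitals 'drop' as simulated blood loss accumulates."""
        return max(60.0, 120.0 - 0.05 * self.blood_lost_ml)


def training_loop(sensor, ar_renderer, wound: WoundState, dt: float = 0.1) -> None:
    """Poll the manikin's pressure sensor and update the AR view each frame."""
    while True:
        pressure = sensor.read_pressure_kpa()          # wirelessly relayed reading
        bleed_rate = wound.update(pressure, dt)
        ar_renderer.draw_wound(bleed_rate=bleed_rate,  # flow slows under pressure
                               vitals=wound.systolic_bp())
        time.sleep(dt)
```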
Researchers at the University of Central Florida have developed a method of using dynamic, realistic simulations to facilitate real human awareness of, and trust in, an autonomous control system such as a self-driving vehicle. The method employs online virtual humans that dynamically perform behaviors, such as gestures, poses, and verbal interactions, based on the real-time input and output data of the autonomous system. Other examples of autonomous systems include financial transaction applications and medical systems.
Most computerized or automated systems appear as “black boxes” to people, conveying only quantitative information, such as a system’s state or environmental conditions, through words or symbology. For example, an autonomous car may use electronic gauges, alert indicators, or diagrams of nearby vehicles. Studies have shown that this kind of feedback causes people to have negative feelings toward autonomous systems, such as uncertainty, concern, stress, or anxiety.
However, a study by the UCF researchers found that individuals who heard an autonomous agent respond to a command, such as turning on a light, experienced an increased sense of presence and trust after also seeing a visual representation of that agent perform the action. As a result, the researchers designed a method in which a dynamic virtual human not only appears to control an autonomous system but continually exhibits awareness of the system’s state by reacting to situational input and output data in synchronization with the system.
Technical Details
The invention comprises a method that uses virtual humans to foster trust in autonomous control systems. It uses a computer processor, sensors, and a display device to generate dynamic 2D or 3D virtual characters online and in real time. Computer-readable instructions cause the processor to evaluate input data and logically modify the output of the display device in response.
Preferably, the method uses some form of augmented reality (AR). In such cases, the virtual human and virtual controls appear via a head-worn stereo display, a fixed autostereoscopic display, or a monocular display that positions the virtual human in a specific location.
In one example application, an autonomous vehicle slows down to make a right turn. The real human passenger sees the virtual human driver’s movements, which correlate with the actions made by the autonomous vehicle. For instance, the virtual human uses her hands to manipulate simulated objects such as the steering wheel and the right-turn blinker. Additionally, the virtual human turns her head left and right, appearing to visually scan the area before the turn, synchronizing with the actions of the camera and sensors of the vehicle. She then moves her rendered foot from the accelerator pedal to the brake pedal as the control system applies the actual brakes. Finally, as the system turns the wheels of the vehicle, the virtual human appears to turn the steering wheel.
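As a sketch of how such synchronization might be wired up, the snippet below maps hypothetical vehicle control outputs to avatar animations. The VehicleState fields and the avatar animation API (set_hand_pose, play_gesture, and so on) are illustrative assumptions, not the actual UCF implementation.

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    steering_angle_deg: float    # positive = turning right
    brake_pressure: float        # 0.0 (released) to 1.0 (full)
    right_blinker_on: bool
    scanning_intersection: bool  # derived from camera/sensor activity

def sync_virtual_driver(avatar, state: VehicleState) -> None:
    """Animate the avatar so its behavior mirrors the real control system."""
    # Hands follow the actual steering command.
    avatar.set_hand_pose("steering_wheel", angle_deg=state.steering_angle_deg)

    # Foot moves from the accelerator to the brake as the real brakes engage.
    pedal = "brake" if state.brake_pressure > 0.0 else "accelerator"
    avatar.set_foot_pose(pedal, pressure=state.brake_pressure)

    # Blinker gesture and head scanning synchronize with the vehicle's sensors.
    if state.right_blinker_on:
        avatar.play_gesture("flip_right_turn_signal")
    if state.scanning_intersection:
        avatar.play_gesture("look_left_then_right")
```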
Partnering Opportunity
The research team is looking for partners to develop the technology further for commercialization.
Researchers at the University of Central Florida have invented a better way to track objects such as head-mounted displays (HMDs) and handheld controllers of multiple users as they interact in a shared space. Virtual environment systems today encounter some amount of static or dynamic error that can limit a system's usefulness, and when two or more users working on a joint task are tracked close to each other, these errors compound. The UCF multi-participant tracking system combines and extends conventional global and body-relative approaches to "cooperatively" estimate the relative poses between all useful combinations of devices worn or held by two or more users. Example applications include hands-on training where medical professionals simulate surgical or trauma team activities, small military units in joint training exercises, or civilians in multi-user scenarios.
Technical Details
The UCF invention consists of systems and methods for tracking one user/object with respect to all others and the environment. Tracking technologies for interactive computer graphics (for example, virtual/augmented reality or related simulation, training, or practice) are used to estimate the posture and movement of humans and objects in a three-dimensional working volume. This is typically known as six-degree-of-freedom (6DOF) "pose" tracking (estimation of x, y, z positions and roll, pitch, yaw orientations).
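For illustration, a minimal 6DOF pose representation and the relative-pose computation at the core of cooperative tracking might look like the sketch below. The matrix conventions are an assumption, and this is not the published COMOCAP algorithm.

```python
import numpy as np

class Pose:
    """6DOF pose: a 3x3 rotation (roll/pitch/yaw) plus an x, y, z translation."""

    def __init__(self, rotation: np.ndarray, translation: np.ndarray):
        self.R = rotation     # 3x3 orthonormal rotation matrix
        self.t = translation  # 3-vector position

    def inverse(self) -> "Pose":
        R_inv = self.R.T
        return Pose(R_inv, -R_inv @ self.t)

    def compose(self, other: "Pose") -> "Pose":
        """Apply `other` within this pose's frame (T_self * T_other)."""
        return Pose(self.R @ other.R, self.R @ other.t + self.t)


def relative_pose(pose_a: Pose, pose_b: Pose) -> Pose:
    """Pose of device B expressed in device A's frame: inverse(T_a) * T_b."""
    return pose_a.inverse().compose(pose_b)
```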
Conventional tracking technologies lack several capabilities that the UCF system provides.
The invention has the added benefit of allowing the HMDs, handheld controllers, or other tracked devices to "ride out" periods of reduced or no observability of externally mounted devices.
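One plausible way to realize this "ride-out" behavior, reusing the Pose sketch above, is to dead-reckon with body-relative motion increments whenever the external tracker has no view of the device. This fallback logic is an illustrative assumption, not necessarily the invention's mechanism.

```python
from typing import Optional

def update_pose(current: Pose, global_fix: Optional[Pose], body_delta: Pose) -> Pose:
    """Prefer an external (global) fix; otherwise propagate the last estimate
    with the body-relative motion increment measured since the last update."""
    if global_fix is not None:
        return global_fix               # external tracker currently sees the device
    return current.compose(body_delta)  # ride out the dropout via dead reckoning
```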
A Novel Approach for Cooperative Motion Capture (COMOCAP), International Conference on Artificial Reality and Telexistence, Eurographics Symposium on Virtual Environments (2018). https://doi.org/10.2312/egve.20181317
University of Central Florida researchers have invented a method that enables people to see more detail in distant objects than existing visual magnification systems can provide. The UCF innovation uses cameras to capture objects at a higher resolution than the human eye can resolve and then presents imagery of the objects to a user via an augmented reality (AR) see-through display. Existing visual magnification systems, such as camera zooms and binoculars, typically provide the same level of magnification for all objects in the field of view.
The UCF technology, however, gives users real-time, dynamic control over what they view. Thus, users can selectively amplify the size of a target object's spatially registered retinal projection while maintaining a natural (unmodified) view in the remainder of the visual field. Also, while one user views a magnified object on an AR see-through display, other users can view the same or different target objects on their displays. When individual users face different directions, their displays present a consistent spatial representation of the target relative to their lines of sight.
In one example military application, an integrated AR magnification system enables users to selectively magnify one or more objects, including enemy combatants, civilians, vehicles, ships or airplanes. Another example uses the technology to magnify specific landmarks and road signs for navigation systems such as the heads-up displays in cars.
Technical Details
The UCF invention is a computer-implemented method of intelligently magnifying objects in a field of view, using one or more cameras to capture objects at a higher resolution than the human eye can perceive. In an example process, the system captures high-resolution imagery, identifies a user-selected target, enlarges the target's spatially registered projection, and composites the result onto the AR see-through display while leaving the rest of the view unmodified, as sketched below.
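The following is a minimal sketch of that pipeline, assuming a high-resolution camera frame and a user-selected target bounding box; the function names and the display compositing call are hypothetical.

```python
import numpy as np

def magnify_target(frame: np.ndarray, box: tuple, factor: float) -> np.ndarray:
    """Crop the target region and enlarge it by `factor` using
    nearest-neighbor resampling (keeps the sketch dependency-free)."""
    x, y, w, h = box
    crop = frame[y:y + h, x:x + w]
    rows = np.repeat(np.arange(crop.shape[0]), int(factor))
    cols = np.repeat(np.arange(crop.shape[1]), int(factor))
    return crop[np.ix_(rows, cols)]

def render_overlay(display, frame: np.ndarray, box: tuple, factor: float = 4.0) -> None:
    """Composite the enlarged patch over the target's spatially registered
    position; the rest of the see-through view remains unmodified."""
    patch = magnify_target(frame, box, factor)
    display.draw_registered(patch, anchor=box)  # hypothetical AR display call
```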
Partnering Opportunity
The research team is looking for partners to develop the technology further for commercialization.
Stage of Development
Prototype available.
Virtual Big Heads: Analysis of Human Perception and Comfort of Head Scales in Social Virtual Reality, 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 425-433 (2020). https://doi.org/10.1109/VR46266.2020.00063
The University of Central Florida invention comprises tactile-visual systems and methods for social interactions between isolated patients (for example, those with COVID-19) and remote visitors such as loved ones, family members, friends, or volunteers. A primary goal is to provide the isolated patient and the remote visitors with a visual interaction augmented by touch—a perception of being touched for the isolated patient and a perception of touching for the remote visitors. For example, a loved one might be able to virtually stroke the patient’s arm or head, or even squeeze the patient’s hand. A simple realization might include tactile transducer “strips” placed on the patient, with two-way video via touch-sensitive tablets, where touching the visual image of the strips on the tablet results in tactile sensations on the patient’s skin.
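As an illustration of the simple realization, the handler below routes a visitor's touch on the tablet's video image to the matching transducer strip; the region and strip interfaces are hypothetical placeholders.

```python
def on_tablet_touch(touch_x: float, touch_y: float, strips, video_regions) -> None:
    """Map a touch on the patient's video image to a tactile sensation on the
    corresponding transducer strip placed on the patient's skin."""
    for strip, region in zip(strips, video_regions):
        if region.contains(touch_x, touch_y):
            # Position along the strip follows the touch position in the image.
            u = region.normalized_offset(touch_x, touch_y)  # 0.0 .. 1.0
            strip.vibrate(position=u, intensity=0.6, duration_ms=150)
            break
```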
A more sophisticated realization could use the Physical-Virtual Patient Bed (PVPB), developed under NSF Award #1564065, to serve as a remote physical, visual, and tactile surrogate for the isolated patient. The remote visitors would be able to see, hear, and touch the PVPB. The isolated patient would see the remote visitors via video and feel their touch interactions via the tactile transducer strips on their arms and head (for example). These interactive video, voice, and touch interactions could provide additional comfort for the isolated patient and the remote visitors. Further embodiments and enhancements include 3D depth and viewing for visitors and patients, with possible 3D free space interaction. For example, visitors wearing augmented reality head-mounted displays could reach out and touch a virtual version of the patient, and the patient would feel tactile sensations. The systems and methods are usable in any conditions giving rise to isolation, including isolation due to geographical distance.
The University of Central Florida invention comprises systems and methods for detecting and coordinating interruptions of the playback or generation of time-sequential digital media. Examples of such media include previously recorded digital video/audio stored on the device, previously recorded digital movies/audio streamed from remote servers (over the Internet), and interactive digital imagery/audio synthesized on the device such as with a computer graphics processor and a game engine. Examples of interruptions include notifications initiated from the device, for example, an incoming email or phone call, and external events related to other individuals or the environment. The timing and form of the interruptions could be "intelligently" coordinated with the playback or generation of time-sequential digital media, such as the immersive virtual reality experience.
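As a hedged sketch of how such coordination might work: queue incoming notifications by priority and deliver the non-urgent ones only when the media reaches a natural break point. The priority scheme and break-point signal are illustrative assumptions.

```python
import heapq

class InterruptionCoordinator:
    """Defers low-priority interruptions until the media reaches a break point."""

    def __init__(self):
        self._pending = []  # min-heap of (priority, message); 0 = most urgent

    def notify(self, priority: int, message: str) -> None:
        heapq.heappush(self._pending, (priority, message))

    def on_media_tick(self, at_break_point: bool, deliver) -> None:
        """Called once per playback tick. Urgent items interrupt immediately;
        everything else waits for a scene or chapter boundary."""
        while self._pending:
            priority, message = self._pending[0]
            if priority == 0 or at_break_point:
                heapq.heappop(self._pending)
                deliver(message)
            else:
                break
```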
The University of Central Florida invention comprises four complementary advantageous features that can help homeowners program, comprehend, and monitor increasingly complex home automation and robotics systems.
In particular, the visualization of otherwise invisible information associated with the first advantageous feature can be used to inform homeowners about multiple aspects of the detected errors, failures, or anomalies. These same systems and methods can also be applied in other contexts, for example, at a workplace, in a vehicle, in a building, or around a city.
Partnering Opportunity
The research team is seeking partners for licensing and/or research collaboration.