Researchers at the University of Central Florida have invented a Physical-Virtual Patient Bed System that simulates a human patient in a hospital bed. The invention can change the patient's appearance (race and various medical symptoms), alter the patient's size (child or adult), and mimic critical physiological signals. The system may be used for a range of civilian and military medical training.
Medical simulation and training centers often make use of stand-ins or surrogates for patients, including human actors who pretend to be sick or sophisticated robotic human mannequins called “human patient simulators” (HPS). Humans do not always offer consistency, and HPS cannot change appearance or exhibit human emotions. Because of deficiencies in the current technology, there is a need for this type of customizable system.
The system comprises a real hospital bed with a Physical-Virtual Patient shell lying in it. The system’s electromechanical components change body shapes, project different appearances, change temperature, simulate pulse and breathing, and sense touch. The system provides highly realistic dynamic visual appearances, including “nearly human” patients that can turn and look at you, appear pale or flushed, cry, smile, and so on, offering a more realistic experience that prepares trainees to diagnose real patients. The Physical-Virtual Patient Bed is relatively inexpensive compared to an HPS because the shells are interchangeable and the expensive components fixed in the bed system never need replacing.
Technical Details
The UCF system includes a translucent or transparent patient shell secured to a patient bed and illuminated from below by one or more projectors in the bed system. The projectors render dynamic patient imagery onto the underside of the shell so that the image appears on the shell's surface in a realistic manner. One or more computing units, each including memory and a processor, communicate with the projectors and other sensory devices to provide the interactive simulation. Sensory devices include, but are not limited to, optical touch-sensing devices, targeted temperature feedback devices, audio-based tactile pulse devices, and spatial audio components with a signal-processing device to simulate vital signs. The system further includes interchangeable human shells and parts of shells representing body parts. With these shells, there is no need to change out the expensive and sensitive components that remain fixed in the patient bed system.
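For illustration, the bed's fixed components might be coordinated by an update loop along the lines of the following Python sketch. All names here (PatientBedSimulator, poll, imagery_for, and so on) are assumptions for exposition, not the actual UCF implementation.

```python
import time

class PatientBedSimulator:
    """Hypothetical coordination loop for the components fixed in the bed."""

    def __init__(self, projectors, sensors, physiology_model):
        self.projectors = projectors        # rear-projection units under the shell
        self.sensors = sensors              # touch, temperature, and audio devices
        self.physiology = physiology_model  # drives pulse, breathing, skin tone

    def step(self, dt):
        # Read trainee interactions (e.g., where the shell was touched).
        events = [sensor.poll() for sensor in self.sensors]
        # Advance the simulated patient state (vitals, expression, pallor).
        state = self.physiology.update(dt, events)
        # Render the new appearance onto the underside of the shell.
        for projector in self.projectors:
            projector.render(state.imagery_for(projector.region))

    def run(self, hz=60):
        dt = 1.0 / hz
        while True:
            self.step(dt)
            time.sleep(dt)
```

Because a loop like this is agnostic about which shell is attached, swapping shells (child, adult, individual body parts) requires no change to the expensive fixed hardware.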
Researchers at the University of Central Florida have invented a novel approach for patients with multiple or bilateral implants in which electrode contacts placed lateral to the target electronically draw current toward the target area. Unlike comparable implants, the electrical field can be shaped over space and time to reach more of the targeted area through the selection of various combinations of active contacts.
The current technology of circular electrode contacts on implanted leads for deep brain stimulation (DBS) has a major limitation: the current flows in an annular pattern from the cathode ring to an adjacent anode ring on the same lead. Although this arrangement works when the electrode contact is centered in the target site, for example, the subthalamic nucleus (STN), it works less well if the contact misses the small brain target. Part of the current arrives at the target, and the rest flows elsewhere in the brain, lowering efficiency and causing unwanted side effects.
To address this issue, the typical method is to place multiple contacts on the lead (typically four), so that the contact best placed superiorly or inferiorly can be selected for clinical efficacy. Anterior-posterior or medial-lateral "misses", however, may require lead re-implantation to reduce symptoms.
Technical Details
With the UCF deep brain stimulation system, the anode ring on the stimulating lead is turned off while the adjacent cathode ring is turned on, and the anode ring on another lead (such as the contralateral lead of a bilateral implant) is simultaneously turned on, thereby drawing the current density across the nearby target. The cathode lead directs the electrical field to the target, while the placement and number of activated anode contacts determine the electric field's path and rate of dissipation, based on vertical and horizontal distance and timing.
This invention also provides correction parameters for implanted electrodes by applying a cathode pulse to one electrode of a bilateral implant while providing a synchronized anode on the opposite electrode. The correction parameters can likewise be applied to anode and cathode contacts on a single implanted lead, and each lead can have independently controllable anode and cathode contacts.
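As a rough illustration of the contact-selection idea, the Python sketch below models one synchronized pulse with a cathode on one lead and anodes on the opposite lead. The data model, lead names, and contact indices are hypothetical; this is not device firmware.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    lead: str        # e.g., "left-STN" or "right-STN"
    index: int       # position along the lead (0 = most inferior contact)
    polarity: str    # "cathode", "anode", or "off"

def configure_pulse(cathode, anodes):
    """Build the contact settings for one synchronized pulse.

    The cathode on one lead sources the stimulus while anode contacts on the
    opposite lead draw the current density across the target lying between
    them; changing the anode selection reshapes the field's path over space
    and time.
    """
    config = [Contact(cathode[0], cathode[1], "cathode")]
    config += [Contact(lead, idx, "anode") for lead, idx in anodes]
    return config

# Example: cathode on contact 2 of the left lead; two synchronized anodes on
# the right lead set the field's path and rate of dissipation.
pulse = configure_pulse(cathode=("left-STN", 2),
                        anodes=[("right-STN", 1), ("right-STN", 2)])
```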
UCF researchers have developed an automated detection system aimed at helping to prevent and reduce the number of healthcare-associated infections (HAIs) contracted during medical procedures. The new System for Detecting Sterile Field Events tracks activity during training or an actual medical procedure and identifies actions (such as contact-related events) that violate the sterile field and put patients at risk for infection. The computer-based system operates in real-time, using visual and auditory alerts to highlight potential contamination made during a procedure. For training purposes, the system can also project onto the patient/area an overlay of the surfaces that were contaminated during the procedure.
Technical Details
The system automates the job of overseeing medical procedures and providing objective visual/auditory performance indicators. Key to the system are sensors (such as cameras), processors and output devices (such as monitors and speakers)—all linked together to track, assess and respond to events that affect the sterile field. This includes objects and humans in or near the field. The system programming includes rules for maintaining the sterile field, along with contamination probability values, statistical measures and thresholds. The processors can iteratively update the contamination probability value for each surface during the procedure and can also connect to one or more output devices, which issue alerts when an event violates predefined rules for protecting the sterile field and patient safety.
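A minimal sketch of the iterative, per-surface probability update described above might look like the following. The Bayesian update rule, the prior probabilities, and the alert threshold are illustrative assumptions, not the system's actual statistical measures.

```python
ALERT_THRESHOLD = 0.5  # assumed threshold for issuing an alert

# Assumed prior contamination probabilities for tracked surfaces.
surfaces = {"glove-left": 0.01, "tray": 0.01, "drape": 0.01}

def update_contamination(p_prior, event_likelihood, false_positive=0.05):
    """Fuse one observed contact-related event into a surface's
    contamination probability via a simple Bayesian update."""
    numerator = event_likelihood * p_prior
    denominator = numerator + false_positive * (1.0 - p_prior)
    return numerator / denominator

def on_contact_event(surface, event_likelihood):
    surfaces[surface] = update_contamination(surfaces[surface], event_likelihood)
    if surfaces[surface] > ALERT_THRESHOLD:
        print(f"ALERT: {surface} likely contaminated (p = {surfaces[surface]:.2f})")

# Repeated suspicious contacts drive the probability upward until an alert fires.
on_contact_event("glove-left", event_likelihood=0.9)
on_contact_event("glove-left", event_likelihood=0.9)  # second event triggers the alert
```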
UCF researchers have developed a system and methods that healthcare workers, firefighters and other first responders can use to ensure that they and colleagues worldwide properly don (put on) or doff (take off) personal protective equipment (PPE) such as masks, gloves and other body coverings. The invention, called SterilEyes, enables professionals to record equipment donning/doffing attachment points, stream the recordings to multiple observers, and instantly receive feedback. An example attachment point is the wrist (where latex gloves are pulled over a coverall).
Technical Details
SterilEyes uses smartphones, associated apps, back-end computer servers and the internet to provide a crowd-sourced review and certifying service. With an iPhone, for example, a user can videotape a PPE procedure, such as donning a hazardous materials suit, and stream it to multiple locations. A unique user interface enables paid or volunteer observers to instantly send feedback to the user, who can also receive alerts about new reviews and information about the observers. In addition, the system lets users respond with a checklist of actions, provides data administration and stores videos and feedback. SterilEyes dynamically aggregates response data into a collective remote assessment of the protective state of the overall PPE or individual parts. The assessment includes an indication of the quality/confidence of the collective responses. With SterilEyes, participants can be located all over the world to provide 24/7 observer support.
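One simple way the collective assessment could be computed is sketched below; the pass/fail verdict format and the agreement-ratio confidence measure are assumptions for illustration.

```python
from collections import Counter

def aggregate_reviews(responses):
    """Aggregate observer verdicts ("pass" or "fail") for one attachment
    point, e.g., the glove/coverall overlap at the wrist."""
    counts = Counter(responses)
    total = sum(counts.values())
    verdict, votes = counts.most_common(1)[0]
    confidence = votes / total  # simple agreement ratio as the quality indicator
    return {"verdict": verdict, "confidence": confidence, "n": total}

print(aggregate_reviews(["pass", "pass", "fail", "pass"]))
# -> {'verdict': 'pass', 'confidence': 0.75, 'n': 4}
```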
Researchers at the University of Central Florida have invented an innovative, multi-sensory, interactive training system that realistically mimics wounds and provides constant, dynamic feedback to medical trainees as they treat the wounds. Almost like a real-life video game, the Tactile-Visual Wound (TVW) Simulation Unit portrays the look, feel, and even the smell of different types of human wounds (such as a puncture, stab, slice or tear). It also tracks and analyzes a trainee's treatment responses and provides corrective instructions.
The TVW invention is a multi-sensory wound simulation unit. By combining several technologies, the invention provides an immersive experience for trainees. A TVW unit can include augmented reality software and a headset; sensors; actuators and markers integrated into a medical manikin; and a computer processor. An alternative configuration uses interactive moulage components affixed to a real person instead of a manikin. When activated, the unit's AR system continuously tracks the TVW, estimates the deformation of the wound over time, and monitors its response to treatment. For example, a trainee might see (via the AR glasses or headset) a projection that shows blood flowing out of the manikin's wound and vital signs "dropping." When the trainee applies pressure to the wound, sensors detect the action and wirelessly relay the data to the AR system. In response, the AR system renders (via computer graphics) an appropriate dynamic view of the blood loss slowing, and the physiological simulation reflects stabilized vitals. Real-time depth or other models of the trainee's hands, medical devices, and so on, can also affect the simulated visuals that the AR rendering system generates.
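The pressure-to-bleeding feedback in the example above could be modeled along the following lines. The constants, state variables, and function names are assumptions, not the actual TVW physiology model.

```python
def bleed_rate(base_rate, pressure, k=0.8):
    """More applied pressure -> slower simulated blood loss."""
    return max(0.0, base_rate * (1.0 - k * min(pressure, 1.0)))

def step(vitals, pressure, dt):
    rate = bleed_rate(base_rate=1.0, pressure=pressure)
    vitals["blood_volume"] -= rate * dt
    # Heart rate climbs while bleeding is heavy and settles as it slows.
    vitals["heart_rate"] += (40.0 * rate - 5.0) * dt
    return rate  # the AR renderer uses this to animate the visible blood flow

vitals = {"blood_volume": 100.0, "heart_rate": 90.0}
flow = step(vitals, pressure=0.9, dt=0.1)  # trainee presses firmly on the wound
```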
Researchers at the University of Central Florida have developed a method of using dynamic, realistic simulations to facilitate real human awareness and trust in an autonomous control system like a self-driving vehicle. The method employs online virtual humans that dynamically perform behaviors, such as gestures, poses, and verbal interactions based on the real-time input and output data of the autonomous system. Other examples of autonomous systems can include financial transaction applications and medical systems.
Most computerized or automated systems appear as “black boxes” to people, providing quantitative information such as a system’s state or environmental conditions via words or symbology. For example, an autonomous car may use electronic gauges, alert indicators, or diagrams of nearby vehicles. Studies have shown that this kind of feedback causes people to have negative feelings toward autonomous systems, such as uncertainty, concern, stress, or anxiety.
However, a study by the UCF researchers found that individuals who heard an autonomous agent respond to a command, such as turning on a light, experienced an increased sense of presence and trust after also seeing a visual representation of that agent perform the action. As a result, the researchers designed a method in which a dynamic virtual human not only appears to control an autonomous system but continually exhibits awareness of the system’s state by reacting to situational input and output data in synchronization with the system.
Technical Details
The invention comprises a method that uses virtual humans to foster trust in autonomous control systems. It includes using a computer processor, sensors and a display device to generate dynamic online and real-time virtual 2D or 3D characters. Computer-readable instructions cause the processor to evaluate input data and logically modify the output of the display device in response.
Preferably, the method uses some form of augmented reality (AR). In such cases, the virtual human and virtual controls appear via a head-worn stereo display, a fixed autostereoscopic display, or a monocular display that positions the virtual human in a specific location.
In one example application, an autonomous vehicle slows down to make a right turn. The real human passenger sees the virtual human driver’s movements, which correlate with the actions made by the autonomous vehicle. For instance, the virtual human uses her hands to manipulate simulated objects such as the steering wheel and the right-turn blinker. Additionally, the virtual human turns her head left and right, appearing to visually scan the area before the turn, synchronizing with the actions of the camera and sensors of the vehicle. She then moves her rendered foot from the accelerator pedal to the brake pedal as the control system applies the actual brakes. Finally, as the system turns the wheels of the vehicle, the virtual human appears to turn the steering wheel.
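A sketch of this synchronization logic follows, with hypothetical vehicle-state signals and an assumed avatar animation interface; the point is that every animation is driven by the control system's real-time inputs and outputs.

```python
def animate_virtual_driver(avatar, vehicle_state):
    """Mirror the autonomous system's real-time outputs as human behaviors."""
    # Hands follow the actual steering command.
    avatar.set_steering_pose(angle=vehicle_state["steering_angle"])
    # Foot moves from accelerator to brake as the system applies the brakes.
    if vehicle_state["brake"] > 0.0:
        avatar.move_foot("accelerator", "brake")
    # Gesture toward the blinker when the turn signal activates.
    if vehicle_state["turn_signal"] == "right":
        avatar.gesture("flick_right_blinker")
    # Exhibit awareness: scan left and right in sync with the perception sensors.
    if vehicle_state["maneuver"] == "turning":
        avatar.look_sequence(["left", "right"])
```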
Partnering Opportunity
The research team is looking for partners to develop the technology further for commercialization.
Researchers at the University of Central Florida have invented a better way to track objects like head-mounted displays (HMDs) and handheld controllers of multiple users as they interact in shared space. Virtual environment systems today encounter some amount of error (static or dynamic) that can affect the system's usefulness. If two or more users are tracked close to each other in a joint task, the error problems increase. The UCF multi-participant tracking system combines and extends conventional global and body-relative approaches to "cooperatively" estimate the relative poses between all useful combinations of devices worn or held by two or more users. Example applications include hands-on training where medical professionals simulate surgical or trauma team activities, small military units in joint training exercises, or civilians in multi-user scenarios.
Technical Details
The UCF invention consists of systems and methods for tracking one user/object with respect to all others and the environment. Tracking technologies for interactive computer graphics (for example, virtual/augmented reality or related simulation, training, or practice) are used to estimate the posture and movement of humans and objects in a three-dimensional working volume. This is typically known as six-degree-of-freedom (6DOF) "pose" tracking (estimation of x, y, z positions and roll, pitch, yaw orientations).
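The underlying 6DOF bookkeeping amounts to standard rigid-transform algebra. The sketch below shows how a relative pose between any two tracked devices can be computed from their global pose estimates; the device names are illustrative, and the cooperative fusion of multiple such estimates is not reproduced here.

```python
import numpy as np

def inverse(T):
    """Invert a 4x4 rigid transform (rotation R, translation t)."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def relative_pose(T_world_a, T_world_b):
    """Pose of device b expressed in device a's frame."""
    return inverse(T_world_a) @ T_world_b

# e.g., user 2's handheld controller relative to user 1's HMD
T_world_hmd1 = np.eye(4)   # placeholder global pose estimates
T_world_ctrl2 = np.eye(4)
T_rel = relative_pose(T_world_hmd1, T_world_ctrl2)
```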
Conventional technologies lack several of the capabilities that the UCF system provides.
The invention has the added benefit of allowing the HMDs, handheld controllers, or other tracked devices to "ride out" periods of reduced or no observability of externally mounted devices.
A Novel Approach for Cooperative Motion Capture (COMOCAP), International Conference on Artificial Reality and Telexistence, Eurographics Symposium on Virtual Environments (2018). https://doi.org/10.2312/egve.20181317
University of Central Florida researchers have invented a method that enables people to view distant objects in greater detail than existing visual magnification systems can provide. The UCF innovation uses cameras to capture objects at a higher resolution than the human eye can and then presents imagery of the objects to a user via an augmented reality (AR) see-through display. Existing visual magnification systems, such as camera zooms and binoculars, typically provide the same level of magnification for all objects in the field of view.
The UCF technology, however, gives users real-time, dynamic control over what they view. Thus, users can selectively amplify the size of a target object's spatially registered retinal projection while maintaining a natural (unmodified) view in the remainder of the visual field. Also, while one user views a magnified object on an AR see-through display, other users can view the same or different target objects on their displays. When individual users face different directions, their displays present a consistent spatial representation of the target relative to their lines of sight.
In one example military application, an integrated AR magnification system enables users to selectively magnify one or more objects, including enemy combatants, civilians, vehicles, ships or airplanes. Another example uses the technology to magnify specific landmarks and road signs for navigation systems such as the heads-up displays in cars.
Technical Details
The UCF invention is a computer-implemented method of intelligently magnifying objects in a field of view, using one or more cameras to capture objects at a higher resolution than the human eye can perceive. An example magnification cycle is sketched below.
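The following sketch illustrates one possible cycle under assumed names (capture, crop, scale, project, overlay); it is an illustration of the selective, spatially registered magnification described above, not the patented step sequence.

```python
def magnify_target(camera, display, target_region, zoom=4.0):
    """One hypothetical cycle of selective, spatially registered magnification."""
    frame = camera.capture()              # imagery finer than the eye resolves
    crop = frame.crop(target_region)      # the user-selected target object
    enlarged = crop.scale(zoom)           # amplify its retinal projection
    # Register the enlarged imagery to the object's real-world direction so
    # each user's display stays consistent with that user's line of sight.
    anchor = display.project(target_region.world_position)
    display.overlay(enlarged, at=anchor)  # the rest of the view stays natural
```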
Partnering Opportunity
The research team is looking for partners to develop the technology further for commercialization.
Stage of Development
Prototype available.
Big Heads: Analysis of Human Perception and Comfort of Head Scales in Social Virtual Reality, 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2020, pp. 425-433. doi: 10.1109/VR46266.2020.00063
The University of Central Florida invention comprises tactile-visual systems and methods for social interactions between isolated patients (for example, those with COVID-19) and remote visitors such as loved ones, family members, friends, or volunteers. A primary goal is to provide the isolated patient and the remote visitors with a visual interaction augmented by touch—a perception of being touched for the isolated patient and a perception of touching for the remote visitors. For example, a loved one might be able to virtually stroke the patient’s arm or head, or even squeeze the patient’s hand. A simple realization might include tactile transducer “strips” placed on the patient, with two-way video via touch-sensitive tablets, where touching the visual image of the strips on the tablet results in tactile sensations on the patient’s skin.
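In the simple realization, the mapping from a touch on the tablet's video image to a tactile sensation on the patient's skin could be as small as the following sketch; the region geometry and transducer driver interface are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    """On-screen rectangle over the video image of one tactile strip."""
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x, y):
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

def on_tablet_touch(x, y, strip_regions, transducers, pressure=0.5):
    """strip_regions maps on-screen rectangles to transducer channels
    on the corresponding strips placed on the patient."""
    for region, channel in strip_regions.items():
        if region.contains(x, y):
            transducers.activate(channel, intensity=pressure)  # assumed driver call
            break
```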
A more sophisticated realization could use the Physical-Virtual Patient Bed (PVPB), developed under NSF Award #1564065, to serve as a remote physical, visual, and tactile surrogate for the isolated patient. The remote visitors would be able to see, hear, and touch the PVPB. The isolated patient would see the remote visitors via video and feel their touch interactions via the tactile transducer strips on their arms and head (for example). These interactive video, voice, and touch interactions could provide additional comfort for the isolated patient and the remote visitors. Further embodiments and enhancements include 3D depth and viewing for visitors and patients, with possible 3D free space interaction. For example, visitors wearing augmented reality head-mounted displays could reach out and touch a virtual version of the patient, and the patient would feel tactile sensations. The systems and methods are usable in any conditions giving rise to isolation, including isolation due to geographical distance.
The University of Central Florida invention comprises systems and methods for detecting and coordinating interruptions of the playback or generation of time-sequential digital media. Examples of such media include previously recorded digital video/audio stored on the device, previously recorded digital movies/audio streamed from remote servers (over the Internet), and interactive digital imagery/audio synthesized on the device such as with a computer graphics processor and a game engine. Examples of interruptions include notifications initiated from the device, for example, an incoming email or phone call, and external events related to other individuals or the environment. The timing and form of the interruptions could be "intelligently" coordinated with the playback or generation of time-sequential digital media, such as the immersive virtual reality experience.
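One way such coordination could be realized is sketched below: notifications queue by priority and are presented only at a suitable break point in the media, or after a maximum delay. The queueing policy and media interface are assumptions for illustration.

```python
import heapq
import time

pending = []  # min-heap of (priority, arrival_time, notification)

def on_notification(priority, note):
    """Queue an interruption (e.g., an incoming email or call) rather than
    presenting it immediately."""
    heapq.heappush(pending, (priority, time.time(), note))

def maybe_deliver(media, max_delay=30.0):
    """Present queued interruptions at a break point in the playback or
    generated media, or once a notification has waited past max_delay."""
    now = time.time()
    while pending:
        priority, arrived, note = pending[0]
        if media.at_break_point() or (now - arrived) > max_delay:
            heapq.heappop(pending)
            media.present(note)  # e.g., fade the alert into the VR scene
        else:
            break
```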
The University of Central Florida invention comprises four complementary advantageous features that can help homeowners program, comprehend, and monitor increasingly complex home automation and robotics systems.
In particular, the visualization of otherwise invisible information can be used to inform homeowners about multiple aspects of detected errors, failures, or anomalies. These same systems and methods can also be applied in other contexts, for example, at a workplace, in a vehicle, in a building, or around a city.
Partnering Opportunity
The research team is seeking partners for licensing and/or research collaboration.