Research Terms
Computer Science, Computer Applications, Computer Simulation and Modeling, Computer Hardware, Computer Peripherals, Computer Methods, Computer Graphics, Computer Programming
Industries
Digital Media; Microelectronics & Computer Products; Modeling, Simulation, & Training (MST); Software & Computer Systems Design
Researchers at the University of Central Florida have developed a motion tracking method that does not depend on the external references used by conventional approaches, such as environmental markers serving as reference points or pre-loaded models of the physical environment. The method uses four cameras, such as digital video cameras, arranged in a square so that each captures a series of images from one quadrant of view. The images are used to compute a series of positions and orientations of the object to which the cameras are attached, reducing the complexity of computing the object's three-dimensional motion. Position and motion tracking applications include video game controllers; human-computer interaction input devices for scrolling, pointing, and tracking; and input devices for interacting with virtual environments.
Technical Details
The UCF method is an inside-out, vision-based tracking system built around cameras arranged in an orthogonal configuration of two opposing pairs. The camera arrangement moves along with the object being tracked and can be mounted on a mobile platform for robotic applications such as tracking, localization, and mapping. A computing device receives a series of images from each camera and calculates successive positions for the object; the opposing-pair arrangement, used in conjunction with polar correlation of optical flow, simplifies the computation of the object's three-dimensional motion.
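To illustrate why opposing camera pairs simplify the computation, the following Python sketch shows one common way such a geometry can separate rotation from translation; the function name, sign conventions, and the sum/difference decomposition are illustrative assumptions, not the patented method itself.

    def split_yaw_translation(u_flow_pos, u_flow_neg):
        """Separate yaw rotation from sideways translation for one
        opposing camera pair.

        u_flow_pos, u_flow_neg: mean horizontal optical flow
        (pixels/frame, measured rightward in each camera's own image)
        from the cameras facing the +axis and -axis directions.

        With a shared "up" direction, rigid yaw drives the scene in the
        SAME image direction in both cameras, while sideways translation
        drives it in OPPOSITE directions, so common-mode and
        differential flow separate the two. Translation along the
        pair's own optical axis instead appears as radial
        expansion/contraction in the flow field, which is where a
        polar-coordinate correlation of the flow becomes useful.
        """
        yaw_flow = (u_flow_pos + u_flow_neg) / 2.0          # common mode
        translation_flow = (u_flow_pos - u_flow_neg) / 2.0  # differential
        return yaw_flow, translation_flow

    # Example: both cameras report rightward flow of 3 px/frame plus a
    # differential component of +/-1 px/frame.
    yaw, trans = split_yaw_translation(4.0, 2.0)
    print(yaw, trans)  # 3.0 (shared yaw flow), 1.0 (sideways translation)

The second, orthogonal pair would supply analogous measurements for the remaining axes, which is consistent with the four-camera arrangement described above.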
UCF researchers have developed a method that quickly generates realistic synthetic data, enabling gesture recognizers to significantly improve their accuracy. The new method, called Stochastic Resampling (SR), is computationally efficient, has minimal coding overhead, and does not require expert knowledge to implement. SR-generated synthetic samples also outperform those of competitive, state-of-the-art methods, namely Perlin Noise and the Sigma-Lognormal Model, in some cases reducing mean recognition errors by more than 70 percent.
Technical Details
SR selects points at random, non-uniform spacings along a 2D trajectory to create realistic variations of a given sample. For example, given a hand-drawn circle stored as a time series of K points, SR resamples the series into a fixed number of N points along the series' path. The path distance between consecutive points is non-uniform, and the direction vector between each consecutive pair of points is extracted and normalized to unit length. Next, the normalized direction vectors are concatenated to create a new set of N points, with the origin of the first vector placed at the center of the coordinate system. Thereafter, the resulting series can be translated, scaled, skewed, rotated, and so forth, as necessary.
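The steps above map naturally onto a short routine. The Python below is a plausible reading of that description rather than the published SR implementation; the uniform-noise interval model, the variance parameter, and all names are illustrative assumptions.

    import numpy as np

    def stochastic_resample(points, n=32, variance=0.25, rng=None):
        """Sketch of Stochastic Resampling for a 2D stroke.

        points   : (K, 2) array of x, y samples along a gesture path.
        n        : number of output points.
        variance : controls how non-uniform the random spacing is.
        """
        rng = np.random.default_rng() if rng is None else rng
        pts = np.asarray(points, dtype=float)

        # 1. Draw n-1 random interval lengths and normalize them so they
        #    partition the total path length non-uniformly.
        intervals = 1.0 + rng.random(n - 1) * variance
        intervals /= intervals.sum()

        # 2. Walk the polyline, emitting a point at each cumulative
        #    target distance along the path.
        seg = np.diff(pts, axis=0)
        seg_len = np.hypot(seg[:, 0], seg[:, 1])
        cum = np.concatenate([[0.0], np.cumsum(seg_len)])
        targets = np.cumsum(intervals) * cum[-1]
        resampled = [pts[0]]
        for d in targets:
            i = min(np.searchsorted(cum, d, side="right") - 1, len(seg) - 1)
            t = (d - cum[i]) / seg_len[i] if seg_len[i] > 0 else 0.0
            resampled.append(pts[i] + t * seg[i])
        resampled = np.array(resampled)

        # 3. Normalize each between-point vector to unit length and
        #    chain the vectors from the origin to form the variation.
        vecs = np.diff(resampled, axis=0)
        norms = np.linalg.norm(vecs, axis=1, keepdims=True)
        vecs = np.divide(vecs, norms, out=np.zeros_like(vecs), where=norms > 0)
        return np.vstack([[0.0, 0.0], np.cumsum(vecs, axis=0)])

Repeated calls with different random draws yield distinct unit-scale variations of the same gesture, which can then be translated, scaled, or rotated as noted above.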
Researchers at the University of Central Florida have invented an innovative, multi-sensory, interactive training system that realistically mimics wounds and provides constant, dynamic feedback to medical trainees as they treat the wounds. Almost like a real-life video game, the Tactile-Visual Wound (TVW) Simulation Unit portrays the look, feel, and even the smell of different types of human wounds (such as a puncture, stab, slice, or tear). It also tracks and analyzes a trainee's treatment responses and provides corrective instructions.
Technical Details
The TVW invention is a multi-sensory wound simulation unit that combines several technologies to provide an immersive experience for trainees. A TVW unit can include augmented reality (AR) software and a headset; sensors, actuators, and markers integrated into a medical manikin; and a computer processor. An alternative configuration uses interactive moulage components affixed to a real person instead of a manikin. When activated, the unit's AR system continuously tracks the TVW, estimates the deformation of the wound over time, and monitors its response to treatment. For example, a trainee might see (via the AR glasses or headset) a projection that shows blood flowing out of the manikin's wound and vital signs "dropping." When the trainee applies pressure to the wound, sensors detect the action and wirelessly relay the data to the AR system. In response, the AR system renders (via computer graphics) a dynamic view of the blood loss slowing, and the physiological simulation reflects stabilized vitals. Real-time depth or other models of the trainee's hands, medical devices, and so on can also affect the simulated visuals that the AR rendering system generates.
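The sensor-to-rendering feedback loop described above can be sketched in a few lines of Python. Everything here, including the pressure threshold, the linear bleed model, and the callback names, is an illustrative placeholder rather than UCF's physiological simulation.

    import time

    # Hypothetical constants for a single simulated puncture wound.
    BASE_BLEED_RATE = 2.0      # mL/s with no treatment applied
    PRESSURE_THRESHOLD = 5.0   # newtons of pressure considered effective

    def update_wound_state(blood_lost, pressure, dt):
        """Advance the simulated wound one time step.

        Pressure above the threshold slows the bleed, mirroring the
        pressure-on-wound example in the text; the linear model is a
        stand-in for the real physiological simulation.
        """
        relief = min(pressure / PRESSURE_THRESHOLD, 1.0)
        bleed_rate = BASE_BLEED_RATE * (1.0 - relief)
        return blood_lost + bleed_rate * dt, bleed_rate

    def simulation_loop(read_pressure_sensor, render_overlay,
                        dt=0.1, duration=60.0):
        """Sensor -> physiology -> AR rendering loop."""
        blood_lost = 0.0
        for _ in range(int(duration / dt)):
            pressure = read_pressure_sensor()   # wirelessly relayed data
            blood_lost, rate = update_wound_state(blood_lost, pressure, dt)
            render_overlay(blood_lost=blood_lost, bleed_rate=rate)
            time.sleep(dt)

    # Quick demo with stub callbacks standing in for the real hardware.
    simulation_loop(
        read_pressure_sensor=lambda: 6.0,          # constant firm pressure
        render_overlay=lambda **state: print(state),
        duration=0.5,
    )

In an actual unit, read_pressure_sensor would be backed by the wireless sensor link and render_overlay by the AR headset's graphics pipeline described above.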