Abstract
University of Central Florida researchers have invented a method that enables people to see more detail in distant objects than existing visual magnification systems allow. The UCF innovation uses cameras to capture objects at a higher resolution than the human eye and then presents imagery of the objects to a user via an augmented reality (AR) see-through display. Existing visual magnification systems, such as camera zooms and binoculars, typically provide the same level of magnification for all objects in the field of view.
The UCF technology, however, gives users real-time, dynamic control over what they view. Thus, users can selectively amplify the size of a target object's spatially registered retinal projection while maintaining a natural (unmodified) view in the remainder of the visual field. Also, while one user views a magnified object on an AR see-through display, other users can view the same or different target objects on their displays. When individual users face different directions, their displays present a consistent spatial representation of the target relative to their lines of sight.
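The selective amplification described above can be pictured as enlarging a target's spatially registered image region about its own center, so the surrounding view is unchanged. The following is a minimal sketch of that geometry (hypothetical helper code, not the patented implementation), assuming an axis-aligned bounding box in the form (x_min, y_min, x_max, y_max):

```python
# Hypothetical sketch: enlarge a target's bounding box about its center by a
# magnification factor, leaving the rest of the field of view untouched.
# Box format: (x_min, y_min, x_max, y_max) in display pixel coordinates.

def magnify_box(box, factor):
    """Return the box scaled by `factor` about its own center point."""
    x_min, y_min, x_max, y_max = box
    cx = (x_min + x_max) / 2.0          # center stays fixed, so the
    cy = (y_min + y_max) / 2.0          # magnified target remains
    half_w = (x_max - x_min) / 2.0 * factor  # spatially registered
    half_h = (y_max - y_min) / 2.0 * factor
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)
```

For example, `magnify_box((10, 10, 30, 30), 2.0)` yields `(0.0, 0.0, 40.0, 40.0)`: a box twice as large, still centered on the target, while everything outside the box is left at its natural size.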
In one example military application, an integrated AR magnification system enables users to selectively magnify one or more objects, including enemy combatants, civilians, vehicles, ships or airplanes. Another example uses the technology to magnify specific landmarks and road signs for navigation systems such as the heads-up displays in cars.
Technical Details
The UCF invention is a computer-implemented method of intelligently magnifying objects in a field of view using one or more cameras to capture objects at a higher resolution than the human eye can perceive. An example process can consist of the following steps:
- A camera captures real-time imagery. Examples include a standard webcam, a gigapixel camera, or a 360-degree omnidirectional camera.
- The real-time image stream from the camera goes into a processing unit (computer) via a wired or wireless connection.
- Computer vision algorithms segment and classify targets of a pre-defined category (for example, vehicles, ships, people). The algorithms help segment the foreground target from the background so that only those pixels showing the target remain, and the rest of the image is excluded from further processing.
- The computer stores the corresponding image region of each classified target (for example, the minimum and maximum x and y pixel coordinates of its bounding box).
- The segmented image regions and position estimates are sent to a rendering engine to generate the output to the AR see-through display. Example output can include objects scaled up to human perceptible size and then superimposed over the unscaled background.
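The steps above can be sketched end to end in a few lines. The code below is an illustrative toy pipeline, not the patented implementation: it "segments" a bright target from a grayscale frame by simple thresholding (standing in for the computer vision classifier), records its bounding box, upscales the target region by nearest-neighbor interpolation, and composites the enlarged target back over the unscaled background, centered on its original position.

```python
# Toy pipeline sketch (hypothetical, stands in for the classifier/renderer):
# frames are grayscale images stored as lists of lists of pixel values.

def segment_bbox(frame, threshold):
    """Segment the target: min/max coordinates of pixels above threshold."""
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, v in enumerate(row):
            if v > threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (min(xs), min(ys), max(xs), max(ys))

def upscale(region, factor):
    """Nearest-neighbor upscaling by an integer factor."""
    out = []
    for row in region:
        scaled_row = [v for v in row for _ in range(factor)]
        out.extend([scaled_row] * factor)
    return out

def composite(frame, bbox, factor):
    """Superimpose a magnified copy of the target over the unscaled frame."""
    x0, y0, x1, y1 = bbox
    region = [row[x0:x1 + 1] for row in frame[y0:y1 + 1]]
    big = upscale(region, factor)
    h, w = len(frame), len(frame[0])
    # Center the enlarged target on the original bounding box.
    top = max(0, (y0 + y1 + 1) // 2 - len(big) // 2)
    left = max(0, (x0 + x1 + 1) // 2 - len(big[0]) // 2)
    out = [row[:] for row in frame]  # background stays unmodified
    for dy, row in enumerate(big):
        for dx, v in enumerate(row):
            yy, xx = top + dy, left + dx
            if 0 <= yy < h and 0 <= xx < w:
                out[yy][xx] = v
    return out
```

In a real system the thresholding step would be replaced by the trained segmentation and classification algorithms, and the composited output would be rendered to the AR see-through display rather than returned as an array.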
Partnering Opportunity
The research team is looking for partners to develop the technology further for commercialization.
Stage of Development
Prototype available.
Benefit
- Allows one or more objects in the visual field to be magnified in real time and is usable with different hardware configurations (cameras, displays and tracking systems)
- Can seamlessly increase the perceived size of real-world objects in a user's view, enabling the user to see more detail than would be naturally possible
- Automatically distinguishes salient/important objects to be magnified from the (less important) background without changing the remainder of the visual field
- Takes individual differences into account and can be adjusted for each user's visual acuity (that is, it can make objects bigger for a near-sighted user)

Market Application
- Navigation systems, such as heads-up displays in cars
- Military
- Land/sea rescue operations
- Sports events

Publications
"Virtual Big Heads: Analysis of Human Perception and Comfort of Head Scales in Social Virtual Reality," 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2020, pp. 425-433, doi: 10.1109/VR46266.2020.00063.