Researchers at the University of Central Florida have developed a system that outperforms conventional single-vehicle object detection approaches. The invention enables “cooperative cognition” by sharing partially processed data (“feature sharing”) from light detection and ranging (LIDAR) sensors among cooperative vehicles. The partially processed data are the features derived from an intermediate layer of a deep neural network. Experimental results show that the approach significantly improves object detection performance while keeping the required communication bandwidth low, compared with methods that share raw sensor data.
In connected and autonomous vehicles, a driver's safety directly depends on the robustness, reliability, and scalability of systems that enable situational awareness. Cooperative mechanisms have provided a solution to improve situational awareness by using communication networks. However, the network capacity determines the maximum amount of information shared among cooperative entities. The UCF invention offers a solution by reducing the network bandwidth requirements and maintaining object detection performance.
The UCF approach achieves a better understanding of the surrounding environment by sharing partially processed data between cooperative vehicles while balancing the computation and communication load. It is also scalable, unlike current methods based on map sharing or raw-data sharing. Through experiments on the Volony dataset (collected with the research team’s cooperative dataset collection tool), the researchers showed that the approach significantly outperforms conventional single-vehicle object detection approaches.
Technical Details
Unlike conventional methods, the UCF invention combines feature sharing with a new object detection framework. The feature sharing cooperative object detection (FS-COD) method addresses partial occlusion, sensor range limitation, and lack-of-consensus challenges. To reduce the required communication capacity and enhance object detection performance, the UCF team introduces two new shared-data alignment mechanisms and a novel parallel network architecture.
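The feature-sharing idea can be illustrated with a minimal sketch: each vehicle runs the lower layers of its detection network locally, transmits only the resulting intermediate feature map (a fraction of the raw point-cloud payload), and the receiver fuses the aligned maps before the detection head. The function names, the 2x2 max-pool stand-in for the backbone, and the element-wise max fusion are illustrative assumptions, not the UCF implementation.

```python
import numpy as np

def backbone_half(bev):
    """Stand-in for the lower layers of a detection CNN: a 2x2 max-pool
    reducing a BEV tensor (C, H, W) to an intermediate feature map.
    A real network would use learned convolutions instead."""
    c, h, w = bev.shape
    return bev.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def fuse(own_feat, shared_feat):
    """Fuse aligned feature maps from two vehicles. Element-wise max is
    one common choice; the paper's exact fusion operator may differ."""
    return np.maximum(own_feat, shared_feat)

# Two cooperative vehicles compute features locally, then share only the
# intermediate maps -- here 4x smaller than the raw BEV tensors.
bev_a = np.random.rand(8, 64, 64)   # vehicle A's BEV tensor
bev_b = np.random.rand(8, 64, 64)   # vehicle B's, already aligned to A
fused = fuse(backbone_half(bev_a), backbone_half(bev_b))
```

Because the shared maps are spatially aligned, fusion is a cheap per-fixel operation, and the detection head sees evidence from both viewpoints.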
In one example embodiment, the method first aligns the point clouds obtained by a vehicle’s sensor to the vehicle’s heading and a predefined global coordinate system. After the point clouds are globally aligned in rotation, a bird’s-eye view (BEV) projector unit (or point-cloud-to-3D-tensor projector) projects them onto a 2D/3D image plane, producing a BEV image/tensor with one or more channels, where each channel gives the density of reflected points within a specific height bin. The information embedded in the resulting features encodes a vector of the locations and orientations of objects relative to the observer, together with the class and confidence of each detection. The feature maps are composed of “fixels”: each fixel corresponds to a set of pixel coordinates in the input image and therefore to an area of the environment in global coordinates. By applying Translation Mod Alignment to the BEV images/tensors before they are fed to a convolutional neural network (CNN), the method ensures that each fixel represents a predetermined range of global coordinates. The mod-aligned BEV image is then passed through the CNN to obtain the feature map of the surrounding environment.
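The projection and alignment steps above can be sketched in a few lines. The sketch below assumes a simple density-count BEV projection and reads Translation Mod Alignment as snapping the BEV window’s origin to a multiple of the cell size, so that every fixel covers a fixed global coordinate range regardless of the observer’s position; the function names and exact formulation are illustrative assumptions.

```python
import numpy as np

def bev_density(points, x_range, y_range, z_bins, cell):
    """Project a globally rotated point cloud (N, 3) into a BEV density
    tensor with one channel per height bin."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    tensor = np.zeros((len(z_bins) - 1, ny, nx))
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    iz = np.digitize(points[:, 2], z_bins) - 1     # height-bin index
    ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny) \
         & (iz >= 0) & (iz < len(z_bins) - 1)
    np.add.at(tensor, (iz[ok], iy[ok], ix[ok]), 1)  # count points per cell
    return tensor

def mod_align(origin_xy, cell):
    """Translation Mod Alignment (as we read it): snap the BEV window
    origin to a multiple of the cell size so each fixel always covers
    the same range of global coordinates."""
    return np.floor(np.asarray(origin_xy) / cell) * cell
```

With the origin snapped this way, feature maps computed by different vehicles line up fixel-for-fixel in global coordinates, which is what makes the shared features directly fusable.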
Partnering Opportunity
The research team is looking for partners to develop the technology further for commercialization.
Stage of Development
Prototype available.
Cooperative LIDAR Object Detection via Feature Sharing in Deep Networks, 2020 IEEE 92nd Vehicular Technology Conference (VTC2020-Fall) (2020): 1-7.
Researchers at the University of Central Florida have developed a new algorithm for cellular vehicle-to-everything communication (Cellular-V2X or CV2X). Vehicle-to-everything (V2X) communication allows vehicles to exchange information with other vehicles, as well as with infrastructure, pedestrians, networks, and other devices. CV2X relies on a semi-persistent scheduling algorithm for resource allocation in the channel so that multiple nodes can access the channel with minimal collision. An issue with CV2X is that some collisions persist for prolonged durations. The UCF invention adds a new algorithm to CV2X that breaks up these prolonged collisions and improves communication latency.
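As background, the prolonged-collision behavior of semi-persistent scheduling can be illustrated with a toy two-node model: each node keeps transmitting on its chosen periodic resource until a reselection counter expires, and even then it often keeps the same resource, so two nodes that land on the same resource can collide for many consecutive cycles. This sketch models only that standard behavior, not the UCF algorithm; the parameter values are illustrative assumptions.

```python
import random

def simulate_sps(n_steps, n_resources=10, keep_prob=0.8, seed=0):
    """Toy semi-persistent scheduling model for two nodes. Returns the
    longest run of consecutive cycles in which both nodes transmitted
    on the same resource (a 'prolonged collision')."""
    rng = random.Random(seed)
    res = [rng.randrange(n_resources) for _ in range(2)]  # chosen resources
    counters = [rng.randint(5, 15) for _ in range(2)]     # reselection counters
    streak, longest = 0, 0
    for _ in range(n_steps):
        if res[0] == res[1]:          # both nodes on the same resource
            streak += 1
            longest = max(longest, streak)
        else:
            streak = 0
        for i in range(2):
            counters[i] -= 1
            if counters[i] == 0:      # counter expired: maybe reselect
                counters[i] = rng.randint(5, 15)
                if rng.random() > keep_prob:
                    res[i] = rng.randrange(n_resources)
    return longest
```

Because a node keeps its resource with high probability even after the counter expires, a collision can survive several reselection rounds; that is the latency problem the UCF algorithm targets.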
Technical Details
The UCF invention is a system that includes a transceiver and a controller embedded within V2X-enabled vehicles and within roadside units that communicate with those vehicles.
Stage of Development
Prototype available.
The University of Central Florida invention is a system and method for autonomous vehicle (AV) navigation. To enhance decision-making and safety, the system applies deep reinforcement learning to create high-level policies for safe, tactical decision-making. It also optimizes social utility and increases sample efficiency, marking a significant advance in autonomous vehicle operation.
Today’s roads support a mix of autonomous and human-driven vehicles (HVs), which must learn to co-exist by sharing the same road infrastructure. To attain socially desirable behaviors, autonomous vehicles (AVs) must be instructed to consider the utility of other vehicles around them in their decision-making process. Yet, despite the advances in the autonomous driving domain, AVs are still inefficient and limited in terms of cooperating with each other or coordinating with vehicles operated by humans. The UCF invention offers a solution with a system and method that allows autonomous agents (such as software programs) to implicitly learn the decision-making process of human drivers from experience.
Technical Details
The UCF invention comprises a Hybrid Predictive Network (HPN), a Value Function Network (VFN), and a safety prioritizer. Built on a symmetric encoder-decoder architecture, the HPN uses a series of observations to predict future scenarios. By combining the HPN's predictive capabilities with decision-making, the VFN estimates state-action value functions to improve navigation. A multi-step prediction chain uses the HPN to generate future hypotheses from the observation history. The safety prioritizer, integrated within the VFN, penalizes high-risk actions and masks them during action selection, thus increasing safety.
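The safety prioritizer’s masking step can be sketched simply: actions flagged as high-risk have their estimated values suppressed before the greedy selection over the VFN’s outputs, so the agent never picks them even if they score highest. The function name and the minus-infinity masking convention are illustrative assumptions, not the UCF implementation.

```python
import numpy as np

def select_action(q_values, unsafe_mask):
    """Greedy action selection with safety masking: actions flagged in
    unsafe_mask get value -inf, so argmax only considers safe actions."""
    masked = np.where(unsafe_mask, -np.inf, q_values)
    return int(np.argmax(masked))
```

For example, if a lane change has the highest estimated value but is flagged high-risk, the mask forces the agent to fall back to the best safe alternative.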
Partnering Opportunity
The research team is seeking partners for licensing, research collaboration, or both.
Stage of Development
Prototype available.
Prediction-aware and Reinforcement Learning based Altruistic Cooperative Driving, arXiv:2211.10585 (2022).