Abstract
Researchers at the University of Central Florida have developed a system that outperforms conventional single-vehicle object detection approaches. The invention enables “cooperative cognition” by sharing partially processed data (“feature sharing”) from light detection and ranging (LIDAR) sensors among cooperative vehicles. The partially processed data are the features derived from an intermediate layer of a deep neural network. Experimental results show that the approach significantly improves object detection performance while keeping the required communication bandwidth and load low compared with methods that share raw sensor data.
In connected and autonomous vehicles, a driver's safety directly depends on the robustness, reliability, and scalability of the systems that enable situational awareness. Cooperative mechanisms improve situational awareness by using communication networks; however, network capacity limits the amount of information that cooperative entities can share. The UCF invention offers a solution by reducing network bandwidth requirements while maintaining object detection performance.
The UCF approach achieves a better understanding of the surrounding environment by sharing partially processed data between cooperative vehicles while balancing computation and communication load. It is also scalable, unlike current map-sharing or raw-data-sharing methods. Through experiments on data generated with Volony (the research team’s cooperative dataset collection tool), the researchers showed that the approach significantly outperforms conventional single-vehicle object detection approaches.
Technical Details
Unlike conventional methods, the UCF invention combines feature sharing with a new object detection framework. The feature sharing cooperative object detection (FS-COD) method addresses the challenges of partial occlusion, sensor range limitation, and lack of consensus among observers. To reduce the required communication capacity and enhance object detection performance, the UCF team introduces two new shared-data alignment mechanisms and a novel parallel network architecture, sketched in the example below.
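The parallel, feature-sharing structure can be illustrated with a short sketch. The following PyTorch code is a minimal, hypothetical rendering of the idea, not the patented implementation: the layer sizes, the element-wise maximum fusion, and all names (e.g., FeatureSharingDetector) are illustrative assumptions.

```python
import torch

# Hypothetical sketch: each vehicle runs the front half of a shared CNN on its
# own LIDAR bird's-eye-view (BEV) tensor, transmits the resulting intermediate
# feature map, and fuses any received feature maps with its own before running
# the detection head. Shapes and the fusion operator are assumptions.
class FeatureSharingDetector(torch.nn.Module):
    def __init__(self, in_channels=8, feat_channels=64, num_outputs=7):
        super().__init__()
        # Front half: runs locally on each vehicle; its output is what is shared.
        self.encoder = torch.nn.Sequential(
            torch.nn.Conv2d(in_channels, feat_channels, 3, stride=2, padding=1),
            torch.nn.ReLU(),
            torch.nn.Conv2d(feat_channels, feat_channels, 3, stride=2, padding=1),
            torch.nn.ReLU(),
        )
        # Back half: consumes the fused feature map and predicts per-fixel
        # outputs (location, orientation, class, confidence; count illustrative).
        self.head = torch.nn.Conv2d(feat_channels, num_outputs, 1)

    def forward(self, own_bev, received_features=None):
        own_features = self.encoder(own_bev)  # compact vs. raw point clouds
        if received_features is not None:
            # Fuse spatially aligned feature maps; element-wise max is one
            # plausible choice, assumed here for illustration.
            own_features = torch.maximum(own_features, received_features)
        return self.head(own_features)
```

Because the transmitted tensor is a downsampled intermediate feature map rather than a raw point cloud, the payload per frame scales with the encoder's output resolution and channel count instead of the number of LIDAR returns, which is the source of the bandwidth savings described above.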
In one example embodiment, the method begins by aligning the point clouds obtained by a vehicle’s sensor with respect to the vehicle’s heading and a predefined global coordinate system. Once the point clouds are rotationally aligned to the global frame, a bird’s-eye view (BEV) projector unit (or point-cloud-to-3D-tensor projector) projects them onto a 2D/3D image plane, generating a BEV image/tensor with one or more channels, where each channel encodes the density of reflected points within a specific height bin. The features later extracted from this representation encode, for each detection, a vector giving the object’s location and orientation relative to the observer, along with its class and confidence. The pixels of the resulting feature maps, termed fixels, each correspond to a set of pixel coordinates in the input image and hence to an area of the environment in global coordinates. By applying Translation Mod Alignment to the BEV images/tensors before they are fed to a convolutional neural network (CNN), the method ensures that each fixel represents a predetermined range of global coordinates. The mod-aligned BEV image is then passed through the CNN to obtain a feature map of the surrounding environment, as illustrated in the sketch below.
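A minimal NumPy sketch of these preprocessing steps follows. The grid size, cell size, height bins, function names, and the exact mod-alignment arithmetic are assumptions made for illustration; the patent’s precise formulation may differ.

```python
import numpy as np

def rotate_to_global(points, heading_rad):
    """Rotationally align an (N, 3) point cloud from the vehicle frame to the
    global coordinate system by undoing the vehicle's heading."""
    c, s = np.cos(-heading_rad), np.sin(-heading_rad)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ rot.T

def bev_projection(points, cell_size=0.2, grid=256, z_bins=(-2.0, 0.0, 1.0, 3.0)):
    """Project aligned points onto a BEV tensor; each channel counts the
    reflected points falling in one height bin (a density image per slice)."""
    bev = np.zeros((len(z_bins) - 1, grid, grid), dtype=np.float32)
    half = grid * cell_size / 2.0
    ix = ((points[:, 0] + half) / cell_size).astype(int)
    iy = ((points[:, 1] + half) / cell_size).astype(int)
    iz = np.digitize(points[:, 2], z_bins) - 1  # height-bin index per point
    ok = ((ix >= 0) & (ix < grid) & (iy >= 0) & (iy < grid)
          & (iz >= 0) & (iz < len(z_bins) - 1))
    np.add.at(bev, (iz[ok], iy[ok], ix[ok]), 1.0)  # accumulate point density
    return bev

def translation_mod_align(position_xy, cell_size=0.2, downsample=4):
    """Snap the BEV crop origin so every fixel (feature-map pixel) covers a
    fixed, globally quantized patch: one plausible reading of Translation Mod
    Alignment, assumed here for illustration."""
    stride = cell_size * downsample        # global extent of one fixel
    offset = np.mod(position_xy, stride)   # residual relative to the fixel grid
    return position_xy - offset            # mod-aligned crop origin
```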
Partnering Opportunity
The research team is looking for partners to develop the technology further for commercialization.
Stage of Development
Prototype available.
Benefit
Increases object detection performance while decreasing the required communication capacity between cooperative vehicles
Helps cooperative safety applications become more scalable and reliable
Overcomes the non-line-of-sight and partial-occlusion challenges encountered in single-vehicle object detection setups
Market Application
Connected and autonomous vehicle applications
Sensor processing applications that use neural networks
Publications
Cooperative LIDAR Object Detection via Feature Sharing in Deep Networks, 2020 IEEE 92nd Vehicular Technology Conference (VTC2020-Fall) (2020): 1-7.