New study introduces VRUFinder: a visual perception framework for detecting vulnerable road users with infrastructure-mounted sensors, enhancing vehicle-infrastructure collaboration systems.

Jian Shi, Le-Minh Kieu, Dongxian Sun, Haiqiu Tan, Ming Gao, Baicang Guo, Wuhong Wang

Please read the abstract for a detailed overview:

Abstract:

Real-time, accurate perception of vulnerable road users (VRUs) by infrastructure-sensor-based devices is integral to roadside perception in vehicle-infrastructure collaboration systems. However, prevailing data and algorithms fall short of accomplishing this task effectively on high-resolution imagery. In response, we introduce VRUFinder, a visual perception framework designed specifically for infrastructure-enabled deployment, together with a multi-view symmetrical knowledge distillation methodology for VRU recognition. This approach amalgamates multiple teacher networks into streamlined student networks from diverse perspectives. By integrating our novel logical connectivity and quality judgment model, we enhance the state-of-the-art YOLOv7 and StrongSORT algorithms. Moreover, we present VRUNet, a novel dataset for VRU recognition that furnishes high-resolution, top-down images captured with a visual sensor acquisition system; to the best of our knowledge, datasets of this nature are rare in current VRU recognition research. The effectiveness of our approach is substantiated through a series of ablation experiments and an engineering case study on a low-compute infrastructure-sensor-enabled device. By encapsulating our approach, we provide mature solutions for commercial infrastructure-sensor-based devices, which will contribute to the development of connected and automated vehicles and intelligent transportation systems.
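The multi-view distillation described in the abstract builds on the standard teacher-student setup, in which a compact student network is trained to imitate the softened outputs of larger teacher networks. The sketch below is only a rough illustration of that general technique, not the paper's implementation; the function names, the temperature `T`, the blending weight `alpha`, and the simple averaging of multiple teachers' logits are all our own assumptions.

```python
# Minimal sketch of teacher-student knowledge distillation (hypothetical names).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    """Blend a soft loss (match the teacher's softened outputs)
    with a hard loss (match the ground-truth labels)."""
    # Soft targets: KL divergence between temperature-scaled distributions,
    # rescaled by T*T as in standard distillation.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against ground-truth class labels.
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard

def multi_teacher_loss(student_logits, teacher_logits_list, targets, T=4.0, alpha=0.5):
    """Illustrative multi-teacher variant: average the teachers' logits
    (e.g. one teacher per view) before distilling into one lightweight student."""
    avg_teacher = torch.stack(teacher_logits_list).mean(dim=0)
    return distillation_loss(student_logits, avg_teacher, targets, T, alpha)
```

In practice, a detection pipeline such as the YOLOv7-based one mentioned above would apply this kind of loss per detection head rather than to plain classification logits; the snippet only shows the core idea of compressing several teachers into a streamlined student.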
