Autonomous vehicles are gradually entering city roads today, with the help of high-definition maps (HDMaps). However, the reliance on HDMaps prevents autonomous vehicles from stepping into regions without this expensive digital infrastructure. This fact drives many researchers to study online HDMap construction algorithms, but the performance of these algorithms in far regions remains unsatisfactory. We present P-MapNet, in which the letter P highlights the fact that we focus on incorporating map priors to improve model performance. Specifically, we exploit priors from both SDMap and HDMap. On one hand, we extract weakly aligned SDMap from OpenStreetMap and encode it as an additional conditioning branch. Despite the misalignment challenge, our attention-based architecture adaptively attends to relevant SDMap skeletons and significantly improves performance. On the other hand, we exploit a masked autoencoder to capture the prior distribution of HDMap, which can serve as a refinement module to mitigate occlusions and artifacts. We benchmark on the nuScenes and Argoverse2 datasets. Through comprehensive experiments, we show that: (1) our SDMap prior can improve online map construction performance, using both rasterized (by up to +18.73 mIoU) and vectorized (by up to +8.50 mAP) output representations; (2) our HDMap prior can improve map perceptual metrics by up to 6.34%; (3) P-MapNet can be switched into different inference modes that cover different regions of the accuracy-efficiency trade-off landscape; (4) P-MapNet is a far-seeing solution that brings larger improvements at longer ranges.
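As a rough illustration of the two priors described above, the PyTorch sketch below shows (a) how BEV features could attend to weakly aligned SDMap tokens via cross-attention, and (b) how HDMap rasters could be randomly masked for masked-autoencoder-style pre-training. All module, tensor, and function names (`SDMapCrossAttention`, `bev_feats`, `sd_feats`, `random_mask`) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class SDMapCrossAttention(nn.Module):
    """Sketch of an SDMap conditioning branch (all names are assumptions).

    Each BEV cell queries the encoded SDMap skeleton tokens; attention
    lets the model pick out relevant tokens despite spatial misalignment.
    """

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, bev_feats: torch.Tensor, sd_feats: torch.Tensor) -> torch.Tensor:
        # bev_feats: (B, H*W, C) flattened BEV queries
        # sd_feats:  (B, N, C)   encoded SDMap skeleton tokens
        attended, _ = self.attn(query=bev_feats, key=sd_feats, value=sd_feats)
        return self.norm(bev_feats + attended)  # residual fusion


def random_mask(hd_raster: torch.Tensor, mask_ratio: float = 0.5,
                patch: int = 16) -> torch.Tensor:
    """Sketch of MAE-style masking of a rasterized HDMap (B, C, H, W).

    During pre-training the decoder reconstructs the masked patches, so at
    inference it can in-paint occlusions in the initial map prediction.
    Assumes H and W are divisible by `patch`.
    """
    B, _, H, W = hd_raster.shape
    keep = torch.rand(B, 1, H // patch, W // patch,
                      device=hd_raster.device) > mask_ratio
    mask = keep.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return hd_raster * mask
```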
Performance comparison of the HDMapNet baseline and our method on the nuScenes val set. "S" indicates that our method utilizes only the SDMap prior, while "S+H" indicates the utilization of both priors. "M" denotes the input modality, and "Epoch" denotes the number of refinement epochs.
Perceptual Metric of HDMap Prior. We use the LPIPS metric to evaluate the realism of the fusion model on the $120m\times 60m$ perception range. The improvements from the HDMap Prior Module are more significant than those from the SDMap Prior Module.
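For reference, a minimal LPIPS computation with the `lpips` package might look like the snippet below; the tensors `pred_raster` and `gt_raster` are hypothetical rasterized maps rendered as 3-channel images in [-1, 1] (the input range the package expects), not the authors' actual evaluation script.

```python
import lpips
import torch

# AlexNet backbone is the common default for LPIPS.
loss_fn = lpips.LPIPS(net='alex')

# Placeholder rasters standing in for a predicted and a ground-truth map,
# both rendered as (N, 3, H, W) images scaled to [-1, 1].
pred_raster = torch.rand(1, 3, 60, 120) * 2 - 1
gt_raster = torch.rand(1, 3, 60, 120) * 2 - 1

distance = loss_fn(pred_raster, gt_raster)  # lower = perceptually closer
print(distance.item())
```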
We conduct a comparative analysis within a $240m\times 60m$ range on the nuScenes dataset and a $120m\times 60m$ range on the Argoverse2 dataset, using C+L (camera plus LiDAR) as input. In our notation, "S" indicates that our method utilizes only the SDMap prior, while "S+H" indicates the utilization of both priors. Our method consistently outperforms the baseline under various weather conditions and in scenarios involving viewpoint occlusion.
@ARTICLE{10643284,
author={Jiang, Zhou and Zhu, Zhenxin and Li, Pengfei and Gao, Huan-ang and Yuan, Tianyuan and Shi, Yongliang and Zhao, Hang and Zhao, Hao},
journal={IEEE Robotics and Automation Letters},
title={P-MapNet: Far-Seeing Map Generator Enhanced by Both SDMap and HDMap Priors},
year={2024},
volume={9},
number={10},
pages={8539-8546},
keywords={Feature extraction;Skeleton;Laser radar;Generators;Encoding;Point cloud compression;Autonomous vehicles;Computer vision for transportation;semantic scene understanding;intelligent transportation systems},
doi={10.1109/LRA.2024.3447450}}