DyLPR: Dynamic Occlusion Inpainting-Enhanced LiDAR Place Recognition for Dynamic Traffic Environments
| Authors | Dong Kong · Liye Zhang · Xiaoyu Sun · Shuo Zhang · Weiming Hu · Hairong Dong |
| Journal | IEEE Transactions on Industrial Informatics |
| Publication Date | January 2026 |
| Volume/Issue | Vol. 22, No. 2 |
| Category | Intelligence & AI Applications |
| Relevance Score | ★★ 2.0 / 5.0 |
| Keywords |
High-frequency dynamic targets introduce substantial appearance variations in LiDAR scans of the same location over time, posing a major challenge for place recognition (PR). To tackle this, we propose DyLPR, a cascaded PR framework that integrates a LiDAR depth inpainting network and a place recognition network (PTN-Net), leveraging the complementary strengths of convolutional neural networks (CNNs) and transformer architectures. Specifically, a supervised encoder–decoder combining CNNs and transformers is employed to effectively handle dynamic masks across multiple scales. A semantic auxiliary branch and a hybrid loss function are further introduced to enhance both structural consistency and texture fidelity, resulting in more accurate and realistic depth inpainting. PTN-Net employs a pyramidal convolutional backbone with parallel Transformer-NetVLAD modules to capture long-range multiscale dependencies and adaptively aggregate salient features, while context gating refines integration to improve descriptor compactness and discriminability. Extensive evaluations on the LiDAR depth inpainting dataset, constructed from benchmark SemanticKITTI and real-vehicle data, demonstrate that our method achieves competitive inpainting performance and outperforms existing approaches in place recognition under dynamic conditions. For instance, DyLPR improves Recall@1 by an absolute 5.4% over the best baseline.
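The abstract mentions NetVLAD-style feature aggregation refined by context gating. As a rough illustration of those two generic building blocks (not the paper's actual PTN-Net implementation, whose learned parameters and assignment layers are unknown), the sketch below soft-assigns local features to cluster centers, accumulates residuals into a global descriptor, and then applies an element-wise sigmoid gate. Dimensions, the distance-based assignment, and the random weights are all illustrative assumptions:

```python
import math
import random

random.seed(0)

D, K, N = 4, 3, 5  # feature dim, clusters, number of local features (toy sizes)

def softmax(zs):
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Random stand-ins for extracted local features and learned cluster centers.
feats = [[random.gauss(0, 1) for _ in range(D)] for _ in range(N)]
centers = [[random.gauss(0, 1) for _ in range(D)] for _ in range(K)]

def netvlad(feats, centers):
    """Soft-assign each local feature to clusters, accumulate residuals per cluster."""
    V = [[0.0] * D for _ in range(K)]
    for x in feats:
        # Assignment logits: negative squared distance to each center
        # (a simplification of NetVLAD's learned 1x1-conv assignment).
        logits = [-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) for c in centers]
        a = softmax(logits)
        for k in range(K):
            for d in range(D):
                V[k][d] += a[k] * (x[d] - centers[k][d])
    # Flatten the K x D residual matrix and L2-normalize the descriptor.
    v = [val for row in V for val in row]
    norm = math.sqrt(sum(val * val for val in v)) or 1.0
    return [val / norm for val in v]

def context_gate(desc, W, b):
    """Element-wise gating: out = sigmoid(W @ desc + b) * desc."""
    gates = [sigmoid(sum(W[i][j] * desc[j] for j in range(len(desc))) + b[i])
             for i in range(len(desc))]
    return [g * d for g, d in zip(gates, desc)]

desc = netvlad(feats, centers)          # global descriptor of length K * D
M = len(desc)
W = [[random.gauss(0, 0.1) for _ in range(M)] for _ in range(M)]
b = [0.0] * M
gated = context_gate(desc, W, b)        # gated descriptor, same length
```

In the actual method the assignment weights, cluster centers, and gating matrix would be learned end-to-end with the rest of the network; this sketch only shows the dataflow of the aggregation and gating steps.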
No in-depth analysis is available for this document yet.