Uni-Mapper: Unified Mapping Framework for Multi-modal LiDARs in Complex and Dynamic Environments

IEEE Transactions on Intelligent Vehicles (T-IV) 2025
1Spatial AI and Robotics (SPARO) Lab, Inha University, South Korea
2Hyundai Robotics Lab, South Korea
3AI and Robotics Lab, Korea Institute of Machinery and Materials (KIMM), South Korea

TL;DR: A map-merging framework for multi-modal LiDARs!

Abstract

The unification of disparate maps is crucial for enabling scalable robot operation across multiple sessions and collaborative multi-robot scenarios. However, achieving a unified map that is robust to differences in sensor modality and to dynamic environments remains challenging. This paper presents **Uni-Mapper**, a dynamic-aware 3D point cloud map merging framework for multi-modal LiDAR systems. Our approach consists of three core components: dynamic object removal, dynamic-aware loop closure, and multi-modal LiDAR map merging. First, a voxel-wise free-space hash map is constructed in a coarse-to-fine manner. By combining the free spaces observed across a sliding window of sequential scans, points whose temporal occupancy is inconsistent are rejected as dynamic objects. This removal module is integrated into a map-centric place descriptor, DynaSTD. Because the removal module preserves only static points and the descriptor encodes multiple local features of the scene, DynaSTD remains effective across LiDAR modalities in dynamic environments. Building on DynaSTD, multiple pose graph optimizations are performed for both intra-session and inter-map loop closures, and we adopt a centralized anchor-node-based approach for map merging to mitigate intra-session drift errors. Our framework is evaluated on the HeLiPR and INHA datasets, both of which feature abundant dynamic objects and were recorded with various types of LiDAR sensors. Compared with state-of-the-art place recognition methods and map merging systems, our framework demonstrates superior loop detection across sensor modalities, robust mapping in dynamic environments, and precise alignment of multiple maps.
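
To make the free-space idea concrete, below is a minimal, hypothetical Python sketch of voxel-wise free-space counting over a sliding window, not the authors' implementation. It uses a single voxel resolution (the paper builds the hash map coarse-to-fine), approximates exact ray traversal by step sampling along each ray, and the constants `VOXEL`, `FREE_THRESH`, and all function names are illustrative assumptions:

```python
import numpy as np
from collections import defaultdict

VOXEL = 0.2       # voxel edge length [m]; hypothetical value
FREE_THRESH = 2   # scans that must see a voxel as free before its points
                  # are rejected as dynamic; hypothetical value

def voxel_key(pt, size=VOXEL):
    """Integer voxel coordinates of a 3D point (hash-map key)."""
    return tuple(np.floor(np.asarray(pt) / size).astype(int))

def free_voxels(origin, points, size=VOXEL):
    """Voxels traversed by rays from the sensor origin to each return.
    Step sampling approximates an exact voxel walk (e.g., DDA)."""
    keys = set()
    for p in points:
        ray = p - origin
        n = max(int(np.linalg.norm(ray) / size), 1)
        for k in range(n):  # stop short of the endpoint voxel (it is occupied)
            keys.add(voxel_key(origin + ray * (k / n), size))
    return keys

def remove_dynamic(window_scans, window_origins):
    """Reject points whose voxel was observed as free space by scans in the
    sliding window, i.e., points with a temporal-occupancy discrepancy."""
    free_count = defaultdict(int)  # voxel -> number of scans seeing it free
    for org, pts in zip(window_origins, window_scans):
        for key in free_voxels(org, pts):
            free_count[key] += 1
    static_scans = []
    for pts in window_scans:
        keep = [p for p in pts if free_count[voxel_key(p)] < FREE_THRESH]
        static_scans.append(np.asarray(keep))
    return static_scans
```

A point sitting in a voxel that several other scans in the window saw as empty space was only transiently occupied, which is exactly the temporal-occupancy discrepancy the abstract describes.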
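DynaSTD itself is specified in the paper; as a rough, hypothetical illustration of the general family of triangle-based place descriptors it builds on, one can hash triangles of scene keypoints by their sorted side lengths. Side lengths are invariant to rotation and translation, which is one reason such descriptions transfer across LiDAR modalities. All names and the `BIN` constant below are illustrative assumptions:

```python
import itertools
from collections import defaultdict
import numpy as np

BIN = 0.5  # side-length quantization [m]; hypothetical value

def triangle_keys(keypoints):
    """Hash keys for keypoint triangles: sorted, quantized side lengths.
    Real systems limit triangles to k-nearest keypoints; the exhaustive
    combination here is O(n^3) and only for illustration."""
    keys = []
    for a, b, c in itertools.combinations(keypoints, 3):
        sides = sorted([np.linalg.norm(a - b),
                        np.linalg.norm(b - c),
                        np.linalg.norm(c - a)])
        keys.append(tuple(int(s / BIN) for s in sides))
    return keys

class TriangleDatabase:
    """Toy place-recognition index: a query scan votes for stored scans
    that share triangle hash keys with it."""
    def __init__(self):
        self.index = defaultdict(list)  # key -> scan ids containing it

    def add(self, scan_id, keypoints):
        for k in triangle_keys(keypoints):
            self.index[k].append(scan_id)

    def query(self, keypoints):
        votes = defaultdict(int)
        for k in triangle_keys(keypoints):
            for sid in self.index[k]:
                votes[sid] += 1
        return max(votes, key=votes.get) if votes else None
```

Feeding this index only the static points surviving dynamic removal is what keeps the vote counts from being polluted by moving objects.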
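The paper's centralized anchor-node formulation attaches a per-session anchor pose to each map and optimizes the anchors jointly with the pose graphs. As a heavily simplified sketch, under the assumption that each session's internal trajectory is treated as rigid, the anchor reduces to a single SE(3) transform per session, which can be estimated from inter-map loop correspondences by a Kabsch alignment. This is an illustrative stand-in, not the paper's optimization:

```python
import numpy as np

def kabsch_se3(src, dst):
    """Rigid (R, t) minimizing sum ||R @ src_i + t - dst_i||^2 (Kabsch)."""
    src, dst = np.asarray(src), np.asarray(dst)
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def estimate_anchor(poses_a, poses_b, loops):
    """Anchor transform mapping session B's frame into session A's frame.

    poses_a / poses_b: lists of 4x4 homogeneous poses, each in its own
    session frame. loops: (i, j, T_ij) inter-map loop closures, with T_ij
    the 4x4 pose of B's scan j expressed in the frame of A's scan i.
    Needs at least three non-collinear correspondences.
    """
    src, dst = [], []
    for i, j, T_ij in loops:
        src.append(poses_b[j][:3, 3])           # B position, B's frame
        dst.append((poses_a[i] @ T_ij)[:3, 3])   # same position, A's frame
    R, t = kabsch_se3(src, dst)
    anchor = np.eye(4)
    anchor[:3, :3], anchor[:3, 3] = R, t
    return anchor
</code>
```

In the full system the anchors are graph variables, so intra-session drift is corrected during the joint optimization rather than frozen as it is in this rigid approximation.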