We propose the use of single-chip millimeter-wave (mmWave) radar, which is lightweight and inexpensive, for learning-based autonomous navigation. However, because mmWave radar signals are often noisy and sparse, we propose a cross-modal contrastive learning for representation (CM-CLR) method that maximizes the agreement between mmWave radar data and LiDAR data during the training stage, enabling autonomous navigation using only radar signals at deployment.
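As a rough sketch of the idea (not the paper's exact CM-CLR objective; the InfoNCE-style formulation, function names, and temperature value below are my assumptions), a cross-modal contrastive loss that maximizes agreement between paired radar and LiDAR embeddings could look like this in PyTorch:

```python
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(radar_emb, lidar_emb, temperature=0.1):
    """InfoNCE-style loss: pull each radar embedding toward its paired
    LiDAR embedding and push it away from other samples in the batch.

    radar_emb, lidar_emb: (B, D) tensors, row i of each is a matched pair.
    """
    # L2-normalize so the dot product is cosine similarity
    radar_emb = F.normalize(radar_emb, dim=1)
    lidar_emb = F.normalize(lidar_emb, dim=1)

    # Pairwise similarity between every radar and every LiDAR sample
    logits = radar_emb @ lidar_emb.t() / temperature  # (B, B)

    # The matching pair for sample i sits on the diagonal
    targets = torch.arange(logits.size(0), device=logits.device)

    # Symmetric loss: radar -> LiDAR and LiDAR -> radar directions
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

Each row of the similarity matrix treats the matching LiDAR embedding as the positive and all other batch samples as negatives; the symmetric term does the same in the LiDAR-to-radar direction.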
Within the team, I am responsible for the hardware system of the UGV and for running the experiments. I have also taken over the future work of reconstructing LiDAR point clouds into mmWave-like radar data, since Gazebo does not provide radar information.
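One plausible starting point for that reconstruction (a hypothetical sketch, not the project's actual pipeline; keep_ratio and noise_std are assumed parameters) is to degrade the simulated LiDAR cloud into a sparse, noisy, radar-like one:

```python
import numpy as np

def lidar_to_radar_like(points, keep_ratio=0.05, noise_std=0.05, rng=None):
    """Degrade a dense LiDAR point cloud into a sparse, noisy,
    mmWave-radar-like cloud via random subsampling and Gaussian jitter.

    points: (N, 3) array of LiDAR points in the sensor frame.
    """
    rng = rng or np.random.default_rng()
    # Radar returns far fewer points per scan than LiDAR
    n_keep = max(1, int(len(points) * keep_ratio))
    idx = rng.choice(len(points), size=n_keep, replace=False)
    sparse = points[idx]
    # mmWave range measurements are noisier than LiDAR
    return sparse + rng.normal(scale=noise_std, size=sparse.shape)
```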