Special Session 3

2026 12th International Conference on Electrical Engineering, Control and Robotics (EECR 2026)

"Autonomous Robotic Mapping and Navigation in GPS‑Denied Environments: From Perception to Resilient Field Deployment"

Organizer:

Fanxin Wang, Xi’an Jiaotong-Liverpool University, China

Fanxin Wang received the B.Eng. degree in Mechatronic Engineering from Zhejiang University, China, in 2017, and the M.Sc. and Ph.D. degrees in Mechanical Engineering from the University of Illinois at Urbana-Champaign (UIUC), USA, in 2019 and 2023, respectively. He is currently an Assistant Professor with the Department of Mechatronics and Robotics, Xi’an Jiaotong-Liverpool University, Suzhou, China. His research interests include robot planning and control, with a focus on the autonomous navigation of quadruped robots and UAVs in GPS-denied environments.

Introduction:

Robotic mapping and navigation in environments where GPS is unavailable or unreliable, such as indoors, underground, in dense urban or natural terrain, or in post-disaster scenarios, remain a core challenge for autonomous systems. Such settings demand that robots perceive, model, and traverse complex, often dynamically changing spaces using only onboard sensing (e.g., LiDAR, cameras, inertial units) and prior semantic or structural cues. While recent advances in geometric mapping, place recognition, and simultaneous localisation and mapping (SLAM) have laid critical foundations, real-world deployment in unstructured, large-scale, or perceptually degraded GPS-denied environments continues to test the limits of robustness, accuracy, and operational endurance.
Emerging techniques in multimodal perception, learned representations, and uncertainty-aware planning are opening new pathways toward resilient autonomy. By integrating deep visual-inertial odometry, semantic-topological mapping, and self-supervised or simulation-trained navigation policies, robots can increasingly operate in conditions where traditional localisation fails. However, significant gaps persist in generalising across domains, maintaining long-term consistency without external anchors, and balancing computational efficiency with the precision required for safety‑critical applications.
This Special Session aims to unite researchers in robotics, computer vision, machine learning, and field robotics to present and discuss cutting-edge approaches that push the boundaries of autonomous operation where GPS cannot be relied upon. We seek contributions that span novel algorithms, system integrations, and real-world evaluations, with an emphasis on scalability, robustness, and readiness for practical use in industry, search-and-rescue, planetary exploration, and other demanding environments.

The session will be organised into the following two thematic blocks:

1. Perceptual Intelligence for Mapping and Localisation
This block focuses on core methods for building and maintaining spatial representations without GPS. Topics may include multi-sensor fusion (visual, LiDAR, inertial), semantic and topological mapping, place recognition under appearance change, lifelong SLAM, neural radiance fields (NeRF) for 3D scene modelling, uncertainty quantification, and learning‑based odometry and loop closure.

2. Navigation and Autonomy in Challenging Settings
This block covers planning, control, and full-system deployment in complex GPS-denied environments. Topics may include robust path planning under localisation uncertainty, adaptive navigation in dynamic or perceptually degraded conditions (e.g., dust, smoke, darkness), multi-robot cooperative mapping, field evaluations in subterranean, marine, forest, or urban-canyon settings, and resource-efficient algorithms for long-duration missions.