QUT Researchers Reveal Enhanced Automated Visual Mapping System

Sydney, Australia (SPX) Jul 11, 2024 - Researchers at the Queensland University of Technology (QUT) have unveiled an automated system that significantly advances how robots map and navigate their surroundings. The new technique, which improves the adaptability of vision-based mapping systems across diverse environments, will be presented next week at the prestigious Robotics: Science and Systems (RSS) 2024 conference in Delft, the Netherlands, by Dr. Alejandro Fontan Villacampa of the QUT School of Electrical Engineering and Robotics and Professor Michael Milford, Director of the QUT Centre for Robotics.

Lead researcher Dr. Fontan explained that visual SLAM (simultaneous localisation and mapping) is a technology that helps devices such as drones, autonomous vehicles and robots navigate.

"It enables them to create a map of their surroundings and keep track of their location within that map simultaneously," Dr. Fontan said.

"Traditionally, SLAM systems rely on specific types of visual features - distinctive patterns within images used to match and map the environment. Different features work better in different conditions, so switching between them is often necessary. However, this switching has been a manual and cumbersome process, requiring a lot of parameter tuning and expert knowledge."

Dr. Fontan stated that QUT's new system, AnyFeature-VSLAM, integrates automation into the widely used ORB-SLAM2 system.

"It enables a user to seamlessly switch between different visual features without laborious manual intervention," Dr. Fontan said. "This automation improves the system's adaptability and performance across various benchmarks and challenging environments."

Professor Milford highlighted the primary innovation of AnyFeature-VSLAM: its automated tuning mechanism.

"By integrating an automated parameter tuning process, the system optimises the use of any chosen visual feature, ensuring optimal performance without manual adjustments," Professor Milford said. "Extensive experiments have shown the system's robustness and efficiency, outperforming existing methods in many benchmark datasets."

Dr. Fontan described this development as a promising step forward in visual SLAM technology.

"By automating this tuning process, we are not only improving performance but also making these systems more accessible and easier to deploy in real-world scenarios," Dr. Fontan said.

Professor Milford emphasised that the RSS conference is one of the most prestigious events in the field, attracting the world's leading robotics researchers.

"The presentation of AnyFeature-VSLAM at RSS 2024 highlights the importance and impact of this research," Professor Milford said. "The conference will provide a platform for showcasing this breakthrough to an international audience."

"Having our research accepted for presentation at RSS 2024 is a great honour," said Professor Milford, supervisor of the research project, which also involves collaboration with Associate Professor Javier Civera from the University of Zaragoza in Spain. "It shows the significance of our work and the potential it has to advance the field of robotics."

The project received partial funding from an Australian Research Council Laureate Fellowship and the QUT Centre for Robotics.

Research Report: AnyFeature-VSLAM: Automating the Usage of Any Feature into Visual SLAM
