Student Projects

ETH Zurich uses SiROP to publish and search scientific projects. For more information, visit sirop.org.

Learning LiDAR Registration Correspondences

Autonomous Systems Lab

The goal of this project is to train a neural network to predict geometrically consistent point correspondences for accurate LiDAR odometry.
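For context on how predicted correspondences feed into odometry: once a network outputs matched point pairs with confidence weights, the relative pose between two scans is commonly recovered with the weighted Kabsch/SVD algorithm. Below is a minimal NumPy sketch of that step; the correspondence inputs and weight naming are illustrative assumptions, not taken from the project description.

```python
import numpy as np

def weighted_kabsch(src, dst, w):
    """Recover the rigid transform (R, t) minimising the weighted error
    sum_i w[i] * ||R @ src[i] + t - dst[i]||^2.

    src, dst: (N, 3) corresponding points from two LiDAR scans.
    w:        (N,) non-negative confidences, e.g. predicted by a network.
    """
    w = w / w.sum()
    mu_src = (w[:, None] * src).sum(axis=0)               # weighted centroids
    mu_dst = (w[:, None] * dst).sum(axis=0)
    H = (src - mu_src).T @ (w[:, None] * (dst - mu_dst))  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ S @ U.T
    t = mu_dst - R @ mu_src
    return R, t
```

Chaining the per-scan-pair transforms then yields the odometry estimate, so the geometric consistency of the learned correspondences directly bounds the accuracy of this step.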

Keywords

Point Cloud, LiDAR, Registration, Odometry, Neural Network, Deep Learning, Machine Learning, Robotics

Labels

Master Thesis

Description

The publisher has set this project to limited visibility; the full description is only available after logging in at SiROP.

More information

Published since: 2025-03-24, Earliest start: 2025-04-01, Latest end: 2026-02-01

Organization: Autonomous Systems Lab

Hosts: Tuna Turcan, Patrick Pfreundschuh

Topics: Information, Computing and Communication Sciences; Engineering and Technology

Multi-Sensor Semantic Odometry

Autonomous Systems Lab

Semantic segmentation augments visual information from cameras or geometric information from LiDARs by classifying the objects present in a scene. Fusing this semantic information with visual or geometric sensor data can improve the odometry estimate of a robot moving through the scene. Uni-modal semantic odometry approaches using either camera images or LiDAR point clouds have been shown to outperform traditional single-sensor approaches; multi-sensor odometry approaches, meanwhile, typically provide more robust estimation in degenerate environments. Combining the two ideas therefore promises both accuracy and robustness.

Keywords

Odometry, Sensor fusion, Semantics

Labels

Semester Project, Master Thesis

Description

The goal of this project is to develop a multi-sensor semantic odometry approach that combines multi-modal semantic information from camera and LiDAR data with visual and geometric constraints. The starting point will be integrating multi-modal semantic segmentation from MSeg3D [1] into the FAST-LIO2 [2] LiDAR-inertial odometry algorithm. The performance of the resulting approach can be evaluated in comparison with existing uni-modal semantic odometry approaches [3,4]. This project is offered by the Vision for Robotics Lab (www.v4rl.com) at ETH Zurich and the University of Cyprus. Students who undertake the project may have the opportunity to visit the lab at the University of Cyprus, but this is not a requirement.

References:
[1] J. Li, H. Dai, H. Han, and Y. Ding, “MSeg3D: Multi-modal 3D Semantic Segmentation for Autonomous Driving,” CVPR, 2023.
[2] W. Xu, Y. Cai, D. He, J. Lin, and F. Zhang, “FAST-LIO2: Fast Direct LiDAR-inertial Odometry,” arXiv, 2021.
[3] W. Ye et al., “PVO: Panoptic Visual Odometry,” CVPR, 2023.
[4] X. Chen, A. Milioto, E. Palazzolo, P. Giguère, J. Behley, and C. Stachniss, “SuMa++: Efficient LiDAR-based Semantic SLAM,” IROS, 2019.
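As background for the camera-LiDAR fusion step, one common way to attach image semantics to LiDAR points is to project each point into the segmented camera image using the calibrated intrinsics and extrinsics. The sketch below illustrates this under a simple pinhole model; the variable names and calibration inputs are illustrative assumptions, not code from MSeg3D or FAST-LIO2.

```python
import numpy as np

def label_points(points_lidar, seg_image, K, T_cam_lidar):
    """Assign a semantic label to each LiDAR point by projecting it into a
    segmented camera image.

    points_lidar: (N, 3) points in the LiDAR frame.
    seg_image:    (H, W) integer class map from a segmentation network.
    K:            (3, 3) pinhole camera intrinsics.
    T_cam_lidar:  (4, 4) extrinsic transform from the LiDAR to the camera frame.
    Returns (N,) labels; -1 marks points outside the image or behind the camera.
    """
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])     # homogeneous coords
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]             # into the camera frame
    labels = np.full(n, -1, dtype=np.int64)
    in_front = pts_cam[:, 2] > 0.1                         # drop points behind the camera
    uvw = (K @ pts_cam[in_front].T).T
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)    # perspective divide
    h, w = seg_image.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    labels[np.flatnonzero(in_front)[ok]] = seg_image[uv[ok, 1], uv[ok, 0]]
    return labels
```

The resulting per-point labels can then be carried alongside the geometric residuals inside the odometry pipeline, e.g. to weight or gate correspondences by semantic class.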

Work Packages

  • WP1: Literature review of work on semantic segmentation and multi-sensor odometry.
  • WP2: Develop a multi-sensor semantic odometry approach.
  • WP3: Evaluate the performance of the approach in comparison with existing work.

Requirements

Experience with C++, Python and ROS.

Contact Details

Please send CV and transcripts to Ruben Mascaro (rmascaro@ethz.ch) and Rowan Border (border.rowan@ucy.ac.cy).

More information

Published since: 2025-02-17, Earliest start: 2024-07-14, Latest end: 2025-01-31

Applications limited to ETH Zurich

Organization: Autonomous Systems Lab

Hosts: Margarita Chli, Rubén Mascaro

Topics: Information, Computing and Communication Sciences

LiDAR-Visual-Inertial Odometry with a Unified Representation

Autonomous Systems Lab

LiDAR-visual-inertial odometry approaches [1-3] aim to overcome the limitations of the individual sensing modalities by estimating a pose from heterogeneous measurements. LiDAR-inertial odometry often diverges in environments with degenerate geometric structure, and visual-inertial odometry can diverge in environments with uniform texture. Many existing LiDAR-visual-inertial odometry approaches use independent LiDAR-inertial and visual-inertial pipelines [2-3] to compute odometry estimates that are combined in a joint optimisation to obtain a single pose estimate. These approaches can obtain a robust pose estimate in degenerate environments but often underperform LiDAR-inertial or visual-inertial methods in non-degenerate scenarios due to the complexity of maintaining and combining odometry estimates from multiple representations.

Keywords

Odometry, SLAM, Sensor Fusion

Labels

Semester Project, Master Thesis

Description

The goal of this project is to develop a LiDAR-visual-inertial odometry approach that integrates visual and LiDAR measurements into a single unified representation. The starting point, inspired by FAST-LIVO2 [1], will be to investigate methods for efficiently combining visual patches from camera images with a set of geometric primitives extracted from FAST-LIO2 [4], a LiDAR-inertial odometry pipeline. The performance of the resulting approach will be evaluated in comparison with existing LiDAR-visual-inertial odometry approaches.

References:
[1] C. Zheng et al., “FAST-LIVO2: Fast, Direct LiDAR-Inertial-Visual Odometry,” IEEE Transactions on Robotics, 2024.
[2] J. Lin and F. Zhang, “R3LIVE++: A Robust, Real-time, Radiance Reconstruction Package with a Tightly-coupled LiDAR-Inertial-Visual State Estimator,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024.
[3] T. Shan, B. Englot, C. Ratti, and D. Rus, “LVI-SAM: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping,” IEEE International Conference on Robotics and Automation, 2021.
[4] W. Xu, Y. Cai, D. He, J. Lin, and F. Zhang, “FAST-LIO2: Fast Direct LiDAR-inertial Odometry,” arXiv, 2021.
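To make the idea of a unified representation concrete, the sketch below shows the two residual types such a system might stack into one estimator: a point-to-plane residual against a geometric primitive, and a photometric residual for an image patch anchored to that primitive. This is a simplified illustration under assumed conventions (the camera pose (R_wc, t_wc) maps camera to world coordinates), not the FAST-LIVO2 formulation.

```python
import numpy as np

def point_to_plane_residual(point_world, n, d):
    """Signed distance of a LiDAR point to the plane primitive n . x + d = 0."""
    return float(n @ point_world + d)

def photometric_residual(patch_ref, anchor_world, R_wc, t_wc, K, image):
    """Intensity difference between a stored reference patch and the current
    image around the projection of the patch's 3D anchor point.
    Nearest-pixel lookup, no warping or bounds checks, for brevity."""
    p_cam = R_wc.T @ (anchor_world - t_wc)          # world -> camera frame
    uvw = K @ p_cam                                 # pinhole projection
    u, v = int(round(uvw[0] / uvw[2])), int(round(uvw[1] / uvw[2]))
    h = patch_ref.shape[0] // 2                     # half patch width
    patch_cur = image[v - h:v + h + 1, u - h:u + h + 1]
    return (patch_cur.astype(np.float32) - patch_ref.astype(np.float32)).ravel()

def stacked_residual(geo_res, photo_res, w_geo=1.0, w_photo=0.01):
    """Stack both residual types so a single solver (e.g. an iterated Kalman
    filter or Gauss-Newton) updates one pose from heterogeneous measurements."""
    return np.concatenate([[w_geo * geo_res], w_photo * photo_res])
```

Because both residuals constrain the same pose through one representation, there is no need to maintain and fuse separate LiDAR-inertial and visual-inertial estimates.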

Work Packages

  • WP1: Literature review of work on LiDAR-visual-inertial odometry.
  • WP2: Develop a LiDAR-visual-inertial odometry approach with a single unified representation.
  • WP3: Evaluate the performance of the approach in comparison with existing work.

Requirements

Experience with C++ and ROS.

Contact Details

Please send CV and transcripts to Rowan Border (border.rowan@ucy.ac.cy) and Ruben Mascaro (rmascaro@ethz.ch).

More information

Published since: 2025-02-17, Earliest start: 2025-01-05, Latest end: 2025-06-30

Applications limited to ETH Zurich

Organization: Autonomous Systems Lab

Hosts: Rubén Mascaro, Margarita Chli

Topics: Information, Computing and Communication Sciences

Odometry and Mapping in Dynamic Environments

Autonomous Systems Lab

Existing LiDAR-inertial odometry approaches (e.g., FAST-LIO2 [1]) can provide sufficiently accurate pose estimation in structured environments to capture high-quality 3D maps of static structures in real time. However, the presence of dynamic objects in an environment can reduce the accuracy of the odometry estimate and produce noisy artifacts in the captured 3D map. Existing approaches to handling dynamic objects [2-4] focus on detecting and filtering them from the captured 3D map but typically operate independently of the odometry pipeline, which means that the dynamic filtering does not improve the pose estimation accuracy.

Keywords

Odometry, Mapping, SLAM, Dynamic Environments

Labels

Semester Project, Master Thesis

Description

The goal of this project is to develop a LiDAR-inertial odometry approach that tightly integrates dynamic object filtering into the pose estimation and mapping pipeline. The starting point will be investigating whether changes in a set of geometric primitives extracted from the FAST-LIO2 [1] odometry pipeline can be used to detect dynamic objects. The performance of the resulting approach will be evaluated in comparison with existing odometry [1] and dynamic object filtering [2-4] approaches.

References:
[1] W. Xu, Y. Cai, D. He, J. Lin, and F. Zhang, “FAST-LIO2: Fast Direct LiDAR-inertial Odometry,” arXiv, 2021.
[2] L. Schmid, O. Andersson, A. Sulser, P. Pfreundschuh, and R. Siegwart, “Dynablox: Real-Time Detection of Diverse Dynamic Objects in Complex Environments,” IEEE Robotics and Automation Letters, 2023.
[3] D. Duberg, Q. Zhang, M. Jia, and P. Jensfelt, “DUFOMap: Efficient Dynamic Awareness Mapping,” IEEE Robotics and Automation Letters, 2024.
[4] H. Lim, S. Hwang, and H. Myung, “ERASOR: Egocentric Ratio of Pseudo Occupancy-based Dynamic Object Removal for Static 3D Point Cloud Map Building,” IEEE Robotics and Automation Letters, 2021.
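As a rough illustration of the proposed starting point, the sketch below flags points whose residuals against previously fitted local plane primitives are large, which is one plausible signal of dynamic objects. The voxel-indexed plane map and the thresholds are assumptions for the example, not part of FAST-LIO2.

```python
import numpy as np

def flag_dynamic(points_world, plane_map, voxel_size=0.5, tau=0.2):
    """Mark points that disagree with previously fitted local plane primitives.

    plane_map: dict mapping a voxel index tuple to (n, d), the plane
               n . x + d = 0 fitted from earlier scans.
    Returns a boolean mask; True marks candidate dynamic points.
    """
    dynamic = np.zeros(len(points_world), dtype=bool)
    for i, p in enumerate(points_world):
        key = tuple(np.floor(p / voxel_size).astype(int))
        plane = plane_map.get(key)
        if plane is None:
            continue                        # no static model here yet
        n, d = plane
        if abs(n @ p + d) > tau:            # large residual vs. the static plane
            dynamic[i] = True
    return dynamic
```

A tightly integrated version would additionally down-weight or exclude such points inside the odometry update itself, rather than only filtering the map afterwards.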

Work Packages

  • WP1: Literature review of work on LiDAR-inertial odometry and dynamic object detection.
  • WP2: Develop a LiDAR-inertial odometry approach that can robustly handle dynamic environments.
  • WP3: Evaluate the performance of the approach in comparison with existing work.

Requirements

Experience with C++ and ROS.

Contact Details

Please send CV and transcripts to Rowan Border (border.rowan@ucy.ac.cy) and Ruben Mascaro (rmascaro@ethz.ch).

More information

Published since: 2025-02-17, Earliest start: 2025-01-05, Latest end: 2025-06-30

Applications limited to ETH Zurich

Organization: Autonomous Systems Lab

Hosts: Rubén Mascaro, Margarita Chli

Topics: Information, Computing and Communication Sciences; Engineering and Technology

Generating Detailed 3D Objects from Rough 3D Primitives

Autonomous Systems Lab

This project focuses on the generation of detailed 3D models from a user-specified set of 3D cuboids.

Keywords

Generative 3D Modelling, Diffusion Models, 3D Vision

Labels

Semester Project, Master Thesis, ETH Zurich (ETHZ)

Description

Diffusion models [1] have driven rapid progress in generative AI by enabling the generation of images of previously unseen quality and diversity [2]. Their impact also extends to the field of 3D vision, where they enable the generation of detailed object meshes from natural language prompts [3] or 2D images [4]. Despite these impressive results, however, it is hard to strike a balance between the control a user has over the generation process and the effort required to exert that control. Natural language prompts, for example, are easy to specify but can be too abstract to sufficiently guide object generation. 2D images, on the other hand, give the user a lot of control over the generated objects but require considerable effort and skill to create. In this project, we aim to strike this balance for the 3D case by generating a detailed 3D object from a rough shape specification in the form of cuboids.

[1] https://arxiv.org/abs/2006.11239

[2] https://arxiv.org/abs/2112.10752

[3] https://arxiv.org/abs/2211.10440

[4] https://arxiv.org/abs/2303.14184
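For readers unfamiliar with diffusion models, the sketch below shows a single DDPM-style training step in the spirit of [1], here conditioned on an encoding of the user-specified cuboids. The model signature and the choice of shape representation are illustrative assumptions; the project's actual conditioning mechanism is left open.

```python
import torch
import torch.nn.functional as F

def diffusion_training_step(model, x0, cuboid_cond, T=1000):
    """One DDPM-style training step, conditioned on rough cuboid primitives.

    model:       network predicting the injected noise from (x_t, t, cond);
                 its signature here is an assumption for illustration.
    x0:          (B, ...) batch of target 3D shape representations (e.g. latents).
    cuboid_cond: (B, ...) encoding of the user-specified cuboids
                 (e.g. centres, sizes, orientations).
    """
    betas = torch.linspace(1e-4, 0.02, T)                 # linear noise schedule
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)
    t = torch.randint(0, T, (x0.shape[0],))               # random timestep per sample
    noise = torch.randn_like(x0)
    ab = alpha_bar[t].view(-1, *([1] * (x0.dim() - 1)))
    x_t = ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise      # forward noising of x0
    pred_noise = model(x_t, t, cuboid_cond)               # conditional denoiser
    return F.mse_loss(pred_noise, noise)                  # simple DDPM objective
```

At sampling time, the same conditioning lets the user steer generation by editing the cuboids rather than by engineering prompts or producing reference images.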

Work Packages

  • You will start from an existing codebase, which already contains code for extracting the 3D primitives and a proof of concept for chairs
  • Your main objective will be to extend the existing model to work on multiple categories
  • You will also be evaluating the effectiveness of different architectural choices on the model's performance

Requirements

  • Good understanding of basic concepts in machine learning
  • Prior experience with Python and a machine learning framework
  • Plus: courses such as 3D Vision or Computer Vision

More information

Published since: 2025-02-10, Earliest start: 2025-02-10, Latest end: 2025-11-30

Organization: Autonomous Systems Lab

Hosts: Liesbeth Claessens

Topics: Information, Computing and Communication Sciences

Note on plagiarism

We advise every student, irrespective of the type of project (Bachelor, Semester, Master, ...), to become familiar with the ETH rules regarding plagiarism.