RoEL

Robust Event-based 3D Line Reconstruction

IEEE Transactions on Robotics (T-RO) 2026
1ECE, Seoul National University
2ME, Seoul National University
3Robotics, University of Michigan

Event cameras capture edges. We map them to 3D lines.
RoEL is a correspondence-based 3D line reconstruction pipeline for event cameras,
from reliable correspondence search to Grassmannian optimization in 3D space.

Abstract

Event cameras in motion tend to detect object boundaries and texture edges, which produce lines of brightness changes, especially in man-made environments. While lines can serve as a robust intermediate representation that is consistently observed, their sparse nature means that minor estimation errors can cause drastic deterioration. Only a few previous works, often relying on additional sensors, utilize lines to compensate for the severe domain discrepancies and unpredictable noise characteristics of event sensors. We propose a method that stably extracts tracks of lines with varying appearances by observing multiple representations over various time slices of events, compensating for potential artifacts in the event data. We then propose geometric cost functions that refine the 3D line maps and camera poses, eliminating projective distortions and depth ambiguities. The resulting 3D line maps are highly compact, and our proposed cost function can be adapted to any observation from which line structures, or projections of them, can be detected and extracted, including 3D point cloud maps and image observations. We demonstrate that our formulation yields a significant performance boost in event-based mapping and pose refinement across diverse datasets, and can be flexibly applied to multimodal scenarios. Our results confirm that the proposed line-based formulation is a robust and effective approach for the practical deployment of event-based perceptual modules.

Method Overview

RoEL consists of two stages: correspondence search and 3D line reconstruction.
The correspondence search stage takes events and camera poses as input. Through line detection, plane fitting, and matching processes that take into account the characteristics of event data, it finds line correspondences and identifies the events supporting each line.
The 3D line reconstruction stage triangulates 2D lines to generate initial 3D lines. We further optimize 3D lines with multi-view observations using cost functions defined in 3D space based on the Grassmann distance. Finally, our method outputs 3D line segments that effectively represent the scene.

Correspondence Search

In this stage, we first propose an efficient method for adapting an existing frame-based line detector to event data.
Then, we introduce a detection-based space–time plane fitting approach for line refinement and inlier event association.
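
Since events triggered by a straight edge swept under camera motion trace an approximately planar surface in (x, y, t) space-time, plane fitting doubles as line refinement and inlier association. The sketch below is an illustrative total-least-squares variant, not the paper's exact procedure; the threshold value is a hypothetical placeholder.

```python
import numpy as np

def fit_spacetime_plane(events, thresh=0.01):
    """Fit a plane to events in (x, y, t) space by total least squares.

    events: (N, 3) array of (x, y, t) tuples near a detected line.
    Returns (centroid, unit normal, boolean inlier mask).
    """
    centroid = events.mean(axis=0)
    # SVD of the centered points: the right singular vector with the
    # smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(events - centroid)
    normal = vt[-1]
    # Point-to-plane distances select the events supporting the line.
    dist = np.abs((events - centroid) @ normal)
    return centroid, normal, dist < thresh
```

Events whose space-time distance to the fitted plane exceeds the threshold are rejected, which discards noise bursts and events belonging to nearby structures.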

3D Line Reconstruction

In the triangulation and optimization stage, we introduce Grassmannian cost functions for line observations and inlier events.
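
One standard way to realize such a cost, sketched below under our own assumptions rather than as the paper's exact formulation: a 3D line is identified with the 2D subspace of R^4 spanned by the homogeneous coordinates of two of its points, i.e. a point on the Grassmannian Gr(2, 4), and the distance between two lines is the l2 norm of the principal angles between their subspaces.

```python
import numpy as np

def line_subspace(p, q):
    """A 3D line as a point on Gr(2, 4): an orthonormal 4x2 basis of
    the subspace spanned by the homogeneous coordinates of two points
    p, q on the line."""
    A = np.stack([np.append(p, 1.0), np.append(q, 1.0)], axis=1)
    Q, _ = np.linalg.qr(A)
    return Q

def grassmann_distance(Q1, Q2):
    """Geodesic distance on Gr(2, 4): the l2 norm of the principal
    angles, recovered from the SVD of Q1^T Q2."""
    s = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    return np.linalg.norm(np.arccos(np.clip(s, -1.0, 1.0)))
```

Because the subspace is invariant to which two points on the line are chosen, this distance depends only on the lines themselves, not on their (arbitrary) endpoint parameterization.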

Reprojection Error vs. Grassmannian Cost

The two columns illustrate the same error-calculation scenario consisting of 3D line estimates and a 2D observation. Different 3D line estimates project to the same 2D location, leading to identical reprojection errors, whereas our Grassmann-based cost evaluates geometric consistency directly in 3D and distinguishes between them.
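
This depth ambiguity can be reproduced numerically. In the sketch below (hypothetical numbers, a unit-focal pinhole camera at the origin), two 3D lines at different depths lie in a plane through the camera center, so they project to the same image line and any reprojection error cannot separate them, while the principal angles between their homogeneous subspaces yield a nonzero 3D distance.

```python
import numpy as np

# Two 3D lines, each given by two endpoints, at depths 1 and 2.
# Both lie in the plane y = z, which contains the camera center.
l1 = np.array([[0.0, 1.0, 1.0], [1.0, 1.0, 1.0]])
l2 = np.array([[0.0, 2.0, 2.0], [2.0, 2.0, 2.0]])

def project(pts):
    """Unit-focal pinhole projection: (x, y, z) -> (x/z, y/z)."""
    return pts[:, :2] / pts[:, 2:]

# Identical projections: identical reprojection error against any
# 2D line observation.
assert np.allclose(project(l1), project(l2))

def subspace(line):
    """Orthonormal basis of the line's homogeneous 2D subspace of R^4."""
    Q, _ = np.linalg.qr(np.hstack([line, np.ones((2, 1))]).T)
    return Q

# The principal angles between the two subspaces are not all zero,
# so a cost defined in 3D distinguishes the two estimates.
s = np.linalg.svd(subspace(l1).T @ subspace(l2), compute_uv=False)
dist = np.linalg.norm(np.arccos(np.clip(s, -1.0, 1.0)))
assert dist > 1e-3
```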

Results

Our method provides compact 3D line reconstruction results
that can be used for various cross-modal applications, such as registration and localization.

Reconstruction

Cross-modal Applications - Registration and Localization

BibTeX

@article{bae2026roel,
  title={RoEL: Robust Event-based 3D Line Reconstruction},
  author={Bae, Gwangtak and Shin, Jaeho and Kang, Seunggu and Kim, Junho and Kim, Ayoung and Kim, Young Min},
  journal={IEEE Transactions on Robotics},
  year={2026}
}