GATE3D: Generalized Attention-based Task-synergized Estimation in 3D

Eunsoo Im1, Changhyun Jee1, Jung Kwon Lee1
1Superb AI

{eslim, chjee, jklee}@superb-ai.com

Presented at CVPR 2025 Workshop on Computer Vision for Mixed Reality (CV4MR)

Abstract

The emerging trend in computer vision emphasizes developing universal models capable of simultaneously addressing multiple diverse tasks. Such universality typically requires joint training across multi-domain datasets to ensure effective generalization. However, monocular 3D object detection presents unique challenges in multi-domain training due to the scarcity of datasets annotated with accurate 3D ground-truth labels, especially beyond typical road-based autonomous driving contexts. As a result, current pretrained models often struggle to accurately detect pedestrians in non-road environments due to inherent dataset biases, and while image-based 2D object detectors generalize well across domains, comparable generalization in monocular 3D detection remains largely unexplored. To address this challenge, we propose GATE3D, a novel framework designed specifically for generalized monocular 3D object detection via weak supervision. GATE3D leverages pseudo-labels and effectively bridges domain gaps by employing consistency losses between 2D and 3D predictions. Remarkably, our model achieves competitive performance on the KITTI benchmark as well as on an indoor-office dataset we collected to evaluate the generalization capabilities of our framework. Our results demonstrate that GATE3D significantly accelerates learning from limited annotated data through effective pre-training strategies, highlighting substantial potential for broader impact in robotics, augmented reality, and virtual reality applications.

Qualitative Results

Unseen Images

Metric-scale fidelity in a novel environment

Estimated heights over 300 frames in the novel sequence
The qualitative snapshots and the quantitative height trace jointly demonstrate that GATE3D retains physically consistent scale when applied to an unseen indoor scene, confirming its reliability for downstream mixed-reality applications. For the same sequence, the ground-truth stature is 1.73 m, whereas GATE3D yields a mean of 1.718 m, a median of 1.718 m, and a variance of 7.06 × 10⁻³ m² (σ ≈ 5.18 cm) across the 300 frames.
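For reference, these per-sequence statistics are simple aggregates over the per-frame height estimates. The snippet below is a minimal sketch of their computation; the heights array here is synthetic stand-in data drawn around the reported values, not actual GATE3D predictions.

import numpy as np

# Stand-in for the per-frame height estimates (meters) over the 300-frame
# sequence; in practice these would be the vertical dimension of the
# predicted 3D pedestrian box in each frame.
rng = np.random.default_rng(0)
heights = rng.normal(loc=1.718, scale=0.0518, size=300)

print(f"mean   = {heights.mean():.3f} m")
print(f"median = {np.median(heights):.3f} m")
print(f"var    = {heights.var():.2e} m^2")
print(f"sigma  = {heights.std() * 100:.2f} cm")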

Overview


GATE3D architecture overview. The proposed framework incorporates a DETR-style 3D detection backbone enhanced with attention-based modules, and supports both fully and weakly supervised learning modes. For ground-truth-labeled samples, the model is trained via standard 3D detection losses. For weakly labeled data, pseudo-3D annotations are generated from 2D detection, monocular depth estimation, and orientation prediction. To mitigate label noise, we introduce a 2D–3D consistency loss that aligns projected 3D box dimensions with frozen 2D predictions. Notably, during weak supervision, the 2D detector remains fixed to preserve its reliability, while only the 3D decoder is optimized. This hybrid learning strategy improves robustness and generalization across diverse domains.
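To make the consistency term concrete, the following is a minimal PyTorch sketch of a 2D–3D consistency loss of the kind described above, not the exact loss used in GATE3D. The box parameterization, the corner-construction helper, and the pinhole projection through intrinsics K are illustrative assumptions.

import torch

def corners_from_box3d(center, dims, yaw):
    """Hypothetical helper: 8 corners of a yaw-rotated 3D box in camera coords.

    center: (N, 3) box centers (x, y, z); dims: (N, 3) as (w, h, l); yaw: (N,).
    Returns (N, 8, 3).
    """
    w, h, l = dims[:, 0], dims[:, 1], dims[:, 2]
    # Corner template: x along length, y along height (down), z along width.
    x = torch.stack([l, l, -l, -l, l, l, -l, -l], dim=1) / 2
    y = torch.stack([h, h, h, h, -h, -h, -h, -h], dim=1) / 2
    z = torch.stack([w, -w, -w, w, w, -w, -w, w], dim=1) / 2
    corners = torch.stack([x, y, z], dim=2)                    # (N, 8, 3)
    cos, sin = yaw.cos(), yaw.sin()
    zeros, ones = torch.zeros_like(cos), torch.ones_like(cos)
    # Rotation about the vertical (y) axis, one matrix per box.
    rot = torch.stack([
        torch.stack([cos, zeros, sin], dim=1),
        torch.stack([zeros, ones, zeros], dim=1),
        torch.stack([-sin, zeros, cos], dim=1),
    ], dim=1)                                                  # (N, 3, 3)
    return corners @ rot.transpose(1, 2) + center[:, None, :]

def consistency_loss_2d3d(center, dims, yaw, boxes2d_frozen, K):
    """L1 penalty between the image projection of the predicted 3D box and
    the frozen 2D detector's box (x1, y1, x2, y2)."""
    corners = corners_from_box3d(center, dims, yaw)            # (N, 8, 3)
    uvw = corners @ K.T                                        # pinhole projection
    uv = uvw[..., :2] / uvw[..., 2:].clamp(min=1e-6)           # (N, 8, 2)
    # Axis-aligned 2D box enclosing the projected corners.
    proj = torch.cat([uv.min(dim=1).values, uv.max(dim=1).values], dim=1)
    return torch.nn.functional.l1_loss(proj, boxes2d_frozen.detach())

Because boxes2d_frozen comes from the fixed 2D detector and is detached, gradients flow only into the 3D decoder's center, dimension, and yaw predictions, consistent with freezing the 2D branch during weak supervision as described above.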

Experiments

KITTI Benchmark

Qualitative results: Scene A, Scene B.

Office Dataset

Qualitative results: Scene A.

BibTeX

@article{im2025gate3d,
  title={GATE3D: Generalized Attention-based Task-synergized Estimation in 3D},
  author={Im, Eunsoo and Lee, Jung Kwon and Jee, Changhyun},
  journal={arXiv preprint arXiv:2504.11014},
  year={2025}
}