AI Illuminated

D3RoMa: Disparity Diffusion-based Depth Sensing for Material-Agnostic Robotic Manipulation



[00:00] Intro

[00:18] Current limitations in depth-sensing technology

[00:56] D3RoMa's diffusion model approach to depth estimation

[01:47] Integration of geometric constraints in the model

[02:27] HiSS: New dataset for transparent/specular objects

[03:18] Benchmark results showing major accuracy improvements

[04:02] Current limitations and future development areas

[05:34] Technical details of HiSS dataset creation

[06:30] Real-world testing with robotic systems

[07:15] Why diffusion models outperform GANs

[08:54] Implementation of consistency loss functions

[12:00] Solving simulation-to-real-world transfer

[13:25] Potential expansion to single-camera systems


Authors: Songlin Wei, Haoran Geng, Jiayi Chen, Congyue Deng, Wenbo Cui, Chengyang Zhao, Xiaomeng Fang, Leonidas Guibas, He Wang


Affiliations: Peking University, UC Berkeley, Stanford, Galbot, University of Chinese Academy of Sciences, Beijing Academy of Artificial Intelligence


Abstract: Depth sensing is an important problem for 3D vision-based robotics. Yet, a real-world active stereo or ToF depth camera often produces noisy and incomplete depth, which bottlenecks robot performance. In this work, we propose D3RoMa, a learning-based depth estimation framework on stereo image pairs that predicts clean and accurate depth in diverse indoor scenes, even in the most challenging scenarios with translucent or specular surfaces where classical depth sensing completely fails. Key to our method is that we unify depth estimation and restoration into an image-to-image translation problem by predicting the disparity map with a denoising diffusion probabilistic model. At inference time, we further incorporate a left-right consistency constraint as classifier guidance to the diffusion process. Our framework combines recently advanced learning-based approaches with geometric constraints from traditional stereo vision. For model training, we create a large scene-level synthetic dataset with diverse transparent and specular objects to complement existing tabletop datasets. The trained model can be directly applied to real-world in-the-wild scenes and achieves state-of-the-art performance on multiple public depth estimation benchmarks. Further experiments in real environments show that accurate depth prediction significantly improves robotic manipulation in various scenarios.
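The classifier-guidance idea in the abstract can be illustrated in miniature: at each reverse diffusion step, the denoiser's predicted disparity mean is nudged down the gradient of a left-right photometric consistency loss (warp the right image by the disparity and compare it to the left image). The sketch below is a toy 1-D illustration of that guidance step, not the paper's implementation; the function names, the numerical gradient, and the guidance scale are all assumptions made for clarity.

```python
import numpy as np

def warp_right_to_left(right, disp):
    # Sample the right row at x - disp with linear interpolation,
    # producing the left-view prediction implied by the disparity.
    x = np.clip(np.arange(right.shape[0]) - disp, 0, right.shape[0] - 1)
    i0 = np.floor(x).astype(int)
    i1 = np.minimum(i0 + 1, right.shape[0] - 1)
    w = x - i0
    return (1 - w) * right[i0] + w * right[i1]

def consistency_loss(disp, left, right):
    # Left-right photometric consistency: warped right vs. observed left.
    return np.mean((warp_right_to_left(right, disp) - left) ** 2)

def numerical_grad(disp, left, right, eps=1e-4):
    # Central-difference gradient of the loss w.r.t. each disparity value
    # (a real system would use autodiff; this keeps the toy dependency-free).
    g = np.zeros_like(disp)
    for i in range(disp.shape[0]):
        hi, lo = disp.copy(), disp.copy()
        hi[i] += eps
        lo[i] -= eps
        g[i] = (consistency_loss(hi, left, right)
                - consistency_loss(lo, left, right)) / (2 * eps)
    return g

def guide_mean(mean, sigma, left, right, scale=50.0):
    # Classifier-guidance-style update: shift the denoiser's predicted mean
    # down the consistency-loss gradient, scaled by the noise level.
    return mean - scale * sigma**2 * numerical_grad(mean, left, right)

# Toy demo: a smooth 1-D row with a constant true disparity of 2.0 pixels.
right = np.sin(np.arange(32) / 4.0)
left = warp_right_to_left(right, np.full(32, 2.0))
disp_est = np.full(32, 1.5)                  # denoiser's current mean estimate
disp_guided = guide_mean(disp_est, sigma=1.0, left=left, right=right)
```

One guided step moves the disparity estimate toward the geometrically consistent solution, which is the role the left-right constraint plays during sampling in the paper's framework.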


Link: https://arxiv.org/abs/2409.14365


By The AI Illuminators