Abstract
We study the 3D object understanding task for manipulating everyday objects with different material properties (diffuse, specular, transparent, and mixed). Existing monocular and RGB-D methods suffer from scale ambiguity due to missing or imprecise depth measurements. We present CODERS, a one-stage approach for Category-level Object Detection, pose Estimation and Reconstruction from Stereo images. The base of our pipeline is an implicit stereo matching module that combines stereo image features with 3D position information. Connecting this module to the subsequent transformer-decoder architecture enables end-to-end learning of the multiple tasks required for robot manipulation. Our approach significantly outperforms all competing methods on the public TOD dataset. Furthermore, trained only on simulated data, CODERS generalizes well to unseen category-level object instances in real-world robot manipulation experiments.
Architecture
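The abstract describes the pipeline only at a high level: an implicit stereo matching module fuses left/right image features with 3D position information, and a transformer decoder then predicts detection, pose, and reconstruction outputs in a single stage. Below is a minimal PyTorch-style sketch of such a design; the class names, the stand-in backbone, feature dimensions, query count, and output parameterizations (quaternion + translation pose, latent shape code) are our assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a CODERS-style pipeline (not the authors' code).
import torch
import torch.nn as nn


class ImplicitStereoMatching(nn.Module):
    """Fuses left/right image features with 3D position information (assumed design)."""

    def __init__(self, feat_dim=256):
        super().__init__()
        # Embed back-projected 3D positions so matching is position-aware (assumption).
        self.pos_embed = nn.Linear(3, feat_dim)
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * feat_dim, feat_dim, kernel_size=1),
            nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, kernel_size=1),
        )

    def forward(self, feat_left, feat_right, pos_xyz):
        # feat_left, feat_right: (B, C, H, W) feature maps from the two views.
        # pos_xyz: (B, H, W, 3) candidate 3D positions at feature-map resolution.
        pos = self.pos_embed(pos_xyz).permute(0, 3, 1, 2)            # (B, C, H, W)
        fused = self.fuse(torch.cat([feat_left, feat_right], dim=1))  # implicit matching
        return fused + pos                                            # position-aware feature volume


class CODERSSketch(nn.Module):
    """One-stage detection / pose / reconstruction heads on a transformer decoder."""

    def __init__(self, feat_dim=256, num_queries=50, num_classes=10, shape_dim=64):
        super().__init__()
        # Single conv layer as a stand-in for a real backbone (assumption).
        self.backbone = nn.Conv2d(3, feat_dim, kernel_size=7, stride=4, padding=3)
        self.matcher = ImplicitStereoMatching(feat_dim)
        layer = nn.TransformerDecoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.queries = nn.Embedding(num_queries, feat_dim)
        # Per-query heads: object class, pose (quaternion + translation), latent shape code.
        self.cls_head = nn.Linear(feat_dim, num_classes)
        self.pose_head = nn.Linear(feat_dim, 7)
        self.shape_head = nn.Linear(feat_dim, shape_dim)

    def forward(self, img_left, img_right, pos_xyz):
        feat_l, feat_r = self.backbone(img_left), self.backbone(img_right)
        memory = self.matcher(feat_l, feat_r, pos_xyz)                # (B, C, H, W)
        memory = memory.flatten(2).transpose(1, 2)                    # (B, H*W, C)
        queries = self.queries.weight.unsqueeze(0).expand(img_left.size(0), -1, -1)
        hs = self.decoder(queries, memory)                            # (B, num_queries, C)
        return self.cls_head(hs), self.pose_head(hs), self.shape_head(hs)


# Example forward pass on random stereo inputs (shapes chosen for illustration).
model = CODERSSketch()
left = torch.randn(1, 3, 256, 256)
right = torch.randn(1, 3, 256, 256)
pos = torch.randn(1, 64, 64, 3)  # matches the stride-4 feature resolution
cls_logits, poses, shape_codes = model(left, right, pos)
```

Fusing the two views through 1x1 convolutions together with an explicit 3D position embedding, instead of building a disparity cost volume, is one way to realize "implicit" stereo matching; the actual module in CODERS may differ in detail.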
Comparisons with the State-of-the-art
We present qualitative comparisons with the following state-of-the-art models:
- StereoPose: the best prior method on the TOD dataset.
Real-World Tests
CODERS handles everyday objects with a variety of surface properties.
Robot Manipulation
CODERS provides reliable detection, pose, and reconstruction results for robot manipulation.