The team of researchers from NYU and Meta aimed to address the challenge of robotic manipulation learning in domestic environments by introducing DobbE, a highly adaptable system capable of learning and adapting from user demonstrations. The experiments demonstrated the system’s efficiency while highlighting the unique challenges in real-world settings.
The study acknowledges recent strides in amassing extensive robotics datasets, emphasizing the uniqueness of its own dataset, which centers on household, first-person robotic interactions. Collected with an iPhone, the dataset provides high-quality action and depth information. Compared with existing manipulation-focused representation models, the authors highlight the value of in-domain pre-training for learning generalizable representations. They also suggest augmenting their dataset with off-domain information from non-robot household videos, acknowledging the potential of such enhancements for future work.
The introduction addresses the challenges of building a comprehensive home assistant, advocating a shift from controlled lab environments to real homes. Efficiency, safety, and user comfort are stressed, with DobbE introduced as a framework embodying these principles: it uses large-scale data and modern machine learning for efficiency, human demonstrations for safety, and an ergonomic demonstration-collection tool for user comfort. DobbE integrates hardware, models, and algorithms around the Hello Robot Stretch platform. The paper also discusses the Homes of New York dataset, with diverse demonstrations from 22 homes, and self-supervised learning techniques for training vision models.
The research employs behavior cloning, a subset of imitation learning, to train DobbE to mimic human or expert-agent behaviors. A purpose-built hardware setup enables seamless demonstration collection and transfer to the robot embodiment, drawing on diverse household data, including iPhone odometry. Foundational models are pre-trained on this data, and the trained models are then tested in real homes. Ablation experiments assess the visual representation, the number of demonstrations required, depth perception, demonstrator expertise, and whether a parametric policy is needed at all.
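At its core, the behavior-cloning recipe described above is supervised regression from observations to actions. The sketch below illustrates this with NumPy on synthetic data; the feature dimension, action dimension, and network sizes are illustrative assumptions, not DobbE's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for demonstration data: in practice each "observation"
# would be a visual feature from a pretrained encoder and each "action" a
# gripper command. All shapes here are assumptions for illustration.
obs = rng.normal(size=(256, 32))           # 256 demo frames, 32-dim features
actions = obs @ rng.normal(size=(32, 6))   # 6-dim target actions

# Two-layer network (tanh hidden layer + linear head), echoing the
# "two-layer neural network for action prediction" described in the paper.
W1 = rng.normal(scale=0.1, size=(32, 64))
W2 = rng.normal(scale=0.1, size=(64, 6))
lr = 1e-2

def forward(x):
    h = np.tanh(x @ W1)
    return h, h @ W2

def mse():
    return float(np.mean((forward(obs)[1] - actions) ** 2))

mse_before = mse()
for _ in range(500):
    h, pred = forward(obs)
    err = pred - actions                   # behavior cloning = fit the demos
    gW2 = (h.T @ err) / len(obs)           # gradient of MSE w.r.t. W2
    gh = (err @ W2.T) * (1 - h ** 2)       # backprop through tanh
    gW1 = (obs.T @ gh) / len(obs)
    W2 -= lr * gW2
    W1 -= lr * gW1
mse_after = mse()
```

Gradient descent on the mean-squared error between predicted and demonstrated actions drives the loss down; the real system replaces the synthetic features with learned visual representations from the Home Pretrained Representations model.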
DobbE demonstrated an 81% success rate in unfamiliar home environments after only five minutes of demonstrations and 15 minutes of adapting the Home Pretrained Representations model. Over 30 days in 10 different homes, DobbE learned 102 of 109 tasks, showing the effectiveness of simple methods: behavior cloning with a ResNet model for visual representation and a two-layer neural network for action prediction. Task completion time and difficulty were analyzed through regression analysis, while ablation experiments evaluated different system components, including the visual representation and demonstrator expertise.
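As a rough illustration of the kind of regression analysis mentioned above, the snippet below fits an ordinary-least-squares line of completion time against a difficulty score. The data and coefficients are synthetic, invented purely to show the shape of the analysis; they are not figures from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fabricated example data: 40 tasks with difficulty scores in [1, 5] and
# completion times that grow roughly linearly with difficulty plus noise.
difficulty = rng.uniform(1, 5, size=40)
time_s = 20 + 8 * difficulty + rng.normal(scale=2, size=40)

# Ordinary least squares: time_s ≈ intercept + slope * difficulty.
X = np.column_stack([np.ones_like(difficulty), difficulty])
coef, *_ = np.linalg.lstsq(X, time_s, rcond=None)
intercept, slope = coef
```

A positive fitted slope would indicate, as one expects, that harder tasks take longer to complete; in the paper this style of analysis quantifies how task difficulty relates to execution time.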
In conclusion, DobbE is a cost-effective and versatile robotic manipulation system, tested across diverse home environments with an impressive 81% success rate. The DobbE team has open-sourced the system's software stack, models, data, and hardware designs to advance home robot research and promote the widespread adoption of robot butlers. DobbE's success can be attributed to powerful yet simple methods, including behavior cloning and a two-layer neural network for action prediction. The experiments also yielded insights into real-world challenges, such as lighting conditions and shadows affecting task execution.
Check out the Paper and Project. All credit for this research goes to the researchers of this project.
Sana Hassan, a consulting intern at Marktechpost and dual-degree student at IIT Madras, is passionate about applying technology and AI to address real-world challenges. With a keen interest in solving practical problems, he brings a fresh perspective to the intersection of AI and real-life solutions.