In recent discussions about the future of artificial intelligence, Nvidia’s CEO Jensen Huang heralded the arrival of “physical AI,” a transition in which machines move beyond language processing to performing physical tasks. This shift is exemplified by humanoid robots capable of simple household chores or assembly-line work. What often remains obscured from public view, however, is the significant human labor required to train and operate these robots, a gap that fuels misconceptions about their actual capabilities.
To train robots, human workers frequently demonstrate tasks, generating the data machines use to learn. For instance, a worker in Shanghai wore a VR headset and an exoskeleton for an entire week, repeatedly opening and closing a microwave door to produce training data. Similarly, the robotics firm Figure is partnering with Brookfield to gather extensive real-world data from varied home environments to improve its robots. This model raises ethical questions about labor dynamics: relying on human demonstrations to train robots risks reducing workers to data collectors.
Moreover, tele-operation complicates the vision of fully autonomous robots. Startups like 1X are developing humanoid robots that rely on remote operators for complex tasks. While this arrangement may not pose immediate risks, since companies obtain customer consent, the prospect of a human controlling a robot in someone’s home blurs the lines of privacy and autonomy. It also mirrors existing gig-economy trends, where labor is sourced from low-cost regions, and raises concerns about the future of work as automation expands. As the industry evolves, transparency about how these technologies function, and about the human effort behind them, becomes crucial to avoid inflated expectations and to ensure ethical practices in AI integration.
Source: The human work behind humanoid robots is being hidden via MIT Technology Review
