In a notable intersection of technology and the gig economy, workers like Zeus, a medical student in Nigeria, are helping train humanoid robots from their homes. By recording themselves performing everyday chores on their smartphones, they supply valuable training data to companies such as Micro, which sells it to robotics firms. With thousands of gig workers participating across countries including India, Nigeria, and Argentina, the approach compensates participants well but also raises serious ethical questions about privacy and informed consent. As demand for effective training data grows, the role of these remote data recorders becomes ever more pivotal.
In tandem with these developments, the field of artificial intelligence is undergoing a critical reassessment of its benchmarks. AI has historically been evaluated on its ability to outperform humans at isolated tasks, but that method does not reflect real-world applications, where AI operates within intricate, multi-faceted environments. Experts argue for new benchmarks that evaluate AI performance over extended periods and in collaborative settings. One proposed approach, the Human AI Context-Specific Evaluation, aims to address this gap by assessing how well AI integrates and functions within human workflows. As the technology landscape continues to evolve, aligning AI assessment metrics with real-world scenarios becomes increasingly vital to understanding its capabilities and risks.
Source: The Download: gig workers training humanoids, and better AI benchmarks via MIT Technology Review
