The question of how technology firms ascertain the age of their users has become increasingly pressing, particularly amid heightened concerns about children's safety when interacting with AI chatbots. Historically, many companies simply asked users for a birth date, a method that is trivially easy to falsify. The landscape is now shifting, however, as various stakeholders push for more robust age verification measures, sparking a contentious debate among parents, lawmakers, and child advocacy groups across the United States.
On one side of the debate, Republican lawmakers have introduced legislation in several states mandating age verification for platforms hosting adult content. Critics argue that this approach could inadvertently suppress valuable resources, such as sex education materials, under the banner of protecting minors. Conversely, states like California are pursuing regulations aimed specifically at AI firms, compelling them to implement age verification to safeguard minors who interact with chatbots. Amid these discussions, President Trump is advocating for a unified national framework for AI regulation rather than a fragmented state-by-state approach, further complicating the legislative landscape.
In a recent blog post, OpenAI announced plans to introduce an automatic age-prediction model for its ChatGPT platform. The system will draw on various behavioral indicators, including the time of day a person uses the service, to estimate each user's age and apply content filters to anyone identified as under 18. While such measures may help improve child safety, concerns remain about the accuracy of these systems, which can misclassify users. To address misclassification, adults incorrectly flagged as minors can verify their age by submitting a selfie or government-issued ID through a third-party service, Persona. Critics, however, point to the privacy risks of amassing biometric data and government IDs, warning about the consequences of a potential data breach.
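To illustrate where age prediction sits in such a pipeline, here is a minimal, purely hypothetical sketch in Python. The signal names, the hand-tuned scoring rule, and the `estimate_age` and `route_session` functions are all assumptions for illustration; OpenAI has not published its model's features, architecture, or thresholds.

```python
from dataclasses import dataclass

# Hypothetical behavioral signals an age-prediction model might consume.
# These feature names are illustrative, not OpenAI's actual feature set.
@dataclass
class SessionSignals:
    hour_of_day: int          # e.g., heavy late-night weekday use may skew younger
    account_age_days: int
    avg_message_length: float

def estimate_age(signals: SessionSignals) -> float:
    """Toy stand-in for a trained classifier: returns a point estimate of age.

    A real system would use a learned model; this linear rule is only a
    placeholder showing where the prediction step sits in the pipeline.
    """
    score = 22.0
    if 0 <= signals.hour_of_day < 6:                   # late-night activity
        score -= 4.0
    score += min(signals.account_age_days / 365, 5)    # older accounts skew older
    score += (signals.avg_message_length - 40) * 0.05
    return score

def route_session(signals: SessionSignals, verified_adult: bool) -> str:
    """Apply under-18 content filters unless the user has verified their age."""
    if verified_adult:
        return "default_experience"
    if estimate_age(signals) < 18:
        # Misclassified adults would appeal via an ID/selfie check (e.g., Persona).
        return "minor_filters_applied"
    return "default_experience"

# A 2 a.m. session from a new account with short messages lands in the filtered tier.
print(route_session(SessionSignals(hour_of_day=2, account_age_days=30,
                                   avg_message_length=25.0), verified_adult=False))
```

Note the asymmetry this design implies: a prediction error in one direction merely inconveniences an adult, who can appeal with an ID, while an error in the other direction exposes a minor to unfiltered content, which is why such systems tend to default toward the restrictive path.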
Experts like Sameer Hinduja recommend a more privacy-conscious alternative: device-level verification, in which parents set a child's age when configuring the device (a minimal sketch of this idea appears below). Because the age signal stays on the device, this approach reduces the need to store sensitive documents with external services. Meanwhile, Apple CEO Tim Cook has pushed back against proposals that would require app stores to shoulder the burden of age verification, arguing they would expose the company to excessive liability. The Federal Trade Commission (FTC) is also set to host a workshop exploring the nuances of these regulations, featuring representatives from major tech companies. As the debate evolves, it is clear that the intersection of privacy, safety, and regulation will shape how AI chatbots engage with younger audiences.
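To make the device-level approach concrete, the following is a minimal, purely hypothetical sketch of how an operating system might expose a parent-set age range to apps. The `AgeRange` buckets, `DeviceProfile` class, and `configure_chatbot` function are illustrative assumptions; this does not describe any platform's actual API.

```python
from enum import Enum

class AgeRange(Enum):
    """Coarse buckets a device OS might expose instead of a birth date."""
    UNDER_13 = "under_13"
    TEEN_13_17 = "13_17"
    ADULT_18_PLUS = "18_plus"
    UNSET = "unset"

class DeviceProfile:
    """Hypothetical OS-level setting a parent configures during device setup.

    Apps query only the coarse range; no ID, selfie, or birth date ever
    leaves the device, which is the privacy benefit Hinduja's approach aims for.
    """
    def __init__(self, age_range: AgeRange = AgeRange.UNSET):
        self._age_range = age_range

    def declared_age_range(self) -> AgeRange:
        return self._age_range

def configure_chatbot(device: DeviceProfile) -> dict:
    """App-side logic: default to the restrictive mode when the signal is unset."""
    if device.declared_age_range() == AgeRange.ADULT_18_PLUS:
        return {"content_filters": "standard"}
    return {"content_filters": "minor_safe"}  # covers teens, children, and UNSET

# A device configured for a teen gets the filtered experience automatically.
print(configure_chatbot(DeviceProfile(AgeRange.TEEN_13_17)))
```

The design choice worth noting is that apps receive only a coarse range rather than a birth date, so a breach of any single app's servers would leak no identity documents, in contrast to centralized ID-upload schemes.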
Source: Why chatbots are starting to check your age via MIT Technology Review
