As governments tighten regulations on online platforms—from social media to adult sites—business is thriving for companies providing AI-based age verification through selfies. This technology offers rapid and precise enforcement of laws such as Australia’s social media ban for minors under 16, which took effect December 10.
Verifying your age with these systems is straightforward: take a front-facing selfie with your phone or computer camera, and an automated process estimates your age in under a minute. On Roblox, for example, a pop-up might display, “We estimate your age is 18 or older.”
At Yoti’s spacious London headquarters, mannequin heads—some with wigs or masks—are used to test the AI’s effectiveness. The system is not deceived by these props, as CEO Robin Tombs explains: “We can’t always confirm if an image is of a real face, but over time, our algorithm has become highly skilled at analyzing facial patterns to estimate age, whether it’s 17 or 28.”
Today, Yoti conducts roughly one million age checks a day for clients including Meta, TikTok, Sony, and Pinterest. The company turned a profit this year on about 20 million pounds ($26 million) in annual revenue and anticipates a 50% increase in sales in the current fiscal year. Other firms, such as Persona, Kids Web Services, K-id, and VerifyMy, are also thriving; the Age Verification Providers Association (AVPA) now counts 34 member companies in all.
The industry’s potential is vast: an independent Australian body projected that by 2031-36 the sector could generate nearly $10 billion a year across the 37 OECD countries, though no more recent forecasts have been published.
However, experts such as Iain Corby of the AVPA caution against definitive predictions, given how quickly both regulation and the technology are evolving. AI age-verification tools have also raised concerns about bias and intrusiveness, notes cybersecurity professor Olivier Blazy, who emphasizes that how much data is shared with third-party providers largely determines the impact on user privacy.
While the ecosystem currently favors AI solutions, Blazy expects a shift toward stronger privacy protections in the coming years. He also points out that these systems have weaknesses: makeup, for instance, can make someone appear older or younger, and biases persist, particularly when estimating the ages of non-white faces. An Australian report highlighted the ongoing underrepresentation of Indigenous populations in training data, though companies are beginning to address this.
Robin Tombs acknowledges that some demographics, such as certain age groups or skin tones, are underrepresented in training data. Nonetheless, he maintains that the system can detect false accessories or makeup, and says all personal data are discarded immediately after analysis to safeguard privacy. Platforms using these tools typically set verification thresholds, such as requiring users to be over 21 to access certain content, and users whose estimated age falls into a gray area may need to provide official ID, such as a driver’s license, for manual verification.
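In effect, platforms turn a soft age estimate into a hard decision using a threshold policy with a gray zone. The following is a minimal sketch of that logic in Python; the function names, the three-year buffer, and the decision labels are illustrative assumptions, not any provider’s actual implementation:

    from dataclasses import dataclass

    @dataclass
    class VerificationResult:
        decision: str          # "allow", "deny", or "manual_id_check"
        estimated_age: float

    def check_age(estimated_age: float, threshold: int = 21,
                  buffer: float = 3.0) -> VerificationResult:
        """Hypothetical gray-zone policy: clear passes and clear failures
        are decided automatically; estimates close to the threshold fall
        back to a manual ID check (e.g., a driver's license)."""
        if estimated_age >= threshold + buffer:
            return VerificationResult("allow", estimated_age)
        if estimated_age < threshold - buffer:
            return VerificationResult("deny", estimated_age)
        # Within +/- buffer years of the threshold: too uncertain to decide.
        return VerificationResult("manual_id_check", estimated_age)

    # A user estimated at 22 against a 21+ threshold lands in the gray zone,
    # while an estimate of 27.5 clears it automatically.
    print(check_age(22.0))   # -> manual_id_check
    print(check_age(27.5))   # -> allow

The buffer reflects the tolerance regulators and platforms accept for estimation error: widening it sends more users to ID checks, narrowing it risks misclassifying people near the cutoff.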




