In Brief:
- Lawmakers across the country are increasingly concerned about the harmful effects of social media on kids. As a result, in 2026 alone, 40 states and Puerto Rico have introduced 300 bills and resolutions addressing minors’ use of these platforms.
- Many of these bills would require age verification for users, a provision that worries privacy advocates, who say it would eliminate privacy on the Internet.
- Companies that provide “age assurance” solutions are exploring ways to make these processes more privacy protective and accessible to people who need more options for how to prove their age.
Policymakers are increasingly worried about the damaging effects of social media use on kids’ physical and mental health.
“These platforms are causing anxiety, depression, addiction and lowering self-esteem,” Massachusetts Gov. Maura Healey said in a press release. “The fact is these social media platforms have been designed to get kids addicted.”
A 2023 Surgeon General report said that kids who use social media as heavily as most 8th and 10th graders have “double” the risk of experiencing symptoms of depression, anxiety and other poor mental health conditions. And the report said that “compulsive” use of the platforms can lead to sleep and attention problems.
These kinds of findings have generated widespread concern in statehouses across the country, and across the globe. At least 19 states have passed laws requiring social media platforms to treat minors under a certain age differently from other users. Countries like Denmark and Spain have proposed similar ideas, and in December 2025, Australia banned kids under 16 from the platforms.
In the U.S., states vary in their approaches and have passed a variety of laws (though not all are currently enforced). Some take aim at platforms’ addictive qualities.
In 2024, New York, for example, barred platforms from sending notifications to kids during the night and from providing minor users with “addictive” social media feeds; California passed a similar law that same year. Virginia passed a law in 2025 that requires platforms to restrict users under 16 to just one hour per day on certain platforms (though earlier this year, the trade association NetChoice won a preliminary injunction barring the law’s enforcement). Many, but not all, states have enacted laws that require parental consent for kids to create accounts.
In Massachusetts, a new House bill would block kids 13 and under from accessing social media and require parental consent for 14- and 15-year-olds on these platforms. The governor has additionally proposed that platforms be required to turn off addictive features on minors’ accounts; only 16-year-olds, or kids’ parents, would be able to turn the features back on.
For either plan to work, social media platforms will need to be able to verify users’ ages, adults included. That would potentially require all users to verify their identities when creating their accounts.
“The No. 1 thing for people to understand with this type of legislation is, while the headline is about kids, the actual implementation is about you,” says Evan Greer, director of Fight for the Future, a digital rights advocacy group.
Identifying Kids on the Internet
There are several options for assessing an online user’s age. The most obvious, perhaps, is checking the user’s official government-issued ID.
Children and teens, however, often don’t have such documents. Neither do many adults, including domestic violence survivors, people experiencing homelessness and undocumented immigrants.
Platforms can also use age estimation, in which a system analyzes biometrics like a person’s voiceprint or facial scan to guess their approximate age. Face scans can usually estimate a person’s age to within about 1.5 years, says Iain Corby, executive director of the Age Verification Providers Association, an international trade organization representing 34 age assurance solutions providers.
That’s not precise enough to distinguish between a 15-year-old who needs parental consent to open an account and a 16-year-old who doesn’t. But it could do pretty well at distinguishing minors from adults. After that, anyone who’s close to the threshold age would need to use another method to prove their age.
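The buffer approach described above can be sketched in a few lines. This is a toy illustration, not any provider’s actual system: the function name, threshold and margin are assumptions, with the margin loosely based on the roughly 1.5-year accuracy figure Corby cites.

```python
# Toy sketch of a "buffered" face-scan age check: trust the estimate only
# when it falls well clear of the age limit, and route borderline users to
# a stronger verification method (such as an ID check).
# All names and numbers here are illustrative assumptions.

AGE_LIMIT = 16    # hypothetical legal threshold
MARGIN = 1.5      # roughly the scan's typical error, per the figure above

def route_age_check(estimated_age: float) -> str:
    """Decide what to do with a face-scan age estimate."""
    if estimated_age >= AGE_LIMIT + MARGIN:
        return "allow"       # clearly over the limit, even allowing for error
    if estimated_age <= AGE_LIMIT - MARGIN:
        return "deny"        # clearly under the limit
    return "fallback"        # too close to call: require another method

print(route_age_check(25.0))  # allow
print(route_age_check(12.0))  # deny
print(route_age_check(16.4))  # fallback
```

The point of the margin is that the scan never makes the final call near the threshold; it only sorts users into clear cases and borderline ones.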
Face scans are more error-prone for certain populations: trans men are often read as younger than they are, and Black kids as older, Greer says. Such systems can also struggle to recognize people with atypical facial features, like birthmarks.
Another method, “inference,” approximates a user’s age based on how they’ve behaved online historically — for example, checking a person’s email or phone number in data brokers’ records to see if they’re associated with car rentals or searches for mortgage rates.
No matter the age check method, however, people will always develop ways to trick the system, and age assurance providers must continually work to keep up, Corby says.
Data Security and Privacy
More than 400 security and privacy scientists and researchers signed a March 2026 letter urging a moratorium on requiring age checks “until the scientific consensus settles on the benefits and harms that age-assurance technologies can bring, and on the technical feasibility of such a deployment.”
Privacy and security advocates are alarmed about trusting sensitive biometric details and government documents to social media platforms or to their third-party partners.
Roughly 70,000 people’s government IDs were exposed when a third-party age verification vendor for the chat platform Discord was hacked; the vendor had stored the ID information so it could respond to users who appealed age assessments, Discord said.
Other privacy experts worry that these bills could potentially chill free speech. Users who post anonymously might be unwilling to use the platforms if they’re required to provide a government-issued ID. Undocumented immigrants may also be unwilling to create accounts for themselves or their children.
According to Corby, companies should use identifying data only to verify someone’s age and then immediately delete it. But this doesn’t always happen. The Massachusetts House bill, for example, calls for parents to be able to request the information their kids submitted to prove their age. And Corby says some companies believe they should keep identifying information, so that if they’re subpoenaed or sued, they can show it to prove they did an age check.
“This is horrendous behavior,” Corby says. “The only non-hackable database is no database at all. Do not keep data.”
Companies should instead be allowed to prove their compliance by showing that they did due diligence in selecting and vetting their age verification provider and then properly applied the age verification process, he says.
Some methods of checking ages are more privacy-protective than others. For example, users could instead be directed to have their smartphones process the age checks; then, much as with mobile IDs, the phone would simply tell the website whether the user is above the age limit, Corby says. The website learns nothing else about the user, and the device doesn’t see what the user does online.
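The device-side approach Corby describes can be illustrated with a short sketch. This is a simplified model, not a real protocol or API: the class and method names are hypothetical, and a real deployment would add a cryptographic attestation so the website could trust the answer.

```python
from datetime import date

# Toy sketch of a device-side age check: the phone holds a verified date of
# birth and answers only the yes/no threshold question. The website never
# sees the birthdate itself. All names here are hypothetical.

class PhoneWallet:
    """Simulates a smartphone storing a verified date of birth."""

    def __init__(self, birthdate: date):
        self._birthdate = birthdate  # stays on the device

    def is_over(self, age_limit: int, today: date) -> bool:
        """Reveal only whether the holder has reached age_limit."""
        years = today.year - self._birthdate.year
        # Subtract a year if this year's birthday hasn't happened yet.
        if (today.month, today.day) < (self._birthdate.month, self._birthdate.day):
            years -= 1
        return years >= age_limit

# The website receives only a boolean, nothing else about the user.
wallet = PhoneWallet(date(2010, 6, 15))
print(wallet.is_over(16, today=date(2026, 3, 1)))  # False: not yet 16
```

A real system would also need the answer to be signed by a trusted issuer; otherwise the website has no reason to believe the device. That omission is what makes this a sketch rather than an implementation.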
Of course, not everyone has a smartphone. And, Corby acknowledges, some fear this phone-based approach just gives smartphone operating system providers like Apple and Google even more information about people.
Some social media policies also require kids to get parental consent, but verifying that online is even more complicated than an age check. There’s not currently a good way to tell both that an adult is who they claim and that they really have a guardian relationship with the child, says Corby, whose organization is part of a working group trying to come up with a standard for doing this. Getting it wrong risks groomers exploiting kids by posing as their parent or guardian, he worries.
If Not a Ban, Then What?
Critics of age restriction policies think there are other ways to keep kids safe, without requiring everyone to prove their age.
Many child safety policies require social media platforms to turn off addictive features for minors, but Greer recommends doing this for all users.
“If autoplay and infinite scroll are addictive and harmful, they're addictive and harmful for our grandparents, just as much as they are for our teenagers,” she says. Greer also sees promise in passing data privacy laws that might make the algorithms supplying content feeds less personalized and thus less addictive.
High school junior Max Nash says kids would benefit from education about how to combat phone addiction and how to critically evaluate information they see posted online, but shouldn’t be kept off social media, which can be a vital lifeline for some groups. The 2023 Surgeon General report found that while heavy social media use can have harmful mental health effects on kids, the platforms also can provide mental health benefits for LGBTQ youth and other marginalized groups who find support and connection online. “Taking away social media from youth will lead to increased suicide rates in trans kids, undoubtedly,” Nash says.