- Michigan children would face limits on using AI-driven chatbots and addictive social media under new legislation
- The bills would also aim to give parents more control over their kids’ digital privacy on social media
- The ACLU has concerns the age-based blocks might limit free speech
LANSING — Michigan children would be blocked from artificial intelligence-driven chatbots under newly proposed legislation aimed at reining in the ill effects of unfettered screen time and addictive social media platforms.
“Big tech has chosen to keep parents on the sidelines as they push unethical regulatory practices to make a profit at the expense of our children,” state Sen. Kevin Hertel, D-St. Clair Shores, said Wednesday in a press conference announcing the legislative package.
“I refuse to raise my children in a world” where that continues to happen, Hertel added.
Promising to put “kids over clicks,” the online safety legislation sponsored by Senate Democrats proposes a three-pronged approach: bolstering data privacy protections for minors, requiring parental consent for full access to social media and mandating companies prevent Michiganders under 18 from accessing AI chatbots powered by large language models.
Free speech advocates are voicing concerns.
While attempting to keep kids safe online is an “honorable” goal, putting age-based barriers in front of certain online services is “a very slippery slope of unintended consequences,” said Kyle Zawacki, the legislative director for Michigan’s American Civil Liberties Union chapter.
“It’s a very, very difficult needle to thread to protect our First Amendment rights with regards to trying to protect kids to access things,” Zawacki told Bridge Michigan. He said the ACLU has “major concerns” from that perspective.
Bill sponsors and digital safety advocates kept “big tech” in the crosshairs throughout the introductory press conference at Michigan’s Capitol, and the legislation would put the onus on those companies to implement the new protections.
“As technology advances, online safety seems to become increasingly out of our grasp, whether it be AI programs or social media platforms,” said Sen. Stephanie Chang, D-Detroit.
Advocates emphasized the addictive nature of social media and the well-documented harms the platforms can cause.
A robust body of research shows that algorithmic social media releases a rush of dopamine in the brain, which produces a feeling of pleasure. The algorithms that decide which content is displayed tailor themselves to individual preferences to keep users, children included, scrolling. The feeds make users lose their sense of time and can, at times, steer children toward unhealthy behaviors, numerous studies have shown.
It doesn’t appear the legislation would restrict children’s access to social media feeds that are purely chronological, however.
“Big tech companies know that our children are incredibly vulnerable to their exploitative algorithms, but continue to profit,” Chang said.
While Democrats currently control the state Senate, Republicans hold the majority in the House, where the bills would have to advance in order to become law.
House Speaker Matt Hall, R-Richland Township, said Wednesday he was pleased Senate Democrats are “finally offering some idea” on the topic but didn’t say whether he’d endorse any of their proposals. He told reporters House Republicans would be “laying out our vision” for the issue.
It’s a hot topic nationally. A slew of states have eyed or passed laws restricting chatbots from acting as mental health services, and New York has enacted a law that interrupts extended sessions with the technology. Industry juggernaut OpenAI has backed a similar measure in California that would require its services to use age estimation technology to bar anyone it determines may be underage.
“Michigan families demand protections that hold big tech accountable by reining in predatory practices that turn kids into social media addicts and expose them to all kinds of risks every time they get online,” said Alisha Meneely, vice president of government and community affairs for Unspam, a Utah-based firm that helps governments develop no-contact registries.
For the ACLU of Michigan, the possibility that age estimation technologies could get it wrong raises red flags, Zawacki said.
“That could lead to the inability of someone who absolutely is of age to access material, which is very concerning from our perspective, of First Amendment rights,” Zawacki told Bridge, calling it “an improper burden on adults’ rights to access content.”
Sen. Dayna Polehanki, D-Livonia, pointed to teenagers who allegedly died by suicide after developing artificial relationships with chatbots powered by large language models, the leading technology currently marketed as artificial intelligence, and being encouraged by the chatbots to harm themselves.
The proposal in Michigan would require the relevant tech companies to verify the age of their visitors or use “age estimation” technologies, but the legislation does not specify what methods the companies should use.
Recent laws in other countries, such as the United Kingdom’s Online Safety Act, have drawn outrage from some privacy advocates because they require platforms hosting adult content to verify the age of their visitors, at times by making users submit official identification to the companies.
The sponsors of the package couldn’t say whether a similar scheme might be implemented under their proposal.
At the federal level, President Donald Trump issued an executive order threatening to withhold broadband funding from states that adopt their own strong AI regulations. That said, the White House’s AI czar, David Sacks, has said the administration would not go after states that pass laws aimed at protecting children.
