Lawmakers in California last month advanced about 30 new measures on artificial intelligence aimed at protecting consumers and jobs, one of the biggest efforts yet to regulate the new technology.
The bills seek to impose the toughest restrictions in the nation on A.I., which some technologists warn could kill entire categories of jobs, throw elections into chaos with disinformation, and pose national security risks. The California proposals, many of which have gained broad support, include rules to prevent A.I. tools from discriminating in housing and health care services. They also aim to protect intellectual property and jobs.
California’s legislature, which is expected to vote on the proposed laws by Aug. 31, has already helped shape U.S. tech consumer protections. The state passed a privacy law in 2020 that curbed the collection of user data, and in 2022 it passed a child safety law that created safeguards for those under 18.
“As California has seen with privacy, the federal government isn’t going to act, so we feel that it is critical that we step up in California and protect our own citizens,” said Rebecca Bauer-Kahan, a Democratic assembly member who chairs the State Assembly’s Privacy and Consumer Protection Committee.
As federal lawmakers drag their feet on regulating A.I., state legislators have stepped into the vacuum with a flurry of bills poised to become de facto regulations for all Americans. Tech laws like those in California frequently set a precedent for the nation, in large part because lawmakers across the country know it can be challenging for companies to comply with a patchwork of rules across state lines.
State lawmakers across the country have proposed nearly 400 new laws on A.I. in recent months, according to the lobbying group TechNet. California leads the states with a total of 50 bills proposed, although that number has dwindled as the legislative session has progressed.
Colorado recently enacted a comprehensive consumer protection law that requires A.I. companies to use “reasonable care” while developing the technology to avoid discrimination, among other issues. In March, the Tennessee legislature passed the ELVIS Act (Ensuring Likeness Voice and Image Security Act), which protects musicians from having their voices and likenesses used in A.I.-generated content without their explicit consent.
It’s easier to pass legislation in many states than it is on the federal level, said Matt Perault, executive director of the Center on Technology Policy at the University of North Carolina at Chapel Hill. Forty states now have “trifecta” governments, in which both houses of the legislature and the governor’s office are run by the same party — the most since at least 1991.
“We’re still waiting to see what proposals actually become law, but the massive number of A.I. bills introduced in states like California shows just how interested lawmakers are in this topic,” he said.
And the state proposals are having a ripple effect globally, said Victoria Espinel, the chief executive of the Business Software Alliance, a lobbying group representing big software companies.
“Countries around the world are looking at these drafts for ideas that can influence their decisions on A.I. laws,” she said.