With the rapid advancement of artificial intelligence, concerns over national security have prompted California lawmakers to take action. Passed by the California State Assembly on Aug. 28 and by the Senate on Aug. 29, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act is meant to prevent physical and financial damage caused by AI. Key concerns highlighted by the bill include defamation through deepfakes, assistance in creating weaponry and cyberattacks on infrastructure.
If signed, the bill will require developers of AI models to take a number of steps to ensure the safety of their models. Under the bill, developers must conduct thorough testing of their models, take preventative measures against malicious use, implement the ability to carry out a “full shutdown” and create a publicly available safety and security protocol.
Although the legislation aims to protect the safety of California’s citizens and infrastructure, it has sparked controversy within the technology industry. According to Reuters, the bill has divided tech companies: some, such as Tesla, support it, whereas others, such as Google, are concerned about the potential costs of testing and the threat it poses to innovation. While the bill may bring safety to consumers, the lengthy new testing measures and limitations may curb California’s lead in the industry.
If the bill is signed, its impact will be felt in both the state’s economy and residents’ privacy. Ultimately, those effects must be weighed carefully to determine whether the legislation proves a net positive or negative, with both security and industry at stake.
“I think with this new act and forcing developers to test their model, making sure that it has some safeguards against malicious activity and having a full switch off, we’re going to be able to better protect citizens and other people that actually use artificial intelligence in the first place. It’s also going to help these companies be more credible because of the fact that consumers are going to believe that the AI they’re using is safe,” Computer Science Club President Brendon Wu says.
Currently, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act awaits Governor Gavin Newsom’s signature. He has until the end of the month to either sign the bill into law or veto it.