In October 2019, California Governor Gavin Newsom signed two bills, AB 730 and AB 602, into law to address growing concerns around deepfake technology, the former targeting election security and the latter non-consensual pornography. These laws were among the earliest legislative efforts to manage the potential abuses of deepfake technology.
AB 730, often described as California's political deepfakes law, prohibits the distribution of materially deceptive audio or visual media of a political candidate within 60 days of an election. The intent was to prevent the spread of maliciously doctored photos or videos that could misrepresent a candidate's words or actions and unduly influence voters.
However, these laws have been challenged in federal court on the grounds that they could infringe First Amendment rights. Critics argue that the laws may violate freedom of speech and chill digital creators who use the technology for satire, parody, or other legitimate artistic purposes.
Opponents further argue that the language of the laws is vague and overbroad, which may lead to inconsistent application.
It is clear that both the technology itself and the legal efforts to regulate it are part of a rapidly evolving landscape. Policymakers and legal professionals will need to continue navigating these challenges while weighing the rights and protections of individuals.