Google to Require Disclaimer on Political Ads with AI-Generated Content: Move Follows Proposed Legislation and Efforts to Prompt FEC Action

By: Michael Bayes, Andrew D. Watkins, and Daniel Bruce

Google announced that it will require special disclaimers on political advertisements that feature “synthetic content that inauthentically depicts real or realistic-looking people or events” beginning in November 2023.  The new policy is intended to address growing concerns about the use of artificial intelligence (“AI”) and so-called “deepfakes” in political advertising, and it will apply to image, video, and audio content placed on all Google platforms, including YouTube.

The disclaimer will be required on ads with “synthetic content that makes it appear as if a person is saying or doing something they didn’t say or do,” or content “that alters footage of a real event or generates a realistic portrayal of an event to depict scenes that did not actually take place.”  Google’s policy does not prescribe specific disclaimer language or how it must be displayed, but the disclaimer “must be clear and conspicuous” and “placed in a location where it is likely to be noticed by users.”

Google’s new policy is limited to AI-generated content and does not apply to “inconsequential … editing techniques such as image resizing, cropping, color or brightening corrections, defect correction … or background edits that do not create realistic depictions of actual events.”

Google’s announcement follows the introduction of the REAL Political Ads Act in the U.S. Senate, which would require a disclaimer on political ads that include AI-generated content.  The bill was introduced on May 15, 2023, by Senators Amy Klobuchar (D-MN), Cory Booker (D-NJ), and Michael Bennet (D-CO).  The following day, the left-of-center interest group Public Citizen filed a petition for rulemaking with the Federal Election Commission (FEC) that proposed to regulate “deliberately deceptive AI campaign ads” under an existing statute that prohibits “fraudulently misrepresenting” oneself as acting on behalf of a candidate.  Public Citizen’s proposal focused largely on prohibiting “deceptive” AI-generated content in political ads, rather than on disclaiming its use.  (Public Citizen’s first petition for rulemaking was rejected by the FEC as procedurally inadequate; a revised petition was submitted and advanced by the agency in late August.)

The prospects for the REAL Political Ads Act in the Senate are unclear, but passage appears unlikely.  The FEC is similarly unlikely to adopt regulations in response to Public Citizen’s petition after its three Republican Commissioners expressed concerns that the rulemaking request exceeds the agency’s statutory authority.  Google’s move likely reduces the pressure on the FEC to act hastily on what remains a developing issue, and it gives both Congress and the FEC an opportunity to evaluate, during the coming elections, the effectiveness of disclaimers for AI-generated content and whether an actual need for them exists.  Google’s action may also spur other tech and social media companies to implement similar disclaimer requirements.