AI Content Changes the World of Politics: Protecting Free Speech While Defending Basic Truths

By: Jason Torchinsky and Amber Hulse

Recent technological advancements in artificial intelligence (AI) have enabled the creation of “deep fake” or “generative AI” content, which has already appeared in several political ads, most notably ads depicting Joe Biden and Donald Trump. The American Association of Political Consultants (AAPC) defines this content as “synthetic, computer-generated video, stills, or audio elements derived from a person’s likeness, voice, or image that is so near realistic that it is intended to lead voters to deceptive conclusions alternative to reality about a candidate, party, or issue.”[1]

In response to the threat that campaigns’ use of “deep fake” technology poses to voter confidence and trust, Rep. Yvette D. Clarke (D-N.Y.) introduced legislation that would require disclosure of any AI-generated content in political ads. The bill would amend federal campaign finance law to require ads regulated by the Federal Election Campaign Act to include a statement disclosing any use of AI-generated content. The Federal Election Commission recently clarified its disclosure rules, concluding that the “paid by” disclosures required on websites also apply to advertising on other platforms, such as social media and streaming sites. The proposed legislation would extend the applicable disclaimer requirements to ads in essentially any medium.

However, the AAPC recently released a statement updating its ethics code, concluding that this technology, as the AAPC narrowly defines it, “has no place in legitimate, ethical campaigns.” The statement, which condemns the use of generative AI technology in ads to create realistic images or audio of persons intended to mislead or create false conclusions, also asserted that “a warning or disclaimer that the audience is viewing a ‘deep fake’ video created using generative AI is insufficient and will not be accepted as a method to avoid this standard.” Read the full statement here.

While much AI-generated content has been created for satirical purposes, the AAPC’s statement declared that the organization “does not believe ‘deep fake’ generative AI content qualifies as satire” and that “there is a strong distinction between parody and an intent to deceive.”

If your organization is planning to create political advertisements using generative AI technology, or is concerned about how this legislation or the AAPC’s position on the issue will affect your organization’s advertisements, reach out to your Holtzman Vogel contact.

[1] Jason Torchinsky, a partner at Holtzman Vogel Baran Torchinsky Josefiak PLLC, is a board member of the AAPC and serves as its general counsel.