State Regulations of A.I. in Elections
By Mary Margaret Burniston; Photo Credit: Sue Dorfman
In the final months leading up to the 2024 election, states have demonstrated an increasing appetite for regulating AI-generated content in election-related communications. Such bills have passed in both Democratic- and Republican-majority legislatures. Many share a similar structure: they forbid the use of AI-generated content or deepfakes relating to elections unless the content includes a prominent label identifying it as generated by AI. Yet the approach taken by each state varies considerably in scope, cause of action, intent requirements, and relief offered.
First, the regulations differ in scope: some address the use of AI broadly, while others focus more narrowly on deepfakes. Bills in the former category regulate any content generated with AI. Utah S.B. 131, for instance, requires labeling of all content made with “any use of generative artificial intelligence in generating or modifying the substantive content.”[1] Likewise, Florida H.B. 919 covers content “created in whole or in part with the use of generative artificial intelligence.”[2]
Other states focus on AI-generated deepfakes, often described as “materially deceptive media.” Hawaii S.B. 2687 regulates materially deceptive media, which it defines as media that (1) is created by technology such as artificial intelligence, (2) depicts “an individual engaging in speech or conduct in which the depicted individual did not in fact engage,” and (3) “would cause a reasonable [person] to believe the depicted individual engaged in the speech or conduct depicted.”[3] Similarly, Minnesota H.F. 1370 defines deepfakes as content that is “so realistic that a reasonable person would believe it depicts speech or conduct of an individual.”[4]
Second, the regulations vary in to whom they provide a cause of action. They range from providing a cause of action only to the individual depicted in the media (Indiana H.B. 1133) to providing a broad cause of action to “any person who believes that a violation has occurred” (Colorado H.B. 1147).[5] Other iterations include a cause of action for “registered voters” (California A.B. 730), organizations “that represent the interest of voters likely to be deceived” (New Mexico H.B. 182), and attorneys general (Mississippi S.B. 2577).[6]
Third, the regulations vary in their intent requirements. Some focus on the intended impact on the depicted individual. For instance, Texas S.B. 751 requires that the charged individual have the intent to injure the depicted individual.[7] Likewise, New Hampshire H.B. 1432 requires that the charged individual acted “for the purpose of […] causing any financial or reputational harm” to the depicted individual.[8] Colorado H.B. 1147 imposes a lower standard: the charged individual must “know or [have] reckless disregard for the fact that the depicted candidate did not say or do what the candidate is depicted as saying/doing in the communication.”[9] Other bills focus on the intended impact on the electoral process, such as the intent “to change the voting patterns of electors” (Alabama H.B. 172) or to harm the “electoral prospects of a candidate” (Michigan H.B. 5144).[10] Still others focus on knowledge that the content is untrue, such as Arizona S.B. 1359, which requires that the charged individual “knows [that the content] is a deceptive and fraudulent deepfake.”[11]
Finally, the regulations vary in the relief provided to claimants. Some provide only monetary relief, such as Arizona S.B. 1359, under which a civil penalty accrues for each day the media is distributed without a label disclosing its use of AI.[12] Other regulations also extend injunctive relief. Injunctive relief may include removal of the content in question, as in Mississippi S.B. 2577, which enables a court to “order that any disseminated digitization be removed” from “any physical or electronic method the [media] was disseminated through.”[13] Likewise, Idaho H.B. 664 empowers a depicted candidate to seek injunctive relief “prohibiting the publication” of the media.[14] Other injunctive relief seeks to compel the inclusion of a label disclosing the use of AI. This is the approach taken in California A.B. 2355, which enables the state Fair Political Practices Commission to “compel compliance” with the labeling requirements.[15]
Common to all the bills is a growing recognition that rapidly advancing AI technologies can generate lifelike, manipulative election-related content. However, the effectiveness of these bills, and their ability to withstand constitutional scrutiny, remains to be seen.
Mary Margaret Burniston is currently a 2L at Vanderbilt Law School. She is from Austin, Texas and graduated from the University of Texas at Austin in 2021 with a double major in Government and Humanities.
[1] S.B. 131, 2024 Leg., Gen. Sess. (Utah 2024).
[2] H.B. 919, 2024 Leg., 126th Sess. (Fla. 2024).
[3] S.B. 2687, 2024 Leg., 32nd Sess. (Haw. 2024).
[4] H.F. 1370, 2024 Leg., 93rd Sess. (Minn. 2024).
[5] H.B. 1133, 123rd Gen. Assemb., 2d Reg. Sess. (Ind. 2024); H.B. 1147, 74th Gen. Assemb., 2d Reg. Sess. (Colo. 2024).
[6] A.B. 730, 2019 Leg., Reg. Sess. (Cal. 2019); H.B. 182, 2024 Leg., Reg. Sess. (N.M. 2024); S.B. 2577, 2024 Leg., Reg. Sess. (Miss. 2024).
[7] S.B. 751, 2019 Leg., 86th Sess. (Tex. 2019).
[8] H.B. 1432, 2024 Leg., Reg. Sess. (N.H. 2024).
[9] H.B. 1147, 74th Gen. Assemb., 2d Reg. Sess. (Colo. 2024).
[10] H.B. 172, 2024 Leg., Reg. Sess. (Ala. 2024); H.B. 5144, 102nd Leg., Reg. Sess. (Mich. 2024).
[11] S.B. 1359, 56th Leg., 2nd Reg. Sess. (Ariz. 2024).
[12] S.B. 1359, 56th Leg., 2nd Reg. Sess. (Ariz. 2024).
[13] S.B. 2577, 2024 Leg., Reg. Sess. (Miss. 2024).
[14] H.B. 664, 67th Leg., Reg. Sess. (Idaho 2024).
[15] A.B. 2355, 2023-2024 Leg., Reg. Sess. (Cal. 2024).