Jennifer DeStefano answered the phone and heard her daughter’s voice on the other end: “Mom, I messed up.”

Her daughter’s voice was followed by a man’s; he made it clear that if she ever wanted to see her daughter again, she needed to cough up $1 million.

It was a fake—a deepfake.

The voice on the phone was not, in fact, that of her daughter. Yet Jennifer said she was unable to tell. “It was completely her voice. It was her inflection. It was the way she would have cried. I never doubted for one second it was her.”

Amidst the meteoric rise of artificial intelligence systems, one nefarious AI tool has become incredibly accessible even to the most committed Luddite: deepfakes. Deepfakes are AI-generated video or audio created with the intent to deceive, depicting a real person doing or saying something they never actually did.

Some of the more familiar examples include a doctored video that made then-House Speaker Nancy Pelosi appear drunk and a satirical depiction of Florida Gov. Ron DeSantis’ presidential campaign launch on Twitter, which included guests such as Dick Cheney, the FBI, and Hitler.

Given the stature of Nancy Pelosi and Ron DeSantis, these doctored videos were swiftly flagged and labeled as fakes, preventing any reputational damage from metastasizing. Deepfake audio is already being weaponized against anyone and everyone, which raises the question: What would the consequences be of a convincing deepfake video targeting any one of us?

These national figures have scores of published videos, interviews, and other content that make it easy to debunk fabricated clips; they have hyper-vigilant fans who jump to their defense and police the internet on their behalf; and they occupy roles in which constant public scrutiny is part of the job. Most of us have none of those protections.

While some of the “OG” deepfake videos were the product of extensive research, technical sophistication, time, and resources, the mass commercialization of AI products means I am a few clicks and $20 away from generating a deepfake video of my own. The apps available for this are seemingly endless, but consider one example, Deepfakes Web.

First, a user uploads a short video (about 20 seconds) of the person they would like to star in their deepfake. Deepfakes Web’s AI system then uses deep learning to learn everything it needs to know about the star of the show: nonverbals, voice, mannerisms, movements, and so on. From there, the rendering begins, and users can reuse their trained model afterward. At $4 per hour, Deepfakes Web charges about $20 for roughly 10,000 iterations of fabricated content. To give credit where it is due, the site leaves its output visibly imperfect in an attempt to mitigate nefarious use. But the broader point is that a plethora of deepfake services online can be used by bad actors to hurt, even destroy, those in their crosshairs with incredible ease.

In the 88th Legislature, Texas Gov. Greg Abbott signed Senate Bill 1361, which criminalizes the production and distribution of deepfake videos depicting a person engaging in sexual conduct. The importance of this bill cannot be overstated, as evidenced by scores of tragic stories in which young, innocent girls have been targeted by repulsive criminals who splice their faces into pornographic videos.

Texas can and should go further next legislative session. Thankfully, the rise in deepfake technologies has been matched by an increase in watermarking tools. One example is the partnership between Truepic and Revel.ai, which watermarks published videos to indicate whether they contain AI-generated content. So, to preempt the tech companies that will proffer the same worn-out excuse they offer every time Texans inquire about consumer protection: the technology is here; it can be done.

Deepfake technology’s potential to undermine truth, threaten national security, and harm individuals’ reputations poses a major threat to the public. By building on the momentum from last legislative session, Texas can continue to lead in protecting consumers through transparency and accountability for online content.