There’s nothing fake about deepfakes: the weaponization of AI against others has gone mainstream
At C.A. Goldberg, PLLC, we’ve been on the front lines warning that Artificial Intelligence and deepfakes present new risks for everybody.
At C.A. Goldberg, PLLC, we are no strangers to the dangers of tech-facilitated abuse. Our clients have suffered harms at the hands of Big Tech platforms, whether from the sale of lethal poison to children, the theft of intimate images by pervy cell phone store employees, or stalking and harassment on dating apps. Now, with advancements in machine learning and natural language processing accelerating by the day, the weaponization of AI is rapidly increasing. And it is ruining lives.
AI-powered image-generating systems are now enabling even the most tech-inept among us to manufacture photos and videos that are almost impossible to detect as fake. Deepfake technology uses deep learning algorithms (which are designed to learn from data to improve their own performance) to create convincing fake images and videos. Almost as soon as deepfake technology was born, it was weaponized against women to create or mimic non-consensual pornography. Abusers already destroy lives by distributing intimate images or videos that were shared with the expectation of privacy (formerly known as “revenge porn”). Thanks to AI, abusers can carry out image-based abuse without ever having to receive an intimate image; they can just create one.
As a surveillance tool, AI could enable offenders to track and monitor their victims with greater ease and precision than ever before. AI-powered algorithms could, for example, analyze data from an array of sources, such as social media posts and geotagged photos, to predict a person’s movements and approximate, or even anticipate, a victim’s location.
Advanced facial recognition technology powered by AI is far more effective than humans at identifying individuals from images or videos, even when the quality is low or the person is partially obscured. Stalkers could track victims in real time through surveillance cameras, social media, or other online sources. Those with access to these databases, for example members of law enforcement, could exploit them.
AI-powered software can analyze vast amounts of data in the blink of an eye, enabling stalkers to surveil their victims’ online activities. By monitoring their victims’ digital footprint, from browsing history to emails to downloads, abusers could gain insights into their daily lives and use this information to manipulate, control, coerce, or blackmail them.
AI could even automate and scale the process of manipulation by tracking interactions, identifying patterns in posting behavior, and even analyzing the sentiment of a victim’s communication. This could be used by interpersonal abusers, or even scammers looking for a target.
AI-powered tools can create convincing impersonations of people through voice synthesis and text generation. An offender could use AI tools to pose as a victim in order to endanger or frame them. Or an abuser could pose as someone the victim trusts in order to get access to them, gather information about them, isolate them, or manipulate their personal or professional relationships. Offenders could use AI-powered tools to fabricate text messages or emails that appear to come from trusted sources, and use that access to threaten or deceive victims, or isolate them from their support network.
An abuser could also generate or manipulate digital evidence to frame a victim for a crime.
In 2023, stories surfaced of scammers using artificial intelligence to sound like a family member in distress to con people out of thousands of dollars. An abuser or scammer could use similar techniques to convince a victim to send intimate images which would then be used to sextort or exploit them.
Technology will likely mainstream photorealistic animated AI-powered avatars in the near to medium term. Voices are basically already there (see impersonation above). Sex traffickers could use these to recruit victims. Currently, C.A. Goldberg, PLLC, regularly sees cases of adult predators impersonating teens to target minors for grooming, sexual abuse, and trafficking. As kids and teens may be less inclined to perceive a peer as dangerous, predators often pose as such, gaining information on social media about where a kid goes to school or what their interests are to create a convincing backstory. It’s getting exponentially easier for predators to access and groom minors in this way.
AI is going to make it far easier for bad actors to create and deploy “bots.” The technical barriers are dropping in real time, and soon bad actors will be able to mobilize an army of avatars posting on social media at their instruction. This tech already exists, but most of the general population doesn’t know how to weaponize it. That won’t be true much longer.
At the same time, AI search is going to make it easier than ever for bad actors to find sensitive information. Together, better and more relevant information plus the ability to create seemingly organic mobs could make doxxing more frequent, intense, and harmful.
This ability to deploy bots, or “intelligent bots,” could also play a role in stalking and harassment. A stalker or harasser could, for example, generate bots to send emails to an employer or even fake a voice to make an angry phone call.
Social media platforms will likely play a central role in perpetuating AI-facilitated harms by hosting and amplifying efforts to dox victims of targeted harassment campaigns.
C.A. Goldberg, PLLC has been at the forefront of confronting how AI can be maliciously used to upend a person’s life. While deepfakes have mainly impacted our celebrity and public figure clients over the past ten years, we have seen an increase in non-public figures and even children becoming victims of this abuse; the technology has simply become too easy and accessible. The good news is that New York was one of the first states to criminalize the publication of deepfakes. We are here to go after the wrongdoers, and the platforms that are making money from the technology and victims’ humiliation. Please reach out to us if you have suffered harms at the hands of malicious AI.
There is no ‘going offline’ anymore. In order to live, work, learn, and connect, most of us must be online.
Tech abuse in any form is isolating and terrifying. But the pain and disruption experienced by victims can be overlooked or underplayed.
As victims’ advocates, we must anticipate the needs of those we advocate for by assessing the risks and potential for harms from emerging technologies.
We will continue to explore: can we imagine a world in which AI helps victims? While risk analysis is essential, we ideate and create outcomes that serve victims, too.
Currently, AI is being used to spot and remove Child Sexual Abuse Imagery at a rate human moderators can’t rival, while also sparing humans the psychological damage of sifting through the very worst content imaginable to stem its spread.
In the future, trained AI technologies could be put to similar protective uses.
As artificial intelligence continues to evolve, so must our understanding of its potential impact on tech-facilitated violence and domestic abuse. We will continue to help those affected, and advocate for a safer digital landscape.
C.A. Goldberg, PLLC, is the country’s first law firm dedicated to justice for victims catastrophically injured by human maniacs and inhumane tech platforms. Since 2014, we’ve been at the forefront of shutting down some of the worst humans and platforms (e.g., Harvey Weinstein, Omegle, GirlsDoPorn) and have litigated some of the most influential cases, against Amazon, Snap, and Meta, reining in the tech companies that thought they were above the law.
Get in Touch