Child Pornography Law Set Legal Standards For Cybercrime. But How Do ‘Deep Fakes’ Factor In?

APRIL 26, 2018 | Merritt Baer

The crime of child pornography is generally not contentious, at least in the statutory sense: we all agree it is horrible, and generally agree that it should be banned. Less well advertised, however, is the role of child pornography law in defining our legal boundaries for cybercrime. Because the crime is particularly abhorred, and because child pornography is exchanged almost exclusively over the Internet, it is a case study in cyberlaw where we tend to push the law as far as it will go. But technology, which enabled the proliferation of child pornography, may also make it harder to reconcile legally. This matters both for child pornography law itself and for child pornography as a case study of the ways our laws may require reexamination in the context of emerging technologies. The values-based considerations that underpin laws look different against a new backdrop. One of these technological sea changes, arriving in conjunction with artificial intelligence (AI), is the emergence of “deep fakes.”

In Ashcroft v. Free Speech Coalition in 2002, the Supreme Court struck down the major provisions of the Child Pornography Prevention Act as overbroad. A key provision of the law prohibited so-called “virtual child pornography,” including computer-generated images of children engaged in sexual acts. The provision could not survive scrutiny, the Supreme Court’s logic went, because the strong First Amendment interest was outweighed only when a real child victim’s welfare could be shown to be at stake.

Part of the premise for leaving “virtual” child pornography unproscribed was the simple fact that, in 2002, believable fakes couldn’t exist: computer-generated imagery simply wasn’t good enough. We could ban “real” child pornography, or child pornography in which a real child could be identified, and yet allow “virtual” child pornography to go unprosecuted, because we could tell the difference.

Now, “deep fakes” exist. Deep fakes are AI-assisted videos that convincingly show people doing things they never did. They are already prevalent in pornography, including “celebrity deep fakes” that transpose, for example, Gal Gadot’s face onto existing adult pornography. Some have raised national security concerns given their potential use in politics and “fake news.”

Consider Ashcroft’s standard that we must be able to identify a real child victim for images or videos to be prosecutable. Does the crime of child pornography survive the existence of deep fakes?

Perhaps it’s about time we reevaluated our standards. It is worth noting, for example, that the military applies a different legal standard. In the military justice system, 18 U.S.C. § 2252 (the civilian child pornography law) is brought in through Uniform Code of Military Justice (UCMJ) Articles 133 and 134. These articles refer to conduct “to the prejudice of good order and discipline” and “conduct unbecoming an officer and a gentleman” (where the accused is an officer).

Many other countries, including the UK, also prohibit images that need not be photographs of real children: photographs and “pseudo-photographs” have been prohibited since the Protection of Children Act 1978. The UK also explicitly prohibits non-photographic images (including computer-generated images) of children that are “indecent” under Section 62 of the Coroners and Justice Act 2009, punishable by up to three years’ imprisonment.

In other words, not everyone deals with child pornography in the way civilian Americans do. And our law is uniquely at issue now that the existence of deep fakes challenges fundamental forensic assumptions.

One option is to expand our definition of child pornography. Another option, and one that we already use in the law enforcement context, is to learn more about the production of the image. The forensic story behind an image could convey whether it meets the definition of child pornography (a real child involved in its production, as opposed to a doctored photograph). But, as the UCMJ and some international definitions demonstrate, perhaps the moral issue underpinning our law isn’t the “realness” of the child but the wrongness of the image’s existence. The premise of US proscriptions against child pornography is that the child is re-violated every time the image is viewed. But what if the violation is better conceived as one against societal values, too?

The role of social media and other platforms is prominent. First, if we consider the crime in its historical context, we would need digital forensics to attempt to prove that a real child was involved in the production. Second, if we want to adjust legal standards to include child pornography images regardless of whether a real child victim can be identified, we must question the existence of such images on online platforms. Finally, beyond the provably criminal, should we expect sites to be attentive to the content they host?

Image credit: Andrey_Popov/Shutterstock

The views expressed herein are the personal views of the author and do not necessarily represent the views of the FCC or the US Government, for whom the author works.
