Meta Denies Using Pirated Content for AI Training
Meta denies allegations of using pirated adult videos for AI training, attributing downloads to personal use by employees, not corporate action.

Meta Platforms Inc., under the leadership of Mark Zuckerberg, has denied allegations of using pirated adult videos to train its artificial intelligence (AI) systems. This controversy arises from a lawsuit filed by Strike 3 Holdings, accusing Meta of illegally downloading thousands of copyrighted adult films for AI model training. Meta contends that these downloads were sporadic, dating back several years, and likely due to individual employees’ personal activities rather than a coordinated corporate effort.
Background of the Allegations
In July 2025, Strike 3 Holdings, a company specializing in adult content, filed a lawsuit against Meta. The lawsuit claims that Meta pirated at least 2,396 adult videos to train its AI models, including technologies like Meta Movie Gen and the LLaMA large language model. The suit alleges unauthorized use of BitTorrent to download these copyrighted works, asserting their incorporation into datasets to enhance Meta’s AI capabilities in video and multimedia content handling.
The plaintiffs highlight that the adult videos were “award-winning, critically acclaimed” and argue that using such material without permission constitutes copyright infringement and piracy. They believe the scale and nature of the downloads suggest intentional collection for AI training purposes.
Meta’s Response and Defense
Meta filed a motion to dismiss the lawsuit on October 27, 2025, denying any corporate involvement in torrenting pornographic content for AI training. Instead, Meta claims the downloads were isolated incidents in which "disparate individuals," presumably employees, obtained the content for personal use rather than as part of any corporate initiative.
Key points of Meta’s defense include:
- Temporal distribution: Downloads span from 2018 to recent years, averaging about 22 downloads per year, which Meta argues is far too few to build an effective AI training dataset.
- Lack of evidence: Meta asserts that Strike 3's case rests on IP addresses without concrete proof of any coordinated data-collection effort.
- Internal policies: Meta emphasizes its strict policies against using adult content in AI training, noting that such unauthorized downloads would violate company rules.
- Individual responsibility: The company attributes the downloads to the personal activities of certain staff members, distancing the corporate entity from liability.
Meta’s motion argues that the plaintiffs’ claims are “implausible on their face” and insufficient to proceed to trial.
Broader Context: AI Training Data and Content Controversies
This lawsuit and Meta’s defense highlight ongoing tensions in the AI industry regarding the sourcing of training data. AI models, especially those focused on multimedia and language generation, require vast datasets, raising legal and ethical questions about copyright infringement and data privacy.
Other incidents in the AI domain have involved employees mishandling explicit content or datasets. For example, a 2023 case involved a U.S. Department of Energy employee who uploaded over 187,000 pornographic images to a government network intending to create AI-generated “robot porn” — an issue unrelated to Meta but illustrative of challenges around managing sensitive and explicit content in AI contexts.
Meanwhile, companies like OpenAI are adjusting policies to relax restrictions on adult content generation for verified users, reflecting evolving norms around AI and erotica.
Industry and Legal Implications
If Strike 3's claims are substantiated, the case could set significant precedents for copyright enforcement in AI training data. The use of copyrighted adult content raises questions about consent, licensing, and the boundaries of fair use in machine learning.
Meta's firm denial, and its attribution of the downloads to individual employees' personal use, may complicate legal accountability and could prompt companies to tighten internal controls over employee downloading and data-handling activity.
This ongoing legal battle underscores the complex intersection of AI innovation, copyright law, and corporate responsibility. Meta’s stance reflects a defensive posture aimed at distancing the company from direct involvement while acknowledging the challenges of policing employee behavior. The case will likely be closely watched by the technology sector, legal experts, and content creators as it progresses.



