Google Denies Using Gmail for AI Training Amid Lawsuit
Google denies using Gmail emails for AI training amid privacy lawsuit, calling claims misleading and clarifying data practices.

Google is embroiled in a major privacy controversy after viral claims and a class-action lawsuit alleged the company secretly used Gmail users' private emails and attachments to train its Gemini AI model. The company has firmly denied these allegations, calling the reports “misleading” and clarifying that Gmail content is not used for AI training purposes. However, the incident has exposed widespread confusion and concern about Google’s data handling practices and transparency around AI features embedded in its services.
Background: Viral Claims and User Backlash
In November 2025, multiple reports and social media posts spread rapidly, suggesting Google automatically enabled settings that allowed its Gemini AI to access Gmail, Chat, and Meet content for training without explicit user consent. A notable cybersecurity report from Malwarebytes helped fuel these claims, highlighting that many users found “smart features” settings enabled by default even if they had previously opted out. This led to fears that private communications were being mined covertly for AI development.
The controversy escalated when a class-action lawsuit was filed on November 11, 2025, alleging Google violated privacy laws, particularly the California Invasion of Privacy Act, by secretly turning on AI access across its Workspace services around October 10, 2025. The suit argues Google failed to provide transparent notification or straightforward opt-out mechanisms, thus infringing on users’ expectations of privacy and control over their data.
Google’s Response: Denial and Clarification
Google responded swiftly to the uproar. A spokesperson, Jenny Thomson, told The Verge that the reports were “misleading” and confirmed that Google has not changed anyone’s settings and does not use Gmail content to train the Gemini AI model. The company emphasized that Gmail’s “smart features” — which include conveniences like automatic flight updates, package tracking, and enhanced spell checking — have existed for years and are designed to personalize user experience within Google Workspace, not to train AI systems.
However, some users reported being re-enrolled in these smart features despite having previously opted out. The apparent cause was an administrative update introduced in January 2025, which separated personalization controls between Workspace products and other Google services and inadvertently reset some users’ preferences, fueling confusion.
What Are Gmail Smart Features?
Gmail’s smart features are AI-powered tools intended to enhance productivity and user convenience by analyzing email content for relevant updates and suggestions. For example, Gmail can automatically add flight details to calendars or detect package deliveries. When enabled, users agree to let Workspace use their content and activity to personalize their experience across Workspace apps.
Importantly, Google states that these features are not used to train Gemini or other AI models. The confusion arises because these smart features require content scanning within a user’s account to function but do not feed this data into AI training datasets.
Legal and Privacy Implications
The lawsuit centers on whether Google’s opt-out approach and the complexity of settings management violate privacy laws by effectively enrolling users by default without clear consent. Critics argue that automatic enabling of data-processing features without transparent communication undermines user privacy and trust, especially amid the rise of AI services that rely heavily on personal data.
To protect privacy, experts recommend users review and manage their Gmail and Workspace smart feature settings, as the opt-out process can be complicated and unintuitive. Google provides controls to disable smart features both on desktop and mobile, but many users remain unaware of these options.
Industry Context and Wider Impact
This case highlights the broader tension between AI innovation and user privacy. As tech giants embed AI deeply into communication platforms, questions about data consent, transparency, and security become paramount. Google’s situation mirrors debates faced by other companies over how to responsibly use private data while delivering AI-powered benefits.
The outcome of the ongoing class-action lawsuit could set important precedents for how AI training data may be collected and how clearly companies must communicate such practices to users. The case underscores the need for clearer privacy frameworks and user-friendly controls as AI technologies proliferate.
Summary
- Google denies using Gmail emails and attachments to train its Gemini AI, calling allegations misleading.
- The controversy arose after reports that Gmail’s smart feature settings were automatically enabled or reset for many users.
- Gmail smart features scan emails to power conveniences but are separate from AI training data.
- A class-action lawsuit alleges Google violated privacy laws by enabling AI access without proper user consent.
- Users are advised to review and disable smart features if concerned about privacy.
- The case reflects growing challenges in balancing AI innovation with transparent data practices.