Google’s rollout of Gemini across Workspace apps is a masterclass in coercive design, not privacy innovation. The company claims it won’t train foundation models directly on your Gmail or Drive files. However, Gemini’s outputs (summaries, snippets, and AI-generated replies) can and do feed into training datasets. Google says it tries to filter out personal information, but that filtering cannot be independently audited. This is a data extraction pipeline disguised as a helpful assistant, and the default setting always favors Google, not you.
The hidden toggle maze
Opting out of Gemini’s data collection is deliberately punishing. The only way to block AI training is to find and disable the obscure “Gemini Apps Activity” setting, which also deletes your chat history. Google has hidden this toggle from the main privacy dashboard, meaning millions of users will never find it. Even worse, to turn off Gemini in Gmail you must disable “Smart Features,” which nukes popular unrelated tools like inbox filtering, Smart Compose, and package tracking. This “forced action” dark pattern creates a false choice: accept AI surveillance or lose core functionality you depend on daily.
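To make the coupling concrete, here is a minimal sketch of the "forced action" pattern. This is not Google's actual code; every name here (`smartFeaturesEnabled`, `BUNDLED_FEATURES`, and so on) is invented for illustration, assuming only what the paragraph above describes: one master toggle that gates both AI data collection and unrelated conveniences.

```typescript
// Hypothetical model of a "forced action" dark pattern.
// None of these names reflect Google's real implementation;
// they only illustrate how one toggle can gate unrelated features.

interface WorkspaceSettings {
  smartFeaturesEnabled: boolean; // the single master toggle
}

// Features a user actually wants, bundled with the data collection.
const BUNDLED_FEATURES = [
  "inbox filtering",
  "Smart Compose",
  "package tracking",
] as const;

function aiDataCollectionActive(s: WorkspaceSettings): boolean {
  // Opting out of AI processing is only possible via the master toggle...
  return s.smartFeaturesEnabled;
}

function availableFeatures(s: WorkspaceSettings): string[] {
  // ...but the same toggle gates every unrelated convenience,
  // so disabling it removes the features the user depends on.
  return s.smartFeaturesEnabled ? [...BUNDLED_FEATURES] : [];
}

// The "false choice" in action:
const optedIn: WorkspaceSettings = { smartFeaturesEnabled: true };
const optedOut: WorkspaceSettings = { smartFeaturesEnabled: false };

console.log(aiDataCollectionActive(optedIn), availableFeatures(optedIn));
// -> true, all three features
console.log(aiDataCollectionActive(optedOut), availableFeatures(optedOut));
// -> false, [] : opting out of surveillance costs everything else
```

A privacy-respecting design would expose independent toggles for data collection and for each convenience feature; bundling them behind one switch is precisely what makes the pattern coercive rather than merely inconvenient.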
The big picture: defaults as weapons
Google paid billions to secure default search status on iPhones, and now it is weaponizing defaults to drive Gemini adoption. The presumption is always consent: AI summaries, AI writing tools, and data sharing are all on by default. Dr. Harry Brignull, who coined the term “dark pattern,” notes this is a “pre-selection” strategy where choices buried three clicks deep count as “opt-in.” Google knows most users will never navigate the maze. This is not a bug. It is a deliberate design philosophy that values training data over user agency, and regulators should treat it as the anticompetitive behavior it is.
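The "pre-selection" strategy is easy to express in code. Below is an invented sketch (these flag names and defaults are assumptions for illustration, not Google's configuration) showing how shipping every data-sharing setting as `true` lets a product count user silence as consent:

```typescript
// Hypothetical illustration of "pre-selection": every flag a user
// would have to hunt down and disable ships enabled by default.

interface GeminiDefaults {
  aiSummaries: boolean;
  aiWritingTools: boolean;
  dataSharing: boolean;
}

// The factory state: consent is presumed on every axis.
const DEFAULTS: GeminiDefaults = {
  aiSummaries: true,
  aiWritingTools: true,
  dataSharing: true,
};

// A user's effective settings are the defaults plus whatever
// they managed to find and override, three clicks deep.
function effectiveSettings(
  overrides: Partial<GeminiDefaults>
): GeminiDefaults {
  return { ...DEFAULTS, ...overrides };
}

// Most users never change anything, so this is what "opt-in" means:
console.log(effectiveSettings({}));
// -> { aiSummaries: true, aiWritingTools: true, dataSharing: true }
```

A genuine opt-in would invert those defaults to `false` and require an explicit user action before any flag flips on; that inversion is the entire difference between consent and pre-selection.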
Source: Ars Technica
