Strangers keep calling people because Google’s Gemini chatbot is handing out incorrect contact information that includes real phone numbers belonging to private individuals.

When AI Gets Your Number Wrong
A Reddit user reported receiving weeks of calls from people seeking lawyers, product designers, and locksmiths, all because Google’s generative AI had somehow connected his personal phone number to these services. The calls started about a month ago and haven’t stopped, leaving him scrambling for solutions that don’t appear to exist.
Daniel Abraham, a 28-year-old software engineer in Israel, experienced this firsthand in March when a stranger contacted him on WhatsApp asking for help with PayBox, an Israeli payment app. The stranger had followed Gemini’s customer service instructions, which incorrectly listed Abraham’s personal number as the company’s WhatsApp support line. Abraham doesn’t work for PayBox, and the company confirmed it doesn’t operate WhatsApp customer service.
Similar incidents keep surfacing. A PhD candidate at the University of Washington discovered that Gemini would provide her colleague’s personal cell phone number when prompted. Each case follows the same pattern: AI systems confidently deliver real phone numbers that have no connection to the requested services or information.
The exact mechanism behind these leaks remains unclear, but experts point to personally identifiable information in training data as the likely culprit. Phone numbers scattered across the internet (in databases, websites, and documents) become part of the vast datasets used to train these language models.
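Models can only repeat what their training data contains, so one common mitigation is to scrub phone-number-like strings from scraped text before it is used for training. The short Python sketch below is purely illustrative: the regular expression, the sample text, and the redaction token are assumptions for demonstration, and nothing here reflects how Google actually processes Gemini’s training data.

    import re

    # Rough pattern for phone-number-like strings (illustrative only; real
    # PII scrubbers rely on far more robust detection than a single regex).
    PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,14}\d")

    def redact_phone_numbers(text: str) -> str:
        """Replace anything that looks like a phone number with a placeholder."""
        return PHONE_RE.sub("[PHONE_REDACTED]", text)

    # Hypothetical sample of scraped text containing a made-up number.
    sample = "For PayBox support, message us on WhatsApp at +972 52-123-4567."
    print(redact_phone_numbers(sample))
    # -> For PayBox support, message us on WhatsApp at [PHONE_REDACTED].

Text that is never scrubbed this way can be memorized verbatim, which is one plausible route for a private number to resurface in a chatbot’s answer.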
Privacy Complaints Surge 400%
DeleteMe, which helps customers scrub personal information from the internet, reports a 400% increase in AI-related privacy requests over the past seven months. These complaints specifically mention ChatGPT, Claude, Gemini, and other generative AI tools, totaling several thousand queries.

Rob Shavell, DeleteMe’s cofounder and CEO, says 55% of these AI privacy concerns involve ChatGPT, while Gemini accounts for 20%, Claude for 15%, and other AI tools make up the remaining 10%. The complaints typically fall into two categories: people discovering that chatbots reveal accurate personal details about themselves, or finding that AI systems generate plausible but incorrect contact information.
This data suggests the problem extends far beyond isolated incidents. When Abraham tested Gemini again after his initial experience, the chatbot produced yet another incorrect WhatsApp number for PayBox customer service, this time one belonging to a different person entirely. Recent tests show Gemini continues providing wrong phone numbers when asked about PayBox, including a number that belongs to a separate credit card company.
The scope of exposure remains unknown since most victims likely never discover how strangers obtained their numbers. People receiving unexpected calls might assume it’s spam or misdirected marketing rather than an AI system distributing their contact information. The random nature of these leaks makes tracking and quantifying the problem nearly impossible.
Unlike traditional data breaches where companies can notify affected users and implement fixes, these AI-driven exposures offer no clear remediation path. Individuals cannot easily remove their information from training datasets that have already been processed and deployed across AI systems.
No Easy Solutions in Sight
The absence of straightforward fixes makes this privacy issue particularly troubling. Traditional methods for protecting personal information, such as opting out of directories or requesting data removal, don’t apply when the information is embedded within AI training data. Once these massive datasets are created and models are trained, extracting specific phone numbers becomes technically challenging, if not impossible.

PayBox’s customer service representative Elad Gabay confirmed the company doesn’t operate WhatsApp support, yet Gemini continues associating random phone numbers with its services. This disconnect between AI-generated information and reality creates ongoing problems for both the individuals whose numbers are leaked and the companies being misrepresented.








