Google's AI systems are surfacing real phone numbers in search results and chat responses, with users reporting an influx of unwanted calls from strangers seeking services. The issue appears widespread, with no straightforward mechanism for individuals to prevent their contact information from being returned by AI chatbots. Security researchers and users are raising concerns about how training data and indexing practices enable this unintended disclosure.
What This Means for Your Business
This vulnerability exposes a critical compliance risk for businesses deploying customer-facing AI. Companies using AI chatbots must audit what personal information their systems can access and return. Privacy teams should review data governance policies and consider adding output filters that detect and redact PII before a response ever reaches the user in any customer-facing context.
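One lightweight form such an output filter can take is a regex-based redaction pass applied to chatbot responses before they are returned. The sketch below is illustrative only: the pattern covers common North American phone formats and is not exhaustive, and the function and placeholder names are hypothetical, not part of any specific product's API.

```python
import re

# Hypothetical output filter: redacts phone-number-like strings from a
# chatbot response before it reaches the user. The pattern is a sketch
# covering common North American formats; real deployments would need
# broader international coverage and additional PII categories.
PHONE_PATTERN = re.compile(
    r"""
    (?<!\d)                     # not preceded by another digit
    (?:\+?1[\s.-]?)?            # optional US/Canada country code
    (?:\(\d{3}\)|\d{3})         # area code, with or without parentheses
    [\s.-]?\d{3}[\s.-]?\d{4}    # subscriber number
    (?!\d)                      # not followed by another digit
    """,
    re.VERBOSE,
)

def redact_phone_numbers(text: str, placeholder: str = "[REDACTED]") -> str:
    """Replace anything that looks like a phone number with a placeholder."""
    return PHONE_PATTERN.sub(placeholder, text)

if __name__ == "__main__":
    response = "You can reach the owner at (555) 123-4567 or 555.987.6543."
    print(redact_phone_numbers(response))
    # -> You can reach the owner at [REDACTED] or [REDACTED].
```

In practice, teams would layer filters like this with named-entity recognition or a dedicated PII-detection service, since regexes alone miss unusual formats and can misfire on non-phone digit strings.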