In a recent update, Google has issued a cautionary statement regarding the potential privacy risks associated with its Gemini chatbot apps. The announcement comes amidst growing concerns over data collection and retention practices employed by AI-powered platforms.
Gemini Chatbot Apps: What Users Need to Know
Google’s Gemini chatbot apps, available on the web as well as on Android and iOS devices, have drawn attention for their data collection practices. Even though conversations are “disconnected” from users’ Google Accounts, human annotators read, label, and process them to improve the service. These conversations, along with related data such as language preferences and device usage, are retained for up to three years.
Control Over Data Retention
Google acknowledges users’ desire for control over their data and offers some measures to manage Gemini-related data:
| Control | Effect |
| --- | --- |
| Switch off Gemini Apps Activity | Prevents future conversations from being saved to your Google Account |
| Delete individual prompts and conversations | Removes specific interactions from the Gemini Apps Activity screen |
Even with Gemini Apps Activity turned off, conversations may still be saved to a Google Account for up to 72 hours for safety and security purposes.
Google’s Cautionary Advice
In its support document, Google advises users to refrain from sharing confidential information or data they wouldn’t want reviewers or Google to access. This serves as a reminder of the delicate balance between privacy and the need for data to improve AI models.
Broader Implications and Industry Response
Google’s policies echo the data collection and retention practices of other AI vendors. Privacy concerns have nonetheless prompted regulatory scrutiny: the FTC has opened an inquiry into OpenAI’s data vetting processes, and Italy’s Data Protection Authority has criticized OpenAI’s data collection practices.
Growing Awareness and Precautions
As awareness of the privacy risks associated with AI-powered tools grows, organizations are limiting how these tools can be used. A recent Cisco survey found that a significant share of companies have restricted, or outright banned, the use of GenAI tools over privacy concerns.
Google’s warning serves as a reminder for users to exercise caution when interacting with AI-powered platforms like Gemini. While these tools offer convenience and functionality, users must remain vigilant about the data they share to mitigate privacy risks in an increasingly digitized world.