Categories: Technology | 2024-02-09 7:50 AM

Google Warns Users About Privacy Risks with Gemini Chatbot Apps

  • Google warns users about privacy risks with Gemini chatbot apps.
  • Human annotators review and process conversations, which are retained for up to three years.
  • Users can control data retention to some extent but should avoid sharing sensitive information.
By Lethabo Ntsoane

In a recent update, Google has issued a cautionary statement regarding the potential privacy risks associated with its Gemini chatbot apps. The announcement comes amidst growing concerns over data collection and retention practices employed by AI-powered platforms.

Gemini Chatbot Apps: What Users Need to Know

Google’s Gemini chatbot apps, available on web browsers as well as Android and iOS devices, have drawn attention for their data collection practices. Even though conversations are “disconnected” from users’ Google Accounts, human annotators read, label, and process them to improve the service. These conversations, along with related data such as language preferences and device usage, are retained for up to three years.

Control Over Data Retention

Google acknowledges users’ desire for control over their data and offers a few ways to manage Gemini-related activity:

Action                                      | Effect
Switch off Gemini Apps Activity             | Prevents future conversations from being saved to your Google Account
Delete individual prompts and conversations | Removes specific interactions from the Gemini Apps Activity screen

Even with Gemini Apps Activity turned off, conversations may still be saved to a Google Account for up to 72 hours for safety and security purposes.

Google’s Cautionary Advice

In its support document, Google advises users to refrain from sharing confidential information or data they wouldn’t want reviewers or Google to access. This serves as a reminder of the delicate balance between privacy and the need for data to improve AI models.

Broader Implications and Industry Response

Google’s policies mirror the data collection and retention practices of other AI vendors. However, privacy concerns have already prompted regulatory scrutiny, including the FTC’s inquiry into OpenAI’s data vetting processes and the Italian Data Protection Authority’s criticism of OpenAI’s data collection practices.

Growing Awareness and Precautions

As awareness of privacy risks associated with AI-powered tools grows, organizations are implementing limitations on data usage. A recent survey by Cisco revealed that a significant portion of companies have established restrictions or outright bans on the use of GenAI tools due to privacy concerns.

Conclusion

Google’s warning serves as a reminder for users to exercise caution when interacting with AI-powered platforms like Gemini. While these tools offer convenience and functionality, users must remain vigilant about the data they share to mitigate privacy risks in an increasingly digitized world.


Lethabo Ntsoane

Lethabo Ntsoane holds a Bachelor’s Degree in Accounting from the University of South Africa. He is a Financial Product commentator at Rateweb and an expert financial product analyst with years of experience reviewing products and offering commentary. He focuses on financial news, reviews, and financial tips. He can be contacted by email at lethabo@rateweb.co.za or on Twitter at @NtsoaneLethabo.