Google Chrome Quietly Installs 4GB AI Model on Users’ Devices
A recent discovery has sparked concern among tech users after it was revealed that Google Chrome has been silently installing a large AI model onto users’ computers. The file, roughly 4 gigabytes in size, appears without any explicit consent from users, raising questions about privacy and transparency. The move has triggered widespread outrage and renewed debate about how tech companies handle user data and device management.
What Was Found and How It Was Discovered
Security researcher Alexander “The Privacy Guy” Hanff uncovered the hidden AI model during a routine check of his browser’s files. He found a file named “weights.bin” stored in a directory called “OptGuideOnDeviceModel.” The file contains the learned parameters (weights) of Google’s Gemini Nano model, which is designed to run directly on users’ devices rather than in the cloud.
Hanff noted that Chrome did not notify users about this installation. If the file is deleted, the browser automatically re-downloads it, making it difficult for users to remove the AI model manually. The discovery has raised concerns about whether this process impacts device performance or consumes significant storage space, but most worry about the lack of transparency and user control.
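Readers who want to verify this on their own machine can search Chrome’s user-data folder themselves. The following is a minimal Python sketch, assuming the weights live in a directory named “OptGuideOnDeviceModel” as described in Hanff’s report; the exact layout and location vary by operating system and Chrome version:

```python
from pathlib import Path

def find_on_device_model(user_data_dir: str) -> list[tuple[str, float]]:
    """Recursively search a Chrome user-data directory for 'weights.bin'
    files sitting under an 'OptGuideOnDeviceModel' folder, and return
    (path, size in GB) pairs. Directory names follow Hanff's report;
    the actual layout on a given system may differ."""
    hits = []
    for path in Path(user_data_dir).rglob("weights.bin"):
        if "OptGuideOnDeviceModel" in path.parts:
            size_gb = path.stat().st_size / 1e9  # decimal gigabytes
            hits.append((str(path), round(size_gb, 2)))
    return hits

# Typical Chrome user-data locations (assumptions; verify on your system):
#   Windows: %LOCALAPPDATA%\Google\Chrome\User Data
#   macOS:   ~/Library/Application Support/Google/Chrome
#   Linux:   ~/.config/google-chrome
```

Note that, per the report, simply deleting the file is of little use: Chrome re-downloads it automatically.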
Public Reaction and Privacy Concerns
Many internet users reacted with shock and frustration, criticizing Google for installing software on their devices without asking permission. Comments on social media and forums reflected distrust, with some users vowing to switch to alternative browsers such as Firefox or Vivaldi. Critics argue that such silent installations could be a way for Google to artificially inflate its AI usage statistics.
Environmental concerns also emerged, as Hanff estimated that deploying the AI across millions of devices could produce thousands of tonnes of CO2 emissions. This adds another layer to the debate, emphasizing the broader impact of hidden software deployments. Overall, the incident highlights ongoing worries about corporate transparency and user rights in the digital age.
Google has yet to publicly address the issue, leaving many questions unanswered. The company’s silence has only fueled suspicion, especially given the scale of Chrome’s user base, which exceeds three billion worldwide. Some experts see this as a potential breach of data privacy laws, particularly in regions with strict regulations like the European Union.
Response from Other Browsers and the Future of AI Features
Following the controversy, other browser makers responded in different ways. Mozilla announced it would include a “kill switch” to disable AI features in Firefox. Vivaldi, by contrast, emphasized user choice and autonomy: the company’s CEO, Jon von Tetzchner, stated that Vivaldi aims to build a browser for curious users who value privacy and control.
While some companies are pushing back against intrusive AI features, the industry as a whole remains divided. The incident raises questions about the future of AI integration into everyday tools. Will companies prioritize transparency and user consent, or will silent background installations become the norm? For now, users are advised to scrutinize browser settings and stay informed about updates that could impact their privacy and device security.
As AI continues to advance, the balance between innovation and privacy remains delicate. This case serves as a reminder that users should remain vigilant and demand clearer communication from tech giants about what is installed on their devices. The industry may need to rethink how it approaches user consent, especially with powerful AI models becoming more prevalent in everyday software.