Is Your Voice Data Being Used Without Permission?
Many people rely on apps like Otter.ai, Read.ai, and even Google’s AI tools to record calls and take notes automatically during meetings. These tools are super convenient and save us time. But a recent legal complaint raises some important questions about privacy and consent.
What’s the Issue with Otter.ai?
The lawsuit, filed in California, claims that Otter records users’ conversations without first obtaining their permission, which California and several other states require for call recording. The plaintiff, Justin Brewer, argues that Otter uses people’s voices to train its AI models without getting proper consent from everyone involved. While Otter users are notified and may agree to be recorded, the complaint points out that call participants who aren’t Otter users are never asked for permission, which could violate their privacy rights.
The complaint also suggests Otter might be breaking federal laws like the Electronic Communications Privacy Act and the Computer Fraud and Abuse Act, along with California’s strict privacy rules. Over 100 other people have joined Brewer’s class action, sharing similar concerns about their privacy being invaded.
How Does Otter.ai Respond?
Otter.ai has a large user base—more than 25 million globally—and recently hit a milestone of over $100 million in yearly revenue. Their service records meetings on platforms like Google Meet, Zoom, and Microsoft Teams, even when the participants aren’t Otter users. Their privacy policy states that they use the voices from these meetings to improve their AI.
When approached about the lawsuit, Otter’s spokesperson emphasized that they are committed to protecting user data and privacy. They said nobody should be recorded without knowing and agreeing to it. Otter claims that the recordings are initiated by users, not automatically started by the app itself. The company also says that users are responsible for following local laws about recording conversations and that their Terms of Service clearly state this.
Otter’s policies say users must get permission from everyone involved before recording, but the lawsuit argues that Otter shifts this responsibility onto its users instead of making sure everyone is aware and consents. The complaint points out that Otter and similar apps are active participants in calls, showing up alongside human participants, and that Otter doesn’t always ask for consent before joining meetings.
What Are the Legal and Ethical Concerns?
The lawsuit says Otter doesn’t get prior permission from everyone on the call or tell them their voices might be used to improve its AI. Instead, Otter asks only the host for permission, often without informing other participants. It can also join meetings on its own unless the user turns off an auto-join setting that is enabled by default. When Otter joins, it doesn’t always share a link to its privacy policy, which could leave participants unaware of how their data is being used.
Otter claims it trains its AI on “de-identified” audio recordings, meaning it removes obvious personal details to protect privacy. But the lawsuit points out that Otter doesn’t clearly explain what “de-identified” means, and it doesn’t guarantee speaker anonymity or that confidential info is removed. Data can be stored indefinitely, raising concerns about how securely and ethically this information is handled.
Some experts say this situation highlights a key difference between traditional call recording, where recordings are usually accessible only to the recorder’s organization, and AI-powered tools that store and analyze recordings across many users. Voice assistants like Siri and Google Assistant have faced similar legal scrutiny when audio was recorded and stored without clear user knowledge.
Why This Matters for Businesses and Users
This case isn’t just about Otter. As more companies adopt AI transcription tools, questions about consent and data privacy grow. If businesses use these apps without ensuring everyone’s permission, they could face legal trouble and damage trust with employees and clients.
Cybersecurity experts warn organizations to pay attention to laws about recording conversations in their jurisdiction. They should also understand what their AI tools are doing with the data—whether it’s stored, shared, or used for training. Companies need to have clear policies and procedures to respect privacy rights and avoid legal pitfalls.
In the end, this lawsuit is a reminder that technology moves fast, but privacy rights remain crucial. Users and companies alike should be aware of how voice data is collected and used, and take steps to protect privacy. Transparency, consent, and responsible data handling aren’t just legal requirements—they’re essential for building trust in today’s digital world.
What do you think?
We’d like to know your opinion. Leave a comment.