
Otter AI accidentally transcribes a company's confidential conversation after Zoom meeting

Imagine finishing a Zoom meeting with a company, feeling good about the conversation, and then later receiving an email containing not just the meeting notes—but also hours of private chatter that happened after you left. That’s exactly what happened to Alex Bilzerian, an engineer and researcher, who recently shared this AI mishap on social media.
Bilzerian was using Otter AI, a popular transcription tool, to record his meeting with a venture capital firm. The meeting went fine, but the surprise came later when Otter sent him an email. Instead of just the transcript of the meeting, it also included “hours” of the investors’ private, confidential conversations about their business.
Naturally, the investors were embarrassed and apologised profusely. But for Bilzerian, the damage was done. He decided to walk away from the deal entirely, fearing the firm’s inability to protect sensitive information.
It’s easy to assume that once someone logs off a Zoom call, the transcription will stop. But apparently, Otter didn’t get the memo. Instead, it kept transcribing everything, like a nosy assistant eavesdropping on private conversations. The venture capitalists had no idea that their private discussions about business strategies were being neatly typed out by AI and sent to someone outside their firm.
The issue here isn’t just an awkward tech glitch, it’s also a wake-up call about how easily AI tools can mishandle sensitive information. Naomi Brockwell, a researcher and privacy advocate, told The Washington Post that people haven’t yet grasped how invasive AI can be. She pointed out that tools like Otter AI can unintentionally leak company secrets or sensitive discussions, raising the risk of lawsuits or major breaches of trust.
Otter AI responded to Bilzerian’s complaint by saying it is committed to privacy and understands the concerns. But the incident highlights a growing problem with AI tech: it’s moving faster than our ability to fully control or understand it. As Hatim Rahman, an associate professor at Northwestern University, pointed out, companies need to remember that not everyone using these tools is tech-savvy. Technology isn’t perfect, and this incident is a prime example of how things can go wrong without enough caution or understanding.

So, the lesson here? If you’re using an AI transcription tool, make sure it knows when to stop listening! Otherwise, you might get more than you bargained for.
