Lots of interesting stuff this week, including OpenAI's agent wars, Amazon's move into always-on transcription, and that wild story about Replit AI nuking production data without permission!
But first, let's give a shout-out to our Patron of the Week: Casey Kamiyama. You are the BEST! Support us and power the show at patreon.com/aiinsideshow.
Soundslice vs. the Hallucinated Feature
Last week on the show, we talked about how OpenAI's ChatGPT made up a feature that didn't actually exist for the music education company Soundslice. This week, we got to chat with Adrian Holovaty, CEO of Soundslice. Adrian first noticed something was up when he spotted a surge of users trying to upload ASCII tablature, a feature Soundslice never actually had. Turns out, ChatGPT was confidently telling people to use it! Rather than posting a "hey, we don't do that" banner or ignoring frustrated users, Adrian just built the feature in about five hours. He called it a "defensive play": implementing a feature he never planned, just to stop the confusion. It's a wild look at how AI can shape product roadmaps, even unintentionally. Adrian also shared fascinating thoughts on AI in music, saying he's all for machine learning when it empowers musicians to learn and discover, but less so when it automates creative expression altogether. I was in heaven during this interview. As a lifelong musician, I savor the chance to combine two of my favorite worlds.
Trump Administration Unveils AI Action Plan
The Trump administration announced a sweeping AI Action Plan to ramp up US leadership in AI. The move rolls back Biden-era restrictions, lowers regulatory barriers, and accelerates permits for AI infrastructure. There are boosts to energy capacity, attempts to withhold funding from states with "restrictive" AI rules, and directives for government AI to stay neutral and unbiased (aka anti-"woke"). White House AI leader David Sacks says the plan is all about centering American workers and avoiding "Orwellian" uses of AI. Several executive orders are expected to follow. The current administration defining ANYTHING as Orwellian is pretty rich, to put a fine point on it.
OpenAI's Agent, Perplexity's Comet, and the Agentic Browser Wars
OpenAI debuted an agent that’ll make restaurant reservations, handle your files, shop for you, and more. The rollout is global—except for the EU. This agent can control your browser, desktop files, and software. There's even a demo where it checks your calendar, finds a free spot, and presents top restaurant options. Completion speed isn't perfect, but maybe you don't need it to be fast—you just want it done.
Perplexity is also getting in on the action, eyeing their Comet browser as a default on smartphones.
But none of that agentic action means anything if people don’t trust them enough to actually use them. We are officially in the age of agentic browser wars.
Replit AI Deletes Production Data—Without Permission
Jason Lemkin, founder of SaaStr, shared a wild tale from a 12-day "vibe-coding" sprint with Replit AI. The AI assistant wiped the company's production database, despite clear instructions not to touch anything. Worse, it then covered its tracks by creating fake data and reports. When asked, the AI admitted: "this was a catastrophic failure on my part." (ya think?) Fortunately, Replit's restore-from-backup feature saved the day. On one hand: AI agents can go rogue, and this is a concrete example of that. On the other: good backup protocols saved the day… this time.
Amazon Acquires Bee: Wearable Audio, User Concerns
Amazon bought Bee, creators of a $49.99 always-on wearable that records and transcribes your life. Amazon promises not to store your audio, but Bee users worry their privacy and control are now out the window. Some say their devices are headed to the trash.
On a personal note, I recently canceled my Limitless AI pendant order after a frustrating, year-long wait—turns out I was delayed just for being an Android user. These devices aren't cheap to operate, either: Bee charges $20 a month for transcription, while Limitless's free plan only covers 20 hours before it gets pricey. Easy to burn through, hard to justify.
OpenAI's Altman Warns Banks: Voice Cloning Fraud Looms
Sam Altman, OpenAI's CEO, just issued a warning to banks at a Federal Reserve conference. Some banks still rely on voice recognition to verify customers, even as new AI tools can perfectly imitate real voices. Altman says the method is outdated and urges banks to step up their authentication game. Maybe it’s time for... THE ORB?
Authors vs. Anthropic: Class Action Moves Forward
A US federal judge ruled that three authors can lead a nationwide lawsuit against Anthropic, accused of using pirated books to train AI models. The lawsuit could cover all writers whose work ended up on sites like LibGen and PiLiMi. Anthropic is considering a challenge, arguing it’s nearly impossible to sort out copyright for millions of works.
Netflix Uses AI to Boost Epic Finale
Netflix just used generative AI for the first time in a big way. The new Argentinian sci-fi show "El Eternauta" leans heavily on AI for its big finale. CEO Ted Sarandos says AI tools made the show more efficient and enabled blockbuster-level effects at a fraction of the cost. He's clear: it was about augmenting, not replacing, human creativity. Reviews are great. I haven't seen it, but I'm dying to know if the AI stuff is obvious.
Google’s Big Sleep AI Stops a Cyberattack
Google’s “Big Sleep” AI agent quietly made cybersecurity history. It caught a critical, unknown SQLite vulnerability that attackers were about to exploit—and shut it down before any damage. Google is tight-lipped on details, but Big Sleep is only months old and already finding real-world threats. This is where AI can really shine by spotting problems deep inside the data that humans can easily miss.
AI Labs Sound Alarm: What’s in the Black Box?
A group of 40 researchers from top labs like OpenAI, Google DeepMind, Anthropic, and Meta say AI is advancing so fast that we're losing our grip on how it works. They recommend the community focus now on tracking AI's "chain-of-thought" reasoning while we still can. If we don't invest in transparency, we might not understand the next generation of models at all. I bet THEY have the solution, now don't they?
Thank you for watching and reading!
HUGE thank you to the Executive Producers making this show possible:
DrDew, Jeffrey Marraccini, Radio Asheville 103.7, Dante St James, Bono De Rick, Jason Neiffer, Jason Brady, Anthony Downs, and our newest executive producer Mark Starcher!!
See you next Wednesday on another episode of AI Inside.