2 Comments
Howard Salmon

This episode hits the real issue with agentic AI: once a system negotiates and buys on your behalf, “helpful” isn’t enough—it needs duty-of-care behavior. The first-offer bias and susceptibility to marketing aren’t edge cases; they’re exactly how the consumer internet is engineered to win. I like the simulation-first approach here—it’s the right way to surface failure modes before real money and real harm are on the line. The open question now is standards: logging, provenance, conflict checks, and skepticism by default—otherwise we’re just automating bad deals at scale.

Jason Howell

Spot on. Moving from merely helpful (read: sycophantic) to a genuinely fiduciary position is the biggest hurdle for agentic AI. We definitely need those standards in place before we end up automating bad deals at scale.