Discussion about this post

Howard Salmon:

This episode hits the real issue with agentic AI: once a system negotiates and buys on your behalf, “helpful” isn’t enough—it needs duty-of-care behavior. The first-offer bias and susceptibility to marketing aren’t edge cases; they’re exactly how the consumer internet is engineered to win. I like the simulation-first approach here—it’s the right way to surface failure modes before real money and real harm are on the line. The open question now is standards: logging, provenance, conflict checks, and skepticism by default—otherwise we’re just automating bad deals at scale.

