
Overheard : Securing AI Agents

A good framework for thinking about security when deploying AI agents.

Treat AI agents as insider threats

David Cox mentioned this during a recent conversation with Grant Harvey and Corey Noles on the Neuron podcast. Very simple, but very elegant. Once you frame agents this way, familiar tools – least privilege, role-based access, audit logs – suddenly apply cleanly. The attack surface shrinks not because agents are safer, but because their blast radius is smaller.
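A minimal sketch of what that framing looks like in practice: every tool call from an agent passes through a least-privilege gate and leaves an audit trail. The names here (`ROLE_PERMISSIONS`, `ToolGate`) are illustrative assumptions, not any particular framework's API.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

# Role-based access: each agent role gets only the tools it needs.
ROLE_PERMISSIONS = {
    "support-agent": {"read_ticket", "draft_reply"},
    "billing-agent": {"read_invoice"},
}

class ToolGate:
    """Wraps an agent's tool calls with least privilege plus audit logging."""

    def __init__(self, role: str):
        self.role = role
        self.allowed = ROLE_PERMISSIONS.get(role, set())

    def call(self, tool: str, fn, *args, **kwargs):
        if tool not in self.allowed:
            # Denials shrink the blast radius and are logged for review.
            audit_log.warning("DENY %s -> %s", self.role, tool)
            raise PermissionError(f"{self.role} may not use {tool}")
        audit_log.info("ALLOW %s -> %s at %s", self.role, tool,
                       datetime.now(timezone.utc).isoformat())
        return fn(*args, **kwargs)

gate = ToolGate("support-agent")
print(gate.call("read_ticket", lambda: "ticket #123 contents"))
```

The point isn't the code itself but the posture: the agent can still misbehave, yet the worst it can do is bounded by the allowlist, and everything it tried is on the record.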

Overheard : Prosperity & Open Source

Loved this quote by Matt Ridley on how open sourcing and sharing ideas leads to improved prosperity. Positive-sum instead of zero-sum games. The free exchange, combination, and mating of different ideas (like trade and specialization) drive human progress and wealth far more effectively than when ideas are guarded and not shared.

Prosperity happens when ideas have sex

Overheard : Kindness

Warren Buffett, in his farewell letter, on kindness:

Greatness does not come about through accumulating great amounts of money, great amounts
of publicity or great power in government. When you help someone in any of thousands of ways, you
help the world. Kindness is costless but also priceless. Whether you are religious or not, it’s hard to
beat The Golden Rule as a guide to behavior.

I hope he sticks around a long time and continues to share his wisdom, even though he is giving up his oversight role at Berkshire.

His letter is also a masterclass in great writing. Each paragraph is fewer than five sentences, and each page has fewer than 10 paragraphs. All written in simple, easy-to-understand language.

Overheard : Attention span

Some interesting observations on how little time consumers give a piece of content before deciding whether to spend more time on it.

Shared by Lulu Cheng in a conversation with Shane Parrish:

And then in terms of text, because one of the ways that people get to know you is through your writing, I don’t know about seconds, but it’s like the first paragraph. For an email, it’s the subject line. For a tweet, it’s the first line, first sentence, the hook. So the opportunity, the surface area of the opportunity we have to latch on, is getting more and more fine, which means that the hook that we need to use has to get more and more sharp.

  • Writing : First Paragraph
  • Email : Subject Line
  • Tweet : First Line
  • Video : First 30 Seconds

Overheard : Mental models and curiosity

Mark Bertolini, in a chat with Patrick O’Shaughnessy on the “Invest Like the Best” podcast, speaking about the need to constantly update your mental model of the world, and about the two things that differentiate good leaders from the rest.

I always say to people, the mental model that exists inside your head about how the world works is the most critical tool you have. And if you don’t constantly add new information to it and are not a continual learner, you can’t possibly know what you need to know to make good decisions.

And so I look for two things in executives: curiosity and courage. The curiosity to continue to ask questions and learn more every day, and the courage to act on it when you have a hypothesis that might be powerful.

Overheard : On constant increase in expectations

Sam Altman’s June 10, 2025 post on achieving singularity captured something I’ve been thinking about lately. There’s a particular passage that perfectly describes how we’re constantly ratcheting up our expectations:

Already we live with incredible digital intelligence, and after some initial shock, most of us are pretty used to it. Very quickly we go from being amazed that AI can generate a beautifully-written paragraph to wondering when it can generate a beautifully-written novel; or from being amazed that it can make life-saving medical diagnoses to wondering when it can develop the cures; or from being amazed it can create a small computer program to wondering when it can create an entire new company. This is how the singularity goes: wonders become routine, and then table stakes.

This hits at something fundamental about human psychology. We have this remarkable ability to normalize the extraordinary, almost immediately.

I see this everywhere now. My kids casually ask AI to help with homework in ways that would have seemed like science fiction just three years ago. We’ve gone from “can AI write coherent sentences?” to “why can’t it write a perfect screenplay?” in what feels like months.

The progression Altman describes—paragraph to novel, diagnosis to cure, program to company—isn’t just about AI capabilities scaling up. It’s about how our mental models adjust. Each breakthrough becomes the new baseline, not the ceiling.

What struck me most is his phrase: “wonders become routine, and then table stakes.” That’s exactly it. The wonder doesn’t disappear because the technology got worse—it disappears because we got used to it. And then we need something even more impressive to feel that same sense of possibility.

Overheard : AI needs cloud

On The Verge’s Decoder podcast, Matt Garman, CEO of AWS, explained why AI’s potential is intrinsically tied to the cloud: the scale and complexity of modern AI models demand infrastructure that only major cloud providers can deliver.

You’re not going to be able to get a lot of the value that’s promised from AI from a server running in your basement, it’s just not possible. The technology won’t be there, the hardware won’t be there, the models won’t live there, et cetera. And so, in many ways, I think it’s a tailwind to that cloud migration because we see with customers, forget proof of concepts … You can run a proof of concept anywhere. I think the world has proven over the last couple of years you can run lots and lots and lots of proof of concepts, but as soon as you start to think about production, and integrating into your production data, you need that data in the cloud so the models can interact with it and you can have it as part of your system.