
Overheard : Defining a Culture

Ben Horowitz shared his ideas on defining a culture in a conversation with Patrick O’Shaughnessy.

Let me give you a small but probably the most important insight, which is from Bushido, the Way of the Warrior from the Samurai. A culture is not a set of ideas. It’s a set of actions. If you define your culture as a kind of set of ideas – integrity, do the right thing, we have each other’s backs, or any corporate values – it’s actually just a bunch of fucking platitudes. It doesn’t mean anything. The culture has to be defined in terms of the exact behavior that you want that support that idea. What do you have to do to actually be that thing that you want it to be? It’s the little things. How responsive are you to your colleagues? What’s the SLA on returning a Slack message or an email? Do you show up to meetings on time? Not everybody has those ideas, but if you want that idea, you’ve got to manifest it through something else.

Overheard : Embedded Intelligence

The hype may be about the frontier models. The disruption really is in the workflow.

Read this in Om Malik’s post (https://om.co/2026/02/06/how-ai-goes-to-work/), and it’s one of the most grounded takes on AI I’ve seen lately. We spend so much energy debating which LLM is ‘smarter’ by a fraction of a percent, but that’s just benchmarking engines. The real shift happens when the engine is already under the hood of the car you’re driving.

Om calls it “embedded intelligence.” It’s when you’re in Excel or Photoshop and the AI isn’t a destination you visit (the prompt box), but a hover-state that helps you work.

The goal isn’t to ‘use AI.’ The goal is to do what you used to do, but better, faster, and with more room for the creative decisions that actually matter.

Overheard : Agentic Engineering

Andrej Karpathy’s nomenclature for the state of tech in AI has become the unofficial industry clock. Watching the terminology evolve over the last few years feels like watching innovation move from a crawl to a sprint:

Jan 2023: “English is the hottest new programming language.” The shift from syntax to semantics. We realized that the bottleneck wasn’t knowing where the semicolon goes, but being able to describe the logic clearly. Coding became a translation layer.

Feb 2025: “Vibe Coding.” The abstraction deepened. We stopped looking at the code entirely and started managing the ‘vibe’ of the output. It was the era of radical abstraction—prompting, iterating, and giving in to the exponential speed of LLMs.

Feb 2026: “Agentic Engineering.” The current frontier. We’ve moved from writing prompts to managing workers (agents). It’s no longer about a single interaction; it’s about architecting systems of agents that can self-correct, plan, and execute.

The timeline is compressing. AI isn’t just a pastime anymore; it’s the factory floor. We’ve gone from being writers to editors to architects in less than a thousand days!

We live in amazing times :-).

Google Gemini’s interpretation of the blog post in an infographic.

Overheard : Securing AI Agents

A good framework on how to think about security when deploying AI agents.

Treat AI agents as insider threats

David Cox mentioned this during a recent conversation with Grant Harvey and Corey Noles on the Neuron podcast. Very simple, but very elegant. Once you frame agents this way, familiar tools – least privilege, role-based access, audit logs – suddenly apply cleanly. The attack surface shrinks not because agents are safer, but because their blast radius is smaller.
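The insider-threat framing maps directly to code. Here is a minimal sketch, with all role and tool names hypothetical, of a gateway that applies those familiar tools to an agent: a role-based allowlist for least privilege, deny-by-default enforcement, and an append-only audit log of every attempted tool call.

```python
import datetime

# Hypothetical role-based allowlist: each agent role may call only the
# tools it strictly needs (least privilege).
ROLE_PERMISSIONS = {
    "support-agent": {"search_docs", "read_ticket"},
    "billing-agent": {"read_invoice", "issue_refund"},
}

AUDIT_LOG = []  # append-only record of every attempted tool call


def call_tool(role: str, tool: str, payload: dict) -> str:
    """Gate every agent tool call through the allowlist and log the attempt."""
    allowed = tool in ROLE_PERMISSIONS.get(role, set())
    AUDIT_LOG.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "role": role,
        "tool": tool,
        "allowed": allowed,
    })
    if not allowed:
        # Deny by default: a compromised agent's blast radius is limited
        # to the tools its role explicitly grants.
        raise PermissionError(f"{role} may not call {tool}")
    return f"executed {tool}"  # stand-in for real tool dispatch
```

The audit log records denials as well as successes, which is exactly what you would want when reviewing a suspected insider: not just what an agent did, but what it tried to do.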

Overheard : Prosperity & Open Source

Loved this quote by Matt Ridley on how open sourcing and sharing ideas leads to improved prosperity. Positive-sum instead of zero-sum games. The free exchange, combination, and mating of different ideas (like trade and specialization) drive human progress and wealth far more effectively than when ideas are guarded and not shared.

Prosperity happens when ideas have sex

Overheard : Kindness

Warren Buffett in his farewell letter on kindness

Greatness does not come about through accumulating great amounts of money, great amounts of publicity or great power in government. When you help someone in any of thousands of ways, you help the world. Kindness is costless but also priceless. Whether you are religious or not, it’s hard to beat The Golden Rule as a guide to behavior.

Even though he is giving up his oversight role at Berkshire, I hope he sticks around for a long time and continues to share his wisdom.

His letter is also a masterclass in great writing. Each paragraph is fewer than five sentences, and each page has fewer than ten paragraphs. All written in simple, easy-to-understand language.

Overheard : Attention Span

Some interesting metrics on how long consumers give a piece of content before deciding whether or not to spend more time on it.

Shared by Lulu Cheng in a conversation with Shane Parrish.

And then in terms of text, because one of the ways that people get to know you is through your writing, I don’t know about seconds, but it’s like the first paragraph. For an email, it’s the subject line. For a tweet, it’s the first line, first sentence, the hook. So the opportunity, the surface area of the opportunity we have to latch on, is getting more and more fine, which means that the hook that we need to use has to get more and more sharp.

  • Writing: First Paragraph
  • Email: Subject Line
  • Tweet: First Line
  • Video: First 30 Seconds

Overheard : Mental Model and Curiosity

Mark Bertolini, in a chat with Patrick O’Shaughnessy on the “Invest Like the Best” podcast, speaking about the need to constantly update your mental model of the world, and about the two things that differentiate good leaders from the rest.

I always say to people, the mental model that exists inside your head about how the world works is the most critical tool you have. And if you don’t constantly add new information to it and are not a continual learner, you can’t possibly know what you need to know to make good decisions.

And so I look for two things in executives, curiosity and courage. The curiosity to continue to ask questions and learn more every day, and the courage to act on it when you have a hypothesis that might be powerful.

Overheard : On constant increase in expectations

Sam Altman’s June 10, 2025 post on achieving singularity captured something I’ve been thinking about lately. There’s a particular passage that perfectly describes how we’re constantly ratcheting up our expectations:

Already we live with incredible digital intelligence, and after some initial shock, most of us are pretty used to it. Very quickly we go from being amazed that AI can generate a beautifully-written paragraph to wondering when it can generate a beautifully-written novel; or from being amazed that it can make life-saving medical diagnoses to wondering when it can develop the cures; or from being amazed it can create a small computer program to wondering when it can create an entire new company. This is how the singularity goes: wonders become routine, and then table stakes.

This hits at something fundamental about human psychology. We have this remarkable ability to normalize the extraordinary, almost immediately.

I see this everywhere now. My kids casually ask AI to help with homework in ways that would have seemed like science fiction just three years ago. We’ve gone from “can AI write coherent sentences?” to “why can’t it write a perfect screenplay?” in what feels like months.

The progression Altman describes—paragraph to novel, diagnosis to cure, program to company—isn’t just about AI capabilities scaling up. It’s about how our mental models adjust. Each breakthrough becomes the new baseline, not the ceiling.

What struck me most is his phrase: “wonders become routine, and then table stakes.” That’s exactly it. The wonder doesn’t disappear because the technology got worse—it disappears because we got used to it. And then we need something even more impressive to feel that same sense of possibility.