A great graph on all the times SEO was pronounced dead, yet the market value of the SEO industry keeps growing. Graph by Chris Lourenco on LinkedIn.

Some interesting metrics on consumers' attention span before they decide whether to spend more time on a piece of content. Shared by Lulu Cheng in a conversation with Shane Parrish:
And then in terms of text, because one of the ways that people get to know you is through your writing, I don’t know about seconds, but it’s like the first paragraph. For an email, it’s the subject line. For a tweet, it’s the first line, first sentence, the hook. So the opportunity, the surface area of the opportunity we have to latch on, is getting more and more fine, which means that the hook that we need to use has to get more and more sharp.
Mark Bertolini in a chat with Patrick O’Shaughnessy on “Invest Like the Best” podcast, speaking about the need to update your mental model about the world constantly. And the two things that differentiate good leaders from the rest.
I always say to people, the mental model that exists inside your head about how the world works is the most critical tool you have. And if you don’t constantly add new information to it and are not a continual learner, you can’t possibly know what you need to know to make good decisions.
And so I look for two things in executives, curiosity and courage. The curiosity to continue to ask questions and learn more every day, and the courage to act on it when you have a hypothesis that might be powerful.
Sam Altman’s June 10, 2025 post on achieving singularity captured something I’ve been thinking about lately. There’s a particular passage that perfectly describes how we’re constantly ratcheting up our expectations:
Already we live with incredible digital intelligence, and after some initial shock, most of us are pretty used to it. Very quickly we go from being amazed that AI can generate a beautifully-written paragraph to wondering when it can generate a beautifully-written novel; or from being amazed that it can make life-saving medical diagnoses to wondering when it can develop the cures; or from being amazed it can create a small computer program to wondering when it can create an entire new company. This is how the singularity goes: wonders become routine, and then table stakes.
This hits at something fundamental about human psychology. We have this remarkable ability to normalize the extraordinary, almost immediately.
I see this everywhere now. My kids casually ask AI to help with homework in ways that would have seemed like science fiction just three years ago. We’ve gone from “can AI write coherent sentences?” to “why can’t it write a perfect screenplay?” in what feels like months.
The progression Altman describes—paragraph to novel, diagnosis to cure, program to company—isn’t just about AI capabilities scaling up. It’s about how our mental models adjust. Each breakthrough becomes the new baseline, not the ceiling.
What struck me most is his phrase: “wonders become routine, and then table stakes.” That’s exactly it. The wonder doesn’t disappear because the technology got worse—it disappears because we got used to it. And then we need something even more impressive to feel that same sense of possibility.
Doug Leone, responding to Patrick O’Shaughnessy in a podcast conversation, on how to think about and position a product:
Simplicity, crystal clearness, something a mere mortal can understand. If you can’t describe it and you can’t understand it, you’re out to lunch. Singularity of purpose. When I go to the store, I buy a pencil because I want to write.
The dumber the disagreements, the better the world actually is.
Another great post by Morgan Housel on the dichotomy of life. We will always be stressed, but what we stress about will differ based on our quality of life. It was, it is, and it will be 😊.
On The Verge‘s Decoder podcast, Matt Garman, CEO of AWS, explained why AI’s potential is intrinsically tied to the cloud. The scale and complexity of modern AI models demand infrastructure that only major cloud providers can deliver:
You’re not going to be able to get a lot of the value that’s promised from AI from a server running in your basement, it’s just not possible. The technology won’t be there, the hardware won’t be there, the models won’t live there, et cetera. And so, in many ways, I think it’s a tailwind to that cloud migration because we see with customers, forget proof of concepts … You can run a proof of concept anywhere. I think the world has proven over the last couple of years you can run lots and lots and lots of proof of concepts, but as soon as you start to think about production, and integrating into your production data, you need that data in the cloud so the models can interact with it and you can have it as part of your system.
A simple visual of the attributes of a good business vs. a bad business, based on a snippet Codie Sanchez shared in a podcast with Shane Parrish.
Codie said:
In my definition, good business equals profitable, cash flowing, what I call a cash-flow versus cash-suck business (so you get paid upfront for a service, not after you provide a service), sustainable (it can exist for a long time), historical (it has existed for a long time), understandable (you can explain it to grandma really easily), and you have what’s called the Lindy effect, the likelihood of the future continuing to cash-flow just as it did in the past. Those are my parameters for a good business. A bad business would be a business that is unprofitable, hard to understand, hasn’t been around for very long, and you have to provide the service before you get paid for the service. That is a business that is just much harder. That’s a harder game to win.
Codie Sanchez quoting Prof. Arthur Brooks on different types of friendship, in a conversation with Shane Parrish:
Worthless friends are the friends that have no transactional value. You don’t want anything from them. They don’t want anything from you. They want to hang out with you. They want to go on a walk with you. They don’t want your email list. They don’t want access to your money. They just want to have a beer on a Friday night. And these friendships, these worthless friends, end up materially increasing our happiness, whereas these transactional friendships actually end up, in many ways, decreasing our happiness.
Hugging Face just released its agentic library for interacting with LLMs. I liked the way they define agents:
AI Agents are programs where LLM outputs control the workflow.
And here is how they define the spectrum of agency for agents:
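That one-line definition, "programs where LLM outputs control the workflow," can be made concrete with a tiny sketch. This is not the Hugging Face library's API; `fake_llm`, `calculator`, and `run_agent` are hypothetical names, and the model call is stubbed out so the control flow is visible. The point is that the *LLM's output*, not hardcoded program logic, decides whether to call a tool or stop:

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call (an assumption for illustration).
    # It returns either a tool invocation or a final answer.
    if "Observation" not in prompt:
        return "CALL calculator: 16 * 4"
    return "FINAL 64"

def calculator(expression: str) -> str:
    # Toy tool: evaluate a simple arithmetic expression.
    return str(eval(expression, {"__builtins__": {}}))

def run_agent(task: str, max_steps: int = 5) -> str:
    prompt = f"Task: {task}"
    for _ in range(max_steps):
        output = fake_llm(prompt)
        if output.startswith("FINAL"):
            # The LLM's output decided the workflow is done.
            return output.removeprefix("FINAL").strip()
        if output.startswith("CALL calculator:"):
            # The LLM's output decided which tool runs next.
            result = calculator(output.split(":", 1)[1].strip())
            prompt += f"\nObservation: {result}"
    return "gave up"

print(run_agent("What is 16 * 4?"))  # prints 64
```

The "spectrum of agency" is then just how much of the workflow you hand over: from the LLM choosing one branch, to choosing tools and arguments, to driving a multi-step loop like the one above.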
