
Overheard : Embedded Intelligence

The hype may be about the frontier models, but the real disruption is in the workflow.

I read this in Om Malik’s post (https://om.co/2026/02/06/how-ai-goes-to-work/), and it’s one of the most grounded takes on AI I’ve seen lately. We spend so much energy debating which LLM is ‘smarter’ by a fraction of a percentage point, but that’s just benchmarking engines. The real shift happens when the engine is already under the hood of the car you’re driving.

Om calls it “embedded intelligence.” It’s when you’re in Excel or Photoshop and the AI isn’t a destination you visit (the prompt box), but a hover-state that helps you work.

The goal isn’t to ‘use AI.’ The goal is to do what you used to do, but better, faster, and with more room for the creative decisions that actually matter.

Overheard : Agentic Engineering

Andrej Karpathy’s nomenclature for the state of AI has become the unofficial industry clock. Watching the terminology evolve over the last few years feels like watching innovation move from a crawl to a sprint:

Jan 2023: “English is the hottest new programming language.” The shift from syntax to semantics. We realized that the bottleneck wasn’t knowing where the semicolon goes, but being able to describe the logic clearly. Coding became a translation layer.

Feb 2025: “Vibe Coding.” The abstraction deepened. We stopped looking at the code entirely and started managing the ‘vibe’ of the output. It was the era of radical abstraction—prompting, iterating, and giving in to the exponential speed of LLMs.

Feb 2026: “Agentic Engineering.” The current frontier. We’ve moved from writing prompts to managing workers (agents). It’s no longer about a single interaction; it’s about architecting systems of agents that can self-correct, plan, and execute.

The timeline is compressing. AI isn’t just a pastime anymore; it’s the factory floor. We’ve gone from being writers to editors to architects in a little over a thousand days!

We live in amazing times :-).

Google Gemini’s interpretation of the blog post in an infographic.

Overheard : Securing AI Agents

A good framework for thinking about security when deploying AI agents:

Treat AI agents as insider threats

David Cox mentioned this during a recent conversation with Grant Harvey and Corey Noles on the Neuron podcast. Very simple, but very elegant. Once you frame agents this way, familiar tools – least privilege, role-based access, audit logs – suddenly apply cleanly. The attack surface shrinks not because agents are safer, but because their blast radius is smaller.
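To make the framing concrete, here is a minimal sketch of what it looks like in code. Everything in it is illustrative (the GatedAgent class, the roles, and the tool names are my assumptions, not from any particular library): every tool call passes through a role check and lands in an audit log, exactly as it would for a human employee.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit_log = logging.getLogger("agent_audit")

# Illustrative role -> permission map: each agent gets only the tools its job needs.
ROLE_PERMISSIONS = {
    "support_agent": {"read_tickets", "draft_reply"},
    "billing_agent": {"read_invoices", "issue_refund"},
}

class GatedAgent:
    """Wraps an agent's tool calls with least privilege, role checks, and an audit trail."""

    def __init__(self, name: str, role: str):
        self.name = name
        self.role = role

    def call_tool(self, tool: str, **kwargs):
        allowed = ROLE_PERMISSIONS.get(self.role, set())
        if tool not in allowed:
            # Deny by default, just as you would for a human insider.
            audit_log.warning("DENIED %s (%s) -> %s %s", self.name, self.role, tool, kwargs)
            raise PermissionError(f"{self.role} may not call {tool}")
        audit_log.info("ALLOWED %s (%s) -> %s %s", self.name, self.role, tool, kwargs)
        return f"executed {tool}"  # the real tool would run here

agent = GatedAgent("helper-1", "support_agent")
agent.call_tool("draft_reply", ticket_id=42)     # within its role: allowed and logged
try:
    agent.call_tool("issue_refund", amount=100)  # outside its role: denied and logged
except PermissionError as exc:
    print(exc)
```

Nothing here is new machinery; that is the point. The insider-threat framing lets decades-old access-control patterns do the work.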

AI and the value of taste

Anyone can now generate content (text, audio, images, video) with a single prompt. The cost of creation is collapsing to near zero. We live in amazing times.

This abundance also produces what people have started calling slop: an overwhelming volume of content, much of it interchangeable. When supply becomes infinite, attention becomes scarce. Two thoughts follow from this.

First, the era of personalized content is finally here.
When generation is cheap, we don’t just filter existing content, we generate it. Instead of an algorithm deciding what you might like from a global pool, your feed can be created specifically for you, reflecting your interests, context, and intent. This is a meaningful shift: from recommendation to creation.
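A toy sketch of that shift, with a hypothetical generate stub standing in for a real model call (the profile fields and prompt are illustrative assumptions): a recommender ranks a fixed global pool, while a generator produces an item directly from the user’s interests and context.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    interests: list[str]
    context: str

# The global pool a classic recommender chooses from.
POOL = [
    "Intro to rock climbing",
    "Sourdough starters explained",
    "A history of jazz piano",
]

def recommend(profile: UserProfile) -> str:
    """Recommendation: rank existing items by overlap with the user's interests."""
    return max(POOL, key=lambda item: sum(w in item.lower() for w in profile.interests))

def generate(profile: UserProfile) -> str:
    """Creation: build a prompt from the profile and (stub) generate a new item."""
    prompt = (f"Write a short piece for someone into {', '.join(profile.interests)}, "
              f"currently {profile.context}.")
    return f"[model output for: {prompt}]"  # stand-in for a real LLM call

user = UserProfile(interests=["jazz", "piano"], context="commuting")
print(recommend(user))  # picked from the pool
print(generate(user))   # made specifically for this user
```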

Second, as the cost of generation goes to zero, the value of taste goes to infinity.
When anyone can make something, what matters is knowing what should be made. Taste becomes the constraint. Just as there is one Picasso among thousands of painters, there will be people who can consistently direct AI toward work that resonates. They may not produce the content themselves, but they shape it—through judgment, curation, and intent.

In a world flooded with output, taste is the differentiator.

Overheard : On the constant increase in expectations

Sam Altman’s June 10, 2025 post on the singularity captured something I’ve been thinking about lately. There’s a particular passage that perfectly describes how we’re constantly ratcheting up our expectations:

Already we live with incredible digital intelligence, and after some initial shock, most of us are pretty used to it. Very quickly we go from being amazed that AI can generate a beautifully-written paragraph to wondering when it can generate a beautifully-written novel; or from being amazed that it can make life-saving medical diagnoses to wondering when it can develop the cures; or from being amazed that it can create a small computer program to wondering when it can create an entire new company. This is how the singularity goes: wonders become routine, and then table stakes.

This hits at something fundamental about human psychology. We have this remarkable ability to normalize the extraordinary, almost immediately.

I see this everywhere now. My kids casually ask AI to help with homework in ways that would have seemed like science fiction just three years ago. We’ve gone from “can AI write coherent sentences?” to “why can’t it write a perfect screenplay?” in what feels like months.

The progression Altman describes—paragraph to novel, diagnosis to cure, program to company—isn’t just about AI capabilities scaling up. It’s about how our mental models adjust. Each breakthrough becomes the new baseline, not the ceiling.

What struck me most is his phrase: “wonders become routine, and then table stakes.” That’s exactly it. The wonder doesn’t disappear because the technology got worse—it disappears because we got used to it. And then we need something even more impressive to feel that same sense of possibility.

Overheard : AI needs cloud

On The Verge’s Decoder podcast, Matt Garman, CEO of AWS, explained why AI’s potential is intrinsically tied to the cloud. The scale and complexity of modern AI models demand infrastructure that only major cloud providers can deliver:

You’re not going to be able to get a lot of the value that’s promised from AI from a server running in your basement, it’s just not possible. The technology won’t be there, the hardware won’t be there, the models won’t live there, et cetera. And so, in many ways, I think it’s a tailwind to that cloud migration because we see with customers, forget proof of concepts … You can run a proof of concept anywhere. I think the world has proven over the last couple of years you can run lots and lots and lots of proof of concepts, but as soon as you start to think about production, and integrating into your production data, you need that data in the cloud so the models can interact with it and you can have it as part of your system.

Agency for AI Agents

Hugging Face just released its agentic library for building LLM-powered applications. I like the way they define agents.

AI Agents are programs where LLM outputs control the workflow.

And I like the way they define the spectrum of agency for agents: a system has more agency the more its LLM’s output controls the program’s flow, from a simple processor, to a router, to tool calls, to a full multi-step agent (sketched below).
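Here is a rough sketch of that spectrum in plain Python, with a stubbed llm function standing in for a real model call (all names are illustrative, not the library’s actual API):

```python
from collections import deque

# Stub for a real model call; pops canned replies so the demo runs end to end.
_replies = deque(["no", "search", "search", "done"])
def llm(prompt: str) -> str:
    return _replies.popleft() if _replies else "done"

TOOLS = {
    "search": lambda q: f"searched for: {q}",
    "summarize": lambda q: f"summary of: {q}",
}

# No agency: LLM output is consumed, but never steers the program.
def simple_processor(text: str) -> str:
    return f"processed: {text}"

# Low agency - router: LLM output picks a branch of fixed control flow.
def router(query: str) -> str:
    if llm(f"Is this about billing? {query}") == "yes":
        return "billing team"
    return "support team"

# More agency - tool call: LLM output chooses which function runs.
def tool_call(query: str) -> str:
    choice = llm(f"Pick a tool for: {query}")
    return TOOLS[choice](query)

# Full agency - multi-step agent: LLM output controls iteration and when to stop.
def agent_loop(task: str, max_steps: int = 5) -> None:
    for step in range(max_steps):
        action = llm(f"Step {step} of '{task}': next action, or 'done'?")
        if action == "done":
            break
        print(TOOLS[action](task))

print(router("refund question"))        # pops "no"  -> routed to support team
print(tool_call("python dataclasses"))  # pops "search" -> runs the search tool
agent_loop("research topic")            # pops "search", then "done"
```

The further down the list you go, the more the LLM’s output is the control flow, which is their definition of agency in a nutshell.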

30 day challenge : create software with AI

I like to do 30 day challenges to explore new areas or to form habits. Some of my previous ones were:

I am starting a new challenge today: creating software by leveraging AI. The recent boom in AI, and GenAI specifically, has made it easy and quick to bring ideas to fruition. It is time to start coding and developing software for the ideas that have been swirling in my head for some time.

I will be publishing them at https://kudithipudi.org/lab. I will also write up some of the ideas and the experience of bringing them to life.

Inspired by https://tools.simonwillison.net/.