Technology

Overheard : Embedded Intelligence

The hype may be about the frontier models. The disruption really is in the workflow.

I read this in Om Malik’s post (https://om.co/2026/02/06/how-ai-goes-to-work/), and it’s one of the most grounded takes on AI I’ve seen lately. We spend so much energy debating which LLM is ‘smarter’ by a fraction of a percent, but that’s just benchmarking engines. The real shift happens when the engine is already under the hood of the car you’re driving.

Om calls it “embedded intelligence.” It’s when you’re in Excel or Photoshop and the AI isn’t a destination you visit (the prompt box), but a hover-state that helps you work.

The goal isn’t to ‘use AI.’ The goal is to do what you used to do, but better, faster, and with more room for the creative decisions that actually matter.

Overheard : Agentic Engineering

Andrej Karpathy’s nomenclature for the state of AI has become the unofficial industry clock. Watching his terminology evolve over the last few years feels like watching innovation move from a crawl to a sprint:

Jan 2023: “English is the hottest new programming language.” The shift from syntax to semantics. We realized that the bottleneck wasn’t knowing where the semicolon goes, but being able to describe the logic clearly. Coding became a translation layer.

Feb 2025: “Vibe Coding.” The abstraction deepened. We stopped looking at the code entirely and started managing the ‘vibe’ of the output. It was the era of radical abstraction—prompting, iterating, and giving in to the exponential speed of LLMs.

Feb 2026: “Agentic Engineering.” The current frontier. We’ve moved from writing prompts to managing workers (agents). It’s no longer about a single interaction; it’s about architecting systems of agents that can self-correct, plan, and execute.

The timeline is compressing. AI isn’t just a pastime anymore; it’s the factory floor. We’ve gone from being writers to editors to architects in less than a thousand days!

We live in amazing times :-).

Google Gemini’s interpretation of the blog post in an infographic.

Overheard : Securing AI Agents

A good framework on how to think about security when deploying AI agents.

Treat AI agents as insider threats

David Cox mentioned this during a recent conversation with Grant Harvey and Corey Noles on the Neuron podcast. Very simple, but very elegant. Once you frame agents this way, familiar tools – least privilege, role-based access, audit logs – suddenly apply cleanly. The attack surface shrinks not because agents are safer, but because their blast radius is smaller.
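
A minimal sketch of what that framing buys you, with hypothetical role and tool names: the same least-privilege, role-based access, and audit primitives we apply to human insiders wrap an agent’s tool calls directly.

```python
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent-audit")

class AgentToolGate:
    """Gate an agent's tool calls: least privilege, roles, audit logs."""

    def __init__(self, role: str, allowed_tools: dict):
        self.role = role
        # Least privilege: roles missing from the map get an empty tool set (default deny).
        self.allowed = set(allowed_tools.get(role, ()))

    def call(self, tool_name: str, tool_fn: Callable, *args, **kwargs):
        if tool_name not in self.allowed:
            audit_log.warning("DENY role=%s tool=%s args=%r", self.role, tool_name, args)
            raise PermissionError(f"role {self.role!r} may not call {tool_name!r}")
        audit_log.info("ALLOW role=%s tool=%s args=%r", self.role, tool_name, args)
        return tool_fn(*args, **kwargs)

# Hypothetical roles and tools: a support agent can read tickets, never issue refunds.
gate = AgentToolGate(
    role="support-agent",
    allowed_tools={
        "support-agent": {"read_ticket"},
        "billing-agent": {"read_ticket", "issue_refund"},
    },
)

print(gate.call("read_ticket", lambda ticket_id: f"ticket {ticket_id}", 42))  # allowed, logged
# gate.call("issue_refund", lambda amt: amt, 100)  # denied: raises PermissionError, logged
```

The denied calls are exactly the shrunken blast radius: the agent can still be manipulated, but the damage is bounded to the tools its role was granted.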

Overheard : On the constant increase in expectations

Sam Altman’s June 10, 2025 post on achieving singularity captured something I’ve been thinking about lately. There’s a particular passage that perfectly describes how we’re constantly ratcheting up our expectations:

Already we live with incredible digital intelligence, and after some initial shock, most of us are pretty used to it. Very quickly we go from being amazed that AI can generate a beautifully-written paragraph to wondering when it can generate a beautifully-written novel; or from being amazed that it can make life-saving medical diagnoses to wondering when it can develop the cures; or from being amazed it can create a small computer program to wondering when it can create an entire new company. This is how the singularity goes: wonders become routine, and then table stakes.

This hits at something fundamental about human psychology. We have this remarkable ability to normalize the extraordinary, almost immediately.

I see this everywhere now. My kids casually ask AI to help with homework in ways that would have seemed like science fiction just three years ago. We’ve gone from “can AI write coherent sentences?” to “why can’t it write a perfect screenplay?” in what feels like months.

The progression Altman describes—paragraph to novel, diagnosis to cure, program to company—isn’t just about AI capabilities scaling up. It’s about how our mental models adjust. Each breakthrough becomes the new baseline, not the ceiling.

What struck me most is his phrase: “wonders become routine, and then table stakes.” That’s exactly it. The wonder doesn’t disappear because the technology got worse—it disappears because we got used to it. And then we need something even more impressive to feel that same sense of possibility.

Overheard : AI needs cloud

On The Verge’s Decoder podcast, Matt Garman, CEO of AWS, explained why AI’s potential is intrinsically tied to the cloud. The scale and complexity of modern AI models demand infrastructure that only major cloud providers can deliver:

You’re not going to be able to get a lot of the value that’s promised from AI from a server running in your basement, it’s just not possible. The technology won’t be there, the hardware won’t be there, the models won’t live there, et cetera. And so, in many ways, I think it’s a tailwind to that cloud migration because we see with customers, forget proof of concepts … You can run a proof of concept anywhere. I think the world has proven over the last couple of years you can run lots and lots and lots of proof of concepts, but as soon as you start to think about production, and integrating into your production data, you need that data in the cloud so the models can interact with it and you can have it as part of your system.

Agency for AI Agents

Hugging Face just released their agentic library for interacting with LLMs. I liked the way they define agents:

AI Agents are programs where LLM outputs control the workflow.

And I liked the way they defined the spectrum of agency: the more the LLM’s output shapes the program’s control flow, the more agentic the system, from simple routing all the way up to multi-step loops (see the sketch below).
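
Here is a rough Python sketch of that spectrum. The call_llm helper and the toy tools are hypothetical stand-ins, not the library’s actual API; the point is only how much control the model’s output has at each level.

```python
# Each level hands the LLM's output more control over the workflow.
# call_llm() and the tools are hypothetical stand-ins, not a real library API.

def call_llm(prompt: str) -> str:
    return "search"  # imagine a real model call here

def search(q): return f"results for {q}"
def calculate(q): return f"answer to {q}"

TOOLS = {"search": search, "calculate": calculate}

# Level 0 - no agency: LLM output is just data; program flow ignores it.
summary = call_llm("Summarize this ticket.")

# Level 1 - router: LLM output picks a branch.
if call_llm("Is this a billing question? yes/no") == "yes":
    handler = calculate
else:
    handler = search

# Level 2 - tool call: LLM output picks which function runs, and on what input.
tool_name = call_llm("Which tool should handle 'weather in Austin'?")
result = TOOLS[tool_name]("weather in Austin")

# Level 3 - multi-step agent: LLM output decides whether the loop continues.
steps = 0
while call_llm("Is the task done? yes/no") == "no" and steps < 5:
    tool_name = call_llm("Pick the next tool.")
    result = TOOLS[tool_name]("next step")
    steps += 1
```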

30-day challenge : create software with AI

I like to do 30-day challenges to explore new areas or to form habits. Some of my previous ones were:

I am starting a new challenge today: to create software by leveraging AI. The recent boom in AI, and GenAI specifically, has made it quick and easy to bring ideas to fruition. It is time to start coding and developing software for the ideas that have been swirling in my head for some time.

I will be publishing them at https://kudithipudi.org/lab . I will expand and write up about some ideas and the experience in bringing them to life.

Inspired by https://tools.simonwillison.net/.

On AI Agentic Workflows

An amazing conversation with Bret Taylor on agentic workflows leveraging AI in the enterprise. The whole conversation is worth listening to multiple times, but this specific segment, where Bret speaks about the difference between traditional software engineering and AI-driven solutions, was thought-provoking on how much change management organizations have to go through to adopt these new solutions.

Now if you have parts of your system that are built on large language models, those parts are really different than most of the software that we’ve built on in the past. Number one is they’re relatively slow compared — to generate a page view on a website takes nanoseconds at this point, might be slightly exaggerating, down to milliseconds, even with the fastest models, it’s quite slow in the way tokens are emitted.

Number two is it can be relatively expensive. And again, it really varies based on the number of parameters in the model. But again, the marginal cost of that page view is almost zero at this point. You don’t think about it. Your cost as a software platform is almost exclusively in your head count. With AI, you can see the margin pressure that a lot of companies face, particularly of their training models or even doing inference with high-parameter-count models.

Number three is they’re nondeterministic fundamentally, and you can tune certain models to more reliably have the same output for the same input. But by and large, it’s hard to reproduce behaviors on these systems. What gives them creativity also leads to non-determinism.

And so this combination of it, we’ve gone from cheap, deterministic, reliable systems to relatively slow, relatively expensive but very creative systems. And I think it violates a lot of the conventions that software engineers think about — have grown to think about when producing software, and it becomes almost a statistical problem rather than just a methodological problem.
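
That last point, correctness becoming a statistical problem, is worth making concrete. A minimal sketch, with a hypothetical call_llm standing in for a real model endpoint: instead of asserting one exact output the way we test deterministic software, you sample repeatedly and assert a pass rate.

```python
import random

def call_llm(prompt: str, temperature: float = 0.7) -> str:
    # Stand-in for a nondeterministic model: same input, varying output.
    return random.choice(["4", "4", "4", "four"]) if temperature > 0 else "4"

def passes(output: str) -> bool:
    return output.strip() == "4"

# Deterministic software: assert once.
# LLM-backed software: sample N times and assert a pass *rate*.
N = 50
pass_rate = sum(passes(call_llm("What is 2 + 2?")) for _ in range(N)) / N
assert pass_rate >= 0.7, f"pass rate {pass_rate:.0%} below threshold"
print(f"pass rate: {pass_rate:.0%}")
```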