On AI Agentic Workflows

Amazing conversation with Bret Taylor on agentic workflows leveraging AI in the enterprise. The whole conversation is worth listening to multiple times, but this specific segment, where Bret speaks about the difference between traditional software engineering and AI-driven solutions, was thought-provoking: it highlights how much change management organizations have to go through to adopt these new solutions.

Now if you have parts of your system that are built on large language models, those parts are really different from most of the software we've built in the past. Number one, they're relatively slow: generating a page view on a website takes nanoseconds at this point (I might be slightly exaggerating; call it milliseconds), but even with the fastest models, the rate at which tokens are emitted is quite slow.

Number two, it can be relatively expensive, and it really varies based on the number of parameters in the model. By contrast, the marginal cost of that page view is almost zero at this point; you don't think about it. Your cost as a software platform is almost exclusively in your headcount. With AI, you can see the margin pressure that a lot of companies face, particularly if they're training models or even doing inference with high-parameter-count models.

Number three, they're fundamentally nondeterministic. You can tune certain models to more reliably produce the same output for the same input, but by and large, it's hard to reproduce behaviors in these systems. What gives them creativity also leads to nondeterminism.

And so with this combination, we've gone from cheap, deterministic, reliable systems to relatively slow, relatively expensive, but very creative systems. And I think it violates a lot of the conventions that software engineers have grown to think about when producing software, and it becomes almost a statistical problem rather than just a methodological problem.
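
To make the latency, cost, and nondeterminism points concrete, here is a minimal sketch of the knobs most hosted LLM APIs expose. It uses the OpenAI Python SDK purely as an example (nothing in the conversation implies a specific vendor); the model name and prompt are placeholders, and even with a fixed seed and zero temperature, reproducibility is best-effort rather than guaranteed.

```python
# Minimal sketch, not anyone's production setup: one LLM call with the
# determinism knobs turned up, plus the latency and token cost made visible.
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

start = time.perf_counter()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; parameter count drives cost and speed
    messages=[{"role": "user", "content": "Summarize our Q3 renewal risks."}],
    temperature=0,        # reduce sampling randomness
    seed=42,              # best-effort reproducibility, not a guarantee
)
elapsed = time.perf_counter() - start

# Latency is measured in seconds rather than the milliseconds of a cached
# page view, and every call has a visible marginal cost in tokens.
print(f"latency: {elapsed:.2f}s")
print(f"tokens billed: {response.usage.total_tokens}")
print(response.choices[0].message.content)
```

That per-request bill and the variability of the output are exactly the shift Bret describes: cost, latency, and reproducibility stop being afterthoughts and become first-class engineering concerns, closer to statistics than to conventional software methodology.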