I really liked this slide from Carvana’s 2024 Q3 earnings call presentation on their potential addressable market. It communicates the idea in a simple and impactful way.

Don’t ever forget where you came from, but always be looking where you’re going.
Micky Malka (Capital Herder and Team Finder), in a recent conversation with Patrick O’Shaughnessy on the Invest Like the Best podcast.
Great discussion between Jim O’Shaughnessy and Sajith Pai on India as a market on the Infinite Loops podcast. Sajith did a great job describing India as a combination of three markets rather than a monolithic market of 1.5 billion people.
India is not the single 1.5-billion-person market that many Westerners believe it to be. Instead, it’s three distinct “countries” hiding in plain sight. There’s India One: 120 million affluent, English-speaking urbanites (think the population of Germany) who love their iPhones and Starbucks. Then comes India Two: 300 million aspiring middle-class citizens who inhabit the digital economy but not yet the consumption economy. Finally, there’s India Three: a massive population with a demographic profile similar to Sub-Saharan Africa that’s still waiting for its invitation to join India’s bright future.
Highly recommend checking out the podcast and this report (on Indus Valley – a play on words comparing the market in India to the tech market in Silicon Valley) that Sajith and team put together.
I like to do 30-day challenges to explore new areas or to form habits. Some of my previous ones were:
I am starting a new challenge today: creating software by leveraging AI. The recent boom in AI, and GenAI specifically, has made it quick and easy to bring ideas to fruition. It is time to start coding and building software for ideas that have been swirling in my head for some time.
I will be publishing them at https://kudithipudi.org/lab. I will also write up some of the ideas and the experience of bringing them to life.
Inspired by https://tools.simonwillison.net/.
Anthropic released their new Claude 3.5 Sonnet model yesterday with a new capability to control computers. The Computer Use capability allows Claude to directly interact with computer interfaces, enabling tasks like web browsing, data analysis, and file manipulation, all through natural language instructions. It is similar to tools, but you no longer have to define specific tools. I think this opens up a whole new window of opportunities for leveraging LLMs.
Anthropic shared a quickstart guide to run the model in a container, but the instructions are for Mac/Linux workstations. I had to make some tweaks to run it on a Windows workstation.
Documenting them here for anyone who might be trying to do the same:
set ANTHROPIC_API_KEY=YOUR-ANTHROPIC-KEY
docker run -e ANTHROPIC_API_KEY=%ANTHROPIC_API_KEY% -v %USERPROFILE%\.anthropic:/home/computeruse/.anthropic -p 5900:5900 -p 8501:8501 -p 6080:6080 -p 8080:8080 -it ghcr.io/anthropics/anthropic-quickstarts:computer-use-demo-latest
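If you use PowerShell instead of the classic command prompt, a rough equivalent (my own adaptation of the same command, not from the Anthropic guide) would be something like:
$env:ANTHROPIC_API_KEY = "YOUR-ANTHROPIC-KEY"
docker run -e "ANTHROPIC_API_KEY=$env:ANTHROPIC_API_KEY" -v "${env:USERPROFILE}\.anthropic:/home/computeruse/.anthropic" -p 5900:5900 -p 8501:8501 -p 6080:6080 -p 8080:8080 -it ghcr.io/anthropics/anthropic-quickstarts:computer-use-demo-latest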
I recently encountered some connectivity issues while working from home and trying to access some corporate resources. Notes for myself on some tips our infosec team shared for troubleshooting the Zscaler client, since all the traffic to the interweb gets routed through it.

Amazing conversation with Bret Taylor on agentic workflows leveraging AI in the enterprise. The whole conversation is worth listening to multiple times, but this specific segment, where Bret speaks about the difference between traditional software engineering and AI-driven solutions, was thought-provoking on how much change management organizations have to go through to adopt these new solutions.
Now if you have parts of your system that are built on large language models, those parts are really different than most of the software that we’ve built on in the past. Number one is they’re relatively slow compared — to generate a page view on a website takes nanoseconds at this point, might be slightly exaggerating, down to milliseconds, even with the fastest models, it’s quite slow in the way tokens are emitted.
Number two is it can be relatively expensive. And again, it really varies based on the number of parameters in the model. But again, the marginal cost of that page view is almost zero at this point. You don’t think about it. Your cost as a software platform is almost exclusively in your head count. With AI, you can see the margin pressure that a lot of companies face, particularly of their training models or even doing inference with high-parameter-count models.
Number three is they’re nondeterministic fundamentally, and you can tune certain models to more reliably have the same output for the same input. But by and large, it’s hard to reproduce behaviors on these systems. What gives them creativity also leads to non-determinism.
And so this combination of it, we’ve gone from cheap, deterministic, reliable systems to relatively slow, relatively expensive but very creative systems. And I think it violates a lot of the conventions that software engineers think about — have grown to think about when producing software, and it becomes almost a statistical problem rather than just a methodological problem.
AI will not replace humans, but humans who use AI will replace humans who don’t. – Dr. Fei-Fei Li
Really liked this simple explanation of the difference between BDR (backup and disaster recovery) and BCP (business continuity planning) shared in a Hacker News discussion.
