technology

HOW TO : Use curl to connect to a Unix socket

Today I learned an incredibly useful trick for debugging web applications that don’t use standard TCP/IP networking. While we usually think of curl as a tool for fetching data over http:// or https:// via a port like 80 or 443, it is also fully capable of communicating over Unix domain sockets.

Why use Unix sockets?

A reverse proxy can talk to an application server over a Unix socket instead of TCP. Sockets are often faster and more secure for local inter-process communication because they avoid the overhead of the network stack and can be protected by standard file permissions.

The challenge arises when you want to test the application server directly without going through the public-facing proxy. That is where this curl command comes in.

The Command

To request a page from a service listening on a Unix socket, use the --unix-socket flag followed by the path to the socket file:

curl --unix-socket /var/www/app/app.sock http://localhost/  

The --unix-socket flag tells curl to skip the usual DNS lookup and TCP connection for the hostname and instead connect directly to the file path provided, which in this case is /var/www/app/app.sock.

The http://localhost/ part of the command is still required because curl needs to know which protocol to use and what to put in the Host header of the HTTP request. Even though the data is traveling through a file on your hard drive rather than a network card, the application server still expects a valid HTTP request format.

This is a lifesaver for troubleshooting “502 Bad Gateway” errors. By bypassing Nginx and hitting the socket directly, you can determine whether the issue lies with the application server itself or with the proxy configuration. If the command above returns the expected HTML, you know your backend is healthy and the problem is likely in your Nginx config file.
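If you don’t have a socket-backed service handy, you can sketch the whole round trip in a throwaway script. This is a minimal demo, not production code: it assumes python3 and curl are installed, and /tmp/demo.sock is an arbitrary hypothetical path.

```shell
# Serve HTTP over a Unix socket with a tiny Python server, then query it
# with curl. /tmp/demo.sock is an arbitrary throwaway path.
SOCK=/tmp/demo.sock

python3 - "$SOCK" <<'PY' &
import http.server, os, socketserver, sys

sock_path = sys.argv[1]
if os.path.exists(sock_path):
    os.remove(sock_path)

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from the socket"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # client_address is a bare string over AF_UNIX, so skip logging

with socketserver.UnixStreamServer(sock_path, Handler) as server:
    server.serve_forever()
PY
SERVER_PID=$!

sleep 1  # give the server a moment to bind the socket
RESPONSE=$(curl --silent --unix-socket "$SOCK" http://localhost/)
echo "$RESPONSE"
kill "$SERVER_PID"
```

Note that the http://localhost/ URL is still present even here: the hostname is effectively a placeholder, since the connection goes through the socket file.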

HOW TO : Launch tmux on SSH login

I’m terrible at remembering to start tmux when I SSH into servers. Then invariably, my connection drops mid-compile or during a long-running test, and I lose everything :-(.

The solution? Just make tmux start automatically when you SSH in. Here’s how.

The Problem

You SSH into your server, start working, and then:

  • Your WiFi hiccups
  • Your laptop sleeps
  • You close the terminal by accident

And poof—all your work is gone. tmux solves this by keeping your session alive on the server, but only if you remember to actually start it.

The Solution

Add a simple check to your ~/.bashrc that auto-launches tmux when you connect via SSH.

# Auto-launch tmux on SSH connection
# To skip: either detach from tmux (Ctrl+b, then d) or set TMUX_SKIP=1 before connecting
# Example: TMUX_SKIP=1 ssh user@host
if [[ -n "$SSH_CONNECTION" ]] && [[ -z "$TMUX" ]] && [[ -z "$TMUX_SKIP" ]] && command -v tmux &> /dev/null; then
  tmux attach-session -t default || tmux new-session -s default
fi

Add this to the end of your ~/.bashrc file.

How It Works

The script checks four conditions before launching tmux:

  1. -n "$SSH_CONNECTION" — Are you connecting via SSH? (Doesn’t trigger for local terminal sessions)
  2. -z "$TMUX" — Is tmux not already running? (Prevents tmux-inception)
  3. -z "$TMUX_SKIP" — Did you explicitly skip tmux? (More on this below)
  4. command -v tmux — Is tmux actually installed?

If all checks pass, it tries to attach to an existing session named “default”. If that session doesn’t exist, it creates one.
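As a side note, if your tmux is 1.8 or newer, the attach-or-create pair can be collapsed into a single command with the -A flag; it’s a drop-in alternative for the line inside the if block:

```shell
# -A tells new-session to attach to the "default" session if it already
# exists, instead of erroring out (available in tmux 1.8+).
tmux new-session -A -s default
```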

Skipping tmux When Needed

Sometimes you just want a plain shell—for quick commands, debugging, or whatever. Two ways to skip:

Option 1: Set an environment variable

TMUX_SKIP=1 ssh user@server

Note that SSH does not forward arbitrary environment variables by default, so for this to work you’ll also need SendEnv TMUX_SKIP in your local ~/.ssh/config and AcceptEnv TMUX_SKIP in the server’s sshd_config.

Option 2: Detach after connecting

Ctrl+b, then d

This drops you to a regular shell without killing the tmux session.

Why This Setup?

  • Named session: Using default as a session name makes it easy to reconnect
  • Attach-or-create: The || operator means it’ll create the session only if it doesn’t exist
  • No exec: Some examples use exec tmux ... which replaces your shell entirely. I prefer not doing that—it makes skipping harder and can be annoying for scripted SSH commands

That’s it. Now you can SSH in, disconnect whenever, and pick up right where you
left off. No more lost work from dropped connections.

Overheard : Embedded Intelligence

The hype may be about the frontier models. The disruption really is in the workflow.

Read this in Om Malik’s post (https://om.co/2026/02/06/how-ai-goes-to-work/), and it’s one of the most grounded takes on AI I’ve seen lately. We spend so much energy debating which LLM is ‘smarter’ by a fraction of a percentage, but that’s just benchmarking engines. The real shift happens when the engine is already under the hood of the car you’re driving.

Om calls it “embedded intelligence.” It’s when you’re in Excel or Photoshop and the AI isn’t a destination you visit (the prompt box), but a hover-state that helps you work.

The goal isn’t to ‘use AI.’ The goal is to do what you used to do, but better, faster, and with more room for the creative decisions that actually matter.

Overheard : Agentic Engineering

Andrej Karpathy’s nomenclature for the state of tech in AI has become the unofficial industry clock. Watching the terminology evolve over the last few years feels like watching innovation move from a crawl to a sprint:

Jan 2023: “English is the hottest new programming language.” The shift from syntax to semantics. We realized that the bottleneck wasn’t knowing where the semicolon goes, but being able to describe the logic clearly. Coding became a translation layer.

Feb 2025: “Vibe Coding.” The abstraction deepened. We stopped looking at the code entirely and started managing the ‘vibe’ of the output. It was the era of radical abstraction—prompting, iterating, and giving in to the exponential speed of LLMs.

Feb 2026: “Agentic Engineering.” The current frontier. We’ve moved from writing prompts to managing workers (agents). It’s no longer about a single interaction; it’s about architecting systems of agents that can self-correct, plan, and execute.

The timeline is compressing. AI isn’t just a pastime anymore; it’s the factory floor. We’ve gone from being writers to editors to architects in less than a thousand days!

We live in amazing times :-).

Google Gemini’s interpretation of the blog post in an infographic.

Overheard : Securing AI Agents

A good framework on how to think about security when deploying AI agents.

Treat AI agents as insider threats

David Cox mentioned this during a recent conversation with Grant Harvey and Corey Noles on the Neuron podcast. Very simple, but very elegant. Once you frame agents this way, familiar tools – least privilege, role-based access, audit logs – suddenly apply cleanly. The attack surface shrinks not because agents are safer, but because their blast radius is smaller.

AI and the value of taste

Anyone can now generate content (text, audio, images, video) with a single prompt. The cost of creation is collapsing to near zero. We live in amazing times.

It also produces what people have started calling slop: an overwhelming volume of content, much of it interchangeable. When supply becomes infinite, attention becomes scarce. Two thoughts follow from this.

First, the era of personalized content is finally here.
When generation is cheap, we don’t just filter existing content; we generate it. Instead of an algorithm deciding what you might like from a global pool, your feed can be created specifically for you, reflecting your interests, context, and intent. This is a meaningful shift: from recommendation to creation.

Second, as the cost of generation goes to zero, the value of taste goes to infinity.
When anyone can make something, what matters is knowing what should be made. Taste becomes the constraint. Just as there is one Picasso among thousands of painters, there will be people who can consistently direct AI toward work that resonates. They may not produce the content themselves, but they shape it—through judgment, curation, and intent.

In a world flooded with output, taste is the differentiator.

Overheard : On constant increase in expectations

Sam Altman’s June 10, 2025 post on achieving singularity captured something I’ve been thinking about lately. There’s a particular passage that perfectly describes how we’re constantly ratcheting up our expectations:

Already we live with incredible digital intelligence, and after some initial shock, most of us are pretty used to it. Very quickly we go from being amazed that AI can generate a beautifully-written paragraph to wondering when it can generate a beautifully-written novel; or from being amazed that it can make life-saving medical diagnoses to wondering when it can develop the cures; or from being amazed it can create a small computer program to wondering when it can create an entire new company. This is how the singularity goes: wonders become routine, and then table stakes.

This hits at something fundamental about human psychology. We have this remarkable ability to normalize the extraordinary, almost immediately.

I see this everywhere now. My kids casually ask AI to help with homework in ways that would have seemed like science fiction just three years ago. We’ve gone from “can AI write coherent sentences?” to “why can’t it write a perfect screenplay?” in what feels like months.

The progression Altman describes—paragraph to novel, diagnosis to cure, program to company—isn’t just about AI capabilities scaling up. It’s about how our mental models adjust. Each breakthrough becomes the new baseline, not the ceiling.

What struck me most is his phrase: “wonders become routine, and then table stakes.” That’s exactly it. The wonder doesn’t disappear because the technology got worse—it disappears because we got used to it. And then we need something even more impressive to feel that same sense of possibility.

Overheard : AI needs cloud

On The Verge’s Decoder podcast, Matt Garman, CEO of AWS, explained why AI’s potential is intrinsically tied to the cloud. The scale and complexity of modern AI models demand infrastructure that only major cloud providers can deliver:

You’re not going to be able to get a lot of the value that’s promised from AI from a server running in your basement, it’s just not possible. The technology won’t be there, the hardware won’t be there, the models won’t live there, et cetera. And so, in many ways, I think it’s a tailwind to that cloud migration because we see with customers, forget proof of concepts … You can run a proof of concept anywhere. I think the world has proven over the last couple of years you can run lots and lots and lots of proof of concepts, but as soon as you start to think about production, and integrating into your production data, you need that data in the cloud so the models can interact with it and you can have it as part of your system.