Alexis Gallagher, man of destiny

Hi, I'm Alexis! Please get in touch if you're curious about anything I'm writing about here.

I work at Answer.AI, figuring out how to make AI more useful. Previously at Google and various startups, I like learning how things work and trying to make objects and explanations absolutely and perfectly clear. AI is still confusing as hell so it's great fun!


Recently I've been building AI legal tools, and podcasting about our AI notebooks.

Michael Smith writes about the distinction between tools that make us smarter vs dumber. I thought the comparison between abacuses and calculators was memorable:

Learning how to use an abacus trains your brain to internalize it. Arithmetic becomes faster and more reliable over time, and the mechanisms behind why different strategies work become obvious and intuitive. Eventually you don’t even need the physical abacus anymore. Whereas with a calculator … those mental skills sort of fade away over time. And you will always need a calculator for math: it never becomes part of you the way an abacus does.

Michael Smith, Tools that Enrich us

This topic has many dimensions, which means it lends itself a little too easily to simplification.

Obviously, enlightening tools are better than stultifying ones. And obviously, certain educational benefits come only through unpleasant hard work. So let's have demanding tools that educate us.

However, also obviously, there’s value in tools that are easy to use. So let’s make tools pleasant and effortless, and save education for classrooms.

Also, somewhat obviously, new tools are often not simply easier. They make one kind of hardness go away but introduce a new kind of hardness, which is educational in a new way. Right now, for instance, there exist people who are expert at writing code, but who are so bad at prompting LLMs to generate good code that they still claim it cannot be done!

In other words, there are a lot of obviously true points at play but they all point in different directions. Analogies are great in this situation, because they are just a specific, memorable peg for a particular set of tradeoffs.

So let’s follow the analogy. The idea is, an abacus is better than the calculator because it helps you internalize arithmetic. I buy that idea. That’s why I have a slide rule by my desk, in the hope it will help me internalize logarithmic relationships. (It’s not working.)

But…what are you really internalizing? Memorably, Feynman tells a story about initially losing a mental-calculation competition against an abacus salesman, but ultimately winning as the problems became more complex, specifically because the abacus encouraged a mental skill which was too rote and procedural, and did not promote insight:

A few weeks later, the man came into the cocktail lounge of the hotel I was staying at. He recognized me and came over. “Tell me,” he said, “how were you able to do that cube-root problem so fast?”

I started to explain that it was an approximate method, and had to do with the percentage of error. “Suppose you had given me 28. Now the cube root of 27 is 3 …”

He picks up his abacus: zzzzzzzzzzzzzzz— “Oh yes,” he says.

I realized something: he doesn’t know numbers. With the abacus, you don’t have to memorize a lot of arithmetic combinations; all you have to do is to learn to push the little beads up and down. You don’t have to memorize 9+7=16; you just know that when you add 9, you push a ten’s bead up and pull a one’s bead down. So we’re slower at basic arithmetic, but we know numbers.

Richard Feynman
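Feynman’s trick, by the way, is nothing exotic: it’s a first-order approximation around a nearby perfect cube. A quick sketch of the arithmetic he describes:

```python
def approx_cube_root(x: float, nearby_root: float) -> float:
    """First-order approximation of x ** (1/3) around a known cube.

    If nearby_root ** 3 is close to x, then
    x ** (1/3) ~= nearby_root * (1 + excess / 3), where excess is the
    fractional error ("percentage of error") Feynman mentions.
    """
    excess = (x - nearby_root**3) / nearby_root**3
    return nearby_root * (1 + excess / 3)

# Feynman's example: cube root of 28, anchored at 27 = 3**3.
print(approx_cube_root(28, 3))  # about 3.0370; the true value is 3.0366...
```

Knowing *why* this works (28 is 1/27 over 27, so the root is about a third of that fraction over 3) is exactly the “knowing numbers” that bead-pushing never teaches.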

Right now, many worry that LLMs will make us get worse at writing code. I think they probably will. But they may also be inviting us to get better at something deeper.

link ai

Hyperscale LLMs, like the Apollo mission?

This is a provocative analogy:

I’m skeptical that hyper-scale LLMs have a viable long-term future. They are the Apollo Moon missions of “AI”. In the end, quite probably just not worth it. Maybe we’ll get to visit them in the museums their data centres might become?

Jason Gorman, The Future of Software Development Is Software Developers

The whole post is worth a read and I do agree with some of it. The main point is that the hard part of software development is not necessarily the coding, but “turning human thinking – with all its wooliness and ambiguity and contradictions – into computational thinking that is logically precise and unambiguous”. That’s quite true.

But I find that LLMs help with that too. A lot! So it’s a false distinction to separate the thinking from the coding, and to say they don’t help with thinking.

It is true that AI tools are random and unreliable in a way that earlier abstraction technologies, like the compiler, were not. But I don’t think that distinction will matter very much in the long run. We will get better at handling imperfectly reliable AI tools, just as managers get good at handling imperfectly reliable human beings.

So I think the post underestimates the practical value of frontier LLMs, both right now and in the future.

Also, what does the analogy really imply? The moonshot was a world-historical achievement — by my reckoning, the most significant historical event of the last millennium. And even if we didn’t go back to the moon, we all use space technology indirectly every day. When Apollo 11 landed, there were a few hundred satellites in orbit. Now, there are nearly ten thousand. It’s quite possible Jason relied on the communication satellites in orbit today to publish his post.

link ai

How to vibewrite a manifesto

Two weeks ago around 3am I couldn’t sleep so I was browsing twitter (bad habit). I ran into this tweet.

Many motherfucking website links

In fact I have a soft spot in my heart for bettermotherfuckingwebsite. I used its spartan, bare bones wisdom as the starting point for my original site a few years ago. So I groggily thought, I should reply with a page for HTMX (the JavaScript library for HTML-oriented web development). So I bought a domain and went back to sleep.

The next morning I woke up, remembered what I had done, and vibed out a website. I used Claude for a variety of tasks:

  • Review existing sites to characterize this de facto genre
  • Draft copy for the new site, and reorganize it based on my edits and additions
  • Generate page HTML and JavaScript for an embedded HTMX demo
  • Lightly research new HTMX4 developments
  • Deploy the site, and debug DNS and HTTPS issues with GitHub
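For flavor, the kind of embedded demo involved looks roughly like this — `hx-get`, `hx-target`, and `hx-swap` are real HTMX attributes, but the `/quote` endpoint here is hypothetical, just for illustration:

```html
<!-- HTMX from a CDN; the library drives requests via HTML attributes. -->
<script src="https://unpkg.com/htmx.org@1.9.12"></script>

<!-- On click, GET /quote (a hypothetical endpoint) and swap the returned
     HTML fragment into the #result div. No hand-written JavaScript. -->
<button hx-get="/quote" hx-target="#result" hx-swap="innerHTML">
  Fetch a quote
</button>
<div id="result"></div>
```

That attribute-driven style is the whole pitch of the site: plain HTML, progressively enhanced.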

This allowed me to reply to the original tweet with a website as a punchline. Behold!

Screenshot of pleasejusttryhtmx.com

Okay, it’s not Mark Twain. But this took less than two hours!

To frequent model users, it may not be news that you can use just one tool (Claude Code in this case, but I could have used SolveIt) to do so many different kinds of work so quickly.

But I still thought it was neat, so I recorded a dev chat with my colleague Erik about it. Later it briefly ended up on the front page of Hacker News. If you’re curious about the workflow for this sort of thing, I used Simon Willison’s new Claude export tool to export the chat transcripts warts-and-all, and the site is open source.

In fact, in the transcripts, you can even see my cringeworthy attempts to figure out how I should retweet it, and to fret over the merit of criticism there that I was wasting people’s time by pushing AI slop into the world.

I do feel a little bad about that. But hey, I didn’t post it on Hacker News! I just replied to a tweet, and started a conversation. And now I have atoned for my sins, by writing every goddamn word of this blog post by hand, like a cave man, or like William Shakespeare.

link ai

Introducing fastmigrate

fastmigrate is a library and tool for database migrations, where migrations are nothing but a set of well-named scripts. This post explains what database migrations are, what problem they solve, and how to use fastmigrate for migrations in SQLite.
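The “well-named scripts” idea is simple enough to sketch in a few lines. Here is a minimal, hypothetical illustration of the concept for SQLite — the function name, file-naming convention, and version-tracking scheme below are my own assumptions for the sketch, not fastmigrate’s actual API:

```python
import sqlite3
from pathlib import Path


def apply_migrations(db_path: str, migrations_dir: str) -> None:
    """Apply numbered SQL scripts in order, skipping ones already applied.

    An illustration of ordered-script migrations, not fastmigrate's API.
    Scripts are named like "0001-create-users.sql", "0002-add-email.sql".
    """
    con = sqlite3.connect(db_path)
    # SQLite's built-in PRAGMA user_version stores the last-applied number,
    # so the database itself remembers how far it has migrated.
    (version,) = con.execute("PRAGMA user_version").fetchone()
    for script in sorted(Path(migrations_dir).glob("*.sql")):
        num = int(script.name.split("-")[0])  # "0002-add-email.sql" -> 2
        if num > version:
            con.executescript(script.read_text())
            con.execute(f"PRAGMA user_version = {num}")
            version = num
    con.commit()
    con.close()
```

Because the scripts are ordered and the last-applied number is recorded, re-running the function is a no-op: any database can be brought forward to the current schema, from whatever version it was at.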

link tools AnswerAI

A Linux ollama server for your Mac

I want to experiment more with local models to understand their limits, so I want them to be easy to install and run. That suggests using ollama. I don’t have a beefy MacBook Pro, so I’d like to run them on my local Linux server. Here are instructions for setting up ollama on a local Debian server, accessible from your laptop on the same local subnet.
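The gist of the setup, as a config fragment (the server IP is an example; the install script URL, `OLLAMA_HOST` variable, and port 11434 are ollama’s documented defaults):

```shell
# On the Debian server: install ollama via the official script.
curl -fsSL https://ollama.com/install.sh | sh

# By default ollama binds to 127.0.0.1 only. To accept connections from
# the local subnet, override the systemd unit's environment:
sudo systemctl edit ollama.service
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl restart ollama

# Pull a model on the server.
ollama pull llama3.2

# From the Mac, talk to the server (example address 192.168.1.50):
curl http://192.168.1.50:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "hello"}'
```

The full post walks through each step, including the firewall and subnet details.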

link tools AnswerAI