This is a provocative analogy:
I’m skeptical that hyper-scale LLMs have a viable long-term future. They are the Apollo Moon missions of “AI”. In the end, quite probably just not worth it. Maybe we’ll get to visit them in the museums their data centres might become?
The whole post is worth a read, and I do agree with some of it. The main point is that the hard part of software development is not necessarily the coding, but “turning human thinking – with all its wooliness and ambiguity and contradictions – into computational thinking that is logically precise and unambiguous”. That’s quite true.
But I find LLMs help with that too. A lot! So it’s a false distinction to separate the thinking from the coding, and to say they don’t help with the thinking.
It is true that AI tools are random and unreliable in a way that earlier abstraction technologies, like the compiler, were not. But I don’t think that distinction will matter much in the long run. We will get better at handling imperfectly reliable AI tools, just as managers get good at handling imperfectly reliable human beings.
So I think the post underestimates the practical value of frontier LLMs, both right now and in the future.
Also, what does the analogy really imply? The moonshot was a world-historical achievement — by my reckoning, the most significant event of the last millennium. And even though we never went back to the moon, we all use space technology indirectly every day. When Apollo 11 landed, there were a few hundred satellites in orbit. Now there are nearly ten thousand. It’s quite possible Jason relied on the communications satellites in orbit today to publish his post.