"Programming as Theory Building" speaks to LLMs being unable to replace humans.
In this essay, I will commit the logical fallacy of argument from authority (wikipedia.org) to attack the notion that large language model (LLM)-based generative “AI” systems are capable of doing the work of human programmers.
Dave uses Peter Naur’s 1985 paper to elegantly describe why he thinks LLMs cannot produce a “theory” and therefore have none.
Two sections that stand out:
Theories are developed by doing the work and LLMs do not do the work. They ingest the output of work.
…
Writing software is the production of code in the same way that writing poetry is the production of words. In both cases, the code and the words are the artifacts of the real work: building and maintaining a theory of the program.
I don’t think Dave is saying here that LLMs have no merit, but that there are fundamental limits to an LLM’s ‘search space’ and to its ability to creep into unknown spaces, the spaces that require a ‘theory’ to navigate.