“A Critique of Pure Reason”, Drew McDermott, 1987-02-01:

[1987 retrospective by a noted proponent of logic for planning and reasoning in AI (‘GOFAI’). McDermott fiercely criticizes his own work, along with that of his colleagues (particularly John McCarthy, Robert Moore, James Allen, Jerry Hobbs, & Patrick Hayes), in the ‘logicist’ paradigm: the view that sufficiently ingenious and persistent application of logical reasoning, mostly first-order logic, can eventually give rise to human-level understanding of the world, planning & execution of actions, and eventually AGI.

McDermott concludes that such programs offer no way to tell whether they are making real progress, because a failure to infer something could simply reflect a missing axiom. Worse, such logics are not even an approximation to intelligence or a role model for it, nor do failures merely reflect a poor choice of axioms: logics only verify claims rather than compute useful things like plans, and so collapse into verifying trivialities which do no useful intellectual work. Resorting to more powerful tools like temporal logics or nonmonotonic logics sacrifices the philosophical advantages of logical inference in an attempt to get working systems, but may obtain neither. What is necessary is doing without deduction.]

It must be the case that a substantial portion of the inferences we want [to make] are deductions, or it will simply be irrelevant how many theorems follow deductively from a given axiom set.

…To summarize: The logicist project of expressing “naive physics” in first-order logic has not been very successful. One reason may be that the basic argument was flawed. You cannot write down axioms independent of a program for manipulating them if the inferences you are interested in are not deductions. Unfortunately, very few interesting inferences are deductions, and the attempts by logicists to extend logic to cover more territory have been disappointing. Hence we must resign ourselves to writing programs, and viewing knowledge representations as entities to be manipulated by the programs.
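McDermott's distinction can be made concrete with a toy example (mine, not the paper's; all predicate names are hypothetical). A sound deductive engine, here a minimal forward-chainer over ground Horn clauses, derives only what the axioms strictly entail; an "interesting" default inference like "birds typically fly" never follows, and adding it would make the system nonmonotonic, since the conclusion must be withdrawn on learning the bird is a penguin, something no deduction ever does:

```python
def deductive_closure(facts, rules):
    """Apply ground Horn rules (premises -> conclusion) to a fixpoint.

    This is sound deduction: every atom in the result is strictly
    entailed by the facts and rules, and conclusions are never retracted.
    """
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return known

facts = {"bird(tweety)"}
rules = [
    # Monotonic axiom: every bird is an animal.
    ({"bird(tweety)"}, "animal(tweety)"),
]
closure = deductive_closure(facts, rules)

# "flies(tweety)" does NOT follow: no axiom licenses it. The default
# "birds typically fly" is not a deduction -- learning penguin(tweety)
# would force its retraction, which deduction cannot do.
print("animal(tweety)" in closure)  # True
print("flies(tweety)" in closure)   # False
```

The gap this sketch exhibits, between what deduction yields and the defeasible inferences we actually want, is the one McDermott argues the logicist extensions (circumscription, default logic, etc.) failed to close cleanly.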

…Finally, I should admit that I am still doing work in the paradigm that I criticize here. In the domain of shape representation, so little is known that focusing on an idealization cannot but help teach us something. The problem I would like to tackle is representing the knowledge required to answer questions like, Could a paper clip be used as a key ring? The idealization I have been forced to fall back on is to prove that a paper clip of a certain size and shape could fit through the hole of a typical key. It should be obvious how much of the original problem this leaves out. Still, the territory is so unexplored that a tour through the idealized fragment could turn up something interesting. What one cannot hope for is to express as logical axioms everything there is to know about using shapes in unusual ways, before designing programs for this task. This will probably come as a shock to no one but me and a few friends.