“The Role of Raw Power in Intelligence”, 1976-05-12:
This essay is an argument that an essential ingredient is absent in many current conceptions of the road to Artificial Intelligence.
The first section discusses natural intelligence, and notes two major branches of the animal kingdom in which it evolved independently, and several offshoots. The suggestion is that intelligence need not be so difficult to construct as is sometimes assumed.
The second part compares the information processing ability of present computers with intelligent nervous systems, and finds a factor of one million difference. This abyss is interpreted as a major distorting influence in current work, and a reason for disappointing progress.
The third section examines the development of electronics, and concludes that the state of the art can provide more power than is now available, and that the gap could be closed within a decade.
The fourth and fifth parts introduce hardware and software aspects of a system able to make use of the advancing technology.
The Natural History of Intelligence
Product lines
Unifying principles
Harangue
References
Measuring Processing Power
Low level vision
Entropy measurement
A representative computer
A typical nervous system
Thermodynamic efficiency
References
The Growth of Processing Power
References
Interconnecting Processors
Log2 sorting net construction
Communication scheme organization
Package counts
Speed calculations
Possible refinements
References
Programming Interconnected Processors
A little Lisp
A little ALGOL
A little operating systems
Disclaimer
Bombast
…The enormous shortage of ability to compute is distorting our work, creating problems where there are none, making others impossibly difficult, and generally causing effort to be misdirected. Shouldn’t this view be more widespread, if it is as obvious as I claim?
In the early days of AI the thought that existing machines might be much too small was widespread, but there was hope that clever mathematics and advancing computer technology would soon make up the difference. Since then computers have improved by a factor of ten every five years, yet, despite reasonably diligent work by a reasonable number of people, the results have been embarrassingly sparse. The realization that available computing power might still be vastly inadequate has since been swept under the rug, partly out of wishful thinking, partly from a feeling that nothing could be done about it anyway, and partly from fear that voicing such an opinion would make AI look impractical and so reduce its funding.
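The arithmetic behind these figures is worth making explicit: at a tenfold improvement every five years, a millionfold shortfall takes thirty years to close by the trend alone, which is the essay's case for deliberately pursuing more power rather than waiting. A minimal sketch of the calculation (the variable names are mine, not the essay's):

```python
import math

# Figures from the essay: roughly a factor-of-one-million gap between
# a representative computer and an intelligent nervous system, and a
# historical tenfold improvement in computers every five years.
gap = 1_000_000          # processing-power shortfall
years_per_decade_step = 5  # years per tenfold improvement

# Doublings-of-ten needed: log10(gap) = 6, so six five-year periods.
years_to_close = years_per_decade_step * math.log10(gap)
print(years_to_close)  # 30.0 -- three decades at the passive trend
```

Closing the gap in a decade, as the third section argues is possible, therefore requires exploiting the existing state of the art rather than riding the trend.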
There is also an element of scientific snobbery. Many of the most influential names in the field seem to feel that AI should be like the theoretical side of physics, the essential problem being to find the laws of the universe relating to intelligence. Once these are known, the thinking goes, construction of efficient intelligent machines will be trivial. Suggestions that the problems are essentially engineering ones of scale and complexity, solvable by incremental improvements and occasional insights into sub-problems, are treated with disdain.
This attitude is a variant of the philosophical notion that all truth can be arrived at by pure thought, and it is unfounded and harmful. One wonders what state space travel would be in if the Goddards and von Brauns had spent their time searching for the universal laws of rocket construction before trying to build spaceships. AI needs a stronger experimental base. Like other branches of endeavor (notably physics, aeronautics, and meteorology), we should recognize our desperate need for more computing, and do something about it.