“Computing With Connections”, Terrence J. Sejnowski 1987:

[Book review by Terry Sejnowski of Danny Hillis’s 1985 book, The Connection Machine, describing Thinking Machines Corporation & the Connection Machine.]

…The book under review is Hillis’s doctoral dissertation, published just 4 years after the AI Memo. It describes both the design and implementation of a 65,536-processor Connection Machine™, a computer that is now manufactured by Thinking Machines Corporation, a company Hillis co-founded. Curiously, the original motivation for the Connection Machine (semantic networks in artificial intelligence) remains unimplemented. The Connection Machine has found other uses as a general-purpose parallel processor suitable for a wide variety of problems, many unanticipated when the Connection Machine was conceived. In particular, the recent work on connectionist models in artificial intelligence (Feldman & Ballard 1982) and the parallel distributed processing models in cognitive science (Rumelhart & McClelland 1986, Parallel Distributed Processing: Explorations in the Microstructure of Cognition) could greatly benefit from the enormous computational potential provided by the extensible hardware design of the Connection Machine.

Figure 1: Graph of computing power, measured in operations per second, for the largest general purpose digital computers as a function of time. Operations vary from simple boolean evaluations to 64 bit floating point arithmetic and vary in their execution times. Different problems require different mixtures of operations, so the error bars indicate the approximate range of the effective computing power. The Connection Machine (CM) is described in the review. The GF-11 is an experimental machine under development at IBM. A lower bound for the equivalent computational power needed to simulate the synaptic activity in the human brain is given in the text and drawn as a horizontal dashed region at the top of the graph. In primates, the visual system uses about 20–40% of the total processing power.

Brains…Hillis compares digital computers with brains on p. 3. It is difficult to compare their processing powers because we do not yet understand the principles of computation in the brain. Hillis compares the maximum switching rate of gates in a computer (a billion transistors switching a billion times per second) with the maximum rate of firing of all the neurons in the brain (100 billion neurons firing a thousand times a second). However, this is not the right measure, since switching events by themselves are only one part of performing a computation, and both the brain and the digital computer would burn out if all their components were to start switching at their maximum rates for even a short time.

2 more realistic measures of performance are the average processing power, measured in operations per second, and the useful communications bandwidth, measured in bits per second. The processing units in the current-generation Connection Machine are bit-sliced processors with a one-microsecond cycle time and 4,000 bits of memory each. A processor can add 2 numbers with 8 bits of accuracy in 8 cycles, and can multiply 2 numbers with the same accuracy in 64 cycles. Thus, the 65,536-processor Connection Machine can perform a maximum of about one billion 8-bit multiplications per second. The total communications bandwidth between processing units in a Connection Machine is about 10 billion bits per second, but the I/O bandwidth for communicating between the Connection Machine and its host computer is only 500 million bits per second. Only a fraction of the maximum processing power of the Connection Machine may be achieved on a particular problem unless a highly efficient algorithm is found that maps well onto the architecture of the Connection Machine and all of the information needed to solve the problem is resident in its memory. Thus, if the data exceeds 32 MB (the total memory capacity of a 65,536-processor Connection Machine), then the I/O bandwidth may be rate-limiting.
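The throughput and memory figures quoted above follow from simple arithmetic; a back-of-envelope check in Python, using only the cycle times and processor counts given in the review:

```python
# Back-of-envelope check of the Connection Machine figures quoted above.
PROCESSORS = 65_536     # bit-sliced processors in the machine
CYCLE_TIME = 1e-6       # seconds per cycle (1 microsecond)
ADD_CYCLES = 8          # cycles for an 8-bit add
MULT_CYCLES = 64        # cycles for an 8-bit multiply
BITS_PER_PROC = 4_000   # bits of local memory per processor

# Peak 8-bit multiplies per second across the whole machine:
mults_per_sec = PROCESSORS / (MULT_CYCLES * CYCLE_TIME)
print(f"multiplies/sec: {mults_per_sec:.3e}")   # ~1.0e9, i.e. about one billion

# Peak 8-bit adds per second (8x faster than multiplies):
adds_per_sec = PROCESSORS / (ADD_CYCLES * CYCLE_TIME)
print(f"adds/sec: {adds_per_sec:.3e}")          # ~8.2e9

# Total memory in megabytes, quoted in the review as 32 MB:
total_mb = PROCESSORS * BITS_PER_PROC / 8 / 1e6
print(f"total memory: {total_mb:.1f} MB")       # ~32.8 MB with these round numbers
```

The small discrepancy in the memory total (32.8 MB vs. the quoted 32 MB) comes from the rounded 4,000 bits-per-processor figure; with 4,096 bits per processor the total is exactly 32 MiB.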
Firing at a maximum rate of a few hundred spikes per second, a neuron can convey only a few bits per second via its average rate of firing, but it can communicate by direct connections with thousands of other neurons. Hence, the average communications bandwidth used by the brain in moment-to-moment computation is about

(10¹¹ neurons) × (5×10³ connections/neuron) × (2 bits/connection/sec) ≈ 10¹⁵ bits/sec

This is about 10⁵ times greater bandwidth than the current-generation Connection Machine. Importantly, the brain can make effective use of this bandwidth: each synapse between neurons can perform a low-precision addition or multiplication (depending on the type of synapse). Hence, the average processing rate in the brain is at least 10¹⁵ operations per second. This estimate represents the minimal amount of digital computation that must be done to simulate neural operations in real time. It is a lower bound, since we have not taken fully into account the analog operations that occur in dendritic trees. Many of the operations in the brain are analog and could be simulated much more efficiently with analog technology (Mead 1987, Analog VLSI and Neural Systems). The cost of computing has decreased by a factor of about 10 every 5 years over the last 35 years (Figure 1). If this continues, then it will take only about 25 more years (2015) before processing power comparable to that in the brain can be purchased for $3 million in 1987 dollars (≈$8.45 million inflation-adjusted), approximately the current cost of the Connection Machine. David Waltz (personal communication) independently arrived at a similar conclusion, taking into account the cost of memory, communications, and processing. It is very unlikely, however, that this goal can be achieved with the current technology: new technologies, perhaps based on optical computing, are needed.
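The brain-vs-machine estimates above can likewise be checked numerically. A sketch using the review's own figures (10¹¹ neurons, 5×10³ connections each, 2 bits/connection/sec, and a ~10¹⁰ bits/sec Connection Machine bandwidth):

```python
import math

NEURONS = 1e11                # neurons in the human brain
CONNECTIONS_PER_NEURON = 5e3  # synaptic connections per neuron
BITS_PER_CONN_PER_SEC = 2     # bits/sec conveyed per connection

# Average communications bandwidth of the brain (the equation above):
brain_bw = NEURONS * CONNECTIONS_PER_NEURON * BITS_PER_CONN_PER_SEC
print(f"brain bandwidth: {brain_bw:.0e} bits/sec")   # 1e+15

# Ratio to the Connection Machine's ~10 billion bits/sec internal bandwidth:
cm_bw = 1e10
ratio = brain_bw / cm_bw
print(f"gap: {ratio:.0e}x")                          # 1e+05

# If cost drops 10x every 5 years, closing a 10^5 gap takes:
years = 5 * math.log10(ratio)
print(f"~{years:.0f} years")                         # 25
```

This is the arithmetic behind the 25-year (1990 + 25 ≈ 2015) extrapolation: a 10⁵ shortfall at a factor of 10 every 5 years requires 5 × log₁₀(10⁵) = 25 years.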