
Biological limits to information processing in the human brain
P. Cochrane, C. S. Winter, and A. Hardwick

The human brain is a product of Darwinian evolution and as such it has evolved from a set of underlying structures that constrain its ultimate potential. A combination of the physical size of the dendrites, axons and the associated blood vessels, and therefore their related signal space, limits the amount of information the brain can effectively store and process. By analysing the inter-relationship of the key constraints we have shown that:
  • The maximum effective diameter of the human brain is around 10-20cm.
  • The interconnectivity of neurons is dictated by thermal, volumetric, signal processing and transmission constraints, and is not, a priori, a key system parameter for intelligence.
  • Intelligent signal processing inflicts an order of magnitude time constraint on an optimised structure.

Thus we contend that the human brain is at, or near, the capability limits that a neuron-based system allows. This implies that our future evolutionary potential is limited and that, as a species, Homo Sapiens may be near the pinnacle of achievable intelligence using current cellular carbon technology.

Introduction
If the human brain is near the pinnacle of current evolution as the processing centre of the central nervous system, it has a new competitor in the silicon-based computer. Whilst we are in stasis, and have been for many thousands of years, silicon systems are not and are growing in capability at an exponential rate. Our brain weighs in at around 1kg, is about 10cm in diameter and contains an estimated 10^11 neurons. The only creatures with larger brains are dolphins and whales. Even the largest carnivorous dinosaurs had brain cavities only a few centimetres across, whilst the larger herbivores had remarkably small brains. Very little evolutionary analysis has been carried out on the human brain, despite the central importance placed on it as an explanation of Homo Sapiens' success. A number of deceptively simple questions have not been answered, concerning potential limits to its effective size, or whether there are any underlying structural reasons why the brain is arranged in the manner it is.

As one example of a simple structural question that invites analysis, consider the connectivity of the neurons in the cortex medulla. The degree of connectivity is often taken to be an essential feature of a complex intelligent system. But it is not obvious why the brain has evolved the particular connectivity that it possesses - a binary connectivity would be sufficient to access all the data, and full interconnectivity would give faster processing. In this paper we show that, when the combination of processing time at synapses, transmission speed on the axons and density of the components are considered as a whole, the particular connectivity observed is near optimal for processing performance. However, as a model, it appears to be neither a definitive guide value for designing other systems, nor a prerequisite of an intelligent system. A further conclusion we draw is more radical. Looking at the brain size and structure in whales and dolphins, we see man as representing a probable limit of evolvable, efficient brain power using current cellular technology.

Biological organisms function as information processors; they take in information about the environment, process it, and then use the information to locate the energy sources necessary for survival. They are driven, therefore, by entropy processing. The more efficiently organisms process and extract information from the environment, the more successfully they, and their offspring, can continue their existence. These organisms are perpetuated at the expense of less efficient entropy-engines. The last two billion years of information processing have been driven by carbon-based molecular systems which have evolved through a combination of random mutation and selection. Homo Sapiens arose through this molecular-based Darwinian evolution. The future evolution of Homo Sapiens depends on understanding that living creatures are information (and subsequently order) processors; that is, consumers of entropy rather than just energy. This implies that systems that are more efficient at information processing may one day supplant Homo Sapiens. Indeed, for task-specific applications this is axiomatic, as exemplified by the difficulties even the world chess champion, Kasparov, had with Deep Blue. At one stage it was believed that such open-ended games as chess would always be beyond the limitations of computers, but no longer!

These observations lead to a further interesting conclusion. If the ability of an organism to process information about its environment is a driving force behind evolution - that is, if there is evolutionary pressure to evolve better brains to survive - then genetic engineering and other biological options will not help if our brain is inherently limited by its architecture and operational modes. The next step in 'evolution' would then be to appropriate silicon as the intelligence medium and/or derive a collective consciousness through the networking of our wetware. Future evolution would then be driven by mechanisms and forces radically different from those of nature. With (or without?) our help, Darwinian evolution from carbon to silicon could lead to the generation of new species based on a carbon-silicon mix.

The limits of neuronal control
At some point biological systems become inherently limited as they encounter fundamental physical limitations that constrain, direct or prevent further evolution in some direction. The most obvious examples are the limitations on size imposed on insects by their ability to transport oxygen, or the stress limits of bone in land-based mammals dictating the leg thickness needed to support their weight. We were thus prompted to pose the fundamental question: are there limits to the size of a neuronal structure such as the brain?

There are a number of related mechanisms that obviously interact to limit brain size:

  • Thermodynamics:
    The ability to remove heat without occupying all the available space with cooling systems; the human brain generates about the same heat output as a 50W light bulb in a 10cm cavity.
  • Reaction time:
    A minimum reaction time is essential to avoid threats and dangerous situations. This limits the length of transmission lines and synaptic action.
  • Component density:
    There are trade-offs between how closely dendrites, axons and vascular system can be packed and the transmission capacity of the nerves. They take up a finite physical signal space, which is particularly important as the size of an axon determines signal transmission speed. Does this scale with size?
  • Synaptic processing speed:
    'Intelligence' depends on signal processing which occurs at the synapses: are there any processing limits?
  • Pulse width:
    In neurons the pulse width is equivalent to the clock speed. No processing can occur faster than a single clock cycle. If ionic pulses cannot be made any shorter then this will affect the maximum processing speed.
  • Metabolic limits:
    The rest of our metabolism has to create the support system for our brain. More blood means a faster food-to-energy conversion and a stronger neck.
  • Inertial shock:
    A large brain will need more damping for shock absorption when accelerating and decelerating (e.g. brain damage in boxers).

These factors are now addressed in two sections dealing with thermal and with density/transmission/processing limitations. Inertial shock is not dealt with in detail as it appears to be the least important of the factors listed above.

Thermal size limits to the human brain
The human brain generates around 50W in a small, well insulated cavity. From an engineering viewpoint, removal of sufficient heat to prevent thermal overload looks to be a significant problem. Theoretical analysis of the thermal limitations to the size of a brain indicates some restrictions. A larger brain produces more heat, which suggests that thermal considerations ultimately dictate and restrict size. In our analysis the brain is assumed to be structured in a normal mammalian fashion.

Passive versus Active Cooling
The brain can be passively cooled simply by heat conducting out to the surface of the head and thence into the surroundings. This is obviously not the main mechanism or our brains would overheat even at their current size whenever the ambient air temperature exceeds blood temperature! The main mechanism is therefore active cooling by the blood pumped through the brain structure.

Blood Temperature, Flow Rate and Volume
Because lung or skin area could be extended straightforwardly to provide cooling for as much blood as is required, the limiting factor is how fast the heat can be removed from the brain by blood flow. Between the arteries and the veins, blood flows through small capillaries to sustain and vent the brain. These capillaries are thin enough to ensure that the blood comes close to thermal equilibrium. So, to increase cooling, either the blood must be cooler when it first enters the structure, or the flow-rate must be increased above current levels. To avoid the complicated biological discussion of how cold blood could be made or how hot a brain would still work, the temperatures are assumed to be the normal human values in the following discussion.

When the volume of a human body is compared to those of larger mammals, such as mammoths and whales, it can be seen that large mammalian hearts can pump far greater volumes of blood than the human equivalent. It is unlikely therefore that the rate of blood flow required from the heart is a serious limit. In a human, blood usually flows faster in the wider blood vessels than in the narrower ones. To ensure that the speed will not exceed the limit for a mammal, we assume it to be a constant value. This puts our initial model on the optimistic side in terms of forming an estimate and, for determining the importance of thermal restrictions, it is certainly an upper bound.

Blood Pressure
Blood vessels in an organ such as the brain basically form a pair of branched tree structures. A tree of arteries distributes the blood and a corresponding tree of veins collects the blood together again. These two tree structures are linked at their smallest branches, the capillaries, where transfer of heat, nutriment, oxygen and waste-products takes place between the blood and the surrounding tissue. If the organ is made larger whilst still being composed of the same types of tissue then the number of levels in the branching hierarchy is increased. In this way the blood is supplied to a greater number of capillaries, spread throughout the tissue with a similar density to before.

To estimate the extra pressure needed when the level in the branching hierarchy is increased from n to n+1 in an homogeneous brain model, one can simply consider putting together two half-volume brains (of radius R_n) and adding the necessary extra blood vessels (Fig 1). This can then be reshaped to resemble an n-level brain but of double the volume. The new brain radius, R_{n+1}, will therefore be R_{n+1} = 2^{1/3} R_n. Note that this does not represent the neural structure of a human brain, which is highly inhomogeneous; this model is just for the calculation of the blood supply requirements. Neither does it represent the growth of blood vessels in a growing foetus, where extra branches are made at the smallest level and then all the blood vessels increase in size; but the resulting structure is the same, which is all that matters for the calculation's results to be valid.


Figure 1: A 2-dimensional representation of how the blood vessel network increases when the volume of an organ is doubled.

The point where the main arteries of the two n-level brains are joined is now in the middle of the (n+1)-level brain, so an extra artery of length R_{n+1} is needed to take the blood to that point (Fig 1). Similarly an extra vein will be needed. The radius of the new tubes, r_{n+1}, will need to be √2 r_n, where r_n is the radius of the largest tubes in an n-level brain (Fig 2), in order to supply the required (doubled) flow of blood at the same speed.


Figure 2: The main branch in an organ with (n+1) levels of branching in its blood vessels.

The pressure difference across this extra length of blood vessel consists of the separation loss at the point of branching and the viscous drag in the tube.

A separation loss is the pressure needed to overcome an obstruction to fluid flow such as a junction. Taking a rather extreme case of a tee-junction, the loss will be about 1.8ρv², where ρ is the density of blood (roughly 1000 kg m^{-3}) and v the blood speed. The blood speed will be about 0.1 m/s or less, so the separation loss will typically be less than 18 Pa for each level of branching. A human heart produces around 100 mmHg = 13 kPa, so the separation losses are negligible.

The viscous drag (q_{n+1}) across this extra length of vessel can be calculated from the dimensions of the tube, the blood speed (v) and the viscosity (η) by approximating the flow to laminar flow in a straight cylinder and using Poiseuille's formula. The result is q_{n+1} = 8ηv R_{n+1} / r_{n+1}².

For each successive level in the branching hierarchy, this drag term decreases by a factor of 2^{-2/3}, so the total drag through all the levels of branching cannot exceed

q_0 / (1 - 2^{-2/3}) ≈ 2.7 q_0.

The quantity q_0 is the pressure across the smallest blood vessels, the capillaries, which is about 20 mmHg = 2.7 kPa - so the total viscous drag does not exceed about 7.2 kPa regardless of the size of the brain. There is therefore little need for a greater blood pressure just because a brain is increased in size.
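As a rough numerical check on these two pressure estimates, the following short Python sketch (illustrative only; the blood density, speed and capillary pressure drop are the round-number values assumed in the text) evaluates the separation loss and sums the geometric series of viscous drops.

```python
# Illustrative check of the pressure estimates above (assumed round-number values).
MMHG_TO_PA = 133.3

rho = 1000.0   # blood density, kg/m^3 (approximate)
v = 0.1        # blood speed, m/s (upper estimate used in the text)

# Separation loss at a tee-junction: ~1.8 * rho * v^2 per level of branching.
separation_loss = 1.8 * rho * v**2
print(f"separation loss per branch point: {separation_loss:.0f} Pa (vs ~13 kPa from the heart)")

# Viscous drag: the drop across each successive (larger) level falls by 2^(-2/3),
# because R grows by 2^(1/3) while r grows by sqrt(2) when the volume doubles.
q0 = 20 * MMHG_TO_PA                 # drop across the capillaries, Pa
ratio = 2 ** (-2.0 / 3.0)
total_drag = q0 / (1.0 - ratio)      # geometric series summed over all levels
print(f"total viscous drag bound: {total_drag / 1000:.1f} kPa")   # ~7.2 kPa
```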

Blood Vessel Volume
The volume of blood vessels needed can be similarly estimated. To simplify the calculation, it is easier to take levels in the branching hierarchy three at a time, i.e. one blood vessel branching into eight, with the volume of the brain increasing eightfold. Modelling three successive bifurcations of a blood vessel as one eight-way split will lead to an overestimate of blood volume because of the restrictions it places on routing.


Figure 3: Modelling three bifurcations of a blood vessel as one eight-way split

Doubling the brain radius increases its volume eight times but it also necessitates an extra artery and an extra vein to take the blood into and out of the centre of that conglomeration (Fig 4).


Figure 4: Doubling the radius of an organ with branched blood vessels by combining eight smaller organs (before reshaping).

Let R_n, V_n, B_n and r_n be the brain radius, the brain volume, the total blood volume and the radius of the largest blood vessels in a brain with n levels of branching. The doubling of the radius when eight similar units are combined implies R_{n+1} = 2R_n and V_{n+1} = 8V_n. The extra tubes needed to carry away the blood from the eight-way join to the outside of the brain need a cross-sectional area 8 times that of the vessels in the previous level in the hierarchy, so r_{n+1} = 2^{3/2} r_n. Therefore

R_n = 2^n R_0, V_n = 2^{3n} V_0 and r_n = 2^{3n/2} r_0.

The total blood vessel volume in a brain with n levels of eightfold branching is eight times that in a brain with n-1 levels plus the volume of the extra artery and vein that are needed, i.e.

B_n = 8B_{n-1} + 2π r_n² R_n.

Recursively substituting in for B_{n-1}, B_{n-2}, etc. (and taking the blood vessel volume of the basic unit to be B_0 = 2π r_0² R_0) shows that B_n = 2^{3n} (2^{n+1} - 1) 2π r_0² R_0.

The ratio of blood volume to brain volume, Hn, is given by:

H_n = B_n / V_n = (2^{n+1} - 1) 2π r_0² R_0 / V_0.

When the number of levels is increased by one this ratio increases by a factor:

H_{n+1} / H_n = (2^{n+2} - 1) / (2^{n+1} - 1)

As n gets large, this factor tends rapidly to 2, i.e. the ratio of blood vessel volume to brain volume doubles with each doubling of the brain radius.

In a normal-sized human brain, about one fourteenth of the volume is taken up with blood, so the radius could only be doubled about three times before the blood supply uses up half the cranial space. This limits the increase in processing brain volume to about 250 times its present volume. This, rather than heat removal itself, is the dominant limit arising from the cooling requirement.
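A short sketch (illustrative, using the text's figure of one fourteenth for the present blood fraction and the asymptotic doubling rule derived above) counts how many radius doublings are available before blood occupies about half the cranial space, and the corresponding gain in processing volume.

```python
# Rough sketch: the blood-vessel fraction roughly doubles each time the radius doubles.
blood_fraction = 1.0 / 14.0        # present human value quoted in the text
doublings = 0
while blood_fraction < 0.5:        # keep doubling until blood reaches ~half the space
    blood_fraction *= 2
    doublings += 1

radius_factor = 2 ** doublings     # ~8x the present radius
volume_factor = radius_factor ** 3 # ~512x the present total volume
processing_factor = volume_factor * (1 - blood_fraction) / (1 - 1.0 / 14.0)
print(doublings, radius_factor, round(processing_factor))
# -> 3 doublings, 8x radius, ~236x processing volume (the text's ~250)
```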

Interim Conclusions 1
Thermal limitations alone are unlikely to constrain the potential size of a human-type brain until it has increased to at least ten times the present diameter. However the increasing volume used by blood vessels acts as a more severe constraint on the overall system far below that level.

Signal processing limits
As described earlier there are a number of fundamental limits to the transmission and processing power of the brain. We now examine these limits and how they interact to constrain the potential size of any neuron-based central processing system. Although the analysis is not confined to such a model, it is useful to think in terms of intelligence and the interplay of the limiting elements.

Definitions and Background
We are not interested in specific definitions of intelligence, rather in a suitable generic use of the term relative to signal processing and control. Our definition of an intelligent system is one that compares the current environmental state with a set of memories of past states, actions or situations. In other words, the more memories a system maintains and compares, the more 'informed' or 'intelligent' will be its choice of actions. Notice that this excludes any mention of whether smart algorithms are used to make the selection of appropriate responses. Furthermore, the faster a system can process information, and the more information it receives about the environment, the more intelligently it will be able to respond. Our definition of intelligence is effectively a function of the memory space accessed per unit time and the input bit rate.

To calculate intelligence we treat the brain as a control system and not a computational system. Then the efficiency of the brain is measured by the minimum time to process a signal. This means comparing the incoming signal with all possible related information (anything less is partial processing). Thus, in theory, information pulses must have the ability to interact with any point of memory or synapse. The transit time for the pulse to fan out to all possible neurons is effectively the time taken to traverse the extremities of the brain (in any direction).

For ease of analysis we make the assumption that all of the 10^14 synapses function as memory elements, remembering some state information about past actions. The sampling time is taken to be limited by the width of the ionic pulse travelling along a neuron. The brain effectively runs asynchronously at a bit rate of about 100 bit/s. Although not used here (since we are not comparing systems), the input bit rate is the sum of all the sensory nerves. This is dominated by the eye, which has about 127M rods and cones concentrated down to about 1M neurons, and thus gives rise to an input bit rate of about 100 Mbit/s. The rest (sound, tactile, taste and smell) add up to no more than one tenth of this figure.
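A back-of-envelope sketch of that input estimate, using the round figures just quoted (about 10^6 optic-nerve fibres at about 100 bit/s each, with the other senses contributing roughly a tenth as much):

```python
# Back-of-envelope sensory input rate, using the text's round figures.
optic_nerve_fibres = 1e6          # ~127M rods/cones concentrated onto ~1M neurons
bits_per_fibre = 100              # effective asynchronous rate per neuron, bit/s
visual_rate = optic_nerve_fibres * bits_per_fibre
other_senses = 0.1 * visual_rate  # sound, touch, taste and smell combined
print(visual_rate / 1e6, "Mbit/s visual;", (visual_rate + other_senses) / 1e6, "Mbit/s total")
```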

The task of any intelligent control or processing system is not computational arithmetic - it is memory comparison. Our brain has to compare the current inputs with as many memory states as possible to obtain the best possible reaction to the environment. The limit to intelligence therefore lies in an ability to correlate all synaptic outputs in a minimal time.

Transmission speed-size relationship

The diameter of a nerve cell determines the maximum conduction speed of an ionic pulse. To find the transmission speed limits, the first step is to analyse the distribution of charge along the axon from a local injection of current, and the consequent decay length of the pulse:

l = [r_m / (r_i + r_o)]^{0.5} .........(1)

where: l = the consequent decay length of the pulse

r_m = the membrane resistance

r_i = the internal axial resistance

r_o = the axial resistance of the external medium

and r_o << r_i

Both r_m and r_i scale with changes in the radius, R, but whereas the membrane resistance is a surface property decreasing as 1/(2πR), r_i scales as 1/(πR²). Thus doubling the radius halves the membrane resistance, r_m, but quarters the internal resistance, increasing the propagation distance by a factor of √2 ≈ 1.4. The trade-off for this increase is that the volume has increased fourfold.

This local potential depolarises the membrane past the threshold further down the axon, and causes the action potential to advance. The further the local depolarisation extends, the faster the propagation. So quadrupling the diameter of an axon doubles its speed. This is used to great effect in the giant squid, where the nerve axons have diameters as large as 1mm and achieve propagation speeds of 20 m/s at 20°C. Typical values of l in skeletal muscles are 2mm and in small axons about 100 µm.
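A minimal sketch of this scaling, assuming only the proportionalities quoted above (r_m ∝ 1/(2πR), r_i ∝ 1/(πR²), r_o negligible), confirms that doubling the radius buys a factor of √2 in decay length at the cost of a fourfold increase in volume per unit length.

```python
# Sketch of the decay-length scaling l = sqrt(r_m / (r_i + r_o)), with assumed
# proportionality constants; r_o is neglected as it is much smaller than r_i.
import math

def decay_length(R, k_m=1.0, k_i=1.0):
    r_m = k_m / (2 * math.pi * R)   # membrane resistance per unit length (surface term)
    r_i = k_i / (math.pi * R ** 2)  # internal axial resistance per unit length
    return math.sqrt(r_m / r_i)

R = 1.0
print(decay_length(2 * R) / decay_length(R))  # ~1.41: doubling the radius gives sqrt(2)
print((2 * R) ** 2 / R ** 2)                  # ...at the cost of 4x the volume per unit length
```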

To improve the speed further it is necessary to find a way to simultaneously lower the membrane capacitance (C_m), which also slows the speed, reduce r_i, and increase r_m. The evolutionary route to achieve this was to myelinate the axon. By adding layers of insulating membrane around the axon, with gaps at regular intervals, it is possible to force the pulse to hop from gap to gap. The insulating membrane layer has two effects: r_m increases linearly with the number of layers, and C_m falls since each layer acts as a series capacitance. The net effect of myelination is to give a tenfold speed increase at the same diameter as an unmyelinated axon. This principle is used by sensory neurons, which achieve similar conduction speeds to the squid axon but with axons only 1-5 µm in diameter. However, even with myelination, doubling l still requires a fourfold increase in volume. Thus myelination gives a one-off speed increase and permits closer packing of the elements.

Synaptic processing
All the intelligence (in terms of learning) in the brain occurs as a result of changes in the synapses rather than the properties of the axons. As such it is the delay time in crossing the synapses that is key to delivering an intelligent system. The depolarisation time for a chemical synapse is about 2ms. In contrast, when speed is required rather than intelligent reaction, humans utilise minimal pathways (via the spinal cord) and electrical synapses to connect the axons. The electrical synapse has a response time of <100 µs. Interestingly, this implies that learning and intelligence in the brain carry roughly a factor of ten overhead in processing speed.

Transit time
For a single axon the transit time is a function of its length (l) and the speed of conduction (s). The volume is proportional to l·r². The first relationship to note is that the axon volume must quadruple to double the conduction speed, i.e. to halve the transit time. If the volume quadruples then, at the most optimistic, the radius, R, of the brain increases by the cube root of four (ignoring plumbing). The overall effect is that the brain processes information 2^{1/3} (about 1.26) times faster. A quadrupling of the brain size improves intelligence by a mere 25%.
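The arithmetic behind this figure can be summarised in a few lines (a sketch under the stated assumptions: speed doubles for a fourfold axon volume, and the brain radius grows as the cube root of its volume):

```python
# Sketch of the size/speed trade-off under the assumptions stated in the text.
speed_gain = 2.0                 # conduction speed doubles when axon volume quadruples
radius_gain = 4.0 ** (1.0 / 3)   # brain radius growth for 4x total volume (plumbing ignored)
transit_speedup = speed_gain / radius_gain
print(transit_speedup)           # ~1.26, i.e. only ~25% faster for 4x the volume
```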

The thermal studies show that this limit is even harder: as we increase the brain size the amount of plumbing must increase. Doubling the brain radius doubles the proportion of plumbing, from around 7% of the volume to some 14%, reducing the efficiency gain to about 15%. At quadruple the volume the efficiency gain remains around 15%, and at eightfold the volume it actually falls by 12% as fewer elements can now be packed in!

The small myelinated neurons of the 'white matter' (1-5 µm in diameter) manage about 10 m/s. They fill about 90% of the brain space, compared with about 10% for the unmyelinated (grey) nerves, which have a transmission speed of about 1 m/s. The net effect is that the brain cavity (approx. 10cm) can be traversed in about 10-20 ms. This is comparable with the width of the ionic pulse. Myelination has an associated energy cost, which suggests that the balance of myelination in the brain has been selected to optimise speed and minimise energy expenditure. Also, the optimal size of a control system would be one in which the processing was completed in one clock cycle. The degree of myelination is clearly tuned to the pulse width.
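As a quick check of the traversal figures (using the speeds and cavity size quoted above):

```python
# Brain traversal times across a ~10 cm cavity, for the two fibre speeds quoted above.
cavity = 0.1   # metres
for name, speed in [("myelinated (white)", 10.0), ("unmyelinated (grey)", 1.0)]:
    print(name, cavity / speed * 1000, "ms")   # ~10 ms vs ~100 ms
```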

Processing speed
There is also a trade-off between interconnectivity, size and processing time. If there are M processing/memory elements (= synapses), then full interconnectivity requires:

N = M²/2 connecting paths (for M >> 100)

In comparison, a binary system requires:

N = M interconnections

The latter approach then gives P = log_2 N processing steps, and the former a single step.

More generally, for an interconnectivity of I, the number of processing steps:

P = log(M)/log(I)

and the number of paths:

N = I*M/2.

The paths (axons) are larger than the memory elements (synapses). Since N increases with I, the volume must increase with N. The trade-off is now between the time taken to transit the extra volume (proportional to N^{1/3}) and the number of processing steps, P.

Consider the following extreme cases with synaptic processing time 2ms, transmission speed 10 m/s, and 10^15 synapses:

  • Normal brain
    I=10000 which implies P=4; and diameter = 10cm
    Synapse processing time = 10ms
    Brain transit time = 10ms
    TOTAL RESPONSE TIME = 20ms
  • Binary brain
    I=2 which implies P=50; and diameter now 1.25 cm
    So synapse processing time = 100ms
    Axon transit time = 1 ms
    TOTAL RESPONSE TIME = 101ms
  • Fully interconnected brain
    I=10^15 which implies P=1; and diameter now 100 m
    So synapse processing time = 2 ms
    Axon transit time = 10 000 ms
    TOTAL RESPONSE TIME = 10 000 ms

Clearly the system as a whole will be matched when the transit and processing time are equivalent. Interestingly the resulting figures are more or less equivalent to the pulse width, showing a degree of universal optimisation.

The synapse processing time of about 2 ms allows us to predict the number of synapses in the path to be about 4-5 (to give a matched delay). This gives an interconnectivity of the order of 10^4. Hence, using this analysis, we can say that the brain has an interconnectivity of 10^4 not because this is ideal for some algorithmic reason, but because this is the ideal trade-off, for this particular combination of cellular technologies, between processing and transmission speeds. When modelling the brain we should therefore be wary of placing undue importance on interconnectivity arguments.
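The matching argument can be illustrated with a few lines of Python. The synapse time, axon speed, synapse count and brain diameters below are the representative values quoted in the three cases above; any small differences from the rounded totals in the text arise only from not rounding the intermediate step count.

```python
# Illustrative recomputation of the three extreme cases, using the text's quoted values.
import math

M = 1e15       # memory elements (synapses)
t_syn = 2e-3   # synaptic processing time, s
v = 10.0       # axon conduction speed, m/s

cases = {      # interconnectivity I -> brain diameter in metres (figures from the text)
    "normal (I=1e4)": (1e4, 0.10),
    "binary (I=2)": (2, 0.0125),
    "fully connected (I=1e15)": (1e15, 100.0),
}

for name, (I, diameter) in cases.items():
    P = math.log(M) / math.log(I)        # number of processing steps
    total = P * t_syn + diameter / v     # synapse time plus transit time
    print(f"{name}: P={P:.0f}, response ~{total * 1000:.0f} ms")
```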

Fig 5 shows the overall consequence, for information processing capability per unit time, of changing the interconnectivity and/or increasing the size of myelinated neurons to obtain more speed. Note the broad performance plateau, with the human brain lying about 20-30% below the optimum, and with the optimal processing ability corresponding to a brain about twice the current volume. Increases in size give little performance gain but would cause immense inertial shock problems. Also note that the flatness of the plateau permits quite wide errors in assigning modelling values to the various parameters without significantly changing the conclusions.

Figure 5: Information processing vs axon speed and interconnectivity.

Interim Conclusions 2
We have outlined the general trade-offs between pulse width, transmission speed, synaptic processing and neuron density. In doing so it has become clear that there is very little advantage in increasing brain speed by increasing axon size (approximately 25% improvement for doubling the axon diameter, i.e. a fourfold increase in volume). Most of this is lost when the increased blood vessels are taken into account. Furthermore we have shown that, when synapse processing is allowed for, the optimal size is when pulse width = transmission time = synapse processing time. This occurs at about 10cm. A complete plot would show effective intelligence against transmission speed and interconnectivity. The third evolvable variable - synapse processing time - is more difficult to include, and pulse width affects all the values. Fig 5 shows the effect of varying the interconnectivity from 2 to 10^5 and the transmission speed from 1 m/s to 400 m/s.

Limitations to biological enhancements
The above analysis points to an interesting conclusion: genetic engineering could not be used to make a significant (ten-fold) difference to our information processing ability, since it would require simultaneously improving:

  • Transmission speed (evolve a better insulator or possibly an organic polymer as a conductor)
  • Synapse processing speed (without affecting memory ability)
  • Pulse width (which arises from the same fundamental mechanism that preserves a potential across all cell membranes and so must not disturb cellular function)
  • Thermal dissipation and energy transport

This appears to be a severely difficult undertaking. In a similar manner, drug-based enhancement may marginally improve the use of inefficiently arranged or used sub-components, but can never realise significant enhancements for the same reasons. There is a role for drugs and genetic engineering, but it is solely to reach the ceiling, not to raise it.

It is reckoned that we use less than 10% of our mental capacity. Is that because we have just built layer upon layer of cells on top of disused applications, with new applications piled high? For example, do we still have our "swinging from a tree and using our tail for balancing" algorithm somewhere toward the inner core - and many more? And are these now overwritten by our "stand up and walk" and/or "what the heck is quantum mechanics" reasoning/intelligence algorithms? It could be that our brain now resembles the layering we see in huge software programmes for silicon brains. Do we ever throw away pre-programmed abilities and learning algorithms?

Concentrating
Recent studies have postulated that the noisy nature of our individual synapses, arising from their construction from just ~1000 molecules, is overcome by signal averaging. It would appear that the acts of precision throwing, running and jumping necessitate a state of concentration that requires our brain to focus. So hundreds of thousands of neurons act in unison to achieve a greater precision of action, and perhaps thought! A consequence of this is a decision-making loop of the order of 100ms from the visibility of a threat, through its recognition and our decision, to message transmission for muscular response. A consequence of such a slow averaging and decision process is the downgrading of many earlier estimates of human mental abilities. If it were not for Hebbian decay, dreaming, and the ability to have a temporal depth of storage, we would run out of brain power very early in our childhood.

Final conclusions
From our analysis we conclude that the brain of Homo Sapiens is within 10 - 20% of its absolute maximum before we suffer anatomical and/or mental deficiencies and disabilities. We can also conclude that the gains from any future drug enhancements and/or genetic modification will be minimal. Interestingly, the brain cavity of Neanderthal man was up to 20% larger than that of Homo Sapiens, and for a time the two species coexisted. But ultimately the Neanderthals became extinct - and we are now on our own!

In contrast to our biological brain, the advancement of the silicon brain (computer) is exponential, and will continue to be so for some considerable time. Given that we are effectively in a mental stasis, and silicon systems are not, we may soon see an extension of our present richness of male and female minds, to include machines. It may even be that mankind already has a new and genetically different competitor!

Further Reading
1. Z.W.Hall, "An Introduction to Molecular Neurobiology"; Sinauer Assoc., (1992).
2. N.P.O.Green, G.W.Stout, D.J.Taylor, "Biological Science 1 & 2"; Ed. R.Soper, Cambridge University Press, (1990).
3. B.Jennett, K.W.Lindsay, "An Introduction to Neurosurgery (5th edition)"; Butterworth-Heinemann Ltd., (1994).
4. W.H.Calvin, "The Ascent of Mind"; Bantam Books, (1991).
5. J.F.Douglas, L.M.Gasiorek, J.A.Swaffield, "Fluid Mechanics (3rd edition)"; Longman Scientific & Technical, (1995).
