Algernon's Law:
A Practical Guide to Neurosurgical Intelligence Enhancement Using Current Technology.
A.k.a. "Neurohacking 102".

DEFN: Algernon's Law: Any simple major enhancement to human intelligence is a net evolutionary disadvantage.
DEFN: Algernon: Any human who, via artificial or natural means, has some type of mental enhancement which carries a price.
NOTE: The "g" in "Algernon" is pronounced "j", thus "Al-jer-non".
Despite the literary origin, I use the term as if it were derived from the root "Algern" - thus, the adjectival form is "Algernic", not "Algernonic".

Algernon's Law 1.1 is ©1996 and ©1999 by Eliezer S. Yudkowsky. All rights reserved.

Created:  11/18/96
Creation time:  One week
Updated:  2/26/99



Introduction:

NOTE: A footnote signifies information which is interesting but not necessary, such as a quibble (*), example, or expansion. For your reading convenience, the footnotes contain links back to their point of origin.
NOTE: The "Theory" sections are rather more formal than "Practice" sections, so if you don't go in for explanations, you can skip them. You won't become a great neurohacker, but you should still be able to get something out of the page.

Theory.

Algernon's Law is derived from the observation that the human brain is simultaneously in a state of evolutionary flux and at a local optimum. Our brains recently expanded enormously in both size and function. Most of the algorithms now being used are probably evolutionarily recent. However, all of the major natural mutations and innovations seem to have already occurred. The best evolution seems able to muster is a slow, upward crawl of the average IQ (*).

Evolution at this point would consist primarily of small optimizations that make us less vulnerable to insanity and other inconveniences. Since many of the major cognitive mechanisms are evolutionarily recent, there should be few super-interdependent mechanisms that would make additional evolution difficult. If there are any simple mutations (*) that would grant a huge intelligence boost, they must have side effects that would make them net evolutionary disadvantages.

Despite this, there is still hope for the coming Age of Neurohackers, the gestating discipline of cognitive engineering, and the Singularity dependent upon them. Not all evolutionary disadvantages are fatal, intolerable, or unmitigable. Global cognitive degradation may be tolerable in exchange for a powerful local enhancement.  If the resulting enhancement is powerful enough to be impossible without diverting nearby cognitive resources - impossible, that is, without Specialization - the effect would be to increase the collective intelligence of humanity, to widen the range of problems our combined intelligence can solve (*).

We have resources not possessed by biological evolution, including high-speed computers (*) and more storage space than the genome (*). We can design packages of interdependent, mutually compensatory, and individually detrimental modifications that evolution would require eons to assemble. And that which benefits our genes does not necessarily benefit us; we may be able to increase the amount of energy we devote to our own projects by removing some of the leashes and goads evolution has gifted us with.

Algernon's Law is named after a mouse, Algernon, from the story Flowers For Algernon by Daniel Keyes.

Practice.

I am not the first "neurohacker", nor can I claim credit for being the first to invent the discipline, although I did coin the term independently, and may have been the first to "popularize" it to the extent of getting other people to use it.  Neurohacking was invented by David Pearce, master of neuropharmacology in the service of joy.  To the best of my knowledge, the first version of this page was historically the second major branch of human cognitive engineering, neurosurgery in the service of intelligence.  Nobody has yet invented a third, but one hopes it is only a matter of time.

The most fundamental attitude shared within the (two-person) neurohacker community is the idea that human minds aren't perfect; that, with the various tools at hand, the mind can be improved - using hardware, not software.  To a neurohacker, the mind is not some pristine landscape of perfection which we are proposing to mess up; no, minds have their moments of beautifully elegant design, but there are all kinds of hideous problems.  The mind, the brain, is something that can and should be redesigned and altered to our own specifications, and that is what we neurohackers are setting out to do.  I'm very consciously taking a "cowboy" attitude through this whole document to counter the image of the mind as an untouchable shrine.

Algernon's Law is a neurohacker's tool, not a physical law like the lightspeed limit.  Every now and then, I get email from people who've come up with all sorts of exceptions and situations where Algernon's Law doesn't apply.  Of course there are situations where it doesn't apply.  That's the point.  The question is not whether Algernon's Law is really a Law or a Guideline.  The question is whether, if you're going to start a project on intelligence enhancement, you want to know Algernon's Law.  Algernon's Law is indispensable to anybody working on Intelligence Amplification, in the same way that Murphy's Law is indispensable for engineers.  Would you want to drive over a bridge designed by people who'd never heard of redundancy or Murphy's Law?  Would you want a government run by people with no concept of TANSTAAFL?  Would you want your brain tampered with by people who think you can get endless enhancements just by wishing for them?

Murphy's Law isn't absolute.  TANSTAAFL isn't absolute.  Algernon's Law isn't absolute.  But you'd damn well better know them by heart or your project is going to go down the tubes.  Algernon's Law can be outwitted, but it can't be outwitted by accident.

And in response to another set of emails:  Neurohacking is about hardware, not software.  It's about the neurosurgery and pharmacology, not self-improvement.  Humanity has been fooling with internally imposed "better ways of thinking" for the past two and a half millennia and not gotten anywhere.  Well, I suppose science and Socrates count for quite a bit, but it's not enough; there are limits to what you can do by altering the contents of the mind; software won't let you change the rules.


Specializing on an ability.

Theory.

DEFN: Module:  A hardware-level cognitive object, corresponding to a specific set of neurons, which contributes to intelligence by processing information in a particular domain.
DEFN: Cognitive resources:  Any cognitive element which exists in limited or conserved supply, including both low-level "hardware" resources and high-level "software" resources (*), which cannot be reallocated on a short time-scale.

The mind is composed of many modules.  These modules don't have easy-to-understand, morally significant partitions such as "artistic ability" and "creativity"; instead they have partitions like "causal analysis", "combinatorial design", "symbol formation", and so on.  Modules give rise to true abilities in particular subjects through their interaction.  Modules are defined to ground in specific neural hardware; while "software" modules might exist, they are not subject to enhancement through Specialization.

Cognitive resources are "sticky", not easily reallocated.  A cluster of a few million neurons can't process sound one moment and visual information the next.  Again, while there are mental resources that switch rapidly as part of routine information-processing, such as space in short-term memory and focus of attention, these are not subject to enhancement through Specialization.
 
DEFN: Specialization:  The enhancement of a single module, or set of modules, through the diversion of additional cognitive resources to those modules.
DEFN: Specialist:  A Specialized human.

The object of Specialization is to increase the number of neurons (or other cognitive resources) allocated to a module, thus increasing the quantity and quality of information-processing.  While cognitive resources may not be strictly conserved, increased allocation beyond a certain point will cause cognitive resources to be diverted from other modules.  Specialization goes past that point, thus arriving under the aegis of Algernon's Law.

Allocations are altered by placing a module, or set of modules, under continuous stimulation; high activity within a module causes that module to be favored for resource allocation, especially in childhood.

Modules implementing evolutionarily recent abilities, including most rational cognition, tend not to be clearly localized.  (Those that are localized at all are often localized to "somewhere in the frontal lobe".)  Emotions, on the other hand, often have clearly identifiable locations (usually in the limbic system - the center of the brain); generally within areas that are neuroanatomically distinct, and were discovered long before neuroimaging.  It is considerably more plausible to suppose that emotions can be directly stimulated, perhaps even with off-the-shelf equipment, such as neural stimulators developed to cure epilepsy.
 

The Key Hypothesis:
Specific emotions have evolved to invoke specific cognitive abilities.

Some tentatively identified correlations, with rationalizations:
The joy of success invokes symbol formation, to reify the useful concepts resulting in the success.  Despair invokes causal analysis, to determine the cause of the failure, especially repeated failures.  Enthusiasm invokes planning, specifically the short-term planning of determining the next immediate goal.  Frustration invokes combinatorial design, to build the better mousetrap or find a better way.
 
NOTE: Generally speaking, negative emotions tend to be associated with more powerful abilities.  The terms "negative" and "more powerful" are overgeneralized, but the tendency still exists:  Those emotions present when things are going wrong invoke abilities that are deeper, more drastic, than those invoked when no threat is present.  Of course, I may be prejudiced.

There are two explanations for the link between emotions and cognitive modules.  The first explanation is that it's an evolutionary advantage to use certain abilities in certain situations that evoke certain emotions, so triggers evolved between (pre-existing) abilities and emotions.  The second explanation:  Emotions are the most ancient part of the brain - even apes and dogs have emotions.  Emotions contain desires, instincts, reflexes, and intuitions - once, they were the whole of the mind.  Intelligence began as abilities attached to this core.  In other words, the link was not created afterwards - that's where intelligence comes from.  The two explanations differ primarily in style; I can't think of a practical difference, but the second one is more beautiful.

Side effects:  Specialization will result in side effects.  The enhanced ability will grab whatever cognitive resources are available. Other cognitive abilities will have their 'style' warped towards the methods used by the enhanced ability. Cognitive modules producing the same types of output as the enhanced module may be ignored. Cognitive modules requesting similar resources may be starved. Cognitive modules getting inputs from the same source may be shut out, although probably not. Also, the effects may be recursive:  All abilities stimulated by use of the enhanced ability may also grow at the expense of competing abilities.

The simplest form:  Paul gets more neurons, Peter gets less.
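
To make the zero-sum picture concrete, here is a minimal sketch - hypothetical module names and numbers, nothing neurologically calibrated - of a fixed resource pool in which boosting one module's share necessarily shrinks everyone else's:

    def specialize(allocations, target, boost):
        """Divert a fixed pool of cognitive resources toward one module.

        allocations: dict mapping module name -> fraction of the pool (sums to 1.0).
        target: the module being Specialized.
        boost: extra fraction of the pool handed to the target.
        The rest scale down proportionally - Paul gains, every Peter loses.
        """
        others = {m: f for m, f in allocations.items() if m != target}
        remaining = 1.0 - (allocations[target] + boost)
        scale = remaining / sum(others.values())
        new = {m: f * scale for m, f in others.items()}
        new[target] = allocations[target] + boost
        return new

    baseline = {"causal analysis": 0.25, "planning": 0.25,
                "symbol formation": 0.25, "similarity analysis": 0.25}
    print(specialize(baseline, "causal analysis", 0.15))
    # causal analysis rises to 0.40; the other three drop to 0.20 each.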

As always, the result will be a net evolutionary disadvantage in the ancestral environment.  This is partially fulfilled simply because, outside of an artificial environment, moderate levels of all abilities were needed to survive, and being the tribal expert at chess wasn't enough to offset being eaten by a tiger.  However, it's also likely that the architecture of the mind is sensitized to a particular resource allocation, or that internal subsystems of a module are optimized for a particular resource level, in which case the Specialist would lose more (in absolute terms) than was gained.  This doesn't mean Specialization is undesirable - if the result is a never-before-achieved height in a particular ability, that increases humanity's total ability to solve certain kinds of problems.  This is a Good Thing.  If the Specialist is happy, there's nothing wrong with having a few blind spots.

Practice

A Specialist in ability A is created by the following causal chain:  Stimulating a specific emotion E at neuroanatomical location L activates specific modules M contributing to that ability A, resulting in the diversion of cognitive resources from sacrificial abilities S.

The practice of Specialization consists of filling in all the blanks.
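
Purely as bookkeeping - the field names below are mine, and the filled-in values are illustrative guesses, not results - the causal chain can be written as a record whose blanks get filled in by the research steps that follow:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class SpecializationPlan:
        ability: str                                          # A: the target ability
        modules: List[str] = field(default_factory=list)      # M: modules contributing to A
        emotion: Optional[str] = None                         # E: emotion that invokes M
        location: Optional[str] = None                        # L: neuroanatomical site to stimulate
        sacrifices: List[str] = field(default_factory=list)   # S: abilities that pay for it

    plan = SpecializationPlan(ability="combinatorial design")
    plan.modules = ["combinatorial design", "causal analysis"]  # guessed from protocol analysis
    plan.emotion = "frustration"                                # from the emotion-candidate tests
    plan.location = "somewhere in the limbic system"            # from neuroimaging / stimulation
    # plan.sacrifices stays empty until observed - there is no predicting it in advance.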

You will need:

To perform the following research: This is, of course, a rather abbreviated summary, on which I will soon expand.  Two steps I've left out entirely are (1) trying to reason out which emotion will invoke an ability, and (2) trying to anticipate which abilities will be sacrificed by the diversion of cognitive resources.  I will treat both items, but - while they make for fascinating speculation - I highly doubt that our knowledge of the subject allows effective de novo prediction in either area.  In both cases, the only way to find out is to try it and see.
 
NOTE: If you're just trying to duplicate the powers of an existing (and seriously overworked) Specialist, try cloning.  Of course, since there is no particular reason to suppose the Specialist's alteration is genetic in origin, this may not work.  It also takes time.

An even simpler tactic, which takes far more money but is entirely legal:  Conduct a massive hunt for existing Specialists.  (Use fMRIs for final confirmation.)

Step one:  Protocol analysis involves going over the subject's comments during problem-solving to extract information about how the problem was solved. This can include the contents of short-term memory, the subgoals generated, the sequence of deductions, the types of properties perceived as salient, the strategies being used, the associations, similarities and analogies perceived, and the subject's feelings of progress or failure.  Standard protocol analysis remains a labor-intensive undertaking because every bit of speech must be translated into a formal code to be usable as scientific data. But, since this is an engineering project, we can use the TSAR - That Sounds About Right - methodology.
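
As a sketch of what TSAR-grade protocol coding might look like - the codes and cue phrases below are invented for illustration, not a real coding scheme - each utterance gets tagged with whatever categories sound about right, and the resulting tallies are what you later try to line up with the neuroimaging:

    from collections import Counter

    # Invented, coarse-grained codes; real protocol analysis schemes are far finer.
    CODES = {
        "goal":       ["i need to", "next i'll", "the plan is"],
        "causal":     ["because", "that would cause", "so the reason"],
        "evaluation": ["that didn't work", "this is wrong", "looks right"],
        "analogy":    ["it's like", "same as", "reminds me of"],
    }

    def code_utterance(utterance):
        """Return every code whose cue phrases appear in the utterance (That Sounds About Right)."""
        text = utterance.lower()
        return [code for code, cues in CODES.items() if any(cue in text for cue in cues)]

    transcript = [
        "OK, the plan is to split this into two classes.",
        "That didn't work because the base class owns the buffer.",
        "It's like the problem we had with the parser last week.",
    ]
    print(Counter(code for line in transcript for code in code_utterance(line)))
    # Counter({'goal': 1, 'causal': 1, 'evaluation': 1, 'analogy': 1})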

Theoretically, if your neuroimaging is producing adequate results, you can run the whole Specialization project using nothing else.  Certainly neuroimaging-based results should be given a higher priority than protocol analysis, since interpretations - as opposed to hard data - are always suspect.  On the other hand, if you don't have at least some idea of which modules you're targeting, you're flying blind.  Like evolutionary reasoning, protocol analysis is not reliable, but it's what makes sense of the project.

Alas, protocol analysis introduces a Heisenbergian problem:  Thinking out loud and self-awareness are both cognitive abilities in their own right.  You can try "masking out" the signatures of those abilities, but that's probably too sophisticated to be practical.  Alternatively, try generalizing from protocol-analyzed problems to problems solved by an undisturbed subject.  "Aha!  He's twiddling his fingers while writing the class functions, just like (last time/subject B)."

Next step:  Correlating with neuroimaging.  Neural activity patterns usually don't light up a specific area of the cerebral cortex. It'd be nice if it was that simple, but it's not. The reason we check for signatures (whole-brain neural activity patterns) instead of cortical activity levels is that (1) modern neuroimaging isn't fine enough to look for increased activity in a small area and (2) increased information processing does not necessarily mean increased rates of neural firing so (3) we have to look at patterns in the overall activity levels, everywhere in the brain.  As for using neuroimaging to determine the cognitive nature of tasks... fat chance.

I honestly don't know how much neuroimaging will be useful for singling out cognitive abilities; that's the other reason I've been talking about protocol analysis.  It may be, for example, that neuroimaging will only help tell us when two tasks being performed are different. The ideal:  A particular set of activation levels (and non-activation levels), of either specific frontal areas or perhaps assorted other areas, which is present only during particular stages of the task displaying characteristic styles in protocol analysis.
 
DEFN: Signature:  A pattern of observable characteristics connected to a single cognitive module, preferably only to that module, which can be used to determine whether that module is active.  Usually, either a neuroimaged set of activity levels, or a class of subtasks in protocol analysis; preferably both.
NOTE: There is no real way to tell whether a signature applies to a single module, or a whole set of modules forming an ability.  Some emotions (perhaps all) invoke sets of modules.  You may have to settle for a superset, subset, or intersecting set.  Do not try to use more than one emotion.
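
For what using a signature might look like once you have one - the region names, numbers, and threshold below are placeholders, and real signature-matching would be far messier - store the signature as expected activity levels over regions of interest and score an observed scan by correlation against it:

    import math

    def correlation(xs, ys):
        """Pearson correlation of two equal-length activity vectors."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Placeholder signature: expected activity over a handful of regions of interest.
    SIGNATURE = {"dorsolateral prefrontal": 0.8, "anterior cingulate": 0.6,
                 "hippocampus": 0.2, "amygdala": 0.7}

    def module_active(scan, signature=SIGNATURE, threshold=0.9):
        """Crude test: does the observed pattern correlate strongly with the stored signature?"""
        regions = sorted(signature)
        return correlation([scan[r] for r in regions],
                           [signature[r] for r in regions]) > threshold

    observed = {"dorsolateral prefrontal": 0.75, "anterior cingulate": 0.65,
                "hippocampus": 0.25, "amygdala": 0.7}
    print(module_active(observed))   # True - this scan fits the pattern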

Next step:  Find the emotion that produces the signature pattern.  Assuming we have a list of emotions to be tested, we could ask the subject to consciously invoke those emotions, and then attempt to solve (and protocol-analyze) problems in the target skill. Then, use fMRI scans to find out what areas show high activation levels while the emotion is invoked, or even look it up from previously done research. Insert a test wire into the subject's brain, apply a mild current, and see if this invokes the ability desired.  That is, start with software (conscious) invocation, then move on to a hardware test with the best candidate produced.
 
NOTE: When I say "wire" and "current", that's just neuro-cowboy slang - convenient shorthand for the actual stimulation method you're using.  It's not a good idea to use electrical current in the long term; it produces chemical byproducts.  There are less damaging neural stimulators.

If you're running a full-scale Specialist assembly line, there is a more efficient way.  Have a single standard subject, or group of subjects, with arrays of wires distributed over most of the limbic system.  (Also known as "zone implants".  Stephen R. Donaldson, the "Gap" series.)  Send current down each wire and test for the ability signature. This method has three advantages. First, there is no possibility of different subjects giving alternate interpretations; emotions are subjective, neuroanatomy is not. Second, this method better resembles what we'll be doing to our Specialist - a consciously evoked emotion may differ from one electrically imposed. Third, we can develop an entire library of signatures and decrease the time needed for this stage.
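
The assembly-line bookkeeping, as a sketch - every name here, and the stimulate/scan/signature_match plumbing, is a hypothetical stand-in, not anything that exists: walk the implant array, stimulate each site, and file whatever known signature shows up.

    def build_signature_library(sites, stimulate, scan, signature_match):
        """Map each known ability signature to the stimulation sites that produce it."""
        library = {}
        for site in sites:
            stimulate(site)                   # drive a mild current at that site
            match = signature_match(scan())   # compare the resulting pattern to known signatures
            if match is not None:
                library.setdefault(match, []).append(site)
        return library

    # Stubbed-out "hardware" for illustration; the real versions are the hard part.
    fake_results = {"limbic-01": "keer", "limbic-02": None, "limbic-03": "enthusiasm"}
    state = {}
    def stimulate(site): state["site"] = site
    def scan(): return state["site"]
    def signature_match(pattern): return fake_results[pattern]

    print(build_signature_library(fake_results, stimulate, scan, signature_match))
    # {'keer': ['limbic-01'], 'enthusiasm': ['limbic-03']}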
 
NOTE: If you're running a really large operation, catalog every distinct emotional element you can find (by stimulating random areas); produce a dozen Specialists for each.  This avoids all the research steps, leaving only testing and implementation.  Some of them probably won't be Specialists, and you may want to avoid negative social emotions like hatred, but this method does cover all the bases.

Next step:  Once you've got a good candidate emotion, confirm that it actually has something to do with the target skill or the target module.  Switch on the wire and check to see how it affects a few sample problems.  For example, look for a particular strategy that gets tried more often.

Evolutionary rationalization:  Once you know the basic theory (above) and the various sample rationalizations of tentative correspondences scattered through the rest of this document, you're on your own.

Analyzing side effects:  There is basically no way you can predict this in advance.  All the correlations I've been able to find and rationalize depend on post hoc guesses about the basic cognitive processes underlying the cognitive module.  Observing the side effects provided some Major League Clues that show up in Coding a Transhuman AI, but your chance of running that in reverse is effectively zero.

Next step:  Test it on chimpanzees - adult, prepubescent, and infant alike.  If they don't have an analogous location in the limbic system, poke around until you find a location that stimulates the closest analogue emotion you can find.  Wire 'em up.  Leave 'em that way for a year.  If they keel over and die, don't try it on humans until you know why.  If you want to see how their problem-solving abilities are affected, have lots of fun; this is even legal, so you can publish it.  If you try it on rats even before chimpanzees, name your favorite "Algernon".  If you're a student who can hack nonhuman neurosurgery or who knows someone who can, this is a good way to win the Nobel Prize for your science fair project.

Next step:  Wire up the kids.  Thank you for observing all safety precautions.  Make sure you've got informed consent, and not from their parents, either.  Aside from that, the care and feeding of these people - they will be people; Specialists become capable of adult reasoning much earlier - is an entire document.  If you get that far, call me.  In fact, call me before you even start.


Reengineering emotions.

DEFN: Reengineering:  Altering a mind's emotional makeup or cognitive processes, so as to deliberately remove or alter a characteristic which, while an evolutionary advantage, is undesirable in some context. 

Humans are not designed for 9-to-5 workdays; we are even less designed for weeks or months of continuous effort. We get bored. We get tired. We get frustrated. We burn out.  We lose willpower.  We run out of mental energy. Isn't that odd? Our intuition to the contrary, there's no reason why people should have internal fuel tanks. In the physical world, there's this thing called conservation of mass and energy, but there's really no reason why mental energy shouldn't appear out of thin air. There's no such thing as conservation of information on a computer. If you want two of a file, you can copy it. Why not copy mental energy? Is there some chemical, endorphins or a particular neurotransmitter, which is inevitably depleted by mental effort?

It's not an evolutionary advantage to effortlessly override temptation all the time, but it's not an evolutionary advantage to follow temptation off a cliff, either. We have the ability to override emotional temptation in favor of rational priorities. But evolution can't censor our goals to ensure that they contribute to our reproductive potential. If a human is hungry and someone else has a supply of bread, he might wait for the cover of darkness, or he might maintain his moral purity. Overriding temptation for rational reasons may not contribute to reproductive success. The result is that we have limited mental energy. We become "sorely tempted." We get bored, when we expend energy for no reason perceptible to evolution. We get burnt out, if we continuously pour all our energy into a single task.

I'm not interested in reproductive success and I don't want to spend any energy on it. It would be nice if I could concentrate on a single task for the rest of my life without burning out. Is there any way to summon unlimited mental energy out of nowhere? For the neurology of this question we turn to... part 1 of The Hedonistic Imperative by David Pearce, humanity's expert on tampering with the pleasure centers for the greater good.

"A better clue to organic life's emotional future is to be found in 1951 in a tuberculosis sanatorium for U.S. veterans. Residents prescribed the MAO-inhibiting drug iproniazid were not cured merely of their tuberculosis. After a few weeks of treatment, many of them started to become exceptionally happy. Doctors described their patients, rather over-colourfully perhaps, as "dancing in the halls".  For the most part they had not hitherto been clinically depressed.  Nor was their euphoria simply an understandable reaction to being cured of their disease.  Moreover, in contrast to many recreational drugs, tolerance to the MAO-inhibitor's mood-brightening effects, and the consequent danger of uncontrolled dose escalation, didn't set in. Instead, it transpires that MAO-inhibitors as a class can induce a benign, long-term re-regulation of several families of nerve-cell receptor proteins. Serendipitously, modern medicine had stumbled on the unsuspected but sustainably mood-elevating properties of a remarkable and diverse category of drugs, the monoamine oxidase inhibitors."
We may also want to consider that old standby, stimulation of the medial forebrain bundle - "wireheading", to use Niven's term. Intense current probably wouldn't be a good idea. You pull this stunt on rats, wire their pleasure centers and hook the wire to a lever, they'll keep pulling the lever until they die of starvation, ignoring the food right next to them. If you thought crack was addictive... There's a truly horrifying story based on this concept, Larry Niven's Death By Ecstasy; see also Spider Robinson's Mindkiller. It would probably be a good idea to make sure the wire has a fairly low capacity. It may not be a tremendously good idea to develop the technology in the first place. Ignoring that, we'll forge ahead.

What we want is a constant background hum of pleasure. There's another little-known pleasure center in rats that stimulates satiation. A rat with a wire in that doesn't exhibit self-starvation behavior. It will push the lever, lean back for ten seconds, and then lean forward and push again. It's the pleasure of a full stomach, rather than the pleasure of eating something delicious. (This is a good thing to remember if you're dieting.) Satiation isn't what we want, however. For weeks of continuous effort, we probably do want self-starvation behavior. A background hum of "go go go" is just what we need. A background hum of "just fine" would probably encourage the Algernon to kick back and relax.

In any case, we now have three distinct sources of mental energy. Monoamine oxidase inhibitors, the medial forebrain bundle, and the satiation center (whose location I've forgotten). We'll probably fiddle with the mix in our first few Algernons, trying to minimize addictiveness while maximizing energy. No research has been done in this area yet for fear of triggering the collapse of civilization. It would be ironic if "wireheads", rather than spending their days marinating in pleasure, suddenly acquired the iron willpower to yank the wire out. (The trick would be not reinserting it.) Those rats, after all, didn't have any willpower. They didn't know that not eating was a bad idea. Instead they ignored hunger and thirst and single-mindedly pursued a single goal. The trick will be getting the single-mindedness to focus on cognitive goals, instead of on staying plugged in.

Now that we've got our methods set up, we're ... not quite ready yet. What are the possible side effects of unlimited willpower? We don't know what they are, but to minimize them, we'll want to work with adults. We're not trying to enhance a specific mental ability, we're trying to produce an emotional alteration.

Boredom and burnout could probably be eliminated without too many cognitive side effects, since they are simply mechanisms intended to keep us from expending energy on personal goals rather than reproductive success. Frustration is much more dangerous to tamper with. Frustration occurs when designs are nullified, problems resist solution, or repeated efforts have no effect. Consider the habits of the Sphex wasp, as told in Dean Wooldridge's Mechanical Man, and as brought to the world's attention by Douglas Hofstadter in Godel, Escher, Bach:

When the time comes for egg laying, the wasp Sphex builds a burrow for the purpose and seeks out a cricket which she stings in such a way as to paralyze but not kill it. She drags the cricket into the burrow, lays her eggs alongside, closes the burrow, then flies away, never to return. In due course, the eggs hatch and the wasp grubs feed off the paralyzed cricket, which has not decayed, having been kept in the wasp equivalent of a deepfreeze.

To the human mind, such an elaborately organized and seemingly purposeful routine conveys a convincing flavor of logic and thoughtfulness - until more details are examined. For example, the wasp's routine is to bring the paralyzed cricket to the burrow, leave it on the threshold, go inside to see that all is well, emerge, and then drag the cricket in. If the cricket is moved a few inches away while the wasp is inside making her preliminary inspection, the wasp, on emerging from the burrow, will bring the cricket back to the threshold, but not inside, and will then repeat the preparatory procedure of entering the burrow to see that everything is all right. If again the cricket is removed a few inches while the wasp is inside, once again she will move the cricket up to the threshold and reenter the burrow for a final check. The wasp never thinks of pulling the cricket straight in. On one occasion this procedure was repeated forty times, always with the same result.

No human would ever do that. We'd get frustrated. The emotion of frustration, triggered by repeated, futile efforts, can be summed as: "This isn't working. Try something else." Boredom is "Stop doing this, it's not worth it" and burnout is "Never work on this again, it's not worth it." It is the "try something else" that turns frustration into an emotion with cognitive correlates. Humans, faced with the situation above, would not only stop going inside but would be thrown into a state of meta-reasoning. We would think about our actions, and realize that we don't need to check out the burrow one more time - we can just go straight in. Or we might go inside the burrow, and then look outside to see what happens.

The problem with Reengineering emotions is that, as has been Noted, it's the negative emotions that are associated with the more interesting kinds of intelligence.  The obvious formula - directly stimulating the positive emotions associated with willpower, "overwhelming" boredom and frustration and the other means by which evolution prevents you from keeping on with a job - fails because boredom and frustration are also keyed in to the forms of intelligence you need to solve the problem.

Reengineering emotions requires far more sophisticated neurohacking than mere intelligence enhancement, surprisingly enough.  Our ancient emotions are one of the central dispatching units of the brain; while Specialization merely uses one of the obvious controls, Reengineering attempts to actually modify the functioning.
 
The Method of Emotional Reengineering:
Reduce part or all of the entire emotional system to a neuroanatomical flowchart, install intercepting receptors, suppressors and activators at all key junctions, and replace the previous pattern of interactions with a new functional diagram.

Ambitious program, isn't it?  The idea is that you've been trying and trying to debug a program and you still haven't succeeded.  Your rational mind detects this and sends an activation signal down Path A to the Frustration Module (probably the mammillary bodies or the amygdala; see below).  Because this signal probably contains semantic information useful in focusing the invoked ability, we won't intercept it.  The Frustration Module sends out a semantic activation down Path B to the cerebral cortex; we won't intercept that, either.  But it also sends an activation down Path C to the Boredom Center.  We'll intercept that signal (at one of the two ends of the terminus) and negate it.  Thus the ability is invoked, but energy continues undiminished.
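
Rendered as a toy flowchart - node names, the routing table, and the whole "interceptor" mechanism are illustrative only, not anatomy - the idea is that Paths A and B pass through untouched while Path C, Frustration to the Boredom Center, gets negated:

    # Toy routing model of the interception described above; the flowchart idea, nothing more.
    EDGES = {
        "rational mind":      [("frustration module", "A")],  # Path A: leave intact (carries the semantics)
        "frustration module": [("cerebral cortex",    "B"),   # Path B: leave intact (invokes the ability)
                               ("boredom center",     "C")],  # Path C: intercept and negate
    }
    INTERCEPTED = {"C"}

    def propagate(source, signal, received=None):
        """Push an activation through the flowchart, dropping it on intercepted paths."""
        received = received if received is not None else {}
        for target, path in EDGES.get(source, []):
            if path in INTERCEPTED:
                continue                      # the installed suppressor eats the activation
            received[target] = signal
            propagate(target, signal, received)
        return received

    print(propagate("rational mind", "this design keeps failing"))
    # The frustration module and cerebral cortex both hear about it;
    # the boredom center never does, so energy continues undiminished.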

At a still higher level of sophistication, an entirely enthusiastic person faced with a problem might not wait for frustration, but immediately invoke the frustration-invoked cognitive functions, directly.  This might not work as well, due to the absence of a focus - an artificial signal might not contain the semantic information.  A more sophisticated method would be to take over the pathways leading out of the low-energy-level indicators, activate the signal to the rational mind but not the signal to motivation, wait for it to pick up on any ambient problems, and produce a focused, semantic signal to Frustration.  This is an ad-hoc method and I'm not sure it's reflected in neuroanatomy, but you get the idea.

Incidentally, if you think I'm overestimating the modularity of the brain, let's just say that, as students of brain-damage know, the brain can be very very modular, especially when you're dealing with evolutionarily old functions.  The method is plausible, although not certain.

The Method of Cognitive Reengineering:
Find an ability or emotional intuition which interferes with rational cognition, and develop either an off-switch or indicator lights, so that the interference can be damped or compensated for.

Lying to yourself is an evolutionary advantage.  So are the various forms of sincere political hypocrisy; the evolutionary advantage is to be really sincere about reform - that's the best way to convince people - and then, once you're in power, work to your own advantage.  If at all possible, remain sincere about this being for the peasant's benefit; an opponent who will ruin everything is the usual rationalization.  Note that the politician is being entirely sincere.  He is simply being dangled on puppet strings by evolution, which doesn't care about reform, just the propagation of his genes.  There are entire libraries of emotional lies that operate like this, most of them visible in chimpanzees.  Thus, the hypocritical emotions are evolutionarily old enough to be localized somewhere we can find them and smash them and get good government for the first time in the history of the human race.


Or, simpler still, it is an evolutionary advantage to be able to win an argument.  This holds true regardless of whether or not the argument is correct, especially with arguments in which your pride or social standing or what-have-you is at stake.  In fact, there are probably special modules of the brain devoted to explaining away flaws, suppressing what you know to be true, making emotional arguments, and so on - it's that important to evolutionary success.  An off switch or even an indicator light would be the greatest revolution in the Search For Truth since the invention of science.  Half of my problem-solving ability derives from a few tricks I've picked up for noticing when anti-rational forms of intelligence are in action, and they have a very distinct "feel" that leads me to think they're in separate modules.  Half of figuring out everything I know was simply avoiding rationalization, and it takes a lot of attention.  If that skill could be made automatic and infallible and mass-produced, it would be the dawn of a new age.

That simple deactivation would contribute as much to intelligence as all the other techniques on this page, put together, and it would be reversible and relatively cheap and it would work on adults; why, if it worked, we could run it on thousands of scientists, millions of people, a majority of the entire planet - he said with stars in his eyes.


Natural Algernon: Countersphexist

Theory.

DEFN: Countersphexist:  A Specialist on the tone of keer.  Specialist abilities include (among other things) causal analysis and combinatorial design.  Algernic blind spots include short-term planning, symbol formation, and similarity analysis.
DEFN: Keer:  A tone present in frustration, sorrow, and despair.  The message may be summarized as:  "This isn't working - try something else".  If you don't have time to explain keer, call it "despair".
DEFN: Tone:  An element of emotion, usually present (with other tones) in many emotions.  A tone is a hardware-level cognitive object.

The whole emotions that arise in response to situations that invoke "sorrow", "despair", or "frustration" are actually symphonies of emotional tones; most situations evoke many of these tones, and the most frequent combinations become the nouns English contains for describing emotions.  Speaking in terms of tones is thus considerably more precise than speaking in terms of named emotions.  A tone is a component of an emotion in not quite the same sense that a module is a component of an ability; modules interact to form abilities, but tones merely add together.  A tone is, however, a hardware-level object in the same sense as a module, albeit generally located in a lump inside the diencephalon instead of scattered all over the cerebral cortex.

A Countersphexist is a Specialist on keer.  Keer is a tone with these descriptions/effects:

Keer is present in frustration, sorrow, and despair.  Of these, despair has the most similar connotations.  In a hurry, a Countersphexist can be described as Specializing on despair.

To amplify on the cognitive effects:
 
DEFN: Causal analysis:  The module concerned with cause and effect.  Detects, manipulates, and invents causal patterns and causal linkages.
DEFN: Combinatorial design:  A module concerned with selecting and combining many small functions into a complex pattern performing a large function.
DEFN: Reflection:  A module concerned with noticing and manipulating representations of thoughts.
NOTE: Since the above modules are derived from observation of function, rather than neuroanatomy, it is possible that the "module" is actually several modules - perhaps even that causal analysis and combinatorial design are the same module.

I could write an entire book about causal analysis, detailing nature, grounding, internal cognitive elements, techniques and usage, and so on.  I could write a chapter on combinatorial design and a chapter on reflection, and a book about things you can do with reflection.  Perhaps one day I shall, although I'm not sure any of it would be useful except to other Countersphexists.  For now, I don't have the time, so the abbreviated explanations will have to do.

Practice.

Countersphexism is what I know best.  Elsewhere, I speculate; here, I report and deduce.  The reason, of course, is that I'm a Countersphexist.  To the best of my knowledge, I always have been.

According to my parents, I was an unusually irritable baby; as a child I wouldn't play with the other children or strangers; I didn't start speaking until the age of three (*); and so on - it's a good bet that the Algernic perturbation was present at birth.  I was also an unusually bright child; at the age of five I was devouring Childcraft books, especially the ones on math, science, how things work.  In second grade, at the age of seven, I discovered that my math teacher didn't know what a logarithm was, and permanently lost all respect for school.  Eventually, I convinced my parents to let me skip from fifth grade directly to seventh grade.  My birthday being September 11th, I turned eleven shortly after entering seventh grade.

The first sign of massive improbabilities came shortly thereafter, in (I believe) December, when I took the SAT as part of Northwestern University's Midwest Talent Search.  I achieved a score of 1410:  740 Math, 670 Verbal.  For the 8,000 7th graders (allegedly the "top 5%" of the Midwest 7th grade) who took the test, I came in 2nd Combined, 2nd Verbal, and 3rd Math.  According to the report I got back, this placed me in the 99.9998th percentile.

It wasn't until years later - after writing v1.0 of this document, in fact - that I realized that I had skipped a grade, rendering the statistics totally worthless.  Later investigation indicated that only about 600 6th graders took the test at all, and interpolating from the few "medalists" I could find indicates that the top score would have been around 1200, 600/600.  Between this and the smaller sample size and unknown selection procedures and the puberty barrier and the unknown tendency of other kids to skip a grade, the information I have is basically useless.  Playing around with standard deviations yields somewhere between four and six sigmas from the mean, depending on the reasoning methods.  I don't know.

Figure that the probability, according to a Gaussian curve, was at least one in five million.  This is improbable enough that the non-Gaussian explanation, the Algernic hypothesis, is an acceptable alternative.  Perturbations to any given piece of neuroanatomy probably happen at least that often.
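
If you want to replay the back-of-the-envelope arithmetic, here's a quick sketch; the one-in-five-million figure and the rough six-billion population are the only inputs taken from the text, the rest is standard-normal tail arithmetic:

    from math import erfc, sqrt

    def upper_tail(sigmas):
        """P(Z > sigmas) for a standard normal variable."""
        return 0.5 * erfc(sigmas / sqrt(2))

    for z in (4, 5, 6):
        print(f"{z} sigma: about 1 in {1 / upper_tail(z):,.0f}")
    # roughly: 4 sigma ~ 1 in 32,000; 5 sigma ~ 1 in 3.5 million; 6 sigma ~ 1 in 1 billion

    # One in five million sits a little above 5 sigma.  And with about six billion
    # people, chance alone still hands you on the order of a thousand such scores:
    print(6e9 * (1 / 5e6))   # 1200.0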

Of course, it's not just the SAT scores that gave rise to the whole hypothesis.  If you consider all the blind spots and the emotional tone and the specific abilities, it becomes almost certain.  Problem is, I couldn't have described any of these things in those terms before I invented the Algernic hypothesis, so of course they're all suspect as being - psychosomatic? psychocognitic? - ex post facto.  I will continue, then, with events that occurred before I had the current explanatory framework.

During the latter half of eighth grade, at the age of twelve-and-some, and probably right before puberty, the other shoe dropped.  (Unfortunately my memories of this time are unclear to the point of nonexistence; my five-years-younger brother remembers my graduation, but I had to ask him whether or not I'd graduated.  In all seriousness, I didn't know whether I'd completed grade school.  None of my memories have ever been synchronized by time; I can't tell you, of my own memory, when The Other Shoe Dropped in relation to my age, the date, where I was in school, or much of anything else.  It might even have happened during late 7th grade, but I seem to recall the second half of 8th.)

The Other Shoe Dropped when my mental energy level, never high to begin with, fell to zero.  For a while, my parents forced me to continue with school, but I never got through the morning.  So exited the class intellectual.

This was later misdiagnosed as depression.  A very idiosyncratic type of depression, if so.  No feelings of worthlessness.  No hatred, self- or other-.  No despair.  Then, as now, I saw major problems with civilization, but I also saw solutions.  I was a pessimist, but I had plenty of hope.  The "depression" manifested as a lack of mental energy and that was all.  (These views date back to before the Algernic hypothesis, and may be considered as supporting evidence rather than confirming details.)  The etiology of what we call "depression" is unknown, and is probably at least a dozen, maybe hundreds of separate problems in various permutations, but my case wasn't one of them.

Children, as you may have noticed, have a far higher energy level than adults.  They are naturally more enthusiastic.  Children, I think, are not so vulnerable to the energy-diminishing aspect of the keer tone; children, who are often asked to do things for which there is no understandable purpose, dislike frustration but aren't stopped by it.  Even as a child, I was "subdued" - very outspoken, but subdued in the sense of having a low (almost adult-low) energy level.  As a neurological adult, I could act, and make choices, but I couldn't do anything, exert any sort of willpower more than once.  This continued until around the age of 16, when I discovered the Algernic explanation and began to learn the skills needed to live in a Countersphexist's mind.

Let's return to the issue of Occam's Razor.  If a certain level of ability has a Gaussian probability of five million to one, sooner or later someone - over a thousand people, with the current world population - will be born with that level of ability for causes having nothing to do with Specialization.  On the other hand, knowing that I'm already exhibiting characteristics at five million to one, anything else odd about me has to be explained by reference to the same cause.  An event that peculiar uses up all your improbability; everything else about you has to be perfectly normal except as perturbed by your Big Improbability.  You only get to use the Anthropic Principle once.

A "depression" of the intensity that hit me is sufficiently improbable for me to assume that it must have the same root cause as my SAT scores.  The most economic cause that explains both is the simple perturbation of a single piece of neuroanatomy.  I don't know that "depression" alone, no matter how idiosyncratic, is quite enough to raise the total improbability to more than six billion to one and force a neurological interpretation.  All the details below are certainly enough to do so, but they were verbalized post-theory.  However, as said earlier, the SAT scores (*) provide enough raw improbability to fuel a neurological hypothesis; the depression, if attributed to the same cause, makes neurology the more economical explanation; the details confirm it.

What was the first cause, the actual neurological perturbation that created a Countersphexist?  Right now, I would guess either the right mammillary body or the amygdala, or more likely a neural pathway leading therefrom.

Reasoning:  I've run across one study in which experimental subjects were asked to solve two sets of anagrams. One set of anagrams was unsolvable. While the subjects attempted to solve the unsolvable, activity in the amygdala increased, activity in the mammillary bodies increased, and activity in the hippocampus decreased. The hippocampus is involved in memory formation - this corresponds to my difficulty with "chunking" or symbol formation, but isn't a plausible first cause.  The mammillary bodies are part of the Papez circuit in the septo-hippocampal monitoring system.  For rats, Gray's The Psychology of Fear and Stress (1987) theorizes that the Papez circuit is responsible for evaluating the effect of planned actions on the environment.  It seems plausible that, in humans, this same circuit might be responsible for keer - "This won't work; try something else."  Damage to the amygdala has been known to lead to emotional disturbances; it's a hypothesized "central dispatching" site known to be vulnerable to perturbations.

The right mammillary body comes from noticing a lateralized disturbance.  I have tried Prozac (serotonergic), Ritalin (dopaminergic), and even DL- and L-phenylalanine (noradrenaline precursors).  None of them worked (*), but Ritalin, which has a 3-4 hour period of action, would occasionally produce hand tremors (dopaminergic drugs can do that).  On one occasion, after starting a Ritalin "holiday", I noticed that my right hand was trembling and my left hand was not.  In the cognitive science business, this is what we call a SCREAMING CLUE that something neurological is going on.  I select the right side because, while studies using transcranial magnetic stimulation yield contradictory results, stimulation of the right lobe is reported to induce depression and the left lobe happiness more often than the reverse.

Details.

  1. Specialist module:  Causal analysis.
    1. I once defined causal analysis as the intuition which tells you that Douglas R. Hofstadter created the English language for the sole purpose of writing "Godel, Escher, Bach."  Some of the puns within have such a high "causal density", are so densely linked on so many levels, that one intuitively attributes the language to the puns, rather than the other way around.  Note
      that the definition is carefully chosen to eliminate conscious, sequential reasoning as opposed to primal intuition, since we know Dr. Hofstadter didn't really invent the English language.

      So far, the purest example occurred one night while half-awake.  I'd dreamed that I saw the name "Eliezer Yudkowsky" in the Wall Street Journal, but it was talking about someone else.  When I fuzzily woke up, I wasn't awake enough to realize that it was "just a dream", and I started worrying that people would confuse me with the other Eliezer, and wondered if I should write a letter clearing up the error.  Then I thought:  "I haven't seen today's Wall Street Journal yet, and therefore today's Wall Street Journal is external from my dream.  If, when I read today's Wall Street Journal, I'm still in there, then I'll worry about it."  And then I went to sleep.

      "External from" is a bit of causal analysis jargon I've invented; "A is external from B" means that "A has no causal influence on B, and therefore the contents of B are independent from A".  Even though I was sleepy enough to think there was still a significant chance of my name being in the Wall Street Journal, I could intuitively perceive the causal structure relating my dream and reality, and draw conclusions from it.  This occurred, accurately and correctly, without multistage reasoning, even when I was too sleepy to draw conclusions from my knowledge that "dreams aren't real".

      As far as I can tell, the major enhancements to this ability are twofold. One, I use it all the time, reflexively; the ability is prominent and ubiquitous, enough so that I've begun inventing jargon. Two, the analysis recurses to a deeper level, draws causal linkages between constituent symbols rather than top symbols, and simulates rather than looking for similarities (*).

  2. Specialist module:  Combinatorial design.
    1. This ability has reached its height, so far, in designing software architectures, particularly in inventing new design patterns - and I only found out what a design pattern was a few months ago.  I've also used it to find security flaws - on the side of the angels, I assure you! - by adding up a lot of little flaws to make one big flaw.  I've used it to design Magic™ decks, although not very seriously.

      It's the task of taking a lot of little capabilities and putting them together to achieve a big capability; formulating general rules that help you do so; figuring out the architecture, the texture by which you'll put all the little capabilities together.  It's not something I understand as well as causal analysis, which is a pity, for it may be terribly important to program it into a computer.

      Besides that, there's just pure creativity, thinking up cool things to do or interesting questions. For a Countersphexist, the questions easiest to answer begin with "Why?", but the questions easiest to ask begin with "What if?"

      I have trouble rendering it down any further than that. It probably has something to do with salient properties, complexity, imagination, aesthetic appreciation, common nodes, causal intersections, desired results, and so on ad Hofstadterium.

      The alteration to the ability may have something to do with a wider search. My disability in "planning" may be a problem with long, linear temporal chains. A lot of times in AI, one faces a tradeoff between search depth and search breadth. If you can search 10,000 nodes, the search can be 10 nodes wide and 4 nodes deep, or 100 nodes wide and 2 nodes deep. Apparently I'm a shallow (but wide) person.

  3. Reflectivity.
    1. It has only recently occurred to me that this might be a separate and Specialized module, but of course one of the basic components of Hofstadterian "anti-sphexishness" is self-watching, causal analysis and flaw-spotting applied to the self.  The problem here is separating Specialist effects from the effects of studying cognitive science and evolutionary psychology - but I have a high percentile in self-knowledge, self-analysis, even self-alteration, and it's plausible that an actual Specialty may have something to do with it.  A considerable amount of my intelligence derives from my ability to break down symbols and concepts into their components.  I can always ground a symbol, even if I have to ground it in the cognitive components making up the symbol.
  4. Finding logical flaws.
    1. When the current design fails, what went wrong?

      This is an extension of causal analysis; I'm in the process of writing a page on some specific methods (in the hope that they can be transmitted to others).  The main point of interest is that this has become a group of skills instead of a simple ability, taking practice to solidify. My strategy - rather than the ability itself - has altered, as I became accustomed to my ability. Instead of finding a logical flaw directly, I can render down the cognitive origins of the target idea, and then attack those.

      It's not that I don't make certain kinds of mistakes. It's that, if I'm paying attention, I don't make certain kinds of mistakes.  Ever.  Otherwise, I'm perfectly fallible.

      After having received certain emails, I would like to note that many people think they can spot logical flaws where, mysteriously, nobody else can.  This is unusual only if it applies to morally and socially neutral subject matter.

  5. Episode-driven memory.
    1. Immediately after resuming work on a project, I lose all perception of the time intervening between stopping and starting. This often leads to odd cognitive errors. If I meet with a person on Tuesdays and Thursdays, I'll speak about what was done "yesterday" when I mean Tuesday or last week. The effect persists over months and manifests at the drop of a pin. Just recently [written in '96, not '99], when discussing something, I retorted: "I've been working on this for the past six months!" I had worked on it for six months, and then ceased work for four months - which vanished as soon as I started thinking about it again.

      It works the other way, too. If I'm thinking about something, and I get distracted, it can be almost impossible to remember what I was thinking about. I have a great deal of trouble remembering specific times for events, event durations, or even relative times for unrelated events (whether A followed B.)  Thus I still don't know when The Other Shoe Dropped in relation to my age, my schoolwork, the calendar, or anything else.

      This is probably a secondary effect - I don't really see how it's a direct aid to any of the Specialist modules. Maybe one of the modules uses a lot of strongly associative memory, and now that type of memory dominates everything else.

  6. Intolerance of disorder.
    1. I have trouble tolerating the existence of certain types of disorder and will spend as much time as necessary to remove every tiny bit (*). This only applies to disorder, not to flaws; I have no trouble leaving an inert flaw in a design and fixing it later. But I'll spend ridiculous amounts of time to get it right the first time, make something correspond precisely with my design.

      I don't know why this is - my guess: When the current design fails, make sure you get it right next time.

  1. Frustration.
    1. This negative aspect is conspicuous by its absence. I don't feel frustrated all the time, despite the continuous tactile keer of being about to cry - that's just a somatic sensation, not an emotion. I am easily frustrated, and when I am frustrated I am hit hard by even the smallest things. Any futile effort, regardless of how little the results were desired, will precipitate a nervous breakdown unless I am on guard.  ['99 - better at self-alteration now, I can usually avoid the thoughts that trigger the emotion.]  I cannot bear to observe certain types of mistakes and must flee in disorder when they are present, even in fiction or on TV.  (I will not specify the type of mistakes; this is one of the tests I use to confirm ADD/CSXers.)
  2. Low stress tolerance.
    1. Not just frustration, but any stressful emotion is very difficult to handle. "Very low mental energy" is one way to put it, if you're given to understatement.  Over the years I've learned to handle this both by avoiding stressful situations and through reflectively grabbing the reins of emotional control, a sort of "software" Reengineering that lets me take most things in stride.
  3. False tears.
    1. The keer tone is the feeling you get just before you start crying.  It's a continuous, "false" signal.  It has thus happened that I will start crying even though I feel perfectly calm, utterly rational, not at all agitated.  Try convincing someone you're feeling no emotional distress with tears running down your cheeks.  I can't blame them; at this stage of technology, denial is more common than neurohacking.  I've managed to avoid these pseudo-breakdowns for, let's see, about three years, I think.  Still wasn't fun.
  4. Low "get up and go."
    1. As I put it, even before I'd developed the Countersphexist hypothesis, "I can't do anything." It takes - not willpower (don't have any) - constant logical vigilance to make sure I don't run out of steam. To actually sit down and work at a project, it takes either a lot of willpower, a Cause, or some other "software solution".  And even then, it's difficult.  Nowadays I think I'm putting in as much work as anyone else does, given that only 20-40% of an eight-hour day is spent on actual work; maybe I'm even bettering that.  But I can't, faced with a crunch, go into overtime, except maybe for a single day; most people can handle two weeks.  And before I learned how to operate my mind, learned to avoid certain patterns of thought, I would run into "thought mines" that would cripple me, block off whole subjects, every time I was doing something I didn't want to do or I didn't think would work.
  5. Cognitive disabilities.
    1. I don't know what these are in the same way I know what I can do. All I can do is describe the things I can't seem to do or the things I can see as obvious in retrospect but can't invent on my own. Even these are difficult to distinguish from ordinary, only-human mistakes. If I'd run across Algernon's Law where someone else had thought of it first, would it now be forever enshrined in my memory as something I never would have thought of on my own? If I can't come up with a plan, how do I know that someone else could? If it seems that other people ask questions I couldn't come up with on my own, maybe it's just the effect of trying to explain something explicitly. I do get more or less the same experience when trying to write something down that I expect someone else will be looking at. Maybe all my true cognitive disabilities are genuine blind spots, things that only someone else will ever be able to figure out. "Can't ask questions" and "can't plan" are the result of vaguely felt inabilities that I can't seem to put into words.

      On the whole, I can't really put anything definite down here. Maybe if I keep notes, I'll be able to get some hard data. Until then, all I can give you are my suspicions.

  6. Can't chunk (formulate new symbols).
    1. Often when I'm reading a text, I'll come across a word used for a concept that will make everything else fall into place. It doesn't have to be a particularly appropriate word - "Peroodle Effect" will do as well as "Evolutionary Psychology." The odd thing is that I can feel everything crystallizing into a cohesive whole, and it seems to me that it's not the concept that triggers it, but the existence of a symbol for it. As if the concept were diffuse, and I can't use it in symbolic structures until I have a name for it.

      My best guesses as to why this occurs: (1) frustration discourages chunking, so that if you're doing something wrong, it doesn't crystallize as a skill; (2) it's an ability that got sacrificed; or (3) unchunked symbols are easier to associate and manipulate in some ways. Maybe the fundamental cognitive operations of creativity act best on uncrystallized concepts. It may be relevant that the hippocampus, the encoder of long-term memories, displays decreased activity when experimental subjects attempt to solve unsolvable anagrams.

      Also, I have trouble coming up with snappy names. If I hadn't run across Hofstadter's term sphexist and Flowers for Algernon, I'd probably be calling myself a victim of the Frustration-Based Evolutionary Balancing Effect. "Countersphexist Algernon" still fails to soar, somehow, but by now it's part of the literature.  After two years, "Quasars" and the "Reengineered" have replaced "Iron-willed Algernons".

  7. Can't digress.
    1. For example, I was writing a bit of HTML that should have been hyperlinked, and didn't open up the Bookmarks and read off the link because it seemed like too much work - even though, when I consciously considered it a few moments later, it took maybe five seconds. But it does involve a sequence of seven distinct steps.

      This could be an inability to handle long linear temporal sequences, as detailed under "creative design" above, so that it appeared like far more work than it actually was. Or it could be a side effect of "episode-driven memory" - but I don't think so, because I have no difficulty in departing from the task to execute short or one-step scripts, even if it would take a bit of time.

  8. Can't formulate strings of short-term goals.
    1. It could be inexperience - but I seem to have a great deal of trouble translating a purpose into immediate goals; I can do this, but only if it's a very simple problem.  This is another of those things that's hard to communicate; the disability doesn't cover everything you mean by planning, only one of the modules supporting it.  There are plans that can be designed; most formal plans or long-term plans might not give me that much trouble.  But the daily, intuitive planning that nobody bothers to formalize - that, I can't do intuitively, or rather I can only use obvious structures and not invent structures.  One-step "plans" only.

      At a guess, planning competes with design for certain cognitive resources. Planning may require formulating a long, linear causal sequence, which would compete with the short causal nets required by combinatorial design. Or it could be that planning is a function of enthusiasm and that, as a "positive" ability, it got stunted.

  9. Can't think in certain directions.
    1. In the immortal words of Curly, "I tried to think but nothing happened." Often I'll just run into a mental block if I try to think of something. It may involve extending a temporal sequence beyond a certain point. Sometimes it happens when I try to extrapolate results.

      My guess is that this also occurs when I would need to build a mental structure that relies on sacrificed ability.

      Or it could just be the result of running out of mental energy.
      Prozac causes this sensation to occur when I attempt to think about just about anything.
      I wonder if I'll ever be able to figure out what's actually inside this black box.


Possible Algernon: Dyslexia.

Allow me to quote Breakthrough #7:
From: "Chris J. Phoenix" <cphoenix@CS.Stanford.EDU>:
There is some new and very promising research on causes and treatment of dyslexia. The researchers have done an informal study with very impressive results: After a one-week program, the average ability of a couple hundred school-age kids was increased by more than one grade level, measured by standardized tests. They have treated over 1,000 people so far, with a 97% success rate.

The whole thing is based on a new understanding of dyslexia as a mental ability rather than a disability or collection of disabilities. Dyslexics have the ability to imagine what something would look like if they look at it from different directions--they can mentally "walk around" an object and see what it looks like from all different angles. This becomes a standard way of resolving visual confusion, and usually happens too fast for the dyslexic to be conscious that they are even doing it.

A dyslexic child who becomes confused while learning to read will develop the bad habit of trying to use this mental talent to figure out what words are. They will look at a word from several different directions while trying to read it, which causes letters to flop around in classic dyslexic fashion.

A dyslexic person can easily and quickly be taught to control this ability to distort perception ("disorient") or turn it off entirely. The "mind's eye" (the place that they are looking from) just needs to be kept from jumping around, and moved to a certain spot (the "orientation point", a few inches above and behind the head). This turns off the perceptual distortion, improves the sense of balance (ballet dancers are taught to do this), and reduces the confusion (and sometimes fear) that come from looking at words. Presto, no more reading problem!

... Well, not quite. There's another piece of the puzzle. Dyslexics tend to be visual thinkers. When they see a word for which they have no picture, their mind literally goes blank. This is confusing... and confusion causes disorientation. The words without pictures tend to be small ones like "the" or "in".

Some kids with this mental gift (it is a gift because it enables lots of creativity, artistic talent, and thinking faster) don't have trouble reading, but do have trouble with math or sports. For sports, learning to control the disorientation can make an amazing difference in literally minutes. For math or technical skills, it may take a week or two. Dyslexic reading problems will probably take a few months to resolve fully, because there are a lot of confusing words to be worked through, but even there results can be seen in a matter of weeks.

I have been working with the inventor for almost a year now, learning about this stuff, and while I don't have as much experience as I would like, I have seen it work on myself and on other people that I've told about it. I am working with one man who has been dyslexic and a computer programmer for years. After one month, spending about four hours per week, a coworker (who didn't know what he was doing) noticed that his emails were more readable and asked if he was taking an English class. After two months, his coworkers started feeling threatened because his coding speed increased so much. His progress is not unique--the results are just typical of what this technology can do. On a personal note, it's quite thrilling to watch it work, and see a word go from "scary" to "no problem" in a matter of minutes.

There are several resources available. The discoverer of this technology, Ron Davis (who is dyslexic himself), has published a book, _The Gift of Dyslexia_ (ISBN number 0-929551-23-0) that tells how to use the methods. The researchers I mentioned in the first paragraph are the Reading Research Council, which was started by Ron Davis. Their phone number is (USA) 415-692-8990, and they have a Website. The web site tells about classes, products, and other related disabilities such as autism and ADD. Also, feel free to email me at cphoenix@xenon.stanford.edu.

Okay, so a few bogosity detectors go off, but it's a lead.  If it's true, it's a textbook case of Algernon's Law. Ability gets enhanced, ability messes up other cognitive abilities, the Algernon learns a workaround and winds up intimidating coworkers.

I've spoken to cphoenix@xenon.stanford.edu and I'd like to quote one of my replies:

> 3) Carl Delacato's work. In cultures where babies are not allowed to
> crawl, almost everyone is dyslexic.

It occurred to me that the cerebellum is evidently responsible for our sense of three-dimensional space, both navigating through it - as in motor skills - and rotating it - as in dyslexia. This ties in with my continuing theory that the cerebellum is responsible for processing constraint propagation, since that's required to build a coherent three-dimensional world from a set of depths and edges.

> He's mentioned speaking out loud "so that he'll know what he's thinking"
> --not very self-aware!

Self-reflection could be affected by the type of games played with the cerebellum. For example, retrieval of long-term memory is done by the cerebellum. (Formation being the responsibility of the hippocampus.) And, almost certainly, part of the self-symbol gets stored in long-term memory at some point. How it gets retrieved could have subtle effects on the whole mind.

Overall, a very interesting area of research, but I think I'm running about ten years ahead of science again, and this is supposed to be a practical guide. We are not ready.


Possible Algernon: Manic-depression.

According to the Wall Street Journal, a surprising proportion of top businessfolk - around a third - are manic-depressives. Moreover, one of them was quoted as saying that he withdraws from medication when a big deal is in the offing because "it inhibits my creativity." That's what really tipped me off, since otherwise it could be explained as manic-depressives having more energy during their manic phase and impressing their bosses. But that part about medication inhibiting creativity sounds very familiar to me. Prozac cripples me completely, and since they don't put a warning to that effect on the medication, I imagine that it interferes only with Algernons who rely on depressant emotions for their edge.

Is this a case of Algernon's Law? Maybe. Not quite so clear a case as mine or dyslexics', but still a possibility. The swing between manic and depressive is suggestive of a swing between Quasar and Countersphexist, as if the continuous alternation enhances both while preventing either from becoming powerful enough to suck all the life out of nearby cognitive systems. That's if manic-depressive means "full of energy" <-> "lacking energy". If manic-depressive means "megalomania" <-> "self-hatred", I would conjecture that the system affected is the social self-estimation system, or self-esteem system, that Prozac tampers with. Perhaps enhancing the social systems to provide fine detailing magnifies some type of oscillation. I confess I don't know. Still, I thought it should be included.


Worksheet:

Asperger's Syndrome and autism:  A lot of people have been telling me this resembles Countersphexism, although, after investigation, the only similarity I can see is the late onset of language at about three years.  It may still qualify as a different form of Algern in its own right.

Attention Deficit Disorder or Attention Deficit Hyperactivity Disorder:  Most people who send me email saying "I think I could be a Countersphexist, although not to such an extreme degree" turn out to share a certain set of slightly different characteristics, some of which are identical, others not.  I believe that they have "ADD/CSX", a subspecies of Attention Deficit Disorder.  In one case, after I informed the emailer of this, it turned out that his father had ADD.  He saw a doctor, got diagnosed, started taking one of the many medications for ADD, and stopped failing college.  The entire page was worth it, just for that.  Typically, ADD/CSXers score well on the SAT, or otherwise display high (but not super-Gaussian) levels of computer programming ability or whatever, and have very low energy levels.  Other characteristics that have been reported by some correspondents:  Inability to watch certain scenes on television, and the tactile keer - the sensation of being about to cry (!).

About the problem of diagnosis for ADD:  Most of the qualities, such as being "easily distractible", "highly creative", and so on, are things that we all exhibit occasionally.  The same holds true for Countersphexism; we all feel fatigued, we all employ causal analysis, and so on.  These are both what one might call Amplified Archetype Disorders.  ADD and Countersphexism take a facet that's present in all of us and magnify it.  What defines ADD and Countersphexism is not the presence of the characteristic, but its presence at all times, regardless of external stimulus.  All of us feel frustrated when things start going wrong; if you feel frustrated when you get up after ten hours of sleep, during a two-week vacation, after you've won the lottery, then it's time to start talking about neurological anomalies.  Not understanding this principle is what leads to the pop psychology "Could you be ADD?" questionnaires.

Hyperlexia is the absolute peak unchallengeable blatantly obvious archetypical case of Algernon's Law.  Children with hyperlexia are characterized by "A precocious ability to read words, far above what would be expected at their chronological age or an intense fascination with letters or numbers.  Significant difficulty in understanding verbal language.  Abnormal social skills, difficulty in socializing and interacting appropriately with people."  (I just found out about hyperlexia as of 3/12/99.)  They still haven't noticed Algernon's Law.

Finally, we come to Quasars.  My guess is that these are Specialists - running on enthusiasm - in symbol formation and abstraction (and possibly planning).  I've received email from at least two people who almost certainly have the same case of something, even if I got the cause wrong; both were extremely enthusiastic and positive-thinking, talked using the same sentence and paragraph structure (symbol/concept followed by a definition in the form of a series of "descriptors" - properties or features), and displayed a fascination with hierarchies of meta-levels.  One of them got in the vicinity of 1400 on his SAT at age 12.


Notes: