MARVIN MINSKY, THE EMOTION MACHINE, chapter 3
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Simon & Schuster, Nov. 2006)
http://web.media.mit.edu/~minsky/

[Comments in square brackets are my own thoughts.]

3. Being in Pain
~~~~~~~~~~~~~~~~

Minsky's main points here are that

@ a pain response is the activation of resources concerned with stopping the pain, and shutting down or lowering the priority of other resources; this may lead to cascades of unpleasant secondary effects;

@ there are various stages of pain, depending on severity, duration, and many other factors; he distinguishes
  - momentary pain, not leading to suffering;
  - suffering and anguish due to extended pain, and its effects in limiting our choices;

@ there are various strategies for overriding pain, to some degree;

@ chronic pain that serves no apparent purpose may be a "programming bug" -- something due to the late development of thinking, and insufficient time to evolve ways of countering the debilitating cascade of mental effects that chronic pain has;
  [I'm not sure what it would mean for the "bug" to be fixed: All long-term pain disappears after a few weeks? That could be dangerous. Specific types disappear, such as neuralgia or auto-immune disorders (rheumatoid arthritis, lupus)?]

@ we learn how to avoid pain by developing "critics" that warn that what we are doing or considering doing is fraught with risk;

@ switching on and off of critics (en masse) may account for bipolar disorder; in general, it accounts for moods;

@ negative learning -- learning of critics -- is underrated and understudied in psychology;
  [Hmm, isn't there a lot of work on rats and shocks? Maybe he means humans.]

@ to some extent we can motivate ourselves to endure some discomfort or pain in pursuit of some longer-range goals by "fooling ourselves", imagining possibilities that inspire or anger us into continued pursuit of the goals.

Here (on the second page of the chapter) and elsewhere in the book (esp. ch. 9) Minsky briefly addresses what David Chalmers calls "the hard problem" -- explaining the *experience* of pain, rather than the mechanisms of pain. But I think he simply doesn't understand what Chalmers is getting at, i.e., the subjectivity of phenomenal experience, and the intuition that intelligent behavior could just as well occur without conscious sensations. Why should learning to avoid situations that are damaging to oneself involve *feeling pain*, rather than just dispassionate evaluation of damage as "strongly dispreferred"? Ironically, every time Minsky addresses the issue of qualia -- which only exist in a subjective perspective -- he answers, yes, they *can* be explained, and then proceeds to sketch an objective account! Basically he says that pain sensations are not primitives, but can be analyzed in terms of complex interacting parts, such as the goal of stopping the pain, and the cascade of cognitive consequences, particularly in prolonged pain.
Mental pains induced by prolonged physical pain
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Among the cognitive consequences of prolonged pain he lists
- Anguish of losing mobility
- Resentment of not being able to think
- Dread of becoming disabled and helpless
- Shame of becoming a burden to friends
- Remorse at dishonoring obligations
- Dismay at the prospect of failure
- Mortification of seeming abnormal
- Terror of further decline and death

[But while these obviously can occur as side-effects, I don't think they are *part* of the pain itself -- they just add further burdens -- various mental pains.]

The machinery of suffering
~~~~~~~~~~~~~~~~~~~~~~~~~~

He diagrams the brain's body-maps and mentions some of the brain regions involved in pain, but points out that such subsystem identification doesn't really explain anything until we can say what processing these subsystems are doing.

An interesting observation is that our perception of internal pains is much less specific and localized than that of surface pains, because evolutionarily a finer localization wouldn't really have been helpful. You can protect a hurt finger in quite specific ways, but you can't protect an inflamed appendix except perhaps by protecting the entire belly and staying inactive.

[I wondered about his mention of the "limbic system", which acc. to some of our readings has lost some credibility as an identifiable system, and his main ref is from 1965...]

He mentions "pain asymbolia" -- a condition where pain is felt quite clearly, but without being experienced as unpleasant [recall also the effects of Demerol]. He thinks this is because somehow the pain is failing to cause the "cascade of torments" (mental anguish). [But I don't see that this would deprive the pain of its *physical* unpleasantness; somehow Minsky is trying to blame the unpleasantness of pain on its creation of mental pain, but I don't see that this explains anything.]

Feeling, hurting, and suffering
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Under this heading he addresses the issue of why feelings are so hard to describe, again claiming that it's not because they are so simple but because they are so complex.

[But why should we have particular difficulty describing feelings and other sensations, while being able to describe our thoughts on politics, or our evaluation of a movie, etc.? Aren't these just as complex? I'm more with McDermott here; the issue is not whether the underlying *processes* are simple or complex, but whether our *symbolic models* of them are simple or complex. Note, for example, that judging a rose to be red may be an extremely complex process, in terms of what happens in the eye and in the brain, and in addition we may have various complex *associations* with that perception -- e.g., think of a painter working with various shades of red, or writers of phrases such as "Oh, who will kiss her ruby lips", or "Hair so red, red as flame" (respectively from a song and from a romance writer's overwrought pen); but that doesn't mean the *concept* of "red" is complex in our model of our perceptions.]

He also again talks about *experiencing* pain here, and again blames the badness of pain on the resultant mental pains.

But he does make the interesting claim that avoidance of injury and taking care of injuries couldn't be learned through positive reinforcement -- it requires negative feelings.

[If one imagines trying to build a robot that learns strictly through positive reinforcement, one can see the difficulty: for instance, suppose we designed the robot so that if it receives a leg injury, it will get considerable pleasure from treating the leg with great caution and care, until it is healed (or repaired). Wouldn't that lead to appropriate responses to injury? Well, no -- it would probably try to get injured, so as to enjoy the feeling of caring for the injury! Can we do better by designing the robot to get pleasure from injury *avoidance* -- i.e., it gets positive reinforcement whenever it perceives that it *might* have been injured, but didn't get injured? Well, it would *still* seek out dangerous situations, since otherwise it'll have no sense of having avoided injury! So perhaps we want to build it so that the safer from injury it feels itself to be, the happier it is. But then, what would prevent it from neglecting an *accidental* injury? This may be solvable, but it doesn't look easy... Or can we just "shift the origin" on the scale of negative and positive feelings, so that all feelings are just more or less positive, never negative? Then even an injured creature or robot would be feeling not-too-bad, yet would be striving strenuously to get help and/or take measures to promote healing of the injury, wouldn't it? Or would it?? If the shift in origin causes no behavioral change, then the robot (analogously, a person) would still behave as if suffering, yelling for help, etc., when injured or otherwise in trouble, so it seems that the pain would not have been banished after all!]
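[To make the first two of those reward designs concrete, here is a minimal sketch -- my own illustration, not anything from Minsky -- of a toy three-state world evaluated by plain value iteration. The state names, transition probabilities, and reward numbers are all invented; the point is just that when the only rewards are "pleasure from tending an injury" (scheme A) or "pleasure from a perceived near-miss" (scheme B), the greedy policy in the "safe" state comes out as courting danger.]

    STATES  = ["safe", "danger", "injured"]
    ACTIONS = {"safe":    ["stay", "risk"],          # "risk" = wander into danger
               "danger":  ["press_on", "retreat"],
               "injured": ["tend", "ignore"]}

    # Transition model: P[(state, action)] = [(probability, next_state), ...]
    P = {("safe", "stay"):       [(1.0, "safe")],
         ("safe", "risk"):       [(1.0, "danger")],
         ("danger", "press_on"): [(0.5, "injured"), (0.5, "safe")],
         ("danger", "retreat"):  [(1.0, "safe")],
         ("injured", "tend"):    [(0.8, "injured"), (0.2, "safe")],
         ("injured", "ignore"):  [(1.0, "injured")]}

    def reward_A(s, a, s2):
        # Scheme A: pleasure from caring for an injury, and nothing else.
        return 1.0 if (s == "injured" and a == "tend") else 0.0

    def reward_B(s, a, s2):
        # Scheme B: pleasure from a perceived near-miss
        # (was in danger, came out unhurt).
        return 1.0 if (s == "danger" and s2 != "injured") else 0.0

    def greedy_policy(reward, gamma=0.9, sweeps=200):
        """Plain value iteration; returns the greedy action in each state."""
        V = {s: 0.0 for s in STATES}
        def q(s, a):
            return sum(p * (reward(s, a, s2) + gamma * V[s2])
                       for p, s2 in P[(s, a)])
        for _ in range(sweeps):
            for s in STATES:
                V[s] = max(q(s, a) for a in ACTIONS[s])
        return {s: max(ACTIONS[s], key=lambda a: q(s, a)) for s in STATES}

    for name, reward in [("A: reward for tending injuries", reward_A),
                         ("B: reward for near-misses",      reward_B)]:
        # Both schemes make "risk" the greedy choice in the "safe" state:
        # the robot courts injury (or danger), since that is where the
        # only positive reward lives.
        print(name, "->", greedy_policy(reward))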
Overriding pain
~~~~~~~~~~~~~~~

He talks about various ways of partially overriding pain:
- focusing on something else
- focusing analytically on the pain
- establishing a counter-irritant
- taking comfort from someone else similarly afflicted
- self-discipline, pain-training [I think of "A Man Called Horse"]

He also makes his point about the evolutionary "programming bug" here, in connection with chronic pain.

In a subsection on grief, he mostly cites the wisdom of Shakespeare, e.g., in making grief more tolerable by thinking of the loss in more positive terms.

Critics (Correctors, Suppressors, and Censors)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

He notes that while we make many dubious decisions, we rarely do things with disastrous consequences to ourselves, like sticking a finger in one's eye, or telling strangers how ugly they are. He argues that much expertise consists of knowing what NOT to do, and that this expertise consists of resources that he calls "critics" -- specialized recognizers of particular kinds of mistakes. He thinks some of them intervene during a dangerous action, others at its start, and others before the action is even considered. They operate at all of the "levels", from instinctive ones up to self-awareness.

[I find this too speculative and unsupported to be of much value. Minsky has advocated critics throughout his research life, and one of his students, Gerald Jay Sussman, did a thesis on programming or planning by creating an initial program or plan more or less randomly, and then applying critics to it to repair errors and inadequacies. But while this was an interesting idea, more systematic approaches have been more successful so far, where "critics" are replaced by flaw-detection/correction algorithms based on a general logical analysis of the structure of plans and the interactions of their parts. Flaw detection/correction is simplest in the case of SAT (satisfiability-based) planning: a flaw is an unsatisfied logical formula, and a correction step consists of "flipping a bit" (the truth value of a propositional variable) so as to reduce the number of unsatisfied formulas -- though randomization to escape from nonzero local minima may be necessary.]
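[For concreteness, here is a small WalkSAT-style sketch of that repair loop -- my own illustration, not tied to any particular planning system: a "flaw" is an unsatisfied clause, a repair step flips one variable in some flawed clause (greedily, so as to leave as few flaws as possible), and an occasional random flip supplies the randomization needed to escape nonzero local minima. The clause encoding, the tiny example formula, and all parameters are invented for the example.]

    import random

    def unsatisfied(clauses, assign):
        """Clauses in which no literal is made true by the current assignment."""
        return [c for c in clauses
                if not any((lit > 0) == assign[abs(lit)] for lit in c)]

    def repair_search(clauses, n_vars, max_flips=10000, walk_prob=0.3, seed=0):
        rng = random.Random(seed)
        # Start from a more or less random assignment, then repair its flaws.
        assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            flaws = unsatisfied(clauses, assign)
            if not flaws:                     # no flaws left: a model (plan) found
                return assign
            clause = rng.choice(flaws)        # pick some flaw to repair
            if rng.random() < walk_prob:      # occasional random flip, to escape
                var = abs(rng.choice(clause)) # nonzero local minima
            else:                             # otherwise flip the bit that leaves
                var = min((abs(lit) for lit in clause),  # the fewest flaws
                          key=lambda v: len(unsatisfied(
                              clauses, {**assign, v: not assign[v]})))
            assign[var] = not assign[var]
        return None                           # gave up

    # Tiny example: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(repair_search([[1, 2], [-1, 3], [-2, -3]], n_vars=3))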
He goes on to speculate further about where critics might play an important role:
- mood swings;
- humor, which is often about what one should not do -- flouting the critics;
- decision-making, which may be the result of critics calling a stop to further exploration of alternatives;
- perception of (personal?) beauty, which may depend on suppression (by critics) of awareness of flaws [is he thinking of "love is blind" here?];
- learning how and why failures occur in complex situations;
- breaking out of local maxima;
- enduring suffering for the sake of long-term benefits;
- mystical euphoria, which may be a state where all critics are turned off, causing muddled, vague thoughts to seem like profound insights.

[But I get impatient here, because of the lack of concreteness. I don't see that we need "critics" as special processes. Instead, it may be sufficient to suppose that we learn CAUSAL CONNECTIONS -- and if certain causal consequences are evaluated as hurtful or otherwise bad, we will plan so as to avoid actions with those consequences, just as we will favor actions that we believe will have rewarding consequences. The willingness to endure some suffering for the sake of long-term goals can be explained by assuming that plan evaluation is cumulative -- i.e., we do whatever seems to give the greatest OVERALL reward in the long run, not the greatest immediate gratification (though we probably do temporal discounting, to allow for the uncertainty of the future). I don't see that the concept of critics -- ubiquitous, unspecified, special-purpose procedures -- sheds any real light on this.]
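[A trivial sketch of what I mean by cumulative, temporally discounted plan evaluation -- all numbers invented: each candidate plan is scored by the discounted sum of its anticipated rewards (negative entries standing for suffering along the way), and the plan with the greatest overall score wins even if it starts out painful.]

    def plan_value(rewards, discount=0.9):
        """Discounted cumulative value: later outcomes count for less."""
        return sum(r * discount ** t for t, r in enumerate(rewards))

    plans = {  # anticipated reward at each future step; negative = suffering
        "instant gratification":      [ 5,  0,  0,  0,  0],
        "endure pain for a big gain": [-3, -3, -3, 20, 20],
    }
    for name, rewards in plans.items():
        print(f"{name:30s} value = {plan_value(rewards):6.2f}")
    # With discount=0.9 the painful-but-profitable plan wins (about 19.6 vs 5.0);
    # a steep enough discount (say 0.3) flips the choice back to the immediate
    # reward, reflecting greater distrust of the future.
    print("chosen:", max(plans, key=lambda name: plan_value(plans[name])))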
The Freudian sandwich
~~~~~~~~~~~~~~~~~~~~~

He pays homage to Freud's ideas, particularly the idea that our instinctive drives (coming from the "id") often conflict with our acquired ideals (coming from the "superego"), and are arbitrated by methods that settle conflicts (coming from the "ego"); and to the idea that avoiding or managing conflict depends on *repression*, *sublimation*, or *repudiation* of impulses. Repression mechanisms keep undesirable impulses from entering consciousness, sublimation allows them limited expression in some other guise, and repudiation allows them into consciousness but repudiates and neutralizes them there.

[All this, to me, remains very speculative.]

Emotional exploitation
~~~~~~~~~~~~~~~~~~~~~~

The heading refers to tricking ourselves with fictitious ideas in order to overcome ennui in problem-solving -- ideas that get us sufficiently riled up or energized to continue.

[I have no introspective evidence for anything of the sort, so I'm a bit puzzled. One thing I noticed is that he refers to "Work" and "Sleep" as resources; of these, "Work" seems far too broad and vague a term for us to figure out what such a resource actually does, and how.]

He concludes with some truisms about imagining, e.g., "to think about changing the way things are, we have to imagine how they might be". This should raise issues in symbolic representation, but at least at this point it doesn't, for Minsky.

He thinks we need to trick ourselves into pursuing certain goals when we would rather be doing something else, because if we found it too easy to switch goals at will, we might act in ways that imperil us; this would be an evolutionary disadvantage.