“Pain Is the Only School-Teacher” (gwern.net)
81 points by telotortium on June 8, 2020 | 59 comments



Few of the comments here seem to discuss his point. He asks: if pleasure and pain form a continuum, why have pain at all? Instead of -1 to 1, with negative values representing unpleasantness, human experience could instead span 0 to 2 and have the same reinforcement value with no sensation of displeasure.

My understanding of his answer: if this were the case, we wouldn't always avoid less pleasant sensations and might even seek them out to enjoy the rebound to greater pleasure. If burning your skin knocks you from baseline (1 instead of 0, in this example) down to 0 (instead of -1), and the relief of stopping the burn kicked you up to 2 as a reward for escaping it, you'd just repeatedly burn your arm. Only negative values ensure you'll never seek them out for the purpose of rebounding to higher values.
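To make the arithmetic concrete, here's a minimal Python sketch using the toy numbers above (baseline 1 on the shifted scale, 0 on the signed one); the numbers are illustrative, not from the essay:

    # Compare a repeating burn/relief cycle against doing nothing,
    # under the two reward scales. Toy numbers from the comment above.

    def avg_return(rewards):
        """Average reward per timestep of a repeating cycle of sensations."""
        return sum(rewards) / len(rewards)

    # Shifted scale (0..2, baseline 1): relief is a genuine reward spike
    # above baseline, so a burn/relief cycle can beat doing nothing.
    print(avg_return([1, 1, 1]))   # do nothing                 -> 1.00
    print(avg_return([0, 2, 2]))   # burn, then linger in relief -> 1.33

    # Signed scale (-1..1, baseline 0): stopping the burn only returns
    # you to baseline, and the pain itself is a real loss.
    print(avg_return([0, 0, 0]))   # do nothing                 ->  0.00
    print(avg_return([-1, 0, 0]))  # burn, then recover          -> -0.33

The asymmetry is that on the signed scale relief only returns you to baseline, while on the shifted scale it pays out above it.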


A good argument, but not applicable to people practicing self-harm.

A quotation from https://en.wikipedia.org/wiki/Self-harm below:

“Those who engage in self-harm face the contradictory reality of harming themselves while at the same time obtaining relief from this act.

It may even be hard for some to actually initiate cutting, but they often do because they know the relief that will follow. For some self-harmers this relief is primarily psychological while for others this feeling of relief comes from the beta endorphins released in the brain. Endorphins are endogenous opioids that are released in response to physical injury, acting as natural painkillers and inducing pleasant feelings, and in response to self-harm would act to reduce tension and emotional distress. Many self-harmers report feeling very little to no pain while self-harming and, for some, deliberate self-harm may become a means of seeking pleasure.”


Pathologies are not, in the general case, a rebuttal to an argument about behavior.


Pathology is just personality when you use a -1 to 1 scale instead of a 0 to 1 scale.


There seems to be more to it than just that, or so the argument goes. The example of people wearing prosthetics to warn of damaging situations seemed relevant: they would disable the electric shock, do what they wanted, then re-enable it.

For pain to carry value it must represent ground truth and can't be easily ignorable. It must carry a very large cost to override, which is important since, as the examples demonstrate, the long-term consequences of an inability to feel pain can be quite severe.

I suppose that ties in to what you said: humans are way too bad at long-term thinking to properly evaluate consequences, so we would end up burning ourselves or hacking off a limb to achieve a short-term reward. Or at least enough of us would that our species would never have made it this far.


If you only ever experience 0 to 2 with a baseline of 1, then that's experientially no different from -1 to 1 with a baseline of 0. Since the baseline is going to be the 'normal' experience, it's going to feel routine in both cases. Though the two would be confused talking to one another, since one person's baseline sits at the other's most negative point, and the other's baseline at the first's most positive.

Basically, if we were to go through life only ever experiencing pleasure, wouldn't less pleasure effectively be pain, given no capacity to feel or understand pain?

Plus the whole idea that pleasure and pain are a continuum seems overly simplistic.


Unless you remind yourself what -1 means. It's like how people have a house and then complain about it because it doesn't have enough garages (that's like the -1 to 1 scale). But if they remember that some people rent basements, they can still move to the 0 to 2 scale.


Right but those are different parameters.


It's also what being grateful is all about; it can keep you happy even in bad times.


This reminds me a lot of pain related to finance and gambling losses, experienced by someone who was much riskier than they should have been. You can tell someone about the risks of gambling or very-high-risk investing, but in my experience the best teacher by far is for them to experience the pain of something gone wrong themselves, which generally means losing a lot of their hard-earned money and feeling somewhere between 'terrible' and 'I want to kill myself'.

It's the one type of reinforcement that hurts badly enough to correct their future actions toward more reasonable risk-taking. A lot of people that I watch go through the psychological cycle of gambling, whether via casinos, cryptocurrencies, stock options, etc., will often not stop until they're forced to by the pain of a terrible loss. It's unfortunate that this is the most effective method for some things to be taught, but that is part of life.


I didn’t lose a lot, but enough to feel it! Trading options can teach you a lot; it is one of the few areas where you need to form an opinion, reinforce it, and then have it either affirmed or torn down (at great cost). Sometimes I wish political, social, and economic opinions had to withstand such an immediate and potentially costly test; it would certainly teach some people some things.


It definitely took me quite a few lessons to have a good idea what I was doing there myself. Skin in the game is definitely important, and has become increasingly less common in society, to the detriment of many systems.


A big reason free markets work so well is that they are an inherently "skin in the game" setup. This does a better job of weeding out irrational and unproductive behavior than any other system.


My teacher (an organizational psychologist) was warning banks before the 2008 crisis about their bonus culture, precisely because they didn't have enough skin in the game.

Also, as an employee, I don’t have enough skin either. I could get fired, or I might not be promoted. That's a whole different level from being able to lose everything immediately, as some entrepreneurs can.


Aren't past gamblers more likely to relapse than non-gamblers are to start? I think my point here is that while enough pain does a good job of making you stop temporarily, it takes much more than that to stop forever.


That's characterized by the phrase "hitting rock bottom".


People who "hit rock bottom" do relapse. And when they relapse it all goes down really fast again. Rock bottom does not imply abstinence forever.

When I said "takes more" I meant the long-term, possibly lifelong, work of changing habits, values, and social environment to keep from playing.


> but that is part of life

Makes one wonder whether technology can help here. Seems that, so far, there is no substitute for direct experience.


[c-f] nociception -> zero hits

This surprised me. Nociception is the term you will find in the biological literature, because we can't quantify how (for example) a mouse "feels" when its tail is dipped in hot water, but we can quantify the behavioral response and use it as a proxy for the underlying biological mechanism.

Once you separate the biological and psychological responses in this manner the reasoning seems (to me) to become much cleaner.

Without higher level reasoning, nociception just needs to trigger any learning response in order to achieve survival. That's an incredibly open ended requirement.

In the presence of higher level reasoning then, pain could be viewed as a secondary effect that in turn triggers some learning response. So if you have higher level reasoning you don't necessarily need pain, but you probably do need nociception plus some other robust downstream response for a reliable organism. (This seems to mostly match up with the observations about useful nonpainful pain in the article.)
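As a rough sketch of what that bi-level arrangement might look like (my framing, with hypothetical names like nociception and act_and_learn; not code from the essay):

    # Outer loss: a hardwired nociceptive signal the organism can't edit.
    # Inner loop: whatever learning rule it has, the open-ended requirement
    # is only that the outer signal eventually discourages damaging actions.

    def nociception(tissue_damage):
        # hardwired by evolution, not learnable by the organism
        return -10.0 * tissue_damage

    inner_value = {}  # the organism's learned estimate per action

    def act_and_learn(action, damage_caused, lr=0.1):
        signal = nociception(damage_caused)
        old = inner_value.get(action, 0.0)
        inner_value[action] = old + lr * (signal - old)

    for _ in range(100):
        act_and_learn("touch_hot_stove", damage_caused=1.0)
        act_and_learn("touch_cool_stove", damage_caused=0.0)

    print(inner_value)  # hot stove converges toward -10.0, cool stove stays 0.0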


"Nociception" is one of those words which assumes its conclusion, and is a kind of dormitive fallacy. "What is pain?" "Well, it is a behavioral response to a nociceptive stimuli..."

Talking about nociception may be useful once you have already gone through all of the anomalies about pain and made a good case for distinguishing between the different parts and have an implicit taxonomy. But it's not useful for my essay to just jump straight to a handwavy 'oh, you have "nociception" (whatever that is) vs motivation as an example of bi-level optimization'. And once it's all been covered, there's still no particular reason to bring in that jargon unless I need a quote or something using it, which I haven't so far.


I'm having trouble seeing it as fallacious and what (I think?) you're describing there doesn't seem to capture what I was taught nor the usage I'm accustomed to encountering in the literature.

Rather I see it as splitting along an intuitive boundary - namely a whole host of low level biological responses (seriously there's a lot) versus higher level psychological (ie perceptive, conscious, whatever) phenomena. This seems particularly meaningful to me in part because most low level multicellular organisms that otherwise lack anything resembling emotion or psychology of any sort still possess (often highly conserved) analogous pathways.

My understanding is of nociception being a biological phenomenon that can be measured by proxy via behavior. It's important to note that the behavior itself isn't the nociception but rather a downstream response to it. That might sound like nitpicking but it's important because nociceptive responses can and do occur (and are of clinical significance) in patients who have lost consciousness. (At absolute minimum it seems weird to me to talk about an unconscious patient experiencing pain. Surely we benefit from being able to draw a distinction here?)

So (for example) the mouse tail flick test is a means of quantifying the nociceptive response which presumably also causes the rodents some mild pain. In fact it's (presumably) pain that actually results in the behavior that's measured but we can't quantify it on account of (at minimum) being unable to meaningfully communicate with them. Compare this to a physician collecting periodic "pain level" survey responses from a postoperative patient.

Semi-related thought: The term nociception allows avoiding confusing turns of phrase such as "painful pain" and "nonpainful pain". Surely the apparently contradictory term "nonpainful pain" should alone be sufficient evidence that there are important distinctions to be made here (and hence justify the additional terminology)?

Another semi-related thought: I find things much easier to reason about once they're "stacked" like this. Otherwise, trying to make sense of nonpainful pain, or the C. elegans thermal response, or the physiological responses of unconscious patients is downright confusing to me. Modeling it as "damage -> nociception (ie sensing) -> pain (ie psychology) -> response (ie behavior)" is much more intuitive to me. Importantly, it provides a framework for differentiating between cases that have more or less of a psychological response involved in mediating any behavioral ones, as well as allowing for cases that exhibit only physiological responses (ie nothing psychological or behavioral). (Or perhaps it just helps scientists sleep at night after they throw a few hundred plates with thousands of C. elegans each into the autoclave?)


> My understanding is of nociception being a biological phenomenon that can be measured by proxy via behavior...At absolute minimum it seems weird to me to talk about an unconscious patient experiencing pain. Surely we benefit from being able to draw a distinction here?

It may be weird, but I think it's very valuable to have a taxonomy which lets you ask about kinds of pain which are not merely yoked to immediate verbalizable things. As you say, it is complex.

For example, if you thought 'unconscious pain' is a contradiction in terms, then what do you make of things like anesthesia awareness or the troubling long-term PTSD-like symptoms in some people who undergo anesthesia ( https://www.lesswrong.com/posts/wzj6WkudtrXQFqL8e/inverse-p-... )? That may be 'behavior' but they certainly are not classic indicators of pain. They are not like dipping a mouse's tail in hot water and observing its movement. They do, however, despite the lack of qualia, look like learning processes about avoiding damage.

And why do we apparently have consciously-perceived damage signals which can in fact motivate behavior (if the person chooses to) without the accompanying painful qualia, if nociception is merely behavioral effects? When Tanya decides to react to burning her hand on a stove by moving it away, is she really experiencing the exact same kind of nociception that you or I experience when we burn our hand on a stove and move it away? It's the same behavior, after all.

I'm sure you can extend 'nociception' as a word to cover some but not all of these cases, but by that point, nociception needs the entire essay as a preface just to explain what one means by that, which is why I don't use it. It is a pointer to an entire theoretical & empirical apparatus the reader does not have. Anyone who already knows all that doesn't need to read the pain section at all as it's obvious why pain in humans is an example of bi-level losses.

> Or perhaps it just helps scientists sleep at night after they throw a few hundred plates with thousands of C. elegans each into the autoclave?

I think it was Steven Pinker who said he stopped doing animal experiments when he could no longer convince himself that hitting mice on the head with tiny hammers to give them brain damage was not the most evil thing he did...


I can't escape the impression that you aren't using the same definition I am for nociception. I otherwise agree with everything you've said as far as I can tell.

> And why ... if nociception is merely behavioral effects? ...

This is definitely not consistent with the definition I'm using. Rather, I'm using nociception to refer to the low level _physiological_ responses to damage (specifically the set of molecular pathways that are in some way organized as part of a larger systemic response to said damage).

The term pain can then be assigned to a particular qualia, leaving a few other phenomena on the levels in between the two. This makes nociception easy to speak and reason about and helps avoid some of the most confusing or apparently contradictory situations but otherwise leaves the higher level stuff (pain, motivation, various other qualia) as difficult to figure out as before.

In the case of complications related to anesthesia I'd say it's squarely in a grey area that the classification scheme I'm applying here doesn't handle as well. That's due partly to the line between physiology and psychology blurring at times, and partly to the (related) fuzziness of the term unconscious as it applies to an organism's biology.

Even when you're unconscious, a great deal of your nervous system still has to be functioning on some level in order to keep you alive. Since psychology arises from your nervous system, which is in turn made up of innumerable physiological effects, then it's not inconsistent with the definitions I'm using that we can observe nociceptive pathways resulting in changes to some of the higher level systems even if someone was unconscious while they were active.

> When Tanya decides to react to burning her hand on a stove by moving it away, is she really experiencing the exact same kind of nociception that you or I experience when we burn our hand on a stove and move it away?

I'd point out that (given the definition I've been using) we don't experience nociception but rather some higher level qualia that's downstream of it. Not knowing all that much about her case other than what appears in your article (ie pain insensitivity) I honestly have no idea. She obviously doesn't experience what any of us would describe as pain. Beyond that, how can we know what qualia someone else experiences? If we assume her to be otherwise identical to us then I suppose it would depend on exactly where in the pathway that runs from nociception to pain response her genetic abnormality manifested.

> ... despite the lack of qualia, look like learning processes about avoiding damage.

Precisely! As I'm modeling it, useful (evolutionarily) nociceptive pathways can give rise to avoidance or learning processes in any way whatsoever. All that's required is that their presence increase organism fitness!

In the case of many simpler organisms I suspect there might not be any qualia or learning whatsoever (and thus no loss function, bi-level or otherwise, on the level of an individual organism). Rather, nociceptive pathways might simply trigger some set of immediate, hard coded, higher level responses. (Consider C. elegans in particular and I think this will make a lot of sense.)

Up organism complexity a bit and you might encounter learning in the form of something resembling a simple state machine. In other words, employing nociceptive pathways to form a very basic set of associations between specific environmental conditions and some sort of danger or avoidance response. This could allow preemptively avoiding something that caused damage (and thus lowered fitness) in the past. (This might or might not be bi-level depending on what other learning pathways, if any, it interacted with.)
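A minimal sketch of that "simple state machine" level, with a hypothetical SimpleAvoider class just to make the idea concrete:

    # One-shot association between a condition and a nociceptive event;
    # no qualia, no value function, just a lookup that biases behavior.

    class SimpleAvoider:
        def __init__(self):
            self.danger = set()  # conditions previously paired with damage

        def observe(self, condition, nociceptive_signal_fired):
            if nociceptive_signal_fired:
                self.danger.add(condition)  # form the association

        def respond(self, condition):
            return "avoid" if condition in self.danger else "approach"

    worm = SimpleAvoider()
    worm.observe("hot_patch", nociceptive_signal_fired=True)
    print(worm.respond("hot_patch"))   # avoid
    print(worm.respond("food_patch"))  # approach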

Under such a model, it's only when you make it up to incredibly complex organisms whose behavior is governed (to at least some extent) by higher level reasoning that you might encounter things we could identify as psychology, qualia, or pain. This is where bi-level losses seem to become particularly important and also where things rapidly become incredibly complicated to model.

That being said, it seems to me that a hardwired avoidance response (ie an outer loss) is likely only necessary for initial bootstrapping (in the case of sufficiently well educated humans) or in an organism where higher level reasoning is present (and thus driving behavior) but not sufficiently advanced. For example, it seems at least plausible that Tanya might have been fine if only there were some way to imbue her with a well developed understanding of the world. Then again, maybe not - it's quite possible that other internal processes have developed a dependence on the pain response being present, and so ripping it out without doing any other remodeling might well result in a "pain sized hole" and a nonviable organism.


My cousin feels pain or discomfort, but only a little. This nearly caused a problem when she gave birth: her water had broken, but she didn't feel any contractions at all until it was almost too late. Luckily she got to the hospital in time and her son was born perfectly normal, but it was a bit harrowing.

More interestingly, her son inherited this. He doesn't feel pain the same way normal people do. Once her son broke his wrist and had to go to the hospital. He wasn't in pain, but I think they had to pull on the arm to put it back in place properly (is this called traction?). The doctor was putting in all his effort to separate the wrist from the arm, and the dad almost fainted because it looked so gruesome, but the son merely looked mildly discomforted by the tension. The doctor was apparently shocked at how little pain he felt.

The son also pulled out all his teeth on his own, as they got loose. He said it bothered him to have loose teeth, but the act of pulling them out didn’t bother him at all.


I don't get his point. After all, there is no need for conscious introspection from an evolutionary perspective, not even for pain. There is observable behaviour driven by events from the "outside" world. That could all easily be represented by a function of some input. No qualia needed. The truth behind this is that science has no grip on certain things, and these things are the central aspects of human experience. So all science can do is explain consciousness "away", like Dennett (among others) does. Magicians that distract and redirect the focus away from the matter. It could be just an epiphenomenon.


You seem to be describing p-zombies which (IMO) don't have much overlap with present day science. Higher level reasoning would appear to increase fitness in many cases. For whatever reason, our version of it appears to involve conscious introspection.

As to the point of the article - it's about modeling human learning as a reinforcement learning problem and thus pain as one of many interacting loss functions. In particular it's about modeling how these different loss functions interact with one another to achieve a desirable outcome.


I just cannot see why the introspection is needed and in what regard it extends the functional paradigm.

For the outcome of a reaction, there is no introspection needed. The difference between a pain value and the qualia of pain is not clear with regard to the function of behaviour and evolution.

Qualia is not simply a state; a state can also be modelled with a function. The examples of people deactivating the "warning prosthetics" are no real explanation. I think a (non-conscious but complex) function or set of functions could simulate human behaviour without introspection or conscious sense.

There is, from an evolutionary perspective, no necessity for it. Don't get me wrong, I appreciate the author's work. I maybe just missed where the magic begins.


Certainly we lack (to the best of my knowledge) a rigorous model or definition for terms such as "conscious" and "qualia". That doesn't tell us anything about whether introspection or consciousness is necessary or not, just that our understanding is lacking.

The fact that humans seem to have it doesn't say anything about its necessity (or lack thereof). It only speaks to its presence in our particular case.

I would challenge your assertion that "qualia is not simply a state". In the absence of a rigorous model how are you backing such a claim? Similarly, how are you justifying your assertion that human behavior could be simulated without introspection or consciousness? For that matter, how could anyone justify a claim to the contrary?

It seems to me that (at present) we simply lack the knowledge to meaningfully reason about generalizations such as introspection or consciousness being necessary or not.

As to why the author brings qualia in to it - my guess would simply be "because humans seem to have it". Sure, you could write about bi-level loss functions from a purely mathematical standpoint. The point (IMO) is to tie that mathematical model back to a concrete example in the real world in order to draw inspiration for the design of more capable RL algorithms.

It occurs to me that you might be interested in this recent essay on the topic. (https://slatestarcodex.com/2020/06/01/book-review-origin-of-...)


> Certainly we lack (to the best of my knowledge) a rigorous model or definition for terms such as "conscious" and "qualia". That doesn't tell us anything about whether introspection or consciousness is necessary or not, just that our understanding is lacking.

Yes. In (theoretical) computer science, one of the first things students learn is the limits of their domain and the limits of mathematics and models in general. Keywords: Halting problem

In other domains (e.g. biology, psychology, etc.) they don't bother with such metaphysics :) They use very primitive models (e.g. game theory) to map very complex processes. They forget the proverb that all models are wrong, but some are useful. And if that complexity gap is too big, the models just model randomness.

Long story short: I doubt it is possible to build a formal model to describe qualia or introspection.

> The fact that humans seem to have it doesn't say anything about its necessity (or lack thereof). It only speaks to its presence in our particular case.

> I would challenge your assertion that "qualia is not simply a state". In the absence of a rigorous model how are you backing such a claim?

As I said, there is no "connection", no "dependency" to evolutionary processes. It is decoupled from what we call evolution. I would rather think it is a property of matter or physics than one of biology or even cognition.

On the other hand, take the usual examples. A color can be described as a certain wavelength. You can model that. You can build a machine that detects / "perceives" colors and has an inner state for that "color". What is needed in addition to create what we (as humans) perceive as colors? Is it a matter of complexity? Like, if a lot of processes interact and supervise each other, then there are third-level effects, as with magnetism and electricity. And one of these is a newly emerging "force": consciousness.

> Similarly, how are you justifying your assertion that human behavior could be simulated without introspection or consciousness?

Maybe not at that complexity, but it's possible to simulate agents artificially in an artificial world. To simulate reality, you would need a "super" reality and more energy than exists. Modelling is costly. You have to sacrifice through "abstraction". Maybe consciousness is only possible at this level of immediacy, when everything interacts with everything in real time.

> For that matter, how could anyone justify a claim to the contrary?

If it could model or emulate a certain mental state of a given person at any time and communicate that state, so that the person could verify it, like a weather forecast. That would be a sign in that direction. But of course a real "proof" is not possible.


This is great, thank you gwern!

In particular, https://www.newyorker.com/magazine/2020/01/13/a-world-withou... - this is a fantastic article. I really want to know my own levels of anandamide now, I wonder how possible it would be to test that as a layman.


For another POV on pain perception and its role in driving decisions and behavior, Dan Ariely's Predictably Irrational talks about his early experiences with immense physical pain and his research into the impact of pain perception. A great book in many ways; this part comes early and is a small part of the book, but it puts a new perspective on some of our cognitive biases. If you haven't read it yet, it's worth the time.


Quite an interesting article. After graduating from a control-theory-heavy program, I have started thinking about animal movement as an optimization problem: a manipulator is moved into a desired position while minimizing a cost function of pain, in order to avoid damage to the body.

That would fall in the "useful painful pain" category, I guess.


A lot of animal movement is about minimizing energy usage, not optimizing for minimal pain.


Sure, the cost function can certainly be multi-dimensional.

Energy, pain, maybe even the expected value of death (e.g., taking energy-intensive and possibly painful evasive action to escape a predator) might be dimensions to take into consideration.
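A toy sketch of such a multi-term cost in Python; the terms and weights are illustrative assumptions, not a model anyone here proposed:

    # Weighted sum over a trajectory: energy, tissue strain ("pain"),
    # and predator exposure as a stand-in for expected death risk.

    def movement_cost(trajectory, w_energy=1.0, w_pain=5.0, w_death=100.0):
        energy = sum(step["effort"] ** 2 for step in trajectory)
        pain = sum(step["tissue_strain"] for step in trajectory)
        p_death = max(step["predator_exposure"] for step in trajectory)
        return w_energy * energy + w_pain * pain + w_death * p_death

    # Evasive action: costly in energy and strain, but slashes exposure.
    lazy  = [{"effort": 0.1, "tissue_strain": 0.0, "predator_exposure": 0.9}]
    evade = [{"effort": 0.9, "tissue_strain": 0.3, "predator_exposure": 0.05}]
    print(movement_cost(lazy), movement_cost(evade))  # evasion wins here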


> Lobotomized subjects similarly report feeling intense pain but not minding it

I'd sign up for that, actually.


Afterwards you’ll regret it, but you won’t mind it.


This made me laugh out loud, what monster can downvote this?


This made me smile. What grumpy Gus can downvote it?


i think one evolutionary purpose of chronic pain is to remind about the event which caused it.

for example, every time it rains i'm reminded of all the events which caused fractures. hopefully that helps with not repeating them.


Sorry for the offtopicness, but could you please stop creating accounts for every few comments you post? We ban accounts that do that. This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.

You needn't use your real name, of course, but for HN to be a community, users need some identity for other users to relate to. Otherwise we may as well have no usernames and no community, and that would be a different kind of forum. https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...


The unholy trinity of upvotes/downvotes/karma creates some "echo chamberishness" (if I have to put it nicely) and pressure on posting, as users tend to use the power to punish "wrongthink" instead of low-effort posts. I think this behavior stems from that.

Having an option to relieve users of this pressure (like enabling a little anonymous posting, or adding distinctions between downvote types) might incentivize them to drop this unwanted behavior.


100% agree. There is a very nonzero amount of "punishing wrongthink with downvotes" on HN. (Even if the post follows posting guidelines.)

As long as the community continues this kind of behavior, we'll have people spamming new accounts.


Usually there's something else wrong with a downvoted comment besides just "wrongthink". Since people don't like being downvoted, or seeing something they agree with being downvoted, they tend to reach for "wrongthink" a little too readily as the explanation.

Certainly there are unfairly downvoted comments, but users tend to give those corrective upvotes after a while: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que....


What are your thoughts on the meta-moderation system used by Slashdot? That seemed to curb a lot of common brigade abuse and is the one feature I miss most about other discussion systems.


Indeed I "correctively upvote" things that I otherwise wouldn't, but whenever I do so I'm frustrated by the rudimentary nature of pretty much all the moderation systems I've come across to date.


there's something about hn that i can't seem to remember my pw (also, tbi). whenever i'm on a new device, i register. sorry about that.


I strongly suspect that chronic pain directly reduces fitness by far more than it indirectly (through reminders) might increase it. However, as long as the effects are part of a highly conserved subsystem and also remain mild enough relative to the fraction of the population that experiences them, then there's a good chance they won't be meaningfully selected against.


It's intellectually hazardous to anthropomorphize chaos, like ascribing a purpose to evolution.


I think GP meant that pain concurrent with some experience makes the experience more memorable (ie, the brain helps you avoid future harm by making previous experiences more readily recalled).


Yup. It always rubs me the wrong way when someone says "Oh, the organism evolved to overcome that environmental factor".

It's more accurate to say "The environmental factor selected for a genetic variation that improved fitness."


I understand that's common nomenclature, but to me "selected" always feels like it implies intelligence in some manner. I've always preferred saying "the environmental factor caused evolution" or something similar.

For example, it feels more natural to me to say "an obstacle in my path may cause me to go around it" than "an obstacle in my path selected for me to go around it".

I think of evolution as a system that reacts to conditions, similar to a state machine. It's hard for me to think of those conditions as acting in some way on a system, but perhaps that's just a failure in my own thinking.


I probably didn't do a good job explaining it, but using your obstacle analogy...

When you say "an object in my path caused me to go around it", you're switching the cause and effect.

A more accurate way to say it would be "an obstacle on the path meant that only a handful of the population could go around it".

The change that allows you to go around the obstacle was not caused by the obstacle, it would have happened regardless. However, the change is only important if the obstacle is there - that's what drives evolution (selection).


> However, the change is only important if the obstacle is there - that's what drives evolution (selection).

I understand what's being said, it's the "selects" that triggers the problem for me.

I think of evolution as a system. Inputs cause the system to act in a certain way, but it's the system responding to the inputs, not the inputs choosing to get a specific output (which is what it sounds like to me to say A selects for B).

In the same way, with orbital mechanics I wouldn't say one object selected for a change in trajectory in the other. I would say one caused the change in the other, or more specifically, due to how orbital mechanics work (the "system", the law of gravity), one object caused a change in the trajectory of the other.

So I guess my question is, why does evolution prefer the "selects" terminology? Is that common in any other sciences?


> why does evolution prefer the "selects" terminology? Is that common in any other sciences?

I think this is due to set theory and statistics. Terminology such as "select x such that" is standard. In fact a choice function is commonly referred to as a selector. Since evolution is essentially a biased random walk, you can describe it as a stochastic choice function (plus some other stuff) applied to the set of entities that makes up a population.

(If "selects" bothers you, what do you make of quantum mechanical "observers"? I witnessed that one cause lots of misconceptions among university and even a few graduate students.)


Thinking in terms of purpose helps understanding. Just gotta keep in mind that some things don't really have a purpose. They may have arisen randomly, be vestigial or be a side effect of something beneficial.


Fools learn from experience. The wise learn from others' experience.

(even knowing this, I am usually a fool. Also, could someone please tl;dr "inner vs outer losses" for me? advthanksance)

Edit: am I properly interpreting "pain as grounding" to be somewhat parallel to the I term of a PID controller?
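For reference, a minimal PID sketch of the shape I mean; whether this is actually parallel to the essay's "pain as grounding" is my guess, not the author's claim:

    # The I term accumulates error over time, so a persistent error keeps
    # raising the output until it's addressed, much as chronic pain nags.

    def pid_step(error, state, kp=1.0, ki=0.1, kd=0.5, dt=1.0):
        state["integral"] += error * dt
        derivative = (error - state["prev"]) / dt
        state["prev"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    state = {"integral": 0.0, "prev": 0.0}
    for _ in range(5):
        print(pid_step(error=1.0, state=state))  # output grows while error persists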


The problem with learning from others' experience is... where does this learning process come from?

Learning from watching others is actually quite difficult. As Terence said, "when two do the same, it's not the same." No set of pixel observations will be identical. How do you map a third-person view to your own first-person view? You need sophisticated algorithms before 'learning from others' experience' is even an option. We struggle to get robots to learn from imitation. Human children routinely 'over-imitate' because they can't distinguish what part of a sequence of actions is necessary and what parts are optional or can be sloppier and still achieve the goal. Indeed, how do you even know what the 'goal' was? People differ greatly in abilities, preferences, and knowledge, so you would seem to require theory of mind just to begin. (I'm reminded of a point about kittens I saw argued: cats can't learn from observations, and instead, when a kitten 'imitates' their mother, they are actually simply becoming interested in the same object or place, and then independently inventing, by the usual cat trial and error, whatever useful behavior it was.)


I currently believe the third-person view comes before the first-person view. There's a substantial evolutionary pressure to be able to predict one's predator or prey; even quite limited theories of mind come in handy there. Consciousness, however, isn't required — I'll argue that the first-person view is an accident: once we have a system that is suitable for modelling others, it makes sense that the creature we have the most substantial data stream on is ourselves, so we model that as well.

This way around might explain why our self-awareness sometimes fails to be optimal.

For how little theory of mind it takes to play antagonistic games, see Shannon's 1953 mind reading machine: https://this1that1whatever.com/miscellany/mind-reader/Shanno...

(or try to play fetch with a dog, mixing two types of throws, and see how quickly it learns your tells)
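In the same spirit, a rough sketch of a Shannon-style predictor; the two-move context window is my simplification, not Shannon's exact state logic:

    from collections import defaultdict

    history = []                          # opponent's past choices, 0 or 1
    counts = defaultdict(lambda: [0, 0])  # context -> [count of 0s, count of 1s]

    def predict():
        if len(history) < 2:
            return 0                      # no data yet; guess arbitrarily
        zeros, ones = counts[tuple(history[-2:])]
        return 1 if ones > zeros else 0

    def observe(move):
        if len(history) >= 2:
            counts[tuple(history[-2:])][move] += 1
        history.append(move)

    # a player with a tell: tends to repeat after two identical moves
    for move in [0, 0, 0, 1, 1, 1, 0, 0, 0]:
        print("guess:", predict(), "actual:", move)
        observe(move)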


> and in other ways the manifestations of lobotomy and morphine are similar enough to lead some researchers to describe the action of morphine (and some barbiturates) as ‘reversible pharmacological leucotomy [lobotomy]’.

Interesting. Wonder if they did brain scans on people taking morphine to see if the same areas were affected. If so, couldn't we narrow "mindfulness of pain" to that section of the brain?



