Miscellaneous

Misc thoughts, memories, proto-essays, musings, etc.

Quickies

A game theory curiosity: what is the role of infrastructure?

It’s interesting that though Assad, ISIS, China, Russia etc. all know that their computers & networks are heavily pwned by enemies like the USA, they still can’t bring themselves to take them down or stop using them. ISIS, while losing, continued to heavily use smartphones and the Internet despite knowing they were of intense interest to its enemies/the West, and despite the West signaling its estimate of the danger to ISIS by allowing the networks & phones to continue to operate instead of destroying/disabling them (as would be trivial to do with a few bombing missions). An easier example: North Korea continues to allow tourists to pay it thousands of dollars, while the tourists claim that they are in fact undermining the regime through consciousness-raising & contact with foreigners. NK and the tourists can’t both be right; but while it’s obvious who’s right in the case of NK (it’s NK, and the tourists are immoral and evil for going and helping prop up the regime with the hard foreign currency it desperately needs to buy off its elite & run things like its ICBM & nuclear bomb programs), it’s not so obvious in other cases.

To me, it seems like ISIS is hurt rather than helped on net by the cellphone towers in its territory, as the data can be used against it in so many ways while the benefits of propaganda are limited, and ISIS certainly cannot ‘hack back’ and benefit to a similar degree by collecting info on US/Iraqi forces - but apparently they disagree. So this is an odd sort of situation. ISIS must believe it is helped on net and the US is harmed on net by cellphones & limited Internet, so it doesn’t blow up all the cellphone towers in its territory (they’re easy to find). While the US seems to believe it is helped on net and ISIS is harmed on net by cellphones & limited Internet, so it doesn’t blow up or brick all the cellphone towers in ISIS territory (they’re easy to find). But they can’t both be right, and they both know the other’s view on it.

Some possible explanations: one side is wrong, irrational, and too irrational to realize that it’s a bad sign that the enemy is permitting them to keep using smartphones/Internet; one or more sides thinks that leaving the infrastructure alone is hurting it but the benefits to civilians are enough (yeah, right); or one side agrees it’s worse off, but lacks the internal discipline to enforce proper OPSEC (plausible for ISIS), so has to keep using it (a partial abandonment being worse than full use).


Big blocks are critical to Bitcoin’s scaling to higher transaction rates; after a lot of arguing with no progress, some people made Bitcoin Unlimited and other forks, and promptly screwed up the coding and seem to’ve engaged in some highly unethical tactics as well, thereby helping discredit allowing larger blocks in the original Bitcoin; does this make it a real-world example of the unilateralist’s curse?


The most recent SEP entry on logical empiricism really reinforces how much America benefited from WWII and the diaspora of logicians, mathematicians, philosophers and geniuses of every stripe from Europe (something I’ve remarked on while reading academic biographies). You can trace back so much in computing alone to all of their work! It’s impossible to miss; in intellectual history, there is always “before WWII”, and “after WWII”.

In retrospect, it’s crazy that the US government was so blind and did so little to create or at least assist this brain drain; it was accomplished by neglect and accident and by other researchers working privately to bring over the likes of Albert Einstein and John von Neumann and all the others to Princeton and elsewhere. Was this not one of the greatest bonanzas of R&D in human history? The US got much of the cream of Europe, a generation’s geniuses, at a critical moment in history. (And to the extent that it didn’t, because they died during the war or were captured by the Soviets, that merely indicates the lack of efforts made before the war—few were such ideological Marxists, say, that they would have refused lucrative offers to stay in Europe in order to try to join the USSR instead.) This admittedly is hindsight, but it’s striking to think of what enormous returns some investments of $1–2m & frictionless green cards would have generated in 1930–1940. If someone could have just gone around to all the promising Jewish grad students in Germany and offered them a no-strings-attached green card + a $1k stipend for a few years… For the cost of a few battleships or bombers, how many of the people in the German rocketry or nuclear program could’ve been lured away, so that they never ultimately had to take the technology developed with their talents to the Soviets?

Despite the mad scramble of Operation Paperclip, which one would think would have impressed on the US the importance of lubricating brain drain, this oversight continues: not only does US immigration law make life hard on grad students or researchers, it’s amazing to think that if war or depression broke out right now in East Asia, the US would not be standing ready to cream off all the Chinese/Korean/Japanese researchers - instead they would have to trickle through the broken US system, with no points-based skilled-immigration assistance!

It’s unfortunate that we’re unable to learn from that and do anything comparable in the US, even basic measures like PhDs qualifying for green cards.


So much of the strange and unique culture of California, like the ‘human potential movement’, seems to historically trace back to German Romanticism. All the physical culture, anti-vaxxers, granola, nudism, homeopathy, homosexuality, pornography, all of it seems to trace back, either intellectually or through emigration, to Germans, in a way that is almost entirely unrecognized in any discussions I’ve seen. (Consider the Nazis’ associations with things like vegetarianism or animal rights or anti-smoking campaigns, among other things.) No one does rationality and science and technology quite like the Germans… but also no one goes insane quite like the Germans.


LED lights are much cheaper to operate and run cooler, so they can emit much more light; and when something gets cheaper, people buy a lot more of it. The leading theory about myopia right now is that it’s caused by the adaptive growing eye not receiving enough bright sunlight (which is orders of magnitude brighter than indoor lighting) and so growing incorrectly. So would the spread of LED lighting lead to a reduction in the sky-high myopia rates of industrialized countries? What about smartphone use, as they are bright light sources beamed straight into growing eyes?


People often note a ‘sophomore slump’ or ‘sequelitis’ where the second work in a series or the other works by an author are noticeably worse than the first and most popular.

Some of this can be inherent to the successor, since it cannot, for example, benefit from the magic of world-building a second time. But some of this is also going to be regression to the mean: by definition, if you start with an author’s best work, the next one can’t be better and probably will be worse. For the next one to be as good or better, either the author would need to be extremely consistent in output quality or you would need to start with one of their lesser works; the former is hard and few authors can manage it (and the ones who can are probably writing unchallenging dreck like pulp fiction), but the latter, as inept as it might seem (why would a reader want to start with the second-best book?), actually does happen, because media markets are characterized by extreme levels of noise, exacerbated by winner-take-all dynamics and long tails of extreme outcomes.

So an author’s most popular work may well not be their best, because it is simply a fluke of luck; in most universes, J.K. Rowling does not become a billionaire on the strength of a mega-blockbuster book & movie series but publishes a few obscure well-regarded children’s books and does something else (note how few people read her non-Harry Potter books, and how many of them do so because they are a fan of HP).

What might it look like in terms of ratings when an author publishes several works, one of them takes off for random reasons, and then they publish more? Probably the initial works will meander around the author’s mean as their few fans rate them relatively objectively, the popular work will receive very high ratings from the vast masses who cotton onto it, and then subsequent works will be biased upwards by the extreme fans now devoted to a famous author but still rated below the original because “it just doesn’t have the same magic”.

Authors don’t improve that much over time, so the discontinuity between pre- and post-popularity ratings or sales can be attributed to the popularity itself, and gives an idea of how biased media markets are.
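
A minimal simulation sketch of this story, in Python (every parameter below is a made-up illustrative assumption: a fixed author quality, noisy per-book reception, ‘fame’ triggered by one breakout hit, and a small fan-bias bump on post-fame ratings):

# Toy model of the 'sophomore slump': regression to the mean plus fan bias.
# All numbers are arbitrary illustrative assumptions, not estimates of any real market.
import random

random.seed(0)

def career(n_books=10, quality=0.0, noise=1.0, fame_threshold=2.0, fan_bias=0.5):
    """Simulate one author's sequence of observed (public) ratings."""
    ratings, famous = [], False
    for _ in range(n_books):
        reception = random.gauss(quality, noise)                    # luck around fixed talent
        ratings.append(reception + (fan_bias if famous else 0.0))   # fans inflate post-fame ratings
        if reception > fame_threshold:                              # a fluke hit makes the author famous
            famous = True
    return ratings

# Average drop from an author's best-rated book to the very next one:
drops = []
for _ in range(10_000):
    r = career()
    best = max(range(len(r)), key=lambda i: r[i])
    if best < len(r) - 1:
        drops.append(r[best] - r[best + 1])
print(f"mean drop after the best-rated book: {sum(drops) / len(drops):.2f}")

Even though the simulated author never changes in quality, the book right after the best-rated one comes out noticeably lower on average: pure selection plus noise.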


A striking demonstration that entertainment doesn’t matter much is how, over the last 10 or 20 years, the size of the corpuses you can access easily and for free has increased by several orders of magnitude without making even a hiccup in happiness or life satisfaction surveys or longevity or suicide rates.

Going back to the media abundance numbers: there are billions of videos on YouTube; millions of books on Libgen; god knows how much on the Internet Archive or the Internet in general; all of this for free and typically within a few clicks. In contrast, historically, people were media-poor to an almost unfathomable degree.

A single book might cost a month or a year’s salary. A village’s only book might be a copy of the Bible. The nearest library might well be a private collection, and if one could get access, have a few dozen books at most, many of which would be common (but extremely expensive) books - hence the countless medieval manuscripts of the Bible, Plato, or works like the Roman de la Rose (hugely popular but now of interest only to specialists), while critically important works like Tacitus or Lucretius survive in a handful of manuscripts or a single one, indicating few circulating copies. So in a lucky lifetime, one might read (assuming, of course, one is lucky enough to be literate) a few dozen or hundred books of any type. It’s no wonder that everyone was deeply familiar with any throwaway Biblical or Greek mythical allusion, when that might be the only book available, read repeatedly and shared across many people.

What about stories and recitations and music? Oral culture, based on familiar standards, traditions, and religions, and bound up in ritual functions (one of the key aspects of ritual is that it repeats), does not offer much abundance to the individual either; hence the ability to construct phylogenetic trees of folk tales and follow their slow dissemination and mutation over the centuries, or perhaps as far back as 6 millennia in the case of “The Smith and the Devil”.

Why does hardly anyone seem to have noticed? Why is it not the central issue of our time? Why do brief discussions of copyright or YouTube get immediately pushed out of discussions by funny cat pics? Why are people so agonized by inflation going up 1% this year or wages remaining static, when the quantity of art available at any given price increases so much every year, if art is such a panacea? The answer of course is that “art is not about esthetics”, and people bloviating about how a novel saved their life are deluded or virtue-signaling; it did no such thing. Media/art is almost perfectly substitutable, there is already far more than is necessary, the effect of media on one’s beliefs or personality is nil, and so on.


It’s always surprising to read about classical music and how recent the popularity of much of it is. The fame of some classical pieces is an oddly recent thing, like crosswords, which only recently turned 100 years old.

For example, Vivaldi’s 1721 “Four Seasons” - what could be more commonplace or popular than it? It’s up there with “Ode to Joy” or Pachelbel’s Canon in terms of being overexposed to the point of nausea.

Yet, if you read Wikipedia, Vivaldi was effectively forgotten after 1800, all the way until the 1930s, and the “Four Seasons” didn’t even get recorded until 1939!

Pachelbel’s Canon turns out to be another one: not even published until 1919, not recorded until 1940, and only popularized in 1968! Since it might’ve been composed as early as 1680, it took almost 300 years to become famous.


One thing I notice about different intellectual fields is the vastly different levels of cleverness and rigor applied.

Mathematicians and theoretical physicists produce the most astonishingly intricate & difficult theories & calculations that few humans could ever appreciate even after decades of intense study, while in another field like education research, one is grateful if a researcher can use a t-test and understands correlation!=causation. (Hence notorious phenomena like physicists moving into a random field and casually making an important improvement; one thinks of dclayh’s anecdote about the mathematician who casually made a major improvement in computer circuit design and then stopped - as the topic was clearly unworthy of him.) This holds true for the average researcher and is reflected in things like GRE scores by major, and reproducibility rates of papers. (Aubrey de Grey: “It has always appalled me that really bright scientists almost all work in the most competitive fields, the ones in which they are making the least difference. In other words, if they were hit by a truck, the same discovery would be made by somebody else about 10 minutes later.”)

This does not reflect the relative importance of fields, either - is education less important to human society than refining physics’s Standard Model? Arguably, as physicists & mathematicians need to be taught a great deal before they can be good physicists/mathematicians, it is more important to get education right than to continue tweaking some equation here or speculating about unobservable new particles. (At the least, education doesn’t seem like the least important field, but in terms of brainpower, it’s at the bottom, and the research is generally of staggeringly bad quality.) And the low level of talent among education researchers suggests considerable potential returns.

But the equilibriums persist: the smartest students pile into the same fields to compete with each other, while other fields go begging.


People keep saying self-driving cars will lead to massive sprawl by reducing the psychological & temporal costs of driving, but I wonder if they might not do the opposite, by making the currently-hidden costs of driving salient?

Consumers suffer from a number of systematic cognitive biases in spending, and one of the big ones seems to be the difference between per-unit billing and lump-sum prepayment, hidden costs, or opportunity costs.

Whenever I bring up estimates of cars costing >$0.30/mile when correctly including all costs like gas and time, people usually seem surprised & dismayed & try to deny it. (After calculating the total round-trip cost of driving into town to shop, I increased my online shopping substantially, because I realized the in-person discounts were not nearly enough to compensate.) And Brad Templeton suggests ride-sharing is already cheaper for many people and use-cases, when you take into account these hidden costs.
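
A back-of-the-envelope sketch of that kind of per-mile estimate (every figure below is a made-up illustrative assumption, not a measurement of anyone's actual costs):

# Rough per-mile cost of car ownership; all inputs are illustrative assumptions.
miles_per_year = 10_000
fuel           = (miles_per_year / 28) * 3.50   # 28 mpg at $3.50/gallon (assumed)
insurance      = 1_200                          # $/year (assumed)
maintenance    = 900                            # repairs, tires, oil; $/year (assumed)
depreciation   = 2_000                          # $/year (assumed)
time_cost      = (miles_per_year / 30) * 15     # hours at an average 30 mph, valued at $15/hour (assumed)

out_of_pocket = fuel + insurance + maintenance + depreciation
print(f"out-of-pocket:  ${out_of_pocket / miles_per_year:.2f}/mile")                 # well over $0.30/mile
print(f"including time: ${(out_of_pocket + time_cost) / miles_per_year:.2f}/mile")   # roughly $1/mile

The exact figures don't matter; the point is that the per-mile total is dominated by costs that never show up at the pump.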

Similarly, the costs of car driving are reliably some of the most controversial topics in home economics (see for example the car posts on the Mr Money Moustache blog, where MMM is critical of cars). And have you noticed how much people grumble about taxi fares and even (subsidized) Uber/Lyft fares, while obsessing over penny differences in gas prices and ignoring insurance/repair/tire costs? Paying 10% more at the pump every week is well-remembered pain; the pain of paying an extra $100 to your mechanic or insurer once a year is merely a number on a piece of paper. The price illusion, that when you own your own car, you can drive around for “free”, is too strong.

So I get the impression that people just don’t grasp how expensive cars are as a form of transport. Any new form of car ownership which made these hidden prices more salient would feel like a painful jacking-up of the price.

People will whine about how ‘expensive’ the cars are for billing them $0.30/mile, but self-driving cars will be far too convenient for most people not to switch. (In the transition, people may keep owning their own cars, but this is unstable, since car ownership has so many fixed costs, and many fence-straddlers will switch fully at some point.) This would be similar to cases of urban dwellers who crunch the numbers and realize that relying entirely on Uber+public-transit would work out better than owning their own car; I bet they wind up driving fewer total miles each month than when they could drive around for ‘free’.

So, the result of a transition to self-driving cars could be a much smaller increase in total miles-driven than forecast, due to the loss of this price illusion and the backlash against newly-salient per-mile prices.


Education stickiness: what happened to the ronin trope in anime?

Justin Sevakis, “How Tough Is It To Get Into College In Japan?”, on Japanese college entrance exams:

There’s a documented increase in teen suicide around that time of year, and many kids struggle with the stress. The good news is that in recent years, with Japan’s population decreasing, schools have become far less competitive. The very top tier schools are still very tough to get into, obviously, but many schools have started taking in more applying students in order to keep their seats full. So “ronin” is a word that gets used less and less often these days.

In macroeconomics, we have the topic of wage stickiness and inflation: a small amount of inflation is seen as a good thing because workers dislike a frozen nominal wage that amounts to a 2% real pay cut under 2% inflation less than they dislike an explicit 2% nominal pay cut under 0% inflation. So inflation allows wages to adjust subtly, avoiding unemployment and depression. Deflation, on the other hand, is the opposite: it causes a static salary to become a growing burden on the employer, while the employee doesn’t feel like they are getting a real pay increase, and many workers would need to accept nominal pay cuts even while receiving what is, in real terms, a reasonable pay increase (and likewise, deflation burdens anyone using credit).

What about higher education? Higher education being a zero-sum signaling game, the ‘real value’ of a degree is analogous to its scarcity and eliteness: the fewer people who can get it, the more it’s worth. If everyone gets a high school diploma, it becomes worthless; if only a few people go that far, it maintains a salary/wage premium. Just like inflation/deflation, people’s demand for specific degrees/schools will make the degrees more or less valuable. Or perhaps a better analogy here might be to the gold standard: if the mines don’t mine enough gold each year to offset population growth and the regular loss of gold to ordinary wear-and-tear, there will be deflation, but if they mine more than growth+loss, there will be some inflation.

What are elite universities like the Ivy League equivalent to? Gold mines which aren’t keeping up. Every year, Harvard is more competitive to get into because - shades of California - it refuses to expand in proportion to the total US student population, while per-student demand for Harvard remains the same or goes up. It may increase enrollment a little each year, but if it’s not increasing fast enough, the effect is deflationary. As Harvard is the monopoly issuer of Harvard degrees, it can engage in rent-seeking (and its endowment would seem to reflect this). This means students must sink more into the signaling arms race as the entire distribution of education credentials gets distorted, risking leaving students at bottom-tier schools earning worthless signals.

With a decreasing population, on the other hand, admission effectively becomes easier each year, as a fixed enrollment represents a larger percentage of the shrinking student population. Since education is signaling and an arms race, this makes students better off (at least initially). And this will happen despite the university’s interest in not relaxing its criteria and trying to keep its eliteness rent constant. There may be legal requirements on a top-tier school to take a certain number of students, it may be difficult for them to justify steep reductions, and of course there are many relevant internal dynamics which push towards growth (which adjuncts and deans and vice-presidents of diversity outreach will be fired now that there are fewer students in absolute terms?).


A fantasy I am haunted by, the Cosmic Spectator.

What if, some day or night, a vast Daemon stole into my solitude and made a simple offer - “choose, and I shall take you off the mortal plane, and thou mayest go whither in the Universe thou pleasest down to the Final Day, an thou give up any influence on the world forevermore, forever a spectator; else, remain as you are, to live in the real world and die in a score or three of years like any man”? Or: I wouldn’t sell my soul to the Devil for as little as Enoch Soames did - merely looking himself up in the library - but I might for a library from the end of history.

Would I take it? Would you take it?

I think I would. “How does it turn out?” is a curiosity that gnaws at me. What does it all amount to? What seeds now planted will in the fullness of time reveal unexpected twists and turns? What does human history culminate in? A sputter, a spurt, a scream, a spectacular? Was it AI, genetics, or something else entirely? (“Lessing, the most honest of theoretical men…”) Becoming a ghost, condemned to watch posterity’s indefinite activities down through deep time as the universe unfolds to timelike infinity, grants the consolation of an answer and an end to hunger - just to know, for the shock of looking around and seeing every last thing in the world radically transformed as suddenly I know to what end they all tend, what hidden potentials lurked to manifest later, what trajectory minimizes their energy between the Big Bang and the Big Crunch, everything that I and everyone else was mistaken about and over or underestimated, all questions given a final answer or firmly relegated to the refuse bin of the unknowable & not worth knowing, the gnawing hunger of curiosity at last slaked to the point where there could be curiosity no more, vanishing into nirvana like a blown-out candle.


One kind of fraud is striking in its absence online: tampered or forged PDFs. People create malicious videos, photos, chat logs, and Microsoft Word documents all the time to scam and propagandize people, or publish entire PDFs full of garbage science, but they don’t edit existing PDFs.

Once in a while I see someone object to a paper I host on Gwern.net by saying “but that’s not on a real journal website! it’s not peer-reviewed! it’s just some random asshole’s personal website! You can’t believe that!” Aside from bemusement over people not believing PDFs can exist elsewhere (do they not understand the idea of “files”? Apparently many young people struggle with it. Or are they just extremely eager to obey copyright law & can’t even imagine just copying a PDF?), it’s interesting how well this actually works.

There are all kinds of incompetence, fraud, and malice online, often in PDFs… but only new PDFs. I can’t think of a single fraud accomplished by editing a real PDF & just uploading it for Google Scholar etc., or where I’ve been burned even by mislabeling. You can just search for a paper title, download it, and trust that ~100% of the time, you are getting what you thought you were getting, with the main caveat being that you may be downloading the author’s draft or a preprint and not the finalized version (particularly in economics, where papers might go through many preprints, sometimes changing the results substantially along the way, and take anywhere up to a decade to reach final publication). And when you do find a PDF claiming something malicious, like claiming to use statistics to show that Trump won the 2020 US presidential election, it’s always a ‘new’ PDF, which is forthright about being a new unpublished ‘white paper’ or somesuch, and doesn’t purport to be a published paper. Or if it was a forged or edited document, it was usually clearly exported from Microsoft Word or another word processor (eg. all the forgeries exposed by anachronistic use of the Calibri font). Whereas, if you were so epistemically careless with images on, say, Facebook, you would wind up with a folder stuffed full of lying images which have been Photoshopped, claimed to be things other than what they are, ‘deep faked’, etc.

PDF forgery is striking because it’d be so easy to do: find a useful research paper, edit it in any of the many PDF utilities, upload it anywhere, wait for people to copy it (as they do), then take down yours; now you have an authoritative peer-reviewed research paper floating around the Internet with no links to you. Given how rarely people check the original papers, and how retracted studies like the Wakefield autism/vaccine study or blatant propaganda like Project Denver will circulate among the epistemically-lazy indefinitely, a not-too-blatant forgery could get into widespread circulation for a long time before anyone noticed. And it’s not as if there are no zealots or fanatics or malefactors willing to do so—historically, scribes tampered with documents all the time! (“Written by Confucius” or “apropos of nothing now I, Josephus the Jew, will tell you how wonderful Jesus Christ was”…)

Why can you just download PDFs off any random asshole’s website?

Because there’s no Photoshop for PDFs, maybe? Places like Arxiv provide TeX sources, but that’s still a dark art for most would-be forgers and fanatics and con artists. PDFs are not necessarily hard to edit, but editing PDFs is not part of a normal workflow for most people: even little kids will edit photos for social media, but outside of big corporations and design, PDFs are strictly write-only formats—your document system compiles source documents to PDFs, and you never edit the PDF, only the source documents. (It is somewhat analogous to the compiled binary of a program: a PDF is focused on laying out, pixel by pixel, how a printed document should look; it may not even contain the original text, much less any of the structure. Just as there are hackers who specialize in understanding and changing raw binary computer code, there are people who specialize in editing PDFs… but not many.) This is enough to push malefactors into other approaches. After all, if editing photos can work so well, why bother with the much harder editing of PDFs? As the joke goes, PDFs don’t need to outrun the (Russian) bear, they just need to outrun the other formats.


  • The Count of Zarathustra: The count of Monte Cristo as a Nietzschean hero?

  • Poem title: ‘The Scarecrow Appeals to Glenda the Good’

  • Twitter SF novel idea: an ancient British family has a 144 character (no spaces) string which encodes the political outcomes of the future eg. the Restoration, the Glorious Rebellion, Napoleon, Nazis etc. Thus the family has been able to pick the winning side every time and maintain its power & wealth. But they cannot interpret the remaining characters pertaining to our time, so they hire researchers/librarians to crack it. One of them is our narrator. In the course of figuring it out, he becomes one of the sides mentioned. Possible plot device: he has a corrupted copy?

  • A: “But who is to say that a butterfly could not dream of a man? You are not the butterfly to say so!” B: “No. Better to ask what manner of beast could dream of a man dreaming a butterfly, and a butterfly dreaming a man.”

  • Mr. T(athagata), the modern Bodhisattva: he remains in this world because he pities da fools trapped in the Wheel of Reincarnation.

  • A report from Geneva culinary crimes tribunal: ‘King Krryllok stated that Crustacistan had submitted a preliminary indictment of Gary Krug, “the butcher of Boston”, laying out in detail his systematic genocide of lobsters, shrimp, and others conducted in his Red Lobster franchisee; international law experts predicted that Krug’s legal team would challenge the origin of the records under the poisoned tree doctrine, pointing to news reports that said records were obtained via industrial espionage of Red Lobster Inc. When reached for comment, Krug evinced confusion and asked the reporter whether he would like tonight’s special on fried scallops.’

  • “Men, we face an acute situation. Within arcminutes, we will reach the enemy tangent. I expect each and every one of you to give the maximum. Marines, do not listen to the filthy Polars! Remember: the Emperor of Mankind watches over you at the Zero! Without his constant efforts at the Origin, all mankind would be lost, and unable to navigate the Warp (and Woof) of the x and y axes. You fight not just for him, but for all that is good and real! Our foes are degenerate, pathological, and rootless; these topologists don’t know their mouth from their anus! BURN THE QUATERNION HERETIC! CLEANSE THE HAMILTONIAN UNCLEAN!”

    And in the distance, sets of green-skinned freaks could be heard shouting: “Diagonals for the Orthogonal God! Affines for the Affine God! More lemma! WAAAAAAAAAAAAGGHHH!!!!!!” Many good men would be factored into pieces that day.

    In the grim future of Mathhammer 4e4, there is only proof!


I noted today, watching our local woodchuck sitting on the river bank, that after a decade our woodchuck has still not chucked any whole trees; this lets us set bounds on the age-old question of “how much wood could a woodchuck chuck if a woodchuck could chuck wood”—assuming a nontrivial, nonvacuous rate of woodchucking, it must be upperbounded at a small rate by this observation. To summarize:

Based on 10 years’ longitudinal observation of a woodchuck whose range covers 20 trees, we provide refined estimates upperbounding woodchucking rates at <8 × 10−11 tree⁄s, improving the state of tree art by several magnitudes. Specifically: we estimate the number of chucked trees at 0.5 as an upperbound to be conservative, as a standard correction to the problem of zero cells; therefore, the woodchucking rate of 0.5 trees per woodchuck per decade per 20 trees is 0.5 chucked-trees / ((1 woodchuck × 60 seconds × 60 minutes × 24 hours × 365.25 days × 10 years) × 20 trees) < 0.0000000000792202195, or 7.92202195e−11 trees per second.
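
The arithmetic, spelled out (same figures as the abstract above):

# Upper bound on the woodchucking rate, using the figures from the abstract above.
seconds_observed = 60 * 60 * 24 * 365.25 * 10   # 10 years of observation, in seconds
chucked_trees    = 0.5                           # conservative zero-cell correction
trees_in_range   = 20
rate = chucked_trees / (seconds_observed * trees_in_range)
print(rate)   # ≈ 7.92e-11, matching the bound above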


Air conditioning is ostracized & moralized as wasteful in a way that indoor heating tends not to be. This is even though heat stress can kill quite a few people beyond the elderly (even if heat waves do not kill as many people as cold winters do—they merely kill more spectacularly). Why? After all, AC should seem like less of a splurge on energy: winter heating bills in any cold place tend to be staggeringly large, while AC is so cheap it is often stuck through a random window & run off wall-socket electricity, without requiring highly specialized fuel & infrastructure like heating; and the temperature difference is obviously much larger for heating—a slightly cold winter might see temperatures going a degree or two below freezing vs a room temperature of 22℃, for a difference of >23℃, while the most brutally hot summers typically won’t go much past 38℃, or a difference of 16℃. (A record winter temperature when I lived on Long Island might be −18℃, while a really cold winter in Rochester, NY, would be −28℃, or ~50℃ away from a comfy room temperature!) The energy bills would show the difference, and I suspect my father had a keen grasp of how expensive winter was compared to summer, but I never saw them, so I don’t know; and most people would not, because they have utilities included in rent, or their spouse or family handles that, or they don’t do the arithmetic to try to split out the AC part of the electricity bill & compare with the winter heating bill.
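
To make the asymmetry concrete, the temperature deltas above (using the assumed or roughly-remembered temperatures, not measurements):

# Indoor-outdoor temperature differences for heating vs cooling, in °C; illustrative values only.
room          = 22
mild_winter   = -1    # "a degree or two below freezing" (assumed)
rochester_low = -28   # a really cold Rochester, NY winter (as remembered above)
brutal_summer = 38    # a brutally hot summer high (assumed)

print("mild-winter heating delta:   ", room - mild_winter)     # 23 °C
print("Rochester heating delta:     ", room - rochester_low)   # 50 °C
print("brutal-summer cooling delta: ", brutal_summer - room)   # 16 °C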

I would suggest that, like the earth spinning, the reason AC looks more wasteful is simply that it looks like AC is more work. An air conditioner is visibly, audibly doing stuff: it is blowing around lots of air, making lots of noise, rattling, turning on & off, dehumidifying the place (possibly requiring effort to undo), and so on. Meanwhile, most Western heating systems have long since moved on from giant crackling stone fireplaces tended by servants and regularly cleaned by chimney sweeps. Whether you use electric baseboards like I do, or a gas furnace in the basement, you typically neither hear nor see it, nor are you aware of it turning on; the room is simply warm. Only the occasional gap with a chill trickle of air will remind you of the extreme temperature difference.


I wonder how fish perceive temperature? They’re cold-blooded so cold doesn’t kill them except at extremes like the Arctic, so they probably don’t perceive it as pain. They’d never encounter water hot enough to kill them, so likewise. Water temperature gradients are so tiny, even over a year, that they hardly need to perceive it in general…

But the problem for them is oxygenation for gas exchange, and different species can handle different levels of oxygen. So for them, hot/cold probably feels like asphyxiation or CO2 poisoning: they go into colder water and slow down but breathe easier; and in hotter water than optimal, they begin to strangle, because they have to pass more hot water through their gills to get the necessary oxygen.


The tension in the term “SF/F” is eternal because science fiction applies old laws to new things, while fantasy fiction applies new laws to old things, and people always differ on which.

(Is Brandon Sanderson a fantasy author, or really just a science fiction writer with odd folk physics?)


You wish to be a writer, and write one of those books filled with wit and observations and lines? Then let me tell you what you must do: you must believe that the universe exists to become a book—that everything is a lesson to watch keenly because no moment and no day is without its small illumination, to have faith in the smallest of things as well as the largest, and believe that nothing is ever finished, nothing ever known for better or worse, no outcome is final & no judgment without appeal nor vision without revision, that every river winds some way to its sea, and the only final endings are those of books. You must not despise your stray thoughts, or let quotes pass through the air and be lost; you must patiently accumulate over the years what the reader will race through in seconds, and smile, and offer them another, book after book, over pages without end. And then, and only then, perhaps you will be such a writer.


East Asian aesthetics have cyclically influenced the West, with surges every generation or two. This pattern began with European Rococo & Impressionism & Art Deco, all heavily influenced by Japanese and Chinese art, a phenomenon termed chinoiserie. Subsequent waves include the post-impressionist surge around the 1890s via World Fairs, the post-WWII wave (helped by returning American servicemen as well as hippies), and the late 1980s/1990s wave fueled by the Japanese bubble and J-pop/anime.

The East Asian cycles may be over. The Korean Wave may represent the latest iteration, but it’s doubtful another will follow soon. Japan appears exhausted culturally and economically. South Korea, despite its cultural output, faces a demographic collapse that threatens its future influence. China, while economically potent, is culturally isolated due to its firewall and Xi Jinping’s reign. Taiwan, though impactful relative to its size, is simply too small to generate a substantial wave.

In South Asia, India seems promising: Bollywood is already a major cultural export, it enjoys a youth bulge, and Modi is unable to become a tyrant like Xi or to freeze Indian culture (as much as he might like to). But there are two problems:

  1. The decline of a mainstream culture conduit and the rise of more fragmented, niche cultures also complicates the transmission of such waves. It may be that it is simply no longer possible to have meaningful cultural ‘periods’ or ‘movements’; any ‘waves’ are more like ‘tides’, visible only with a long view.

  2. Poor suitability for elite adoption: Pierre Bourdieu’s theory on taste hierarchies suggests that the baroque aesthetics of Indian art, characterized by elaborate detail and ornamentation, may not resonate with Western elite preferences for minimalism. It is simply too easy & cheap. (Indian classical music, on the other hand, can be endlessly deep, but perhaps too deep & culture-bound, and remains confined to India.)

    Furthermore, in an era of AI-generated art, baroque detail is cheap, potentially rendering Indian aesthetics less appealing.

    A successful esthetic would presumably need to prize precision, hand-made arts & crafts that robotics cannot perform (given its limitations like high overhead), personal authenticity with heavy biographical input, high levels of up-to-the-second social commentary & identity politics where the details are both difficult for a stale AI model to make and also are more about being performative acts than esthetic acts (in the Austinian sense), live improvisation (jazz might make a comeback), and interactive elements which may be AI-powered but are difficult to engineer & require skilled human labor to develop into a seamless whole (akin to a Dungeon Master in D&D).


“Men, too, secrete the inhuman. At certain moments of lucidity, the mechanical aspect of their gestures, their meaningless pantomime makes silly everything that surrounds them. A man is talking on the telephone behind a glass partition; you cannot hear him, but you see his incomprehensible dumbshow: you wonder why he is alive.”

I walk into the gym—4 young men are there on benches and machines, hunched motionless over their phones.

I do 3 exercises before any of them stir. I think one is on his phone the entire time I am there.

I wonder why they are there. I wonder why they are alive.


If a large living organism like a human were sealed inside an extremely well-insulated container, the interior temperature of this container would vary paradoxically over time as compared to the average outside/room temperature of its surroundings: it would first go up, as the human shed waste heat, until the point where the human died of heat stroke. Then the temperature would slowly go down as heat leaked to the outside. But then it would gradually rise as all the resident bacteria in or on the body began scavenging & eating it, reproducing exponentially in a race to consume as much as possible. Depending on the amount of heat released there, the temperature would rise to a new equilibrium where the high temperature damages bacteria enough to slow down the scavenging. This would last, with occasional dips, for a while, until the body is largely consumed and the bacteria die off. (Things like bone may require specialist scavengers, not present in the container.) Then the temperature would gradually drop for the final time until it reaches equilibrium with the outside.

Trajectoid Words

Zach Weinersmith’s 2024-12-04 Saturday Morning Breakfast Cereal webcomic discusses the geometric concept of a “trajectoid” (Sobolev et al 2023): a 3D blob which, when rolled, wiggles around to trace out an arbitrary pattern. But that requires physically unrealistic properties, and a realistic trajectoid - one you could actually use to draw lines with ink on paper, say - rules out some paths. Weinersmith argues that:

This suggests that (1) you can make an object that traces out words of your choice, but (2) you can only use non-looping letters: ‘I’, ‘J’, ‘L’, ‘M’, ‘N’, ‘S’, ‘U’, ‘V’, ‘W’, ‘Z’.

The most interesting words I could come up with were “minimum”, “illusion” [sic] and “wuss”. Note, that if you allow loops, you open up a world of “ass”.

This word list seems too short to me (and erroneous: ‘illusion’ has an ‘o’ in it, which is a loop). Surely there are many more words than that?

Usually, constrained writing allows for all sorts of things if you try hard enough: Oulipo is practically dedicated to this proposition, and while this is much more constrained than, say, lipograms, it is still a lot of letters to work with.

Did Weinersmith try to come up with these words by hand, when it would be so trivial to search a dictionary? That might explain his paucity of hits. Easy enough to fix!

A quick check of my OS dictionary with the most obvious possible regexp to find 2-letter or more combinations (as we can easily see that only ‘I’ or possibly ‘U’ are valid single-letter words here):

# sort stdin lines by length, shortest first:
length () { awk '{ print length, $0 }' | \
       sort --general-numeric-sort | \
       awk '{$1=""; print $0}' | sed -e 's/^ //'; }

# keep only words of ≥2 loopless letters, sort by length, and reverse so the longest come first:
grep -E -e '^[iIjJlLmMnNsSuUvVwWzZ][iIjJlLmMnNsSuUvVwWzZ]+$' \
  /usr/share/dict/words | length \
  | tac > ~/trajectoid-words.txt

Partial results (full list of n = 179 does include “wuss”):

  • minimums

  • Vilnius

  • muumuus

  • Muslims

  • Mullins (many)

  • minimum

  • insulin

  • Willis

  • swills

  • muumuu

  • muslin

  • Muslim

  • minims (many, most familiar as a unit)

  • Julius

  • jinnis (plural of jinni, but could also be Jinnis)

  • Zulus

  • wills

  • Swiss

  • swims

  • swill

  • Sunni

  • slums

  • slims

  • sinus

I would have to say that there are many interesting words there, and you could probably write whole loopless sentences or paragraphs with some care. We could also permit ‘loopless’ numbers, since we are presumably permitting non-letters like punctuation as well, and that is helpful—it gets us ‘1’, ‘2’ (substituting for ‘to’ & ‘too’), ‘3’, ‘5’, ‘7’—at least excluding variant glyphs like the open ‘4’. (“Wiz Linus swims in muslin muumuus, wins sum 2 nil. Linus wins in sun in Jul. Mimi swims in slim mini. Mimi is ill in sun. Jill will miss Miss Mimi, ill in ISIS Muslim inn. Julius swims, is ill 2, swills Swiss insulin.”) One could also use a LLM to do constrained sampling of only loopless letters/words, which is how people generate lipograms or Biblical-words-only or fixed-width text with LLMs.
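
For example, a quick filter (a sketch: the letter & digit whitelist simply transcribes the lists above, and spaces & basic punctuation are assumed to be allowed) can check whether a candidate sentence stays loopless:

import re

# Loopless letters per Weinersmith's list, plus the loopless digits discussed above.
LOOPLESS = re.compile(r"^[ijlmnsuvwz12357 .,!?'’]*$", re.IGNORECASE)

def is_loopless(text: str) -> bool:
    """True if the text uses only loopless letters, digits, spaces, and basic punctuation."""
    return bool(LOOPLESS.match(text))

print(is_loopless("Wiz Linus swims in muslin muumuus, wins sum 2 nil."))  # True
print(is_loopless("minimum illusion"))                                    # False: 'o' loops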

A larger dictionary, and a more careful selection of letters, would doubtless yield many more.

Non-Existence Is Bad

In 2011, the science-fiction writer Frederik Pohl wrote a blog post declining an offer of free cryonics vitrification from Mike Darwin.

His rationale was to quote from John Dryden’s free rhyming translation of the Epicurean philosopher Lucretius’s On The Nature of Things, “The Latter Part of the Third Book of Lucretius; against the Fear of Death”:

So, when our mortal forms shall be disjoin’d.
The lifeless lump uncoupled from the mind,
From sense of grief and pain we shall be free,
We shall not feel, because we shall not be.
Though earth in seas, and seas in heaven were lost
We should not move, we should only be toss’d.
Nay, e’en suppose when we have suffer’d fate
The soul should feel in her divided state,
What’s that to us? For we are only we
While souls and bodies in one frame agree.

Nay, though our atoms should revolve by chance,
And matter leap into the former dance,
Though time our life and motion should restore.
And make our bodies what they were before,
What gain to us would all this bustle bring?
The new-made man would be another thing.

Pohl died on 2 September 2013, age 93, almost exactly 2 years after publishing his post. He was not cryopreserved, and his death was final.1

Lucretius further notes as a reason to not fear death or regard it as an evil that

Why are we then so fond of mortal Life,
Beset with dangers, and maintain’d with strife?
A Life, which all our care can never save;
One Fate attends us; and one common Grave.
…Nor, by the longest life we can attain,
One moment from the length of death we gain;
For all behind belongs to his Eternal reign.
When once the Fates have cut the mortal Thred,
The Man as much to all intents is dead,
Who dies to day, and will as long be so,
As he who dy’d a thousand years ago.

Lucretius is gesturing towards the “symmetry argument” (also made by others): if we do not fear or regret the many years before our birth where we did not exist, why should we fear or regret the similar years (perhaps fewer!) after our death?

Look back at time…before our birth. In this way Nature holds before our eyes the mirror of our future after death. Is this so grim, so gloomy?

If non-existence is bad, surely the non-existence before our birth was bad as well? And yet, we fear and try to avoid non-existent years which come after our birth, and do not think at all about the ones before. Thus, since it seems absurd to be upset about the former, we must become consistent by doing our best to cease to be upset about the latter.

This question is also explicitly asked by Thomas Nagel in “Death” (1979):

The third type of difficulty concerns the asymmetry, mentioned above, between our attitudes to posthumous and prenatal nonexistence. How can the former be bad if the latter is not?

I take as the response to the Lucretius symmetry argument what the SEP calls the “comparativist” position: non-existence is bad, and those billions of years beforehand are a loss, but they are not an avoidable loss. Because they were inevitable, and required for us to exist at all, we cannot be upset about them. As we understand cosmology and physics and biology at present, it would be difficult to impossible for us to have existed close to the beginning of the universe: the universe needs to cool down from the Big Bang to allow atoms to exist at all, stars must go through many generations to accumulate enough heavy elements to form metallic planets like the Earth, life must somehow evolve from nothing and then pass through many stages of development to go from the first proto-life in mud or sea vents or something to multi-cellular life to brains to intelligence to human technological civilization… Are we “early” or are we “late”? This is a topic of extreme scientific uncertainty, so we can’t even get upset by the prospect that we “missed” a few billion years, resulting in astronomical waste.

Perhaps we should be upset that humanity took so long to show up, and that so many stars have gone to waste and so much of the universe has receded behind the Hubble horizon—but until we have more scientific knowledge, it is difficult to get genuinely upset at such abstract loss, and our absence of emotion is not a good guide.2 As Philip Larkin put it (“The Old Fools”):

At death, you break up: the bits that were you
Start speeding away from each other for ever
With no one to see. It’s only oblivion, true:
We had it before, but then it was going to end,
And was all the time merging with a unique endeavour
To bring to bloom the million-petaled flower
Of being here. Next time you can’t pretend
There’ll be anything else.

Further, even if the physical brute facts were such that we could have come into existence earlier, personal identity does not permit it. What does a hypothetical claim like “if I were born before the Civil War, I would have been an abolitionist” mean, if you do not believe in some sort of soul or karma?

It cannot mean you with all your current beliefs about slavery, based on ideas and concepts and words and events that hadn’t happened yet. Nor can it simply mean another human being with the exact same DNA as you—unless you bite the bullet of claiming that identical twins and clones are literally the same exact person; and in any case that is impossible, because you carry a large number of de novo mutations no one back then had, your exact DNA could not have been created by any recombination of then-existing chromosomes, and your genome could only have come into existence back then by some astronomically-unlikely collection of spontaneous mutations (a “Boltzmann genome”, if you will). Nor can it mean someone with the exact same personality & body etc., because that too is astronomically unlikely given all of the random stochastic noise that molds us starting from conception (again, barring a “Boltzmann body”). Even the very atoms of your body cannot be replicated in the past, because of isotopic changes in atoms like those caused by nuclear bomb testing.

Given any notion of personal identity grounded in our traits, preferences, personality, memories and other properties, I’m unable to see any genuinely coherent way to talk about “if I had been born at some other time & place”. All that can amount to is some psychological or sociological speculation about what a vaguely statistically-similar person might do in certain circumstances: they may look like you and have the same color hair and react in similar ways as you would if you stepped into a time machine, but they are not you, and never could have been. Thus, it is impossible for you or I to have been “born earlier”. That person would be someone else (and nor could that person have been born “earlier” or “later”). Nagel:

But we cannot say that the time prior to a man’s birth is time in which he would have lived had he been born not then but earlier. For aside from the brief margin permitted by premature labor, he could not have been born earlier: anyone born substantially earlier than he would have been someone else. Therefore, the time prior to his birth is not time in which his subsequent birth prevents him from living. His birth, when it occurs, does not entail the loss to him of any life whatever.

So, our birth is fixed and the missing years before inevitable. However, the years after our death are evitable. We could have lived longer than we did. There is no known upper limit to the human lifespan at present, and it is easy to imagine histories where greater progress was made in techniques of life-extension or preservation. (Histories where the Industrial Revolution happened a small percentage faster, and we would now be in the equivalent of 2100 AD, or where cryopreservation had been perfected decades ago rather than remaining an ultra-obscure niche, or alternatives like chemical fixation of brains had been developed to provable efficacy.)

As Lucretius admits, he isn’t too upset about the prior years because he views the human lifespan as hopelessly immutable and fixed. (As Lucretius is philosophically committed, by his cyclical model of history, to the impossibility of genuine progress which might increase the human lifespan, this is a consideration he does not and cannot take into account.) You get 100 years or so at most, and it doesn’t make a difference where in history you put them. The closer you get to that limit, the less you have to lose. We don’t mourn the 120-year-old woman the way we mourn the child.

I agree with Nagel:

This approach also provides a solution to the problem of temporal asymmetry, pointed out by Lucretius. He observed that no one finds it disturbing to contemplate the eternity preceding his own birth, and he took this to show that it must be irrational to fear death, since death is simply the mirror image of the prior abyss. That is not true, however, and the difference between the two explains why it is reasonable to regard them differently. It is true that both the time before a man’s birth and the time after his death are times when he does not exist. But the time after his death is time of which his death deprives him. It is time in which, had he not died then, he would be alive. Therefore any death entails the loss of some life that its victim would have led had he not died at that or any earlier point. We know perfectly well what it would be for him to have had it instead of losing it, and there is no difficulty in identifying the loser.

When my cat abruptly died of lung cancer at age 11, I was saddened in part because that was several years short of his life expectancy: I’ve known cats which lived into their 20s, and he was in good health just months before I had to euthanize him. It’s easy to imagine the counterfactual where his lifespan was double what it was, and I could have looked forward to him greeting me when I returned from my walks for years to come; I have no difficulty in identifying the loser or what experiences we lost. Indeed, I can even imagine him living well past his 20s and breaking prior cat longevity records—if he lived long enough to benefit from advances in veterinary medicine, a pet version of longevity escape velocity. And this possibility of progress helps break the symmetry: there is in fact a good reason to prefer to be born as late as possible—because it is clearly possible that if you are born after certain dates, you may enjoy a vastly greater, perhaps indefinitely greater, lifespan. That date was not 1900 AD; but it might be 2000 AD; and it might be even more likely to be 2024 AD, and so on.

All of this is contingent and empirical. If we lived in an entirely different kind of universe, then we might be upset at delays. If we lived in some sort of Christian-style universe where God made the world on the first day and we could be immortal if he permitted it, then we might be upset that we were not created immortal on the first day, and so have been deprived of so much existence. (Imagine that God created a human on the first day, and put them to sleep immediately, only to be woken by ‘the last trump’, and, without ever getting to live a life in the mundane world, forced to proceed straight to the New Jerusalem to sing hosannas for all eternity. Wouldn’t they be correct to be angry with God and demand a reason? Indeed, consider the reduction: if it was not bad for that one human to sleep away the mortal universe, then suppose God created the universe and all its people, put them all to sleep, and woke them up collectively at the last second?)

So, nonexistence is bad. But whether we should regret badness depends on whether it is possible to change it. In the case of nonexistence before birth, it is impossible to change it. In the case of nonexistence after death, it is possible to change it. So we should regret all nonexistence after death, and be upset by it, but not by the (only seemingly) symmetrical nonexistence before birth.

Celebrity Masquerade Game

Proposal for a costume party game which involves social deduction of a well-known guest.

Several players start off masked, and the rest must attempt to guess which is actually the guest; each time they guess, right or wrong, they become a masked player too. Players compete to correctly guess who the well-known guest is, and then to fool as many of the remaining non-masked players as possible.

When all players are masked, the game ends, and the well-known guest is revealed. Prizes are awarded for the best players at both sides of the game.

(for n players; 1 Celebrity, 1 Host, and n − 2 Guests; probably best for 10–50 players)

The Celebrity Masquerade Game is a social deduction costume-party game in the spirit of a masquerade ball, involving guessing the identity of a special3 masked guest (‘Celebrity’) over the course of a party in which the Guests slowly turn into copies of that special masked guest, each trying to fool the remaining Guests into thinking they are the real special masked guest.

It is not announced beforehand, and part of the fun is the Guests realizing that there is a game happening at all, piggybacking on the normal party dynamics of trying to figure out who is the special guest, and noticing that there are ‘too many’ masked guests. The mechanism & rules are kept as simple & intuitive as possible to avoid the need for any explicit announcements or instruction which would break the illusion of an ordinary party with nothing strange happening.

Requirements

  • n black costume cloaks suitable for a masquerade.

  • n masks (eg. V for Vendetta-style Guy Fawkes masks, customized with a logo); they should all be either identical or unique, and should be able to fit over pre-existing costumes.

  • n × ≥3 tokens: each token is unique to one (cloak, mask) costume set.

    A token contains the text “KEEP THIS. GO NOW TO [Location of costume swap, eg. ‘PLAYGROUND’] ALONE. YOU ARE CELEBRITY TOO.” The text should not be visible until the token is given by a Celebrity to a Guest, to avoid spoilers.

    Tokens should be visibly unique to a set, so the true Celebrity’s tokens can be easily distinguished at the end of the game from false Celebrities; and there should be several tokens per Celebrity to avoid running out and permit false Celebrities to compete over how many Guests they can fool, but not too many tokens, so everyone gets a chance.4

    One possible implementation would be cut-out strips of colored construction paper, written on one side, folded in half to cover up the writing-side, and then (weakly) taped to the heart area of the black cloak. Then everyone can see the color from a distance to distinguish between all the Celebrities, a token can be instantly ripped off and handed to a non-Celebrity, and they are cheap & easy to make.

  • a Location nearby but not directly visible from the main party rooms, ideally an adjacent room.

    This Location can be small but needs to hold the previous items, and at least 2 people; it may be helpful to post a sign like “INVITE ONLY” or otherwise discourage Guests from poking in. The Location can also have a sign with instructions, in which case the token text can be simplified to make them less labor-intensive.

Preparation

Before the party, the requirements are set aside in the Location. For example, the masks are in a pile on a table, and the cloaks are piled folded next to them with a sign ‘TAKE A PAIR’.

At the start of the party, the Celebrity goes to the Location. The Host welcomes Guests and mingles as normal, and at some point returns to the Location.

The Celebrity & Host put on the first two costumes, and the Host goes back to the party.

Playing

When the Host stealthily returns to the party as the first false Celebrity, they remain anonymous and coyly refuse to confirm their identity (“I don’t know, do you think I’m Celebrity?”), challenging Guests to—somehow—figure out if they are Celebrity or not, but otherwise pretend to be Celebrity to the best of their abilities, and mingle normally with Guests.

Beginning

When the first Guest verbally guesses to the Host “you are Celebrity” (or some other assertion or statement to that effect), then the Host hands the Guest 1 token. The first Guest reads the token and follows the instructions to the Location. At the Location, they receive a costume to become the second false Celebrity.5 They then realize that they are to imitate the first false Celebrity: pretend to be Celebrity, wait for someone to guess, and hand them a token, creating a new Celebrity. If necessary, further instructions are given and parts of their original costume left behind. They then return to the party as unobtrusively as possible, and also pretend to be an anonymous Celebrity. Eventually another Guest will guess that they are Celebrity, and be handed a token; and so on.

Middle

Once the flow of new false Celebrities is stable, and is not too slow nor too fast, the real Celebrity can join the party. (They will want to join early enough to not miss too much of the party, but not so early that it might be obvious they are the real Celebrity rather than yet another false one; depending on the total number, somewhere around #3–10 should work.)

As time passes and Guests are replaced by Celebrities, all the remaining Guests will realize that some masquerade game is happening, and that the goal (for Guests) is to guess the real Celebrity from the false Celebrities, and, after turning into a false Celebrity, the goal (for false Celebrities) is to fool the remaining Guests into wasting their one guess on them.

At this point the remaining Guests can either make a serious effort to mingle & listen to conversations & guess; or they can choose to opt out of the first half of the game as a Guest by simply immediately making a random guess to receive their token & costume; and they then can opt out of the second half of the game by not making any serious effort to fool the remaining Guests. (This opt-out helps end the game quickly if people are not having fun, and skip to the party-favors.)

End

The game can run indefinitely, but should probably end after an hour or two. Once there are no remaining Guests, or the party runs out of time, or the Host simply senses the fun has worn off, the Host ends the game by getting everyone’s attention and asking everyone who is not the real Celebrity to take off their masks—if they are wearing one. (Any remaining Guests do nothing.)

This reveals the true Celebrity, and the type of his tokens; those who guessed him and have his token type are the winners. (Optionally: the winners include the false Celebrities with the fewest or perhaps no remaining tokens—because they fooled the most Guests.6)

Prizes

The winners may receive party favors like expensive fruit or noise-makers. Because the game is best played as a one-off novelty, the masks & cloaks also make good souvenir collectibles for everyone to take home if they wish.

Pemmican

Pemmican is an ancient travel food of dried meat pounded with fat and some flavorings like honey or berries. It was a staple of American Indians and Arctic travel. As it has become a bit of a fad among “carnivore diet” enthusiasts, you can buy pemmican commercially, as an ultra-premium protein bar.

One of the odd things about it is that it doesn’t sound very good, but explorers could live off it for months or even years (as opposed to other foods, like K-rations, which soldiers became unable to stomach and would start throwing out). Robert E. Peary wrote in 1917 that:

Too much cannot be said of the importance of pemmican to a polar expedition. It is an absolute sine qua non. Without it a sledge-party cannot compact its supplies within a limit of weight to make a serious polar journey successful…With pemmican, the most serious sledge-journey can be undertaken and carried to a successful issue in the absence of all other foods.

Of all foods that I am acquainted with, pemmican is the only one that, under appropriate conditions, a man can eat twice a day for 365 days in a year and have the last mouthful taste as good as the first. And it is the most satisfying food I know. I recall innumerable marches in bitter temperatures when men and dogs had been worked to the limit and I reached the place for camp feeling as if I could eat my weight of anything. When the pemmican ration was dealt out, and I saw my little half-pound lump, about as large as the bottom third of an ordinary drinking-glass, I have often felt a sullen rage that life should contain such situations. By the time I had finished the last morsel I would not have walked round the completed igloo for anything or everything that the St. Regis, the Blackstone, or the Palace Hotel could have put before me.

Even the Eskimo dogs were at times obliged to yield to the filling qualities of pemmican, and anything that will stay the appetite of a healthy Eskimo dog must possess some body. I recall an instance where my powerful king dog discovered a tin of pemmican that had had a hole punched in it in some way. The maddening smell of the luscious beef fat through the hole spurred him to drive his iron jaws through the tin until he had ripped it like a can-opener and reached the contents. Had the tin contained ordinary meat, the 12 pounds would have been merely an appetizer for him; but when I found him later, he had voluntarily quit, with only a portion of the pemmican eaten. And—though this may not be believed by others who have had experience with Eskimo dogs—he would eat nothing more that day.

I was also interested in it as a travel tool: pemmican is compact, long-term storable (years or decades), low-residue (ie. no need for bathrooms at awkward times), and presumably highly-satiating (due to being all fat+protein) without the jaggedness of the carb-heavy snacks/foods which are the default ‘snack’ everywhere & easiest to purchase in airports etc. So I looked into buying some.


Pemmican is horribly expensive, especially post-COVID19, as it is usually made of beef (where low-end beef is $5/pound in supermarkets), which is dried (half is water, doubling the per-pound cost), and made in small batches by a handful of specialists (so expensive production). The net result is that a single bar for a snack (barely approaching a meal) is easily >$10. (To compare it to another fatty treat, for the price of one pemmican bar, one could buy 2 gallons of ice cream!)

Still, I was curious, so on 2021-02-09, I ordered a box of the brand most commonly mentioned on social media, Carnivore Bar (“Grass-Finished Box”, 12×7, $120.72); and when I ran out on 2023-04-24, a variety of bars from a small competitor, Aupa (variety case, 10×60g bars; $66.66).

The Carnivore Bars look pretty much exactly like what they are: ground-up dried beef mixed with a lot of beef tallow (and salt), with a white sheen from the fat. When I tried the first one, I was unimpressed. It was edible, and quite easy to eat, and did feel satiating—but I was certainly not $10-worth impressed. The next few were similar, although I gradually began noticing that I was enjoying them more and more, and looking forward to the next one. These signs of habituation & addiction were a bit concerning, given how expensive pemmican is! (I can afford vices like good loose-leaf tea, but not spending >$30/day on food, not without making a lot more money than I do as a writer.)

My interpretation is that Robert E. Peary is correct, and the apparent blandness & mediocrity of pemmican initially reflects that pemmican is an acquired taste—acquired because it takes multiple exposures to associate the reinforcement of the fat with perhaps either the taste receptors or the gut response to fat, similar to how rabbit starvation is hard to notice consciously & develops gradually. If I had been eating more, and more regularly, I think I would have felt the pemmican effects much more strongly. (So ironically, I think Eliezer Yudkowsky is right when he says from a basic biological perspective, humans ought to be able to get addicted to ‘bear meat and fat with honey’; he’s just wrong in claiming that humans don’t, due to ignorance of esoteric foodstuffs.)

When I looked into re-ordering, Carnivore Bar had raised their prices to something like $15/bar, and so I tried a smaller competitor, which had much lower prices.

The Aupa bars were… merely OK. The first ‘blueberry’ one was definitely quite different from how I remember the Carnivore Bar: it seemed to have much less tallow/fat (an odd ingredient to skimp on), so it was much more crumbly/gristly (a nuisance on airplanes), and they could’ve benefited from using more salt & better packaging. The second plain one was more satisfyingly like Carnivore Bar, so the batches/types seemed to differ substantially.

The addictive effect was not as strong, perhaps due to the lower fat. I have been gradually using them up during travel, and as of September 2024, I am down to 2. They have been as useful as I hoped: one bar drives away hunger for hours, while using little space/weight. (They are expensive, yes, but meat in airports or cities is even more expensive—even a decent hamburger or sandwich will often cost $15.)


As beef prices have kept going up, I expect I won’t be buying more pemmican; but maybe I’ll look into some cheaper form of protein bar—there ought to be plant or animal sources of protein/fat which won’t cost $15+ per bar or come loaded with so much sugar one might as well buy a candy bar instead.

Rock Paper Scissors

Trying to help my sister at a fighting game in a dream, I explained to her, “like all of them, it’s just rock paper scissors—high attacks, low attacks, and blocks. High beats low, block beats high, low beats block, or something like that. RPS is everywhere8.”

“But why is it everywhere‽” she asked.

Uh… good question. After all, matching pennies works as a game. It has yomi and strategy, and choices are non-transitive (in a trivial sense). You obviously can’t go lower than 2 choices like matching-pennies (1 choice is no choice), but why isn’t 2 enough? Or 4, like “rock-paper-scissors-lizard”? Or 5 (“rock-paper-scissors-lizard-Spock”)?

They have surprisingly complicated Nash equilibria, too, and people tend to be bad at playing them, so the higher-count games are not boring or trivial. They also aren’t that complicated compared to countless games that people play. So why are they so much less popular?

I think it’s because 2 choices force every outcome to be “better vs worse”, so a match can require a lot of games to ever approximate equality (imagine 2 people flipping fair coins: they both have the same expected value, but their win percentages will differ until they reach quite large numbers of coin flips, as the binomial convergence to 50% is slow; see the simulation sketch below). At 3, you have efficient short matches like ‘best of 3’.9 At 4, the outcomes start overlapping and becoming redundant, without speeding up matches.
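
A minimal simulation sketch of that convergence point (my own illustration, not from any source; the function names are arbitrary): with only 2 equally-likely outcomes per game, the observed win percentage wanders away from 50% and shrinks back only on the order of 1/√n, so short 2-choice matches are mostly luck.

```python
# Two evenly-matched players playing a 2-outcome game ('matching pennies' / coin flips):
# how far does the observed win percentage stray from the true 50% after n games?
import random
random.seed(0)

def win_pct(n_games: int) -> float:
    """Fraction of n coin-flip games won by player A (both players exactly 50/50)."""
    return sum(random.random() < 0.5 for _ in range(n_games)) / n_games

for n in (10, 100, 1000, 10_000):
    # average absolute deviation from 50%, over 200 simulated matches of length n
    dev = sum(abs(win_pct(n) - 0.5) for _ in range(200)) / 200
    print(f"{n:>6} games: average |win% - 50%| = {dev:.3f}")
# the deviation shrinks only ~1/sqrt(n): 100x more games buys just 10x less noise
```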

Also, from the perspective of game mechanics, 3 outcomes allow better/worse/same results, and ties are an important mechanic for games.

If you don’t have ties or no-change outcomes, then every episode requires you to either be perfect, or suffer constant attrition. Imagine trying to play Dark Souls or something where every single attack to you does damage or you damage them, with no alternatives like “dodge” or “block”—just hit or be hit. It’d be even harder to play, and if you stack encounters like an ordinary game, it’s basically demanding that you play a perfect move every time, or else you’ll be attrited long before you finish. So that’s why 2 is inadequate, and 3 is good.

But then at ≥4, you’re no longer adding any important mechanics: there is nothing like “tie” which is as generic and widespread in games. (And if you need more outcomes or more complexity, you can simply make sub-games which are themselves RPS: you might have ranged weapons vs melee vs magic, and within each kind of weapons, another RPS.)

Backfire Effects in Operant Conditioning

A certain mother habitually rewards her small son with ice cream after he eats his spinach. What additional information would you need to be able to predict whether the child will:

  1. Come to love or hate spinach,

  2. Love or hate ice cream, or

  3. Love or hate Mother?

Gregory Bateson (Steps to an Ecology of Mind)

A story to think about in terms of both design, and AI:

A relative of mine worked as a special education teaching assistant in elementary schools; some years back, an AIBO-style robot dog company was lobbying their school district to try to buy a dog for each classroom, in particular the special ed classrooms. The dog person comes in and demos it and explains the benefits: it can be the classroom pet, and help socialize the children.10 If they damage the robot dog, it’s just money, and the robot dog will never run out of patience in, as the salesperson demonstrates, being bopped on the head to turn it on and off, made to shake and wag its tail, etc.

The special ed teacher politely watches and, after the salesperson leaves, asks the TAs & others: “what problem did you see, and why will we never have that robot dog in our classrooms?”

“The kids probably won’t be interested in it—it seemed boring and limited.”

“Yes, but what else?” “Uh…”

“What would that robot dog teach our kids?” “Uh…”

“It will teach them that ‘when your friend is annoying you or not doing what you want, to hit them on the head’. And that ‘when you want to do something with your friend, you should hit them on the head’. And that is why we will never use that dog.”

Her report to the district was negative, and the school district did not buy any robot dogs. (Admittedly, probably a foregone conclusion.)

I asked, and she did not know if the robot dog people were ever told.


This is a parable about:

  • design: the head-bopping mechanic is elegant but wrong.

    Hitting the head is intuitive, minimizes controls, semi-realistic, and appropriate for adults; it is inappropriate for small children, particularly children struggling with social interaction & violence. (“Good design is invisible.”)

    Their problem was probably fixable, like using voice activation (which would interoperate well with the speech-generating devices used by non-verbal children)—but they cannot fix it until they know it exists. However, the designers had no way of knowing, because they had not spent time in special education or related fields like animal training, and the teachers didn’t care enough to get it through their thick corporate heads—“they may never tell you it’s broken”.

  • training humans to use AIs:

    like LLMs, the head-bopping encodes a certain attitude towards anything mechanical; this attitude may bleed through towards parasocial or social relationships as well.

  • and training AIs how to use humans: with any sort of feedback loop or data collection, a robot friend may also be learning how to manipulate & ‘optimize’ humans.

    What the AI learns will depend on what sort of reinforcement learning algorithm the system as a whole embodies (see Hadfield-Menell et al 2016; Everitt et al 2019/Everitt et al 2021, & Langlois & Everitt 2021), and what the reward function is. For example, a simple, plausible reward function that a commercial robot system might be designed to maximize would be “total activated time”; this could then lead to manipulative reward-hacking behavior, like trying to keep playing indefinitely even if no human is there, or ‘off switch’ behavior like dodging any hand trying to bop the head. (A toy sketch of this incentive follows the list below.)

    Different RL algorithms would lead to different behaviors: based on Langlois & Everitt 2021, I would expect that:

    • policy gradients (like evolution strategies) & deep SARSA would try to avoid being turned off

    • Q-learning would not care

    • tree searches (like iterative widening or MCTS) would care if it is in their model & reachable within their planning budget, but not otherwise

    • “behavior cloning” imitation-learning (eg. LLMs) would depend on whether reward-hacking was present in the training dataset and/or prompt.
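
To make the “total activated time” incentive concrete, here is a minimal toy sketch (my own illustration, not any algorithm or environment from the cited papers; the names ACTIONS, episode, and p_bop are hypothetical): every second spent switched on yields +1 reward, and the agent can either comply with a head-bop or dodge it, so any return-maximizing policy, whatever the learning algorithm, prefers dodging, because dodging strictly lengthens episodes. Whether a given RL algorithm actually discovers and acts on that incentive is what differs between the cases above.

```python
# Toy sketch: a reward of "total activated time" makes dodging the off-switch optimal.
import random

ACTIONS = ["comply", "dodge"]   # comply = let the head-bop switch the dog off

def episode(policy, p_bop=0.5, max_steps=100):
    """Run one episode; the agent earns +1 reward per timestep it remains switched on."""
    total_reward = 0
    for _ in range(max_steps):
        total_reward += 1                  # one more second of 'activated time'
        if random.random() < p_bop:        # a child reaches for the on/off head button...
            if policy() == "comply":       # complying ends the episode (dog switched off)
                break
    return total_reward

random.seed(0)
n = 10_000
comply = lambda: "comply"
dodge = lambda: "dodge"
print("mean return if it complies:", sum(episode(comply) for _ in range(n)) / n)  # ~2
print("mean return if it dodges:  ", sum(episode(dodge) for _ in range(n)) / n)   # 100
```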

On Being Sick As A Kid

No one tells children this, but one of the best parts of growing up is not the staying up late or the no-longer-forbidden nightly dessert, but simply not being sick all the time. We think of childhood as perhaps the best time of life, but there is one way in which human children are truly miserable, in having an extraordinary level of infectious diseases:

A half century ago everyone expected their children to experience the ravages of measles, mumps, rubella, chicken pox, influenza, and other infections that had evolved into the “childhood” diseases. This was traumatic enough that even as a small child, not knowing anything about the dynamics of disease epidemics, I wondered why it was I had to experience all of these diseases as well as an almost continuous string of less severe “colds” and enteropathies (we called them all stomach flu), when all the while our pets appeared to remain perfectly healthy11…I assert the likely possibility that because of our unique ability to change our ecosystem, for the past few thousand years, we human beings have been the most diseased species on earth.

It is easy to forget (until you become a parent), but children are sick, all the time, particularly in elementary school or lower. It is hard to remember, but attendance records were hard to set because you might be absent at any time—if it wasn’t chicken pox or the winter flu, it was just who-knows-what, caught from your sibling before you (the larger the family, the more likely that someone in it is sick or recently sick or will become sick). In elementary school, no one blinked an eye at a classmate being gone or leaving in the middle of the day. To make any impact against this general miasma of sickness, you had to do something memorable like barf in the classroom, requiring the janitor to come: now there’s “impact” for you! (Whereas in high school, being absent was cause for remark; and if you barfed in class, you’d never live it down for the rest of high school, and if immortalized in the yearbook or social media, perhaps never at all.)

This is hard to remember in part because the experience of being sick is so miserably gray and unmemorable. You lie in bed, or on the couch, and time drags by, in a combination of boredom and pain. (The pain is easier to remember than the boredom, because boredom is almost by definition the absence of anything to remember; but the boredom was doubtless the predominant experience.) How do you remember the long afternoons, wishing for the day to end and the hours to reel by instead of drag, perhaps all alone by yourself in an empty house as everyone is away at work or school or running errands (not that they really want to be around a sick kid to begin with), as you flip through the daytime cable television and realize the age-old truth that there really is nothing on TV? One illness just blends into the next, as you spend hours on the toilet listlessly paging through a comic book, or finally get to sleep only to have to drag yourself to the toilet at 3AM (tailed by a concerned cat). It is fun to have your mom make chicken soup for you, as a literal dish, not a figure of speech, but chicken soup is not really that great: that’s why it’s the traditional food for sick kids like you, so you can keep it down, and you don’t usually eat it otherwise. (Tastier food might be wasted on you anyway, as whatever it is clogs your nose and grays out the taste of things as thoroughly as it grays the days.)

And this all just goes on, every few months, for seemingly forever, just with the frequency imperceptibly spacing out, until by adulthood, one might go years without more than a mild cold or flu. (COVID-19 was memorable for me because, even with the vaccinations, my 2 major cases were the worst illnesses I had experienced in ~20 years, and the experience flashed me back to childhood as I recalled that as bad as this was, it had been worse as a kid—at least I wasn’t barfing all over my bed, multiple times.) Was there ever a moment that as a kid I appreciated I hadn’t been getting sick that much? I don’t think there was: “out of sight, out of mind”. Most adults, I suspect, don’t ever think about it either, until they run into oddities like kindergarten teachers getting sick frequently12, or they have kids themselves, and recall that “kids are walking petri dishes” (which is rather unkind to the petri dishes in question).

But perhaps we should tell young kids this more, that a reason to look forward to growing up, with all its downsides like jobs and taxes, is that for about 30 years, before aging starts catching up, you won’t be sick all the time.

On First Looking Into Tolkien’s Tower

One of the backhanded gifts of the pre-Internet age was encountering things stripped completely of context or even the potential for context. You could, if you made an effort, look things up at your local library, but many things were unavailable there. (TV shows, for example, could not be gotten for love nor money, and any episode you missed became a fabulous creature; many Simpsons episodes were legendary, in the original sense, as it might be a decade before they came around on reruns again, pre-DVD box sets.) Your default was ignorance. When I read The Lion, The Witch, and the Wardrobe, my sole knowledge of “C. S. Lewis” was more or less, “presumably some ancient British man”; I was surprised to learn years later that it was a Christian allegory, by a famous Christian apologist. That is how things were.

So one day, on a ski trip with the Boy Scouts as an 11-year-old or so, we returned to the rental house; changed into blissfully dry, clean clothes, and possibly with a mug of hot chocolate in hand (memories do embellish), I wander around the house, exploring and looking for something to do. The house has a bare handful of books, as a token furnishing effort, I guess, and I pick up a slim dark hardcover volume with no cover. The Two Towers (1965), by a “J. R. R. Tolkien” (who? must be like “C. S. Lewis”, another Inkling—itself a term I would not know for a long time). Post-Peter-Jackson, with medieval-fantasy so pervasive that there are popular franchises which could be defined as parodies of parodies of parodies, it may seem impossible anyone could not know—but it was easy.

I open the book and there is little or no critical apparatus. No long introductions about how this is the second volume of one of the most influential novel trilogies of the 20th century, the single-handed maker of modern fantasy, a series that will still be read centuries from now, loaded down with half a century of debate and interpretation and homage and simplifications.

The text on the page just… begins—with Rangers and “Hobbits” and “Orcs” and elves and dwarves, and curled up on the uncomfortable armchair, within not that many pages, we read:

…Now they laid Boromir in the middle of the boat that was to bear him away. The grey hood and elven-cloak they folded and placed beneath his head. They combed his long dark hair and arrayed it upon his shoulders. The golden belt of Lórien gleamed about his waist. His helm they set beside him, and across his lap they laid the cloven horn and the hilt and shards of his sword; beneath his feet they put the swords of his enemies. Then fastening the prow to the stern of the other boat, they drew him out into the water. They rowed sadly along the shore, and turning into the swift-running channel they passed the green sward of Parth Galen. The steep sides of Tol Brandir were glowing: it was now mid-afternoon. As they went south the fume of Rauros rose and shimmered before them, a haze of gold. The rush and thunder of the falls shook the windless air.

Sorrowfully they cast loose the funeral boat: there Boromir lay, restful, peaceful, gliding upon the bosom of the flowing water. The stream took him while they held their own boat back with their paddles. He floated by them, and slowly his boat departed, waning to a dark spot against the golden light; and then suddenly it vanished. Rauros roared on unchanging. The River had taken Boromir son of Denethor, and he was not seen again in Minas Tirith, standing as he used to stand upon the White Tower in the morning. But in Gondor in after-days it long was said that the elven-boat rode the falls and the foaming pool, and bore him down through Osgiliath, and past the many mouths of Anduin, out into the Great Sea at night under the stars.

For a while the three companions remained silent, gazing after him. Then Aragorn spoke. ‘They will look for him from the White Tower’, he said, ‘but he will not return from mountain or from sea.’ Then slowly he began to sing:

Through Rohan over fen and field where the long grass grows
The West Wind comes walking, and about the walls it goes.
‘What news from the West, O wandering wind, do you bring to me tonight?
Have you seen Boromir the Tall by moon or by starlight?’
‘I saw him ride over seven streams, over waters wide and grey;
I saw him walk in empty lands, until he passed away
Into the shadows of the North. I saw him then no more.
The North Wind may have heard the horn of the son of Denethor.’
‘O Boromir! From the high walls westward I looked afar,
But you came not from the empty lands where no men are.’

Then Legolas sang:

From the mouths of the Sea the South Wind flies, from the sandhills and the stones;
The wailing of the gulls it bears, and at the gate it moans.
‘What news from the South, O sighing wind, do you bring to me at eve?
Where now is Boromir the Fair? He tarries and I grieve.’
‘Ask not of me where he doth dwell—so many bones there lie
On the white shores and the dark shores under the stormy sky;
So many have passed down Anduin to find the flowing Sea.
Ask of the North Wind news of them the North Wind sends to me!’
‘O Boromir! Beyond the gate the seaward road runs south,
But you came not with the wailing gulls from the grey sea’s mouth.’

Then Aragorn sang again:

From the Gate of Kings the North Wind rides, and past the roaring falls;
And clear and cold about the tower its loud horn calls.
‘What news from the North, O mighty wind, do you bring to me today?
What news of Boromir the Bold? For he is long away.’
‘Beneath Amon Hen I heard his cry. There many foes he fought.
His cloven shield, his broken sword, they to the water brought.
His head so proud, his face so fair, his limbs they laid to rest;
And Rauros, golden Rauros-falls, bore him upon its breast.’
‘O Boromir! The Tower of Guard shall ever northward gaze
To Rauros, golden Rauros-falls, until the end of days.’

So they ended. Then they turned their boat and drove it with all the speed they could against the stream back to Parth Galen.

‘You left the East Wind to me’, said Gimli, ‘but I will say naught of it.’

‘That is as it should be’, said Aragorn. ‘In Minas Tirith they endure the East Wind, but they do not ask it for tidings. But now Boromir has taken his road, and we must make haste to choose our own.’

You can do that?, I asked in shock. Yes. Yes, you can.

There was not time to read it all there, nor did I take it with me, but I did take a name: “J. R. R. Tolkien”.

And that is how things were.

Hash Functions

Hashes are an intellectual miracle. Almost useless-seeming, they turn out to do practically everything, from ultra-fast ‘arrays’ to search to file integrity to file deduplication & fast copying to public key cryptography (!).
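
As a minimal sketch of a few of those uses (my own illustration, using nothing beyond Python’s standard hashlib; the helper name digest is arbitrary):

```python
import hashlib

def digest(data: bytes) -> str:
    """One-way 'fingerprint': any change to the input changes the output unpredictably."""
    return hashlib.sha256(data).hexdigest()

# 1. File integrity: store the digest alongside the file; any bit-flip is detectable.
original = b"pemmican recipe v1"
assert digest(original) != digest(b"pemmican recipe v2")

# 2. Deduplication & fast copying: byte-identical blobs share a digest,
#    so store one copy and index everything else by its hash.
store = {}
for blob in (original, original, b"something else"):
    store.setdefault(digest(blob), blob)
print(len(store), "unique blobs stored")            # -> 2

# 3. Ultra-fast 'arrays' (hash tables): a bucket index is just hash(key) mod table-size.
table_size = 8
print(int(digest(b"some key"), 16) % table_size)    # bucket index in [0, 8)
```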

Yet even Knuth can’t find any intellectual forebears pre-1953! It seems to come out of nowhere. (There were, of course, many kinds of indexing schemes, like by first digit or alphabetically, but these are fundamentally different from hashes because they all aim at some sort of lossy compression while avoiding randomness.)

You can reconstruct Luhn’s invention of hash functions for better array search from simple check-digits for ECC and see how he got there, but it gives no idea of the sheer power of one-way functions or eg. building public-key cryptography from sponges.

I remember as a kid running into a cryptographic hash article for the first time, and staring. “Well, that sounds like a uselessly specific thing: it turns a file into a small random string of gibberish? What good is that…?” And increasingly 😬😬😬ing as I followed the logic and began to see how many things could be constructed with hashes.

Icepires

It might seem like vampires have been done in every possible way, from space to urban-punk to zombies & werewolves—whether England or Japan, vampires are everywhere (contemporary vampires are hardly bothered by running water), and many even go out in daylight (they just sparkle). But there’s one environment missing: snow. When was the last time you saw Count Dracula flick some snow off his velvet cape? (I can think only of Let The Right One In; apparently 30 Days of Night exploits the rather obvious device of polar night.)

Logically, Dracula would be familiar with snow. Romania has mild humid continental weather, with some snow, and in Transylvania’s Carpathian Mountains there is a good amount of snowfall—enough to support dozens of ski resorts. Nothing in the Dracula mythos implies that there is no snow or that he would have any problem with it (snow isn’t running water either).

Indeed, snow would be an ally of Dracula: Dracula possesses supernatural power over the elements, and snow is the perfect way to choke off mountain passes leading to his redoubt. Further, what is snow famous for? Being white and cold. It is white because it reflects light, and Dracula being undead & cold, would find snow comfortable. So snow would both protect and camouflage Dracula—enough snow shields him from the sun, and he would not freeze to death nor give away his position by melt or breath. (However, this is a double-edged sword: not being warm-blooded, Dracula can literally freeze solid, just like any dead corpse or hunk of meat. So there is an inherent time-limit before he is disabled and literally frozen in place until a thaw releases him, which may entail a lot of sunlight…)

Thus, we can imagine Dracula happening a little differently had Dracula employed his natural ally, General Winter, against his relentless foes, Doctor Helsing & Mina Harker:

Well aware of his enemy’s resources and the danger of the season, but unable to wait for more favorable conditions, the party has struck with lightning speed, traveling on pan-European railways hot on Dracula’s heels, and arranged for modern transportation technology for the final leg. Dracula summons forth weeks of winter storms, dumping meters of snow on all passes—but to no avail against Dr Helsing’s sleds & land yachts & ice boats (which can travel at up to 100MPH), filled with modern provisions like ‘canned goods’ and ‘pemmican’. Dracula may have the patience of the undead, but the Helsing party has all the powers of living civilization.

They lay siege, depleting Dracula’s allies by shooting down any fleeing bats and trapping the wild wolves. At long last, they assault the fortress from every direction, rushing in on their land yachts over the snow-drifted ramparts, cleansing it room by room of abomination, leaving no hiding place unpurged. Finally, at high noon, Doctor Helsing bursts into Dracula’s throne-room to fix an unholy mistake with a proper stake through the heart: but the coffin is empty! Where is Dracula? Helsing is blinded by a sudden reflection of light from the snow outside, and, reflecting, realizes that there is one last place Dracula can flee without being burnt to a crisp or spotted by the party—into the sea of snow.

How to coax him out? Hunted for months, bereft of his human thralls, slowly freezing solid, Dracula is surely driven almost to madness by hunger—hunger for the blood which is the life—hunger for the hot blood of the nubile Mina Harker, who can be used again as bait for the vampire. But he retains his monstrous strength surpassing that of mortal men… Jonathan Harker looks at Doctor Helsing, and comments:

You’re going to need a bigger boat.

Cleanup: Before Or After?

We usually clean up after ourselves, but sometimes, we are expected to clean before (ie. after others) instead. Why?

Because in those cases, pre-cleanup is the same amount of work, but game-theoretically better whenever a failure of post-cleanup would cause the next person problems.

Usual: cleanup after. The usual social norm around ‘cleanup’ is that cleanup is done after the mess-causing activity, and one cleans up one’s own mess. The post-cleanup rule is widespread, and feels ‘fair’ because it has the useful property that you pay for the mess you make. It is also reasonably self-enforcing, in that you are probably ‘fouling your own nest’ if you break it and wait for someone else to clean it up. (Why would they?)

But sometimes before! In some areas, though, this is reversed and the rule is pre-cleanup:

  • in doing laundry, many households rule that you clean out the dryer’s lint trap before you run your own load (ie. cleaning out the previous user’s laundry lint)

  • bathroom toilets often have a similar rule: if the previous user left no toilet paper, the next person handles the replacement before their own use. (Whether one should put the toilet seat up or down is unclear.)

  • in math lectures, one may be expected to clean the blackboard from the previous lecture, and leave one’s own work for the next lecturer to clean

  • in low-level programming, the operating system can clean (zero out) memory before a program begins, or after a program exits.

    Similarly, in loops, one can do various initializations or cleanups before or after each iteration. More broadly, the same choice applies to any resource handle, like a network connection, file, or database handle. (A code sketch of the two conventions follows this list.)
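
A minimal sketch of the two conventions for the programming case (a purely hypothetical shared buffer in Python; the same point applies to memory pages, files, or any reused resource):

```python
# Post-cleanup trusts the *previous* user to have erased the shared resource;
# pre-cleanup erases it yourself before use, so a crashed or sloppy predecessor
# cannot leak stale (possibly sensitive) data into your run.
buffer = bytearray(16)               # hypothetical scratch buffer reused between 'programs'

def post_cleanup_user(secret: bytes) -> None:
    buffer[:len(secret)] = secret
    # ... do the actual work; if we crash here, the secret is never erased ...
    buffer[:] = bytes(len(buffer))   # post-cleanup: zero on the way out (if we get this far)

def pre_cleanup_user(secret: bytes) -> None:
    buffer[:] = bytes(len(buffer))   # pre-cleanup: zero whatever the previous user left behind
    buffer[:len(secret)] = secret
    # ... do the actual work; even if we crash, the *next* user starts from zeros ...
```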

Similar game-theoretically. Pre-cleanup results in the same amount of work on net (1 mess, 1 clean), and is a dominant or self-enforcing strategy in the sense that if you try to deviate from it you will just make more work for yourself & everyone else’s life easier (because now you do 2 cleanups each time). Like post-cleanup, pre-cleanup in these scenarios is also not vulnerable to free-riding exploitation: a lint trap can be only so hard to clean off, a blackboard can be only so hard to clean off, and one only needs some memory zeroed-out (the memory one is going to use) and not 100% of it.

Limiting risks. If pre-cleanup is an equal amount of work and coordination and equally robust to exploitation as post-cleanup, but feels ‘less fair’, then why bother with the reversal? Because pre-cleanup also manages incentives—when it comes to potentially-safety-critical cleanup of shared resources which have limits to how bad the mess can be. You could do post-cleanup, and simply assume that the user before you cleaned it when they were done. But did they?

Worst-case avoidance. An unclean lint trap is hazardous because it might catch fire. Someone rushing into a toilet, who really needs to go, probably does not want to relieve themselves and only then notice that the toilet paper is out (thanks to someone neglecting their post-cleanup duties); with pre-cleanup, they may mince as they scramble to find the replacement roll, but at least they avoid the shame of searching after a most maculate #2. Math lecturers can more easily plan their day if they assume they will need to arrive a few minutes before class to clean up, and head to their next destination as soon as they are done teaching, instead of relying on the previous lecturer (whoever that was) to not be a slob. And in coding, we know all too well that most programs (not ours, of course) are programmed by reckless sociopaths with depraved indifference to correctness or other programs, and that our own programs will crash or stop for a myriad of reasons, and may never reach a ‘cleanup phase’ and do a clean exit (to the point where some argue you shouldn’t even bother with trying to have any kind of controlled shutdown and just write “crash-only programs”); if the program has to zero-out its own memory, many programmers will never do so, and it may be difficult to do so correctly, with potentially catastrophic problems for computer security like cryptography; much better for the OS to consistently erase memory when a program requests new memory.

Pre-cleanup robust to others’ errors. So, in these cases, pre-cleanup avoids the errors and negligence of others, and that is why it is a better norm than the otherwise-equivalent post-cleanup.

Peak Human Speed

Reviewing available transport technologies, the fastest a pre-modern human could safely move was somewhere in the range >75MPH (cliff diving) to <107MPH (iceboat).

What was the maximum survivable speed any human reached before the modern era (of parachutes, airplanes, rockets, etc)? Would a medieval peasant ever see anything approaching the peak speed of a humble soccer-mom minivan (~100MPH)?

Horses: <44–55MPH. Human sprinters reach a world record speed of 26MPH; thoroughbred horses have a world record of 44MPH, and quarter horses can hit 55MPH.13 No premodern boat, not even oar-powered vessels (10MPH?) or the clipper ships (20MPH?), approaches 44MPH. Planes and gliders did not exist, so those are out. A medieval trebuchet or a large-bore gun could accelerate a human to much higher speeds, but they would not survive it, so that doesn’t count (likewise for ‘standing on a couple barrels of gunpowder or a volcanic explosion or a meteorite impact site’). Is 44MPH the best we can do?

Skis/Skates: <87MPH. Can we do better? What about more gentle forms of gravity power, like skiing? Modern speed skiing has set shocking speed records as high as 158MPH in 2016; it’s unclear how much this relies on special slopes & hyper-modern technology to speed up over more medieval-esque “a stick of wood with beeswax rubbed on one side used on the nearest hill” skiing, but WP notes that in 1898 an American reached 87MPH, which is fairly old and already over 10MPH beyond high diving. Ski history is vague before the 1800s, so it’s hard to be any surer. Maybe. But skiing brings us to another possibility which pushes the speedometer both up and back: what about skiing on ice? Ice-skaters can hit 34MPH, but are limited by the need to power themselves, rather than use gravity or the wind. So what about a ship on ice?

Diving: <120MPH. The trebuchet and parachute examples suggest the main problem is not reaching a high peak speed (plenty of methods work, gravity-based or otherwise), it’s surviving the speed, and particularly the landing. That immediately suggests water as the landing method. Where would we have a large pool of water to land in? At a cliff, one can, with no equipment, do high diving; there are many tall cliffs, so one can likely reach a high speed, with an upper bound at terminal velocity, which for a human is ~120MPH.

Diving: 75MPH. What’s the actual record for cliff diving? Apparently it is ~75MPH: that’s the final velocity I see quoted for Laso Schaller’s 2015 cliff dive record from 193ft. Since this is <120MPH, that suggests that the limit is more the diver’s durability than cliff heights, and indeed, Schaller was injured in his dive, and many other high divers have been injured or died. Inasmuch as things like rubber bands and aerogel were unavailable, it’s hard to see how anyone could arrange better landing than open water. So that sets a new peak human speed: any young idiot could for all human history go jump off a tall cliff and hit ~75MPH, far faster than any ship or horse or sprinter.
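
As a quick sanity check of my own on that figure (a back-of-the-envelope free-fall calculation, ignoring air resistance, which is small over so short a drop):

```python
# Impact speed of a drag-free fall from Schaller's 193ft: v = sqrt(2*g*h).
from math import sqrt

h_m   = 193 * 0.3048           # 193 ft in meters (~58.8 m)
v_ms  = sqrt(2 * 9.81 * h_m)   # impact speed in m/s
v_mph = v_ms * 2.23694         # convert m/s to miles per hour
print(f"{v_ms:.1f} m/s = {v_mph:.0f} MPH")   # ~34.0 m/s = ~76 MPH, consistent with the quoted ~75MPH
```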

Ice boats: 107MPH. Ice is inherently flat and extremely smooth, self-lubricating, and exposed to winter winds. Serious large-scale iceboats date back to at least the 1600s, with the Icicle setting a record of 107MPH in 1885! An iceboat is mechanically simple, and the rigging & sails don’t look much different from the European ships that were sailing the globe, so while the Icicle may have set that record in 1885, I see no reason similarly fast iceboats could not have been built centuries earlier, even earlier than the commercial iceboats of the 1600s, and the speed of iceboats would have been clear to anyone who used them. I think a medieval speed demon could have surpassed 75MPH using an iceboat, but would not have reached 107MPH, as they couldn’t’ve afforded such a grandiose iceboat as the Icicle (or we would likely have records of such an aristocratic folly).

Still < minivans. So I conclude that a medieval peasant would likely never experience going faster than 75–107MPH (and towards the lower end). Remarkable—a single minivan contains more concentrated speed than a medieval peasant would see in a lifetime!

Oldest Food

Oldest possible food: 20mya? Random observation: the best verifiable anecdote I could find about the oldest thing ever eaten, permafrost meat, only takes you back ~0.05 million years. (Things like honey only go back a thousand years or so at best14, or eggs 1,700 years.) But you could go further. The oldest edible meat on Earth can be no older than 20 million years, because that’s the longest continuous glaciation/permafrost (in Antarctica). Any other meat has decayed, fossilized/mineralized15, been compacted beyond edibility, etc.

Ancient seafood? You can go older with pathological examples like ‘drinking glacial core water’ or eating ‘salt’ or ‘calcium carbonate (Tums)’ or iron. But let’s define food as ‘organic’, in being made of non-metals & CHNOPS, which excludes all those; what is the oldest possible edible organic biological food?

I think that maybe there are extremely slow-living abyssal microbes or endoliths, which one could perhaps harvest in sufficient bulk to ‘eat’, but I am unsure if that’s feasible or if they would be considered to count: they pose something of a ‘Ship of Theseus’ problem in that they’ve repaired themselves so many times that you have to ask if they are the ‘original’—if arbitrary amounts of repair don’t matter, then you yourself are flesh that is billions of years old…


Another interesting question might be, ‘what is the rarest simplest food?’

Things like ‘cooked meat’ would not count because animals are cooked in wildfires all the time and predators eat them–some raptors even deliberately spread fires to hunt! Things like baked goods would be highly unnatural, but also require extensive agriculture & processing: there is nothing in the wild like domesticated wheat, much less ground up, fermented into dough, and baked. Vegetables, likewise: highly unnatural, usually extremely selectively bred or hybridized. Products like wine have natural analogues: overripe fruits can ferment and produce alcohol on their own, and inebriated wild animals have been often documented.

So I would suggest cheese: it is made with milk, which is common due to mammals, and requires only a simple acidification or digestive-enzyme step to solidify milk into cheese, but probably exists nowhere in the wild except for possibly occasional instances of a young nursing mammal being killed but not eaten or scavenged before its stomach enzymes can digest its last meal into something vaguely reminiscent of ‘cheese’. Other dairy products are also good candidates here: butter is just milk which has been physically shaken or stirred for long enough that it solidifies, no additional ingredients required; having made some in elementary school by shaking a Tupperware container for a long time, I have a hard time imagining any natural process which might result in the creation of butter.

Zuckerberg Futures

I remember being at our annual All Things Digital conference in Rancho Palos Verdes, California in 2010 and wondering, as sweat poured down Mark Zuckerberg’s pasty and rounded face, if he was going to keel over right there at my feet. “He has panic attacks when he’s doing public speaking”, one Facebook executive had warned me years before. “He could faint.” I suspected that might have been a ploy to get us to be nicer to Zuckerberg. It didn’t work. As Walt and I grilled the slight young man on the main stage, the rivulets of moisture began rolling down his ever-paler face…This early Zuckerberg had yet to become the muscled, MMA-fighting, patriotic-hydrofoiling, bison-killing, performative-tractor-riding, calf-feeding man that he would develop into over the next decade.

Kara Swisher

Mark Zuckerberg may be seriously underestimated. The level of sheer hatred he evokes blinds people to his potential; he can do something awesome like water ski on a hydrofoil carrying the American flag for the 2021 Fourth of July, and the comments will all be about how much everyone loathes him. (Whereas if it had been Jeff Bezos, post-roids & lifting, people would be squeeing about his Terminator-esque badassery. And did you know Jack Dorsey was literally a fashion model? Might help explain why he gets such a pass on being an absentee CEO busy with Square.) And it’s not Facebook-hatred, it’s Mark-hatred specifically, going back well before the movie The Social Network; in fact, I’d say a lot of the histrionics over and hatred of Facebook, which have turned out to be so bogus, is motivated by Mark-hatred rather than vice-versa. Even classmates at Harvard seemed to loathe him. He just has a “punchable face”; he is short; he somehow looks like a college undergraduate at age 37; he eschews facial hair, emphasizing his paleness & alien-like face, while his haircut appears inspired by the least attractive of Roman-emperor stylings (quite possibly not a coincidence given his sister); he is highly fit, but apparently favors primarily aerobics, where the better you get the sicker & scrawnier you look; and he continues to take PR beatings, like when he donated $75m to a San Francisco hospital & the SF city board took time out in December 2020 from minor city business (stuff like the coronavirus/housing/homeless/economic crisis, nothing important) just to pass a resolution insulting & criticizing him. (This is on top of the Newark debacle.)

Unsurprisingly, this hatred extends to any kind of thinking or reporting. To judge by reporting & commentary going back to the ’00s, no one uses Facebook, it’s too popular, and it will, any day now, become an empty deserted wasteland; puzzling, then, how it kept being successful—guess people are brainwashed, or something. Epistemic standards, when applied to Zuckerberg & Facebook, are dismally low; eg. the Cambridge Analytica myth still continues to circulate, and we live in a world where New York Times reporters can (in articles called “excellent” by tech/AI researchers) forthrightly admit that they have spent years reporting irrelevant statistics about Facebook traffic (to the extent of setting up social media bots to broadcast them daily), statistics that they knew were misinterpreted as proving that Facebook is a giant right-wing echo chamber and that their assertions about the statistics only “may be true”—but all of this false reporting is totally fine and really, Facebook’s fault for not providing the other numbers in the first place!16

This reminds me of Andrew Carnegie, or Bill Gates in the mid-1990s. Who then would have expected him to become a beloved philanthropist? No one. (After all, simply spending money is no guarantee of love. No matter how much money he spends, the only people who will ever love Larry Ellison are yacht designers, tennis players, and the occasional great white shark.) He too was blamed for a myriad of social ills, was physically unimposing (short compared to CEO peers, who more typically are several inches above-average), and was mocked viciously for his nasally voice and smeared glasses and utter lack of fashion and being nerdily neotenous & oh-so-punchable; he even had his own Social Network-esque movie. Gates could do only one of two things according to the press: advance Microsoft monopolies in sinister & illegal ways to parasitize the world & crush all competition, or make noble-seeming gestures which were actually stalking horses for future Microsoft monopolies.

But he too was fiercely competitive, highly intelligent, an omnivorous reader, willing to learn from his mistakes & experiments, known to every manjack worldwide, and, of course, backed by one of the largest fortunes in history. As he grew older, Bill Gates grew into his face, sharpened up his personal style, learned some social graces, lost some of the fire driving the competitiveness, gained distance from Microsoft & Microsoft became less of a lightning rod17 as its products improved18 and began applying his drive & intellect to deploying his fortune well. In 1999, Gates was a zero no matter how many zeros in his net worth; in 2019, he was a hero.

Zuckerberg starts in the same place. And it’s easy to see how he could turn things around. He has all the same traits Gates does, and flashes of Chad Zuck emerge from Virgin Mark already (“We sell ads, senator”).

Past errors like Newark can be turned into learning experiences & experiments; the school of life being the best teacher if one can afford its tuition. Indeed, merely ceasing to self-sabotage and doing all the ‘obvious’ stuff would make a huge difference: drop the aerobics for weightlifting (being swole is unnecessary, just not being sickly is enough); find the local T doctor (he likely already knows, and just refuses to use him); start experimenting with chin fur and other hair styles, and perhaps glasses; step away from Facebook to let the toxicity begin to fade away; stop being a pushover exploited by every 2-bit demagogue and liberal charity, and make clear that there are consequences to crossing him (it is better to be feared than loved at the start, cf. Bezos, Peter Thiel); consider carefully his political allegiances, and whether the Democratic Party will ever be a viable option for him (he will never be woke enough for the woke wing, especially given their ugly streak of anti-Semitism, unless he gives away his entire fortune to a Ford-Foundation-esque slush fund for them, which would be pointless), and how Michael Bloomberg governed NYC; and begin building a braintrust and network to back a political faction of his own.

In 20 years19, perhaps we’ll look back on Zuckerberg with one of those funny before/after pairings, like Elon Musk in the ’90s vs Elon Musk in 2015, or nebbish Bezos in ’90s sweater vs pumped bald Bezos in mirror shades & vest, and be struck that Zuckerberg, so well known for X (curing malaria? negotiating an end to the Second Sino-Russian Border Conflict?) was once a pariah and public enemy #1.

Russia

Who would grasp Russia with the mind?
For her no yardstick was created:
Her soul is of a special kind,
By faith alone appreciated.

Fyodor Tyutchev

We wanted the best, but it turned out like always.

Viktor Chernomyrdin

In its pride of numbers, in its strange pretensions of sanctity, and in the secret readiness to abase itself in suffering, the spirit of Russia is the spirit of cynicism. It informs the declarations of her statesmen, the theories of her revolutionists, and the mystic vaticinations of prophets to the point of making freedom look like a form of debauch, and the Christian virtues themselves appear actually indecent…

Joseph Conrad, Under Western Eyes

Varlam Shalamov once wrote: “I was a participant in the colossal battle, a battle that was lost, for the genuine renewal of humanity.” I reconstruct the history of that battle, its victories and its defeats. The history of how people wanted to build the Heavenly Kingdom on earth. Paradise! The City of the Sun! In the end, all that remained was a sea of blood, millions of ruined human lives. There was a time, however, when no political idea of the 20th century was comparable to communism (or the October Revolution as its symbol), a time when nothing attracted Western intellectuals and people all around the world more powerfully or emotionally. Raymond Aron called the Russian Revolution the “opium of intellectuals.” But the idea of communism is at least two thousand years old. We can find it in Plato’s teachings about an ideal, correct state; in Aristophanes’ dreams about a time when “everything will belong to everyone.” … In Thomas More and Tommaso Campanella … Later in Saint-Simon, Fourier and Robert Owen. There is something in the Russian spirit that compels it to try to turn these dreams into reality.

…I came back from Afghanistan free of all illusions. “Forgive me father”, I said when I saw him. “You raised me to believe in communist ideals, but seeing those young men, recent Soviet schoolboys like the ones you and Mama taught (my parents were village school teachers), kill people they don’t know, on foreign territory, was enough to turn all your words to ash. We are murderers, Papa, do you understand‽” My father cried.

Svetlana Alexievich, 2015 Nobel Prize speech

What is it about the Russian intelligentsia, “the Russian idea”? There’s something about Russian intellectuals I’ve never been quite able to put my finger on, but which makes them unmistakable.

For example, I was reading a weirdo typography manifesto, “Monospace Typography”, which argues that all proportional fonts should be destroyed and we should use monospace for everything for its purity and simplicity; absurdity of it aside, the page at no point mentions Russia or Russian things or Cyrillic letters or even gives an author name or pseudonym, but within a few paragraphs I was gripped by the conviction that a Russian had written it; it couldn’t possibly have been written by any other nationality. After a good 5 minutes of searching, I finally found the name & bio of the author, and yep, from St Petersburg. (Not even as old as he sounds.) Or LoperOS or Urbit, Stalinist Alexandra Elbakyan, or Nikolai Bezroukov’s Softpanorama, or perhaps we should include Ayn Rand—there is a “Russian guy” archetype—perhaps Lenin was a mushroom after all?

Perhaps the paradigmatic example to me is the widely-circulated weird news story about the two Russians who got into a drunken argument over Immanuel Kant and one shot the other (not to be confused with the two Russians in a poetry vs prose argument ending in a fatal stabbing), back in the 2000s or whenever. Can you imagine Englishmen getting into such an argument, over Wittgenstein? No, of course not (“a nation of shopkeepers”). Frenchmen over Sartre or Descartes? Still hard. Germans over Hegel? Not really (although see “academic fencing”). Russians over Hume? Tosh! Over Kant? Yeah sure, makes total sense.

What is it that unites serfs, communism (long predating the Communists), the Skoptsy/Khlysts, Tolstoy, Cosmism, chess, mathematics (but only some mathematics—Kolmogorov’s probability theory, yes, but not statistics and especially not Bayesian or decision-theoretic types despite their extreme economic & military utility20), stabbing someone over Kant, Ithkuil fanatics, SF about civilizations enforcing socialism by blood-sharing or living in glass houses, absurd diktats about proportional fonts being evil, etc? What is this demonic force? There’s certainly no single specific ideology or belief or claim, there’s some more vague but unmistakable attitude or method flavoring it all. The best description I’ve come up with so far is that “a Russian is a disappointed Platonist who wants to punch the world for disagreeing”.

Conscientiousness And Online Education

Moved to “Conscientiousness & Online Education”.

Fiction

American Light Novels’ Absence

I think one of the more interesting trends in anime is the massive number of light novel adaptations done in the ’90s and ’00s; it is interesting because no such trend exists in American media as far as I can tell (the closest I can think of are comic book adaptations, but those are of course analogous to the many manga -> anime adaptations). Now, American media absolutely adapts many novels, but they are all normal Serious Business Novels. We do not seem to even have the light novel medium - young adult novels do not cut the mustard, and light novels are odd in that they are something like speculative-fiction novellas. The success of comic book movies has been much noted - could comic books be the American equivalent of light novels? There are attractive similarities in subject matter and even medium, light novels including a fair number of color manga-style illustrations.

  • Question for self: if America doesn’t have the light novel category, does that imply that the Twilight novels, and everything published under the James Patterson brand, are regular novels?

    Answer: The Twilight novels are no more light novels than the Harry Potter novels were. The Patterson novels may fit, however: they have some of the traits, such as very short chapters, a simple literary style, and very quick-moving plots, even though they lack a few less important traits (such as including illustrations). It might be better to say that there is no recognized and successful light novel genre, rather than no individual light novels - there are only unusual examples like the Patterson novels and other works uncomfortably listed under the Young Adult/Teenager rubric.

Cultural Growth through Diversity

We are doubtless deluding ourselves with a dream when we think that equality and fraternity will some day reign among human beings without compromising their diversity. However, if humanity is not resigned to becoming the sterile consumer of values that it managed to create in the past…capable only of giving birth to bastard works, to gross and puerile inventions, [then] it must learn once again that all true creation implies a certain deafness to the appeal of other values, even going so far as to reject them if not denying them altogether. For one cannot fully enjoy the other, identify with him, and yet at the same time remain different. When integral communication with the other is achieved completely, it sooner or later spells doom for both his and my creativity. The great creative eras were those in which communication had become adequate for mutual stimulation by remote partners, yet was not so frequent or so rapid as to endanger the indispensable obstacles between individuals and groups or to reduce them to the point where overly facile exchanges might equalize and nullify their diversity.

Claude Levi-Strauss, The View from Afar pg 23 (quoted in Clifford Geertz’s “The Uses of Diversity” Tanner Lecture)

Leaving aside the corrosive effects on social solidarity documented by Putnam and Amy Chua’s ‘market-dominant minorities’, I’ve wondered about the artistic consequences of substantial diversity for a country or perhaps civilization. In Charles Murray’s Human Accomplishment, one of the strongest indicators for genius is contact with a foreign culture. This foreign contact can be pretty minimal - Thomas Malthus drew on threadbare descriptions of China’s teeming population, and the French philosophes had little more to go on when drawing inspiration from Confucianism, as did the later rococo and chinoiserie artists; much of American design and art traces back to interpretations of East Asian art based on few works, and the sprawling American cults or New Age movements, and everything that umbrella term influenced post-’60s, were not based on deep scholarship. They did much with little, one might say. This seems fairly true of many fertile periods: the foreigners make up, at most, a few percentage points of the population.

For example, Japanese visual art is pretty mediocre from the 900s to the 1600s - but great between then and the Meiji era. What happened? They shut off access to the outside world, and that’s apparently how we got ukiyo-e. But what did the Meiji era, when the doors were flung open to the accumulated treasures of the Western world, ever produce? And to go the other direction: the Impressionists and other artists of the time received trickles of Asian artwork which apparently inspired them (my own single favorite block-print by Hiroshige also exists - as a Van Gogh painting!), but what has happened since as masses and masses of artwork became available?

However, the modern era is likely the most globalized era ever for cultural products: population movements are vaster than ever, and English-speakers have access to primary sources like they have never had before (compare how much classic Japanese & Chinese literature had been translated and stored in libraries as of 2009 to what was available when Waley began translating Genji Monogatari in 1921!). This would seem to be something of a contradiction: if a little foreign contact was enough to inspire all the foregoing, then why wouldn’t all the Asian immigrants and translations and economic contact with America spark even greater revolutions? There has been influence, absolutely; but the influence is striking for how much a little bit helped (how many haiku did the Imagists have access to?) and how little a lot added - perhaps even less than a little did. There’s no obvious reason that more would not be better, and obvious reasons why it would be (less overhead and isolation for the foreigners; sheer better odds of getting access to the right master or specialist that a promising native artist needs). But nevertheless, I seem to discern a U-shaped curve.

It seems cultures benefit most from cross-pollination when there’s only a little, but globalization is forcing contact way beyond that, if you follow me.

In schools, one sees students move in cliques and especially so with students who share a native language and are non-native English speakers - one can certainly understand why they would do such a thing, or why immigrants would congregate in ghettos or Chinatowns or Koreatowns where they can speak freely and talk of the old country; perhaps this homophily drives the reduced cross-fertilizing by reducing the chances of crossing paths. (If one is the only Yid around, one must interact with many goyim, but not so if there are many others around.) Is this enough? It doesn’t seem like enough to me.

This is a little perplexing. What’s the explanation? Could it be that as populations build up, all the early artists sucked out the novelty available in hybridizing native material with the foreign material? Or is there something stimulating about having only a few examples - does one draw faulty but fruitful inferences based on idiosyncrasies of the small data set? In machine learning, the more data available, the less wild the guesses are; but in art, wildness is a way of jumping out of a local minimum to somewhere new. If Yeats had had available the entire Chinese corpus, would he have produced better new English poems than when he pondered obsessively over a few hundred verses, or would he simply have produced better English pastiches of Chinese poems? Knowledge can be a curse by making it difficult or impossible to think new thoughts and see new angles. Or perhaps the foreign material is important only as a hint to what the artist was already trying to achieve; in psychology, there is an interesting ‘key’ variant on the cocktail party effect, where one hears only static noise in a recording, is given a hint at the sentence spoken in it, and then can suddenly hear it through the noise. Perhaps the original aims are entirely unimportant, and what is at play is a sort of pareidolia or apophenia (akin to electronic voice phenomenon/sine-wave speech, to continue the sound metaphor). Richard Hamming:

[“You and Your Research” excerpts on reading yourself uncreative]


Ross Douthat

My own favored explanation, in The Decadent Society, is adapted from Robert Nisbet’s arguments about how cultural golden ages hold traditional and novel forces in creative tension: The problem, as I see it, is that this tension snapped during the revolutions of the 1960s, when the Baby Boomers (and the pre-Boomer innovators they followed) were too culturally triumphant and their elders put up too little resistance, such that the fruitful tension between innovation and tradition gave way to confusion, mediocrity, sterility.

I may be over-influenced here by the Catholic experience, where I think the story definitely applies. As R. R. Reno argued in a 2007 survey of the so-called “heroic generation” in Catholic theology, the great theologians of the Vatican II era21 displayed their brilliance in their critique of the old Thomism, but then the old system precipitously collapsed, and subsequent generations lacked the grounding required to be genuinely creative in their turn, or eventually even to understand what made the 1960s generation so important in the first place:

…a student today will have a difficult time seeing the importance of their ideas, because the grand exploratory theologies of the Heroic Generation require fluency in neoscholasticism to see and absorb their importance. Or the theories introduce so many new concepts and advance so many novel formulations that, to come alive for students, they require the formation of an almost hermetic school of followers…

In these and many other ways, the Heroic Generation’s zest for creative, exploratory theology led them to neglect—even dismiss—the need for a standard theology. They ignored the sort of theology that…provides a functional, communally accepted and widely taught system for understanding and absorbing new insights.

I think this frame applies more widely, to various intellectual worlds beyond theology, where certain forms of creative deconstruction went so far as to make it difficult to find one’s way back to the foundations required for new forms of creativity. Certainly that seems the point of a figure like Jordan Peterson: It’s not as a systematizer or the prophet of a new philosophy that he’s earned his fame, but as a popularizer of old ideas, telling and explicating stories (the Bible! Shakespeare!) and drawing moral lessons from the before-times that would have been foundational to educated people not so long ago. Likewise with the Catholic post-liberals, or the Marx reclamation project on the left. It’s a reaching-backward to the world before the 1960s revolution, a recovery that isn’t on its own sufficient to make the escape from repetition but might be the necessary first step.

Which brings us back to the question of traditionalism and dynamism, and their potential interaction: If you’ve had a cultural revolution that cleared too much ground, razed too many bastions and led to a kind of cultural debasement and forgetting, you probably need to go backward, or at least turn that way for recollection, before you can hope to go forward once again.

In this version, creative stagnation comes from ‘eating the seedcorn’. But instead of being cross-national, it’s cross-temporal/generational: a younger generation builds itself up by tearing down its elders, but then no edifice is left for the next generations.

This seems particularly clear in music, with the disappearance of jazz from American culture; in historical literature like skaldic poetry; and in contemporary literature. Take the quip about free verse being like ‘playing tennis with the net down’. If you took the Williams sisters or Nadal or another tennis great, and asked them to play a beautiful game of tennis with the net down and simply pretend the net was up, I think they would rise to the challenge and play an excellent game! They have too much taste and integrity, and are too molded by a life spent with the net, to make a bad move; perhaps one will spot the other not quite clearing the invisible net, but the return stroke is too cool to object to. They know the rules so well and are so skilled that they know when to break them, and cooperate - as in D&D or other games, the final and ultimate rule is that the only rules are whatever makes a great game.

But what happens with their successors in the game of netless tennis? Having been raised without a net, where do they get their taste from, and how do they learn when to break the rules? Instead, they’ll imitate without true understanding, and will not create any new beautiful net-less serves; netless tennis will have begun to degenerate, and players will gradually compete in dishonesty to claim they won. ‘That went low!’ ‘No, it didn’t.’ ‘What? It totally did.’ ‘I disagree. And that means I won. Loser.’ Without any rigid framework or grounding in uncheatable reality, it involutes and degenerates, becomes l’art pour l’art, and is corrupted by parasites, cliques, and fads.

T. S. Eliot, William Carlos Williams, etc, were all raised inside the Western tradition and knew it intimately, even if they did not write like it. They had the ear for which free verse works, somehow, according to the ineffable internal workings of English. Williams can write “This Is Just To Say”; his imitators 50 years later… do not, and are just writing prose with random linebreaks. And so while the first generation of free verse poets gave us giants like Eliot, whose verse works even when it has thrown away much of the Western tradition of meter and rhyme, and who were read and admired by the world, within a generation or two it has become what you see in Poetry magazine today, read by no one except the author and a few other professional poet-academics.

TV & the Matrix

One of the most common geek criticisms of The Matrix is that the supposed value of the humans to the machine overlords is as an energy source; but by any comparison to alternatives like burning coal, solar power, or fusion plants, human flesh is a terrible way of generating electricity, and feeding dead humans to other humans makes no sense. An example:

NEO: “I’ve kept quiet for as long as I could, but I feel a certain need to speak up at this point. The human body is the most inefficient source of energy you could possibly imagine. The efficiency of a power plant at converting thermal energy into electricity decreases as you run the turbines at lower temperatures. If you had any sort of food humans could eat, it would be more efficient to burn it in a furnace than feed it to humans. And now you’re telling me that their food is the bodies of the dead, fed to the living? Haven’t you ever heard of the laws of thermodynamics?”

There’s a quick way to rescue the Matrix-verse from this objection: that was simply a dumbing-down for the general movie audience. To take an existing SF trope (eg. from Dan Simmons’s Hyperion Cantos), the real purpose of humans is to reuse their brains as a very energy-efficient (estimates of the FLOPS of a human brain, set against its known energy consumption of only tens of watts, indicate orders of magnitude more efficiency than the best current hardware), highly-parallel supercomputer, which would justify the burden of running a Matrix. From the Matrix short story “Goliath”:

“…we were really just hanging there, plugged and wired, central processing units or just cheap memory chips for some computer the size of the world, being fed a consensual hallucination to keep us happy, to allow us to communicate and dream using the tiny fraction of our brains that they weren’t using to crunch numbers and store information.”

But this raises additional questions:

  1. the AIs won the war with the humans in this version too, so why do they need any human computing horsepower?

    Perhaps the AIs collectively are superior to humans in only a few domains, but those domains conferred a military advantage, and that is why they won.

    Or more narrowly, perhaps the AIs are collectively superior in general, but there are still a few domains where they have not reverse-engineered or improved on human performance, and those are what the human brains are good for.

    More intriguingly, it’s well-known in machine learning & statistics that something like Condorcet’s jury theorem holds for prediction tasks: a collection or ensemble of poor, error-prone algorithms can be combined into a much better predictor as long as their errors are not identical, and a new, different algorithm can improve the ensemble’s performance even if it’s worse than every other algorithm already in the ensemble (see the simulation sketch at the end of this section).

    So the humans could, individually or collectively, be useful even if humans are always inferior to other AIs!

  2. how do you make use of intact human brains? With existing machine learning/AI approaches to neural networks, each neural network is trained from scratch for a specific task; it’s not part of a whole personality or mind on its own. What do you do with an entire brain with a personality and memories, busy with its own simulated life? If the AIs want the humans for image-recognition tasks (very handy for robots), how do they extract this image recognition data in a useful manner from people who are spending 24 hours a day in a computer simulation?

    The most obvious way is to hire researchers normally: run a shadowy “hedge fund” or “defense agency” and assign real problems from outside the Matrix; many techniques are generalizable, and the circumstances guarantee that no-one will be able to figure out what the R&D is for. This will show up as countries spending ridiculously large amounts of money on their financial sectors despite the minimal economic gain associated with them, and ridiculously large amounts of money on “national defense” (one can perhaps use up as much as a twentieth of GDP without risking verisimilitude: one must keep defense budgets plausible when trying to extract useful work from large industrialized countries with weak neighbors, sea borders, and no realistic threats to their national security, but if all else fails, wars can always be ginned up and subjects’ fear centers stimulated en masse). That approach has limits though, as world-class mathematicians etc are intrinsically rare; so how do you get anything useful out of the remaining 98% of the human populace, since you can hardly hire them to labor outside of the Matrix? Especially the many less intelligent members of the population, necessary for verisimilitude but difficult to employ usefully.

    Insert the tasks into the simulated environment in a naturalistic way, of course. You have an image which might be a bat? Insert it and see if people think “a bat!” You need to recognize street numbers? Hijack someone walking down a “street”, replace the real house number with the unrecognized image, and see what they think. Ditto for facial recognition.

    This works because it may be easier to detect a human brain thinking “bat” than it is to recognize a bat; the human may say “bat” (very easy), subvocalize the word “bat” (fairly easy), or merely think “bat” (not so easy, but near or at the 2014 fMRI state-of-the-art). You could make it even easier by feeding your human brains a test set or library of known images and figuring out the common brain signature which corresponds to “bat”; then one can easily detect that brain signature on subsequent unknown images, thereby classifying the unknown images - very similar to existing machine vision practices.

    Of course, to do that on all topics of interest and not just bats, you would have to feed human brains a great deal of imagery which could make no sense as part of their ordinary daily life.

    Ideally, they would be raptly focused on a rapidly changing sequence of images, and on as many as you can feed them - the equivalent of a full-time job, perhaps 5+ hours a day or 24-33 hours a week. You would want this to have a clear intelligence or SES gradient, where the poor or less intelligent spend the most time watching TV (even though this is severely self-sabotaging and irrational behavior, as they are least able to afford such a useless leisure activity), to minimize the loss of any superstars to TV watching and extract as much useful pattern-recognition work as possible. You’d want to start programming human brains as early as possible in life, perhaps around 2 years of age, so as to minimize how much food & energy they use before they can start computationally-useful tasks. And given how strange and alien this all sounds compared to any normal healthy human lifestyle, you would need to make the test-set uploading as addictive as possible to ensure all this - it’d be no good if a lot of humans opted out & wasted your investment.

In other words, television is how the Matrix operators exploit us.
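The ensemble point in #1 is easy to check by simulation; a minimal sketch in R (all numbers are arbitrary: hypothetical weak predictors with independent errors, combined by a simple majority vote):

## Condorcet's jury theorem / ensemble sketch: many weak, error-prone predictors
## with independent errors majority-vote their way to much higher accuracy.
set.seed(2016)
n     <- 10000                 # number of binary prediction tasks
truth <- rbinom(n, 1, 0.5)     # ground truth for each task
k     <- 25                    # number of weak predictors in the ensemble
p     <- 0.60                  # each predictor alone is right only 60% of the time
## each column: one predictor's guesses (agrees with the truth with probability p)
votes    <- sapply(1:k, function(i) ifelse(rbinom(n, 1, p) == 1, truth, 1 - truth))
majority <- ifelse(rowMeans(votes) > 0.5, 1, 0)
mean(votes[,1] == truth)       # a single weak predictor: ~0.60
mean(majority == truth)        # the majority vote:       ~0.85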

Cherchez Le Chien: Dogs As Class Markers in Anime

In the anime Azumanga Daioh, a key bit of characterization for schoolgirl Chiyo Mihama comes when her friends visit her house and are awestruck that it is enormous, has ample yardage & greenery. The final proof of her family’s wealth is when out of the mansion comes bounding an enormous friendly white Great Pyrenees dog named Mr Tadakichi. Mr Tadakichi emphasizes the space available to the Mihama family (someone living in a 6-tatami apartment does not have room for a large dog nor permission from their landlord) and their ability to care for a foreign breed of dog (it eats a lot and must be regularly walked). Further, the dog breed is French, and France has strong connotations of wealth & elegance (see also Paris syndrome).

Then I began to notice that in anime, cats are far more common than dogs, and that in anime/manga set in contemporary Japan, dogs seem to be associated almost exclusively with either rural settings or with characters implied to be middle- to upper-class.

It seems to me that the use of Mr Tadakichi was not accidental: no other character in AD has a dog, and this makes sense when one considers the space & permission issues - Chiyo is the only rich character, and so of course she’s the one who has a family dog, who has a summer house by the ocean, who is going to study overseas rather than endure the hell of college entrance exams, etc. The other parts are more obvious than dog-ownership, though, and I might have noticed a peculiarity of AD: perhaps it’s only that one manga/anime where dogs are signifiers of wealth. What about all the other anime?

TODO: methodology? The claim is that dog-owning characters will be more likely to be wealthy than other characters. This is not a within-series claim (imagine a series like Maria-sama where every girl is either rich or middle-class; if they all had dogs, that would clearly support the hypothesis even though there would be no within-series correlation) but an across-series one. Collect a random sample of normal characters, then collect a full list of dog-owning characters, and compare them with a logistic regression where owning a dog = success? Can I use the same sources as in the hafu anime character database - TvTropes, AniDB, WP, MAL, Baka-Updates Manga, Google/Scholar?
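A minimal sketch of what that comparison might look like in R, assuming a hand-compiled CSV (the file name characters.csv and its columns dog/wealthy/series are hypothetical, not an existing dataset):

## hypothetical sketch: is dog-ownership associated with wealth across series?
## assumed columns: series, character, dog (1 = owns a dog), wealthy (1 = middle/upper-class)
characters <- read.csv("characters.csv", header=TRUE)
## since dog-owners are fully enumerated but normal characters are only sampled,
## treat it as a case-control design: the logistic-regression odds ratio is still valid
summary(glm(dog ~ wealthy, family=binomial, data=characters))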

Another nice example of subtle Japaneseisms in anime: the implicit influence of aoi, in how uses of green/blue are unstable and shift between blue and green far more often than in a Western animation; eg. in Saga of Tanya the Evil, Tanya’s stereotypically brilliant blue eyes often shift into distinctly green colorations despite no particular lighting/scenery reason for it.

Tradeoffs and Costly Signaling in Appearances: the Case of Long Hair

At some point I confused two tsundere anime characters: they both had long brown hair and flat chests, and I messed up a comment. This pairing of long hair and flat chests seems to be common for tsundere character designs (eg. 4 of the 5 ‘notable examples’ on Know Your Meme, with the exception being from an unpopular and fairly obscure anime; indeed, as TvTropes’s list indicates, almost all of the Rie Kugimiya-voiced characters have this pair of traits).

When I realized my mistake, I noticed that their counterpart female characters both had short brown/black hair and ample bosoms. This inverse relationship struck me as a little odd because the counterparts only need one distinguishing feature, as many characters get by with, so why did they have the opposite length of hair and opposite cleavage settings? And the more I thought about it, the more it seemed like this pattern held true in real-life too: I might see women with long hair, or with cleavage, but rarely with both.

Of the 2x2 table of short/long hair and large/small breasts, the counterpart can’t have the original long/small combination because that would be confusing; and if the counterpart had short/small, that would unavoidably cast them as either ‘child-like’ or more masculine which is often inappropriate, so the short/small combination would be avoided; but that still leaves long/large as an option, which changes only one aspect.

But then I remembered: what is the stereotypical haircut of a new mother in both Japan and America - isn’t it to cut the hair very short, often less than shoulder-length? And isn’t long hair in many cultures associated with young women, and unwed young women in particular, and considered positive? (“for a woman, if her hair is abundant, it is a glory to her”; “hair is the richest ornament of women”; “they say that the hair is everything, you know”; “the bald woman boasts of her sister’s hair”; “if you meet a red-haired woman, you’ll meet a crowd”; “when the month of May arrives, women’s hair grows and penises become strong”; “one hair of a woman draws more than a bell-rope” / “one hair of a maiden’s head pulls harder than ten yoke of oxen” / “one hair from the head of a woman pulls more than a ship’s hauser” / “beauty draws with a single hair”; - but remember men, as attractive as blonde hair may be, “falseness often lurks beneath fair hair” / “often a troll-woman is under fair skin, and virtue under dark hair” and remember women, “short hair is soon brushed”!) And as pointed out by evo-psych theorizers, long hair may be sexually attractive as a costly & thus reliable signal of health; further, if the hair is blonde, then (in pre-hair-dye eras) it’d be a reliable signal of youth too.

If long hair really is an attractive asset for a woman (and I think a lot of men would agree that barbigerous factors matter, if not as much as breasts or buttocks), then one might wonder why marriage is accompanied by hair-cutting. After all, why deliberately make yourself less attractive? Surely it’s nice to be beautiful and admired even if you’ve already found a husband. Plus, everyone grows hair so it seems like a relatively egalitarian aspect of attractiveness - it’s not set in stone like so much of one’s appearance.

The answer may be that long hair is not just a reliable signal, but a costly one: the longer hair is, the harder it is to take care of it. One has to use up more shampoo & conditioner cleaning the full mass, rendering showers a complicated affair; the weight of long hair is a literal burden; brushing the hair may take a long time; one has to keep an eye out to avoid getting the hair in one’s eyes, caught in anything around you, avoid knocking things over with one’s tresses, keep it clean, or avoid stepping on it in the most extreme cases.

I don’t know how much time & effort is involved in maintaining, say, hip-length rather than shoulder-length hair, but it must be considerable to explain why hip-length hair is so unusual even before marriage.

Further, as commenter Jay points out, if the point of attractiveness is to secure a long-term relationship for reproduction, then as well as being expensive & unnecessary, hair that has been cut short or concealed is serving as a commitment mechanism in indicating less interest in & less potential for adultery/divorce (women literally letting their hair down only at home for their husbands):

It is quite possible that the act of cutting one’s hair is signaling. Signaling to one’s mate that she intends to remain faithful. Assuming that men value longer hair as a measure of genetic/health quality, the act of cutting her hair has the effect of reducing her perceived quality as a reproductive partner while her actual value remains unchanged. Judging from this perspective, it is optimal for one’s mate to reduce their perceived sexual value after pair bonding has happened, to reduce competition. In this scenario, the woman can signal her commitment to the relationship by reducing her market value - cutting her hair. Assuming the above idea is true, it should be observable that women who don’t cut their hair after marriage/children are more likely to cheat.

This doesn’t explain the apparent inverse relationship where one has either breasts or hair. Hair may be costly, and so women shed it at the first opportunity, but why isn’t long hair universal before marriage?

I think this may be explained by the optionality of hair: one cannot choose the size of one’s breasts without resorting to desperate measures like surgery, one cannot change one’s eye color without unpleasant measures like colored contact lenses, one cannot change the shape of one’s face, and losing weight & being fit is a lifelong battle - but hair is optional. So suppose one lucks out and has a curvaceous cleavage men drool over; perhaps that is sufficient, so one doesn’t want to pay the extra costs of long hair, never grows flowing locks, and this works out. But suppose one instead has a chest like a cutting board - what can one do about that? Not much… but one could compensate by instead growing long hair, so wouldn’t one? It’s better than being both short-haired and flat-chested. (There will be exceptions of course: a supermodel might be both busty & long-haired because her job makes it worth her while, some women may simply like long hair a lot and want it regardless, and one may be cursed with hair so bad that growing it out is never worthwhile.)

So the apparent inverse correlation of hair length and bust size is generated by Berkson’s paradox: there is no correlation between hair & bust initially, but conditioning on aiming for a threshold of attractiveness & cost-benefit optimizing, an inverse correlation emerges, with bustier women tending to cut their hair & less busty women growing theirs.
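The selection effect is easy to simulate; a minimal sketch in R, assuming (arbitrarily) that standardized hair length & bust size start out as independent normal traits and that we only observe women who clear some overall attractiveness threshold:

## Berkson's paradox sketch: two independent traits become negatively correlated
## once we condition on a combined threshold; all numbers are arbitrary.
set.seed(2016)
n    <- 100000
hair <- rnorm(n)   # hair length (standardized)
bust <- rnorm(n)   # bust size (standardized)
cor(hair, bust)                          # ~0 in the full population
## condition on clearing an overall attractiveness threshold
## (eg. the women an observer notices, or the characters an artist bothers to draw):
noticed <- (hair + bust) > 1
cor(hair[noticed], bust[noticed])        # ~-0.6: a strong spurious inverse correlation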

In general, if a factor of attractiveness is optional and costly, we’d expect people blessed with more non-optional/uncontrollable factors to avoid the optional/controllable costly ones, and that the optional costly factors will vary with perceived prospects or need for attractiveness (eg. we’d expect sharp decreases in hair length after marriage, and gradual decreases with age).

To test this:

  • compile a set of tsundere female characters, a random selection of non-tsundere characters, and classify each by hair length & breast size, and see if there is an inverse relationship in both groups or whether it’s a tsundere artifact

  • real-world datasets?

    • Perhaps photos from dating sites where women might be expected to be explicitly optimizing for physical attractiveness? (But what dating sites record both hair length and breast size? Surely not representative ones…)

    • Do porn preferences map onto attractiveness preferences closely enough? Then we might see the inverse relationship there. (One might worry that all porn performers would have huge surgically-augmented breasts, but an analysis of the IAFD says the modal breast size is 34B and natural hair colors are common, albeit blonde is still 6x overrepresented. The IAFD, unfortunately, doesn’t record hair lengths, which would allow for checking an inverse correlation.)

  • more exotically, increases or decreases in the cost of long hair should cause corresponding decreases & increases in its prevalence on the margin

Boorus: Revealed Preferences

While the IAFD doesn’t include hair length, as part of my Danbooru2018 dataset project, I found a dataset which does: the anime image boorus, enormous databases of images annotated with a rich set of tags by their users, often used for porn or anime or both.

The major boorus include tags corresponding to the question, and one can simply check the counts of images. Images represent a combination of popularity of an artist (for their images to be uploaded & annotated), popularity of characters (for artists to draw images of them), and also popularity of a particular tag (while a character may canonically have long hair, an artist can still draw them with short hair should they wish).

Searching on 2016-04-22 by hand (estimating the number in each booru as the number per page times the number of pages), and using the 1girl tag to ensure that the hair/breast tags referred to the same character, I found:

  1. Danbooru:

    • 1girl short_hair small_breasts: 6300

    • 1girl long_hair small_breasts: 10400

    • 1girl short_hair large_breasts: 35200

    • 1girl long_hair large_breasts: 90100

  2. Safebooru:

    • 1girl short_hair small_breasts: 44800

    • 1girl long_hair small_breasts: 78400

    • 1girl short_hair large_breasts: 281600

    • 1girl long_hair large_breasts: 792000

  3. Gelbooru:

    • 1girl short_hair small_breasts: 448056

    • 1girl long_hair small_breasts: 712656

    • 1girl short_hair large_breasts: 2245572

    • 1girl long_hair large_breasts: 5741820

  4. Big Booru:

    • 1girl short_hair small_breasts: 290000

    • 1girl long_hair small_breasts: 487500

    • 1girl short_hair large_breasts: 1450000

    • 1girl long_hair large_breasts: 3825000

  5. Sankaku:

    • 1girl short_hair small_breasts: 10254

    • 1girl long_hair small_breasts: 16715

    • 1girl short_hair large_breasts: NA

    • 1girl long_hair large_breasts: NA

  6. Rule 34:

    • 1girl short_hair small_breasts: 142002

    • 1girl long_hair small_breasts: 220206

    • 1girl short_hair large_breasts: 854070

    • 1girl long_hair large_breasts: 2123856

## booru tag-counts (Hair: 1 = long, 0 = short; Bust: 1 = large, 0 = small;
## Source: D = Danbooru, S = Safebooru, G = Gelbooru, B = Big Booru, SC = Sankaku, R = Rule 34)
hair <- read.csv(stdin(), header=TRUE)
Source,Hair,Bust,Count
D,0,0,6300
D,1,0,10400
D,0,1,35200
D,1,1,90100
S,0,0,44800
S,1,0,78400
S,0,1,281600
S,1,1,792000
G,0,0,448056
G,1,0,712656
G,0,1,2245572
G,1,1,5741820
B,0,0,290000
B,1,0,487500
B,0,1,1450000
B,1,1,3825000
SC,0,0,10254
SC,1,0,16715
R,0,0,142002
R,1,0,220206
R,0,1,854070
R,1,1,2123856


## count-weighted log-linear model: do long hair & large breasts each add images,
## and is their combination worth more than the two separately (the Bust:Hair interaction)?
summary(lm(log(Count) ~ Bust*Hair + Source, weights=hair$Count, data=hair))
# Coefficients:
#               Estimate Std. Error   t value   Pr(>|t|)
# (Intercept) 12.5573170  0.0400583 313.47599 < 2.22e-16
# Bust         1.6513538  0.0416067  39.68965 5.9265e-15
# Hair         0.4831628  0.0483397   9.99515 1.8094e-07
# SourceD     -3.7524417  0.0990217 -37.89515 1.0763e-14
# SourceG      0.4129648  0.0193262  21.36817 1.6455e-11
# SourceR     -0.5934732  0.0251471 -23.60008 4.6621e-12
# SourceS     -1.6183374  0.0369046 -43.85187 1.6355e-15
# SourceSC    -3.3185000  0.2261675 -14.67275 1.8117e-09
# Bust:Hair    0.4657983  0.0521960   8.92403 6.6273e-07
#
# Residual standard error: 36.8839 on 13 degrees of freedom
# Multiple R-squared:  0.999131,    Adjusted R-squared:  0.998596
# F-statistic: 1867.51 on 8 and 13 DF,  p-value: < 2.22e-16

As expected, both long hair and larger busts are more popular (so hair+bust > bust > hair > short/small); interestingly, there is an interaction between the two which is almost as large as the main effect for hair, making the combined effect ~20% larger (on the log scale) than the simple sum of the two main effects.
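To make the log-scale coefficients concrete, a quick back-of-the-envelope on the fitted values above (baseline = short hair & small breasts):

## implied multiplicative effects on image counts, from the coefficients above:
exp(1.6514)                    # large breasts alone:                           ~5.2x the baseline
exp(0.4832)                    # long hair alone:                               ~1.6x
exp(1.6514 + 0.4832)           # both, if the effects simply added (log scale): ~8.5x
exp(1.6514 + 0.4832 + 0.4658)  # both, including the interaction:               ~13.5x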

The Tragedy of Grand Admiral Thrawn

I explain the somewhat arbitrary-seeming death of the popular Star Wars character Grand Admiral Thrawn as being the logical culmination of a tragic character arc in which his twisted (or “thrawn”) cunning & scheming, one of his defining traits, ultimately backfires on him, causing his bodyguard to betray & assassinate him.

In the sprawling Star Wars Expanded Universe, one of the most memorable characters was Grand Admiral Thrawn. Thrawn was introduced by Timothy Zahn in 1991 in his The Thrawn Trilogy (best-sellers credited with reviving interest in the EU)—and then immediately escorted out in the third book, The Last Command, when he was assassinated by his bodyguard in Chapter 28 (of 29) at the climax of a Rebel attack on a key Imperial facility:

…“Don’t panic, Captain,” Thrawn said. But he, too, was starting to sound grim. “We’re not defeated yet. Not by a long shot.”

Pellaeon’s board pinged. He looked at it—“Sir, we have a priority message coming in from Wayland,” he told Thrawn, his stomach twisting with a sudden horrible premonition. Wayland—the cloning facility—

“Read it, Captain,” Thrawn said, his voice deadly quiet.

“Decrypt is coming in now, sir,” Pellaeon said, tapping the board impatiently as the message slowly began to come up. It was exactly as he’d feared. “The mountain is under attack, sir,” he told Thrawn. “Two different forces of natives, plus some Rebel saboteurs—” He broke off, frowning in disbelief. “And a group of Noghri—”

He never got to read any more of the report. Abruptly, a gray-skinned hand slashed out of nowhere, catching him across the throat.

He gagged, falling limply in his chair, his whole body instantly paralyzed. “For the treachery of the Empire against the Noghri people,” Rukh’s voice said quietly from beside him as he gasped for breath. “We were betrayed. We have been revenged.”

There was a whisper of movement, and he was gone. Still gasping, struggling against the inertia of his stunned muscles, Pellaeon fought to get a hand up to his command board. With one final effort he made it, trying twice before he was able to hit the emergency alert.

And as the wailing of the alarm cut through the noise of a Star Destroyer at battle, he finally managed to turn his head.

Thrawn was sitting upright in his chair, his face strangely calm. In the middle of his chest, a dark red stain was spreading across the spotless white of his Grand Admiral’s uniform. Glittering in the center of the stain was the tip of Rukh’s assassin’s knife.

Thrawn caught his eye; and to Pellaeon’s astonishment, the Grand Admiral smiled. “But,” he whispered, “it was so artistically done.”

The smile faded. The glow in his eyes did likewise… and Thrawn, the last Grand Admiral, was gone.

“Captain Pellaeon?” the comm officer called urgently as the medic team arrived—too late—to the Grand Admiral’s chair. “The Nemesis and Stormhawk are requesting orders. What shall I tell them?”

Pellaeon looked up at the viewports. At the chaos that had erupted behind the defenses of the supposedly secure shipyards; at the unexpected need to split his forces to its defense; at the Rebel fleet taking full advantage of the diversion. In the blink of an eye, the universe had suddenly turned against them.

Thrawn could still have pulled an Imperial victory out of it. But he, Pellaeon, was not Thrawn.

“Signal to all ships,” he rasped. The words ached in his throat, in a way that had nothing to do with the throbbing pain of Rukh’s treacherous attack. “Prepare to retreat.”

What did Thrawn mean by “it was so artistically done” and is this plot twist a deus ex machina, a letdown in an otherwise excellent trilogy that rose above the usual EU level of pulp fiction?

It’s a little bit of a deus ex machina, but as they go, I think it’s acceptable because all the mechanics are laid in place well in advance in The Thrawn Trilogy, and the assassination itself serves the major literary purpose of demonstrating Thrawn’s fatal flaw of hubris leading him to a tragically bad end.

For years I was vaguely puzzled by the ending: sure, it made logical sense that the Noghri would retaliate by killing him, and didn’t violate any rules or worldbuilding or anything, but it felt unmotivated and lacking in literary purpose—why did Timothy Zahn choose that particular way of dealing with Thrawn when Star Wars villains have often been dealt with in much less final ways? (Admiral Daala, for example, returns constantly; even Emperor Palpatine, who died so definitively canonically, nevertheless gets brought back in Dark Empire as multiple clones.)

After reading a boring Greek tragedy, I finally saw what The Last Command was doing: the trilogy was also a classical tragedy.

So first, the timing of the assassination is not implausible: the bodyguard can pick and choose the time, and since he can’t expect to escape alive, he’ll want to maximize the damage - major combat was common for Thrawn, his bodyguard would know this perfectly well, and would also know that killing him in the middle of a battle premised on access to Thrawn’s strategic genius would do the most harm.

Second, the betrayal is also plausible, because Leia had at this point spent most of a book on the Noghri home planet, uncovering the deception, so it’s been thoroughly established for the reader that ‘the Noghri clans know how they have been deceived and enslaved for generations and that their gratitude/worship of Darth Vader (and then Thrawn) as a hero is the cruelest of lies’; the reader expects them to be… not happy about this.

Third, that Thrawn wasn’t expecting it is what makes it so ironic and dramatically satisfying: his last line is “But… it was so artistically done.” Some people read this as referring to the battle or perhaps Thrawn’s long-term plans or even the assassination itself; the interpretation here being that Thrawn is stabbed in the back by his trusted Noghri literally stabbing him in the back—this is ironic, certainly, but does it really merit a wistful description like that? But I’d always read it as obviously referring to Vader’s deception of the Noghri where the environmental cleanup robots etc were actually keeping the planet poisoned & destroyed; he understands the only reason Rukh would ever assassinate him is that the deception has failed and the Noghri found out, and he is disappointed that the so elegant and artistic scheme—the pollution remediation robots were in fact the cause of the pollution—has collapsed.

Now, the reader might reasonably say ‘hey, maybe you shouldn’t rely for bodyguards on a race of murder-ninja-lizards who you are tricking into generational servitude by a vast scheme of planetary destruction masquerading as a charity and who might find out at some point and not be happy, and find someone else to be your bodyguards?’, but the reader is of course not a twisted strategic genius who delights in deception & trickery & exploiting the psychology of his enemies (remember the definition: “thrawn (adj). twisted; crooked”) and enjoys keeping ‘his friends close but his enemies closer’, so to speak. This delight in twisted deception is Thrawn’s fatal flaw, which leads him into the hubris of taking such an extreme risk which will explode in his face, and the lack of necessity is precisely what makes it tragic; and a good tragedy always ends in death. And the fact that the assassination happens during a critical battle, which might have paved the way to victory, aside from being rational in-universe, only increases the tragic element: he and his empire were undone by his fatal flaw at the height of his powers and success.

Unlike a more standard tragedy where our protagonist is a good guy, Thrawn is an irredeemable bad guy, so while he realizes his proximate mistake (‘what a pity that my deception failed… even though it was so skillfully & cleverly done’), he has no anagnorisis of his own fatally-flawed ‘thrawn’ nature, like a hero would.

Thrawn would never reflect on how cruel his scheme was, or how unnecessary it was; he took too much pleasure in the cleverness—and that is why at the end of the trilogy he dies an unrepentant villain.


On Dropping Family Guy

The other day I saw a mention of Family Guy, and I remembered: I used to watch it all the time on Fox & Adult Swim, and liked it a fair bit. I still have several seasons’ worth in my big DVD binder, so I could watch some anytime. But I haven’t watched it in ~4 years. Why did I sour on it?

It’s still available on TV, so that’s not it (although the wretched American Dad and The Cleveland Show series seem to be on a lot; Seth MacFarlane does not know his limits). Nor is it that I now dislike animation; I watch as much anime as ever, and I enjoy The Simpsons whenever I get the chance. And The Simpsons highlights another reason which is not the reason: many Family Guy episodes are awful, but that’s equally true of The Simpsons, especially in the later seasons. We watch for the good episodes, and forgive the bad.

No, I think the reason is more that I became tired of Family Guy. Something in it tired me out. After some thought, I realized that the humor was the reason, and specifically the kinds of humor FG used.

Yes, there’s more than one kind of humor in FG despite its reputation. One venerable classification of culture is into ‘high culture’ and ‘low culture’. The former requires education and knowledge, while the latter caters to the ‘lowest’ common denominator (or to put it nicely, is ‘universal’ or ‘accessible’).

The kind of humor FG is known for is clearly ‘low’. Jokes about bodily functions, sex, norm-breaking in general - these are without question low. The Simpsons has, of course, low humor of its own: pretty much anytime Homer says “D’oh!” is an instance of low humor.

But then again, The Simpsons also has ‘high’ humor - characters, guest appearances, allusions, background images, things one might not even realize are meant to be funny until one has already gotten the joke. In the classic episode “Thirty Minutes Over Tokyo” where they go to Japan, on the flight there Marge tells Homer to not pout about going to Japan rather than Jamaica because “You liked Rashomon” to which Homer replies “That’s not how I remember it.” The first several times I saw this, I had no idea what the allusion meant until Rashomon happened to be shown in school and I learned the plot revolved around differing retellings of a crime, at which point Homer’s reply became funny. I suspect 99% of viewers have never seen Rashomon, and almost as many have no idea what the plot was or what the joke was, but for the last 1%, it’s funny.

Does FG have this “high” humor? Yes! Although here is the difficulty: although I know that there is high humor, most of it I don’t understand. Whenever Stewie or Brian break into song or dance, I understand that probably some classic movie or Broadway musical is being alluded to & homaged, but I have no idea which. I don’t get the jokes even when I think I should: in one FG episode, Brian finally becomes a writer for The New Yorker, a publication I have read sporadically for many years - nevertheless, most of the jokes go clean over my head.

Isn’t that weird? I think of myself as a fairly knowledgeable guy, and I catch much of the high humor on The Simpsons (I read a few episode guides from The Simpsons Archive identifying all the jokes and allusions, and had seen a good fraction of them). Which raises a question, at least for me: if I am missing all or most of the high jokes on FG, who exactly are they aimed at? Especially when the Adult Swim demographic skews younger (and more ignorant) than me?

But regardless of why the high jokes are pitched so high and, in a sense, so elitist, I am still missing them. I like a little low humor, but I like best a mix of high and low (heavy on the high). If I am missing out on most of the high, then I am left with the dregs: the low humor, which after sustained exposure wore thin. And exhausted me.

And so I stopped watching.

Pom Poko’s Glorification of Group Suicide

The first time I watched Studio Ghibli’s 1994 Pom Poko has so far been the last: I found it a transparent, thoroughgoing allegory of the mass suicides of WWII, and as harrowing as Grave of the Fireflies - but worse in a way, because there is no condemnation of the mass suicides. Instead, we are led to somewhat admire and sympathize with them.

Alexandra Roedder wrote:

I always saw Pom Poko as being what it claimed to be: a story of the development of the Tama region of Tokyo, told with a strange kind of humor to offset (maybe?) or emphasize the sad facts of the development’s impact on the environment. The mass sacrifice strikes me as being a part of the portrayal of the era, relying on the cultural memory (for lack of a better term) of kamikaze from WWII, not a commentary on it.

But WWII also brought the end of the traditional sort of Japanese development and increased Western-style development - like the city we see by the end, and things like the nightclub (or casino?) where they met the foxes.

The parallels are there, if you want to see them.

The Tanuki (the traditional Japanese) see their traditional way of life threatened by modern foreign-style development with apartments and powered construction vehicles (Western conquest & economic development), and fight back against the initially minor developments (embargo) and then escalate into full warfare (Pearl Harbor and the Pacific War), which ultimately leads to failure against the humans’ superior tools (American materiel advantage), individual kamikaze missions, and mass suicidal attacks of multiple Tanuki in separate places, groups, and methods (Japanese use of suicide submarines, kamikaze planes, banzai charges), and finally, as despair and defeat set in, sheerly futile mass civilian suicide, like scores of Tanuki setting off as part of the Buddhist cult-boat to the afterworld (the mass Okinawa suicides with the imperialist justifications - and remember that Buddhism, having come to terms with the imperial government during the Meiji restoration, was heavily implicated in this ideology too; it wasn’t just Shintoism).

That you see them as good-humored shows that the suicides are not condemned but, if anything, approved of as noble and demonstrating purity of heart. (I understand Gone with the Wind is not without its own good humor to offset the sad story of the decline of the Old South.)

We all know of the conservative trends and Japanese nationalism which lingers in Japanese politics and manifests in such forms as: denying that any bad things like war crimes happened during their pre-WWII expansion or during WWII itself with the general denial of culpability exemplified by the comfort women; the revisionism in textbooks about such incidents (no doubt whipped up by those who hate Japan) like the Rape of Nanking; the war criminal shrine; and the martyr complex over the nuclear bombings as a perpetual club against the West.

The trend is not absent from anime. Grave of the Fireflies is all about Japanese suffering, for example, with not the slightest sense that other countries were suffering even more.

I like Ghibli movies well enough, but my own particular focus is Gainax films and Hideaki Anno in particular. One of the striking aspects of the WWII material that Anno loves so dearly is that in his discussions, at hardly any point does he exhibit any sense of guilt or culpability or sense that the conquests and Pacific War might have been a bad thing for any other reason than Japan losing it and suffering the consequences. When the topic comes up, they say other things - from an Atlantic interview:

Anno understands the Japanese national attraction to characters like Rei as the product of a stunted imaginative landscape born of Japan’s defeat in the Second World War. “Japan lost the war to the Americans”, he explains, seeming interested in his own words for the first time during our interview. “Since that time, the education we received is not one that creates adults. Even for us, people in their 40s, and for the generation older than me, in their 50s and 60s, there’s no reasonable model of what an adult should be like.” The theory that Japan’s defeat stripped the country of its independence and led to the creation of a nation of permanent children, weaklings forced to live under the protection of the American Big Daddy, is widely shared by artists and intellectuals in Japan. It is also a staple of popular cartoons, many of which feature a well-meaning government that turns out to be a facade concealing sinister and more powerful forces.

Further examples of this rhetoric and regret over losing the Pacific War can be found in Takashi Murakami’s long essay “Earth in my Window” or in Sawaragi (transcribed from Little Boy, 2005).

One can’t help but wonder - if the “Pacific War” led to peace since then, is that really so bad? I suspect I already know how the Chinese and Koreans regard this tragedy. (And why is it the “Pacific War”, anyway? Japan was engaged in Asian land wars or occupation or subversion for decades before Pearl Harbor.)

I’ll give another example for Anno. Numbers-kun translates part of an Anno interview with one of his favorite film-makers, Kihachi Okamoto:

Then some talk about Okamoto’s Nikudan. Anno watched it twice and Okamoto said that’s more than enough… Anno said he still remembered a lot of the scenes and how they are edited and linked. But the ones he watched most are Japan’s Longest Day [1968] and Okinawa Battle. He even played the latter as BGV [background video] when he was doing storyboarding at one time, and then slowly his attention was drawn to the video and he ended up spending 3 hours watching it.

I have not been able to watch The Battle of Okinawa yet, but Animeigo’s liner notes do a good job indicating why it might be a tad controversial…

Okinawa came up in my Evangelion research, incidentally, because Okinawa comes up in Gunbuster as one of the (very subtle and easy for non-Japanese to miss) indications throughout that Japan has been restored to its rightful dominant place in the world22, in Evangelion there was a cut episode with a trip to Okinawa, and for End of Evangelion, OST commentary indicates that the victorious JSSDF shock forces (who cut down surrendering NERV personnel without mercy and burn them alive with flamethrowers) were intended to demonstrate man’s viciousness and inhumanity. During the nonfictional battle of Okinawa, of course, the victorious troops using flamethrowers were American. Very few (non-Japanese) people ever notice the Okinawa references in EoE.

Takahata’s films always seem to have that kind of “laugh because we can’t do anything else” humor.

But Isao Takahata being the director is one of the signs: recall that one of his other films was… Grave of the Fireflies.

Tedne suggests:

Someone claimed that it was a parable of the decline of the radical Left in Japan. I think it is a good parable of the decline and fall of indigenous communities. The Tanuki who kill themselves are simply trying to be true to themselves; same thing with their warfare. Under extreme threat people sometimes take extreme actions. There is no simple and compelling reason to either condemn or commend that.

It can be both; the radical Left - and Right, let’s not forget Yukio Mishima23 - had certain classic positions. What was the Left most opposed to? The security treaties with America and the bases on Okinawa. They were young, energetic, and wished to ‘revolutionize the world’ in service of an ideal, one might say - just like their noble kamikaze forebears. And likewise they failed, for similar reasons.

Further, I disagree that the presented actions are normal. The pervasive suicide in Pom Poko is not a universal. People rarely commit suicide: groups fighting to the last man or committing suicide are so rare that they command considerable attention when they happen deliberately. One can command considerable attention just by threatening to kill oneself, and self-immolation - both in the Middle East and Asia - is a compelling protest. In practice, people surrender, adapt, and live on. (Unsurprisingly!)

The concept of suicide-bombers and kamikazes lives on because they are so unusual; lone assassins and fanatics may occasionally hazard certain death, but entire organized bodies of men? One has to look far for counterparts. (The Greeks at Thermopylae? But most of them survived. The forlorn hopes at sieges? But they expected to win wealth & glory if they broke through, and certainly didn’t carry petards with them even if that might’ve been an effective idea.) Far more common in history is soldiers deserting or mutinying at what they consider a suicide mission. US observers, even knowing of the much-discussed suicidal strains in bushido like seppuku, were still shocked by the course of the Pacific War; so it was that Admiral Nimitz could write to the Naval War College:

The war with Japan has been [en-acted] in the game room here by so many people and in so many different ways that nothing that happened during the war was a surprise - absolutely nothing except the Kamikaze tactics towards the end of the war; we had not visualized those.24

Allegories can be difficult to understand the more remote and foreign they become: who can read Dante Alighieri’s Inferno and understand all the political or historical material without a scholarly apparatus? I think something similar is happening with Pom Poko. If we were to come up with a contemporary Islamic allegory, how would people react to it?

It’s not that hard to come up with an isomorphic version which would make an average Westerner a little uneasy: a tribe of cheerful Arabian Djinni under the good King of Djinn discover that oil drilling is extending into the Empty Quarter which they have lived in for so long; they declare jihad and attempt to fight back, using their magical powers, but while 1 or 2 rigs catch on fire after some male djinni magically blow themselves up and some other djinni can commandeer a truck to smash into the gate of an oil refinery, their efforts are generally futile. Dozens of djinni decide in their despair to permanently depart the world on a giant magic carpet bound for Paradise (where they hope for houris), while the rest wish together and perform one last spell in the urban streets of Riyadh: evoking the Golden Age of Baghdad and the One Thousand And One Nights with the bronze giant and roc and princes of Serendip and enchanted women and divers other fantastical characters & objects. Exhausted, they abandon their smoky forms to masquerade as ordinary turban, bisht, or hijab wearing immigrant workers & expats in the Saudi government & oil industries, only periodically showing their true colors.

Full Metal Alchemist: Pride and Knowledge

What do you think about Ed transmuting the literal gate to break the rules in the end of FMA?

That wasn’t rule-breaking; that was awesome. It was tremendously satisfying.

One of FMA’s running themes was the narrowness of those interested in alchemy. They were interested in it, in using it, in getting more of it. Obviously folks like Shou Tucker or Kimblee sold their souls for alchemy, but less obviously, the other alchemists have been corrupted to some degree by it. Even heroes like Izumi or the Elrics transgressed. Consider Mustang; his connection with Hawkeye was alchemy-based, and only after years did the connection blossom. Consider how little time he spent with Hughes, in part due to his alchemy-based position. Mustang didn’t learn until Hughes was gone just how much his friends meant.

Similarly, Greed. His epiphany at the end hammers in the lesson about the value of friends. How did he lose his friends? By pursuit of alchemy-based methods of immortality.

That is why Ed was the real hero. Because he realized the Truth of FMA: your relationships are what really matter. No alchemist ever escaped the Gate essentially intact before he did. Why? Because it would never even occur to them to give up their alchemy or what they learned at the Gate.

Have you ever heard of a monkey trap made of a hole and a collar of spikes sticking down? The monkey reaches in and grabs the fruit inside, but his fist is too big to pass back out. If only the stupid monkey would let go of the fruit, he could escape. But he won’t. And then the hunter comes.

The alchemists are the monkey, alchemy is the fruit, and the Truth is the hunter. The monkeys put the fruit above their lives, because they think they can have it all. Ed doesn’t.

Were there things I disliked? Yes, the whole god thing struck me as strange and ill-thought-out. I also disliked the mechanism for alchemy - some sort of Earth energy. I thought the movie’s idea that alchemy was powered by deaths in an alternate Earth really fit the whole theme of Equivalent Exchange - TANSTAAFL. It’s good that Amestrian alchemy turns out to be powered by human sacrifice (TANSTAAFL), but that turns out to be due to the Father character blocking the ‘real’ alchemy - and so non-Amestrian alchemy is a free lunch after all!

A Secular Humanist Reads The Tale of Genji

After several years, I finished reading Edward Seidensticker’s translation of The Tale of Genji. Many thoughts occurred to me towards the end, when the novelty of the Heian era began to wear off and I could be more critical.

The prevalence of poems & puns is quite remarkable. It is also remarkable how tired they all feel; in Genji, poetry has lost its magic and has simply become another stereotyped form of communication, as codified as a letter to the editor or small talk. I feel fortunate that my introductions to Japanese poetry have usually been small anthologies of the greatest poets; had I first encountered court poetry through Genji, I would have been disgusted by the mawkish sentimentality & repetition.

The gender dynamics are remarkable. Toward the end, one of the two by-then main characters becomes frustrated and casually has sex with a serving lady; it’s mentioned that he liked sex with her better than with any of the other servants. Much earlier in Genji (it’s a good thousand pages, remember), Genji simply rapes a woman, and the central female protagonist, Murasaki, is kidnapped as a girl and he marries her while she is still what we would consider a child. (I forget whether Genji sexually molests her before the pro forma marriage.) This may be a matter of non-relativistic moral appraisal, but I get the impression that in matters of sexual fidelity, rape, and children, Heian-era morals were not much different from my own, which makes the general impunity all the more remarkable. (This is the ‘shining’ Genji?) The double standards are countless.

The power dynamics are equally remarkable. Essentially every speaking character is nobility, low or high, or Buddhist clergy (and very likely nobility anyway). The characters spend next to no time on ‘work’ like running the country, despite many main characters ranking high in the hierarchy and holding minister-level ranks; the Emperor in particular does nothing except party. All the households spend money like mad, and just expect their land-holdings to send in the cash. (It is a signal of their poverty that the Uji household ever even mentions how much less money is coming from their lands than there used to be.) The Buddhist clergy are remarkably greedy & worldly; after the death of the father of the Uji household, the abbot of the monastery he favored sends the grief-stricken sisters a note - which I found remarkably crass - reminding them that he wants the customary gifts of valuable textiles25.

The medicinal practices are utterly horrifying. They seem to consist, one and all, of the following algorithm: “while sick, pay priests to chant.” If chanting doesn’t work, hire more priests. (One freethinker suggests that a sick woman eat more food.) Chanting is, at least, not outright harmful like bloodletting, but it’s still sickening to read through dozens of people dying amidst chanting. In comparison, the bizarre superstitions (such as confining people to their houses on inauspicious days) that guide many characters’ activities are unobjectionable.

The ‘ending’ is so abrupt, and so clearly unfinished; many chapters have been spent on the 3 daughters of the Uji householder, 2 are disposed of, and the last one has just been discovered in her nunnery by 1 of the 2 protagonists (and the other protagonist suspects). The arc is not over until the would-be nun has been confronted, yet the book ends. Given that Murasaki Shikibu was writing an episodic entertainment for her court friends, and the overall lack of plot, I agree with Seidensticker that the abrupt mid-sentence ending is due either to Shikibu dying or abandoning her tale - not to any sort of deliberate plan.

Economics

Long Term Investment

“That is, from January 1926 through December 2002, when holding periods were 19 years or longer, the cumulative real return on stocks was never negative…”

How does one engage in extremely long investments? On a time-scale of centuries, investment is a difficult task, especially if one seeks to avoid erosion of returns by the costs of active management.

In long-term investments, one must become concerned about biases in the data used to make decisions. Many of these biases fall under the general rubric of “observer biases” - the canonical example being that stocks look like excellent investments if you only consider America’s stock market, where returns over long periods have been quite good. For example, if you had invested by tracking the major indices during any time period from January 1926 through December 2002 and had held onto your investment for at least 19 years, you were guaranteed a positive real return. Of course, the specification of place (America) and time period (before the Depression and after the Internet bubble) should alert us that this guarantee may not hold elsewhere. Had a long-term investor in the middle of the 19th century decided to invest in a large up-and-coming country with a booming economy and strong military (much like the United States has been for much of the 20th century), they would have reaped excellent returns. That is, until the hyperinflation of the Weimar Republic. Should their returns have survived the inflation and imposition of a new currency, then the destruction of the Third Reich would surely have rendered their shares and Reichsmarks worthless. Similarly for another up-and-coming nation - Japan. Mention of Russia need not even be made.

Clearly, diversifying among companies in a sector, or even sectors in a national economy, is not enough. Disaster can strike an entire nation. Rosy returns for stocks quietly ignore those bloody years in which exchanges plunged thousands of percentage points in real terms, and whose records burned in the flames of war. Over a timespan of a century, it is impossible to know whether such destruction will be visited on a given country or even whether it will still exist as a unit. How could Germany, the preeminent power on the Continent, with a burgeoning navy rivaling Britain’s, with the famous Prussian military and Junkers, with an effective industrial economy still famed for the quality of its mechanisms, and with a large homogeneous population of hardy people possibly fall so low as to be utterly conquered? And by the United States and others, for that matter? How could Japan, with its fanatical warriors and equally fanatical populace, its massive fleet and some of the best airplanes in the world - a combination that had humbled Russia, that had occupied Korea for nigh on 40 years, that easily set up puppet governments in Manchuria and China when and where it pleased - how could it have been defeated so wretchedly as to see its population decimated and its governance wholly supplanted? How could a god be dethroned?

It is perhaps not too much to say that investors in the United States, who say that the Treasury Bond has never failed to be redeemed and that the United States can never fall, are overconfident in their assessment. Inflation need not be hyper to cause losses. Greater nations have been destroyed quickly. Who remembers the days when the Dutch fought the English and the French to a standstill and ruled over the shipping lanes? Remember that Nineveh is one with the dust.

In short, our data on returns is biased. This bias indicates that stocks and cash are much more risky than most people think, and that this risk inheres in exogenous shocks to economies - it may seem odd to invest globally, in multiple currencies, just to avoid the rare black swans of total war and hyperinflation. But these risks are catastrophic risks. Even one may be too many.

This risk is more general. Governments can die, and their bonds and other instruments (such as cash) be rendered worthless; how many governments have died or defaulted over the last century? Many. The default assumption must be that the governments with good credit, who are not in that number, may simply have been lucky. And luck runs out.

In general, entities die unpredictably, and one has no guarantee that a, say, 1,500-year-old Korean construction company will honor its bills in another 500 years, because all it takes is one bubble to drive it into bankruptcy. When one looks at securities turning into money, all one sees are those of the entities which survived. This is ‘survivorship bias’; our observations are biased because we aren’t looking at all of the past, but only at the present. This can be exploited, however. Obviously, if an entity perishes, it has no need for assets.

Suppose one wishes to make a very long-term investment. One bands together with a large number of other investors who wish to make similar investments, in a closed-end mutual fund with a share per investor, which is set to liquidate at some remote period. This fund would invest in assets all over the world and of various kinds, seeking great diversification. The key ingredient would be that shares are not allowed to be transferred. Should an investor perish, the value of their share would be split up amongst the other investors’ shares (a percentage could be used to pay for management, perhaps). Because of this ingredient, the expected return for any individual investor would be extremely high - the potential loss is 100%, but the investor by definition will never be around for that loss. Because the identity and number of investors is fixed, potential control of the assets could be dispersed among the investors so as to avoid the situation where war destroys the headquarters of whomever is managing the assets. The technical details are unimportant; cryptography has many ingenious schemes for such secret sharing (one can heavily encrypt a file as usual, and then, using eg. Shamir’s Secret Sharing, create & distribute n key-shares such that any chosen threshold of the n shares is needed to decrypt the file).
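
To make the secret-sharing step concrete, here is a minimal Python sketch of k-of-n Shamir sharing over a prime field - illustrative only, since a real fund would use an audited library and far more careful key management:

```python
# Minimal k-of-n Shamir secret sharing sketch (not production cryptography).
import random

PRIME = 2**127 - 1  # a Mersenne prime comfortably larger than the secret

def make_shares(secret: int, k: int, n: int):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):  # evaluate the random degree-(k-1) polynomial at x
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange-interpolate the polynomial at x=0 to recover the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, k=3, n=5)
assert recover(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
```

Any 3 of the 5 distributed shares reconstruct the decryption key, while fewer reveal nothing, so no single custodian or headquarters is a point of failure.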

From Eliezer Yudkowsky’s “The Apocalypse Bet”

Suppose you think that gold will become worthless on April 27th, 2020 at between four and four-thirty in the morning. I, on the other hand, think this event will not occur until 2030. We can sign a contract in which I pay you one ounce of gold per year from 2010–2020, and then you pay me two ounces of gold per year from 2020–2030. If gold becomes worthless when you say, you will have profited; if gold becomes worthless when I say, I will have profited. We can have a prediction market on a generic apocalypse, in which participants who believe in an earlier apocalypse are paid by believers in a later apocalypse, until they pass the date of their prediction, at which time the flow reverses with interest. I don’t see any way to distinguish between apocalypses, but we can ask the participants why they were willing to bet, and probably receive a decent answer.

Or Garett Jones

How can these prophets of doom cash in on their confidence? After all, they think that the state of the world where they’re proven right is a state of the world where nobody can reward them for being right. Aside from the self-congratulation, how can they benefit? By signing a contract right now. If you’re reasonably sure Treasuries will be worthless in a dozen years, you should find somebody who disagrees, and convince them to give you $1 today. If you’re right, then 12 years later you get to keep the money. If you’re wrong, you have to pay the other party the normal rate of return on the $1, plus a little extra. $2 should be plenty if you can back the contract with some collateral; maybe $4 otherwise. Notice what you’re doing here: You’re writing an uninsurance policy. You get the premiums up front, and you pay out only if things turn out fine. Since the other party is pretty sure things will turn out fine, the deal you’re offering from their point of view is about the same as any other investment. That’s why you only have to offer about the normal rate of return. The hard part here, of course, is convincing the other party you’ll repay in the future–your barrier to riches isn’t the apocalypse, it’s your own trustworthiness. That’s where attorneys and insurance companies come in. Lloyd’s is happy to offer hole-in-one insurance, so the industry has no problem writing policies that pay out upon joyous events.
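
As a sanity check on those numbers (my own arithmetic, not part of the quote): repaying $2 or $4 on a $1 premium after 12 years implies roughly the following annualized returns for the counterparty, which is why $2-with-collateral is “about the same as any other investment” while $4 prices in the trust problem:

```python
# Annualized return implied by growing $1 into the promised payout over 12 years.
for payout in (2, 4):
    rate = payout ** (1 / 12) - 1
    print(f"$1 -> ${payout} over 12 years: ~{rate:.1%}/year")
# => ~5.9%/year for the $2 contract, ~12.2%/year for the $4 one.
```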

Measuring Social Trust by Offering Free Lunches

People can be awfully suspicious of free lunches. I’d like to try a little experiment or stunt sometime to show this. Here’s how it’d go.

I’d grab myself a folding table, make a big poster saying ‘Free Money! $1 or $2’ and in fine print, ‘one per person per day’. Then, anyone who came up and asked would get $2. Eventually, someone would ask for $1 - they would get it, but be asked first why they declined the larger amount.

I think their answers would be interesting.

Even more fun would be giving the $2 as a single 2-dollar bill rather than two 1-dollar bills. They’re rare enough that it would be quite a novelty to people.

Lip Reading Website

Moved to Startup Ideas.

Good Governance & Girl Scouts

See Girl Scouts and good governance.

Chinese Kremlinology

I’m not suggesting that any of the news pieces above are false; I’m more worried about my own ability to be a good consumer of news. When I read about Wisconsin, for example, I have enough context to know why certain groups would portray a story in a certain way, and what parts of the story they won’t be saying. When I’m interested in national (US) news, I know where to go to get multiple mainstream angles, and I know where to go to get fringe analysis. Perhaps these tools don’t amount to much, but I have them and I rely on them. But I really know very little about how news from China gets to me, and it is filtered through many more layers than when I read about things of local interest.

Antoine Latter

It is dangerous to judge a very large and complex country with truly formidable barriers to understanding and internal opacity. As best as I can judge the numbers and facts presented for myself, there are things rotten in Denmark. (The question is whether they are rotten enough.)

But at the same time, we can’t afford to buy into the China-as-the-next-threat hype. When I was much younger, I read every book my library had on Japan’s economics and politics, and many online essays & articles & op-eds besides. They were profoundly educational, but not just in the way that their authors had intended - because they were all from the Japan as Number One (Ezra Vogel) / Rising Sun (Michael Crichton) period of the bubble ’80s, and they were profoundly confident about how Japan would rule the world, and quite convincing - but even as I read them, Japan’s bubble had already popped brutally and the country continued to stagnate. This dissonance, and my own susceptibility to the authors I had read, was not lost on me. (There was another sobering example from that same period for me - I had read Frank Herbert’s Dune with avidity, thoroughly approving of Paul’s actions; then I read Dune Messiah and some of Herbert’s essays and interviews, only to realize that I had been cheering on a mass murderer and had fallen right into Herbert’s trap—“I am showing you the superhero syndrome and your own participation in it.”)

Years later, I came across Paul Krugman’s “The Myth of Asia’s Miracle”, which told me about an economic (as opposed to military or geopolitical) parallel to Japan’s ascension that I’d never heard of - Soviet Russia! (And it’s worth noting that one of the other ‘Asian Tigers’, South Korea, despite its extraordinary growth and its own mini-narratives, still has a per-capita income some $3k or so below Japan’s.)

Ever since, I have been curious as to China’s fate (total wealth much greater than, or merely comparable to, the US’s?), skeptical of the optimistic forecasts, and mindful of my own fallibility. Falling into the narrative once, with Japan, is understandable; fool me twice with Soviet Russia, that’s forgivable; fool me three times with China, and I prove myself a fool.

Against Collapsism

As always, there are those who forecast doom for countries. In the USA, some commentators foresee a race war, or the eventual collapse of welfare pensions (particularly Social Security), or more recently, the collapse of the higher-education college system under its indefinitely-increasing tuition/student-loan burdens, and the revolt of Millennials over all of the above. If these systems are as doomed as dotcom stocks in 1999 or US suburban housing prices in 2008, one should presumably avoid engaging with them (such as by declining to go into debt for a college degree) or counting on them (saving on one’s own for retirement), and one should ‘short’ them and actively bet on trends culminating in collapse.

But what is the most boring outcome? What should be the default hypothesis of any ‘Baby Boomer bubble’? I’d suggest it’s something Adam Smith told an acquaintance when told of, as it happens, English defeat in America at Saratoga: “There is a great deal of ruin in a nation.”26

That debt or taxes will become intolerable is hard to deny, and many of the relevant long-term trends look bad, or are inexorable consequences of demographic trends. But just because something is intolerable doesn’t mean people can’t easily tolerate it. Revolutions are rare; mediocritizing is more mundane. And even if there were a revolution, there appears to be little meaningful way to benefit from or exploit it by anything morally equivalent to ‘shorting’.

It is a phrase that has come to mind repeatedly as I’ve read about California, Japan, Greece, Puerto Rico, the USSR, Zimbabwe, North Korea, or Venezuela—however bad things get, when one thinks “surely this is the breaking point, surely now they cannot possibly continue as they are and endure still worse”, things manage to get worse, and do so for longer than one would hope or fear possible.

It turns out that humans can and will endure almost anything if it is gradual enough, and they come to expect it as normal.27 Even in the Third World, living conditions are vastly above any physical Malthusian limit. Identities have proven to be more tasty to Chavistas than any pabellón criollo or arepas.

A society can endure the ‘intolerable’ simply by adjusting in countless ways large and small. There is not so much a sudden loud collapse, like a balloon popping at a prick, as the wheezing of an untied balloon deflating (with a sound alike in dignity to flatulence as the warm air rushes out).

For literally my entire (reading) life I have been reading people forecasting economic stagnation and demographic doom for Japan (correctly!) and then losing their shirts on Japanese bonds & the yen.

They weren’t wrong about Japan being in bad shape. The once-unprecedented prospect of a major industrialized country seeing its absolute population drop in peacetime doesn’t even make news now. Their debt situation has become much worse. Economic vitality hasn’t returned, and momentum in prestige areas has long since departed for other countries.28 Nothing has been fixed in Japan. But Japan just sort of… keeps going.

If there is a great deal of ruin in those nations, there is more in the USA. Why think that any Boomer bubble popping would be any different? Why can’t tuition keep going up to a breaking point—and then stay there, at a permanently high plateau, extracting as much consumer surplus as possible through careful price discrimination? Or in California, or the Bay Area: why can’t prices stay high forever? (Asset prices are just bubbles that have never popped; when they don’t pop, we stop calling them ‘bubbles’.) And there are areas of fat and waste which could be trimmed in a crisis with no loss. (Would universities truly suffer if they had to cut back to, say, 1990s levels of administrative staff & salaries? Or hospitals if MRIs weren’t administered quite so often?)

Is there anything to ‘short’, in these areas? It’s not clear that one could short such things in any direct way. How would you short Japan’s lost decades? Or Venezuela? Or Zimbabwe? Or Greece? Or North Korea? Or Puerto Rico? Or the USSR? Things like degrees or welfare programs are not like a housing or stock market, where the assets can be dumped onto a free market to create a price spiral driving other debt-supported instruments into further fire sales, creating a positive feedback loop. There’s nowhere you can sell a college degree, or a pension obligation. They are sunk costs. If you think a college degree is worthless, what can you do but not get one? Even if you think they may become worthless at some point (of which I can see little sign in current hiring practices, trajectories of MOOCs, salary premiums, etc.), you can’t short nonprofit universities—they have endowments, among other things.

Even if there are some correlated financial assets like bonds one can directly short (not a given, and not a given that the bonds will work, see Greece), that’s not accessible to most people, requires the most exquisite timing to catch the falling knife due to the difficulty of long-term short positions, and offers limited leverage.

The default hypothesis should be that the adjustments will occur along many margins, ranging from degraded service to more fees to stealth tax increases to seignorage & inflation, to bans of (tacitly or de jure) permitted things, creative reinterpretations of contracts to minimize expenditures (demonstrating the ‘incompleteness of contracting’ as an economist would say), defaults on implicit obligations, failure to make investments both obvious (eg. bridge or cathedral repairs) and subtler adjustments (“the seen and the unseen”) like destruction of social capital from political & identity-based division, lesser investment (especially into human capital & innovation), and a general graying and ‘demosclerosis’ of society as a whole as everything just gradually decays or gets better slower than it should’ve.

A paradigmatic example might be all the small US cities increasingly burdened by police/fireman/teacher/civil-servant pensions granted long ago, whose equally long-forecast financial bombs have begun exploding: the responses are localized defaults (to renegotiate at least some obligations), screwing over new employees to subsidize old ones while getting back onto actuarially sound grounds, desperate resorts to ill-advised financial gimmickry, cuts to services, reducing oversight & internal efficiency (leading to further long-term cuts), halting of population growth as people attrit elsewhere, and endless legal wrangling by the special interests fighting over a shrinking pie. But—none of the youth riot, or cause revolutions even when children are starving to death in the streets daily; in areas like Detroit or California, they can barely bring themselves to vote. (If any Millennial groups voted at anything approaching 100% rates, the world would look very different from how it does now; for starters, Ron Paul would have been elected.)

There may someday be a ‘collapse’ of some sort somewhere, as so long forecast by so many (“Proof of Trotsky’s farsightedness is that none of his predictions have yet come true”…) but these collapses may take the form of the education sector collapsing from 2035-levels of GDP consumption to merely 2020-levels of GDP consumption; this way of being ‘right’ is cold comfort to the skeptic many decades later, or the bankrupt investor, or the student contemplating whether to get a degree and how long the degree will be useful.

The critic doesn’t have the luxury of opting-out. Choosing to take no action and not invest in a system is an action itself. As long as time keeps passing and opportunity costs keep being incurred, and the punch is still flowing at the party, what choice does one have but to get up and dance?

Domain-Squatting Externalities

In developing my custom search engine for finding sources for Wikipedia articles, one of its chief benefits turned out to be nothing other than filtering out mirrors of Wikipedia! Since one is usually working on an existing article, that means there may be hundreds or thousands of copies of the article floating around the Internet, all of which match very well the search term one is using, but which contribute nothing. This is one of the hidden costs of having a FLOSS license: the additional copying imposes an overhead29. This cost is not borne by the copier, who may be making quite a bit of money on their Wikipedia mirror, even after being penalized by Google as they have since been. In other words, cluttering up searches is a negative externality. (One could say the same thing of the many mirrors or variant versions of social news sites like Hacker News. Who are they imposing costs upon unilaterally?)

Domain-squatters are another nuisance; so often I have gone to an old URL and found nothing but a parking domain, with maybe the URL plugged into a Google search underneath a sea of random ads. But, the libertarian objects, clearly these domain-squatters are providing a service since otherwise there would be no advertising revenue and the domain-squatters could not afford to annually renew the domain, much less turn a profit.

But here is another clear case of externalities.

On parking domains, only 1 person out of thousands is going to click on an ad (at best), find something useful to them, and make the ads a paying proposition. But those other thousands are going to be slowed down - the page has to be loaded, they have to look at it, analyze it, and realize that it’s not what they wanted and try something else like a differently spelled domain or a regular search. A simple domain-not-found error would have been faster by a second at least, and less mental effort. The wasted time, the cost to those thousands, is not borne by the domain-squatter, the ad-clicker, or the advertiser. They are externalizing the costs of their existence.

Ordinary Life Improvements

Moved to “My Ordinary Life: Improvements Since the 1990s”.

A Market For Fat: The Transfer Machine

One day sweating at the gym, I suddenly thought, “if only I could pay someone to take this fat off my hands - someone who enjoys this crap!”

Well, what if you could? What if there were a machine which could transfer kilograms of body fat between people? Let’s stipulate that the machine is cheap to operate, perfectly safe, requires the two people to be within a meter or two, and can instantaneously transfer kilograms of body fat between two people in such a way that the presence & absence is as if they hadn’t eaten the equivalent. (The fat can’t simply be dumped into the drain, like a magical liposuction.) What would happen?

Many people are unhappy with their excessive amount of body fat, as body fat is unattractive and unhealthy, so they would eagerly get rid of it. Relatively few people are in need of body fat (fat is an important nutrient, the absence of which leads to ‘rabbit starvation’, and body fat is an important organ and lack of it can cause problems ranging from being more easily injured to infertility in women, but at least in the contemporary USA, lack of body fat is a rare problem outside of bodybuilders & people with eating disorders (neither of which group wants transfers), and is rare even in Third World countries where obesity is increasingly the threat to poor people), so it would seem supply exceeds demand, requiring a market in body fat.

So probably people would wind up paying other people to take body fat off their (sometimes literal) hands. But there are limits to transfer as human skin is only so elastic and takes a while to grow (morbidly obese people who lose scores of kilograms often must undergo skin reduction surgery to deal with the loose flaps which can become infected), so transfers will be limited in size (perhaps 10-20kg is the safe upper limit) and will need to be regular & of small size.

The effect on the fitness industry would likely be catastrophic. Fitness and diets are at present bizarre, poorly-functioning markets because they can only sell inputs; there is no one you can pay to remove 1kg of fat from you, you can only pay for a gym membership or X training sessions or for Y prepared meals or Z diet recipe books, and not only are the average effects of those interventions either unknown or grossly exaggerated, they hide enormous individual variation, rendering rational shopping & comparison futile to a great extent. And despite their inability to deliver, they suck up dozens of billions of dollars in costs a year, and a gym membership can easily cost >$1k/year. In contrast, the ability to transfer fat, reliably and immediately producing verifiable benefits, would immediately attract a large fraction of the clientele, destroying much of the demand for fitness/diet services, and such a collapse would have further consequences, eg. the inability to pay fixed costs like the primary expense, rent.

Gyms are notoriously designed to be under capacity, under the assumption that only a small fraction of members will use them heavily (or at all) and the rest cross-subsidize the users, so a shift to only serious gym-users (to people building muscle, and those busy working off fat transfers!) would further damage the economics of many gyms, as serious gym-users will visit more often, use more equipment more heavily, be more demanding, and likely be skewed towards powerlifting/bodybuilding, which requires much more space per customer than a bunch of treadmills cheek-by-jowl. Fitness is a major costly signal at present, signaling a mix of leisure/wealth/self-discipline/low discount rates over the long term, so the collapse of visible fitness (in the form of low body fat percentage) into a simple (and likely cheap) signal of short-term financial wherewithal (as one can impulsively go pay for a large transfer of fat) would force the middle/upper class (the jogging class) into finding new, more costly signals.

Paradoxically, the fact that fat can be transferred and is a cheap signal would result in universal demand from the middle/upper class, precisely because it would be so easy that there would no longer be plausible excuses for not doing it; maintaining a stable weight is difficult and time-consuming and near-impossible for some people, so one can only be blamed somewhat for failing to do so, but with fat transfer, failure to do so could only indicate severe deviancy/deficiency, somewhat like smoking or having yellow teeth or not having a college degree. So even as fat ceases to be a particularly informative signal, it becomes a mandatory signal, which would induce more demand for transfer. There would also be effects on the supply side: many fitness instructors admit they do it to show off and will sacrifice money for the privilege, so the collapse of the signal would eliminate much of that workforce.

What would replace low body fat as a signal? Signals are fairly arbitrary so it’s hard to be confident, but one natural replacement would be substitution with food. Food consumption is already heavily moralized (just walk into a Whole Foods to see, or consider why nutrition science has such a dismal track record of accurate causal inference) and the increasing signalization (“Instagrammization”?) of gourmet food & restaurants has been noticed by many people; one of the fundamental constraints on consumption of craft beers and “bean to bar” chocolate and steak dinners is, however, fear of gaining weight. Transfer offers a way to lift that constraint while simultaneously serving as a minor costly signal of financial status - someone who consumes organic cruelty-free beef regularly while remaining thin is signaling their sophisticated palate, refined moral sensibilities, and wealth all at once. Or to put it another way, transfers become the mythical vomitorium, enabling wealthy gluttons to purge their food binges. So trends towards elaborate exhibitionist feasts could be accelerated (incidentally increasing the supply of excess fat).

Who would be buyers and sellers?

Rich people tend to be thinner than poor people, but are still far from optimal weights, so the existing SES gradient of fat/health/wealth would probably increase: poor people would become richer but fatter; poor people also have high discount rates and are more impulsive, so while they may be happy to make the deal, the long-term consequences will be bad. This would be particularly acute for drug addicts, who would be able to score in exchange for ‘just’ taking another kilogram of fat and, repeatedly doing so, might wind up with hundreds of excess kilograms. (The money itself will only minimally help health, given existing results on the causal effect of monetary shocks & free health insurance.) On the positive side, being a fat recipient is the epitome of unskilled labor and provides an additional option of a de facto Basic Income without the draining effect of blood donation.

And since most everyone will have some spare capacity for transfers (perhaps after a delay for skin growth to catch up), there’s no reason that fat could not be securitized: personal loans could be collateralized by body weight (eg. someone 10kg underweight could pledge 30kg), agreements to fat transfers could be purchased and bundled together for large-scale fat transfers or to implement futures (ie. pay people for the right for a 1kg transfer, and hold onto it until potentially prices rise in the future and then sell the right to other people at the new price). Fat might itself be ‘borrowed’ in the sense of people renting fat reductions for short terms - visits to a beach, important meetings, weddings, and special occasions in general. One might not be able to afford a permanent transfer to reach one’s ideal weight, but one might be able to afford temporary transfers where the fat is eventually returned (with a little extra as interest), which could be paid for monetarily or in kind by oneself temporarily holding additional fat later, which might be more easily described in units like “kg-days” - eg. for work one might borrow -5kg for the work week, incurring a debt of 25 kg-days (transferring the 5kg to someone who wants to be thin on the weekends and doesn’t mind being fat during the week) and then pay it back by carrying 12.5kg on the weekends for 2 days (repaying the other person by letting them be particularly svelte on the weekend for partying or whatever). An active futures market could incentivize serious research into weight loss, as anyone who cracks the code of weight loss and can burn fat more efficiently can then profitably exploit the knowledge by taking fat deliveries and then disposing of the fat at sub-market rates; organizations engaged in such “arse arbitrage” would be called, of course, “pudge funds”. Prices would fluctuate with the weather & season, reflecting available food and exercise opportunities, and would doubtless be affected by many other variables too; an efficient fat market would have to increase prices slightly whenever Netflix releases a new TV series for binging, all of which subtle adjustments would require “high caloric trading” specialists to provide liquidity. Health insurers would crunch the numbers and, moral hazard issues aside, might often conclude that paying for transfers is cheaper than incurring long-term expenses from diabetes or bariatric surgery. Life insurers, on the other hand, would be less happy, as transfers could result in dramatic decreases in the life expectancy of an insured person right after they subscribe to a policy, and might define transfers as grounds for canceling life insurance policies.
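
A toy sketch of the “kg-day” bookkeeping described above (the 5kg work-week example is the essay’s; the helper function is mine, just to make the unit explicit):

```python
def kg_days(kg: float, days: float) -> float:
    """A fat-carrying obligation measured in kilogram-days."""
    return kg * days

debt   = kg_days(5, 5)       # offload 5 kg for a 5-day work week -> 25 kg-day debt
repaid = kg_days(12.5, 2)    # carry an extra 12.5 kg over the 2-day weekend
assert debt == repaid == 25  # the loan balances exactly, ignoring any 'interest'
```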

Young people are cash-poor and time-rich, and have high metabolisms, while old people reverse all that (and many an old person has ruefully recalled how they could eat anything when they were young without gaining a gram), so there’s a natural set of exchanges there, although it will be limited by the fact that young people will not want to take on too much fat because of the impact on personal attractiveness (most valuable when young), and because there are far more old people than young people (young people are really advantaged only from adolescence to their late 30s, but then they might live another 40 years getting steadily fatter, for a net gain in fat-years).

There are also large individual differences. Diet and exercise studies routinely show large differences in individual responses to the interventions despite tiny long-term average effects, with a small fraction of participants often gaining weight or worsening; forced-feeding and starvation experiments also demonstrate rebound and homeostatic effects. All of this has large genetic contributions. For this and other reasons, “calories in, calories out” is either false or tautological. And large people presumably burn more calories through their basal metabolic rate and during exercise than small people do, giving them an advantage. Hence, some people will have large comparative advantages in fat loss because they are large or simply find exercise or dieting relatively easy. This would form a nice side job for them, or, if the price of fat rises high enough, potentially a full-time job.

Indeed, in the extreme case, someone - a “fatpreneur” - might be able to stop eating food entirely (perhaps still consuming vitamin pills and a few nutrient-dense foods for essential nutrients) and burn donor fat constantly as their job; 1kg of body fat is ~7,000-10,000 kcal, with daily caloric needs at ~2,000 kcal (minus the caloric content of the essential nutrients), so a fatpreneur could live without eating, burning 1-2kg of donor fat a week and disposing of 52-105kg of fat annually in a steady state. A fatpreneur likely could accumulate a steady client list and make weekly rounds collecting ~2kg of fat. (Perhaps we could call this the “gag economy”.) Fatpreneurs could burn even more calories and thus fat if they exercised, but exercise burns relatively small amounts (often <500 kcal per workout, which if done every other day is <1,750 kcal a week or 0.25kg fat/week, increasing annual totals to ~65-118kg) while being exhausting, time-consuming, and unpleasant - which is, after all, a large part of the problem and why there is so much fat, because “you can’t outrun your fork” - so it’s unclear if exercise would be a good use of their time and worth the extra fat disposal capability. The fat disposal (and savings on regular food consumption) may be enough on its own, and they may be better off pursuing normal employment. Given only 105kg/year, to earn a modest passive income of $30k would require fat prices of >$285/kg (and exercise would add a marginal ~$3,700).
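
A back-of-the-envelope check on the fatpreneur figures above (the kcal-per-kg and daily-need numbers are the essay’s assumptions; everything else is arithmetic):

```python
KCAL_PER_KG = (7000, 10000)   # assumed energy content of 1 kg of body fat
DAILY_NEED  = 2000            # assumed kcal/day to live on

for kcal in KCAL_PER_KG:
    kg_per_year = DAILY_NEED * 365 / kcal
    print(f"at {kcal} kcal/kg: ~{kg_per_year:.0f} kg of donor fat burned per year")
# => roughly 73-104 kg/year without exercise; at ~105 kg/year,
#    a $30k income requires a fat price of >$285/kg ($30,000 / 105).
```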

Fatpreneurs will be rare, precisely because so many people find mere normal weight maintenance impossible in the modern environment, and we shouldn’t forget that “food is not about calories”: at all levels of American society, food is a luxury and a religion, not mere sustenance. This can be seen in the levels of expenditure: Americans often eat out; among the poor or homeless, luxuries like alcohol, tobacco, or meat make up a large fraction of expenditures; this is true worldwide, even among the most impoverished Third World populations earning <$2/day, for whom alcohol or other drugs like khat are major expenditures; and Americans in general spend several hundred dollars a month on food despite a nutritious diet being doable for ~$2/day in 2010 dollars (“Stigler’s diet problem” is a classic optimization challenge, with one of the more recent attempts being Garille & Gass 2001, finding possible diets costing annually $412/$354/$651/$535 in 2001 dollars—vastly smaller than actual American per capita food expenditures of $4,576 in 2014). Eating food and meals is a beloved luxury people will pay thousands of dollars for, and even people using meal-replacement methods like Soylent find themselves missing food psychologically; so it is highly unrealistic to imagine that more than a small percentage of people would become fatpreneurs unless fat prices were at least $40/kg, and probably much higher than that would be necessary. Aside from people unusually insensitive to lack of food consumption, two further possibilities suggest themselves. One is to integrate transfers with medical treatment: the ill, such as cancer patients enduring chemotherapy, are often underweight30 and could in fact benefit from regular transfers of fat (although they would probably benefit much more from transfers of muscle instead), which would provide several hundred thousand people per year in need of several kilograms. The other: to avoid the scenario being trivial, I defined transfers as only between living people, but what about terminal or extremely sick people? One could imagine them taking on large transfers to cover medical bills or try to leave an estate; if one is dying, some additional kilograms are barely harmful & may be helpful. The problem with this is that dying can be unexpected and abrupt, and people are generally not interested in monetizing their corpses, despite a human body being worth tens of thousands of dollars as parts & creating a lucrative industry for human tissue.
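
For reference, Stigler’s diet problem is just a linear program; a toy version with entirely made-up foods, prices, and requirements (not Garille & Gass’s actual data) looks like this:

```python
import numpy as np
from scipy.optimize import linprog

cost      = np.array([0.20, 0.35, 0.50])        # $ per unit: rice, beans, milk (made up)
nutrients = np.array([[1300,  600,  300],       # kcal per unit of each food
                      [  25,   40,   16]])      # protein (g) per unit of each food
needs     = np.array([2000, 60])                # daily minimums

# linprog minimizes cost @ x subject to A_ub @ x <= b_ub, so negate the 'at least' constraints
res = linprog(cost, A_ub=-nutrients, b_ub=-needs, bounds=[(0, None)] * 3)
print(res.x, f"-> daily cost ~${res.fun:.2f}")
```

The real versions differ only in having many more foods and nutrient constraints; the structure is the same.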

The fatpreneur’s problem immediately suggests another consequence: an increase in the demand for stimulants and appetite suppressants like modafinil, nicotine, amphetamines, or ephedrine. (This doesn’t include the more extreme options which people might be incentivized to use, such as 2,4-Dinitrophenol or clenbuterol.) They’re already popular for the ability to increase motivation for work or study, but with a market for fat, their use can now outright pay for itself. A year’s supply of ephedrine or modafinil might run ~$200 in bulk, so only a small total fat loss is necessary to make their use profitable solely via weight loss. Generalizing wildly from modafinil & ephedrine, standard stimulants might cause 0.25kg/month of weight loss or 3kg/year, so any fat price >$66/kg would make a number of stimulants profitable. It’s also often noted that quitting smoking leads to weight gains on the order of 2-5kg, as nicotine is no longer suppressing the appetite or smoking substituting for snacking, so a negative consequence is that a fat market would further discourage smokers from quitting, as the side-effects of quitting become more financially salient.

What about international trade with Third World countries, exporting the body fat? International trade doesn’t seem as promising as it might because simple scarcity of calories long ago stopped being the fundamental cause of hunger and malnutrition. Food in bulk, like rice, is incredibly cheap. Where starvation and mass famine exists, it is solely a political phenomenon: the “Maduro diet” stops at the borders of Venezuela, and somehow the clouds causing torrential floods in North Korea never manage to pass the DMZ. Poor countries struggle with bad politics/laws/governments, micronutrients like iodine, logistics barriers to distributing existing food, and issues like that - not lack of calories per se. For these problems, sending hundreds of thousands of impoverished villagers to the USA or bringing fat Americans to them are either illegal, ineffective, or infeasible. It makes no sense to spend hundreds or thousands of dollars on bribes & passports, then the plane ticket, to fly someone halfway across the world to load up on fat equivalent to a few hundred dollars of food at most (and the equivalent of low-quality food at that, without any micronutrients), and fly them back. It would make more sense to do as an adjunct to tourism, or as a kind of “medical tourism” in its own right, to let tourists get rid of the kilograms at a discount (and let them gorge on the cruise ship buffets without guilt). But like medical tourism presently, it would probably be a minor phenomenon.

What would be a reasonable market-clearing price? At the moment, there is a huge oversupply of American fat. As noted, the other end is limited - even an exercising fatpreneur can optimistically only burn ~105kg of fat per year. A series of Gallup polls ending in 2017 asks American adults (>18yo) about total & ‘ideal’ weight, finding an average difference of 7.25kg; this is based on self-reports, so the reported total weights are likely convenient underestimates of their true weight, their ‘ideal’ weight is probably still far from medically optimal, and Gallup notes that the ‘ideal’ weight is gradually increasing over time, indicating what one might call “normalization of deviance” (ie. Americans are so fat they are ignorant of or in denial about how fat they are). Once population-scale changes had occurred, expectations would probably reset and many more kilograms would become excess. Anyway, this gives a ballpark: with ~234 million Americans over 18 (as of 2010, anyway, close enough), that implies there’s >1.7 billion kilograms (well over 10 trillion kcal) of unwanted fat. To clear the backlog in a single year would require >16m fatpreneurs; spread over a decade, >1.6m. 1.6m is an enormous number but perhaps not unreasonable - if there are 234m adult Americans, then only ~1% need to be freaks willing to give up food and live off burning fat, and it’s never a good idea to bet against “>1% of the population being X” even where X is something very weird (eg. that appears to be roughly the percentage of Americans convinced the Earth is flat).

Once the backlog is cleared, what might the steady state look like? (As a starting point, we can assume people won’t change their habits.) Fat accumulates over a lifetime, as the obesity crisis is considered to start in childhood with obese children, so that 7.25kg difference appears to accumulate slowly over several decades; if it takes 20 or 30 years, that implies an annual permanent gain of ~0.25kg, so over those ~234m Americans, there’s on the order of 58.5m kg to dispose of per year, equivalent to ~560k fatpreneurs. So both the stock and flow of fat are enormous.
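
Checking the stock-and-flow arithmetic above (the 7.25kg gap, 234m adults, ~0.25kg/year gain, and 105kg/year burn ceiling are the inputs already given):

```python
ADULTS, GAP_KG, BURN_MAX = 234e6, 7.25, 105   # people, kg/person, kg/year per fatpreneur

stock_kg = ADULTS * GAP_KG                    # ~1.7 billion kg backlog
flow_kg  = ADULTS * 0.25                      # ~58.5 million kg of new excess per year
print(f"backlog: {stock_kg/1e9:.1f}e9 kg; to clear in 1 year: {stock_kg/BURN_MAX/1e6:.0f}m fatpreneurs"
      f" (or {stock_kg/(10*BURN_MAX)/1e6:.1f}m over a decade)")
print(f"steady-state flow: {flow_kg/1e6:.1f}m kg/year ~= {flow_kg/BURN_MAX/1e3:.0f}k fatpreneurs")
```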

For perspective, a gym membership typically costs >$700/year, and members are typically still not at their ideal weight; reportedly, around 57m Americans in 2016 were members of a health club, which is difficult to reconcile with that 7.25kg gap and the overall USA obesity crisis if gyms were all that effective. Even if we imagine that the entire gap is erased by the gym membership, as an upper bound on the value of the membership, that implies a willingness-to-pay on the order of $100/kg and possibly much more. Then there is the large time consumption (at 3 gym visits a week, 1.5 hours per visit to allow for travel/shower/overhead, and minimum wage, the time alone exceeds the membership costs), and then there is the sheer unpleasantness of exercise. For example, at $550/kg, the cost of erasing the fat gap would roughly equal the cost of LASIK, a highly popular elective surgery which merely removes glasses - how much more would people pay to be thin…? An even more striking survey claims that, going beyond gym memberships, general fitness spending has reached $1,860/year for the average American adult (again, without success), giving >$256/kg.
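
The implied willingness-to-pay per kilogram, from the spending figures just cited (both upper bounds, since that spending mostly fails to close the gap at all):

```python
GAP_KG = 7.25
for label, annual_spend in [("gym membership", 700), ("total fitness spending", 1860)]:
    print(f"{label}: ~${annual_spend / GAP_KG:.0f}/kg")
# => ~$97/kg and ~$257/kg respectively.
```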

So the fat market could, if necessary, easily support prices of several hundred dollars per kilogram, and make stimulants & fatpreneurs profitable. Given the extremely large stock & flow of excess fat, the rarity of people gifted in weight maintenance, the high utility placed on eating, the induced demand from signaling/arms races, increases in fat flow from being able to trade money for risk-free food consumption, and revealed preferences from current fitness expenditures versus efficacy, I expect that market-clearing prices would wind up being quite high - definitely >$100/kg, and >$1,000/kg doesn’t strike me as an absurd price. At $500/kg, an average individual would need to spend ~$3.6k (so somewhat less than LASIK) to eliminate their gap, the whole stock would require total payments of ~$850b to clear, and the recurring annual flow would cost ~$29b. All doable.
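
And the totals at that assumed $500/kg price point, using the stock and flow figures from above:

```python
PRICE, GAP_KG, STOCK_KG, FLOW_KG = 500, 7.25, 1.7e9, 58.5e6
print(f"per person: ${GAP_KG * PRICE:,.0f}")                      # ~$3,600, somewhat less than LASIK
print(f"clearing the stock: ${STOCK_KG * PRICE / 1e9:,.0f}b")     # ~$850b, one-time
print(f"recurring flow: ${FLOW_KG * PRICE / 1e9:,.0f}b/year")     # ~$29b/year
```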

More generally, what happens once fat has a clearcut price? The effects on food are ambiguous. On the one hand, by providing an option for avoiding the most negative consequence of food consumption, demand for food would increase in general (and particularly for the rich); but, on the other hand, people love bundling & illusions of freeness (“$0 is different”) and fixate on clear prices (eg. the obsession with gas prices while ignoring mileage). So though the option should make food in effect cheaper, even if the net price is reduced, food will feel much more expensive when anyone can calculate that at $500/kg or $5/100 kcal, a Big Mac’s 563 kcal for $3.99 means it costs approximately 7 times as much in fat reduction as it does to buy. (Of course, this was always true - everything one eats above maintenance costs a ton of effort in dieting/exercise to get rid of, hence those not-so-amusing comparisons about how “it takes 10 marathons to burn the calories in a single bag of M&Ms” - but there was never any easy way to connect a given caloric value to a total cost. It’s much simpler to just remember “each 100 kcal costs $5”.) So perhaps the signaling demands and the price illusion effect will largely wind up canceling each other out.
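
Spelling out the Big Mac arithmetic (taking the $500/kg price and the 10,000 kcal/kg figure implied by the text’s $5/100 kcal):

```python
PRICE_PER_KG, KCAL_PER_KG = 500, 10_000
per_100_kcal = PRICE_PER_KG / KCAL_PER_KG * 100        # $5.00 per 100 kcal

big_mac_kcal, big_mac_price = 563, 3.99
undo_cost = big_mac_kcal / 100 * per_100_kcal          # ~$28 to dispose of one Big Mac
print(f"${per_100_kcal:.2f}/100 kcal; undoing a Big Mac costs "
      f"{undo_cost / big_mac_price:.1f}x its sticker price")
# => $5.00/100 kcal; ~7.1x the purchase price.
```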

Finally, what are the effects on health? That depends on what the net effect is. If fatpreneurs and other fat burning outlets are insufficient to produce net decreases in excess fat due to equivalents of risk homeostasis, then societal utility will increase but it won’t be accompanied by health benefits. If they are, the life expectancy benefits could be considerable: moderate obesity correlates with life expectancy reductions on the order of a year or two, which is consistent with one meta-analysis of RCTs of weight loss - eg. Kritchevsky et al 2015 finds a RR of 0.85 from a mean weight loss of 5.5kg after 2 years, which RR, if permanent & maintained, would translate to, given American mortality curves, +1.4 years life expectancy. Assuming half the adult US population needs weight loss and gets 1.4 years from transferring their full gap, that could translate to 160 million man-years.

Overall, thinking about it, I get the impression that a market for fat would not change society all that much, but exaggerate existing trends and correlations, and the benefits would probably far exceed the costs.

Urban Area Cost-Of-Living As Big Tech Moats & Employee Golden Handcuffs

Some speculations on Bay Area real estate & SV’s future:

Possibly the single biggest pain point for tech workers in the Bay Area or NYC is the ever-escalating cost of living, driven by local taxes and rent, with the cherry on top of incompetent & malicious local governance. (I was told by one real estate/tech company headquartered in SF that that was in fact the one major city they didn’t & couldn’t operate in!) Certainly it is their biggest quality of life issue whenever I talk to them, whether online or in person. (SF’s tragedy is that it could have had Houston’s housing, Vancouver’s cosmopolitanism, Tokyo’s mass transit, LA’s media, Shanghai’s economic dynamism, & Amsterdam’s air quality/environment; instead, it got Houston’s cosmopolitanism, Vancouver’s housing, Tokyo’s economic dynamism, LA’s mass transit, Shanghai’s air quality, & Amsterdam’s media.)

As absurdly high as their salaries may become, the cost of living drains it all away, and many wind up better off after moving away, because the steep salary cuts are more than offset by even steeper drops in the cost of living. Considering that labor is one of the single largest costs for big tech companies, the cumulative dysfunctionality here over the past decades must amount to hundreds of billions of dollars essentially flushed down the drain. Why don’t they fix it, or, if political realities render that impossible, leave? The usual answer is agglomeration efficiencies (‘all the good programmers are already in X, we can’t leave!’), but they could coordinate a move, or simply follow a leader who makes a big commitment to a better location. Steve Jobs and other tech CEOs were able to coordinate an extremely illegal wage-fixing cartel when they wanted to cut costs, after all, so a legal HQ or office move (or simply freezing net labor growth in the Bay Area & grandfathering in those offices) shouldn’t be that hard, and the advantages only increase with time.

Even if they can’t take steps as drastic as that, it seems like they could do more than - whatever it is they have done about it thus far. Which seems to be, I’m not sure what, exactly. The main response seems to be buying up big office buildings or real estate for new HQs now before the prices can go up even further, which may be a practical response but is treating only a symptom.

One suspicion has crept up on me: if they don’t act like they care, maybe… they don’t care. Why would they not care? Even if employee happiness/burnout/turnover is foolishly ignored by them, the sheer expense of salaries ought to make them care. Unless… the salaries have a silver lining which helps offset their cost.

What if the salaries are golden handcuffs? If salaries must keep pace with Bay Area CoL, this has a few implications:

  1. collective employee savings will remain minimal, as whatever excess there is can be absorbed by the landlord oligopoly simply raising rent some more

  2. not working is an unattractive option, as the cost of simply existing is so enormous

  3. job hunters will prioritize jobs which pay in cash, as cash is needed to pay rent, and other assets are not accepted, whether they are shiny gold rocks or stock options

These points mean that employees will have a hard time saving up large amounts of capital to serve as cushions, retirement savings, or seed investments for a startup of their own; they will be risk-averse to being fired or switching jobs, as those incur loss of working time/salary and risk extended periods of unemployment, and, perhaps most importantly, startups - which are cash-poor equity-rich - will struggle even more to be founded or then hire employees. After all, how can a startup compete with a FANG or AmaGooBookSoft or whatever big tech company offering salaries like $200k+ & perks to the best software engineers? Sure, that startup might be able to offer them handsome stock options with an expected value (in the very distant future after the increasingly-hypothetical IPO) of say $150k, but this equity is effectively worthless to an engineer who needs to make rent now. A FANG, on the other hand, can pay cash on the barrelhead and throw in some options as a bonus, for that old-time SV flavor.

Aside from cash salaries needed for rent and other expenses, there is student debt, particularly for younger employees or employees with graduate degrees. Perhaps the only thing in the USA outpacing real estate costs is college tuition/student debt. Well, two things - student debt and health care. People are if anything even more terrified of losing a good health insurance plan than they are of being evicted/potentially homeless or their kids not going to college. Here too a FANG or other giant has the scale and brawn and cash and tax advantages to offer a gold-plated health insurance plan - of the sort which cannot be bought on the market and if it could one could not afford it - which cannot be matched by any startup.

Imagine a FANG software engineer who, after years of the Big Tech grind, is contemplating quitting to join a startup. The startup seems like it could be big, very big, and he’s excited. His wife acknowledges this and encourages him to follow his dreams and make a mark - but exactly how much can it pay, in cash, not equity? Their house, which is necessary for them and their two kids and dog (he admits to some lifestyle creep but is unwilling to go back to his grad school days of scarfing free pizza to make ends meet), costs $4k/month and the landlord will be raising the rent soon; he hasn’t paid off his $200k in student loans from CMU; and their savings won’t last much more than a year considering all their other expenses. Incidentally, does the startup offer a comparable health insurance plan? What would she and the kids do if he were hit by a bus or developed terminal cancer? We would not be surprised if the engineer puts off the decision to quit, resolving to do it next year, perhaps, working on it as a side project, after saving up some money, and maybe asking for a raise… (Not that he winds up having time or energy to work on it - given his daily commute to work over a crippled transportation infrastructure, necessary for a house which costs ‘only’ that much.)

There are further pernicious effects on startups. They will need to raise more cash, more often, reducing returns for founders and increasing overhead (if for no other reason than the time it takes to do VC funding rounds); the high fixed costs make raising funds more a matter of life and death, and possibly promote herding/groupthink/conservatism among investors (do the investors in a particularly risky/novel startup want to take the risk of investing in a startup which might not be able to raise money in the next round despite success?); startups will be less able to prototype and experiment before needing to raise VC money, perhaps unable at all; and the escalating increase in costs offsets other positive trends such as decreasing computing costs from cloud computing etc.

Since this is bad for startups, it is good for FANG. The threat from disruption is lessened because their employees are increasingly indentured servants, or perhaps we should say, serfs. They are tied to the land (by rent), and resort to protection (from medical bills) from the feudal lords to whom they pledge allegiance. They have difficulty leaving, making for less turnover and longer tenure (while turnover is much higher than in most industries, it doesn’t seem to be nearly as high as it used to be - contrast multi-year or multi-decade careers at individual tech companies with how it was during the Internet Bubble), and more importantly, have difficulty leaving for or founding disruptive new startups which are serious threats to the incumbents, who at least are known quantities to each other & have reached a certain modus vivendi. In this scenario, the extremely high cost of living is a ‘moat’ equivalent to regulation or barriers to entry, which ward off rivals: they must be able to pay escalating inflated salaries just to enable employees to keep living as accustomed. (And to the extent that all the key employees have relocated to the Bay Area, this moat is more, not less, effective.)

The Seen and the Unseen

Some people have observed that the Bay Area and Silicon Valley just don’t seem as innovative as they used to, that for all the venture capital money sloshing around and Y Combinators and SoftBank money, FANG just keeps getting bigger and bigger, with less turnover & more buying out competitors, and small startups (as opposed to ‘startups’ like Uber or Stripe) seem to struggle more and more. This damages global economic growth, technology, and the future: less real competition or investment or creative destruction. And implemented this way, it also damages economic mobility, particularly harming the poor: cities are increasingly where opportunity is - and the doors to opportunity are being locked.

I wonder if the real estate/health insurance pathologies are part of the answer. At some level, the incumbents realize it hasn’t been that much of a problem for them - everything is still working, the Bay Area offices are still full, they’re still able to recruit - however much of a problem it may be for their employees. They realize that the status quo is much more survivable for them than for their present & future competitors, and that fixing it, aside from being extremely difficult, would not necessarily make them much better off, because it would benefit everyone else (and especially their competitors) more.

If so, since there are no signs of the bad trends stopping, much less reversing, nor that big tech companies have had a ‘come to Jesus’ moment, the anemia will continue and likely worsen. While the healthcare problem is endemic in the USA and cannot be solved by leaving the Bay Area, we would expect to gradually see a squeeze out to other areas: whatever the benefits of agglomeration & network effects, there must be some breaking point at which startups cannot form or operate in the Bay Area; this could potentially create a rival SV in whatever other area begins to tap into the latent potential of disrupting the now-increasingly-sclerotic tech giants, which will fund further development & VC & startups in that rival location, potentially kickstarting the same virtuous circles & network effects & agglomeration gains which SV originally enjoyed. (Examples of triggers would be the Traitorous Eight or the Paypal Mafia.)

This process could take decades, but we may already be seeing some signs in the growth in places considered alternatives to SF, like Austin, Detroit, Seattle, Atlanta, Salt Lake City, or peripheries of NYC/DC. Which one might be the winner is unclear, and likely undetermined at present. It’ll be interesting to see if there is any clear winner, or if there will just be a continued extrapolation of the current situation, where CoLs increase to the breaking point, an equilibrium of low innovation/few startups sets in while the tech incumbents squat on the economy earning their rents, and a mist of startups and satellite offices and ‘HQ2’ (or should that be ‘HQ2/3’ now?) spreads out globally without condensing anywhere.

Psychology

Decluttering

Ego depletion:

Ego depletion refers to the idea that self-control and other mental processes that require focused conscious effort rely on energy that can be used up. When that energy is low (rather than high), mental activity that requires self-control is impaired. In other words, using one’s self-control impairs the ability to control one’s self later on. In this sense, the idea of (limited) willpower is correct.

Wonder whether this has any connection with minimalism? Clutter might damage executive functions; Killingsworth & Gilbert2010 correlated distraction with later unhappiness, and from “Henry Morton Stanley’s Unbreakable Will”, Roy F. Baumeister and John Tierney:

You might think the energy spent shaving in the jungle would be better devoted to looking for food. But Stanley’s belief in the link between external order and inner self-discipline has been confirmed recently in studies. In one experiment, a group of participants answered questions sitting in a nice neat laboratory, while others sat in the kind of place that inspires parents to shout, “Clean up your room!” The people in the messy room scored lower self-control, such as being unwilling to wait a week for a larger sum of money as opposed to taking a smaller sum right away. When offered snacks and drinks, people in the neat lab room more often chose apples and milk instead of the candy and sugary colas preferred by their peers in the pigsty.

In a similar experiment online, some participants answered questions on a clean, well-designed website. Others were asked the same questions on a sloppy website with spelling errors and other problems. On the messy site, people were more likely to say that they would gamble rather than take a sure thing, curse and swear, and take an immediate but small reward rather than a larger but delayed reward. The orderly websites, like the neat lab rooms, provided subtle cues guiding people toward self-disciplined decisions and actions helping others.

Paul Graham, “Stuff”:

For example, in my house in Cambridge, which was built in 1876, the bedrooms don’t have closets. In those days people’s stuff fit in a chest of drawers. Even as recently as a few decades ago there was a lot less stuff. When I look back at photos from the 1970s, I’m surprised how empty houses look. As a kid I had what I thought was a huge fleet of toy cars, but they’d be dwarfed by the number of toys my nephews have. All together my Matchboxes and Corgis took up about a third of the surface of my bed. In my nephews’ rooms the bed is the only clear space. Stuff has gotten a lot cheaper, but our attitudes toward it haven’t changed correspondingly. We overvalue stuff.

…And unless you’re extremely organized, a house full of stuff can be very depressing. A cluttered room saps one’s spirits. One reason, obviously, is that there’s less room for people in a room full of stuff. But there’s more going on than that. I think humans constantly scan their environment to build a mental model of what’s around them. And the harder a scene is to parse, the less energy you have left for conscious thoughts. A cluttered room is literally exhausting. (This could explain why clutter doesn’t seem to bother kids as much as adults. Kids are less perceptive. They build a coarser model of their surroundings, and this consumes less energy.)…Another way to resist acquiring stuff is to think of the overall cost of owning it. The purchase price is just the beginning. You’re going to have to think about that thing for years-perhaps for the rest of your life. Every thing you own takes energy away from you. Some give more than they take. Those are the only things worth having.

Michael Lewis, “Obama’s Way”:

This time he covered a lot more ground and was willing to talk about the mundane details of presidential existence. “You have to exercise,” he said, for instance. “Or at some point you’ll just break down.” You also need to remove from your life the day-to-day problems that absorb most people for meaningful parts of their day. “You’ll see I wear only gray or blue suits,” he said. “I’m trying to pare down decisions. I don’t want to make decisions about what I’m eating or wearing. Because I have too many other decisions to make.” He mentioned research that shows the simple act of making decisions degrades one’s ability to make further decisions. It’s why shopping is so exhausting. “You need to focus your decision-making energy. You need to routinize yourself. You can’t be going through the day distracted by trivia.”

It’s striking how cluttered a big city is when you visit one from a rural area; it’s also striking how mental disease seems to correlate with cities, and how mental performance improves with natural vistas but not urban vistas.

See also latent inhibition:

Latent inhibition is a process by which exposure to a stimulus of little or no consequence prevents conditioned associations with that stimulus being formed. The ability to disregard or even inhibit formation of memory, by preventing associative learning of observed stimuli, is an automatic response and is thought to prevent information overload. Latent inhibition is observed in many species, and is believed to be an integral part of the observation/learning process, to allow the self to interact successfully in a social environment.

Most people are able to shut out the constant stream of incoming stimuli, but those with low latent inhibition cannot. It is hypothesized that a low level of latent inhibition can cause either psychosis, a high level of creativity[1] or both, which is usually dependent on the subject’s intelligence.[2][3] Those of above average intelligence are thought to be capable of processing this stream effectively, an ability that greatly aids their creativity and ability to recall trivial events in incredible detail and which categorizes them as almost creative geniuses. Those with less than average intelligence, on the other hand, are less able to cope, and so as a result are more likely to suffer from mental illness.

Interesting decluttering approach: “100 Things Challenge”

Optimizing the Alphabet

Here’s an interesting idea: the glyphs of the Phoenician-style alphabet are not optimized in any sense. They are bad in several ways, and modern glyphs are little better. For example, ‘v’ and ‘w’, or ‘m’ and ‘n’. People confuse them all the time, both in reading and in writing.

So that’s one criterion: glyphs should be as distinct from all the rest as possible.

What’s a related criterion? ‘m’ and ‘w’ are another pair which seem suboptimal, yet they are as dissimilar as, say, ‘a’ and ‘b’, under many reasonable metrics. ‘m’ and ‘w’ are related via symmetry. Even though they share relatively few pixels, they are still identical under rotation, and we can see that. We could confuse them if we were reading upside down, or at an angle, or just confuse them period.

So that’s our next criterion: the distinctness must also hold when the glyph is rotated by any degree and then compared to the rest.

OK, so we now have a set of unique and dissimilar glyphs that are unambiguous about their orientation. What else? Well, we might want them to be easy to write as well as read. How do we define ‘easy to write’? We could have a complicated physiological model about what strokes can easily follow what movements and so on, but we will cop out and say: a glyph is easy to write if it is made of as few straight lines and curves as possible. Rather than unwritable pixels in a grid, our primitives will be little geometric strokes.

The fewer the primitives and the closer to integers or common fractions the positioning of said primitives, the simpler and the better.

We throw all these rules in, add a random starting population or better yet a population modeled after the existing alphabet, and begin our genetic algorithm. What 26 glyphs will we get?
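As a concrete (and purely illustrative) sketch of what such a genetic algorithm might look like, the toy Python below treats each glyph as a few straight strokes with endpoints snapped to simple fractions, and scores an alphabet by pairwise dissimilarity under rotation minus a penalty for stroke count; every parameter here (population size, rotation set, penalty weight) is an arbitrary placeholder rather than a tuned choice:

    # Toy genetic algorithm for glyph design: maximize pairwise dissimilarity
    # (even under rotation) while penalizing the number of strokes per glyph.
    import math, random

    N_GLYPHS = 26
    MAX_STROKES = 4
    GRID = [i / 4 for i in range(5)]   # endpoints snap to simple fractions

    def random_stroke():
        return (random.choice(GRID), random.choice(GRID),
                random.choice(GRID), random.choice(GRID))

    def random_glyph():
        return [random_stroke() for _ in range(random.randint(1, MAX_STROKES))]

    def rotate(stroke, angle):
        # rotate a stroke's endpoints about the glyph center (0.5, 0.5)
        out = []
        for x, y in ((stroke[0], stroke[1]), (stroke[2], stroke[3])):
            dx, dy = x - 0.5, y - 0.5
            out += [0.5 + dx * math.cos(angle) - dy * math.sin(angle),
                    0.5 + dx * math.sin(angle) + dy * math.cos(angle)]
        return tuple(out)

    def stroke_distance(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

    def glyph_distance(g1, g2):
        # crude dissimilarity: each stroke of g1 matched to its nearest stroke of g2
        return sum(min(stroke_distance(s, t) for t in g2) for s in g1) / len(g1)

    def fitness(alphabet):
        score = 0.0
        for i, g1 in enumerate(alphabet):
            for g2 in alphabet[i + 1:]:
                # distinctness must survive rotation: count only the worst-case rotation
                score += min(glyph_distance(g1, [rotate(s, a) for s in g2])
                             for a in (0, math.pi / 2, math.pi, 3 * math.pi / 2))
            score -= 0.1 * len(g1)   # simplicity: penalize extra strokes
        return score

    def mutate(alphabet):
        new = [list(g) for g in alphabet]
        g = random.choice(new)
        g[random.randrange(len(g))] = random_stroke()
        return new

    # toy-sized run (pure Python is slow); a serious run would be far larger
    population = [[random_glyph() for _ in range(N_GLYPHS)] for _ in range(8)]
    for _ in range(20):
        population.sort(key=fitness, reverse=True)
        population = population[:4] + [mutate(random.choice(population[:4])) for _ in range(4)]
    print("best fitness:", round(fitness(population[0]), 3))

A serious attempt would obviously need curved primitives, a perceptual distance metric, and a writability model, but the skeleton is the same.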

Problem: our current glyphs may be optimal in a deep sense:

Dehaene describes some fascinating and convincing evidence for the first kind of innateness. In one of the most interesting chapters, he argues that the shapes we use to make written letters mirror the shapes that primates use to recognize objects. After all, I could use any arbitrary squiggle to encode the sound at the start of “Tree” instead of a T. But actually the shapes of written symbols are strikingly similar across many languages.

It turns out that T shapes are important to monkeys, too. When a monkey sees a T shape in the world, it is very likely to indicate the edge of an object - something the monkey can grab and maybe even eat. A particular area of its brain pays special attention to those important shapes. Human brains use the same area to process letters. Dehaene makes a compelling case that these brain areas have been “recycled” for reading. “We did not invent most of our letter shapes,” he writes. “They lay dormant in our brains for millions of years, and were merely rediscovered when our species invented writing and the alphabet.” https://www.nytimes.com/2010/01/03/books/review/Gopnik-t.html

  • “Dimensions of Dialogue”, Joel Simon: “Here, new writing systems are created by challenging two neural networks to communicate information via images. Using the magic of machine learning, the networks attempt to create their own emergent language isolate that is robust to noise.”

Multiple Interpretations Theory of Humor

My theory is that humor is when there is a connection between the joke & punchline which is obvious to the person in retrospect, but not initially.

Hence, a pun is funny because the connection is unpredictable in advance, but clear in retrospect; Eliezer’s joke about the motorist and the asylum inmate is funny because we were predicting some response other than the logical one; similarly, ‘why did the duck cross the road? to get to the other side’ is not funny to someone who has never heard any of the road jokes, but to someone who has, and is busy thinking of zany explanations, the reversion to normality is unpredicted.

Your theory doesn’t work with absurdist humor. There isn’t initially 1 valid decoding, much less 2.

Mm. This might work for some proofs - Lewis Carroll, as we all know, was a mathematician - but a proof for something you already believe that is conducted via tedious steps is not humorous by anyone’s lights. Proving P/=NP is not funny, but proving 2+2=3 is funny.

‘A man walks into a bar and says “Ow.”’

How many surrealists does it take to change a lightbulb? Two. One to hold the giraffe, and one to put the clocks in the bathtub.

Exactly. What are the 2 valid decodings of that? I struggle to come up with just 1 valid decoding involving giraffes and bathtubs; like the duck crossing the road, the joke is the frustration of our attempt to find the connection.

Efficient Natural Language

Split out to “How Complex Are Individual Differences?”

Cryonics Cluster

When one looks at cryonics enthusiasts, there’s an interesting cluster of beliefs. There’s psychological materialism, as one would expect (it’s possible to believe your personal identity is your soul and also that cryonics works, but it’s a rather unstable and unusual possibility), since the mind cannot be materially preserved if it is not material. Then there’s libertarianism with its appeal to free markets and invisible entities like deadweight loss. And then there is ethical utilitarianism, usually act utilitarianism31. They’re often accused of being nerdy and specifically autistic or Asperger’s; with considerable truth. Most have programming experience, or have read a good deal about logic and math and computers. Romain2010 gives the stereotypical image:

Cryonics is a particularly American social practice, created and taken up by a particular type of American: primarily a small faction of white, male, atheist, Libertarian, middle- and upper-middle-income, computer-engineering “geeks” who believe passionately in the free market and its ability to support technological progress…When I interviewed him, Jerry Lemler, former president of Alcor, claimed that a “typical cryonicist” is highly educated, white, American, male, well-read, employed in a computer or technical field, “not very social,” often single, has few or no children, is atheistic or agnostic, and is not wealthy but financially stable. Lemler also told me that cryonicists tend to have very strong Libertarian political views, believing in the rights of the individual and the power of the free market, although Lemler himself is a self-proclaimed “bleeding heart Liberal.” Less than 25% of Alcor’s members were women, and only a small fraction of these women joined purely out of their own interest; most female Alcor members were the wives, partners, daughters, or mothers of a man who joined first. Lemler also said that cryonicists are highly adventurous, although he added, “You may not see that in their current lives. In fact, we have the bookish types, if you will, as I just described. You wouldn’t think that they’d be willing to take a chance on this particular adventure.”…Like any group, the cryonics community is by no means uniform in demography, thought, or opinion. The majority of cryonicists I met were, indeed, software or mechanical engineers. But I also encountered venture capitalists, traders, homemakers, a shaman, a journalist, an university professor, cryobiologists, an insurance broker, artificial intelligence designers, a musician, men, women, children, people of color, people in perfect health, and people who were terminally ill. Nevertheless, a sort of Weberian “ideal type” (Weber 2001 [1930]) of the typical cryonicist has emerged, and this is how cryonicists recognize themselves and one another. …In an effort to bring the quite passionate technical discussion to a close, one member made a public aside to me, the anthropologist, loud enough for the benefit of everyone in the room. He said, “You know that a typical cryonicist is a male computer programmer, don’t you?” Everyone laughed. Another member shouted out, “And a Libertarian!” Everyone laughed harder. Everyone appeared to enjoy the joke, which seemed to reaffirm the group’s identity and to promote a kind of solidarity among them.

The results of one long-running online survey (from the sample size, LessWrongers probably made up <0.5% of the sample) were reported in “Understanding Libertarian Morality: The Psychological Roots of an Individualist Ideology” (as summarized by the WSJ):

Perhaps more intriguingly, when libertarians reacted to moral dilemmas and in other tests, they displayed less emotion, less empathy and less disgust than either conservatives or liberals. They appeared to use “cold” calculation to reach utilitarian conclusions about whether (for instance) to save lives by sacrificing fewer lives. They reached correct, rather than intuitive, answers to math and logic problems, and they enjoyed “effortful and thoughtful cognitive tasks” more than others do. The researchers found that libertarians had the most “masculine” psychological profile, while liberals had the most feminine, and these results held up even when they examined each gender separately, which “may explain why libertarianism appeals to men more than women.”

This clustering could be due solely to social networks and whatnot. But suppose it’s not. Is there any perspective which explains this, and cryonics’ “hostile wife phenomenon” as well?

Let’s look at the key quotes about that phenomenon, and a few quotes giving the reactions to it:

The authors of this article know of a number of high profile cryonicists who need to hide their cryonics activities from their wives and ex-high profile cryonicists who had to choose between cryonics and their relationship. We also know of men who would like to make cryonics arrangements but have not been able to do so because of resistance from their wives or girlfriends. In such cases, the female partner can be described as nothing less than hostile toward cryonics. As a result, these men face certain death as a consequence of their partner’s hostility. While it is not unusual for any two people to have differing points of view regarding cryonics, men are more interested in making cryonics arrangements. A recent membership update from the Alcor Life Extension Foundation reports that 667 males and 198 females have made cryonics arrangements. Although no formal data are available, it is common knowledge that a substantial number of these female cryonicists signed up after being persuaded by their husbands or boyfriends. For whatever reason, males are more interested in cryonics than females. These issues raise an obvious question: are women more hostile to cryonics than men?

…Over the 40 years of his active involvement, one of us (Darwin) has kept a log of the instances where, in his personal experience, hostile spouses or girlfriends have prevented, reduced or reversed the involvement of their male partner in cryonics. This list (see appendix) is restricted to situations where Darwin had direct knowledge of the conflict and was an Officer, Director or employee of the cryonics organization under whose auspices the incident took place. This log spans the years 1978 to 1986, an 8 year period…The 91 people listed in this table include 3 whose deaths are directly attributable to hostility or active intervention on the part of women. This does not include the many instances since 1987 where wives, mothers, sisters, or female business partners have materially interfered with a patient’s cryopreservation(3) or actually caused the patient not to be cryopreserved or removed from cryopreservation(4). Nor does it reflect the doubtless many more cases where we had no idea…

…The most immediate and straightforward reasons posited for the hostility of women to cryonics are financial. When the partner with cryonics arrangements dies, life insurance and inheritance funds will go to the cryonics organization instead of to the partner or their children. Some nasty battles have been fought over the inheritance of cryonics patients, including attempts of family members to delay informing the cryonics organization that the member had died, if an attempt was made at all(5). On average, women live longer than men and can have a financial interest in their husbands’ forgoing cryonics arrangements. Many women also cite the “social injustice” of cryonics and profess to feel guilt and shame that their families’ money is being spent on a trivial, useless, and above all, selfish action when so many people who could be saved are dying of poverty and hunger now…Another, perhaps more credible, but unarguably more selfish, interpretation of this position is what one of us (Darwin) has termed “post reanimation jealousy.” When women with strong religious convictions who give “separation in the afterlife” as the reason they object to their husbands’ cryopreservation are closely questioned, it emerges that this is not, in fact, their primary concern. The concern that emerges from such discussion is that if cryonics is successful for the husband, he will not only resume living, he may well do so for a vast period of time during which he can reasonably be expected to form romantic attachments to other women, engage in purely sexual relationships or have sexual encounters with other women, or even marry another woman (or women), father children with them and start a new family. This prospect evokes obvious insecurity, jealousy and a nearly universal expression on the part of the wives that such a situation is unfair, wrong and unnatural. Interestingly, a few women who are neither religious nor believers in a metaphysical afterlife have voiced the same concerns. The message here may be “If I’ve got to die then you’ve got to die too!” As La Rochefoucauld famously said, with a different meaning in mind, “Jealousy is always born with love, but does not always die with it.”…While cryonics is mostly a male pursuit, there are women involved and active, and many of them are single. Wives (or girlfriends) justifiably worry that another woman who shares their husbands’ enthusiasm for cryonics, shares his newly acquired world view and offers the prospect of a truly durable relationship - one that may last for centuries or millennia - may win their husbands’ affections. This is by no means a theoretical fear because this has happened a number of times over the years in cryonics. Perhaps the first and most publicly acknowledged instance of this was the divorce of Fred Chamberlain from his wife (and separation from his two children) and the break-up of the long-term relationship between Linda McClintock (nee Linda Chamberlain) and her long-time significant other as a result of Fred and Linda working together on a committee to organize the Third National Conference On Cryonics (sponsored the Cryonics Society of California).32

Going back to Romain 2010, reproduction is also a theorized concern:

For many cryonicists, having children is considered an unnecessary diversion of resources that can and should be devoted to the self, especially if one is to achieve immortality. Phil, one of the few cryonicists I know with children, once said to me, “They’re good kids. But if their moms hadn’t wanted them, they wouldn’t exist.” He did not see much value in passing on genes or creating new generations and preferred to work toward a world in which people no longer need to procreate since the extension of human lifespans would maintain the human species. Indeed, I have heard some in the community theorize that having children is an evolutionary byproduct that could very well become vestigial as humans come closer and closer to becoming immortal. I have also heard several lay theories within the cryonics community about genetic or brain structure differences between men and women that cause men to favor life-extension philosophies and women to favor procreation and the conservative maintenance of cultural traditions…In a very different example, Allison wanted to have children but decided that she will wait until post-reanimation because she was single and in her mid-30s and thus approaching age-related infertility (medicine of the future would also reverse loss of fertility, she assumed). When I suggested that she might freeze her eggs so that she could possibly have genetically related children later in life, she responded that she has too much work to accomplish in the immediate future and would rather wait until she “came back” to experience parenthood.

Eliezer Yudkowsky, remarking on the number of women in one cryonics gathering, inadvertently demonstrates that the gender disparity is still large:

This conference was just young people who took the action of signing up for cryonics, and who were willing to spend a couple of paid days in Florida meeting older cryonicists. The gathering was 34% female, around half of whom were single, and a few kids. This may sound normal enough, unless you’ve been to a lot of contrarian-cluster conferences, in which case you just spit coffee all over your computer screen and shouted “WHAT?” I did sometimes hear “my husband persuaded me to sign up”, but no more frequently than “I persuaded my husband to sign up”. Around 25% of the people present were from the computer world, 25% from science, and 15% were doing something in music or entertainment - with possible overlap, since I’m working from a show of hands. I was expecting there to be some nutcases in that room, people who’d signed up for cryonics for just the same reason they subscribed to homeopathy or astrology, ie. that it sounded cool. None of the younger cryonicists showed any sign of it. There were a couple of older cryonicists who’d gone strange, but none of the young ones that I saw. Only three hands went up that did not identify as atheist/agnostic, and I think those also might have all been old cryonicists.33

Some female perspectives:

Well, as a woman, I do have the exact same gut reaction [to cryonics]. I’d never want to be involved with a guy who wanted this. It just seems horribly inappropriate and wrong, and no it’s nothing to do at all with throwing away the money, I mean I would rather not throw away money but I could be with a guy who spent money foolishly without these strong feelings. I don’t know that I can exactly explain why I find this so distasteful, but it’s a very instinctive recoil. And I’m not religious and do not believe in any afterlife. It’s sort of like being with a cannibal, even a respectful cannibal who would not think of harming anyone in order to eat them would not be a mate I would ever want.34

“You have to understand,” says Peggy, who at 54 is given to exasperation about her husband’s more exotic ideas. “I am a hospice social worker. I work with people who are dying all the time. I see people dying All. The. Time. And what’s so good about me that I’m going to live forever?”

…Peggy finds the quest an act of cosmic selfishness. And within a particular American subculture, the pair are practically a cliché. Among cryonicists, Peggy’s reaction might be referred to as an instance of the “hostile-wife phenomenon,” as discussed in a 2008 paper by Aschwin de Wolf, Chana de Wolf and Mike Federowicz. “From its inception in 1964,” they write, “cryonics has been known to frequently produce intense hostility from spouses who are not cryonicists.” The opposition of romantic partners, Aschwin told me last year, is something that “everyone” involved in cryonics knows about but that he and Chana, his wife, find difficult to understand. To someone who believes that low-temperature preservation offers a legitimate chance at extending life, obstructionism can seem as willfully cruel as withholding medical treatment. Even if you don’t want to join your husband in storage, ask believers, what is to be lost by respecting a man’s wishes with regard to the treatment of his own remains? Would-be cryonicists forced to give it all up, the de Wolfs and Federowicz write, “face certain death.”

…Cryonet, a mailing list on “cryonics-related issues,” takes as one of its issues the opposition of wives. (The ratio of men to women among living cryonicists is roughly three to one.) “She thinks the whole idea is sick, twisted and generally spooky,” wrote one man newly acquainted with the hostile-wife phenomenon. “She is more intelligent than me, insatiably curious and lovingly devoted to me and our 2-year-old daughter. So why is this happening?”…A small amount of time spent trying to avoid certain death would seem to be well within the capacity of a healthy marriage to absorb. The checkered marital history of cryonics suggests instead that a violation beyond nonconformity is at stake, that something intrinsic to the loner’s quest for a second life agitates against harmony in the first…But here he doesn’t expect to succeed, and as with most societal attitudes that contradict his intuitions, he’s got a theory as to why. “Cryonics,” Robin says, “has the problem of looking like you’re buying a one-way ticket to a foreign land.” To spend a family fortune in the quest to defeat cancer is not taken, in the American context, to be an act of selfishness. But to plan to be rocketed into the future - a future your family either has no interest in seeing, or believes we’ll never see anyway - is to begin to plot a life in which your current relationships have little meaning. Those who seek immortality are plotting an act of leaving, an act, as Robin puts it, “of betrayal and abandonment.”35

As the spouse of someone who is planning on undergoing cryogenic preservation, I found this article to be relevant to my interests! My first reactions when the topic of cryonics came up (early in our relationship) were shock, a bit of revulsion, and a lot of confusion. Like Peggy (I believe), I also felt a bit of disdain. The idea seemed icky, childish, outlandish, and self-aggrandizing. But I was deeply in love, and very interested in finding common ground with my then-boyfriend (now spouse). We talked, and talked, and argued, and talked some more, and then I went off and thought very hard about the whole thing…Ultimately, my struggle to come to terms with his decision has been more or less successful. Although I am not (and don’t presently plan to be) enrolled in a cryonics program myself, although I still find the idea somewhat unsettling, I support his decision without question. If he dies before I do, I will do everything in my power to see that his wishes are complied with, as I expect him to see that mine are. Anything less than this, and I honestly don’t think I could consider myself his partner.36

To add a data point, I found myself, to put it strongly, literally losing the will to live recently: I’m 20 and female and I’m kind of at the emotional maturity stage. I think my brain stopped saying “live! Stay alive!” and started saying “Make babies! Protect babies!”, because I started finding the idea of cryopreserving myself as less attractive and more repulsive (with no change in opinion for preserving my OH), and an increase in how often I thought about doing the right thing for my future kids. To the extent that I now get orders of magnitude more panicked about anything happening to my reproductive system than dying after future children reach adulthood.37

Quentin’s explanation is even more extreme:

What follows below is the patchwork I have stitched together of the true female objections to a mate undergoing cryonic suspension. I believe many women have a constant low-level hatred of men at a conscious or subconscious level and their narcissistic quest for entitlement and [meaningfulness] begrudges him any pursuit that isn’t going to lead directly to producing, providing, protecting, and problem solving for her. It would evolutionarily be in her best interest to pull as many emotional and physical levers to bend as much of his energies toward her and their offspring as she can get away with and less away from himself. That would translate as a feeling of revulsion toward cryonics that is visceral but which she dares not state directly to avoid alerting her mate to her true nature.

She doesn’t want him to live for decades, centuries, or millennia more in a possibly healthier and more youthful state where he might meet and fall in love with new mates. She doesn’t want her memory in his mind to fade into insignificance as the fraction of time she spent with him since she has died to be a smaller and smaller fraction of his total existence; reduced to the equivalent in his memory of an interesting conversation with a stranger on the sidewalk one summer afternoon. She doesn’t want him to live for something more important than HER. So why not just insist she join him in cryonic suspension? Many of these same wives and girlfriends hate their life even when they are succeeding. Everyone is familiar with the endless complaints, tears, and heartache that make up the vast majority of the female experience stemming from frustration of her hypergamous instinct to be the princess she had always hoped to be and from resentment of his male nature, hopes, dreams, and aspirations. She thinks: “He wasn’t sexually satisfying! He isn’t romantic enough! He never took me anywhere! He didn’t pay attention to me! Our kids aren’t successes! We live in a dump! His hobbies are a waste of time and money! My mother always told me I can do better, and his mother will never stop criticizing me! I am fat, ugly, unsuccessful, old, tired, and weary of my responsibilities, idiosyncrasies, insecurities, fears, and pain. My life sucked but at least it could MEAN something to those most important to me.” But if they are around for too long it shrinks in importance over time. She wants you to die forever because she hates what you are. She wants to die too, because she hates what she is. She wants us all to die because she hates what the world is and has meant to her.

In the same vein:

But why not go with him then [into cryonics]?

Show me the examples of the men who asked, or even insisted that their wives go with them, and said “If you don’t go with me, I won’t go”. The fact that men generally don’t do this, is likely a big contributor to the female reaction. Imagine your husband or boyfriend telling you, “I just scheduled a 1 year vacation in Pattaya, and since I know you hate Thai food, I didn’t buy you tickets. I’ll remember you fondly.” That’s very different from the man who says, “I’ve always dreamed of living in Antarctica, but I won’t do it without you, so I’m prepared to spend the next 5 years convincing you that it’s a great idea”.38

Indeed, I buy the “one way ticket away from here” explanation. If I bought a one-way ticket to France, and was intent on going whether my wife wanted to come with me or not, then there would be reason for her to be miffed. If she didn’t want to go, the “correct” answer is “I won’t go without you”. But that is not the answer the cryonicist gives to his “hostile” wife. It’s like the opposite of “I would die for you” - he actually got a chance to take that test, and failed.39

Robin Hanson tries to explain it in terms of evolutionary incentives:

Mating in mammals has a basic asymmetry - females must invest more in each child than males. This can lead to an equilibrium where males focus on impressing and having sex with as many females as possible, while females do most of the child-rearing and choose impressive males.

…And because they are asymmetric, their betrayal is also asymmetric. Women betray bonds more by temporarily having fertile sex with other men, while men betray bonds more by directing resources more permanently to other women. So when farmer husbands and wives watch for signs of betrayal, they watch for different things. Husbands watch wives more for signs of a temporary inclination toward short-term mating with other men, while wives watch husbands more for signs of an inclination to shift toward a long-term resource-giving bond with other women. This asymmetric watching for signs of betrayal produces asymmetric pressures on appearances. While a man can be more straight-forward and honest with himself and others about his inclinations toward short-term sex, he should be more careful with the signs he shows about his inclinations toward long term attachments with women. Similarly, while a woman can be more straight-forward and honest with herself and others about her inclinations toward long-term attachments with men, she should be more careful with the signs she shows about her inclinations toward short term sex with men.

…Standard crude stereotypes of gender differences roughly fit these predictions! That is, when the subject is one’s immediate lust and sexual attraction to others, by reputation men are more straight-forward and transparent, while women are more complex and opaque, even to themselves. But when the subject is one’s inclination toward and feelings about long-term attachments, by reputation women are more self-aware and men are more complex and opaque, even to themselves…if cryonics is framed as abandonment, women should be more sensitive to that signal.40

The “selfishness” of cryonics does seem to be an issue for women and many men; one might wonder, would other heroic medical procedures be more socially acceptable if they involved “other-directedness”? I suggest the answer is yes: cord blood banking costs thousands with a lower (<0.1%) success rate (usage of the cord blood) than many cryonicists expect of cryonics (the Fermi estimates tend to be <5%); sperm banking costs a similar amount, while egg/oocyte banking may cost something like half what cryonics does! In the media coverage I have read of those 3 practices, I have the impression that people see them as legitimate medical procedures albeit ones where the cost-benefit equation may not work out. (Cryonicists, on the other hand, are just nuts.) Perhaps this is because sperm and egg banking - while fundamentally selfish, since if you cannot use your egg or sperm later, why not simply adopt? - involves the creation of another person as hallowed by society.

Reductionism Is the Common Thread?

The previously listed ‘systems of thought’, as it were, all seem to share a common trait: they are made of millions or trillions of deterministic interacting pieces. Any higher-level entity is not an ontological atom, and those higher-level illusions can be manipulated in principle nigh-arbitrarily given sufficient information.

That the higher-level entities really are nothing but the atomic units interacting is the fundamental pons asinorum of these ideologies, and the one that nonbelievers have not crossed.

We can apply this to each system.

  • Many doubters of cryonics doubt that a bunch of atoms vitrified in place is really ‘the self’.

  • Many users of computers anthropomorphize them and can’t accept that they are really just a bunch of bits

  • Many doubters of materialist philosophy of mind are not willing to say that an extremely large complex enough system can constitute a consciousness

  • Many doubters of utilitarianism doubt that there really is a best choice or good computable approximations to the ideal choice, and either claim utilitarianism fails basic ethical dilemmas by forcing the utilitarian to make the stupid choice, or instead vaunt as the end-all be-all of ethics what can easily be formulated as simply heuristics and approximations, like virtue ethics41

  • Many doubters of libertarianism doubt that prices can coordinate multifarious activities, that the market really will find a level, etc. Out of the chaos of the atoms interacting is supposed to come all good things…? This seems arbitrary, unfair, and unreasonable.

  • The same could be said of evolution. Like the profit motive, how can mere survival generate “from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved”42?

  • Finally, atheism. A faith of its own in the power of reductionist approaches across all fields. What is a God, but the ultimate complex high-level irreducible ontological entity?

In all, there is incredulity at sheer numbers. An ordinary person can accept a few layers since that is what they are used to - a car is made of a few dozen systems with a few thousand discrete parts, a dinner is made of 3 or 4 dishes with no more than a dozen ingredients, etc. The ordinary mind quails at systems with millions of components (number of generations evolution can act on), much less billions (length of programs, number of processor cycles in a second) or trillions (number of cells in human body, number of bits on consumer hard drives).

If one doesn’t deal first-hand with this, if one has never worked with these layers at any level, how does one know that semiconductor physics is the sublayer for circuits, circuits the sublayer for logic gates; logic gates the sublayer for memory and digital operations, which then support the processor with its fancy instructions like add or mov, which enable machine code, which we prefer to write as assembler (to be assembled and then linked into machine code), which can be targeted by programming languages, at which point we have only begun to bring in the operating system, libraries, and small programs, which let us begin to think about how to write something like a browser - until, a decade later, we have Firefox, which will let Grandma go to AOL Mail?

(To make a mapping, the utilitarian definition is like defining a logic gate; the ultimate decisions in a particular situation are like an instance of Firefox, depending on trillions of intermediate steps/computations/logic gates. Non-programmers can’t see how to work backwards from Firefox to individual logic gates, and their blindness is so profound that they can’t even see that there is a mapping. Compare all the predictions that ‘computers will never X’; people can’t see how trillions of steps or pieces of data could result in computers doing X, so - ‘argument from incredulity’ - they then believe there is no such way.)

A programmer will have a hard time being knowledgeable about programming and debugging, and also not appreciative of reductionism in his bones. If you tell him that a given system is actually composed of millions of interacting dumb bits - he’ll believe you. Because that’s all his programs are. If you tell a layman that his mortgage rate is being set by millions of interacting dumb bits (or his mind…) - he’ll probably think you’re talking bullshit.

Religious belief seems to correlate with (and be causally promoted by) quick intuitive thinking (and deontological judgments as well), and what is more counterintuitive than reductionism?

I don’t know if this paradigm is correct, but it does explain a lot of things. For example, it correctly predicts that evolutionism will be almost universally accepted among the specified groups, even though logically, there’s no reason cryonicists have to be evolutionists or libertarians, and vice-versa, no reason libertarians would have any meaningful correlation with utilitarianism.

I would be deeply shocked & fascinated if there were data showing that they were uncorrelated or even inversely correlated; I could understand libertarianism correlating inversely with atheism, at least in the peculiar circumstances of the United States, but I would expect all of the others to be positively correlated. The only other potential counterexample I can think of would be engineers and terrorism, and that is a relatively small and rare correlation.

Lighting

I am well aware of the effects of lighting on my mind from reading up on the effects of light (and blue light in particular) on circadian rhythms & melatonin secretion, and have done a sleep self-experiment on red-tinting my laptop screen. (There seems to be a voluminous literature on bright lights being beneficial for alertness in the workplace, but I haven’t read much of it.) Despite this, my room is lit primarily by a lamp with 4 CFL light bulbs which I inherited, and not designed in any sense - I’ve focused on modifying myself more than my environment.

The 4 bulbs are puny CFLs: 13 watts each (52 total), with a light temperature of 2700K (yellowish). Particularly during winter, when darkness falls around 4PM sharp, I find the illumination inadequate. A LW discussion reminded me that I didn’t have to put up with perpetual gloom - I could buy much larger CFLs and replace the smaller ones.

So after some Amazon browsing, and getting frustrated at how CFL listings equivocate on how many watts they draw vs how many watts-equivalent-incandescent-bulbs they are, I settled on “LimoStudio 2 x Photo Studio Photography 105 Watt 6500K Day Light Fluorescent Full Spectrum Bulb” for $22.15 & ordered 2014-12-06. They arrived on 2014-12-09 & I immediately installed them. Their temperature is much bluer, and two 105-watt bulbs roughly quadruple the light output (210W vs the original 52W, assuming equal efficiency); since brightness is perceived logarithmically, that makes the room something like half again as bright. (They’re almost comically larger than the small 13 watt bulbs.)
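As a back-of-the-envelope check of that claim, using the wattages above and the rough assumption that perceived brightness scales with the logarithm of light output:

    # Sanity check: ratio of light output and of log-perceived brightness,
    # assuming equal efficiency per watt (a simplification).
    import math

    old_watts = 4 * 13    # original lamp: four 13W CFLs
    new_watts = 2 * 105   # replacement: two 105W CFLs

    output_ratio    = new_watts / old_watts                      # ~4x the light
    perceived_ratio = math.log(new_watts) / math.log(old_watts)  # ~1.35x, ie. 'half again as bright'
    print(round(output_ratio, 2), round(perceived_ratio, 2))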

I had to move the lamp since the naked bulbs in the corner of my eye were giving me a headache, but the lighting works well in that corner. It’s nice to have things brighter and it does indeed feel like it reduces sleep pressure in the evening; downsides: it shows the walls to be dirtier, casts much sharper shadows, and feels like it may make it much harder to fall asleep even with melatonin if I leave the lights on past 11 PM. A small benefit is that I still have some incandescent light bulbs installed; with 2 CFLs bumped out of the lamp, I can take them and replace 2 incandescents, which should save some electricity.

During the darkest winter days, just 2 of them now feels inadequate, so I ordered another pair on Amazon on 18 2014.

One handy way to quantify the effect is via my laptop webcam. Since 2012 or so, I have run a script which periodically takes a snapshot through the laptop webcam and saves the photo. I haven’t gotten much use out of the photos, but changes in ambient lighting over time would seem to be a perfect use-case. I’ve written a script using ImageMagick which analyzes each webcam photo and calculates the average brightness (as a grayscale intensity) and the average LAB color triplet. (I originally used RGB but the 3 colors turned out to correlate so highly that the data was redundant and I was told LAB better matches human perception; in any case, the LAB values turn out to be less inter-correlated and so should be more useful.) The light intensity might affect sleep patterns (particularly sleep timing) and daily productivity.
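The actual script uses ImageMagick, but as a minimal sketch of just the brightness-extraction step (using Pillow instead, with hypothetical file paths, and omitting the LAB conversion):

    # Compute mean grayscale brightness for each webcam snapshot and dump to CSV.
    import csv, glob
    from PIL import Image, ImageStat

    rows = [("file", "brightness")]
    for path in sorted(glob.glob("webcam/*.jpg")):      # hypothetical snapshot directory
        img = Image.open(path).convert("L")             # convert to grayscale
        rows.append((path, round(ImageStat.Stat(img).mean[0], 2)))  # mean 0-255 intensity

    with open("brightness.csv", "w", newline="") as f:  # hypothetical output file
        csv.writer(f).writerows(rows)

The resulting time series can then be merged by timestamp with the sleep and productivity logs for the actual analysis.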

Possible Amazon Mechanical Turk Surveys/experiments

  • Sunk costs: see whether manipulation of learning affects willingness to endorse sunk costs

  • backfire effect idea: manipulation of argument selection affects backfire effect?

  • followup SDr’s lipreading survey, unexpected and contrary to my theory

  • can one manipulate the subadditivity effect in both directions for cryonics? In one version, enumerate all the ways things can go right and in another all the ways it can go wrong.

  • test my theory that Ken Liu actually does suck:

Ken Liu’s “The Paper Menagerie” is the most critically acclaimed Fantasy short story in history, to judge by its simultaneously winning the short story category of the 2011 World Fantasy Award & Hugo Award & Nebula Award (narrowly missing the Locus Award), a sweep which had never happened since the youngest award was started in 1975 - 35 years before.

Presumably this means that the story is, if not the best fantasy story ever, at least an extremely good story and by far the best of 2011. So I read it eagerly with high expectations, which were immediately dashed. The story is not that good. The prose is OK: not nearly as wooden as, say, Isaac Asimov’s, but not as spare & finely-honed as Ted Chiang’s, as deliriously excessive as R.A. Lafferty’s, as extraordinarily smooth and literary as Gene Wolfe’s, as mannered as John Crowley’s, or as dream-like as Neil Gaiman’s… The plot itself is sentimental. In fact, as I read it, words kept rising to consciousness that should never be associated with a winner of any of those awards much less all three simultaneously, words like “trite” and “maudlin”. With a skeptical eye, the story crumbles even more into a pitiful sort of self-indulgent narcissism, in which a character angsts over small issues which seem large only because they live a life so blessed that they have never known real hardship; with even a little bit of perspective, their complaints become almost incomprehensible, and what was meant to be moving becomes absurd. Part of my objection is a lurking sense that Orientalism and/or “diversity” promotion lies behind the triple crown.

One of the difficulties in attributing people’s evaluations of something to essentially tribal or ideological motives is that typically it is hard to rerun or vary the scenario to control for the key aspect; for example, if we wondered how much Barack Obama’s presidential election owed to racial politics (rather than other factors that have been mentioned, such as John McCain’s uninspiring campaign or choice of Sarah Palin, Obama’s slick staff, the well-timed meltdown of the American economy etc), we are left to parse tea leaves and speculate because there is no way we can re-run the election using a Barack Obama who chose to identify as white rather than black, or an Obama who was simply white, and we cannot even run polls on a hypothetical alternative Obama with the same biography as a junior senator from Illinois with no signature accomplishments because the parallels would be obvious to too many Americans one might poll.

If we were to hypothetically vary Liu’s story, we would want to replace the main character with an equivalent character whose non-Anglophone nationality was involved in WWII, resulted in many refugees and women from that country returning as wives to America, who might know a beautiful paper-working art suited for depicting tigers, and who was mocked on ethnic or racial grounds. Remarkably, this turns out to be easily doable on all points: Liu’s story could easily be turned into a story about a half-German boy in America mocked for being a filthy Nazi whose mother came from the German post-WWII wasteland and who spoke mostly German while making Scherenschnitte for her son who later spurned the paper cutout animals and even later realized the cutouts formed German words (in Fraktur, which is a pretty hard-to-read family of fonts) with the same ending. The question is, if we take Liu’s story, rename the author “Ken Schmitt” or “Ken Hess” or “Ken Brandt” or “Ken Schmidt” perhaps (making sure to pick a surname as clearly German as Liu is Asian, and ideally a single-syllable as well to control for issues related to memorability or length), make the minimal edits necessary to convert it to the above version - do you think this hypothetical “The Scherenschnitte Menagerie” would’ve won even 1 award, much less 3?

It seems highly unlikely to me, but unlike with Obama, we can produce the variant version without trouble, and in any survey, we can count on very few SF/F reader-respondents having actually read “The Paper Menagerie” (short stories are generally published in specialty magazines, whose circulations have declined precipitously over the past decades, and rarely ever achieve the popularity of the top SF/F novels). If we surveyed a sample of SF/F readers and saw a preference for the original Liu version (especially if the preference were moderated by some measure of liberalism), then any non-ideological explanation must explain how the original version using Asia is so enormously esthetically superior to an isomorphic version using European-specific details.

With all this in mind, it seems like it should be easy to design a survey. Take the two versions of the story with the two different author names; ask each respondent to rate their randomly-chosen story 1-5 (Likert scale); ask how much SF/F they consume, their general politics, and whether they had heard of the short story before (this could be tricky); and collect some additional demographic information like age, ethnicity, and country. For extra points, one could randomize whether a short biography of the author appears after each story, to see if there is "who? whom?" reasoning at work, where knowing that Liu is from China increases the positiveness of ratings but knowing that "Schmitt" is from Germany does not affect or reduces ratings.
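As a minimal sketch of the assignment logic (in Python; the condition labels and sample size are invented for illustration, not part of any actual survey), the randomization is just an independent coin-flip for story version and for whether the author biography is shown:

```python
import random
from collections import Counter

# Hypothetical sketch of the survey's randomized 2x2 assignment: each
# respondent independently gets one story version (original vs. the
# "Scherenschnitte" variant) and either sees or does not see an author bio.
def assign_condition(rng=random):
    story = rng.choice(["paper_menagerie_liu", "scherenschnitte_schmitt"])
    show_bio = rng.choice([True, False])
    return (story, show_bio)

if __name__ == "__main__":
    # Simulate 1,000 respondents and check the 4 cells come out roughly balanced.
    cells = Counter(assign_condition() for _ in range(1000))
    for cell, n in sorted(cells.items()):
        print(cell, n)
```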

Technology

Somatic Genetic Engineering

What’s the killer app for non-medical genetic engineering in humans?

How about germ-line engineering of hair color? Think about it. Hair color is controlled by relatively few, and well-understood, genes. Hair color is a dramatic change. There is massive demand for hair dye as it is, even with the extra effort and impermanence and unsatisfactory results. How many platinum blonds would jump at the chance to have kids who are truly enviably blond? Or richly red-headed (and not washed-out Irish red)? A heck of a lot, I'd say. The health risks need not be enormous - aside from the intervention itself, what risk could swapping a brunette gene for a blond one cause?

What sort of market could we expect? Demographics of the United States lists 103,129,321 women between 15 and 64; these are women who could be using dye themselves (and so appreciate the benefit) and are of child-bearing age.

Likely, the treatment will only work if there's natural variation to begin with - that is, for Caucasians only. We'll probably want to exclude Hispanics and Latin Americans, who are almost as homogeneous in hair color as blacks and Asians, which leaves us 66% of the total US population; 66% × 103,129,321 gives a rough estimate of 68,065,351 women.

https://www.isteve.com/blondes.htm claims that "One study estimated that of the 30% of North American women who are blonde, 5⁄6ths had some help from a bottle" (0.3 × (5/6) = 0.25, or 25%). Demographics of Mexico lists 53,013,433 females, and the Canada 2006 Census lists 16,136,925.

That gives 172,279,679 when you sum up Mexico/Canada/USA (the remaining North American states are too small to care about); 25% of 172,279,679 is 43,069,919 - 43 million dye users.

Here's a random report (https://www.researchandmarkets.com/reportinfo.asp?report_id=305358) saying hair dye is worth 1 billion USD a year. Let's assume that this is all consumed domestically by women. (So 1,000,000,000 / 43,069,919 is roughly $23 per dye user per year.)

A woman using hair dye on a permanent basis will be dyeing every month or so, or 12 times a year. Assume that one dye job is ~20 USD* (she's not doing it herself); then (1 billion / 20) / 12 gives us ~4,166,666 women using hair dye regularly, or about 1⁄24 (4.1%) of US women aged 15-64. This seems rather low to me, based on observation, but I suppose it may be that elderly women do not use much hair dye, or it reflects the trend toward highlights and less-than-complete dye jobs. But 4% seems like a rather safe lower end. That's a pretty large market - 4 million potential customers, who are regularly expressing their financial commitment to their desire to have some hair color other than their natural one.

If each is spending even $100 a year on it, a genetic engineering treatment could pay for itself very quickly; at $1,000, in just 10 years. (And women can expect to live to ~80.) Not to mention, one would expect the natural hair to simply look better than a dye job.
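Restated as a short script (a sketch only: every input is one of the rough estimates quoted above, and the $20-per-job / 12-jobs-a-year figures are the same guesses):

```python
# Back-of-the-envelope market sizing, re-run in code; all inputs are the
# rough estimates from the text, not measured data.
women_us_15_64 = 103_129_321        # US women aged 15-64
women_mexico   = 53_013_433         # Mexican females (per the text)
women_canada   = 16_136_925         # Canadian females, 2006 Census

north_america = women_us_15_64 + women_mexico + women_canada    # ~172.3M
dye_users     = 0.30 * (5 / 6) * north_america                  # ~25% -> ~43M

hair_dye_market = 1_000_000_000     # ~$1B/year (report cited above)
cost_per_job    = 20                # assumed $ per dye job (low-ball; see footnote)
jobs_per_year   = 12                # monthly touch-ups

regular_dyers = hair_dye_market / cost_per_job / jobs_per_year  # ~4.17M women

print(f"North American women:        {north_america:,}")
print(f"Estimated dye users (25%):   {dye_users:,.0f}")
print(f"Women dyeing monthly at $20: {regular_dyers:,.0f}")
```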

There's a further advantage to this: it seems reasonable to expect that early forms of this sort of therapy will simply not work for minorities such as blacks or Hispanics - their markets wouldn't justify the research to make it work for them; their dark hair colors seem to be genetically very dominant, and likely the therapy would be working with recessive alleles (at least, it seems intuitively plausible that there is less 'distance' between making a Caucasian embryo, who might already carry a recessive blond allele, fully blond, compared to making a black baby, who would never naturally come anywhere near a non-black hair color, blond). So marketing would benefit from an implicit racism and classism: racism in that one might need to be substantially Caucasian to benefit, and classism in needing to pony up the money up front.

* I think this price is a low-ball estimate by at least 50%; hopefully it will give us a margin of error, since I’m not sure how often dye-jobs need to be done.

The Advantage of an Uncommon Name

Theory: as time passes, it becomes more and more costly to have a 'common' name: a name which frequently appears either in history or in born-digital works. In the past, having a name like 'John Smith' may not have been a disadvantage - connections were personal, no one confused one John Smith with another, textual records were only occasionally used. It might sometimes be an issue with bureaucracy such as taxes or the legal system, but nowhere else.

But online, it is important to be findable. You want your friends on Facebook to find you with the first hit. You want potential employers doing surreptitious Google searches before an interview to see your accomplishments and not others’ demerits; you do not want, as Abigail Garvey discovered when she married a Wilson, employers thinking your resume fraudulent because you are no longer ranking highly in Google searches. As Kevin Kelly has since put it:

With such a common first/last name attached to my face, I wanted my children to have unique names. They were born before Google, but the way I would put it today, I wanted them to have Google-unique names.

Vladimir Nesov termed having a common given and surname like “John Smith” as being “Google Stupid”. Clive Thompson says that search rankings were why he originally started blogging:

Today’s search engines reward people who have online presences that are well-linked-to. So the simplest way to hack Google to your advantage is to blog about something you find personally interesting, at which point other people with similar interests will begin linking to you - and the upwards cascade begins.

This is precisely one of the reasons I started Collision Detection: I wanted to 0wnz0r the search string “Clive Thompson”. I was sick of the British billionaire and Rentokil CEO Lord Clive Thompson getting all the attention, and, frankly, as a freelance writer, it’s crucially important for anyone who wants to locate me - a source, an editor, old friends - to be able to do so instantly with a search engine. Before my blog, a search for “Clive Thompson” produced a blizzard of links dominated by the billionaire; I appeared only a few times in the first few pages, and those were mostly just links to old stories I’d written that didn’t have current email addresses. But after only two months of blogging, I had enough links to propel my blog onto the first page of a Google search for my name.

This isn’t obvious. It’s easy to raise relatively rare risks as objections (but how many cases of identity theft are made possible solely by a relatively unique name making a person google-able? Surely few compared to the techniques of mass identity theft: corporate espionage, dumpster diving, cracking, skimming etc.) To appreciate the advantages, you have to be a ‘digital native’. Until you’ve tried to Google friends or acquaintances, the hypothesis that unique names might be important will never occur to you. Until then, as long as your name was unique inside your school classes, or your neighborhood, or your section of the company, you would never notice. Even researchers spend their time researching unimportant correlations like people named Baker becoming bakers more often, or people tending to move to a state whose name they share (like Georgia).

What does one do? One avoids as much as possible choosing any name which is in, say, the top 100 most popular names. People with especially rare surnames may be able to get away with common personal names, but not the Smiths. (It's easy to check how common names are with online tools drawing on US Census data. My own name pair is unique, at the expense of the Dutch surname being 12 letters long and difficult to remember.)

But one doesn’t wake up and say “I will name myself ‘Zachariah’ today because ‘John’ is just too damn common”. After 20 years or more, one is heavily invested in one’s name. It’s acceptable to change one’s surname (women do it all the time), but not the first name.

One does decide the first name of one's children, though, and it's iron tradition that one does so. So we can expect digital natives to shy away from common names when naming their kids. But remember who the 'digital natives' are - kids and teenagers of the '00s, at the very earliest. If they haven't been on, say, Facebook for years, they don't count. Let's say their ages were 0-20 in 2008, when Facebook really picked up steam in the non-college population; and let's say that they won't have kids until ~30. The oldest of this cohort will reach child-bearing age around 2018, and everyone after that can be considered a digital native from osmosis if nothing else. If all this is true, then beginning with 2018, we will see a growing 'long tail' of baby names.

So this is a good story: we have a suboptimal situation (too many collisions in the new global namespace of the Internet) and a predicted adjustment with specific empirical consequences.

But there are issues.

  • Rare names may come with comprehensibility issues; Zooko's triangle in cryptography says that names cannot simultaneously be secure/unique, globally valid without a central authority, and short or human-meaningful - you have to compromise on some aspect.

  • There’s already a decline in popular names, according to Wikipedia:

    Since about 1800 in England and Wales and in the U.S., the popularity distribution of given names has been shifting so that the most popular names are losing popularity. For example, in England and Wales, the most popular female and male names given to babies born in 1800 were Mary and John, with 24% of female babies and 22% of male babies receiving those names, respectively. In contrast, the corresponding statistics for England and Wales in 1994 were Emily and James, with 3% and 4% of names, respectively. Not only have Mary and John gone out of favor in the English speaking world, also the overall distribution of names has changed [substantially] over the last 100 years for females, but not for males.

    (The female trend has continued through to 2010: "The 1,000 top girl names accounted for only 67% of all girl names last year, down from 91% in 1960 and compared with 79% for boys last year."43) The theory could probably be rescued by saying that the advantage of having a unique given name (and thus a relatively unique full name) goes that far back, but then we would need to explain why the advantage would be there for women, but not men. On the other hand, Social Security data seems to indicate both a 2-century-long decline in the popularity of the top ten names and also a convergence of top-ten name rarity; from Andrew Gelman (a crude recomputation from the raw SSA files is sketched just after this list):

    Total popularity of top ten names each year, by sex; Source: Social Security Administration, courtesy of Laura Wattenberg


  • Pop culture is known to have a very strong influence on baby names (cf. the popularity of Star Wars and the subsequent massive spike in 'Luke'). The counter-arguments to The Long Tail marketing theory say that pop culture is becoming ever more monolithic and hit-driven. The fewer hits, and the more mega-hits, the more we could expect a few names to spike and drive down the rate of other names. The effect on a rare name can be incredible even from relatively small hits (the song in question was only a Top 10 hit):

    Kayleigh became a particularly popular name in the United Kingdom following the release of a song by the British rock group Marillion. Government statistics in 2005 revealed that 96% of Kayleighs were born after 1985, the year in which Marillion released "Kayleigh".44

  • Given names already follow a power-law distribution in which a few names dominate, so small artifacts can make it appear that there is a shift towards unpopular names. Immigration or ethnic groups can distort the statistics and make us think we see a decline in popular names when we're actually seeing an increase in popular names elsewhere - imagine all the Muhammeds and Jesuses we might see in the future. Those will show up as decreases in the percentages of 'John' or 'James' or 'Emily' or 'William', and fool us, even though Muhammed and Jesus are 2 of the most popular names in the world.

  • One informal analysis suggests short first names are strongly correlated with higher salaries.

  • The impacts of names can be hard to predict and subtle (see some examples cited in Alter's "The Power of Names").
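(For the top-ten-share claim above, here is a crude way one might recompute the curve from the raw data, assuming the SSA's names.zip archive has been downloaded and unzipped into a local names/ directory of yobYYYY.txt files; this is only a sketch, not the analysis Gelman or Wattenberg actually ran.)

```python
import csv, glob, os
from collections import defaultdict

NAMES_DIR = "names"   # assumed location of the unzipped SSA yobYYYY.txt files

def top10_share(path):
    """Fraction of babies of each sex given one of that year's 10 most popular names."""
    by_sex = defaultdict(list)
    with open(path, newline="") as f:
        for name, sex, count in csv.reader(f):   # lines look like "Mary,F,7065"
            by_sex[sex].append(int(count))
    shares = {}
    for sex, counts in by_sex.items():
        counts.sort(reverse=True)
        shares[sex] = sum(counts[:10]) / sum(counts)
    return shares

for path in sorted(glob.glob(os.path.join(NAMES_DIR, "yob*.txt"))):
    year, s = os.path.basename(path)[3:7], top10_share(path)
    print(year, f"M: {s.get('M', 0):.1%}", f"F: {s.get('F', 0):.1%}")
```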

(Much of the above appears to be pretty common knowledge among people interested in baby names and onomastics in general; for example, a Washington Post editorial by Laura Wattenberg, "Are our unique baby names that unique?", Sunday 16 May 2010, argues much of the above.)

Backups: Life and Death

Consider the plight of an upload - a human mind running on a computer rather than a brain. It has the advantage of all digital data: perfect fidelity in replication, fast replication - replication period. An upload could well be immortal. But an upload is also very fragile. It needs storage at every instance of its existence, and it needs power for every second of thought. It doesn’t carry with it any reserves - a bit is a bit, there are no bits more durable than other bits, nor bits which carry small batteries or UPSes with themselves.

So reliable backups are literally life and death for uploads.

But backups are a double-edged sword for uploads. If I back up my photos to Amazon S3 and a bored employee pages through them, that's one thing; annoying or career-ending as it may be, pretty much the worst thing that could happen is that I get put in jail for a few decades for child pornography. But for an upload? If an enemy got a copy of its full backups, the upload has essentially been kidnapped. The enemy can now run copies and torture them for centuries, or use them to attack the original running copy (as hostages, in false flag attacks, or simply to understand & predict what the original will do). The negative consequences of a leak are severe.

So backups need to be both reliable and secure. These are conflicting desires, though.

One basic principle of long-term storage is ‘LOCKSS’: “lots of copies keeps stuff safe”. Libraries try to distribute copies of books to as many holders as possible, on the premise that each holder’s failure to preserve a copy is a random event independent of all the other holders; thus, increasing the number of holders can give arbitrarily high assurances that a copy will survive. But the more copies, the more risk one copy will be misused. That’s fine if ‘misuse’ of a book is selling it to a book collector or letting it rot in a damp basement; but ‘misuse’ of a conscious being is unacceptable.

Suppose one encrypts the copies? Suppose one uses a one-time pad, since one worries that an encrypted copy which is bullet-proof today may be copied and saved for centuries until the encryption has been broken; with a one-time pad, one can be perfectly certain the backups are 'secure'. Now one has 2 problems: making sure the backups survive until one needs them, and making sure the one-time pad survives as well! If the future upload is missing either one, nothing works.
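(The trade-off can be made concrete with a toy sketch: a one-time pad is just XOR against a random string as long as the data, so the pad becomes a second artifact which must survive, and stay secret, exactly as long as the backup itself.)

```python
import secrets

# Toy illustration of the one-time-pad trade-off: information-theoretically
# secure, but the pad must be stored separately and must itself survive.
def xor(data: bytes, pad: bytes) -> bytes:
    assert len(pad) >= len(data), "pad must be at least as long as the data"
    return bytes(d ^ p for d, p in zip(data, pad))

backup = b"upload state: ..."            # stand-in for the real backup payload
pad = secrets.token_bytes(len(backup))   # must be kept secret & safe elsewhere
ciphertext = xor(backup, pad)            # safe to scatter widely, LOCKSS-style
assert xor(ciphertext, pad) == backup    # lose the pad, and recovery is impossible
```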

The trade-off is unfortunate, but let’s consider secure backups. The first and most obvious level is physical security. Most systems are highly vulnerable to attackers who have physical access; desktop computers are trivially hacked, and DRM is universally a failure.

Any backup ought to be as inaccessible as possible. Security through obscurity might work, but let’s imagine really inaccessible backups. How about hard drives in orbit? No, that’s too close: commercial services can reach orbit easily, to say nothing of governments. And orbit doesn’t offer too much hiding space. How about orbit not around the Earth, but around the Solar System? Say, past the orbit of Pluto?

That offers an enormous volume: the Kuiper Belt is roughly ~1.95×10^30 cubic kilometers.45 The lightspeed delay is at least several hours (Pluto is ~4-7 light-hours from Earth), but latency isn't an issue; a backup protocol on Earth could fire off one request to an orbiting device and the device would then transmit back everything it stored, without waiting for any replies or confirmations (somewhat like UDP).

10^30 cubic kilometers is more than enough to hide small stealthy devices in. But once it sends a message back to Earth, its location has been given away - the Doppler effect will yield its velocity and the message gives its location at a particular time. This isn't enough to specify its orbit, but it cuts down where the device could be. 2 such messages and the orbit is known. A restore would require more than 2 messages.

The device could self-destruct after sending off its encrypted payload. But that is very wasteful. We want the orbit to change unpredictably after each broadcast.

If we imagine that at each moment the device chooses between firing a thruster to go ‘left’ or ‘right’, then we could imagine the orbit as being a message encrypted with a one-time pad - a one-time pad, remember, being a string of random bits. The message is the original orbit; the one-time pad is a string of random bits shared by Earth and the device. Given the original orbit, and knowing when and how many messages have been sent by the device, Earth can compute what the new orbit is and where the device will be in the future. (‘It started off on this orbit, then the random bit-string said at time X to go left, then at X+1, go left again, then at X+Y, go right; remembering how fast it was going, that means it should now be… there in the constellation of Virgo.’)
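(A deliberately toy sketch of the idea, collapsing the orbit to one dimension: Earth and the device share a pad, each broadcast consumes one bit of it to decide 'left' or 'right', and Earth can replay the bits to recompute where the device now is, while an eavesdropper who only saw the broadcasts cannot. The pad here is derived from a placeholder secret purely for illustration.)

```python
import hashlib

# Toy 1-D model of the shared-pad maneuver scheme described above.
SHARED_PAD = hashlib.sha256(b"pre-arranged secret").digest()  # stand-in for a real random pad

def pad_bit(i: int) -> int:
    return (SHARED_PAD[i // 8] >> (i % 8)) & 1

def position_after(initial_position: float, broadcasts_so_far: int, step: float = 1.0) -> float:
    """Earth's reconstruction of the device's position: replay one shared bit per broadcast."""
    pos = initial_position
    for i in range(broadcasts_so_far):
        pos += step if pad_bit(i) else -step   # 1 = 'right', 0 = 'left'
    return pos

# After 5 broadcasts, Earth's prediction matches the device's actual course:
print(position_after(initial_position=0.0, broadcasts_so_far=5))
```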

The next step up is a symmetric cipher: a shared secret used not to determine future orbit changes, but to send messages back and forth - ‘go this way next; I’m going this way next; start a restore’ etc. But an enemy can observe where the messages are coming from, and can work out that ‘the first message must’ve been X, since if it was at point N and then showed up at point O, only one choice fits, which means this encrypted message meant X, which lets me begin to figure out the shared secret’.

A public-key system would be better: the device encrypts all its messages to Earth's public key, and vice versa. Now the device can randomly choose where to go and tell Earth its choice, so Earth knows where to aim its receivers and transmitters next.

But can we do better?

Measuring Multiple Times in a Sandglass

How does one make a sand hourglass measure multiple times?

One could just watch it and measure fractions by eye - when a 10-minute timer is down to 1⁄2, it has measured 5 minutes. One could mark the outside and measure fractions that way.

Or perhaps one could put in two-toned sand - when the white has run out and there’s only black sand, then 5 minutes has passed.

But the sand would inevitably start to mix, and then you just have a 10-minute timer with grey sand. Perhaps some sort of plastic sheet separating them? But it would get messed up when it passes through the funnel.

Then, perhaps the black sand could be charged positively, and the white sand negatively? But opposite charges attract: if the black is positive and the white negative, they'll clump together even more effectively than random mixing would.

We can’t make a color homogeneous in charge. Perhaps we could charge just black negative, and put positive magnets at the roof and floor? The bias might be enough over time to counteract any mixing effect - the random walk of grains would have a noticeable bias for black. But if the magnet is strong, then some black sand would never move, and if it’s weak, then most of the sand will never be affected; either way, it doesn’t work well.

Perhaps we could make half the black sand positive and half negative, while all white is neutral? Black will clump to black everywhere in the hourglass, without any issues about going through the funnel or affecting white.

How might this fail? Well, why would there be only 2 layers? There could be several alternating layers of black and white, and this would be a stable system.

We might be able to remedy this by combining magnetized black sand with magnets on the roof/floor, imparting an overall bias - the layers form, but slowly get compacted together.

The real question is whether magnetism strong enough to usefully sort the sand is also strong enough to clump it together and defeat the gravity-based timing.

Powerful Natural Languages

Split out to “On the Existence of Powerful Natural Languages”.

A Bitcoin+BitTorrent-Driven Economy for Creators (Artcoin)

One criticism of the Bitcoin system by cryptographers & commenters is that the fundamental mechanism Bitcoin uses to prevent double-spends is requiring proof-of-work (finding certain very rare random numbers, essentially) for each set of transactions to make it hard for anyone to put together enough computers to be able to find multiple valid sets of transactions and spend the same coin twice. (People are motivated to actually do the proofs-of-work since when they discover a valid set of transactions, the protocol allows them to invent 50 coins for themselves.)
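(For concreteness, a minimal sketch of the hash-puzzle mechanism - not Bitcoin's actual block format or difficulty, just the "hard to find, trivial to verify" property the scheme depends on:)

```python
import hashlib

# Minimal hash-based proof-of-work: find a nonce such that SHA-256(data + nonce)
# falls below a target. The difficulty here (16 bits) is tiny compared to Bitcoin's.
def mine(block_data: bytes, difficulty_bits: int = 16):
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1

def verify(block_data: bytes, nonce: int, difficulty_bits: int = 16) -> bool:
    digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce, digest = mine(b"some set of transactions")   # slow: ~2^16 hashes on average
assert verify(b"some set of transactions", nonce)   # fast: a single hash
print(nonce, digest)
```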

The computing power applied to the problem is nontrivial: it is literally equivalent to a supercomputer, distributed among the various participants. But this is a supercomputer which is devoted solely to calculating some numbers which satisfy a completely arbitrary criterion. Yes, it works - Bitcoin is still around and growing. But can't the situation be improved? Even distributed computing projects like Folding@home do some good; even the distributed cryptographic projects did some good by proving points about the insecurity of various algorithms.

After all, checking random numbers has the necessary property of being hard to figure out and easy to check, but this sounds like the P vs NP problem - and that’s so interesting because countless real-world economically valuable problems possess the same property. Why can’t we take Bitcoin and replace it with a succession of real-world problems suitably encoded? We’ll call it “Artcoin”. This way we get a secure Bitcoin (because no one can afford to compute multiple solutions) and we also put the computing power to use. Everybody wins.

With centralized systems, we could do other things like implement micropayments for BitTorrent (eg. "Floodgate: A Micropayment Incentivized P2P Content Delivery Network"). Nor are alternate blockchains an impossible idea: the Namecoin network is up and running with another blockchain, specialized for registering and paying for domain names. And there's already a quasi-implementation of Bitcoin micropayments in an amusing hack, Bitcoin Plus: a piece of JavaScript that does the SHA-256 mining like the regular GPU miners. The idea is that one includes a link to it on one's website, and then all one's website visitors' browsers will be Bitcoin mining while they visit. In effect, they are 'paying' for their visit with their computational power. This is more efficient than parasitic computing (although visitors could simply disable JavaScript, so it is more avoidable than parasitic computing), but from the global view, it's still highly inefficient: JavaScript is not the best language in which to write tight loops, and even if browser JavaScript were up to snuff, CPU mining in general is extremely inefficient compared to GPU mining. Bitcoin Plus works because the costs of electricity and computers are externalized to the visitors. Reportedly, CPU mining is no longer able to even pay for the cost of the electricity involved, so Bitcoin Plus would be an example of negative externalities. A good Artcoin scheme should be Pareto-improving.

One issue that pops up is how do you input the specific real-world problem into the Artcoin network so everyone can start competing to solve it? Perhaps there could be some central authority with a public key that signs each specific problem; everyone downloads it, checks that the signature is indeed valid, and can start trying to solve it. But wait, Bitcoin’s sole purpose was to be a decentralized electronic currency. (No one needs a new centralized electronic currency: you call it ‘Paypal’ or ‘Stripe’ or something.) If there was such a central authority in Artcoin, no one would use it!

And they would be right not to use it. One little-noted property of NP problems is that the exponential blowup in difficulty refers to worst-case problems: one can construct easily solved instances. This means our Artcoin could be rendered completely worthless and vulnerable if the central signer decided to generate and sign an endless stream of trivial problems, at which point any fraudster could double-spend to his heart's content.

If we had some magical way of estimating the difficulty of an arbitrary NP problem, we could devise a hybrid scheme: ‘if the just-released problem is at least 95% difficult, try to solve it; else, just try to solve the old random number problem.’ Any central authority attempting to water down the proof-of-work security would just see his signed problems ignored in favor of the old inefficient scheme, and so would have no incentive to release non-real-world problems even if a third party (like a government) attempted to coerce them.

A more P2P scheme would be to have clients simply verify any solution for a set of transactions, and let anyone supply problems so users can pick which problems they work on. Maybe your mining pool has a SF bent so you donate your collective power to solving SETI@home problems, while my mining pool prefers to work on protein folding problems. But this would seem to run into the same problem as before: how do you know a third mining pool isn’t “solving” trivial instances it made up for an otherwise perfectly acceptable NP problem?

If we had some way of estimating, we could implement this P2P scheme as well: users could subscribe to their favorite charity publishers of problems (or a publisher could pay the solver a sum to incentivize participation), and if any publisher attempted to weaken the system by publishing trivially solved problems, peers would simply reject that problem in favor of a different publisher's problem & solution, or fall back to the plain hash puzzle.

How could we measure difficulty? Obviously you could measure difficulty by trying to solve the problem yourself: if it takes 1000 seconds, you know it’s no harder than 1000 seconds. But what good would this do you? You could broadcast a message to all your peers saying “these problems are supposed to take at least 20000 seconds to compute, but this only took 1000 seconds!” but you have no proof of this; they could do the check themselves so as to reject trivial solutions & their linked sets of transactions, but if peers rechecked work just because some stranger on the Internet cried foul, they’d spend all their time rechecking work and the system would fail.

Does this magical way of estimating difficulty exist? I don't know. I've asked, and have been pointed at imperfect predictors: random 3-SAT instances apparently show a curious and well-known spike in difficulty when the ratio of clauses to variables crosses a critical threshold of roughly 4.27, the "phase transition point". A randomly-generated problem can be inspected and its difficulty predicted with substantial accuracy: "Predicting Satisfiability at the Phase Transition" claims to reach "classification accuracies of about 70%".
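(A crude sketch of what such a screening rule might look like - just the clause-to-variable ratio of a randomly generated instance, not the trained predictor of the paper:)

```python
import random

PHASE_TRANSITION = 4.27   # approximate hardness peak for random 3-SAT

def random_3sat(n_vars: int, n_clauses: int, rng=random):
    """Generate a random 3-SAT instance as a list of 3-literal clauses."""
    return [tuple(rng.choice([-1, 1]) * v
                  for v in rng.sample(range(1, n_vars + 1), 3))
            for _ in range(n_clauses)]

def looks_hard(n_vars: int, clauses, window: float = 0.5) -> bool:
    """Flag instances whose clause/variable ratio sits near the phase transition."""
    return abs(len(clauses) / n_vars - PHASE_TRANSITION) < window

instance = random_3sat(n_vars=100, n_clauses=427)
print("clause/variable ratio:", len(instance) / 100,
      "-> treat as hard?", looks_hard(100, instance))
```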

SAT-satisfiability prediction is a good step, but still incomplete: 70% accuracy (ie. a 30% error rate) leaves far more room than an attacker needs to mount double-spend attacks simply by hoping that several easy instances get chosen, and it does not forbid offline pre-computing attacks.

William Carlos Williams

so much depends
upon

a red wheel
barrow

glazed with rain
water

beside the white
chickens.

Have you ever tried to change the oil in your car? Or stared perplexed at a computer error for hours, only for a geek to resolve it in a few keystrokes? Or tried to do yardwork with the wrong tool? (Bare hands rather than a shovel; a shovel rather than a rake, etc.)

So much depends on the right tool or the right approach. Think of a man lost in a desert. The right direction is such a trivial small bit of knowledge, almost too small a thing to even be called ‘data’. But it means the entire world to that man - the entire world.

So much depends on little pieces of metal being 0.451mm wide and not 0.450mm, and on countless other dimensions. (Think of the insides of a jet engine, of thousands of planes and even more tens of thousands of people not falling screaming out of the sky.)

Williams is sharing with us, in true Imagist style, a sudden realization, an epiphany in a previously mundane image.

Here is a farm. It seems robust and eternal and sturdy. Nothing about this neglected wheelbarrow, glazed with rain and noticed only by fowl, draws our attention - until we suddenly realize how fragile everything is, how much everything has to go right 99.999% of the time, how without a wheelbarrow, we cannot do critical tasks and the whole complex farm ecosystem loses homeostasis and falls apart.

(I sometimes have this feeling on the highway. Oh my god! I could die in so many ways right now, with just a tiny movement of my steering wheel or anyone else’s steering wheel! How can I possibly still be alive after all these trips?)

Simplicity Is the Price of Reliability

Why should we care about simple systems and long part lifetimes? Because for many things, the only way to build a reliable system is out of as few, and as reliable, parts as possible.

Kevin Kelly in “The Art of Endless Upgrades” notes a lesson from a home-repair story:

When we first moved into our current house, newly married, I had some caulking to do around the place. I found some silicon caulking that boasted on the tube that it was warranted for 20 years. Cool, I thought. I'll never have to do this again. Twenty years later, what's this? The caulking is starting to fray, disintegrate, fail. I realize now that 20 years is not forever, though it seemed that way before. Now that I am almost 60, I can see very permanent things decay in my own lifetime.

Consider a 20 year lifetime for a part. This may sound like a lot, and just like Kelly, you may intuitively feel that 20 years is close enough to 'forever' as to need little more thought. But each such part or object cannot be considered in isolation, because you have many objects. On a yearly basis, I photograph all my possessions for backup purposes and to sort through them; despite my best efforts to keep clutter down and to photograph multiple objects in the same photo, I find it takes more and more photographs every year, and for 2015, I had to take 281 photographs at probably at least 10 objects per photo, so perhaps 3,000+ individual objects (each of which may have many components and parts which can break in exciting & novel ways). Kevin Kelly notes that in an inventory of his own household, there were probably 10,000 objects, surprisingly close to the Inventory of Henry VIII of England's crown holdings of 17,810 items (pre-Consumer Revolution), but far more than a selection of Third World households, who supposedly averaged 127 objects. Self-storage complexes, overflowing garages, hoarders… Thousands of objects, at a minimum, without including infrastructure: the circuit breakers, the hot water tanks, the faucets, the showerheads, the light bulbs, the fans, the lamps, the switches, the floor & roof boards, the pipes and plumbing, the septic tank, the grinder, the windows - the list is fractal. Stuff breeds stuff, and kipple lurks under every corner.

But each of these objects can be a problem. They can be lost, or forgotten, or decay, or they can damage other things. (They can also be mental burdens.) If an object lasts 20 years or 7,305 days, but you have 10,000 objects, then on average something will break on a daily basis; worse, on a good 30 days a year, 3 objects will break simultaneously; on around 16 days, 4 will; and on a handful of days, 6 or more objects will break.46 (Fortunately, most objects breaking do not cause serious problems, and many objects easily last 20+ years and don't contribute too many failures. But if you've ever wondered why, in military histories, it seems like every ship or airplane is half-broken all the time and units routinely are disabled by mechanical problems, this is partially why: many moving parts and objects placed in harsh environments.) If one object breaking can cause another to break, things get even worse…
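(A toy Poisson model makes the arithmetic easy to redo; the exact day-counts depend on the modeling assumptions, so these figures are in the same ballpark as, rather than identical to, the ones above:)

```python
from math import exp, factorial

# 10,000 objects, each lasting ~20 years (7,305 days), means ~1.37 failures/day
# on average; model the number of failures per day as Poisson(lambda).
objects, lifetime_days = 10_000, 20 * 365.25
lam = objects / lifetime_days

def poisson(k: int, lam: float) -> float:
    return exp(-lam) * lam**k / factorial(k)

for k in range(6):
    print(f"{k} failures: ~{365.25 * poisson(k, lam):.0f} days/year")
print(f"6+ failures: ~{365.25 * (1 - sum(poisson(k, lam) for k in range(6))):.1f} days/year")
```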

I recently ran into an example of this. I use a Zeo EEG-based sleep tracking device, one of my favorite objects, which I have used to run many self-experiments. I also record my sleep using an Android smartphone accelerometer-based app, since I know my Zeo will not last forever (in particular, the rechargeable battery is expected to die within a few years) and I will most likely be forced to replace it with an accelerometer app.

Sometime in late 2015, my Zeo stopped working: the headset would not relay any data, whether or not it should have been fully charged. I concluded the battery had finally died as I had long feared; Zeo Inc closed a long time ago, so there are no replacement headsets (or if some can be found used, they will probably cost $200+ and be using original batteries as well), and one has to DIY by finding a random rechargeable battery, cracking open the headset, and soldering the new one in, which is all new to me. While I was getting around to this, the lights went out in my bedroom: they flickered irregularly, began going out for hours at a time, and then finally one day they went out entirely. This was irritating and made my bedroom difficult to use, but not a big problem, because of the 4 sockets in the bedroom, the socket nearest my bed (the one my smartphone, Zeo, and electric mattress for cold nights were all plugged into) still seemed to work. My landlord lackadaisically began to think about calling an electrician.

I did some work on repairing my broken Zeo: reading up on the topic further, ordering the new battery for $9, and, most tricky, cracking open the headset, which turned out to be almost impervious to my prying even when I put it in a vice. To my horror, when I finally wiggled a razor blade through a corner and started to crack it open, I found I had cut too far and severed a metal ribbon connecting the circuit board with the metal headband! I had no idea whether I had destroyed it or not, but I hoped that if I superglued the ribbon down, the physical connection would be enough for the EEG functionality to work. Unnerved by that, I didn't cut out the original battery and try to solder in a new one (likely botching it and destroying the irreplaceable headset); I would do that sometime later. After about a month of the blackout, my smartphone began having trouble recording through the night, apparently running out of power; I wasn't sure what was going on, but after verifying that the smartphone would charge properly while plugged into my laptop, and while plugged into its charger in a living-room socket, I concluded it had to be the power strip or loose placement in the bedroom power socket. While wiggling the charger around, the lights abruptly came back on, and I began hearing 'pop' sounds and noticed simultaneous flashes from behind the faceplate of the power socket. Immediately yanking everything out, I smelt burnt insulation and feared the worst: an electrical fire inside the walls. I unplugged everything, flipped the circuit breaker, and after monitoring my bedroom with a fire extinguisher for a few hours, insisted an electrician be called within the week. He came, and… the whole thing was nothing but a loose wire in the one socket I had thought was working! The poor connection caused it to heat up and burn the insulation, and blocked electricity from flowing 'downstream' to the lamps. The smartphone hadn't been charging because it wasn't getting enough electricity through the charger, and, as I quickly verified, the Zeo had been suffering the same problem and was perfectly functional. (The superglue had worked.)

I hadn't known it was possible for digital devices to appear to be turned on and receiving power while only getting partial power; I naturally assumed that if the Zeo turned on and appeared operational, then the power socket must be perfectly fine, and I was completely surprised that the one power socket I had verified as working - did it not power everything I had plugged into it? - was malfunctioning, had been for at least 3 months if not years, and was the root of multiple apparently unrelated problems. So one loose wire had caused a cascade of issues, eventually costing the electrician's repair bill, $9, 3 months of sleep logs, and a good deal of trouble on my part, and nearly destroying a nearly irreplaceable favorite possession.

Such a cascade of issues and odd edge-cases with interacting objects is common. With enough objects in a system such as a home, at least some objects will always be failing and can trigger interactions with other objects; these interactions cannot all be predicted, because each of the objects is complex in its own right and their behaviors can be understood only on a superficial, black-box level.47 Cook gives some principles in his well-known "How Complex Systems Fail":

  1. Complex systems contain changing mixtures of failures latent within them.

    The complexity of these systems makes it impossible for them to run without multiple flaws being present. Because these are individually insufficient to cause failure they are regarded as minor factors during operations. Eradication of all latent failures is limited primarily by economic cost but also because it is difficult before the fact to see how such failures might contribute to an accident. The failures change constantly because of changing technology, work organization, and efforts to eradicate failures.

  2. Complex systems run in degraded mode.

    A corollary to the preceding point is that complex systems run as broken systems. The system continues to function because it contains so many redundancies and because people can make it function, despite the presence of many flaws. After accident reviews nearly always note that the system has a history of prior “proto-accidents” that nearly generated catastrophe. Arguments that these degraded conditions should have been recognized before the overt accident are usually predicated on naïve notions of system performance. System operations are dynamic, with components (organizational, human, technical) failing and being replaced continuously.

  3. Catastrophe is always just around the corner.

    Complex systems possess potential for catastrophic failure. Human practitioners are nearly always in close physical and temporal proximity to these potential failures - disaster can occur at any time and in nearly any place. The potential for catastrophic outcome is a hallmark of complex systems. It is impossible to eliminate the potential for such catastrophic failure; the potential for such failure is always present by the system’s own nature.

  4. Post-accident attribution accident to a “root cause” is fundamentally wrong.

    Because overt failure requires multiple faults, there is no isolated ‘cause’ of an accident. There are multiple contributors to accidents. Each of these is necessary insufficient in itself to create an accident. Only jointly are these causes sufficient to create an accident. Indeed, it is the linking of these causes together that creates the circumstances required for the accident. Thus, no isolation of the ‘root cause’ of an accident is possible. The evaluations based on such reasoning as ‘root cause’ do not reflect a technical understanding of the nature of failure but rather the social, cultural need to blame specific, localized forces or events for outcomes.

My Zeo accident follows several of these: the Zeo could’ve failed at any time due to the battery getting too old, insufficient voltage or amperage, electrical surges from lightning strikes, the headband conductivity being destroyed by sweat/dirt, software errors etc. A failure in a separate object triggered a latent but unknown Zeo design flaw (not telling the user it is unable to charge the headset due to insufficient electricity and trying to anyway); this led to further errors. Root-cause attribution is difficult as none of the causes are individually sufficient, and even now I don’t see how I could have done better without being an electrician.

Trying to add in preventive objects or procedures runs the risk of creating even further problems; as one sysadmin on HN says:

Your fail-safes are themselves the source of faults and failures. I’ve seen, just to list a few: Load balancers which failed due to software faults (they’d hang and reboot, fortunately fairly quickly, but resulting in ~40 second downtimes), back-up batteries which failed, back-up generators which failed, fire-detection systems which tripped, generator fuel supplies which clogged due to algae growth, power transfers which failed, failover systems which didn’t, failover systems which did (when there wasn’t a failure to fail over from), backups which weren’t, password storage systems which were compromised, RAID systems which weren’t redundant (critical drive failures during rebuild or degraded mode, typically), far too many false alerts from notifications systems (a very common problem even outside IT: on hospital alarms), disaster recovery procedures which were incomplete / out of date / otherwise in error. That’s all direct personal experience.

When I was still contributing a bit to the DVCS darcs, I thought then-maintainer Eric Kow's efforts to set up automated tests & buildbots - well-intended as they were in trying to catch errors in patches, ensure that the code always compiled, and follow software-engineering best practices - wound up wasting far more time than they ever saved, because the testing infrastructure itself kept crashing and needed constant configuration & upgrades. Given limited contribution time, dealing with the buildbots did not seem like a good investment. This also applies to programmers tempted to automate or script stuff on their computers: how much time are you really saving, and was it worth the risk that one day a year from now you'll wake up and discover a cron job hasn't been running for months because of some Bash error? (Due to upgrading to a Shellshock-resistant Bash, in my case, which caused an inscrutable interaction with an exported function name in my Bash aliases file.)

Complexity and automation create a technical debt in the form of the fallout from future unreliability. “The price of reliability is the pursuit of the utmost simplicity. It is a price which the very rich find most hard to pay.”

November 2016 Data Loss Postmortem

In late November 2016, my Acer laptop broke and also corrupted the encrypted filesystem on the SSD, which, apparently due to design decisions, is very easily broken. Because I had been changing my backup strategy to use a new encryption key and had been lax about my manual backups, this made the most recent encrypted backups undecryptable as well, causing data loss going back at least 2 weeks. I review how all this came to pass despite my careful backups, and my countermeasures: a new higher-quality laptop, making backups of encrypted filesystem headers as well as the contents, and buying faster external drive enclosures.

Key lesson: if you use LUKS-encrypted Linux filesystems, know that they are super-fragile! You should backup a copy of the LUKS header (by running a command like sudo cryptsetup luksHeaderBackup /dev/sda5 --header-backup-file luks-header.bin.crypt) to avoid the header becoming corrupted & all data destroyed.

Investigating my cascade of laptop and backup problems (XKCD #1760: ‘TV Problems’)


On 2016-11-26, I woke to find my Acer Inc. laptop cold and dead. The power had not gone out overnight, according to the microwave and the UPS, and the power indicator on the laptop was not on. I swapped the power cord for a generic replacement power cord I'd gotten from Best Buy when the first laptop power cord had died on me back in May; this made no difference. Attempts to drain the battery and reset the laptop (this laptop had a hardwired battery) did nothing, and the laptop, then and later, remained totally unresponsive and showed no signs of activity. It was well and truly dead. My best guess is that the motherboard broke/was fried. This is an unusual way for a (non-Apple) laptop to die, and has never happened to me before (my problems have always been with the screen or keyboard or hard drive or battery/power supply). My best guess for why the motherboard died is that the stress of doing deep learning on the GPU finally caught up with it; I thought it was handling it fine because the CPU only occasionally had to be throttled, but the cumulative months of running char-rnn/torch-rnn for projects like RNN metadata may've been too much and it shorted out or something.

The laptop in question was an Acer Aspire V17 Nitro Black Edition VN7-791G-792A Gaming Laptop (4th-generation Intel Core i7-4720HQ 2.60GHz, 16GB memory, 1TB HDD + 256GB SSD, NVIDIA GeForce GTX 960M 4GB GDDR5, 17.3-inch, Windows 8.1 64-bit) which I had bought on 2015-07-24 from Newegg for $1,215.21; it had a 1-year warranty on it, which had expired. In any case, even had the laptop still been under warranty, I would've had to buy a new laptop simply because I am leaving on a long trip to England on 10 December, and there was no way I could ship the laptop to an Acer repair center, have the entire motherboard replaced (which would probably cost easily $500+), have it shipped back to me, and get all my data recopied & the OS set up in time for me to leave. So I had to buy a new laptop. To replace the Acer, I decided to go with a brand known for reliability, good laptop keyboards, power, and Linux support: a Lenovo ThinkPad P70. ThinkPad P70s have the 17-inch screens I love, and I added 32GB ECC RAM, because Rowhammer is frightening and I like the idea of more reliability, and expedited shipping. Perhaps because I was buying the day after Black Friday, it cost a lot less than I expected: $1,886.59. It shipped on 2016-11-30, and UPS soon said it was in Anchorage, Alaska, so it should definitely make it by 10 December; indeed, it arrived on the evening of 1 December, well ahead of my expectations. (This suggests that paying $20 for expedited shipping might not have been that necessary, but I think it was the right choice - I didn't want to risk it.) Had it not arrived in time, my backup plan was to drive to Best Buy, buy a regular laptop, and simply return it afterwards.

I pulled out the 1TB Samsung SSD I’d put in and inserted it into the old half-broken Dell Studio 17 laptop I keep as my backup laptop. (It’s half-broken because about a third of the keyboard is missing or about to break, there is a permanent line of LEDs lit up on the screen, and it’s ancient - 4GB RAM, only 2 USB ports etc.) The drive failed to boot with FS corruption, and then LUKS decryption with the passphrase failed despite at least 30 tries on my part. The SSD had been corrupted by the motherboard in its death throes, apparently. Even though the entire partition was there and even most of the LUKS header was there and could be seen using cryptsetup luksDump /dev/sda5, the key was not available. Reading up, I learned that LUKS-encrypted hard drives have no redundancy or protection from any kind of corruption, and this is by design/laziness, claimed to be in the interests of security, to quote the cryptsetup man page (emphasis added):

LUKS header: If the header of a LUKS volume gets damaged, all data is permanently lost unless you have a header-backup. If a key-slot is damaged, it can only be restored from a header-backup or if another active key-slot with known passphrase is undamaged. Damaging the LUKS header is something people manage to do with surprising frequency. This risk is the result of a trade-off between security and safety, as LUKS is designed for fast and secure wiping by just overwriting header and key-slot area.

(“Did you just tell me to go f—k myself?” “I believe I did, Bob.”)

This is BS because the security rationale makes no sense (how many milliseconds would it take to erase a second header stored elsewhere on the partition? has anyone ever in the history of the world been saved from law enforcement or hackers because there was only one header rather than two?); I suspect the real reason is that the maintainer admits in bug reports that the current LUKS format doesn’t allow for multiple copies (1, 2), and this is just them engaging in sour grapes about how It’s Actually A Good Thing. I can safely say I had no idea whatsoever that Linux encrypted drives were this fragile and permanent data loss so trivial, or that we are expected to do things like sudo cryptsetup luksHeaderBackup /dev/sda5 --header-backup-file luks-header.bin.crypt if we want to avoid it. Such fragility is itself counter to security as it encourages users to not use encryption: they have to choose whether to be mugged by bitflips or by the FBI.48 What a choice.

A lesson is learned but the damage is irreversible. This was unfortunate, but OK, I have a good backup situation. It was well-timed, if anything, as I had been busy on 2016-11-15 improving my backup situation by creating a fresh un-passphrased PGP key X for unattended backups and setting up a Backblaze B2 account for a remote off-site backup (Backblaze storage is ~10x cheaper than Amazon S3). So I had available for recovery of my data:

  1. 3 passphrased LUKS-encrypted external USB-2 drives, which I manually rsync to at irregular intervals, at my whim (partially because they are so slow), rotating among them; stored in ziplock bags in a locked fire safe in a separate room. These had full backups as of 1 November, 5 November, and 14 November. Useful but outdated.

  2. an incomplete Backblaze full backup, using duplicity backups encrypted to key X. It was going to take a week or two to do the first full upload, so unsurprisingly it had been interrupted by my Acer laptop’s untimely demise, and was of no use; oh well.

  3. a plugged-in USB-3 4TB external drive to which a cron job did daily duplicity backups encrypted to X; I could see the new duplicity archives created at midnight, so the incrementals were all there. (This external drive is not passphrased LUKS-encrypted, since it's a RAID and I didn't want to risk breaking it, and I quickly filled it up to the point where I couldn't reformat it for lack of anywhere to copy the contents; so I left it as an NTFS FS and simply made sure that all duplicity archives backed up to it were encrypted - same thing in the end.) Very useful - I could pull my data off it quickly and I would lose nothing more than some background activity since midnight. All I needed was key X to decrypt it.

At this point horror dawned. My most recent immediately accessible backup was the manual backup drive done on 14 November, but I had created key X on 15 November. Copies of key X existed on the SSD (whose master key had been corrupted and could not be decrypted), on the USB-3 external drive (all of which was encrypted to key X), and on Backblaze (also encrypted to key X) - and nowhere else.

What happened was that I had simply assumed that my regular backup procedures would migrate key X onto the manual external drives, and then in any disaster, I would simply copy over all the data from one of them and grab incrementals from Backblaze or the USB-3 external drive. This was almost right, and off only by a day. But so much of my time was used up with Thanksgiving festivities and writing and digging in a new Ethernet cable and shopping that my usual manual backup routine of syncing every 5 days or so fell into abeyance. This made my backup improvements merely a complex Verschlimmbesserung (an 'improvement' that makes things worse).

So I’d lost 16 days of data.

The timing was really remarkably bad.

The previous night, I had wondered mentally if I spent too much on backups. That morning, staring at my bricked laptop + 16 days of data loss, I knew I had spent too little.

What is despair? I have known it—hear my song.

James Mickens, “The Night Watch”

The 5 stages of backup & data loss grieving:

  1. Denial: it’ll work if I hold the power button; I must’ve mistyped the password, LUKS isn’t broken; I have incremental backups!

  2. Anger: no header file copies at all‽ Curse you LUKS developers! If only I had made even one backup more recently!

  3. Bargaining: even one backup would be enough, I can survive missing a few days, one of the other drives…

  4. Depression: gone, all gone, 14 days of data… I don’t even remember what I’ve lost, why I write things down in order to forget them

  5. Acceptance: "vanity of vanities, all is vanity". "Look upon my works, ye mighty, and despair." Shoganai ("it can't be helped"). At least I didn't lose as much as Shōtetsu. Let's draw up a list of what's been lost and figure out how to recreate it.

Thus began my short stay in computing hell…

The data loss was bad enough, but the rest added insult to injury. I wasted almost 2 days in the Ubuntu LiveCD trying to copy the /dev/sda5 partition off the SSD before I realized that creating a bzip2-compressed disk image of an encrypted partition was, besides being very slow over the ancient laptop CPU & USB-2 connection, pointless and a huge waste of my time. Then I tried to install Ubuntu 16.10 on the Dell Studio 17, only to encounter a baffling array of glitches, crashes, reboots, and extremely slow hard drive write speeds, which went away when I reinstalled 15.04; so apparently the Dell Studio 17 either is having internal hardware issues, or has become so old that standard Linux distros are incompatible out of the box (which is just peachy), or the Ubuntu release is broken. 15.04 at least works, but the keyboard remained a PITA to use for anything, and I also had to fight issues with the old software in 15.04 (eg. my Xmonad configuration doesn't work, the versions of Xmonad shipped are that ancient; and the Chromium is too old for the Ledger extension, so I can't use my bitcoins without finding someone who's backported Chromium 50 to 15.04…). And I can't work around the broken laptop keyboard by plugging in a USB keyboard unless I want to also be unable to plug in any of the USB drives, because it has only 2 USB ports and one is used by the Logitech trackball. Further, Acer's website and support are notoriously bad, and they didn't help at all (after waiting 40 minutes in chat with a thorough description of my problem to get a price quote on a repair, they simply disconnected me).

Data recovery proved tricky. I wrote down everything I could possibly think of which I had done in those 16 days, got copies of IRC logs for hints as to my activities, synced with the Github mirror of Gwern.net to recover (most of) my writing during that period, and tried to match it all up with my offline activities. I definitely lost 2 weeks of data on some of my self-experiments and other metrics, along with some new music I'd downloaded from What.cd and whose name I've forgotten, and some Geocities-related char-RNN training logs + checkpoints, all of which is sad but not a huge deal. In retrospect, the data loss doesn't seem to be that bad (although I've spent several days working through it), in part, ironically, for the same reasons that I had slackened on manual backups. Of course, this assumes I haven't forgotten that I've forgotten having done something important…

(In July 2017, I realized that my arbtt window-tracking logs, which I keep for productivity analyses, hadn’t been updated since December 2016, costing me over half a year of data; I discovered that I had deleted my local user binaries because they were compiled on the previous system, but had forgotten to reinstall arbtt locally and my cron job hardwired the local version. This was because the arbtt packaged by Debian/Ubuntu was too outdated so I installed from HEAD, and then the error from the missing binary was not reported in system emails because the call is wrapped in an infinite loop which ignores errors; and that was because arbtt would segfault once a month & stop recording data. I reinstalled arbtt, checked that the latest version worked & removed the hardwiring, and added a daily arbtt report which would expose any lack of data collection - but the lost data is forever.)
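
Such a silent failure is easy to guard against with a simple freshness check; a minimal sketch, assuming arbtt’s default log location and an arbitrary 2-hour threshold (cron will email any output, so silence means data is still being collected):

    #!/bin/sh
    # complain if the arbtt capture log has not been written to recently
    LOG="$HOME/.arbtt/capture.log"   # assumed default location; adjust for a local install
    if [ -z "$(find "$LOG" -mmin -120 2>/dev/null)" ]; then
        echo "WARNING: $LOG has not been updated in >2 hours - is arbtt-capture running?"
    fi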

So, post-mortem time - one always prepares to fight the last war. What was the primary cause for this huge waste of time, money, and effort; what other causes worsened it; and how might all be prevented in the future?

    • Cause: the proximate cause of data loss was the Acer laptop bricking. This could be due to Acer’s build quality (not always sterling) or due to the stress of GPU neural network training.

    • Solution: I have bought a higher quality brand & model (Lenovo ThinkPad) and will avoid extensive laptop GPU training in the future (either build a deep learning desktop or use the new Amazon GPU instances, which are far better than the previous ones).

    • Cause: the next-proximate cause of data loss was the LUKS header being corrupted.

    • Solution: in future OS installations, dump a copy of the LUKS header using ‘cryptsetup’ and archive it in the local FS (where it can be recovered from backups) and online. I’ve added this to my OS installation checklist, done so for the new ThinkPad, and also for the 3 external drives with encrypted FSes.

    • Cause: the penultimate cause of data loss was key mismanagement and key X’s secret key only being available in key-X-encrypted backups, creating a vicious circle.

    • Solution: whenever creating new keys for backups, export the secret key to a passphrase-protected file and upload that to the Internet.

    • Cause: another penultimate cause of data loss was that writings which could’ve been public (eg. my ongoing chocolate reviews) on Gwern.net and, thus, retrievable from Github, were instead kept private. For example, my ‘model criticism’ demonstration wasn’t really done when I started it on 21 November, but on a whim I committed it anyway, and so I don’t have to rewrite the whole damn thing.

    • Solution: I am going to review my drafts to see what can be added.

    • Cause: the ultimate cause of data loss was manual backups being irregular because they were too slow/painful.

    • Solution: I am upgrading the 3 manual drives’ USB-2 enclosures to newer, faster USB-3 enclosures (2 2.5-inch and 1 3.5-inch enclosure) and scheduling regular reminders to do a manual backup so it can’t slip my mind so easily. Another source of friction is the varying sizes of the HDDs: I have to be careful to use rsync rather than duplicity on the 500GB one, otherwise it’ll run out of space quickly. So I’m replacing that with a 2TB one, and after finishing upgrading everything, I’ll switch the rsyncs to rdiff-backup, so everything will be consistent: either encrypted duplicity, or rdiff-backup to encrypted drive. (Total cost: $155; a sketch of the relevant commands appears after this list.)

    • Cause: I wasted most of a day and was much slower in restoration because of the Dell Studio 17’s incompatibility with a reasonably recent Ubuntu, the slow USB-2 enclosures, and a shortage of USB ports on the Dell laptop.

    • Solution: the speed is partially addressed by the USB-3 enclosure upgrade (at least, for restoring to a different laptop, one with USB-3 ports); the USB port shortage I am addressing by buying a USB hub49; and the incompatibility is a little more difficult. I am concerned about the glitches, but I’m not sure I want to scrap the Dell laptop entirely and look for some cheap regular $600 laptop on sale after Christmas to use as my backup laptop. Is that going too far? Maybe. Building a desktop would help inasmuch as then the ThinkPad would count as the backup computer, but building a proper desktop is a commitment I’m not sure I want to make after spending $1,800 on a laptop…
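
The LUKS-header, key-export, and backup solutions above each come down to a command or two; a minimal sketch, where the device, key ID, and destination paths are placeholders to adjust:

    # dump the LUKS header so a corrupted header is no longer fatal
    cryptsetup luksHeaderBackup /dev/sda5 --header-backup-file luks-header-sda5.img

    # export the backup key's secret key, wrapped in a passphrase, for offsite storage
    gpg --export-secret-keys --armor "$KEYID" | \
        gpg --symmetric --armor --output backup-secret-key.asc.gpg

    # the two backup styles mentioned: encrypted duplicity, or rdiff-backup to an already-encrypted drive
    duplicity --encrypt-key "$KEYID" ~/ file:///media/backup-2tb/duplicity
    rdiff-backup ~/ /media/backup-500gb/rdiff-backup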

Hopefully all this should prevent any recurrence or make dealing with it far less difficult. Nevertheless, it’s cost me a ton of time, money, and stress. A lesson is learned but the damage is irreversible.


Fortunately, the ThinkPad arrived quickly, on 2016-12-01. I immediately began installing & copying over. Despite the certification of ThinkPad P70s with Ubuntu 14.04, the install was not super-smooth. I pulled out the default HDD with Windows on it (which did work when I booted it up to test) and put in my 1TB Samsung SSD, and toggled the graphics in the BIOS to “discrete” rather than “hybrid”. The Ubuntu 16.10 LiveCD wouldn’t even boot (I tested the laptop thoroughly with the builtin BIOS tools and the optical disk drive was fine, just like everything else), so I chucked it (it’s now betrayed me twice, with the Dell and the ThinkPad, so I think the CD is just broken) and later burned another.

Installing with 15.04… initially went well but on rebooting gave me problems: it would freeze after GNU GRUB loaded Ubuntu. After much puzzling, I learned there was a bootloader graphics problem - I could work around it by either using ‘rescue’ mode to bypass the GRUB splashscreen stuff and simply tell rescue mode to boot as normal, or I could type in the FS encryption password blind and then it would work! After this, things went much smoother albeit with a ton of waiting on the USB-2 enclosures to sync with the SSD (the new enclosures not yet having arrived). The audio required an upgrade of the kernel module snd_hda_intel, and I experienced problems when I tried to change the ext4 filesystem mount options (apparently it no longer allows you to change options like journal=writeback in /etc/fstab as it will throw an error and dump you into a read-only root filesystem on boot…?). Ubuntu 15.04 turns out to no longer be supported by Ubuntu, where “supported” means “won’t even let you upgrade to the next OS version”, so I had considerable difficulties figuring out how to tweak apt to get it to upgrade to 15.10 so it could then upgrade to 16.10, but eventually that worked out. Then I began taking advantage of the 32GB of RAM by putting /tmp/ & Firefox’s cache into it. Very nice.
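
For what it’s worth, the apt tweak and the RAM trick each amount to only a line or two; a rough sketch of the usual approach (an assumption on my part - the archive rewrite and the tmpfs size need adjusting to the actual install):

    # point an end-of-life Ubuntu release at the old-releases archive so apt & the release upgrader work again
    sudo sed -i 's/\(archive\|security\)\.ubuntu\.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
    sudo apt-get update && sudo do-release-upgrade

    # /etc/fstab entry putting /tmp into RAM to exploit the 32GB (the size= is a cap, not a reservation)
    tmpfs  /tmp  tmpfs  defaults,noatime,size=8G  0  0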

I then established that Internet, Firefox, IRC, syncing Gwern.net, torrents (using rtorrent), audio, video, Mnemosyne (for spaced repetition), backups to external drives & Backblaze, the trackball, email, and some other things are all working without any noticeable problems. The WiFi works out of the box, surprisingly, not even requiring an apt-get installation of a proprietary driver. Display brightness appears to not be supported based on dmesg errors, and I haven’t checked laptop suspend yet. The boot thing is still an issue but not a major one.

Hardware-wise, I am enjoying the ThinkPad so far. The keyboard is decent with an acceptable layout, the screen is at least as good as (and may be somewhat larger than) the Acer was, the battery & SSD were easy to install and the hardware was not hostile at all with everything labeled, the BIOS is nicely featured, the 32GB RAM is very nice to have, the 4 USB ports save me from port starvation, I appreciate that the Ethernet port is in the back of the laptop, the laptop feels sturdy, the lid overhangs slightly so it’s easy to open which is a nice touch, my cat walking on the touchpad hasn’t crashed X yet… So far so good.

Cats and Computer Keyboards

Moved to Cat Sense.

How Would You Prove You Are a Time-Traveler From the Past?

SF/F fiction frequently considers the case of a time-traveler from the future to the past, who can prove himself by use of advanced knowledge and items from the future. In the reverse case, a time-traveler from the past to the future wishes to prove he is from the past and that time-travel is real. How can he do this when all past knowledge is already known, and a break in any artifact’s chain of custody is more likely than time-travel being real? I suggest 8 methods: carbon-14 nuclear isotope dating of their body, as isotopes cannot be removed; sequencing of their genome to check consistency with pedigree, as human genomes cannot be synthesized or edited on a large scale; selection & mutation clocks, likewise; immune system signatures of extinct or rare diseases such as smallpox; and accumulated pollution such as heavy metals, which is difficult & dangerous to fake. While these methods may not offer conclusive proof, since any human system can be subverted with enough effort, they can provide enough evidence to launch research into time travel and, eventually, a definitive finding.

An accidental time-traveler from the future to the past, the A Connecticut Yankee in King Arthur’s Court scenario, has it easy; even though they will have a hard time adapting to the past and may die untimely and will discover most of their knowledge is too vague or relies on a vast invisible infrastructure, they may possess impressive technological artifacts & will still remember enough to refute contemporary theories & provide simple experimental proofs of their claims & help people avoid dead ends. Aside from accelerating scientific and technological development by possibly centuries (possibly more if the time travel can be repeated), they also prove the existence of usable backwards time travel, which would have cosmic implications.

What about the other way? What about an accidental time-traveler from the past to the future? This scenario is not so immediately consequential but would have the same implication, as most forward time travel mechanisms ought to be impossible in our universe or for people of the past or only allow backwards time travel (wormholes, relativistic vehicles, Tipler cylinders, FTL travel, exotic space-time geometries with closed timelike curves) and so a verified instance of forward time travel implies either one of those remote possibilities or some entirely unknown way, all of which imply our understanding of the world is deeply wrong (eg. availability of relativistic vehicles in the past implies aliens or a totally wrong human history).

The forward scenario is much harder. For concreteness, let’s imagine a mad steampunk scientist from 1850s Victorian England. How would we distinguish them from a con artist or prankster or lunatic? The time traveler doesn’t benefit from temporal asymmetry in their education or ambient tacit knowledge, since all science in the 1850s is well-known now, and anything about Victorian England that they know which we could verify, presumably they could have looked up in the same databases. Likewise, accents can be trained with sufficient effort, polygraphs and other forms of lie detectors like fMRIs are beatable with training or unreliable, and any backstory like a scientist disappearing in an explosion in the 1850s proves nothing. If they had traveled forward deliberately, they could have attempted to create hidden but verifiable knowledge like objects buried in particular places or letters left with lawyers, but even that may not provide sufficient evidence - how can one be sure that the objects were not buried more recently or the law offices were not broken into, or a genuine long-term letter on some mundane issue like an inheritance has not been swapped with a time-traveller-related letter, or that the excavators themselves can be trusted (eg. Mormonism’s “golden plates” whose evidence amounts to no more than the questionable printed testimony ascribed to the “witnesses”)? Verifying the physical age of a letter or ink is inadequate as there is plenty of genuine paper or substances surviving from Victorian England which can be used in a forgery. And of course they may have traveled forward accidentally or involuntarily, so it would be suboptimal to require verification which must be planned in advance. What proof could a time-traveller offer of being from Victorian England which could not be faked for reasonably small sums of money like <$1b? In general, Victorian people know and can do a subset of what a modern person could know or do, and hence while a modern person could prove to Victorians that they are from the future, how could a Victorian person prove to moderns that they are not a modern?

This may seem impossible because of the asymmetry, but there are in fact a few ways in which Victorian people are not subsets of modern people. The key insight is to think about ways in which things have been lost or become unavoidable with the passage of time which might distinguish Victorians from moderns:

  1. nuclear isotope dating

    Atomic bomb testing in the atmosphere has endowed all people worldwide from 1945 to ~2030 with excess carbon-14. Carbon isotopes cannot be filtered from food and cannot be removed from bodily tissues such as bones, nor can those bodily tissues be replaced by any method; all time-travelers from before 1945 will thus exhibit very anomalously low levels of carbon-14. This test would provide proof of time-travel from an era predating atomic testing (or backwards time travel from an era post-dating ~2030 assuming no future atmospheric bomb tests or nuclear warfare). The only way to cheat this test would be to either falsify the entire test, or perhaps spend billions of dollars over several decades on ultracentrifugation of raw ingredients to build a closed ecosystem to support a woman and grow the fake time traveler from conception to adulthood50. Given appropriate care in performing the test akin to those used in testing claims of psi (taking multiple samples at multiple times under the supervision of stage magicians, distributed to multiple blinded independent testing centers etc), a positive result would offer striking evidence in support of time travel.

  2. DNA: human DNA remains effectively immutable as of 2017; state-of-the-art human CRISPR editing allows only a few edits of high but <100% comprehensiveness before the viral vector becomes ineffective due to immune system response, and full human DNA synthesis would cost >$1b (and would be impossible to conceal, would affect commercial synthesis price curves, and any entity engaged in human DNA synthesis would use it for vastly more important things than hoaxing); thus any test based on DNA would indicate either being from the past or future. There are several possibilities; in decreasing order of strength:

    • family pedigree tree: due to recombination and the presence of many variants, any particular human genome can exist in few places in a tree without being astronomically unlikely; a time traveler’s relatives/descendants/ancestors should be easily located given their claimed biography & samples taken from them or their grave, or searched for & extracted from existing large DNA datasets like 23andMe or UK Biobank. Their sequenced genome will either fit perfectly into the claimed position & imputed genomes, or it will not. (It would be astronomically unlikely for just the most common 1 million SNP markers to randomly fit, and even less likely for full genome sequencing revealing family-specific mutations to be fooled by chance.) Like carbon-14 levels, the only way to cheat this test would be to fake the test.

    • complex trait polygenic scores: in Western countries, particularly the UK, there has been selection for & against certain traits (such as height or education, respectively) which will increase/decrease the respective polygenic scores. The polygenic scores are individually weak evidence but aggregated across many traits, may provide a worthwhile amount of evidence.

    • ancestry admixture: the UK has experienced considerable immigration since the 1850s. Immigration admixture is evidence for a modern origin and thus its absence for a Victorian origin.

    • mutation molecular clock: more generically, as mutations are always accumulating, if only neutral ones, a Victorian genome will look somewhat different than, and have somewhat fewer mutations than, a contemporary genome (especially given relaxed selection/dysgenics). This test is probably too weak to be worth considering.

    • gut & skin microbiota species & mutation clock: microbes have much shorter generation times, so while individual microbiomes are modifiable and not nearly as reliable as a test based on one’s unalterable genome, it may be possible to try to measure whether the time traveler’s microbiomes look properly archaic and similar to what would be estimated with a Victorian diet. The composition of the microbiomes could also be checked for plausibility.

  3. immune system: one of the signature accomplishments of public health, and one of the major forms of progress since Victorian England, is the development of germ theory, a wide array of vaccinations, and the suppression or extinction of many infectious diseases & parasites.

    This offers a variety of checks. Blood samples can be taken to measure antigen levels and immune system response to various agents. The time traveler may have been infected with smallpox and now be resistant; I don’t know if it is possible to distinguish a past smallpox infection from a cowpox inoculation or smallpox vaccination (cowpox is a different species, as is the modern smallpox vaccine’s Vaccinia, but they are similar enough to offer protection, so perhaps they are similar enough that a vaccinated person cannot be distinguished from a survivor), but if it is, it would show that either they are a time traveler from a period where smallpox was endemic (such as the 1850s) or the Russian/American biowarfare stockpiles have leaked (catastrophic global security news). Other diseases are still prevalent, so they only imply the weaker conclusion that the time traveler, if fake, was willing to hazard their health.

  4. pollution: another major change from Victorian England to modern England is massive improvements in the quality of the environment.

    Lead, mercury, fine coal dust, heavy metals - all serious issues among even the wealthy of Victorian England, present at levels radically unacceptable & unseen except in toxic Superfund sites. Tests of their blood, lungs, and bones would indicate levels hardly ever seen in modern times. Similar to nuclear isotope dating, these poisons can accumulate over a lifetime; while it is possible for someone to deliberately poison themselves or a child for many years to provide verisimilitude for a fake time traveler, like the infection or isotope strategy, it would be a hazard to their health and evil.

Are there any others? Perhaps now-extinct plants’ pollen or foodstuffs? Or some long-abandoned Victorian medical procedure which leaves unique traces? Maybe.

So that is ~8 ways to check whether a Victorian time traveler is really from Victorian England. Most of these would work for other time periods as well, possibly even better (polygenic scores & ancestry, mutational clock, microbiota) although some would not (pollution may not be a major factor in various times & places, and the family pedigree tree doesn’t work well if the family cannot be sampled). Of these, the first 2 are extremely difficult or impossible to beat without spending billions of dollars, and several are dangerous to defeat.

On a more philosophical level, is it even possible to prove you are a Victorian time traveler? Suppose one did pass all of these tests; given how incredibly unlikely time travel is based on a century of physics research, the absence of all other time travelers or traces thereof, how it would upend our understanding of the universe & human history, isn’t time travel (forward or backwards) like psi in that a flaw in the proof will always be more likely than it being true? Wouldn’t it be more likely that passing the ancestral DNA test was due to a sophisticated hacking campaign targeting the lab’s equipment or reports, or a break-in to tamper with calibration samples, than the person actually being a bewhiskered bewildered scientist accidentally blown 2 centuries ahead? Wouldn’t it be more likely that they’d discovered a flaw in carbon dating or some ultra-low-carbon-14 foodstuff or some other issue in standard isotopic tests, or that they are a one-in-billions mutant freak who does not biologically age, than that our understanding of physics is so broken & incomplete that a Victorian gentleman might still be around to donate blood after stepping through a time portal?

With psi, the more rigorous the experiments became, the weaker the effect became. What once was secreting ectoplasm & communicating across time with ancestral spirits & moving tables with the power of one’s mind has been reduced to a slight excess of 1 bits in a radioactive random number generator’s output stream - if even that. The sheer puniness of the effect disproves any claim to being an evolved power of the human mind which could accomplish any of what has been ascribed to psi. With these time travel signatures, however, the effect should be large and easily observed. Nevertheless, I would say that it is not possible, based purely on the Victorian time traveler’s body, to ‘prove’ time travel to the level of rationally believing in it at P > 0.99; if I heard of such tests being done, I would not believe they had been done or done honestly.

What such tests can do is offer enough evidence to pursue time travel. Given the implications, time travel need not be 100% proven to be worth researching - just not be P~=0 as at present and with no feasible research program. Knowing it may be possible is half the battle; and what has been done accidentally should be doable intentionally. And accomplishing it will then be proof.

ARPA and SCI: Surfing AI (Review of Roland & Shiman 2002)

Moved to separate review page.

Open Questions

Moved to “Open Questions”.

The Reverse Amara’s Law

Amara’s law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” This is applicable to many technologies, especially computers, AI, the Internet, solar panels etc—the visionaries are right in the long run but frequently grossly overestimate how long it’ll take something to become universal. (More discussion in my review of The Media Lab.)

If the law is more than a triviality or tautology, then the reverse of it should be interesting to consider. It gives a 2x2 matrix: something can be overrated in the short & long runs, underrated in short & long runs, overrated short underrated long (Amara), and underrated short and overrated long (anti-Amara).

The first category is boring: any technology which failed and which anyone was enthusiastic about, since failing in the short run is generally how one proceeds to fail in the long run as well. That is how most failures go, and since most new things fail, the entries are innumerable, and the surprising thing is that we remember a CueCat or Segway at all. The second category is a little more interesting: a technology which keeps becoming more important, surprising people then and now. There are not as many clear examples here since at some point people tend to catch up and start gauging it appropriately—no one is going to underestimate the Internet’s long-term effects now. Something like AI might still be underestimated by many. The third is Amara’s law; many possible examples can be found by searching for mentions of it.

The fourth is hard. Something which exceeded expectations, shocking people, but then just as their beliefs start catching up, whiplashes them by suddenly stalling out or stopping, leading to ‘overshoot’ of predictions:

  1. Nuclear Bombs:

    the Manhattan Project and the first atomic bombings were apparently regarded quite casually by the US government & military (whatever the scientists later might portentously intone while recalling things). The bombings, however, represented one of the most important shifts in human history by inaugurating the Cold War and nuclear race to ICBMs & hydrogen bombs, and potentially to the fall of human civilization. Many involved deeply regretted it and their lack of foresight later.

    Indeed, early in the Cold War, a fatalistic attitude prevailed, that nuclear war over the coming decades was inevitable, among many other implications. Very few people, especially intellectuals or politicians, appear to have seriously expected that humanity would be lucky enough to make it to 2019 with not a single nuclear war, much less a single additional military use of atomic bombs; they would also be surprised just how tiny the ‘nuclear club’ was at its peak.

  2. the Apollo Project & Space Race:

    In 1940, hardly any person would have believed that humans would be engaged in routine space missions within 20 years, much less that in 1969, men would walk on the moon and do so repeatedly thereafter until the feat lost its novelty. After all, in 1940, not so much as an insect had been sent into space yet (leading to wacky speculation about the deranging psychological effects of space travel, among other things), and aside from a few marginal research projects and efforts, there was little human interest in space flight.

    The extraordinary success of rocketry and space travel, powered by the military-industrial complexes of Germany, Russia, and the USA, led to dreams by Wernher von Braun et al of Moon bases, Mars colonization missions, missions to Jupiter, asteroid belt mining… Who could blame them when humanity went from wooden propeller airplanes to orbiting space stations like Salyut 1/Skylab in <30 years? Even if the projections were not nearly as fantastical as the 1989 Rockwell International plan, they still wound up being fantastical in hindsight. The subsequent exploration & colonization of the solar system has been as disappointing as the Space Race & Apollo were extraordinary.

  3. military technology seems to be a possible source of examples: militaries are often accused of “fighting the last war”, which implies repeated instances of the reverse Amara’s law - being surprised by a new technology or warfare method introduced in a war, and then overcorrecting.

    • trench warfare in WWI doesn’t seem to’ve been all that expected, and then extensive preparations for trench warfare in WWII were not that useful to France

    • another WWII-related example might be aircraft carriers: drastically underestimated by battleship-centric navies initially, becoming lords of the sea, floating airbases, around which blue-water navies built their battle groups, and then perhaps increasingly overestimated as submarines/missiles/drones developed?

  4. more debatable examples:

    • zeppelins

    • fax machines

    • supersonic flight

    • asbestos

    • DDT

    • cryptocurrencies?

Worldbuilding: The Lights in the Sky Are Sacs

On page 217 of evolutionary biologist Geoffrey Miller’s 2011 book Spent, in the middle of some fairly interesting material on Openness to Experience, one reads:

…Our six verbal creativity tasks included questions like: “Imagine that all clouds had really long strings hanging from them - strings hundreds of feet long. What would be the implications of that fact for nature and society?”…

To make the obvious point: strings hundreds of feet long strong enough to support themselves and any real weight are better termed ‘ropes’. And ropes are heavy. There’s no obvious way to change physics to permit just ropes to not be heavy, in the same way you can’t remove fire & keep cellular respiration. (If we insist on the ‘string’ language and the implication that the strings are weak and thin, we can take some sort of arachnid tack, which would be either creepy or awesome.) So let’s engage in a little worldbuilding exercise and imagine alternatives, in the spirit of Carl Sagan’s Jovian ecology of floaters.

A cloud with a rope dangling is an awful lot like a balloon or lighter-than-air vehicles in general. How do they work? Usually by using hot air, or with an intrinsically lighter gas like helium or hydrogen. Both need good seals, though, which is something a biological organism can do. But where is an organism going to get enough heat to be a living hot air balloon? So maybe it uses helium instead, but then, where does it get helium? We get helium by applying hundreds of billions of dollars in R&D to digging deep narrow holes in the ground, which is not a viable strategy for a global population of clouds. So hydrogen? That’d work actually; hydrogen is very easy to obtain, just crack water! Even better, the organisms creating this hydrogen to obtain flight could reuse the hydrogen for energy - just burn it with oxygen! The Laws of Thermodynamics say that burning wouldn’t generate any new energy, so this isn’t what they feed on. But the answer presents itself - if you’re in the sky or, better yet, above the cloud layer, there’s something very valuable up there - sunlight. Trees grow so big and engage in chemical warfare just to get access to the sun, but our hydrogen sacs soar over the groundlings. There might be a similar competition, but the sacs have their own problems: as altitude increases, ambient pressure decreases (which is good) but temperatures plunge (bad) and other forms of radiation increase (ultraviolet?). As well, if our sacs are photosynthetic, they need inputs: water & carbon dioxide for photosynthesis, and the usual organic bulk materials & rarer elements for themselves. Which actually explains where our ropes are coming from: they are the sacs’ “roots”.

How could such a lifeform evolve? I have no idea. There are animals which glide (eg. flying squirrel), others which are dispersed by wind (spiders), and so on, but none that actually crack water into hydrogen & oxygen or exploit hydrogen for gliding or buoyancy. And there are serious issues with the hydrogen sacs: lightning would seem to be a problem… Still, we could reuse our ‘competition for solar radiation’ idea; maybe a tree, striving to be taller but running into serious engineering issues to do with power laws, tweaked its photosynthesis to divert some of the split hydrogen to storage vacuoles which would make it lighter and able to grow a little taller. Rinse and repeat for millions of years to obtain something which is free-floating and has shed much of its old tree-form for a new spherical shape.

Imagine that a plant or animal did so evolve, and evolved before humanity did. Millions of floating creatures around the world, each one with lifting capacity of a few pounds; or since they could probably grow very large without the same engineering limitations as trees, perhaps hundreds to thousands of pounds. When humanity gets a clue, they will seize on the sacs without hesitation! Horses changed history, and the sacs are better than horses. The sacs are mobile over land and sea, hang indefinitely, allow aerial assaults, and would be common. It’s hard to imagine a Great Wall of China effective against a sac-mounted nomad force! There’s barrage balloons, but those are impossibly expensive on any large scale.

More troubling, early states had major difficulties maintaining control. When you read about ancient Egypt or China or Rome, again and again one encounters barbarians or nomads invading or conquering entirely the state, and how they were, man for man, superior to the soldiers of the government. Relatively modest technical innovations meant that when the Mongols got their act together and refined their strategy, they conquered most of the world. Formal empires and states are not inevitable outcomes, as much as they dominate our thinking in modern times - they didn’t exist for most of human history, didn’t control most territory or people for much of the period they could be said to exist, and it’s unclear how much longer they will survive even in this age of their triumph & universalization. History is shot through with contingency and luck. That China did not have an Industrial Revolution and oddball England did is a matter to give us pause.

What happens when we give nomadic humans, in the un-organized part of history, a creature unparalleled in mobility? At the very least, I think we can expect any static agriculture-based empire (the Indus, Yangtze, Nile) to be strangled in its cradle. Without states, history would be completely different, with few recognizable entities except perhaps ethnicities. The English state seemed closely involved in the Industrial Revolution (funding the Age of Exploration, patents, etc.) and also the concurrent scientific revolution (it is the Royal Society, after all, and even Newton worked much of his life for the Crown). No state, no Revolution? As cool as it would be to ride a sac around the world, I wouldn’t trade them for science and technology.

But optimistically, could we expect something else to arise - so that the sac variant of human history would not be one damn thing after another, happy savages until a pandemic or asteroid finally blots out the human world? I think so. If a sac can lift one person, then can’t we tie together sacs and lift multiple people? Recycling ropes from dead sacs, we could bind together hundreds of sacs and suspend buildings from them. (I say suspend because to put them ‘on top’ of the sac-structure would cut off the light that the sacs need and might be unstable as well.) A traveling village would naturally be a trading village - living in the air is dangerous, so I suspect there will always be villages planted firmly on the ground (even if they keep a herd of sacs of their own). This increased mobility and trade might spark a global economy of its own.

I failed to mention earlier that the sacs, besides being a potent tool of mobility exceeding horses, could also constitute a weapon of their own: a highly refined and handy package of hydrogen. Hydrogen burns very well. If nothing else, it makes arson and torching a target very handy. Could sacs be weaponized? Could a nomad take a sac, poke a spigot into it, light a match and turn the sac into a rocket with a fiery payload on impact? If they can be, then things look very dim indeed for states. But on the flip side, hydrogen burns hot and oxyhydrogen was one of the first mixtures for welding. Our nomads will be able to easily melt and weld tough metals like iron. Handy.

I leave the thought exercise at this point, having overseen the labefaction of the existing world order and pointed at a potential iron-using airborne anarchy. Which of the two is a better world, I leave to the unknowable unfolding of the future.

Remote Monitoring

Desire: some way to monitor a freelancer’s activity (if they are billing by time rather than results).

Why? This enables better compliance and turns freelancers into less of a lemon market - allowing for higher salaries due to lower risk. Reportedly, such monitoring also helps one’s own akrasia - one could use it both while ‘working’ and ‘not working’, just with someone else (akin to coffee shops perhaps). The idea comes from Cousin It and Richard Hollerith’s https://www.lesswrong.com/posts/MhWjxybo2wwowTgiA/anti-akrasia-remote-monitoring-experiment (even if it wouldn’t go as far as letting one’s life be managed!).

Potential solutions:

  1. remote desktops: screenshots or video. Requirements:

    • cross-platform (Linux, Windows & Mac)

    • secure (eg. using SSH for transport, especially since we already use SSH for full-text access)

    • easily toggleable on and off

    Of the remote desktop protocols, only the VNC protocol is acceptable: it has many open source & proprietary cross-platform implementations for both client and server, and can be tunneled over SSH. (Nick Tarleton says Macs are already compatible with VNC clients.) TightVNC seems like it would work well. (One difficulty: the natural tool to use once a VNC server is running on the remote desktop is vncsnapshot, which does what you think it does, but the Debian summary warns it does not work over SSH. vnccapture may or may not work.)

  2. browser URL logging (since much work takes place in browsers). Requirements:

    • cross-browser (Firefox, Chrome, Safari; IE users can die in a fire)

    • at a minimum, passworded

    RescueTime has a paid group tracking set of features that seems designed for this sort of task. There are many other Internet possibilities. (I used to use the Firefox extension PageAddict which worked well for this sort of thing but is unmaintained; the most popular maintained extension, Leechblock, doesn’t export statistics. about:me would probably work, but wouldn’t be automated.)

  3. Other

    For example, my sousveillance script; it would be trivial to set up a folder and then add a call to the script like scp xwd-?????.png yyylj@euclid.u.washington.edu:/rc12/d16/yyli/screenshots/gwern/. This should be easily implemented for Macs, but for Windows? I am sure it is doable to write some sort of batch script which integrates with Task Scheduler, but I left Windows before I wrote my first script, so I don’t know how hard it would be.
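
    On Linux, the whole thing is only a few lines; a minimal sketch using standard tools (the remote host & path are placeholders, and it would be run from cron with DISPLAY set, eg. DISPLAY=:0):

        #!/bin/sh
        # grab a screenshot of the whole X display and copy it to a monitoring host
        TS=$(date +%Y%m%d-%H%M%S)
        OUT="/tmp/screen-$TS.png"
        import -window root "$OUT"                        # ImageMagick; 'scrot' also works
        scp -q "$OUT" monitor@example.com:screenshots/    # placeholder host & path
        rm -f "$OUT"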

Surprising Turing-Complete Languages

Split out to Turing-complete.

North Paw

A direction sensor belt (a ring of vibrators around one’s waist; the one closest to North buzzes gently). See the Wired article on it, and a 2009 article describing Sensebridge’s North Paw product. The feelSpace homepage is here. There is a thread on Hacker News about building one’s own. Here’s a version of the belt made using Arduino. There’s a quasi-commercial version available for $119-214 from Sensebridge and through Think Geek (video), intended for wearing on one’s ankle (the original ankle-based project is “Noisebridge”). There’s an Arduino-based belt, then there’s a hat! I think most approaches are just a little baroque; it might make more sense to have each vibrator be independent - with a vibrator, a compass, and a battery. After all, each one should be able to know independently of the others whether it is facing North or not.

I had been meaning to buy or build one ever since I read the Wired article back in 2007 or so, but had never quite gotten around to it. The topic came up briefly on Hacker News and I suddenly remembered my intention, and worried that, 7 years later, Sensebridge no longer sold them; fortunately, they still did, so I took the hint and decided to get around to it.

Purchase

I ordered a pre-assembled North Paw on 2014-08-14.

It arrived 20 August; calibration was straightforward. Smaller than it looked in the few photos online, and the packaging is accordingly brief:

North Paw in packaging

Stretched out, it reminds me of a watch, with its big black box and smaller blue battery attached by wires:

A North Paw laid out flat, with control box and battery pack visible

Curled up, it looks more reasonable to wear:

North Paw in closed configuration (quarter for scale).

The gray cable you can see in this closeup is how the chip/compass communicates with and controls the motors hidden in the band itself:

Close-up of North Paw case & externally-mounted battery (quarter for scale).

The fabric band can be unzipped to see and rearrange the little motors if their positioning is bad:

North Paw unzipped and 8 vibrators visible

And then putting it on to take a look:

Wearing a North Paw on one’s ankle, from the side and from above

My initial impression is that the vibrations are stronger than expected, but they turn off after a minute or so with no movement. Interesting sensation feeling the motors successively turn on/off as one spins; it’s also a dramatic demonstration of the ‘sensory homunculus’ - I can feel the individual motors very distinctly when I hold it with my fingers, but when I put it on, the skin around the ankle reports only the vaguest “there’s some buzzing over here, maybe” sensations. After about 6 hours of use (1 hour walking around the neighborhood with it on), I don’t feel transformed.

Instead, what I feel is sort of a ‘wall’ to the north of me, in the same way that when you’re in a very large open room such as a gymnasium or the Smithsonian Air & Space museum outside of Washington D.C., you don’t feel uprooted & disoriented like you might in a place like Iowa or out in the ocean where it’s flat and landmarkless as far as the eye can see; instead, you sort of orient yourself ‘against’ or ‘towards’ the nearest wall (however far away it might be) and you get closer or further to the wall as you move around. With the North Paw on, I feel vaguely like there’s a wall far away to the north of me that I rotate or shift with respect to (which I suppose is more or less the case with the magnetic north pole). An odd feeling.

The battery life seems to be at least 8 hours, and one recharges it with a USB Mini B cable. (I was worried I didn’t have one and would have to order a cable, but it turned out the hard drive enclosures for my laptop-size backup hard drives use such a cable.) There does not seem to be a battery life indicator, so I will simply charge it overnight.

On the third day, I noticed that the vibrations seemed to be weaker and harder to notice, although when I felt it with my fingers the motors seemed to be vibrating as strongly as ever, so perhaps the adaptation really is happening and my mind is gradually filtering out the vibrations. During my walk, the battery pack came loose: it turns out to be attached to the fabric circlet by an adhesive mount, so it can come loose. This is a little worrisome (what if it comes off during a violent or sudden movement? will it break the North Paw as it rips out?) but pressing it back in place firmly seemed to work, and for good measure, I used a black binder clip overnight.

The fourth day, I happened to take a drive. The vibration from driving seems to mostly drown out the North Paw, but I did notice that roads seem to be aligned north/south or east/west to a degree I’d never appreciated before. The binder clip didn’t work and the battery came off again. This time I simply took a blue rubber band and wrapped it around the battery/anklet, which works nicely.

By day 7, the vibration is definitely starting to be filtered out and is no longer annoying. It’s a little comforting, even. (For a few moments one night while going to sleep, I thought I could feel some vibrating on my left ankle. Phantom paw syndrome?)

After 4 weeks or so, I began to get a little disenchanted with it; I was feeling nothing particularly new.

Another few weeks after that, it no longer seemed to be working right as motors would not go off in sequence as I did a slow spin, so I put it aside for 2 months. After dealing with the holidays, I was playing with it some more (was the battery dead? were individual motors not working, perhaps because the wiring had come loose? was it not turning on right?) since I had spent a fair bit of money on it, when I noticed that doing a vertical rotation seemed to trigger all the motors - so perhaps somehow the calibration had gone wrong. I re-calibrated, and that fixed the problem. So I began using it again.

Conclusion

After some further on/off periods, I decided to stop use and sold it in October 2016 to someone else to try. The buyer noted that most of the motors didn’t seem to be working, and the wires/motors looked badly corroded (perhaps I sweated too much?); he bought 10-wire cable with IDC clips, shrink tubing, & pager motors, and was able to get the North Paw working again - it was only a wire/motor problem and the circuit box/battery were still working.

Ultimately, the North Paw was a failure for me. I felt little sense of mental rewiring or intuitive sense of direction, and nothing worth the money. Oh well.

Leaf Burgers

One thing I was known for in Boy Scouts (or so I thought) was my trick of cooking hamburgers with leaves rather than racks or pans. I had learned it long ago at a camporee, and made a point of cooking my hamburger that way and not any other.

The way it works is you take several large green leaves straight from the tree, and sandwich your burger. Ideally you only need 2, one leaf on top and the other on bottom. (I was originally taught using just one leaf, and carefully flipping the burger on top of its leaf, but that’s error-prone - one bad flip and your burger is irretrievably dirty and burned.) Then you put your green sandwich on top of a nice patch of coals - no flames! - and flip it in 10 minutes or so.

You’ll see it smoke, but not burn. The green leaves themselves don’t want to burn, and the hamburger inside is giving off lots of water, so you don’t need to worry unless you’re overcooking it. At about 20 minutes, the leaves should have browned and you can pull it out and enjoy.

What’s the point of this? Well, it saves on dishes. Given how difficult it is to clean dishes out there where there are no dishwashers or sinks, this should not be lightly ignored. It cooks better: much more evenly and with less char or burning of the outside. Given many scouts’ cooking skills, this is no mean consideration either. It’s a much more interesting way to cook. And finally, the hamburger ends up with a light sort of ‘leafy’ taste on the outside, which is quite good and not obtainable any other way.

Night Watch

The gloom of dusk.
An ox from out in the fields
comes walking my way;
and along the hazy road
I encounter no one.51

Night watch is not devoid of intellectual interest. The night is quite beautiful in its own right, and during summer, I find it superior to the day. It is cooler, and often windier. Contrary to expectation, it is less buggy than the day. Fewer people are out, of course.

My own paranoia surprises me. At least once a night, I hear noises or see light, and become convinced that someone is prowling or seeks to break in. Of course, there is no one there. This is true despite it being my 4th year. I reflect that if it is so for me, then what might it be like for a primitive heir to millennia of superstition? There is a theory that spirits and gods arise from overly active imaginations, or pattern-recognition as it is more charitably termed. My paranoia has made me more sympathetic to this theory. I am a staunch atheist, but even so!

The tempo at night varies as well. It seems to me that the first 2 years, cars were coming and going every night. Cars would meet, one would stay and the other go; or a car would enter the lot and not leave for several days (with no one inside); or they would simply park for a while. School buses would congregate, as would police-cars, sometimes 4 or 5 of them. In the late morning around 5 AM, the tennis players would come. Sometimes when I left at 8 AM, all 4 or 5 courts would be busy - and some of the courts hosted 4 players. I would find 5 or 6 tennis balls inside the pool area, and would see how far I could drop-kick them. Now, I hardly ever find tennis balls, since I hardly ever see tennis players. A night in which some teenagers congregate around a car and smoke their cigarettes is a rarity. Few visit my lot.

I wonder, does this have to do with the recession which began in 2008?

Liminality

Another year gone by
And still no spring warms my heart.
It’s nothing to me
But now I am accustomed
To stare at the sky at dawn.52

The night has, paradoxically, sights one cannot see during the day. What one can see takes on greater importance, becoming new and fresh. I recall one night long ago; on this cool dark night, the fogs lay heavy on the ground, light-grey and densely soupy. In the light, one could watch banks of fog swirl and mingle in myriads of meetings and mutations; it seemed a thing alive. I could not have seen this under the sun. It has no patience for such ethereal and undefinable things. It would have burned off the fog, driven it along, not permitted it to linger. And even had it existed and been visible, how could I have been struck by it if my field of view were not so confined?

One feels an urge to do strange things. The night has qualities all its own, and they demand a reflection in the night watcher. It is strange to be awake and active in the wrong part of the day, and this strangeness demands strangeness on one’s own part.

At night, one feels in another world, so changed is everything. To give an example: have you ever lain in the middle of a road? I don’t mean a road in the middle of nowhere like in a farmer’s fields in the Midwest where a car might pass once or twice a day; that is a cheat, and one feels nothing lying there. I mean an active road, a road one has watched thousands of cars pass through at unrelenting speed during the daylight hours, with fractions of seconds between them, a road warm with the friction and streaked black with the rubber. To go at 4 AM and lie down precisely on the yellow double line and gaze at the stars is an experience worth having as one reflects that at another time, to do this would be certain death. It is forbidden, not by custom or law, but by unappealable facts: “you do not lie down in a busy road or you will die.” But at night, you lie and you do not die. You are the same body, the road is the same road, only a matter of timing is different. And this makes all the difference in the world.

Often when doing my rounds I have started and found myself perched awkwardly on a bench or fence. I stay for a time, ruminating on nothing in particular. The night is indefinite, and my thoughts are content to be that way as well. And then something happens, and I hop down and continue my rounds.

For I am the sole inhabitant of this small world. The pool is bounded by blackened fences, and lies prostrate under tall towers bearing yellowed flood-lights. The darkness swallows all that is not pool, and returns a feeling of isolation. As if nothing besides remains. I circumambulate to recreate the park, to assure me it abides, that it is yet there to meet my eyes - a sop to conscience, a token of duty; an act of creation.

I bring the morning.

Two Cows: Philosophy

Philosophy two-cows jokes:

  • Free will: you have 2 cows; in an event entirely independent of all previous events & predictions, they devour you alive; this makes no sense as cows are herbivores, but you are no longer around to notice this.

  • Fatalism: you have 2 cows; whether they survive or not is entirely up to the inexorable and deterministic course of the universe, and what you do or not likewise, so you don’t feed your cows and they starve to death; you reflect that the universe really has it in for you.

  • Compatibilism: you have 1 cow which is free and capable of making decisions, and 1 cow that is determined and bound to follow the laws of physics; they are the same cow. But you get 2 cows’ worth of milk anyway.

  • Existentialism: You have two cows; one is a metaphor for the human condition. You kill the other and in bad faith claim hunger made you do it.

  • Ethics: You have two cows, and as a Utilitarian, does it suit the best interests of yourself and both cows to milk them, or could it be said that the interests of yourself, as a human, come above those of the cows, who are, after all, inferior to the human race? Aristotle would claim that this is correct, although Peter Singer would disagree.

  • Sorites: you have 2 cows who produce a bunch of milk; but if you spill a drop, it’s still a bunch of milk; and so on until there’s no more milk left. Obviously it’s impossible to have a bunch of milk, and as you mope over how useless your cows are, you die of thirst.

  • Nagarjuna: You have 2 cows; they are ‘empty’, of course, since they are dependent on grass; you milk them and get empty-milk (dependent on the cow), which tastes empty; you sell them both and go get some real cows. Moo mani hum

  • Descartes: You have 2 cows, therefore you are (since deceive me howbeit the demon may, he can never make it so that I have 2 cows yet am not as the predicate of ownership entails the predicate of existence); further, there are an infinite # of 2-cows jokes, and where could this conception of infinity have come from but God? Therefore he exists. You wish you had some chocolate milk.

  • Bentham: no one has a natural right to anything, since that would be ‘2 cows walking upon stilts’; everything must be decided by the greatest good for the greatest number; you get a lobotomy and spend the rest of your life happily grazing with your 2 cows.

  • Tocqueville: Cows are inevitable, so we must study the United Cows of America; firstly, we shall take 700 pages to see how this nation broke free of the English Mooarchy, and what factors contributed to their present demoocracy…

  • Gettier: You see 2 cows in your field - actually, what you see is 2 cow-colored mounds of dirt, but there really are 2 cows over there; when you figure this out, your mind is blown and >2,000 years of epistocowlogy shatters.

  • Heidegger: dasein dasein apophantic being-in cow being-in-world milk questioning proximate science thusly Man synthesis time, thus, 2 cows.

  • Husserl: You have 2 cows, but do you really see them?

Venusian Revolution

Greg Laughlin, interviewed in “Cosmic Commodities: How much is a new planet worth?”:

Venus is a great example. It does pretty well in the equation, and actually gets a value of about one and a half quadrillion dollars if you tweak its reflectivity a bit to factor in its bright clouds. This echoes what unfolded for Venus in the first half of the 20th century, when astronomers saw these bright clouds and thought they were water clouds, and that it was really humid and warm on the surface. It gave rise to this idea in the 1930s that Venus was a jungle planet. So you put this in the formula, and it has an explosive valuation. Then you’d show up and face the reality of lead melting on the surface beneath sulfuric-acid clouds, and everyone would want their money back!

If Venus is valued using its actual surface temperature, it’s like 10^-12 of a single cent. @home.com was valued on the order of a billion dollars for its market cap, and the stock is now literally worth zero. Venus is unfortunately the @home.com of planets.

It’s tragic, amazing, and extraordinary, to think that there was a small window, in 1956, 1957, when it wasn’t clear yet that Venus was a strong microwave emitter and thus was inhospitably hot.

The scientific opinion was already going against Venus having a clement surface, but in those years you could still credibly imagine that Venus was a habitable environment, and you had authors like Ray Bradbury writing great stories about it. At the same time, the ability to travel to Venus was completely within our grasp in a way that, shockingly, it may not be now. Think what would have happened, how history would’ve changed, if Venus had been a quadrillion-dollar world; we’d have had a virgin planet sitting right next door. Things would have unfolded in an extremely different way. We’d be living in a very different time.

Sounds like a good alternate history novel. The space race heats up in the 1950s, with a new planet at stake. Of course, along the lines of Peter Thiel’s reasoning about France & John Law & the Louisiana Territory, the ‘winner’ of such a race would probably suffer the winner’s curse. (Don’t go mining for gold yourself; sell pick-axes to the miners instead.)

Hard Problems in Utilitarianism

The Nazis believed in many sane things, like exercise, the value of nature, animal welfare, and the harmfulness of smoking.

Possible rationalist exercise:

  1. Read The Nazi War on Cancer

  2. Assemble modern demographic & mortality data on cancer & obesity.

  3. Consider this hypothetical: ‘If the Nazis had not attacked Russia and negotiated a peace with Britain, and remained in control of their territories, would the lives saved by the health benefits of their policies outweigh the genocides they were committing?’

  4. Did you answer yes, or no? Why?

  5. As you pondered these questions, was there ever genuine doubt in your mind? Why was there or not?

Who Lives Longer, Men or Women?

Do men or women live longer? Everyone knows women live a few years longer; if we look at America and Japan (from the 2011 CIA World Factbook):

  1. America: 75.92 years (men) vs 80.93 (women)

  2. Japan: 78.96 vs 85.72

5-7 years of additional bulk longevity is definitely in favor of women. But maybe what we are really interested in is whether women have longer effective lives: the amount of time which they have available to pursue those goals, whatever they may be, from raising children to pursuing a career. To take the Japanese numbers, women may live 8.6% longer, but if those same women had to spend 2 hours a day (or 1⁄12th of a life, or 8.3%) doing something utterly useless or distasteful, then maybe one would rather trade off that last 0.3%.
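
(A quick check of those percentages, assuming the Factbook figures above:)

    # women's vs men's life expectancy in Japan, and the fraction of a day that 2 hours represents
    echo "scale=4; 85.72 / 78.96" | bc   # = 1.0856, i.e. ~8.6% longer
    echo "scale=4; 2 / 24" | bc          # = .0833, i.e. ~8.3% of a day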

But notice how much we had to assume to bring the female numbers down to male: 2 hours a day! That’s a lot. I had not realized how much of a lifetime those extra years represented: it was a larger percentage than I had assumed.

The obvious criticism is that social expectations that women appear as attractive as possible will use up a lot of women’s time. It’s hard to estimate this, but men have to maintain their appearance as well; a random guess would be that men spend half an hour and women spend an hour on average, but that only accounts for a fourth of the extra women’s time. Let’s say that this extra half hour covers make-up, menstruation, waiting in female bathroom lines, and so on. (This random guess may understate the impact; the pill aside, menstruation reportedly can be pretty awful.)

Sleep patterns don’t entirely account for the extra time either; one guide says “duration of sleep appears to be slightly longer in females”, and Zeo, Inc.’s sleep dataset indicates a difference of women sleeping 19 minutes more on average. If we round to 20 minutes and add to the half hour for cosmetics, we’re still not even half the way.

And then there’s considerations like men becoming disabled at a higher rate than women (from the dangerous jobs or manual labor, if for no other reason). Unfortunately, the data doesn’t seem to support this; while women have longer lifespans, they also seem to have more illnesses than men53.

Pregnancy and raising children is a possible way to even things out. The US census reports a 2000 figure that 19% of women aged 40-44 did not have children. So the overwhelming majority of women will at some point bear the burden of at least 1 pregnancy. So that’s 9 months there, and then…?

That’s not even 1 year, so a quarter of the time is left over if we assume the pregnancy is a total time-sink but the women involved do not spend any further time on it (but also that the average male expenditure is zero time, which was never true and is increasingly less true as time passes). That leaves a decent advantage for women of ~2 years.

If you wanted to erase the female longevity advantage, you could argue that between many women having multiple children, and many raising kids full-time at the expense of their careers or non-family goals, that represents a good decade of lost productivity; averaging it out (81% of 10 years) reduces their effective lives by 8.1 years, and then taking into account the sleep and toiletry issues reduces the number by another 2 years, and now women’s effective lifetimes are shorter than men’s.

So at least as far as this goes, your treatment of childbearing will determine whether the longevity advantage is simply a fair repayment, as it were, for childbearing and rearing, or whether it really is a gift to the distaff side.

Politicians Are Not Unethical

Toward the end of my two-week [testosterone injection] cycle, I can almost feel my spirits dragging. In the event of a just-lost battle, as Matt Ridley points out in his book The Red Queen, there’s a good reason for this to occur. If you lose a contest with prey or a rival, it makes sense not to pick another fight immediately. So your body wisely prompts you to withdraw, filling your brain with depression and self-doubt. But if you have made a successful kill or defeated a treacherous enemy, your hormones goad you into further conquest. And people wonder why professional football players get into postgame sexual escapades and violence. Or why successful businessmen and politicians often push their sexual luck.

Andrew Sullivan, “The He Hormone”

Dominique Strauss-Kahn, while freed of the charge of rape, stands convicted in the court of public opinion as an immoral philanderer; after all, even by his own account he cheated on his wife with the hotel maid, and he has separately been accused of rape by a writer in France; where there is smoke there is fire, so Strauss-Kahn has probably slept with quite a few women54. This is as people expect - politicians sleep around and are immoral. Power corrupts. To be a top politician, one must be a risk-taking alpha male reeking of testosterone, to fuel status-seeking behavior.55 And then it’s an easy step to say that the testosterone causes this classically hubristic behavior of ultimately self-destructive streaks of abuse:

Power corrupts, unconsciously, leading to abuse of power and an inevitable fall - the paradox of power. Such conventional wisdom practically dares us to examine it. Politicians being immoral and sleeping around is a truism - people in general are immoral and sleep around. What’s really being said is that politicians do more immorality and sleeping-around than some comparable group, presumably upper-class56 but still non-politician white men57.

Revealed Moralities

But is this true? I don’t think I’ve ever seen anyone actually ask this question, much less offer any evidence. It’s a simple question: do white male politicians (and national politicians in particular) sleep around more than upper-class white males in general? It’s easy to come up with examples of politicians who stray - paying prostitutes, having a ‘wide stance’, sending photographs online (possibly to young pages), or impregnating mistresses - but those are anecdotes, not statistics. Consider how many ‘national-level’ politicians there are who could earn coverage with their infidelities: Congress alone, between the House and the Senate, has 535 members; add the 9 Justices; the President, Vice-President & Cabinet make another 17; and then there are the governors of each of the 50 states, for a total of 611 people.

A Priori Rates

If those 611 were merely ordinary, what would we expect? Lifetime estimates of adultery seem to center around 20%5859 although Kinsey put it at 50% for men. So we might expect 122-305 of the current set of national politicians to be unfaithful eventually! That’s 4-10 sex scandals a year on average (assuming a 30-year career), each of which might be covered for weeks on national TV. I do not know about you, but either end of that range seems high, if anything; it’s not every other month that a politician goes down in flames. (Who went down as scheduled in September or August 2011? No one?) Why does it feel the opposite way, though? We might chalk it up to the base rate fallacy - saying ‘that’s a lot’ while forgetting what we are comparing to.
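The back-of-the-envelope version, as a minimal R sketch (all the inputs are the figures quoted above, not new data):

```r
# Expected sex-scandal rate if the 611 national-level politicians were merely ordinary.
politicians   <- 611
adultery_rate <- c(low = 0.20, high = 0.50)  # lifetime adultery estimates cited above
career_years  <- 30
politicians * adultery_rate                  # ~122 to ~305 eventually-unfaithful politicians
politicians * adultery_rate / career_years   # ~4 to ~10 potential scandals per year
```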

And 611 is a very low estimate. After all, everyone lives somewhere. The 8 million inhabitants of New York City will read about and be disgusted by the assistant New York Governor, the Mayor of New York City and his flunkies, the New York State legislature (212 members); and then there are the nearby counties like Nassau or Suffolk which are covered by newspapers in circulation in NYC like Newsday. We could plausibly double or triple this figure. (I had not heard of many politicians involved in sex scandals - like Strauss-Kahn, come to think of it - so they do not even need to be famous.)

So we have noticed that there are ‘too few’ sex scandals in politics; the same reasoning seems to work for ordinary crimes like murder - there are too few! In fact, besides Congressmen rarely committing suicide60 (despite the considerable stresses), it seems that politicians in general are uncannily honest; the only category I can think of where politicians are normally unethical would be finance (bribes, conflicts of interest, insider trading by Representatives & Senators, etc). Why is this?

Why?

Self-discipline seems like an obvious key. A reputation is built over decades, and can be destroyed in one instant. But that seems a little too friendly - we’re praising our politicians for morality and we’re also going to claim it’s because they are more disciplined (with all the positive moral connotations)?

Maybe the truth is more sinister: they whore around as much as the rest of us, they’re just covering it up better.

And we need a cover-up which actually reduces the number of scandals going public to make this all go away and leave our prejudices alone.

Investigating

If all the media were doing was delaying reporting on said scandals, we’d still see the same total number of scandals - just shifted around in time. To some extent, we do see delays. For example, we now seem to know a lot about John F. Kennedy’s womanizing, but his contemporaries ignored even a determined attempt to spread the word; similar stories seem true of other Presidents & presidential candidates (FDR & Wendell Willkie & John Edwards61). This suggests a way to distinguish the permanent cover-up from the delayed cover-up theory: hit the history books and see how many politicians in a political cohort turn out to have had mistresses and credible rumors of affairs. Take every major politician from, say, 1930, and check into their affairs; how many were then known to have affairs? How many were revealed to have affairs decades later? This will give us the delay figure and let us calculate the ‘shadow scandals’: how many sex scandals there ought to be right now but aren’t.

(One could probably even automate this. Take a list of politicians from Wikipedia and feed them into Google Books, looking for proximity to keywords like ‘sex’/‘adultery’/‘mistress’, etc.)

Uses

The shadow rate is interesting since the mass media audience finds sex scandals interesting to a nauseating degree. (Why does the media spend so much time on something like Weiner? Because it sells.) The shadow rate ought to be negative if anything: there is so much incentive to report on sex scandals one might expect the media to occasionally make up a scandal, on the same principle as William Randolph Hearst and the Spanish-American War - it sells well. Any positive shadow rate shows something very interesting: that the media values the politicians’ interests more than its own, to the point where they are collectively (it only takes one story to start the frenzy) willing to conceal something their customers avidly demand.

In other words, the shadow rate is a proxy for how corrupt the media is.

Defining ‘But’

The word ‘but’ is pretty interesting. It seems to be shorthand for a pretty complex logical argument, which isn’t just modus tollens but something else, in much the same way that natural language’s if-then is not just the material conditional.

(Modus tollens, in a quick refresher, is ‘A → B’, ‘not B’, therefore, ‘not A’. Its counterpart is modus ponens, ‘A → B’, ‘A’, therefore, ‘B’.)

Most arguments proceed by repeated modus ponens; ‘this’ implies ‘that’ which implies ‘the other’, and ‘this’ is in fact the case, so you must agree with me about ‘the other’. It’s fairly rare to try to dispute an argument immediately by denying ‘this’ but conceding the rest of the argument; instead, one replies with a ‘but’. But what?

I thought about it, and I think we could formalize ‘but’ as a probabilistic modus tollens. Usually we know we’re dealing in slippery probabilities and inductions; if I make an argument about GDP and tax rates, I only get a reliable conclusion if I am not talking about the cooked books of Greece. My conclusion is always less reliable than my premises because probability intervenes at every step: the probability of both A and B must be less than or equal to the probability of A alone or of B alone. So, when we argue by repeated modus ponens, what we are actually saying (although we pretend to be using good old syllogisms and deductive logic) is something more like: ‘A implies B; probably A; therefore (less) probably B’.

When someone replies with ‘But C!’, what they are saying is: ‘C implies ~B; both A implies B and C implies ~B cannot be true as it is a contradiction, and C is more likely than A, so we should really conclude that C and ~A, and therefore, ~B’.

They are setting up an unstated parallel chain of arguments. Imagine a physicist discussing FTL neutrinos; ‘this observation therefore that belief therefore this conclusion that the neutrinos arrived faster than light’. And someone speaks up, saying ‘But there was no burst of neutrinos before we saw light from that recent supernova!’ What is going on here is the audience is weighing the probabilities of two premises, which then work backwards to the causal chains. One might conclude that it is more likely that the supernova observations were correct than the FTL observations were correct, and thus reason with modus tollens about the FTL - ‘FTL-Correct → (seeing neutrino burst)62; ~(seeing neutrino burst); therefore, ~FTL-Correct’. But if it goes the other way, then one would reason, ‘Seeing-neutrino-burst → ~FTL; FTL; therefore, ~Seeing-neutrino-burst’.

You don’t really find such probabilistic inference in English except in ‘but’. Try to explain it without ‘but’. Here’s an example:

  1. ‘Steve ran by with a bloody sword, but he likes to role-play games so I don’t think he’s a serial killer’ versus

  2. ‘Steve ran by clutching a sword which is consistent with the theory that he is a serial killer and also consistent with the theory that he is role-playing a game; I have a low prior for him ever being a serial killer and a high prior for him carrying a sword, bloody or otherwise, for reasons like role-playing and when I multiply them out, the role-playing explanation has a higher probability than the serial killer explanation’

I exaggerate a little here; nevertheless, I think this shows ‘but’ is a little more complex and sophisticated than one would initially suspect.
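To make the Steve example concrete, here is a toy R sketch of the comparison; every number in it is invented purely for illustration:

```r
# Toy Bayesian reading of the 'but': compare the (unnormalized) posteriors of two
# explanations for seeing Steve run by with a sword. All priors/likelihoods are made up.
p_killer   <- 1e-6   # prior: Steve is a serial killer
p_roleplay <- 0.05   # prior: Steve is off to a role-playing game
p_sword_given_killer   <- 0.5
p_sword_given_roleplay <- 0.3
p_killer   * p_sword_given_killer     # 5e-07
p_roleplay * p_sword_given_roleplay   # 0.015: the role-playing 'but' wins by over 4 orders of magnitude
```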

On Meta-Ethical Optimization

The killer whale’s heart weighs one hundred kilos
but in other respects it is light.
There is nothing more animal-like
than a clear conscience
on the third planet of the Sun.

Wisława Szymborska, “In Praise of Self-Deprecation”; cf. “Musée des Beaux Arts”

Jesus said unto him, If thou wilt be perfect, go and sell that thou hast, and give to the poor, and thou shalt have treasure in heaven: and come and follow me.

The Gospel Of Matthew, 19:21

When I or another utilitarian point out (eg. in Charity is not about helping) that it costs only a few thousand dollars to reliably save a human life, and then note that one choosing to spend money on something else is choosing to not save that life, one of the common reactions is that this is true of every expenditure and that this implies we ought to donate most or all of our wealth.

This is quite true. If you have $10,000 and you donate it all, there will be, say, 5 more humans alive than in the counterfactual scenario where you spend $10,000 on a daily cup of coffee at Starbucks. This is a fact about how the world works. To deny it requires quibbling about probabilities and expected value (despite one accepting them in every other part of one’s life) or engaging in desperate postulations about infinitely precise counter-balancing mechanisms (“maybe if I donate, that means someone somewhere will donate that much less! So it conveniently doesn’t matter whether or not I do, I don’t make a difference!”). Fundamentally, if giving a little helps, then for non-billionaires, giving a lot helps more, and giving even more helps even more. What a dull point to make.

But the reaction to this dull point is interesting. Apparently for many people, this shows that utilitarianism is not correct! I saw this particularly in the reception to Peter Singer’s book The Life You Can Save - that Singer to some extent lives up to his proposed standards seems to make the ideas even more intolerable for these people.

It seems that people intuitively think that the true ethical theory will not be too demanding. This is rather odd.

…Hitherto I had stuck to my Resolution of not eating animal Food; and on this Occasion, I consider’d with my Master Tryon, the taking every Fish as a kind of unprovok’d Murder, since none of them had or ever could do us any Injury that might justify the Slaughter. All this seem’d very reasonable. But I had formerly been a great Lover of Fish, & when this came hot out of the Frying Pan, it smeled admirably well. I balanc’d some time between Principle & Inclination: till I recollected, that when the Fish were opened, I saw smaller Fish taken out of their Stomachs: Then thought I, if you eat one another, I don’t see why we mayn’t eat you. So I din’d upon Cod very heartily and continu’d to eat with other People, returning only now & then occasionally to a vegetable Diet. So convenient a thing it is to be a reasonable Creature, since it enables one to find or make a Reason for every thing one has a mind to do.63

A few criteria are common in meta-ethics, that the One True Ethics should satisfy. For example, universalizability: the One True Ethics should apply to Pluto just as much as it does to Earth, or work a few galaxies over just like we would apply it in the Milky Way. Similarly for time: it’d be an odd and unsatisfying ethics which said casual murder was forbidden before 2050 AD but OK afterwards. (Like physics, the rules should stay the same, even if different input means different output.) It ought to cover all actions and inactions, if only to classify them as morally neutral. (It would be odd if one were pondering the morality of something and asked, only to be told in a very Buddhist way, that the action was: not moral, not immoral, not neither moral nor immoral, not both moral and immoral…) And finally, the ethical theory has to do work: it has to make relatively specific suggestions, and ideally those suggestions would be specific enough that it permits little and forbids much. (For example, could one base a satisfactory ethical theory on the Ten Commandments and nothing else? If all one had to do to be moral was to not violate a commandment? That would be not that hard, but I suspect, as we watch our neighbors fornicate with their goats and sheep, we will suspect that it is immoral even though nowhere in the Ten Commandments did God forbid bestiality - or many other things, for that matter, like child molestation.) The theory may not specify a unique action, but that’s OK. (You see two strangers drowning and can save only one; your ethical theory says you can randomly pick, because saving either stranger is equally good. That seems fine to me, even though your ethics did not give you just one moral option, but two.)

Given that every person faces, at every moment, a mindboggling number of possible actions and inactions, even an ethics which permitted thousands of moral actions in a given circumstance is ruling out countless more. And since there are a lot of moments in a lifetime, that’s a lot of actions too. Considering this, it would not be a surprise if people frequently chose immoral or amoral actions: no one bats a thousand and even Homer nods, as the sayings go. So there is a lot of room for improvement. If this were true of ethics, that would only mean ethics is like every other field of human endeavour in having an ideal that is beyond attainment - no doctor never makes a mistake, no chess player never overlooks an easy checkmate, no artist never messes up a drawing, and so on. There is no end to moral improvement:

Disquiet in philosophy may be said to arise from looking at philosophy wrongly, seeing it wrong, namely as if it were divided into (infinite) longitudinal strips instead of into (finite) cross strips. This inversion in our conception produces the greatest difficulty. So we try, as it were, to grasp the unlimited strips and complain that it cannot be done piecemeal. To be sure it cannot, if by a piece one means an infinite longitudinal strip. But it may well be done, if one means a cross-strip. –But in that case we never get to the end of our work! –Of course not, for it has no end.64

Once we abandon the neurotic quest for certainty and perfection, then these ideas become acceptable:

The moral code of our society is so demanding that no one can think, feel and act in a completely moral way. […] Some people are so highly socialized that the attempt to think, feel and act morally imposes a severe burden on them. In order to avoid feelings of guilt, they continually have to deceive themselves about their own motives and find moral explanations for feelings and actions that in reality have a non-moral origin.65

Yet, people seem to expect moral perfection to be easy! When utilitarianism tells them that they are far from being morally perfect (like they are not perfect writers or car drivers), they say that utilitarianism is stupid and sets unobtainable goals. Well, yes. Wouldn’t it be awfully odd if goodness were as attainable as playing a perfect game of tic-tac-toe? If all one had to do to be a good person on par with heroes like Jonas Salk or Norman Borlaug was to simply not do anything awful and be nice to the people around you? (“It takes a certain lack of imagination to have an entirely clean conscience.”) Why would one expect morality to be easy? Is morality really easier to master than making wine or cheese? Most human endeavors are hard, and ethics covers all our endeavors; and in those endeavours, people somehow seem comfortable being aware of their fallibility and the large gap between perfection and what they actually achieve - engineers do not say that the bridge which kills only a few people is perfect and a better bridge would be “supererogatory”, mathematicians do not say that perfect proofs have only a few non sequiturs in them and fixing the gaps would be supererogatory, programmers do not regard a program with only a few bugs in it as the same as a perfect program…

To object to utilitarianism because it points to a very high ideal is reminiscent, to me, of rejecting heliocentrism because it makes the universe much bigger and the earth much smaller. The small-minded want an equally small-minded ethics.

Alternate Futures: The Second English Restoration

The pricing of third-party candidates in political prediction markets is a difficult exercise in pricing low-probability outcomes which may well include genuine black swans. A case in point is the repeated pricing of libertarian/Republican Ron Paul for American president in Intrade, the Iowa Electronic Markets, & Bets of Bitcoin at a floor of ~1%; this pricing persists even long into the particular presidential campaign, well past the Democratic & Republican conventions, and up to Election Day. Part of this represents the inefficiencies of those markets, which make it difficult to profitably short contracts below 10% (leading to a “long-shot bias”), and part is due to Ron Paul fans who cannot face reality. But an unknown part of it is due to the observation that it is possible for a third-party candidate or a major-party dark horse to win, and so the predictions should not be exactly 0%.

The American plurality election system (as opposed to some sort of proportional or probabilistic system) almost forces a polarized system of 2 parties, because any third party serves to ‘split’ the vote of the closer party (and be split) and hence there’s strong incentive to somehow merge or for voters to force the merge by backing the stronger horse. So it’s not surprising that we see no third-party candidates elected to offices higher than Representative or Senator after the Democrat/Republican system solidified in the late 1800s/early 1900s, and Teddy Roosevelt demonstrated Duverger’s Law in practice with his 1912 Progressive Party (as did Ralph Nader in 2000). On the other hand, plurality voting only forces there to be 2 parties, not that they be 2 specific parties or that each party remain consistent - the Progressive Party’s Teddy Roosevelt beat the Republicans’ William Taft 27% to 23%, and in the late 2000s we saw something close to a hostile takeover or schism in the Republican party by the Tea Party (note the name), which, while it didn’t entirely succeed, still had a dramatic impact on the composition and planning of the main Republican party. This is 2 ‘near-misses’ in just 1 century with ~25 presidential elections.

Would you be willing to bet $1,000 to my $10 that from 2016–2116, every single President will be a Democrat or a Republican⸮ I wouldn’t! If we used Laplace’s rule of succession on those ~25 elections, we’d estimate (0+1)⁄(25+2) ≈ 3.7%, and actually, I would be uneasy at any prediction under 5%!
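A minimal R sketch of that estimate (the ~25-election count is the figure from the previous paragraph; how to compound a per-election rate over a century is a further modeling choice left aside here):

```r
# Laplace's rule of succession: (successes + 1) / (trials + 2).
# With 0 third-party wins in ~25 presidential elections under the settled two-party system:
laplace <- function(successes, trials) (successes + 1) / (trials + 2)
laplace(0, 25)   # ~0.037, i.e. ~3.7%
```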

How would this 5% actually work out? There could be a split in one of them and the new party steals all the old think-tanks, voter lists, incumbents, and the whole laundry list of resources which power the giant parties to their assured victories; that’s one route. Or… there could be a convention fight. Conventions have an odd vestigial function in presidential elections: technically, the entire apparatus of caucuses and primaries doesn’t 100% determine who the delegates vote for at the convention! It’s understood - of course! who could possibly think otherwise‽ - that the delegates, even when not legally bound to vote for the person who won the most votes, will do so. ‘Understood’, which is another word for ‘they could do otherwise’. But delegates used to frequently change whom they’d vote for, throughout the 1800s, for both parties. Why can’t this happen again? No real reason. There’s a gap between the formal powers of the convention and how everyone expects the convention to go, but such gaps are ripe for rare events to exploit. (A program might have a security vulnerability which requires 14 different bugs to exploit, which could never happen in practice just from random clicking or writing, until along comes one motivated hacker.) A similar thing is true of the Electoral College; it was not intended by the Founding Fathers to be a mechanical rubberstamp of voting totals, since if they had intended a direct election they would have simply written the Constitution that way, but to allow the electors to make their own choices. Here too we all expect them to be rubberstamps… but the formal powers are still there.

The United States is far from alone in having some curious gaps between de facto and de jure powers. Every constitutional monarchy exemplifies this - and open up their own low-probability events. Constitutions sometimes have loopholes like ‘emergency powers’ which are prudent precautions and of course would never be abused, until they are. (Who in 1870, seeing the emerging German economic & military giant under the leadership of the Kaiser and the realpolitik genius Otto von Bismarck, could have guessed that within 80 years the Kaiser would be a bad memory and a failed artist would have risen on mass approval to seize, quite legally and with surprisingly little opposition, all power to the utter ruin of the country?)

England is an interesting example: the monarchy is a funny little thing for the tourists and tabloids, but suppose a driven strategic genius like Frederick the Great were crown prince and the Queen died tomorrow; do you really think that there would still be a <1% chance that in 50 years, when he dies, England will be something like Singapore writ large⸮ The Royal Family is completely feckless and embarrassing (perhaps because they have no purpose but useless - or, to be polite, ‘ceremonial’ - duties), but they possess a power-base that ordinary politicians would kill for: annual income in the dozens of millions, world-wide fame, the unthinking adoration of a still-significant chunk of the British masses, well-attended bully pulpits, and in general tradition & age & properties sufficient to beat down and render groveling the staunchest democrat.

An Englishman would tell you that any attempt by a monarch to meddle in affairs would - of course! who could possibly think otherwise‽ - be slapped down by the real government and any de jure laws employed would be quickly repealed by Parliament. After all, their “uncodified constitution” is believed to say as much. (But who exactly carries out the orders of a constitution which doesn’t even have a physical embodiment⸮) But on the other hand, reserve powers still exist in the Commonwealth and are exercised from time to time.

Maybe we can rule out a simple coup scenario. But a more subtle strategy carried out over decades? An Outside View doesn’t help too much in assessing such strategies. We certainly can point to existing monarchies with tremendous power and wealth who rule through a democratic framework: pre-WWII Japan saw considerable influence by the Emperor through the nominally democratic government, the monarchy of Thailand is widely believed - outside the reach of Thai censorship - to exert considerable control over Thai politics, and some countries like Saudi Arabia don’t have even that democratic framework. ~45 monarchies exist, of varying degrees of symbolism; just one country with powerful royalty would give us a ~4% rate of predicting powerful royalty in a country given the data that the country is a monarchy, but we already know England is a weak symbolic monarchy. We are more interested in the chance that the English monarchy will cease to be symbolic in the next century. Is it more, equally, or less probable than a third party winning? (We can think of the monarchy as an inactive third party in the English political system.) In the absence of known attempts, it’s really hard to calculate - we can calculate that if there’s 1 success in 100 ‘attempts’, that gives us a point-estimate of 1%, but if we ask instead for the 95% binomial proportion confidence interval of 1 success in 100 trials, we get roughly 0-3%! Any big bets on it being 1% seem like a bad idea when it could easily be 0.1% or 3% instead… (This is not a surprise if you think about it a little: how could you be precise to a single percentage point or better when you only have 100 pieces of data? To narrow it down to a specific percentage will take more than that!)
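For what it’s worth, here is an R sketch of that interval; a simple normal-approximation (Wald) interval, clipped at zero, reproduces the rough 0-3% range, while an exact interval is even wider at the top (which only strengthens the point):

```r
# 1 'success' in 100 hypothetical attempts: point estimate 1%, but wide uncertainty.
p <- 1/100; n <- 100
p + c(-1, 1) * 1.96 * sqrt(p * (1 - p) / n)  # Wald interval: roughly -1% to +3% (clip at 0)
binom.test(1, 100)$conf.int                  # exact Clopper-Pearson: roughly 0.03% to 5.4%
```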

Statistics aside, we can ask a different question: are there multiple independent disjunctive paths to power (increasing the odds of it happening), or just one unlikely conjunctive path consisting of multiple necessary steps? What might a path to power look like? And specifically, one exploiting the formal gaps in power? Monarchies have been rising and falling throughout history, so it stands to reason that some managed to claw their way back from irrelevancy (the Meiji Restoration providing a well-documented example with far-reaching consequences).

Formally, the English monarchy doesn’t seem to directly command either the police or military, and is under the Parliament which apparently can legally do pretty much anything it wants. So Parliament will figure in plans: Parliament must be co-opted, made to delegate powers, or simply neutralized. Since the constitution is unwritten, sufficient popularity would enable the monarch to do anything or at least shift the Overton window to its desired policies.

An example of a strategy for neutralizing Parliament: a young crown prince is gifted with a copy of Edward Luttwak’s Coup d’Etat: A Practical Handbook; he enters the military (as is usual for the royal family) and begins building a power-base or “deep state” using his good looks, hard work, heritage, and also his inherited wealth (helping out impoverished retired officers, sponsoring parties, etc.). He leaves the military to go into politics, gradually easing his way in (pushing the Overton window to make this acceptable); during a major crisis - perhaps a second Great Depression? - which highlights the fecklessness of the civilian government, his cabal of young Turks stages a lightning bloodless coup to restore the legitimate monarch to de facto control over the civilian government; he immediately calls for the Parliamentary elections which the existing Parliament had been delaying for fear of voter anger, fears which immediately prove justified as the new king’s favored candidates sweep in. The king now controls the military, is legitimized by a popular aegis, and has a compliant Parliament to enact his new deal. The public can be counted on to remain passive and accept the changes in the Overton window, just as the American public could be counted on post-9/11 to acquiesce to anything.

(Yes, we are postulating a remarkable crown prince here: it is rare for someone to be handsome and intelligent and driven by a nigh-sociopathic lust for power and extroverted or charming; however, a century is multiple generations, and our story only requires 1 such person. His low probability is just evidence that the current English royal family is self-sabotaging its prospects - by indulging in the demographic transition and having so few kids! When you need a win from the genetic lottery, you cannot afford to buy only a few tickets. If nothing else, the spare heirs can make themselves useful by gathering power-bases in various business industries or government agencies; they’ll almost have to, given the limited royal funds. The other steps in this scenario, while all less than likely, do not seem extremely unlikely.)

We can think of an even more interesting strategy! Consider the very long perspective: way back in 1066, when William the Conqueror conquered England, he technically owned the whole place as spoils of war. Where did it all go? Well, most of it went to his supporters as their reward, sooner or later. And we can’t appeal to the formal/informal gap and have the monarchy repossess it, because the sales usually included clauses about the sales being permanent or perpetual, which are hard to escape. But actually, billions are left! Why isn’t the Queen a billionaire, really? Because it’s all controlled by Parliament in a strange agreement dating back to 1760, in which the monarchy gets a sort of pension called the Civil List for paying the bills of the Royal Households of the United Kingdom, which runs to ~$10 million annually, and in exchange Parliament controls the entirety of the Crown Estate - worth a cool ~$11 billion and yielding ~$300 million annually. It’s clear that Parliament has the better end of this deal, and also clear that our hypothetical prince won’t be running much of a campaign based on the gleanings from his politically-vulnerable income of $10 million.

The formal/informal gap may help here. This agreement turns out to have been modified since 1760 at the start of each new reign, because the new monarch has to agree to the arrangement! It’s understood that he or she will immediately agree - of course! who could possibly think otherwise‽ - but here is a chink. Control over a fortune of $11b goes a very long way towards building a genuine power-base.

The question of the Crown Estate and the deal’s stability has been discussed from time to time; the longest discussion I’ve seen is a 1901 essay by G. Percival Best on “The Civil List and the Hereditary Revenues of the Crown”. He mentions many interesting details, for example provisions in the relevant laws which trigger only if the monarchy decides to not surrender the Crown Estate income: eg.

  1. The Hereditary Excise Duties: These were granted to the Crown in 1660 by the Acts 12 Car II c 24 in lieu of the feudal rights then abolished. Various re-arrangements were made from time to time, whereby some of the duties ceased to be payable. The remaining duties, being duties on ale, beer, and cider brewed in Great Britain, are in abeyance but will revive in the event of the Crown at any future time not making the usual surrender…

  2. Compensation for Wine Licence Revenue: The revenue from wine licences ceased to form part of the Hereditary Revenues in 1757, when by the Act 30 Geo II c 13 the annual sum of…

Best confirms my suspicions that between the deal and the provisions in law for the deal lapsing, the only real barrier to a new monarch is that great bugaboo, “custom” or the “unwritten constitution” or “public opinion”66:

That His present Majesty had a legal right to resume possession of these Hereditary Revenues is clear from the provisions of the Civil List Act, 1837, but whether he could constitutionally have done so is open to question. It has been said that “the arrangements by which the Crown at the beginning of each reign surrenders its life interest in the Crown lands and other Hereditary Revenues, though apparently made afresh on each demise of the Crown, is really an integral part of the Constitution and could not be abandoned.”2 This view was shared by Spencer Walpole, who, writing with reference to the surrender of the casual revenues by William IV, stated that “a surrender of this kind once made was virtually irrevocable. It would have been as impossible for any future Sovereign to have resumed a revenue which his predecessors had surrendered as it would have been impracticable for him to have restored the Star Chamber, or to have made the appointment of the Judges dependent on his pleasure.”3 The late Professor Freeman’s words on the point are equally emphatic. After discussing the rights of the Crown and of the public over the Crown lands he continued, “A custom as strong as law now requires that at the beginning of each fresh reign the Sovereign shall, not by an act of bounty but by an act of justice, restore to the nation the land which the nation lost so long ago.”4

…If, therefore, the King exercised his legal right and resumed possession he would only be entitled to retain a sum sufficient for the support of his household and family in a state befitting the Royal dignity. The remaining produce would have to be devoted to the public service. As in the last resort it would be for Parliament to say what sum the King should retain, the advantage of a resumption instead of a surrender is problematical.

Note that this alone could still be very useful for our would-be Frederick the Great - since this seems to imply that in a resumption, the monarchy will gain complete control of how it spends its allowance, and more importantly, how any properties in the Crown Estate are disposed of or contracted about.

With these legalities in mind, we can imagine a new scenario: The old monarch dies, and the crown prince succeeds. He declines to surrender, whereupon if Parliament strikes back by insisting the state budget must now be maintained by his Crown Estate (which is of course these days grossly inadequate), he beseeches Parliament to authorize the usual taxes to close the gap in his funding… This puts them in a fascinating dilemma: if they refuse, he carries on the most limited core functions and abandons everything else, causing people and especially those dependent on state subsidies to hate Parliament and sweep monarchists in during the next election which he of course has called; while if they agree, he now has full power of the purse and can begin building up his power base with wise administration to withstand the future attacks of Parliament.

Legislatures are rarely known for their courage or for their willingness to hazard enormous upheaval, but there doesn’t have to be any insane upheaval - that’s the threat to Parliament: “sure, you can take my bet if you think I’m bluffing and then pass appropriate laws later, but do you want to?”

We can analogize this brinksmanship to American government shutdowns: who will the voters blame for being unreasonable and inflicting the pain & suffering? The 1995 shutdown was widely interpreted as a victory for the Democrats and a defeat for Republican architect Newt Gingrich (who of course has argued that this interpretation is wrong and it was actually a victory67 despite Clinton’s boosted approval rating).

Another interesting example of the Overton window and the frailty of ‘custom’, besides the obvious point that the Founding Fathers would not recognize the current giant federal government or understand how their carefully-written Constitution could have permitted such a thing (whatever good reasons underlie the growth), is the explosion of the filibuster: from a legislative judo move which was understood to be a key tool of the minority (and whose removal by the majority would be the “nuclear option”), whose invocation was personally taxing (internalizing the costs, eg. Mr Smith Goes to Washington) and used only in rare circumstances (like the English reserve powers! How about that⸮), into a routine tactic invoked against almost any major bill. So much for the camaraderie of the Senate and centuries-old custom.

Cicadas

5 Words Or Less Summary: “Got some. Cicadas are crunchy.”

In April 2013, I was excited to read my local paper’s article on a cicada emergence that year; the print version included a detailed Maryland map apparently sourced from cicadas.info with point-estimates of emergences - it was hard to see my particular hamlet, but I was clearly near more than a few. I had had no idea that there were any cicadas in the area or that this was the year. It was a 17-year brood, so I resolved to make the most of this opportunity - and eat some cicadas!

(Yes, they’re perfectly safe to eat as long as you aren’t stupid and forget to cook them or try to eat a rotting dead one. People eat insects all the time, and weirder things like insect barf.)

I eagerly tracked a website for Virginian daily ground temperature readings throughout April 2013, and was frustrated by the incredibly slow rise and occasional reverses that set back progress by weeks. Finally - the line was crossed! I woke up early to search for cicadas (I had read they tended to be most active early in the morning), only to find none at all. It turns out that cicada groups are very localized, and indeed, none emerged in my area. The closest I came was in late June, when I thought I spotted a single severed cicada wing on the road, but I was not sure.

I could just go elsewhere, since it’s not as if there was any shortage of cicadas in places that had them. But I had to wait until early June due to interference like my sister visiting and trying to piggyback a harvesting expedition on my jury duty (which was fantastically ill-timed in overlapping with both catching cicadas and driving my sister from & to BWI). Waiting was very frustrating because I would read articles in places like the NYT about fully-active emergences which were finishing, and know that just up the road were cicadas if only I could reach them. Finally, I managed to get to a local park by Leonardtown where Magicicada.org’s live “2013 Magicicada Brood II Records” collaborative Google Map indicated that cicadas had been spotted.

We got there to find that most of the cicadas were dead or were empty shells. The overall sound was remarkable: like being on the shoulder of a freeway in the middle of the day.

Finding cicadas was a little challenging, but the red eyes helped a lot - very striking against a green backdrop. Capturing them was both easy and difficult: I had problems with my own squeamishness, in not being willing to pinch a cicada with enough force to keep it from dropping or flying away, but some were stupid enough to just shift to another branch and wait for me to try again. They had a funny response to being seized: they switched to a steady buzz which sounded quite unhappy until I dropped them into my ziplock plastic bag. I collected 20-30, or ~75g. Sitting on my windowsill, they churned around in their bag, making a slight buzzing noise and crawling over each other:

My cicadas, fresh from the park’s trees and bushes


I carried the bag around to the animals; the dog didn’t seem interested, and the cat just stared even when I gave it a cicada to play with. My sister was napping and gratified me with a shriek. No one seemed remotely interested in having them for dinner, and the most I could extract was a promise that they might try cicada chip cookies if I made them. Well, bugger that for a lark - I wanted a right proper meal off them after busting my hump to secure them. I had been hoping for enough cicadas that I could make multiple recipes, but I had to settle for making just one big batch.

On pg8 of Cicada-licious: Cooking and Enjoying Periodical Cicadas (Jenna Jadin & the University of Maryland Cicadamaniacs 2004), I hit upon a likely-sounding recipe:

The Simple Cicada: Don’t want to bother cooking up something fancy just to enjoy the delicious taste of the cicada?? Well here is a quick and easy main dish recipe that should take only minutes to prepare:

Ingredients:

  • 2 cups blanched cicadas

  • Butter to sauté

  • Two cloves crushed garlic

  • 2 tbsp finely chopped fresh basil, or to taste

  • Your favorite pasta

Directions:

  1. Melt butter in sauté pan over medium heat.

  2. Add garlic and sauté for 30 seconds.

  3. Add basil and cicadas and continue cooking, turning down the heat if necessary, for 5 minutes or until the cicadas begin to look crispy and the basil is wilted.

  4. Toss with pasta and olive oil. Sprinkle with Parmesan cheese if desired.

Yield: 4 servings

I had all those ingredients except for Parmesan. The cicadas being sautéed in butter:

A saucepan of butter, spare bacon grease, and 75g of cicadas.


~10 minutes later (I had to reheat the pasta, which I made first), I had my final product:

A pasta and cicada and tomato sauce dinner


How was it? Well, it would’ve been better with more sauce. The cicadas themselves? They had an odd consistency - they were crispy and hollow, like a cheese puff, and tasted sort of like toasted peanuts, but mostly just like sautéed butter. The main problem: the wings and legs were also crisped, so every so often as they went down the hatch, there would be a sort of scratching sensation. Not terribly pleasant. I regretted thinking they would break off or burn away, and ignoring the cookbook’s advice to remove them:

Adult males have very hollow abdomens and will not be much of a mouthful, but the females are filled with fat. Just be sure to remove all the hard parts, such as wings and legs before you use the adults. These parts will not harm you, but they are also not very tasty.

I have no one to blame but myself there. (Given how late I went hunting and many of my cicadas were crispy/hollow, I suspect I caught mostly males who had failed to mate.) I also somewhat miscalculated portions, and wound up stuffed to the gills with cicada & pasta.

Overall, an interesting experience. The next cicada emergence I am near, I’ll try the chocolate chip cicada cookie recipes.

No-Poo Self-Experiment

Modern Western-style shampoo is a fairly recent hygiene innovation, which for some people raises the question of how useful it could really be and whether it actually works; claims that shampoo is useless or harmful to hair appearance have given rise to the no poo meme, which is what it sounds like.

I find it an interesting assertion (it’s not like I’ve ever run into randomized controlled trials demonstrating shampoo is superior to no-poo), and my fine curly brown hair often becomes oily and unattractive if I do not shower regularly, so it would be great if I could save time, cut out shampoo/soap, and look better more consistently. No poo advocates also cite some low-quality studies in support of their claims.

But I didn’t see how I could test the claims in a self-experiment: you quickly adapt to your own body odor or appearance, you cannot be blinded since you know if you’re not using shampoo/soap and you can’t use my usual placebo-pill trick for blinding, and the consequences of being wrong about whether you have offensive body odor or nasty hair can be severe (judgment based on appearance is pervasive and applicable to all sexes & ages; see Langlois et al 2000). You could try to get around the adaptation problem by asking a third party to sniff you regularly and rank you on a dankness scale, but there’s no one I’d inflict such an ordeal upon. So my initial interest subsided until I happened to read Julia Scott’s NYT Magazine article, “My No-Soap, No-Shampoo, Bacteria-Rich Hygiene Experiment”, about a startup arguing that some commensal bacteria can substitute for soap & shampoo and how she enrolled in their trial to see how it works. She reports the usual sequence for no-poo anecdotes: an initial period of a week or three where her cleanliness and appearance go to hell, and then a slow recovery to baseline.

Reading about how her hair “turned a full shade darker for being coated in oil that my scalp wouldn’t stop producing”, I suddenly realized: there was a simple way to test the hair half of the no-poo meme, in a way which was blind, did not involve a third party, and avoided the adaptation problem of rating each day. Take photos of your hair every morning of the experiment in the same place & posture & indoor lighting (I chose 3 pictures in ‘automatic’ mode and 3 in ‘closeup’ mode), storing them on the same digital camera; cameras record metadata such as the day a photo was taken, preserving the information about which randomized (50-50) experimental condition (poo or no-poo) the photo was taken under; then at the end of the experiment, without ever looking at any of the photos by date, have a program randomly select photos, ask you for a rating, and store the date/filename/rating; then do the statistical analysis on that triplet. In this way, an objective dataseries (of hair photos) is created without any chance for (visual) adaptation, and the rater is kept blinded (as to whether each photo is from poo or no-poo days) when extracting the rating. Because only the rating is being done blinded, it’s a partial blinding, and it’s possible that the subject could neglect their hair differentially under one condition but not the other - but this partial blinding addresses the biases I expect are most likely to be skewing no-poo anecdotes, and renders the self-experiment more than a self-deluding waste of time.
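A minimal R sketch of the blinded rating step just described; the directory path, viewer command, and output filename are hypothetical placeholders:

```r
# Present the hair photos in random order, hiding their dates, and record 1-5 ratings.
# (Camera filenames like DSC_0123.JPG do not reveal the date; only the file/EXIF metadata does.)
photos  <- sample(list.files("~/photos/hair", full.names = TRUE))  # hypothetical path
ratings <- data.frame(file = photos, rating = NA_integer_, stringsAsFactors = FALSE)
for (i in seq_along(photos)) {
  system(paste("xdg-open", shQuote(photos[i])))   # open in an external image viewer
  ratings$rating[i] <- as.integer(readline("Rate this photo 1-5: "))
}
write.csv(ratings, "hair-ratings.csv", row.names = FALSE)  # join to dates/conditions later
```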

The ratings will be on a Likert scale of 1-5. The best analysis for this would be, I think, a multilevel ordinal logistic model with ratings nested in photos and photos nested in days; the main variable is poo vs no-poo, of course, but my hair is visibly affected by other factors and those should be included as covariates: whether I showered the previous day; whether I took a long walk the previous day; the local heat high the previous day; the local humidity (high humidity makes my hair curlier); how long I slept the night before (bedhead); and, for an interaction term, whether the day is in the first week of the month when - for no-poo - hair should look worst, and it might be worth trying # of days since start of month to see if there’s a linear improvement over time. The weather data can be sourced from Wunderground as for my Weather & mood analysis. A final tweak might be to use multiple ratings of each photo to estimate how much measurement error there is in the ratings and fit an errors-in-variables model (although these don’t seem to be well-supported in regular R libraries, which might motivate a move to a Bayesian language like JAGS or Stan).
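For concreteness, a sketch of the kind of model intended, using the ordinal package’s clmm(); the data frame and column names are hypothetical stand-ins for the variables just listed:

```r
# Multilevel ordinal logistic regression: ratings nested in photos nested in days.
library(ordinal)  # provides clmm() for cumulative-link mixed models
d$Rating <- factor(d$Rating, levels = 1:5, ordered = TRUE)  # 'd': one row per rating (hypothetical)
m <- clmm(Rating ~ NoPoo * FirstWeekOfMonth + ShoweredYesterday + LongWalkYesterday +
            HighTemp + Humidity + SleepHours +
            (1 | Date) + (1 | Photo),
          data = d)
summary(m)
```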

Because no one seems to adapt in under a week, the blocks must be very long. I chose pairs of months as usefully long blocks which are also convenient. I am not sure in advance how long I should run the experiment: I have no previous experiments I can compare to for effect sizes nor any guide from research literature; probably I will cut it short if the no-poo turns out disastrous, otherwise I’ll run it for perhaps half a year since it’s not much work to take photos in the morning. (Hopefully I will have an answer before Christmas forces me to reach a conclusion.)

  1. June: poo; July: no-poo

  2. August: poo; September 1-20: no-poo

  3. September 21 - October 21: poo; October 22 - November 22: no-poo

  4. November 23 - 2015-01-04: poo; 5 - 30 January: no-poo

    The exigencies of the holidays interfered with the planned switch in late December. On 6 January, my camera died (RIP 2004–2015) and I switched to using my Samsung Galaxy smartphone.

  5. 31 January - 30 March: poo; 31 March - 3 April: no-poo

    Easter interfered with the planned transition; didn’t want to risk being greasy.

  6. 4 April - 8 April: poo; 9 April - 5 May: no-poo

  7. 5 May: poo; 30 May - 30 June: no-poo

  8. 1 - 16 July: no-poo; 17 July - 7 September: poo

  9. 8 September - 30 September: no-poo; 2015-10-01 - 2016-03-27:

  10. 2016-04-12 - 2016-05-20: no-poo

  11. 2016-06-25 - 2016-07-17: no-poo

  12. 2016-08-18 - 2017-02-22:

The main concerns with this design seem to be whether a month is enough time to show any no-poo adaptation, and whether it would be possible to estimate time for each photo despite controlling the conditions as much as possible.

After 2 weeks of no-poo, it seems like my hair has indeed darkened, but it also looks fine for the usual time period after a shower; the main difference seems to be that when it looks bad (due to yardwork or not showering), it looks very bad.

Newton’s System of the World and Comets

Split out to “Newton’s System of the World and Comets”.

Rationality Heuristic for Bias Detection: Updating Towards the Net Weight of Evidence

Bias tests look for violations of basic universal properties of rational belief such as subadditivity of probabilities or anchoring on randomly-generated numbers. I propose a new one for the temporal consistency of beliefs: agents who believe that the net evidence for a claim c from t1 to t2 is positive or negative must then satisfy the inequalities that P(c, t1)<P(c, t2) & P(c, t1)>P(c, t2), respectively. A failure to update in the direction of the believed net evidence indicates that nonrational reasons are influencing the belief in c; the larger the net evidence without directional updates, the more that nonrational reasons are influencing c. Extended to a population level, this suggests that a heuristic measurement of the nonrational grounds for belief can be conducted using long-term public opinion surveys of important issues combined with contemporary surveys of estimated net evidence since the start of the opinion surveys to compare historical shifts in public opinion on issues with the net evidence on those issues.

A friend of yours tells you he’s worried: he saw a snake on his driveway yesterday which he thinks may be a poisonous coral snake rather than a harmless snake; he gives it a 50-50 chance - it looked a lot like a coral snake, but he didn’t think this state had any poisonous snakes. You see him the next day and ask him about the snake. “Terrible news!” he says. “I looked it up on Wikipedia, and it turns out this state does have poisonous snakes, coral snakes even.” How unfortunate. So what probability does he think it is a coral snake? His probability must have gone up, after all. “Oh, 50-50.” What? Is he sure about that? “Sure I’m sure. I still wonder if it was a coral snake or not…” Your friend is fond of gambling, so you know he has not misspoken; he knows what a probability is. You politely end the conversation and conclude that while you have little idea if it was a coral snake or not, you do know your friend is fundamentally not thinking straight on the issue of snakes: he understood that he had found net evidence for the snake being a coral snake, but somehow did not update his beliefs in the right direction. Whatever his thinking process, it is non-rational; perhaps he has herpetophobia and is in denial, or has some reason to lie about this.

It can be hard to decide whether someone’s conclusions are irrational because they could have different priors, have different causal models, have been exposed to different evidence, have different preferences, and so on. But there are a few hard rules for bare minimums of rationality: no contradictions; conjunctions are equally or less likely than any of their conjuncts; disjunctions are equally or more likely than any of their disjuncts; probabilities of exhaustive sets of claims sum to 1; 0 and 1 are not degrees of belief; and - net evidence for a claim increases the posterior probability of that claim. (Or to put it another way, per Bayes’ rule P(A|B) = P(B|A) × P(A) ⁄ P(B), for arbitrary P(A) and P(B): if P(B|A) ⁄ P(B) > 1 then P(A|B) > P(A); to grant that the evidence favors A while denying that the probability of A has risen is a contradiction.) And what applies to coral snakes applies to everything else - if your friend agrees evidence suggests his pool was a bad buy, he should be less optimistic about it than he was when he bought it, and so on. Your friend might have totally different priors or causal models or life experiences or political affiliations, but whatever they are, he still must make his net evidence and update direction jibe. Updating is not sufficient for rationality (one can still have wrong models which indicate something is net evidence which shouldn’t be, or update too much, or be irrational on other matters) and updating doesn’t itself show notable rationality (perhaps one was just profoundly ignorant about a topic), but it is necessary.
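A toy R sketch of the consistency check the heuristic relies on (the function name and the coral-snake numbers are illustrative, not from any survey):

```r
# Flag a reported belief change that contradicts the believed direction of the net evidence.
check_update <- function(prior, posterior, net_evidence = c("positive", "negative")) {
  net_evidence <- match.arg(net_evidence)
  consistent <- if (net_evidence == "positive") posterior > prior else posterior < prior
  if (!consistent) warning("belief did not move in the direction of the believed net evidence")
  consistent
}
check_update(prior = 0.5, posterior = 0.5, net_evidence = "positive")  # FALSE: the coral-snake friend
```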

We can broaden it further beyond individuals. If someone fails to update their earlier estimate towards their claimed weight of evidence, then they are wrong. What about everyone else? If you surveyed your neighbors and your friend, they would agree that however much one should believe it was a coral snake, upon learning that coral snakes do in fact live around here, it is terrible news and evidence for the snake being a coral snake. They might not agree with a starting 50% probability, and might argue about whether the Wikipedia article should matter a lot or a little (“whenever I check a WP article, it’s always vandalized”), but they would agree that the evidence is in favor of a coral snake and that the correct increase is definitely not 0% or -5%, and anyone who changes their belief that way is just wrong. Hence, for your neighborhood as a whole, each person is wrong if they don’t change their earlier probability upwards.

Can we broaden it further? If (for some reason, perhaps because we too suffer from herpetophobia) we have surveys of your neighbors about the risk of snakes on their part of this mortal plane going back decades, then we can employ the same trick: ask them what they think the weight of evidence about coral snakes is, and their current probability, and compare to their old probability.

Can we broaden it further? There are few long-term surveys of opinions of the same people, so this heuristic is hard to apply. But what applies to your neighborhood should also generally apply to populations over time, barring relatively exotic changes in population composition like natural selection for high religiosity priors. Now we ask everyone what they think the net weight of evidence has been, or what they think the general population thinks it has been. (Somewhat like Bayesian truth serum.) If there is some issue on which the population was split 50-50 a hundred years ago, and everyone agrees that events/data/research since then have generally favored one side of the issue, and everyone is also meeting bare minimums of rationality, we should see that weight of evidence reflected in proportions shifting towards the winning side. We definitely do not expect to see surveys reporting the split remains exactly 50-50. If it does, it suggests that the population is not dealing with the issue rationally but is instead driven by other factors like personal advantage or politics or cognitive biases.68

These directions do not need to be exactly the same over all time periods or for all issues. For example, consider the question of whether there is alien life on other planets in the Solar system or in other solar systems in the universe from the period 1500 to 1900, 1900 to 2016, and 1500 to 2016. Isaac Newton and other natural philosophers speculated about life on other planets and throughout the universe, and I think the net weight of evidence as astronomy and biology progressed was heavily on the possibility of life with an ever-expanding universe to generate life somewhere, and so the direction of belief would have been increasing towards 1900 for life in the Solar system and universe; but then, as progress continued further, there was a drastic reversal of fortune - the canals on Mars were debunked, spectroscopy showed no signatures of life, the launch of space probes showed that Venus was not a garden planet but a sulfuric-rain molten-lead hellhole while Mars was a freeze-dried husk of sand sans life; and after the abrupt extinguishing of hopes for Solar life, Enrico Fermi famously asked ‘where are they?’ with no hints of radio activity or stellar engineering after billions of years even as the development of rocketry and space technology demonstrated that advanced alien civilizations could colonize the entire galaxy in merely millions of years. And who knows, perhaps some clear signal of life will yet be discovered and the weight of evidence will abruptly swing back in favor of life in the universe. Another example might be behavioral genetics and intelligence tests, for which there is an extraordinary disparity between expert beliefs and the general public’s beliefs, and for which an equally extraordinary amount of evidence has been published in the past decade on the role of genetics in individual differences in everything from human evolution over the past few thousand years to the presence of dysgenics to the genetic bases of intelligence/personality/income/violence/health/longevity; surveyed experts would doubtless indicate strong weights of evidence against the long-dominant blank slatism and show accordingly changed beliefs, and a survey of the general public might show little or no weight of evidence and belief shifts - but that is not evidence for strongly nonrational public beliefs, because it might simply reflect considerable ignorance about the scientific research, which has been minimally reported on and, when reported on, has had its meaning & implications minimized. So depending on the time period, question, and group, the update might be up or down - but as long as it’s consistent, that’s fine.

An example of an application of the net evidence heuristic might be cryonics. Many objections were raised to cryonics at the start: religious and dualist objections; cell lysosomes would ‘explode’ immediately after death, erasing all information before vitrification; personality and memories were encoded in the brain not as stable chemical or biological structures but as complex electrical dynamics which would be erased immediately upon death; cryonics organizations would disappear or would routinely fail to keep corpses at liquid nitrogen temperatures; scanning technology would never be able to scan even a small fraction of a brain and Moore’s law would halt long before coming anywhere near the equivalent of a brain, rendering uploading permanently impossible; nuclear war would obviate the issue along with Western civilization, or if not that, then the long-anticipated hyperinflation of the US dollar would bankrupt cryonics organizations; laws would be passed forbidding the practice; angry mobs of religious fanatics would destroy the facilities; the expense would be far too much for anyone but millionaires to afford; and so on. Given all this, it is unsurprising that cryonics was not super cool and few people believed in it or did it. I don’t know of any surveys, but as a proxy, the membership numbers of early cryonics groups and later ALCOR suggest that cryonics could count perhaps a few hundred or a thousand adherents out of the US population of ~180m in the 1960s or so. In the half-century since then, cryonics has survived all challenges: materialism is the order of the day; lysosomes do not explode; personality and memory are not encoded as anything fragile but as durable properties of the brain; cryonics organizations using the nonprofit model have done well at surviving and keeping all corpses stored without ever thawing; scanning technology has advanced massively and it is now conventional wisdom that at some point it may be possible to scan a brain; Moore’s law has continued the whole time; there has been no serious legal danger to cryonics in the USA, nor have there ever been any riots or lynch mobs; median American household real income has increased ~3x 1960–2016; cryonics has demonstrated ever larger proofs of concept such as reviving a vitrified kidney and showing that C. elegans memories are preserved upon revival; and in general neuroscience has moved strongly towards an information-theoretic point of view. I would say that the net weight of evidence for cryonics is massively positive. Cryonics has never looked more possible. So half a century later, with this massive weight of evidence and a much wealthier & larger (~324m) US population (likewise globally), what membership numbers do we find for ALCOR and CI? We find… 1,101 (ALCOR, September 2016) and 1,446 (CI) respectively. In other words, far from increasing, membership may actually have fallen per capita, implying beliefs about cryonics have become more pessimistic rather than more optimistic.

What can we conclude from the size of the weight of evidence and observed shifts or lack thereof? If we survey people asking for net weight of evidence, they will be probabilistically unsophisticated, and it’s unlikely anyone, even experts, can easily assert that the claim “democracy is the best form of government” is exactly 2x as likely as in 1667; we might instead ask for a rating on a 1–5 scale. We can look for surveys on a range of popular issues such as global warming, whether AI is possible, or atheism, and use the longest time-series we can for each issue to calculate a shift in belief odds. Then we can survey contemporary people and ask for their estimate of the weight of evidence. Finally, we can divide the rating by the odds shift to rank issues by how much changes in evidence correlate with shifts in belief. This ranking, a sort of “rationality quotient”, might be interesting and correlate with our intuitive expectations about the areas in which beliefs are most non-rational. (My guess at some of the results: the durability of religious belief will likely contradict the weight of evidence, while AI and global warming will show more updating.)
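
A toy sketch of how such a “rationality quotient” table might be computed, assuming per-issue data of the form just described: an old survey proportion, a new one, and an average 1–5 evidence rating. All issue labels and numbers below are invented placeholders for illustration, not real survey results:

```python
# Toy version of the proposed "rationality quotient" ranking. All survey
# figures below are invented placeholders, not data.
from math import log

def log_odds(p):
    """Log-odds of a proportion p (0 < p < 1)."""
    return log(p / (1 - p))

# issue: (proportion agreeing in the oldest survey, proportion agreeing now,
#         mean rated weight of evidence since then on a 1-5 scale)
issues = {
    "global warming is real":     (0.50, 0.70, 4.5),
    "human-level AI is possible": (0.30, 0.45, 3.5),
    "theism":                     (0.90, 0.85, 4.0),
}

def rationality_quotient(p_old, p_new, evidence_rating):
    """Rated evidence divided by the observed shift in log-odds: the higher
    the quotient, the more claimed evidence per unit of actual belief change,
    i.e. the more suspiciously non-rational the issue looks."""
    shift = abs(log_odds(p_new) - log_odds(p_old))
    return evidence_rating / max(shift, 1e-6)   # guard against a zero shift

for issue, args in sorted(issues.items(),
                          key=lambda kv: rationality_quotient(*kv[1]),
                          reverse=True):
    print(f"{issue:30s} RQ = {rationality_quotient(*args):5.1f}")
```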

Why might this method work? It might seem a bit akin to asking “do you think you are wrong about something?” to ask questions about weight of evidence and opinion changes, as the normative requirement here is so basic and guessable. Why would it help to split apart Bayesian updating and ask about such a specific part rather than focus on something more important like what the priors were or what the posterior probability is? But I think the method might work precisely because it splits apart absolute levels of belief from changes in beliefs: to someone holding a particular belief like theism, it feels like a harmless question, and they can safely admit that the last few centuries haven’t been too great for theism, because it doesn’t threaten whether their belief is >50% in the way that a blunter question like “do you agree that theism has been debunked?” might - it would be socially undesirable to admit that one’s belief has fallen a large amount, but it is safe to admit that it has slipped an unspecified amount. This is similar to how Bayesian truth serum avoids self-serving biases by asking about other people (“do you use drugs?” “oh gosh no, I am a good person and not an addict” “how many people in your community use drugs?” “all of them”). The questions can also be split between respondents, limiting their ability to infer what test is being done and what normative standard responses are being compared against, and to adjust their responses.

If this works out, it offers something valuable: an objective, easy, widely-applicable test of long-run population rationality, which controls for individual differences in priors and causal models and knowledge.

Littlewood’s Law and the Global Media

Main article.

D&D Game #2 Log

Account of the second D&D game I’ve ever played, with my relatives, in which a party of adventurers seeks to return the renowned dancing bear Bufo stolen from a circus, only to discover a maze of lies in which they have lost their bearings, with perhaps an unbearable truth, but it all ends fur the beast. Also, remember, kids—winners don’t do drugs!

D&D newbie. Dungeons & Dragons is justly famous as the nerd activity par excellence; unfortunately, it is inherently social, and as a kid, I knew no one who played. (The ones I did know preferred Magic: The Gathering or Warhammer 40,000, but those were staggeringly expensive to me, and I thought they were best done well if done at all.) As it happens, my brother-in-law is surprisingly nerdy and into board-gaming & TTRPGs, so at holidays we have been playing short D&D games.

Structured storytelling. I had never understood how exactly D&D worked, but after the first session it had become clearer to me: it’s really a sort of collaborative storytelling where all the paraphernalia of dice & rules & numbers serve as a way to keep the storytelling honest and introduce an outside force of imagination. A complex game like Nethack or Dwarf Fortress can sort of approximate this emergent storytelling, and AI Dungeon 2 comes even closer, but there is still a gap.

Our second game was especially ambitious, taking a good 8 hours over several days, and I was amused by it, so I thought it bore repeating.

“Bearer Bonds (Cash On Delivery)”

Find & return Bufo the Dancing Bear! We set up a party of 5 level 1 characters: I was a lawful neutral half-elfish paladin, there was a lizard soldier, a rogue (“Chai Matcha”), a merchant, and a wizard. We arrived via boat at Red Larch City on a contract from the Adventurer’s Guild to investigate the theft of a performing bear from the local circus inside the city by bandits.

Red Larch (The… Larch)

The nervous guard captain filled us in, and we went through the quasi-Venice city filled with odd ancient crystal bridges to the circus after a failed attempt to purchase a mystical ‘bear whistle’. (One party member kept looking everywhere for a bear whistle because the purchase price was unbearable, but failed to find one.)

Inside job? We looked at the animals and debated whether to free the dire wolf, which telepathy revealed longed for freedom. I argued this was illegal and you couldn’t spell ‘dire’ without ‘die’, and we didn’t. The beastmaster and ringmaster were gone, so the lizard pretended to be a new employee and stole rope while I wasn’t looking; the merchant went into the ringmaster’s tent and read his journal and then stole a bunch of gold pieces. We learned the bear, ‘Bufo’, had been the best performer at the circus, and brought in a ton of money for the ringmaster. He and the beastmaster had been at odds, and we became suspicious it was an inside job. My squire was dispatched to buy fish.

On squires & beasts of burden. The ringmaster reportedly drank at a local tavern, so we took a gondola. As a paladin I had a squire, but fortunately, my earlier history check about the laws regarding animal welfare in Red Larch rolled a “natural 20”. Thus, apparently, I had become an expert on local law while in college, and among other things, knew that squires are legally considered animals here and thus rode for free. (I named my squire ‘Fred’ because ‘Frederick’ is a people name. I tossed him a fish when he returned as a reward.)

Homō hominī lupus. While exiting, my passive perception check noticed the gondolier’s assistant pickpocketing us of gold pieces. I accused him of theft and the gondolier exclaimed that theft from customers is punished by death in the code of the gondolier guild, and fatally stabbed the boy in the chest. I approved of executing the laws of the guild. However, the boy bled suspiciously little and the body disappeared in the water. While leaving, another party member tried to shake him down for all the money back, and he kicked the lamp, blinding everyone and setting the gondola on fire. A hairy situation. On dry land, I and 2 others split the party by chasing the wet boy’s footprints.

Inside job. He lost us in an alley with a rope ladder, but the ringmaster was there, so we cut our losses and interrogated him. He admitted the bandits were made up but the bear and beastmaster were probably where we had been told the bandits were, because the beastmaster kept talking about that spot outside Red Larch. The other party members left the canal, somewhat worse for the wear, and rejoined us, having failed to capture the miscreant gondolier. We figured that his boat burning was probably adequate punishment, and we left the city. At an inn along the way, the lizardman picked a fight at breakfast for no good reason, and got us kicked out.

In The Hall Of The Mountain Kodiak

Trip into the den of thieves. We then found a big cave, guarded by a man-tree-fungus, who informed us that it was Druidic territory, and we were not welcome. I said that we should leave as it was trespassing and Druids had property rights like anyone else—but the party attacked and easily defeated it. While remaining strongly opposed to any further trespass and murder (or at least, destruction of property), the party ignored me & so I decided to accompany them in, albeit reluctantly. The lizardman looked at all the fungus and mushrooms growing on the walls and decided to eat some, to no apparent effect. We found a hidden door to a room with bandit corpses; apparently there had been bandits—but they were all dead now.

Inhaling the spore. In the next room, we met and defeated 3 more fungus-corpses dressed up as bears, harlequins and dancers. Ice-bolts helped lock them down, but when destroyed, they emitted dangerous spore-clouds. Thinking about the Druid connection and comments that the bear was remarkably intelligent, we began wondering if the bear Bufo was a Druid, perhaps shapeshifting to make money or something, or the beastmaster himself, who we had not yet found.

Throwing stones at (floating) glass houses. In the next room, a cairn floated in the middle of an abandoned workshop. I had done a medicine check and ascertained that the lizardman was slightly sick and had dilated pupils, but otherwise seemed normal. Nevertheless, I was suspicious of him and the fungi & mushrooms, especially as the DM had passed him a note & he kept doing unexplained dice rolls. The lizardman tried casting arcane bolts at the cairn several times, to no effect, until suddenly it collapsed as we were walking around it, doing drastic damage to our HP. (Gee thanks.) The lizardman suggested to the party that they try eating the mushrooms; two of us agreed, while I and Chai Matcha said ‘hell naw!’

The Sleeper. We proceeded through two parallel corridors into the final room, which was a wide open cave. On a stone dais rested Bufo, who was extremely large, had an unusual dark mane, and was covered by fungus glowing in mystical organic patterns. We ordered Fred to place the fish by Bufo’s head; Bufo sniffed and began sleep-eating the fish. We debated whether to try to wake up Bufo or leash him or what, when a fungus-man screeched behind us that ‘I must not allow you to waken the Dreamer!’ and spawned 3 smaller fungi-creatures to fight for him. This turned out to be the beastmaster himself. (He had been hidden behind a hidden door in the second chamber, and we had failed to follow up on the DM’s hint about hearing ‘a metal slide close’.) Some icebeams kept him locked down.

Bringing down the roof‽ Suddenly, the lizardman stopped fighting and began casting arcane attacks at the fungus-covered wall! I logically concluded that he was being mind-controlled by the fungi and Bufo, and was attempting to enslave the rest of us, and that further, the linchpin of the whole system appeared to be Bufo. (The lizardman admitted later that he was just doing it to be Chaotic—because of course that ursinine player would…) In all contract law, it is understood that self-defense and survival supersede all contracts, and since we had already taken considerable damage, half the party turning on us 2 would spell certain doom, and our lawful priority was self-preservation; therefore, it was time to attack Bufo.

The unbearable lightness of being (a bear). I began using my crossbow on the sleeping Bufo, dealing substantial damage to the beastmaster, knocking out his minions, and all 3 party members who’d eaten the wall-snacks. With the traitors incapacitated, we were able to dispose of the beastmaster, and then stabilize them at 1HP. As they no longer seemed actively hostile, we approached Bufo. The 3 controlled members now had telepathic communication with Bufo, and interrogated him. It appeared he might have been a Druid, or some other sapient being, as he remembered not being a bear always (if not what he was); he had fled the circus with the beastmaster to this Druidic cave located on a ley line to recover and recuperate, and the fungi were the cave’s natural defenses. He was too weak and ill to resist us, and if we forced him to go back to the circus, he could not attack us.

No right to bear harms. As an expert on local animal law, I was well-aware that slavery was strictly forbidden and sapient creatures could not be property; therefore, bringing him back would constitute kidnapping, false imprisonment, and given his statements about ill-health, reckless endangerment and possibly even manslaughter. It is also a universal feature of contract law that there are no legally enforceable contracts to commit illegal acts; therefore, if Bufo was sapient, our contract was null and void, and the circus had no rights to Bufo. I concluded that the only legal option was to leave Bufo alone and return to report why the contract was illegal and dissolved. We would not make a profit on this trip, but that was our cross to bear.

An ending fur the best. My unaffected comrade agreed on the basis that it would be immoral to hurt an intelligent bear by bringing it back to the circus. And our 3 affected comrades agreed likewise—not feeling it necessary to mention that they were being mind-controlled by Bufo and if we had decided to bring him back, would have had a polar opposite reaction, fought us, and delivered us to a grizzly fate. (One was interested in the possibility of selling Bufo on the black-market as a quasi-Druid, knowing that rather ominous entities would pay thousands of gold pieces for him, but naturally I disagreed, and he was a sad panda.) We left the cave and Bufo in peace, and received ‘the favor of the Druids’.

Fin

Incidentally, the DM’s original scenario was some sort of ‘cave of the necromancer’ from Princes of the Apocalypse, but it turned out we’d spent so long in the city, pawsing the plot, that the original scenario wouldn’t work in the time (and patience) we had, so the DM simplified it on the fly to Bufo hibernating.

Highly Potent Drugs As Psychological Warfare Weapons

See main article.

Who Buys Fonts?

See main article.

Twitter Follow-Request UX Problems

See main article.

The Diamond Earrings

See main article.

Advanced Chess Obituary

As automation and AI advance in any field, they will first find a given task impossible, then gradually become capable of doing it at all, then eventually capable of doing it better than many or most humans who attempt it, and then better than the best human. But improvement does not stop there, as ‘better than the best human’ may still be worse than ‘the best human using the best tool’; so this implies a further level of skill, where no human is able to improve the AI’s results at all, but can only get in the way or harm them. We might call these different phases ‘subhuman’, ‘human’, ‘superhuman’, and ‘ultrahuman’.

The interesting thing about this distinction is that each level has different practical implications. At subhuman, the AI is unimportant and used largely in cases where performance doesn’t matter much or where a human is unusable for some reason (such as environment). Taking arithmetic as an example, Pascal’s calculator was ingenious but sold only a handful of units and made correspondingly little difference. Once the human phase is reached, then it may become an economic force to be reckoned with, as it will be better than many humans and have other advantages; this may prompt a global revolution in that field as all the humans adopt the new technology and use it to assist themselves. A calculator as fast & accurate as the median human at arithmetic will be better than the median human because it is more systematically reliable. Here the mechanical calculator can take off, with de Colmar’s Arithmometer selling millions of units. The calculator becomes a ‘complement’ to a human accountant or clerk, as it double-checks sums and by its reliability helps handle the escalating arithmetic needs of the industrial economy such as double-entry accounting & statistics for small businesses, corporations, researchers etc. At the superhuman level, no longer does any human do the task on their own except for learning purposes or debugging; those humans now focus on things like when the task should be done or from what perspective it should be described. The humans using it become more productive and more valuable and employment increases; they do not become unemployed because the accountant still needs to punch in the right numbers to the calculator to figure out the corporation’s balance, and the (human) computers are still executing more complex algorithms than the calculator understands even if the calculator is now doing all the arithmetic - a pile of fancy electro-mechanical calculators could not replace the human computers for the Manhattan Project, which needed many people (often women) to handle the full workflow to answer various questions the physicists set up, check the answers at a higher level for sanity, etc. At the ultrahuman level, the technology becomes autonomous in the sense that a human no longer contributes to it at all, and that occupation disappears. The calculator, having developed to the level of a programmable digital computer, now fully obsoletes the human computer; the ‘computer’ is fully unemployed and no longer exists as an occupation. The calculator has gone from ‘complement’ to ‘substitute’: it now fully replaces a computer. No human does arithmetic or square roots for a living, nor do they even double-check the arithmetic results, ‘computer’ now means exclusively programmable digital computers, and the lower-skilled parts of occupations which formerly involved much arithmetic cease to exist and people are now employed for higher-skilled roles - eg. accountants now specialize in international tax evasion rather than cranking through the balance sheet and verifying that all accounts balance to zero.

From the global perspective, the ultrahuman level is the ultimate goal of all technological development: to eliminate the need for any human involvement and allow unlimited expansion and efficiency. Arithmetic done by a human is expensive, and can be afforded only in limited circumstances like accounting; but as arithmetic becomes cheaper, it allows for ever more complicated and arithmetic-heavy things to be done. FLOP for FLOP, computing a business’s solvency in 1500 is far more valuable than calculating the critical mass of an atomic bomb in 1945, which is far more valuable than running agricultural statistics in 1960, which is far more valuable than animating a bouncing icon on your smartphone, but despite these steeply diminishing returns, humanity is vastly better off because arithmetic has become so cheap that unimaginable amounts of it can be used for anything at all. For the occupants of a particular occupation, on the other hand, things may not be so rosy; forced out of a job, they may have trouble finding another one, particularly if they are old or unexceptional. For them, the ideal phase was superhuman: the point at which their personal productivity was steeply increased by the technology and they can reap most of the gains, but where it’s not so good that it replaces them entirely.

Chess has gone through a similar sequence of technological improvement. Early mechanical automatons could not play a full game of chess at all. Alan Turing wrote the first computer chess program in 1951, inaugurating the subhuman phase; the subhuman phase could be said to have lasted until Mac Hack Six in 1967 or Chess 4.6 in 1977, so perhaps 26 years. The human phase saw the first defeat of a chess master in 1981 and the first defeat of a grandmaster in 1988, and then, as is well known, lasted until the 1997 victory of Deep Blue, for a total of 20 years. The superhuman phase begun in 1997 still allowed for the top grandmasters to occasionally beat a computer, depending on available hardware (not everyone could afford computing power on par with Deep Blue), rules and time settings (computers perform much better than humans at short time controls like blitz due to their calculating and tactical advantage in avoiding blunders), how well the human had prepared ‘anti-computer tactics’, luck etc, but it’s suggested that by 2004–2005, chess AIs were definitively superhuman and no human could systematically beat them.

However, the story does not end there. A grandmaster alone couldn’t defeat a chess AI, but what about a grandmaster assisted by another chess AI? Kasparov proposed this “advanced chess” variant and the first tournament was run in June 1998; Kasparov only drew with Topalov instead of crushing him, because of Topalov’s AI assistance. Both players played better than on their own. Indeed, in 2005–2007, the best advanced chess teams often had humans who weren’t all that great at playing chess themselves, but were great at compiling game databases and at watching the chess AIs evaluate moves while intuiting where the AIs were weak and should be overridden. These advanced chess games were likely among the best chess games ever played (along with the best correspondence chess games).

(In Go, the sequence of phases was the same but the timescale was vastly compressed: ~30 years of research produced systems that could defeat weak amateurs; then the introduction of Monte Carlo tree search beat strong amateurs over a decade and then low-ranked pros; then AlphaGo’s initial tests defeated a strong pro after a research program lasting a year or so, inspired by the promising but sub-MCTS performance of CNNs on Go board evaluation; with another half a year of progress it defeated a former world champion 4-1 while using a very large amount of hardware; and within another half year, as ‘Master’, it went 60-0 against pros & world champions in blitz matches on a single GPU-equivalent, and was expected to win both the world champion match and the team matches set up after the Master demonstrations.)

As such, advanced chess has been employed (particularly in a rash of books/op-eds ~2013) as an exemplar of what increasing technological development may imply: not technological unemployment, but increasing partnership. The rising tide will lift all ships.

However, if advanced chess is going to be used this way, we should remember that after the superhuman phase, comes the ultrahuman phase, and ask how long the superhuman phase lasts in which ‘advanced chess’ is possible. Advanced chess players generally admit that at some point humans will cease to be net contributors; when was that?

The subhuman to human phase lasted 26 years, and the human to superhuman phase took 20 years; splitting the difference, we might guess that the next phase, superhuman to ultrahuman, would take a similar amount of time, 23 years, and that would put the transition point at 2020. Alternatively, if each transition takes 6 years less than the last, so that the final transition takes 14 years, then it would happen in 2011.
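
The arithmetic behind those two guesses, spelled out (using only the phase lengths given above):

```python
# Two crude extrapolations of when the superhuman -> ultrahuman transition ends,
# using only the phase lengths quoted above.
subhuman_to_human   = 26   # ~1951-1977
human_to_superhuman = 20   # ~1977-1997
superhuman_start    = 1997

# Guess 1: the next phase takes the average of the previous two.
avg_phase = (subhuman_to_human + human_to_superhuman) / 2   # 23 years
print(superhuman_start + avg_phase)                         # -> 2020.0

# Guess 2: each phase is 6 years shorter than the last.
shrinking_phase = human_to_superhuman - 6                   # 14 years
print(superhuman_start + shrinking_phase)                   # -> 2011
```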

when did man vs machine become impossible?

  • Kasparov says the era of human competitiveness was ~10 years, 1994–2004: “Before 1994 and after 2004 these duels held little interest. The computers quickly went from too weak to too strong.”

  • 2005 Hydra: https://en.chessbase.com/post/adams-vs-hydra-man-0-5-machine-5-5

    Cowen 2013: “The last time a serious public contest was tried, in 2005, Hydra crushed Michael Adams 5.5 to 0.5. A half point indicates a draw, and Adams was lucky to come away with that one draw. In the other games he didn’t put up much of a fight, even though he was ranked number seven in the world at the time - among humans.”

https://en.wikipedia.org/wiki/Advanced_chess “advanced chess”/“cyborg chess”/“centaur chess”/“freestyle chess”

  1. first tournament June 1998: a month before, he’d trounced Topalov 4-0. But the centaur play evened the odds. This time, Topalov fought Kasparov to a 3-3 draw. Kasparov:

    Despite access to the “best of both worlds,” my games with Topalov were far from perfect. We were playing on the clock and had little time to consult with our silicon assistants. Still, the results were notable. A month earlier I had defeated the Bulgarian in a match of “regular” rapid chess 4-0. Our advanced chess match ended in a 3-3 draw. My advantage in calculating tactics had been nullified by the machine.

    TODO: Topalov & Kasparov ELOs in 1998

  2. Friedel 2000, of the first two advanced chess matches:

    In the following year [1999] Vishy Anand played against Anatoly Karpov. Both players were assisted during the game by ChessBase 7.0 and the chess engine Hiarcs 7.32. Karpov was quite inexperienced at operating a computer, while Anand happens to be one of the most competent ChessBase users on the planet. The result was that we were witness to an (unplanned) experiment of man and computer vs man. Karpov didn’t have a chance and was trounced 5:1 by his opponent. I am convinced that a player like Anand, using a computer to check crucial lines during the game, is playing at a practical level of over 3000 Elo points.

    TODO: Anand & Karpov ELOs in 1999

  3. Kramnik 2002 https://en.chessbase.com/post/kramnik-on-advanced-che-and-fritz

    How about a giant Internet qualification event including amateurs for next year’s Advanced Chess?

    Well, it makes sense. It is clear that with the computer the difference in playing strength is reduced. Normally I can beat a player of 2600 without great difficulty, but if we both have a computer it is already not so easy. At the highest level it is not just about understanding, the very top players are also better at everything - calculation, imagination. With the computer that is no longer so useful.

    TODO: what was Kramnik’s ELO in 2002 when he said this?

  4. 2005 freestyle tournament: Cowen 2013:

    ZackS defeated Russian grandmaster Vladimir Dobrov and his very well rated (2,600+) colleague, who of course worked together with the programs. Who was ZackS? Two guys from New Hampshire, Steven Cramton and Zackary Stephen, then rated at the relatively low levels of 1,685 and 1,398, respectively…Anson does not have any formal chess rating, but he estimates his chess skill at about 1,700 or 1,800 rating points, or that of a competent local club player. Nonetheless, he has done very well with his two quad-core laptops at the Freestyle level. Anson and his team would crush any grandmaster in a match. Against other teams, during one span of top-level play, Anson’s team scored twenty-three wins against only one defeat (and twenty-seven draws) across four Freestyle tournaments and fifty-one games.

    So in 2005, 2600+computer < ~1550+computer or ~1750+computer, implying the best teams had a human-part ELO advantage of >1050

    Dagh Nielsen estimated that the Freestyle teams were at least 300 Elo rating points better than the machines alone (a measurement of players’ relative skill levels), although that was a few years ago. Nelson Hernandez estimates a 100-150 rating point advantage, which is like the difference between the number one player in the world and the number seventy-five player.

    So ~2010 (“a few years ago”) Nielsen estimated the difference at 300; in 2013, Hernandez at 100–150.

  5. Nickel 2005, of the freestyle tournament:

    Just to give you clue, I would say that the strength of good Advanced Chess players must be something around Elo 3000. Frederic Friedel was right in his conclusions last year: “The level of play may be the highest ever seen at these time settings. There cannot be a doubt that a human player, even one of the top players in the world, would have no serious chance in such a field.”

    Rybka was the first chess engine to reach Elo 3000, by ~2006: http://chessok.com/?p=21214

  6. Vasik Rajlich 2009 https://rybkaforum.net/cgi-bin/rybkaforum/topic_show.pl?tid=10960 : the Rybka cluster may be slightly better than any other freestyle team (presumably due to its much greater computing capacity)

  7. Ken Regan, 2012: near-perfect play may be ~3600 ELO (http://www.cse.buffalo.edu/~regan/papers/pdf/RMH11b.pdf); with Komodo in 2016 at 3358 ELO and regular ELO improvements of ~50 per year (not just from computing power doubling; see eg. https://web.archive.org/web/20140926161932/https://en.chessbase.com/post/komodo-8-the-smartphone-vs-desktop-challenge ), that limit will be approached within ~5 years ((3600-3358)/50), rendering advanced chess obsolete; this sets an upper bound of ~2021 for chess engines on standard hardware, or 2016 chess engines with 32x Komodo’s computing power (32x is not even that unreasonable, when Amazon EC2 will rent you a x1.32xlarge instance with 128 CPUs for $12/hour)

  8. Cowen 2013, based on interviewing top Advanced Chess players/teams: “Today, the top Freestyle players fear that the next or maybe even the current generation of programs (eg. Rybka Cluster) will beat or hold even with the top Freestyle teams….Vasik Rajlich says that, to date, the gap between the programs and the top Freestyle teams has stayed more or less constant. The human element really does add something, at least for the time being, although he too wonders how long this will remain the case.”

  9. GM Hikaru Nakamura + Rybka (200 Elo weaker, on a 2008 laptop) vs 2014 Stockfish without opening/endgame books; Nakamura lost 3-1 https://www.chess.com/news/stockfish-outlasts-nakamura-3634

  10. InfinityChess 2014: “InfinityChess Freestyle Battle 2014” tournament articles+writeups http://www.infinitychess.com/News/Freestyle%20Chess : general attitude that it’s getting very difficult to beat a chess AI without extensive preparations, and a surprising number of games were decided by human screwups like mouse misclicks (!). Evaluation of engine vs centaur is given in “The Freestyle Battle 2014: Computer-based Chess with Houdini & Co”, Nickel 2014

    A few years ago, between 2005–2008, during the PAL/CSS-Freestyle-Tournaments on the ChessBase server, which offered prizes, this question was hotly debated, and answered with a “yes, but…”. The results were in favor of the centaurs despite some occasional spectacular success of the machines. Here preparation played a significant role, because the superiority of the centaurs was more marked in round robin tournaments than in open tournaments with short term pairings. Specific opening choices, time management, structural knowledge, positional feeling, and deep analysis of critical variations in advance (going into the variations) were cornerstones of the centaur-strategy, even though one had to concede that the computers achieved a relatively high number of draws, particularly so when playing with White. Today, at a time when computer developments are rapidly taking place, and a new generation of chess engines has changed the chess world, the question of the role humans play in this battle can no longer be answered that clearly. A lot of chess commentary and video-livestreams at tournaments, in which engines (mostly Houdini) run parallel to the games, sometimes create the impression that the chess engines know it all. What, then, can a human do? However, in reality things look rather different, as everybody who has ever tried to analyze positions in which several candidate moves of apparently rather equal value are possible with the help of a computer knows.

    Centaurs against pure engines

    In the Freestyle Battle 2014 the participants can choose every round whether they want to play as centaurs, which technically means having to enter the moves manually, or whether they let any UCI-engine play automatically (of course with a specifically prepared opening book). 16 of the 30 participants always play as centaurs; another 9 play mainly as centaurs, but in a few cases (when they had other obligations) employed an engine; 3 computer players occasionally tried their luck as centaurs, and only 2 players relied exclusively on the engines. Roughly speaking, 83% of the field are centaurs and 17% pure engine players. The engine players thus more often than not play against centaurs, and a third of all games are played between these two groups. Only in 10 of 265 games did computer programs play against each other.

    In the competition between centaurs and pure engines, which, however, does not affect the distribution of prizes, the centaurs lead 53.5 to 42.5 after 18 rounds: +24 / =59 / -13, which on average is one point in every ten games (5.5:4.5). In 54 of these 96 games the centaurs played with White, in 42 games they played with Black, and thus they had a certain advantage resulting from the random sequence of the games. This, however, might later be leveled. But the distribution of color is an important factor because the superiority of the centaurs strongly relies on the white pieces - of the 24 wins the centaurs scored against the engines, 20 were achieved with White. The engines scored 9 wins with White compared to 4 wins with Black. This means that according to the current trend centaurs have a 65% winning chance with White, but only a 45% winning chance with Black. This allows one to conclude that the advantage of the centaurs lies mainly in the exploitation of opening advantages, but that it almost vanishes if no opening advantage is gained.

    http://www.infinitychess.com/Page/Public/Article/DefaultArticle.aspx?id=141 “Freestyle Battle2014: Hours of decision”

    Engines 70 (46%) : Centaurs 82 (54%)

    Looks like 46% vs 54% translates to a centaur edge of roughly 30 Elo points over a 2014 Stockfish Elo estimate of 3247, using the linear performance-rating approximation: 3247 + 400*(82-70)/(82+70) ≈ 3279 (see the Elo helper sketched after this list)

  11. 2015 GM Daniel Naroditsky + Rybka vs Stockfish match: expected score of Stockfish was 0.799, actual score was 3 wins & 1 draw; Naroditsky apparently made no positive contribution (judging from his comments and how the score is consistent with the expected score predicted by the chess AIs’ ELOs) but was not experienced at advanced chess

  12. Internet Chess Club ran a “1st Ultimate Chess Championship” in October 2015 with 8 players. Of the 28 games in the finals, 25 were drawn, so it’s difficult to infer anything from the results. Of the 8 players, the top one was human+AI, the middle 6 were ~4 AIs (guessing from their profile pages; “jpsingh1972”, “ComputerGOD”?, “Blitz-Masta”) and 2 human+AIs (“RayJr”, “Bookbuilder”?), and the last place was human+AI (“Gaon”). The winner of the tournament was Alvin Alcala (“ENGINEMASTER”; as of 2014 he used Houdini 4/Komodo TCEC/Stockfish) with 1 win & 6 draws. Bobby Ang asks how:

    Wow! three decisive results out of 28 games. I asked my fellow-admins in the ICC why there were so many draws and they replied that it wasn’t for lack of trying. According to the admin who ran the tournament, the observation was that with all the powerful computer hardware and software around winning is basically impossible unless someone goes bonkers or is simply weak and has a bad computer or something…. Alvin’s secret is that he built a huge database collection of human games, correspondence chess games, computer engine games and freestyle chess games, and put together several “trees” of opening analysis, similar to the method described by Alexander Kotov in “Think like a Grandmaster”. Every move is a branch, and the possible replies to each move is a sub-branch.

    The few participants, small prizes, and high rate of draws suggests that the ‘Ultimate Chess Championship’ might not be drawing the best advanced chess players or represent the state-of-the-art, but to the extent it does, it suggests that the AIs are now on par with the humans as Alcala won by a single game and almost all were draws.

  13. InfinityChess has run a series of 8 monthly “Centaur Weekend Tourney” tournaments in 2015 for selection into a planned 2016/2017 freestyle tournament with human+AI, mixed, and AI-only participation. (The rules encourage either pure human+AI or pure engine play.) I can’t find the final results, but the cumulative rankings of the 30 players after the 7th are available: http://infinitychess.com/Page/Public/Article/DefaultArticle.aspx?id=262 Pure centaurs took the top rankings, with the engines playing remarkably well - in earlier tournaments one certainly didn’t find pure engines taking up almost the entire middle and threatening to enter the top 3. Ratings are on InfinityChess’s centaur-specific ELO scale, which in this tourney ranges 2300–2686; on this particular scale, centaurs average ~100 points higher (2607 (71.4) vs 2502 (65.5)), with a gap of 141 ELO between the best centaur & the best AI, and average points of 3.75 vs 3.375, or 0.8 wins vs 0.4. Centaurs won 8 of 70 games, and AIs won 8 of 140 games; a multilevel model estimates a base probability of winning a game as 4.3%, with being a centaur roughly doubling that to 8.7%.

    CWT PGNs: http://infinitychess.com/Download/8th%20Centaur%20Weekend%20Tourney%20(119%20games)_267.zip http://infinitychess.com/Download/7th%20Centaur%20Weekend%20Tourney%20(104%20games)_256.zip http://infinitychess.com/Download/6th%20Centaur%20Weekend%20Tourney%20(118%20Games)_255.zip http://infinitychess.com/Download/5th%20Centaur%20Weekend%20Tourney%20(114%20Games)_254.zip http://infinitychess.com/Download/4th%20Centaur%20Weekend%20Tourney%20(110%20Games)_253.zip http://infinitychess.com/Download/3rd%20Centaur%20Weekend%20Tourney%20(105%20Games)_252.zip http://infinitychess.com/Download/2nd%20Centaur%20Weekend%20Tourney%20(126%20games)_223.zip http://infinitychess.com/Download/1st%20Centaur%20Weekend%20Tourney%20(112%20games)_222.zip
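
Several of the back-of-the-envelope numbers in the list above lean on the standard Elo formulas (the expected score in item 11, the centaur-over-engine edge in item 10). A small helper for re-checking them; the specific ratings and scores plugged in are just the ones quoted above, and the linear performance-rating formula is the usual rule-of-thumb approximation rather than anything specific to these tournaments:

```python
# Standard Elo relationships used informally in the notes above.
def expected_score(r_a, r_b):
    """Expected score (win probability plus half the draw probability)
    of a player rated r_a against a player rated r_b."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def performance_rating(opp_rating, points_for, points_against):
    """Linear approximation of performance rating: opponents' rating plus
    400*(wins - losses)/games; since draws give half a point to each side,
    wins - losses equals points_for - points_against."""
    games = points_for + points_against
    return opp_rating + 400 * (points_for - points_against) / games

# Item 11: Naroditsky+Rybka (~3050) vs Stockfish 5 (~3290):
print(expected_score(3290, 3050))        # ~0.80; Stockfish actually scored 3.5/4

# Item 10: centaurs 82 : engines 70 over 152 games, engines estimated at ~3247:
print(performance_rating(3247, 82, 70))  # ~3279, i.e. a ~30-point centaur edge
```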


Kramnik 2002 https://en.chessbase.com/post/kramnik-on-advanced-che-and-fritz

How would you fare with the computer against a player like Leko, Topalov or Anand if they were not using a computer?

I would win, of course, and the other way around also. I cannot give you an exact performance rating, but it makes a huge difference. In classical chess it would probably be less profitable, but even there it makes a serious difference. In one-hour or 30 minute games it is absolutely decisive.

Kasparov said (after the first Advanced Chess match) that Topalov with a computer would crush him without a computer.

Yes, I agree. I never tried it, and I wouldn’t like to do so. Maybe somebody else can go for this experiment. I don’t know about “crush”. It depends on style. I think that my style is so solid that even if someone is playing with a computer I can fight. But only fight and lose with a respectable score.

…How about a giant Internet qualification event including amateurs for next year’s Advanced Chess?

Well, it makes sense. It is clear that with the computer the difference in playing strength is reduced. Normally I can beat a player of 2600 without great difficulty, but if we both have a computer it is already not so easy. At the highest level it is not just about understanding, the very top players are also better at everything - calculation, imagination. With the computer that is no longer so useful.

http://smarterthanyouthink.net/excerpt/ Smarter Than You Think, Clive Thompson:

In June 1998, Kasparov played the first public game of human-computer collaborative chess, which he dubbed “advanced chess,” against Veselin Topalov, a top-rated grand master. Each used a regular computer with off-the-shelf chess software and databases of hundreds of thousands of chess games, including some of the best ever played. They considered what moves the computer recommended; they examined historical databases to see if anyone had ever been in a situation like theirs before. Then they used that information to help plan. Each game was limited to sixty minutes, so they didn’t have infinite time to consult the machines; they had to work swiftly.

Kasparov found the experience “as disturbing as it was exciting.” Freed from the need to rely exclusively on his memory, he was able to focus more on the creative texture of his play. It was, he realized, like learning to be a race-car driver: He had to learn how to drive the computer, as it were - developing a split-second sense of which strategy to enter into the computer for assessment, when to stop an unpromising line of inquiry, and when to accept or ignore the computer’s advice. “Just as a good Formula One driver really knows his own car, so did we have to learn the way the computer program worked,” he later wrote. Topalov, as it turns out, appeared to be an even better Formula One “thinker” than Kasparov. On purely human terms, Kasparov was a stronger player; a month before, he’d trounced Topalov 4-0. But the centaur play evened the odds. This time, Topalov fought Kasparov to a 3-3 draw.

In 2005, there was a “freestyle” chess tournament in which a team could consist of any number of humans or computers, in any combination. Many teams consisted of chess grand masters who’d won plenty of regular, human-only tournaments, achieving chess scores of 2,500 (out of 3,000). But the winning team didn’t include any grand masters at all. It consisted of two young New England men, Steven Cramton and Zackary Stephen (who were comparative amateurs, with chess rankings down around 1,400 to 1,700), and their computers.

Why could these relative amateurs beat chess players with far more experience and raw talent? Because Cramton and Stephen were expert at collaborating with computers. They knew when to rely on human smarts and when to rely on the machine’s advice. Working at rapid speed - these games, too, were limited to sixty minutes - they would brainstorm moves, then check to see what the computer thought, while also scouring databases to see if the strategy had occurred in previous games. They used three different computers simultaneously, running five different pieces of software; that way they could cross-check whether different programs agreed on the same move. But they wouldn’t simply accept what the machine accepted, nor would they merely mimic old games. They selected moves that were low-rated by the computer if they thought they would rattle their opponents psychologically.

In essence, a new form of chess intelligence was emerging. You could rank the teams like this: (1) a chess grand master was good; (2) a chess grand master playing with a laptop was better. But even that laptop-equipped grand master could be beaten by (3) relative newbies, if the amateurs were extremely skilled at integrating machine assistance. “Human strategic guidance combined with the tactical acuity of a computer,” Kasparov concluded, “was overwhelming.”

Better yet, it turned out these smart amateurs could even outplay a supercomputer on the level of Deep Blue. One of the entrants that Cramton and Stephen trounced in the freestyle chess tournament was a version of Hydra, the most powerful chess computer in existence at the time; indeed, it was probably faster and stronger than Deep Blue itself. Hydra’s owners let it play entirely by itself, using raw logic and speed to fight its opponents. A few days after the advanced chess event, Hydra destroyed the world’s seventh-ranked grand master in a man-versus-machine chess tournament.

But Cramton and Stephen beat Hydra. They did it using their own talents and regular Dell and Hewlett-Packard computers, of the type you probably had sitting on your desk in 2005, with software you could buy for sixty dollars. All of which brings us back to our original question here: Which is smarter at chess - humans or computers? Neither. It’s the two together, working side by side.

Average is Over, Tyler Cowen 2013

As the programs improved, Freestyle chess circa 2004–2007 favored players who understood very well how the computer programs worked. These individuals did not have to be great chess players and very often they were not, although they were very swift at processing information and figuring out which lines of chess play required a deeper look with the most powerful programs. Today, the top Freestyle players fear that the next or maybe even the current generation of programs (eg. Rybka Cluster) will beat or hold even with the top Freestyle teams. The programs, playing alone without guidance, may not be so easy for the human to improve upon. If the program’s play is close enough to perfection, what room is there for the human partners to add wisdom?

…A series of Freestyle tournaments was held starting in 2005. In the first tournament, grandmasters played, but the winning trophy was taken by ZackS. In a final round, ZackS defeated Russian grandmaster Vladimir Dobrov and his very well rated (2,600+) colleague, who of course worked together with the programs. Who was ZackS? Two guys from New Hampshire, Steven Cramton and Zackary Stephen, then rated at the relatively low levels of 1,685 and 1,398, respectively. Those ratings would not make them formidable local club players, much less regional champions. But they were the best when it came to aggregating the inputs from different computers. In addition to some formidable hardware, they used the chess software engines Fritz, Shredder Junior, and Chess Tiger. The ZackS duo operated more like a frantic, octopus-armed techno disc jockey than your typical staid chess player, clutching his hands around his head in tectonic concentration. They understand their programs - and presumably themselves - very, very well.

Anson Williams is another top Freestyle player who doesn’t have much of a background in traditional chess. Anson, who lives in London, is a telecommunications engineer and software developer. A slim young man of Afro-Caribbean descent, he loves bowling and Johann Sebastian Bach. Fellow team member Nelson Hernandez describes Anson as laconic, very religious, and dedicated to his craft. Anson does not have any formal chess rating, but he estimates his chess skill at about 1,700 or 1,800 rating points, or that of a competent local club player. Nonetheless, he has done very well with his two quad-core laptops at the Freestyle level. Anson and his team would crush any grandmaster in a match. Against other teams, during one span of top-level play, Anson’s team scored twenty-three wins against only one defeat (and twenty-seven draws) across four Freestyle tournaments and fifty-one games. Along with Anson and Nelson Hernandez, the team is filled out by Yingheng Chen. In her late twenties, she is a graduate from the London School of Economics, not a traditional chess player at all, and now working in finance. She is Anson’s girlfriend and has learned the craft from him. Nelson Hernandez defended his passion for the game thus:

This may sound like easy work compared to OTB [over-the-board] chess but it really isn’t when you consider that your opponent can do the same things and thus has a formidable array of resources as well. It is also quite a trick to orchestrate all these things in real time so as to play the best possible chess….

My role… is rather specialized. During these tournaments I am minimally involved and spectacularly indolent as I watch Anson demolish his opponents. Between tournaments I am very actively involved in his opening preparation. This is paradoxical, actually, because I am not a chess player. I approach the game entirely from an analytic, computer-oriented point of view.

Anson, when playing, is in perpetual motion, rushing back and forth from one machine to another, as Freestyle chess is, according to team member Nelson, “all about processing as much computer information as rapidly as possible.” Vasik Rajlich, the programmer of Rybka, considers the top players to be “genetic freaks,” though he stresses that he means this in a positive manner; he is a top Freestyle player himself. He sees speed and the rapid processing of information as central to success in Freestyle. In his view, people either have it or they don’t. The very best Freestyle players do not necessarily excel at chess and they pick up their Freestyle skills rather rapidly, sometimes within twenty hours of practice. He refers to Dagh Nielsen, one of the top Freestyle players, as operating in a rapid “swirl” during a Freestyle game. Some players enter these events using a chess engine only, set on autopilot and not using any additional human aid. These “teams” do not take the top prizes, and they are looked down upon by the more enthusiastic partisans of Freestyle. Dagh Nielsen estimated that the Freestyle teams were at least 300 Elo rating points better than the machines alone (a measurement of players’ relative skill levels), although that was a few years ago. Nelson Hernandez estimates a 100-150 rating point advantage, which is like the difference between the number one player in the world and the number seventy-five player.

…Top American grandmaster Hikaru Nakamura was not a huge hit when he tried Freestyle chess, even though he was working with the programs. His problem? Not enough trust in the machines. He once boasted, “I use my brain, because it’s better than Rybka on six out of seven days of the week.” He was wrong.

…This Freestyle model is important because we are going to see more and more examples of it in the world. Don’t think of it as an age in which machines are taking over from humanity. After all, the machines embody the principles of man-machine collaboration at their core - even when they are playing alone…Secret teams. Board games. Code names. Does this all sound a little too much like child’s play? Could the Freestyle chess model really matter all that much? Am I crazy to think direct man-machine cooperation, focused on making very specific evaluations or completing very specific tasks, will revolutionize much of our economy, including many parts of the service sector? Could it really be a matter of life or death?…What are the broader lessons about the Freestyle approach to working or playing with intelligent machines? They are pretty similar to the broader lessons about labor markets from chapters two and three:

  1. Human-computer teams are the best teams.

  2. The person working the smart machine doesn’t have to be expert in the task at hand.

  3. Below some critical level of skill, adding a man to the machine will make the team less effective than the machine working alone.

  4. Knowing one’s own limits is more important than it used to be.

“Can A GM And Rybka Beat Stockfish?”, 2015-05-15, GM Daniel Naroditsky:

When Tyson wrote to me in May, he had the experiment planned out: I would play a four-game match against Stockfish 5 (currently rated 3290, 13 points above Houdini 4) using the 2008 version of Rybka (rated approximately 3050).

TODO: Naroditsky’s ELO

3 losses, 1 draw

3290 vs 3050 is a 240 Elo point difference (https://en.wikipedia.org/wiki/Elo_rating_system): 1 / (1 + 10^((3050-3290)/400)) = 0.799. “A player’s expected score is his probability of winning plus half his probability of drawing. Thus an expected score of 0.75 could represent a 75% chance of winning, 25% chance of losing, and 0% chance of drawing.” So Stockfish would be expected to score almost exactly as it did against Rybka+Naroditsky (3 victories, 1 loss or tie).
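
As a quick check of that arithmetic (a minimal sketch in R; the `elo_expected` helper and its name are mine, implementing only the standard Elo expectation formula quoted above):

```r
# Expected score of player A (rating a) against player B (rating b) under the Elo logistic model.
elo_expected <- function(a, b) 1 / (1 + 10^((b - a) / 400))

elo_expected(3290, 3050)
# ~0.799, i.e. ~3.2/4 expected points for Stockfish;
# the actual result (3 wins + 1 draw = 3.5/4) is close to that prediction.
```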

https://marginalrevolution.com/marginalrevolution/2013/11/what-are-humans-still-good-for-the-turning-point-in-freestyle-chess-may-be-approaching.html https://cse.buffalo.edu/~regan/chess/fidelity/FreestyleStudy.html https://rybkaforum.net/cgi-bin/rybkaforum/topic_show.pl?tid=25469

https://rjlipton.com/2012/05/31/chess-knightmare-and-turings-dream/

My work also hints that the Elo rating of perfect play may be as low as 3,600. This is not far-fetched: if Anand could manage to draw a measly two games in a hundred against any perfect player, the mathematics of the rating system ensure that the latter’s rating would never rise above 3,500, and if Gelfand could do it, 3,400. Perfect play on both sides is almost universally believed to produce a draw, even after a few small slips. All this raises a question:

As Ken Regan says, the present advantage of computers is roughly 400 elo points (which is an immense advantage).

https://en.wikipedia.org/wiki/Magnus_Carlsen peak rating 2,882; https://en.wikipedia.org/wiki/Chess_engine: “According to one survey,[citation needed] the top engines have been increasing in strength by an average of 67 Elo per year since 1986.” Highest chess engine rating: Komodo, 3,358.

1 / (1 + 10^((2882-3358)/400)) = 0.94

Another way to put it: Regan says that perfect play may be ~3,600 Elo. Komodo is then 242 points away, and historically chess engines increase at ~67 points per year, so perfect play would be reached in ~4 years; by definition, advanced chess then becomes pointless.
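
The same formula plus the quoted trend reproduces both numbers (a sketch; the ~3,600 perfect-play estimate and the 67 Elo/year rate are the figures quoted above, not mine):

```r
elo_expected <- function(a, b) 1 / (1 + 10^((b - a) / 400))  # standard Elo expectation

elo_expected(3358, 2882)   # Komodo vs. peak Carlsen: ~0.94 expected score
(3600 - 3358) / 67         # ~3.6 years for engines to close the 242-point gap to 'perfect play'
```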

“Time for AI to cross the human performance range in chess”, AI Impacts:

Progress in computer chess performance took:

  1. ~0 years to go from playing chess at all to playing it at human beginner level

  2. ~49 years to go from human beginner level to superhuman level

  3. ~11 years to go from superhuman level to the current highest performance

“The Chess Master and the Computer”, Kasparov 2010:

My hopes for a return match with Deep Blue were dashed, unfortunately. IBM had the publicity it wanted and quickly shut down the project. Other chess computing projects around the world also lost their sponsorship. Though I would have liked my chances in a rematch in 1998 if I were better prepared, it was clear then that computer superiority over humans in chess had always been just a matter of time. Today, for $50 you can buy a home PC program that will crush most grandmasters. In 2003, I played serious matches against two of these programs running on commercially available multiprocessor servers - and, of course, I was playing just one game at a time - and in both cases the score ended in a tie with a win apiece and several draws. …There have been many unintended consequences, both positive and negative, of the rapid proliferation of powerful chess software. Kids love computers and take to them naturally, so it’s no surprise that the same is true of the combination of chess and computers. With the introduction of super-powerful software it became possible for a youngster to have a top-level opponent at home instead of needing a professional trainer from an early age. Countries with little by way of chess tradition and few available coaches can now produce prodigies. I am in fact coaching one of them this year, nineteen-year-old Magnus Carlsen, from Norway, where relatively little chess is played.

The heavy use of computer analysis has pushed the game itself in new directions. The machine doesn’t care about style or patterns or hundreds of years of established theory. It counts up the values of the chess pieces, analyzes a few billion moves, and counts them up again. (A computer translates each piece and each positional factor into a value in order to reduce the game to numbers it can crunch.) It is entirely free of prejudice and doctrine and this has contributed to the development of players who are almost as free of dogma as the machines with which they train. Increasingly, a move isn’t good or bad because it looks that way or because it hasn’t been done that way before. It’s simply good if it works and bad if it doesn’t. Although we still require a strong measure of intuition and logic to play well, humans today are starting to play more like computers.

The availability of millions of games at one’s fingertips in a database is also making the game’s best players younger and younger. Absorbing the thousands of essential patterns and opening moves used to take many years, a process indicative of Malcolm Gladwell’s “10,000 hours to become an expert” theory as expounded in his recent book Outliers. (Gladwell’s earlier book, Blink, rehashed, if more creatively, much of the cognitive psychology material that is re-rehashed in Chess Metaphors.) Today’s teens, and increasingly pre-teens, can accelerate this process by plugging into a digitized archive of chess information and making full use of the superiority of the young mind to retain it all. In the pre-computer era, teenage grandmasters were rarities and almost always destined to play for the world championship. Bobby Fischer’s 1958 record of attaining the grandmaster title at fifteen was broken only in 1991. It has been broken twenty times since then, with the current record holder, Ukrainian Sergey Karjakin, having claimed the highest title at the nearly absurd age of twelve in 2002. Now twenty, Karjakin is among the world’s best, but like most of his modern wunderkind peers he’s no Fischer, who stood out head and shoulders above his peers - and soon enough above the rest of the chess world as well.

…This is not to say that I am not interested in the quest for intelligent machines. My many exhibitions with chess computers stemmed from a desire to participate in this grand experiment. It was my luck (perhaps my bad luck) to be the world chess champion during the critical years in which computers challenged, then surpassed, human chess players. Before 1994 and after 2004 these duels held little interest. The computers quickly went from too weak to too strong. But for a span of ten years these contests were fascinating clashes between the computational power of the machines (and, lest we forget, the human wisdom of their programmers) and the intuition and knowledge of the grandmaster.

…Having a computer partner also meant never having to worry about making a tactical blunder. The computer could project the consequences of each move we considered, pointing out possible outcomes and countermoves we might otherwise have missed. With that taken care of for us, we could concentrate on strategic planning instead of spending so much time on calculations. Human creativity was even more paramount under these conditions. Despite access to the “best of both worlds,” my games with Topalov were far from perfect. We were playing on the clock and had little time to consult with our silicon assistants. Still, the results were notable. A month earlier I had defeated the Bulgarian in a match of “regular” rapid chess 4-0. Our advanced chess match ended in a 3-3 draw. My advantage in calculating tactics had been nullified by the machine…This experiment goes unmentioned by Russkin-Gutman, a major omission since it relates so closely to his subject. Even more notable was how the advanced chess experiment continued. In 2005, the online chess-playing site Playchess.com hosted what it called a “freestyle” chess tournament in which anyone could compete in teams with other players or computers. Normally, “anti-cheating” algorithms are employed by online sites to prevent, or at least discourage, players from cheating with computer assistance. (I wonder if these detection algorithms, which employ diagnostic analysis of moves and calculate probabilities, are any less “intelligent” than the playing programs they detect.)

Lured by the substantial prize money, several groups of strong grandmasters working with several computers at the same time entered the competition. At first, the results seemed predictable. The teams of human plus machine dominated even the strongest computers. The chess machine Hydra, which is a chess-specific supercomputer like Deep Blue, was no match for a strong human player using a relatively weak laptop. Human strategic guidance combined with the tactical acuity of a computer was overwhelming.

The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.

https://en.chessbase.com/post/freestyle-tournament-advice-from-an-expert “Free survival tips for the $16,000 Freestyle Tournament” Arno Nickel March 2005

Garry Kasparov, the spiritual father of Advanced Chess, which he proposed in 1998, saw his views confirmed by the 1st Freestyle Tournament. He wrote about this in his New In Chess column (NiC Magazine 5/2005, p. 96f). “At first the results [of the PAL/CSS Freestyle tournament] seemed quite predictable. Even the strongest computers were eliminated by IMs and GMs using relatively weak machines to avoid blunders.”

Three semi-finalists were indeed grandmasters, working with computers, but the fourth player, ‘ZackS’, who eventually won the event, turned out to be two American amateurs, both rated under 1700. Kasparov points out that the ZackS team consistently played the Grunfeld and the Najdorf, using the machine to pick the right lines. “This could be the postmodern way to play the opening,” writes Kasparov, “using database statistics and the machine’s ‘instincts’. And they beat GMs with this technique.”

Just to give you a clue, I would say that the strength of good Advanced Chess players must be something around Elo 3000. Frederic Friedel was right in his conclusions last year: “The level of play may be the highest ever seen at these time settings. There cannot be a doubt that a human player, even one of the top players in the world, would have no serious chance in such a field.”

https://en.chessbase.com/post/a-history-of-cheating-in-chess-4 “A history of cheating in chess (4)” 2000 Frederic Friedel

“Playing Chess With the Devil”, Regan & Lipton:

Potentially computers can play a decisive role at the very highest levels of chess. This was made very clear to me during the Super GM tournament in Las Palmas in 1997…At this point Kasparov went into a deep think. Jan Timman started to speculate whether White couldn’t play the very forceful 20.g4…The game lasted six hours, Anand defended very tenaciously and at around 10 p.m., much to the disappointment of Kasparov, a draw was agreed. When he left the stage Garry spotted me and walked straight over. “I couldn’t win it, could I, Fred?” he asked, with a troubled look on his face. It was a bit shocking: the world champion and best player of all times consulting a chess amateur, asking for an evaluation of the game he has just spent six hours on! Naturally Garry wasn’t asking me, he was asking Fritz. He knew I would have been following the game with the computer. “Yes, you had a win, Garry. With 20.g4!” My answer vexed him deeply. “But I saw that! It didn’t work. How does it work? Show me.” He and Anand listened in horror while Juri dictated the critical lines…The next day Garry did an interview with the German magazine Der Spiegel. He spoke about “Advanced Chess”, a new concept he has developed, which involves playing games in real time with computer assistance. He used the game against Anand from the previous day to illustrate his point. This is what he had to say: “That game provides us with new arguments for Advanced Chess. If I had had a computer yesterday, I would give you the full line with 20.g4 within five minutes. Maybe less. I would enter g4 and check all the lines. I know where to go. It would give me the confidence to play moves like this. Can you imagine the quality of the games, the brilliancy one could achieve?”

In the time since those remarks there have been two Advanced Chess matches in León, Spain. In the first Kasparov was unable to defeat Bulgarian GM Veselin Topalov, who made efficient use of Fritz to defend against the world champion. The match ended in a 3:3 draw, although Kasparov had just demolished Topalov 5:1 in a match without computers. In the following year Vishy Anand played against Anatoly Karpov. Both players were assisted during the game by ChessBase 7.0 and the chess engine Hiarcs 7.32. Karpov was quite inexperienced at operating a computer, while Anand happens to be one of the most competent ChessBase users on the planet. The result was that we were witness to an (unplanned) experiment of man and computer vs man. Karpov didn’t have a chance and was trounced 5:1 by his opponent. I am convinced that a player like Anand, using a computer to check crucial lines during the game, is playing at a practical level of over 3000 Elo points.

One-on-one the programs totally dominate the humans - even on laptops programs such as Stockfish and Komodo have Elo ratings well above 3100 whereas the best humans struggle to reach even 2900 - but the human+computer “Centaurs” had better results than the computers alone. In the audience were representatives of defense and industrial systems that involve humans and computers. Ken got into freestyle chess not as a player but because of his work on chess cheating - see this for example. Freestyle chess says “go ahead and cheat, and let’s see what happens…” The audience was not interested in cheating but rather in how combining humans and computers changes the game. While chess programs are extremely strong players, they may have weaknesses that humans can help avoid. Thus, the whole point of freestyle chess is:

Are humans + computers > computers alone?

That is the central question. Taken out of the chess context it becomes a vital question as computers move more and more into our jobs and our control systems. The chess context attracts interest because it involves extreme performance that can be precisely quantified, and at least until recently, the answer has been a clear “Yes.”

http://www.infinitychess.com/Page/Public/Article/DefaultArticle.aspx?id=83 centaur-only Elo rankings

http://www.cse.buffalo.edu/~regan/papers/pdf/RBZ14aaai.pdf https://cse.buffalo.edu/~regan/chess/fidelity/FreestyleStudy.html

The PAL/CSS “Freestyle” dataset comprises 3,226 games played in the series of eight tournaments of human-computer tandems sponsored in 2005–2008 by the PAL Group of Abu Dhabi and the German magazine Computer-Schach und Spiele. All of these games were analyzed with Stockfish 4 in the same mode as for CEGT. …unfortunately no Freestyle events of comparable level and prize funds have been held between 2008 and a tournament begun by InfinityChess.com in February 2014 through April 10.

…Figure 9 measures that the two CEGT and three PAL/CSS data sources are respectively close to each other, that personal computer engines under similar playing conditions were significantly stronger in 2013 than in 2007-08, and that the human-computer tandems were significantly ahead of the engines playing alone even without aggregating the events together. The 2-sigma confidence intervals are the empirically-tested “adjusted” ones of (Regan and Haworth 2011); we show them to four digits although the rating values themselves should be rounded to the nearest 5 or 10.

- computer: CEGT 2007: 3009
- computer: CEGT 2008: 2963
- computer: CEGT all (2007–2008): 2985
- computer: TCEC (2013): 3083
- computer: Komodo 2016: 3358
- human-computer: PAL/CSS 2005: 3102
- human-computer: PAL/CSS 2006: 3086
- human-computer: PAL/CSS 2008: 3128
- human-computer: PAL/CSS all (2005–2008): Elo 3106

- (3083 - 2985) / (2013 - 2005.5) = 13 Elo points per year
- Komodo 2016 vs CEGT 2007: (3358 - 3009) / (2016 - 2007) = 38 Elo points per year
- PAL/CSS 2005 vs PAL/CSS 2008: (3128 - 3102) / (2008 - 2005) = 8.6 Elo points per year

baseline year 2008: computer trend: 2963 + 38x; human-computer trend: 3128 + 8.6x

2963 + 38x = 3128 + 8.6x; x = 5.6, or 2013.6

Or comparing to the 2014 InfinityChess event: 52 Elo difference between centaurs and engines, versus the 2008 difference (3128 PAL/CSS vs 2963 CEGT) of 165 Elo. Going from 165 to 52 over 2008–2014 implies the gap is shrinking by ~18 Elo points per year; so the remaining 52 Elo would take ~2.8 years, putting the crossover at ~2016.8.
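
A sketch of both crossover extrapolations in R, under the strong assumption (made above) that engine and centaur strength each improve linearly:

```r
# 1. Linear-trend crossover, baseline year 2008:
engine_rate  <- (3358 - 3009) / (2016 - 2007)   # Komodo 2016 vs CEGT 2007: ~38.8 Elo/year
centaur_rate <- (3128 - 3102) / (2008 - 2005)   # PAL/CSS 2008 vs 2005:     ~8.7 Elo/year
2008 + (3128 - 2963) / (engine_rate - centaur_rate)
# ~2013.5 (the rounded rates of 38 & 8.6 used above give ~2013.6)

# 2. Gap-shrinkage version: 165-Elo centaur advantage in 2008 -> 52 Elo in 2014:
shrink_rate <- (165 - 52) / (2014 - 2008)       # ~18.8 Elo/year
2014 + 52 / shrink_rate                         # ~2016.8
```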

2007 PAL/CSS group interview http://www.rybkachess.com/docs/freestylers_version_2.htm differences mentioned were 53%, 60%, 75%, 80%, and 100-150 Elo

Q. It’s pretty evident now that a top centaur combination is stronger than an unassisted engine. How would you quantify this difference? By what margin could you win a match against the newest Rybka running on your strongest machine and using Noomen’s latest RybkaII.ctg opening book?

Dagh Nielsen: I would not dare to try and quantify the difference in playing level. Personally, if I had an engine running on auto, I would be horrified about the risk that a type of position is entered that the machine simply will not play very well on its own. An additional advantage for the centaur is that he can identify critical situations. Perhaps at some point in a sharp game, the game will essentially be won, drawn or lost within a span of, say, 5 moves. Also, I don’t think pure engines can be expected to react properly on tricky novelties, and I am not really aware of any method to ensure that you will be the last one leaving preparation in every single game.

Arno Nickel: Knowing that Rybka plays with its own opening book might be a decisive advantage for the centaur. Further on the time-management of unassisted engines is one of their weaknesses. These two points should guarantee a score of about 66%. But without knowing Rybka’s opening book the score should not be more than 60%.

Jiri Dufek: I can predict, that Xakru team under this condition probably scored about 70-80% (match with minimum 12 games).

Nelson Hernandez: Quantifying this difference is difficult to do as a lot depends on the centaur’s technique (see #3 in previous paragraph). This is further muddled by the relative strength of each side’s opening book, hardware, access to EGTBs etc and other factors previously mentioned. However my general impression is that, all other things being equal, a top centaur has a 100-150 ELO advantage. You can calculate how that translates into a success rate. An indifferent centaur might have no advantage at all.

Nolan Denson: Centaurs don’t walk blindly into the well-known traps. Most programs, when following books made for them, do not always follow the best line. - 75%

Jochen Rindfleisch: As the most recent tournament has shown, only the best - or most lucky? - centaur teams perform better than a well adjusted engine. So - taking a wild guess now - I would hope for 53% in a Kaputtze/Rybka match, if using identical hardware for analysis.

Nick Carlin: 60-40 is my guess, I suspect that the centaur advantage will come especially in the end game, where human assistance can avoid losses and win games that engines might draw. An example is in some rook endings where the engine will push a pawn to the seventh rank and obtain a draw in a won position. The missed win is coming through leaving the pawn on the sixth rank and bringing the King up to support its coronation.

Eros Riccio: the advantage(s) of the combination Human(s) + engine(s) is obvious: In general, an engine alone is “stupid”. It plays random openings, wastes precious time on forced moves, stubbornly wants to play for the win, sometimes forcing the position and losing, when a draw would have been enough to qualify… so, relying on an automatic engine may be quite risky… a human instead, may control all those things which an engine can’t. The only advantage of automatic engine I can see, is with little time left in difficult positions, as there is no risk of mouse slips and losing on time.

Correspondence chess is converging to draw-death, with everyone drawing against everyone (implying humans no longer add any net advantage, or that their advantages and disadvantages offset, wins canceled by losses): https://x.com/LeoLjubicic66/status/1289682018851262465


  1. Notably, SF author Vernor Vinge, who died 2024-03-20 of Parkinson’s disease, apparently also declined cryopreservation (despite occasional use of cryonics as a plot device in his fiction, like A Fire Upon the Deep).↩︎

  2. I am reminded of Woody Allen’s gag about the child deeply upset to learn about the heat death of the universe: the humor derives from the fact that the child is correct to be upset, as the heat death is a terrible thing, but we are no more psychologically capable of caring about it proportionately to how bad it is than we are capable of caring about wars or factory farming or disease.↩︎

  3. This special masked guest can be a famous person, someone who is being honored or having a special day, or could be some sort of unique background or status not easily imitated.↩︎

  4. The more players, the more tokens are necessary to avoid either running out or having too many false-Celebrity winners.↩︎

  5. Because Guests turn into a false Celebrity after the first guess, they can only guess once. Adding a process for Celebrity → Guest would overly complicate the game, ruining the elegance.↩︎

  6. It is an interesting empirical question whether it is easier or harder for the true Celebrity to convince Guests than the false Celebrities, as it is something like a Keynesian beauty contest and the imitators may be more authentic than the original; regardless, it would be gauche to award him a prize.↩︎

  7. No longer available as of September 2024; they now sell only 6, 24, or 50-packs.↩︎

  8. It really is: even in the narrowest definition, there are RPS tournaments, manga & anime (even AI anime), card games…↩︎

  9. This efficiency might be related to base-3/e’s radix economy? Seems like an interesting coincidence.↩︎

  10. That the children were small elementary schoolers did not stop them from being dangerous: one little angel, in “demon mode”, had broken a teacher’s nose and been finally sent to a ‘locked’ school after hurling a desk so hard it damaged the cathedral-style ceiling; another one finally got sent there too because every day, when he got mad, he would walk up to his pregnant teacher’s belly and try to punch it—she remarked that even if they were developmentally-delayed, they had an eerie ability to sense fear & weakness in the staff.↩︎

  11. This is not to say that puppies or kittens do not experience any infant mortality. They do, as infections can kill entire litters, and young cats remain vulnerable to some issues (eg. the disease FIP—rare in adult cats, terminal in young cats). But if they survive, they will usually be in good health thereafter. Of my family’s ~6 dogs & cats while I was growing up, I cannot recall any of them ever just ‘getting sick’ the way we kids did. Generally, they all seemed to be enviably in good health every day, right up to their final decline.↩︎

  12. I was listening to an acquaintance, a retired school librarian, discussing her daughter’s first years as an elementary school teacher, and how she kept being absent for weeks due to illness. Wasn’t that a problem for her career? Oh no, she explained, it’s ordinary and expected that these teachers, especially in their first years, will be absent constantly and the school districts ensure ample supplies of substitute teachers. Getting sick is just part of the job.↩︎

  13. Contemporary horse-racing records are an upper bound. Peak horse speed has improved markedly over the past few centuries; Gardner2006’s win time-series starting in the 1846 Epsom Derby suggests horses have become roughly a quarter faster if we compare then vs. now, so medieval horses would be at least a quarter slower, suggesting they were more like 33–42MPH.↩︎

  14. The widespread claims that honey has been recovered from Egyptian tombs and is still liquid and edible after several thousand years turn out to be false.↩︎

  15. Depending on how trace a residue is acceptable as ‘edible’, it is possible collagen might count, as collagen has been detected in 96 million year & 195 million year old dinosaur bones, due to unusual chemical stability; but I think these are far too trace to count as potentially being a meal.↩︎

  16. Facebook critics’ output look a lot like Bitcoin’s reception: one way you could know in 2011 that Bitcoin was +EV to invest in was simply the sheer level of anger, irrationality, and ideological bloviating it induced in critics, critics who plainly hated Bitcoiners on a personal level as contemptible smelly basement-dwelling nerds & goldbugs and who exhibited shockingly poor epistemic standards (typically showing no interest in reading the whitepaper, or correcting their factual errors, or in not propagating myths, or blatantly reasoning in the form “I do not like the potential political consequences if this worked, therefore, it will not work”). Some of their criticisms may have been correct, but only in a stopped-clock sort of way: throw enough arguments at the wall, and some will stick. This behavior is useful for a trader to observe, because when an asset can only go up or down, and most opinions are held for bad reasons, then you don’t need to have any opinion on the asset itself or know what the ‘true’ price is—you can simply assume the current price is too low, because with binary questions, reversed stupidity is intelligence. (A pity one can’t buy ZUCK prediction market futures the way one could buy bitcoins!)↩︎

  17. Microsoft was eclipsed in the legal & public imagination by other tech companies and boogie men—note that despite a market cap of $2.2 trillion in February 2022, “techlash” discussion has conspicuously omitted any discussion of Microsoft monopolies or punitive regulation in favor of an obsessive focus on Amazon, Facebook, & Google.↩︎

  18. Much of the 1990s MS hatred was hatred of their technology, not their business or Bill Gates: worse than clunky or ugly, it had severe & user-visible deficiencies in reliability and security. Countless people must have had their computer corrupted by a worm or hit one too many BSODs one day, and developed a seething resentment, which then found an outlet.

    MS eventually turned over a new leaf, what people found loathsome about Bill Gates when they hated him became appealing quirks when they began looking for reasons to like him, and new generations of users grew up not knowing what insults like “M$” were about. (However, this probably won’t help Zuckerberg. Technically, Facebook works well, and always has. What people hate about Facebook is themselves. There are no programming languages to fix that.)↩︎

  19. Why 20, when Zuckerberg could, even with current medical tech, live another 50 years with all his marbles given his current SES and physical/mental health? Because the aging process still affects him, and a second act needs to be started before it’s too late and one gets tired & stuck in a rut. Examples like Bloomberg’s mayoralty, or Donald Trump bumbling ass-first into the presidency while shilling his books (to his surprise & horror) are exceptions that prove the rule.↩︎

  20. In the USA, ‘inverse probability’ throve as much as pure ‘probability theory’ throve in Russia; Bayesian statistics & decision theory in particular survived mostly because of their pragmatic utility.

    Bayesian statistics were employed by Alan Turing & I. J. Good for code-cracking during WWII, and researchers like Jimmy Savage or Hugh Everett III (see Many Worlds) spent much of their careers affiliated with institutions like RAND (Air Force) or MITRE or IBM or the US military itself, with notable applications being Kalman filters (for missiles & anything involving sensors of moving objects) or Bayesian search theory (finding the USS Scorpion) or game theory or operations research.

    On the other hand, Russians, despite the great mathematical gifts of researchers like Kolmogorov (as further proven by some pioneering work on topics like linear programming) & the personal safety found in being useful to the USSR’s military, generally contributed little, and there appeared to be ideological/philosophical opposition that helped shut down research on topics like cybernetics (for some background, see Red Plenty). This has not changed much now that Communism has fallen, or in the Russian STEM diaspora to the West, as far as I can tell. We are not surprised if, say, statistics research related to public opinion polling or measuring economic growth did not thrive in the USSR (much as it does not thrive in contemporary China), but what about all of the military-relevant statistics…?↩︎

  21. Marie-Dominique Chenu, Yves Congar, Edward Schillebeeckx, Henri de Lubac, Karl Rahner, Bernard Lonergan, Hans Urs von Balthasar, Hans Küng, Karl Wojtyła, and Joseph Ratzinger↩︎

  22. For an extended analysis of Gunbuster, see:

    ↩︎
  23. speculated to have visually influenced Evangelion’s Gendo Ikari because of the gloves↩︎

  24. Quoted in The bomb and the computer: wargaming from ancient Chinese mapboard to atomic computer, Wilson1969↩︎

  25. pg814 of my Seidensticker e-book:

    …Because the prince had gone there for his retreats, an occasional messenger came down from the monastery and, rarely, there was a note from the abbot himself, making general inquiries about their health. He no longer had reason to call in person. Day by day the Uji villa was lonelier. It was the way of the world, but they were sad all the same. Occasionally one or two of the village rustics would look in on them. Such visits, beneath their notice while their father was alive, became breaks in the monotony. Mountain people would bring in firewood and nuts, and the abbot sent charcoal and other provisions.

    “One is saddened to think that the generous flow of gifts may have ceased forever”, said the note that came with them.

    It was a timely reminder: their father had made it a practice to send the abbot cottons and silks against the winter cold. The princesses made haste to do as well.

    Rereading this passage, I think it could be defended as not crass based on the bit about how “the abbot sent charcoal and other provisions” - it could be that the cottons & silks are part of a barter exchange because they all are too elegant and high-class to engage in such déclassé merchant-like ‘buying’ or ‘selling’. I am not sure how realistic this is, given that textiles pre-Industrial-Revolution were extremely expensive goods, and even now a silk garment would cost many kilograms of charcoal (charcoal seems to cost ~$0.5/kg, while silk robes seem to rarely be <$50).↩︎

  26. This quote was recorded by Sir John Sinclair, 1st Baronet in pg390–391 of his letters/memoirs, The Correspondence v1:

    …I recollect, when I was lamenting to the Doctor [Adam Smith] the misfortunes of the American war, and exclaimed, “If we go on at this rate, the nation must be ruined”; he answered, “Be assured, my young friend [Sinclair was ~23 by Saratoga], that there is a great deal of ruin in a nation.”

    ↩︎
  27. I’m reminded of Davies’s ‘J-curve’ theory of revolution: people don’t revolt when things are bad, no matter how objectively bad they are; they revolt when they suffer a sudden disappointment, even if objectively things were getting better. This is germane to recent movements like the Arab Spring, and while I used to be skeptical of Peter Turchin’s grand theories of history, his phrase “elite overproduction” has become frighteningly prescient over the past decade.↩︎

  28. Remember when Japan had the most cutting-edge consumer electronics and Japanese tech was the subject of nerd technolust? I’m old enough to remember that. How disappointing it was to see Japan’s long-term investments in robotics prove useless in Fukushima.↩︎

  29. This is also true of new content in general; they are not a pure win, but impose additional costs on catalogers and collectors and libraries and whatnot. This is true even when they do not take a common name or word as their title, as lamentably many new works do. New works in general are hard to justify; see Culture is not about Esthetics.↩︎

  30. This is part of the so-called “obesity paradox”, where high BMI doesn’t appear as harmful in cross-sectional correlational data as it ought to be, because the thin/underweight may be so because of illness.↩︎

  31. In the 2009 LessWrong survey, 94 (73.4%) were consequentialists, and those who didn’t believe in morality were only one fewer than the deontologists! (There were 5 virtue ethicists, to cover the last major division of modern secular ethics.) The results were similar in the 2011 & 2012 surveys.↩︎

  32. “Is That What Love is? The Hostile Wife Phenomenon in Cryonics”, by Michael G. Darwin, Chana de Wolf, and Aschwin de Wolf 2008.↩︎

  33. “Normal Cryonics”, Eliezer Yudkowsky↩︎

  34. ‘Anne’, commenting on Overcoming Bias↩︎

  35. “Until Cryonics Do Us Part”, NYT, Kerry Howley↩︎

  36. C↩︎

  37. Sarokrae↩︎

  38. JS Allen, commenting on Katya Grace’s post on hostile wives, “Why do ‘respectable’ women want dead husbands?”↩︎

  39. Thom Blake↩︎

  40. “Why Men Are Bad At ‘Feelings’”, Robin Hanson↩︎

  41. I always wondered - suppose one cultivates a character of generosity, bravery, etc. How does that character decide? Virtue ethics seems like buck-passing to me.↩︎

  42. Charles Darwin, On the Origin of Species, (1st ed.)↩︎

  43. “Say Goodnight, Grace (and Julia and Emma, too)”. New York Times Magazine↩︎

  44. Wikipedia again↩︎

  45. The volume of a sphere is given by the equation V = (4/3)πr^3:
    1 AU = 1.496 × 10^8 kilometers
    30 AU = 30 × (1.496 × 10^8), or ~4.49 × 10^9 km
    55 AU = 55 × (1.496 × 10^8), or ~8.23 × 10^9 km
    So the shell is the volume of the outer sphere minus the inner sphere:
    (4/3)π × ((8.23 × 10^9)^3 − (4.49 × 10^9)^3), or ~1.95 × 10^30 km^3.↩︎
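
    A quick check of the filled-in arithmetic above in R (the rounded figures are mine; 1 AU = 1.496 × 10^8 km is the standard value):

    ```r
    au    <- 1.496e8                              # km per AU
    shell <- (4/3) * pi * ((55*au)^3 - (30*au)^3) # volume between the 30 AU and 55 AU spheres
    shell                                         # ~1.95e30 km^3
    ```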

  46. Modeling it as a binomial in R: table(sort(rbinom(365, size=10000, prob=1/(20*365.25))))↩︎

  47. Life is too short to understand everything to a meaningful level: specialization has far outstripped any individual’s ability or interest to understand it all, and such division of cognitive labor is required to create the modern world, even if this comes at the cost of any global understanding: Autonomous Technology: Technics-Out-Of-Control, Winner 1989:

    Society is composed of persons who cannot design, build, repair, or even operate most of the devices upon which their lives depend…In the complexity of this world people are confronted with extraordinary events and functions that are literally unintelligible to them. They are unable to give an adequate explanation of man-made phenomena in their immediate experience. They are unable to form a coherent, rational picture of the whole. Under the circumstances, all persons do, and indeed must, accept a great number of things on faith…Their way of understanding is basically religious, rather than scientific; only a small portion of one’s everyday experience in the technological society can be made scientific…The plight of members of the technological society can be compared to that of a newborn child. Much of the data that enters its sense does not form coherent wholes. There are many things the child cannot understand or, after it has learned to speak, cannot successfully explain to anyone…Citizens of the modern age in this respect are less fortunate than children. They never escape a fundamental bewilderment in the face of the complex world that their senses report. They are not able to organize all or even very much of this into sensible wholes….

    Do you know how to draw a bicycle?↩︎

  48. For readers who may think my phrasing is a bit hyperbolic, it’s worth mentioning that I had at this point spent several years researching darknet markets (although I had finished my work by releasing my darknet markets archive in 2015), and the FBI had in fact paid me a friendly (but unannounced) visit in March 2016.↩︎

  49. yes, I know they have lower bandwidth because they plug into a single port, but the USB trackball & keyboard will use up hardly any and the USB drives can run at near-full speed↩︎

  50. Ultracentrifugation since other approaches, like thermal diffusion or laser separation, are even more expensive and/or technically challenging.↩︎

  51. Shōtetsu; 59 ‘An Animal in Spring’; Unforgotten Dreams: Poems by the Zen monk Shōtetsu; trans. Steven D. Carter, ISBN 0-231-10576-2↩︎

  52. Fujiwara no Teika; pg 663 of Donald Keene (1999), Seeds in the Heart: Japanese Literature from Earliest Times to the Late Sixteenth Century, Columbia University Press, ISBN 0-231-11441-9↩︎

  53. Angus Deaton, “What does the empirical evidence tell us about the injustice of health inequalities?” (January 2011):

    Men die at higher rates than women at all ages after conception. Although women around the world report higher morbidity than men, their mortality rates are usually around half of those of men. The evidence, at least from the US, suggests that women experience similar suffering from similar conditions, but have higher prevalence of conditions with higher morbidity, and lower prevalence of conditions with higher mortality so that, put crudely, women get sick and men get dead, Case & Paxson2005.

    ↩︎
  54. Tristane Banon came forward only after the maid, and Khan’s calm behavior after the maid incident suggests he considered it routine. Both facts suggest that the ‘probability of public revelation’, if you will, is fairly low, and so we ought to expect numerous previous unreported such liaisons. (An analogy: a manager catches an employee stealing from the till. The employee claims this was the first time ever and he’ll be honest thenceforth. Should the manager believe him?)↩︎

  55. If this is such common knowledge, one wonders what the wives think; during sex scandals, they seem to remain faithful, when other women divorce over far less than such public humiliation. Why would Khan’s wife - the wealthy and extremely successful Anne Sinclair - remain linked with him? I’ve seen it suggested that such marriages are ‘open’ relationships, where neither party expects fidelity of the other, and like many aristocratic marriages of convenience, the heart of the agreement is to not be caught cheating. In Khan’s case, perhaps Sinclair judges him not fatally politically wounded, with still a chance at the French presidency. It is an interesting question how conscious such considerations are; Keith Henson has an evolutionary theory somewhat relevant - that women (in particular) can transfer their affections to powerful males such as captors to safeguard their future reproductive prospects.

    On the other hand, in June 2012, newspapers were reporting that Sinclair had separated from him, which is consistent with the interpretation that she felt it was her duty not to stab her husband in the back immediately but to wait for the scandal to die down. On the gripping hand, June 2012 is well after a crushing French Socialist defeat of Sarkozy on all political fronts; President Hollande’s victory & elevation could be seen as scuttling Strauss-Kahn’s future prospects for that exact position, and Sinclair’s separation merely a cold-blooded cutting of her losses on Strauss-Kahn. The latter seems less likely than the former, since I seem to recall a number of politicians’ wives waiting a discreet period before separating or divorcing.

    And of course, we can’t rule out less cynical explanations; for example, perhaps the wives are commendably optimistic about finding forgiveness for their wayward spouses & the chances of patching up their marriages, and it simply takes them that year or two to give up.↩︎

  56. National-level legislators usually being well-educated and well-off, when they are not mega-millionaires like John Kerry or millionaires like Barack Obama.↩︎

  57. Minorities and women being rare even now.↩︎

  58. “Married, With Infidelities”:

    In 2001, The Journal of Family Psychology summarized earlier research, finding that “infidelity occurs in a reliable minority of American marriages.” Estimates that “20-25% of all Americans will have sex with someone other than their spouse while they are married” are conservative, the authors wrote. In 2010, NORC, a research center at the University of Chicago, found that, among those who had ever been married, 14% of women and 20% of men admitted to affairs.

    Baumeister 2010, Is There Anything Good About Men? pg 242 puts it much higher:

    According to the best available data, in an average year, well over 90% of husbands remain completely faithful to their wives. In that sense, adultery is rare. Then again, if you aggregate across all the years, something approaching half of all husbands will eventually have sex with someone other than their wives…There are many sources on adultery and extramarital sex. The best available data are in Laumann, E. O., Gagnon, J. H., Michael, R. T., & Michaels, S. (1994). The social organization of sexuality: Sexual practices in the United States. Chicago, IL: University of Chicago Press. For an older, but thoughtful and readable introduction, see Lawson, A. (1988). Adultery: An analysis of love and betrayal. New York: Basic Books.

    Taormino 2008, Opening Up:

    There’s another [key] indicator that monogamous marriages and relationships aren’t working: cheating is epidemic. The Kinsey Report was the first to offer statistics on the subject from a large study published in 1953; it reported that 26% of wives and 50% of husbands had at least one affair by the time they were 40 years old. Other studies followed, with similar findings. According to the Janus Report of 1993, more than one-third of men and more than one-quarter of women admit to having had at least one extramarital sexual experience. 40% of divorced women and 45% of divorced men reported having had more than one extramarital sexual relationship while they were still married. In a 2007 poll conducted by MSNBC and iVillage, half of more than 70,000 respondents said they’ve been unfaithful at some point in their lives, and 22% have cheated on their current partner.

    ↩︎
  59. One interesting perspective is the rate of false paternity: a rough consensus of 4% of all children. (Cochran prefers estimates around the 1% range.) Since the per-sex-act pregnancy risk is estimated at 1-5% by various health sites we can ask the question of the Poisson distribution: what is the median number of sex-acts (outside the holy bounds of matrimony) for each of these children? The median is ~28 (this is a lower bound since not all pregnancies come to term). So if ~20% of the USA is children under 14, the US population is ~310m, ~4% of the children are misattributed, and each such child implies 28 illicit sex-acts, then we can give a loose lower bound of annual adultery per year to be ↩︎
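
    A sketch of the implied bound in R; the 2.5% per-act pregnancy probability (the middle of the quoted 1-5% range) and dividing by the 14-year window of ‘children under 14’ to annualize are my assumptions, while the other numbers come from this footnote:

    ```r
    p <- 0.025                         # assumed per-sex-act pregnancy probability (within the quoted 1-5%)
    acts <- qgeom(0.5, p) + 1          # median number of acts until conception: ~28, matching the text
    children <- 310e6 * 0.20           # ~20% of ~310m Americans are children under 14
    misattributed <- children * 0.04   # ~4% false-paternity consensus
    (misattributed * acts) / 14        # ~5m illicit sex-acts/year as a loose lower bound (assumes annualizing over 14 years)
    ```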

  60. From “Leaving Office Feet First: Death In Congress”:

    According to the most reliable estimate available, eight members of Congress have committed suicide (Eisele 1995). Amer (1989) reported only seven, but the 1925 suicide of Senator Joseph McCormick (R-IL), who overdosed on barbiturates, was subsequently made public (Miller 1992). Senator Lester Hunt (D-WY) is the only member to have killed himself in the Russell Office Building. He did so after supporters of Senator Joseph McCarthy (R-WI) threatened to publicize the arrest of Hunt’s son for committing homosexual acts in a Washington park unless Hunt withdrew from his 1954 re-election campaign - an incident that provided the inspiration for Allen Drury’s (1959) novel Advise and Consent.

    • Eisele, Albert 1995. “Members of Congress No Strangers to Violent Deaths”, The Hill (September 6)

    • Amer, Mildred 1989. “Members of the U.S. Congress Who have Died of Other Than a Natural Death While Still in Office: A Selected List”. Washington, DC: Congressional Research Service

    • Miller, Kristie 1992. Ruth Hanna McCormick: A Life in Politics, 1880–1944. Albuquerque: University of New Mexico Press

    If there are 535 members of Congress and each has a career of 22 years, and we look at the 88 years of 1925–2013, then that is >=8 suicides spread over 4 blocks of careers (535 × 4 = 2,140 member-careers), or a 0.37% suicide rate; but in general, suicide in the USA is a leading cause of death at around 40,000 deaths a year. Estimating total lifetime risk is harder, but some searching turned up Nock et al 2008 with estimates for US adults of suicide attempts somewhere around 1.9-8.7%, which, if another estimate is right that out of every 10 attempts 1 succeeds, may imply that the Congressional risk is higher than the general population’s (1.9%/10 = 0.19% < 0.37%) or lower (8.7%/10 = 0.87% > 0.37%), depending on which estimate one takes. For comparison, alcoholics have 2–3% lifetime risk