2017 News

Annual summary of 2017 Gwern.net newsletters, selecting my best writings, the best 2017 links by topic, and the best books/movies/anime I saw in 2017.

This is the 2017 summary edition of the Gwern.net newsletter (archives), summarizing the best of the monthly 2017 newsletters.

Previous annual newsletters: 2016, 2015.

Writings

Posts:

  1. The Kelly Coin-Flipping Problem: Exact Solutions via Decision Trees

  2. “‘Story Of Your Life’ Is Not A Time-Travel Story”

  3. “Banner Ads Considered Harmful”

  4. On the history of the tank/neural-net urban legend

  5. Efficiently calculating the average maximum datapoint from a sample of Gaussians

Site traffic (July 2017–January 2018) was up: 326,852 page-views by 155,532 unique users.

Media

Overview

AI/genetics/VR/Bitcoin/general:

AI: as I hoped in 2016, 2017 saw a re-emergence of model-based RL with various deep approaches to learning reasoning, meta-RL, and environment models. Using relational logics, doing planning over internal models, and zero/few-shot learning are no longer things “deep learning can’t do”. My selection for the single biggest breakthrough of the year was when AlphaGo racked up a second major intellectual victory with the demonstration by Zero that using a simple expert iteration algorithm (with MCTS as the expert) not only solves the long-standing problem of NN self-play being wildly unstable (dating back to failed attempts to extend TD-Gammon to non-backgammon domains in the 1990s), but also allows learning superior to the complicated human-initialized AlphaGos—in both wallclock time & end strength, which is deeply humbling. 2,000 years of study and tens of millions of active players, and that’s all it takes to surpass the best human Go players ever in the supposedly uniquely human domain of subtle global pattern recognition. Not to mention chess. (Silver et al 2017a, Silver et al 2017b.) Expert iteration is an intriguingly general and underused design pattern, which I think may prove useful, especially if people can remember that it is not limited to two-player games but is a general method for solving any MDP (a toy sketch follows below). The second most notable would be GAN work: Wasserstein GAN losses (Arjovsky et al 2017) considerably ameliorated the instability issues when using GANs with various architectures, and although WGANs can still diverge or fail to learn, they are not so much of a black art as the original DCGANs tended to be. This probably helped with later GAN work in 2017, such as the invention of the CycleGAN architecture (Zhu et al 2017), which accomplishes magical & bizarre kinds of learning such as learning, from unpaired horse and zebra images, to turn an arbitrary horse image into a zebra & vice-versa, or your face into a car or a bowl of ramen soup. “Who ordered that?” I didn’t, but it’s delicious & hilarious anyway, and suggests that GANs really will be important in unsupervised learning because they appear to be learning a lot about their domains. Additional demonstrations, like being able to translate between human languages given only monolingual corpora, merely emphasize that lurking power—I still feel that CycleGAN should not work, much less high-quality neural translation without any translation pairs, but it does. The path to larger-scale photorealistic GANs was discovered by Nvidia’s ProGAN paper (Karras et al 2017): essentially, StackGAN’s approach of layering several GANs trained incrementally as upscalers does work (as I expected), but you need much more GPU-compute to reach 1024x1024-size photos, and it helps if each new upscaling GAN is only gradually blended in, to avoid the random initialization destroying everything previously learned (analogous to transfer learning needing low learning rates or frozen layers). Time will tell if the ProGAN approach is a one-trick pony for GANs limited to photos. Finally, GANs started turning up as useful components in semi-supervised learning in the GAIL paradigm (Ho & Ermon 2016) for deep RL robotics. I expect GANs are still a while off from being productized or truly critical for anything—they remain a solution in search of a problem, but less so than I commented last year. Indeed, from AlphaGo to GANs, 2017 was the year of deep RL (subreddit traffic octupled).
Papers tumbled out constantly, accompanied by ambitious commercial moves: Jeff Dean laid out a vision for using NNs/deep RL essentially everywhere inside Google’s software stack, Google began fully self-driving services in Phoenix, while noted researchers like Pieter Abbeel founded robotics startups betting that deep RL has finally cracked imitation & few-shot learning. I can only briefly highlight, in deep RL, continued work on meta-RL & neural-net architecture search with fast weights, relational reasoning & logic modules, zero/few-shot learning, deep environment models (critical for planning), and robot progress in sample efficiency/imitation learning/model-based & off-policy learning, in addition to the integration of GANs à la GAIL. What will happen if every year from now on sees as much progress in deep reinforcement learning as we saw in 2017? (Suppose deep learning ultimately does lead to a Singularity; how would it look any different than it does now?) One thing missing from 2017 for me was use of very large NNs using mixtures of experts, synthetic gradients, or other techniques; in retrospect, this may reflect hardware limitations, as non-Googlers increasingly hit the limits of what can be iterated on reasonably quickly using just 1080 Tis or P100s. So I am intrigued by the increasing availability of Google’s second-generation TPUs (which can do training) and by discussions of multiple maturing NN-accelerator startups which might break Nvidia’s costly monopoly and offer hundreds of teraflops or petaflops at non-AmaGoogBookSoft researcher/hobbyist budgets.
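Since expert iteration is much simpler than its AlphaGo Zero pedigree suggests, a minimal sketch may help. This is emphatically not the AlphaGo Zero implementation: the toy chain MDP, the cheap rollout “expert” standing in for MCTS, and all the constants below are illustrative assumptions. Only the loop structure is the point—search improves on the current policy, and the policy is trained to imitate the search, giving the next round of search a stronger starting point.

```python
# A minimal tabular sketch of the expert-iteration design pattern:
# an "expert" improves on the current "apprentice" policy by search
# (here, cheap Monte Carlo rollouts instead of MCTS), and the
# apprentice is updated to imitate the expert. With a neural network,
# the imitation step would be supervised training on (state, expert
# action) pairs; in this tabular toy it is a direct copy.
import random

N_STATES, GOAL, ACTIONS, GAMMA = 10, 9, (-1, +1), 0.95

def step(s, a):
    """Deterministic chain MDP: reward 1 for reaching the goal state."""
    s2 = max(0, min(N_STATES - 1, s + a))
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def rollout_return(policy, s, depth=20):
    """Estimate the return from s by following the apprentice policy."""
    g, discount = 0.0, 1.0
    for _ in range(depth):
        a = policy[s] if random.random() > 0.1 else random.choice(ACTIONS)
        s, r, done = step(s, a)
        g += discount * r
        discount *= GAMMA
        if done:
            break
    return g

def expert_action(policy, s, n_rollouts=8):
    """One-step lookahead plus rollouts: a cheap stand-in for MCTS."""
    def q(a):
        s2, r, done = step(s, a)
        if done:
            return r
        return r + GAMMA * sum(rollout_return(policy, s2)
                               for _ in range(n_rollouts)) / n_rollouts
    return max(ACTIONS, key=q)

# Expert iteration: the apprentice imitates the expert, which itself
# searches on top of the improving apprentice -- self-play
# bootstrapping in miniature. Correct actions propagate back from
# the goal with each iteration.
apprentice = {s: random.choice(ACTIONS) for s in range(N_STATES)}
for _ in range(8):
    apprentice = {s: expert_action(apprentice, s) for s in range(N_STATES)}

print(apprentice)  # should settle on always moving right (+1)
```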

Genetics in 2017 was a straight-line continuation of 2016: the UKBB dataset came online and is fully armed & operational, with exomes now following (and whole-genomes soon), resulting in the typical flurries of papers on everything which is heritable (which is everything). Genetic engineering had a banner year between CRISPR and older methods in the pipeline—it seemed like every week there was a new mouse or human trial curing something or other, to the point where I lost track and the NYT has begun reporting on clinical trials being delayed by lack of virus-manufacturing capacity. (A good problem to have!) Genome synthesis continues to greatly concern me, but nothing newsworthy happened in 2017 other than, presumably, continuing to get cheaper on schedule. Intelligence research did not deliver any particularly amazing results, as the SSGAC paper has apparently been delayed to 2018 (with a glimpse in Plomin & von Stumm 2018), but we saw two critical methodological improvements which I expect to yield fruit in 2017–2018: first, as genetic-correlation researchers have noted for years, genetic correlations should be able to boost power considerably by correcting for measurement error & increasing effective sample size through appropriate combination of polygenic scores, and MTAG demonstrates this works well for intelligence (Hill et al 2017b increases the PGS to ~7% & Hill et al 2018 to ~10%); second, Hsu’s lasso predictions were proven true by Lello et al 2017, demonstrating the creation of a polygenic score explaining most SNP heritability & predicting 40% of height variance. Using these two simultaneously with SSGAC & other datasets ought to boost IQ PGSes to >10%, and possibly much more. Perhaps the most notable single development was the resolution of the long-standing dysgenics question using molecular genetics: has the demographic transition in at least some Western countries led to decreases in the genetic potential for intelligence (mean polygenic score), as suggested by most but not all phenotypic analyses of intelligence/education/fertility? Yes: in Iceland/USA/UK, dysgenics has indeed done that on a meaningful scale, as shown by straightforward calculations of mean polygenic score by birth decade & genetic correlations. More interestingly, the increasing availability of ancient DNA allows for preliminary analyses of how polygenic scores change over time: over tens of thousands of years, human intelligence & disease traits appear to have been slowly selected against (consistent with most genetic variants being harmful & under purifying selection), but that trend reversed at some relatively recent point.
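To make the lasso point concrete, here is a toy simulation of the underlying idea (not Lello et al 2017’s actual pipeline; the sample size, SNP count, heritability, and penalty below are all invented for illustration): given enough samples relative to the number of truly causal variants, L1-penalized regression on raw SNP counts yields a polygenic score capturing most of the simulated additive heritability.

```python
# Toy lasso polygenic score: simulate an additive trait from SNP
# minor-allele counts, fit L1-penalized regression, and check how
# much held-out phenotypic variance the resulting score explains.
# All parameters are illustrative assumptions, not UKBB values.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, n_causal, h2 = 4000, 2000, 100, 0.5  # samples, SNPs, causal SNPs, heritability

X = rng.binomial(2, 0.3, size=(n, p)).astype(float)  # 0/1/2 allele counts
beta = np.zeros(p)
beta[:n_causal] = rng.normal(size=n_causal)          # sparse causal effects
g = X @ beta                                         # true genetic values
g = (g - g.mean()) / g.std()
y = np.sqrt(h2) * g + np.sqrt(1 - h2) * rng.normal(size=n)

train, test = slice(0, 3000), slice(3000, None)
model = Lasso(alpha=0.02, max_iter=5000).fit(X[train], y[train])
pgs = model.predict(X[test])                         # the "polygenic score"
r2 = np.corrcoef(pgs, y[test])[0, 1] ** 2
print(f"held-out variance explained: {r2:.2f} (ceiling = h2 = {h2})")
```

On toy data like this the held-out variance explained approaches the h² ceiling; the real point of Lello et al 2017 is that height prediction needed hundreds of thousands of genotyped samples before the same sparse-regression phase transition kicked in.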

For 2016, I noted that the main story of VR was that it hadn’t failed & was modestly successful; 2017 saw the continuation of this trend as VR climbs into its “trough of productivity”—the media hype has popped, and VR just kept succeeding and building up an increasingly large library of games & applications while the price continued to drop dramatically (as everyone should have realized it would, but didn’t), with the Oculus now ~$300. So much for “motion sickness will kill VR again” or “VR is too expensive for gamers”. Perhaps the major surprise for me was that Sony’s quiet & noncommittal approach to its headset (which made me wonder if it would be launched at all) masked a huge success, as PSVR has sold into the millions of units and is probably the most popular ‘real’ VR solution despite its technical drawbacks compared to the Vive/Oculus. There continues to be no killer app, but the many upcoming hardware improvements like 4K displays, wireless headsets, or eye tracking + foveated rendering will keep increasing quality while prices drop and libraries build up; if there is any natural limit to the VR market, I haven’t seen any sign of it yet. So for 2018–2019, I wonder whether VR will simply continue to grow gradually, with mobile smartphone VR solutions eating the lunch of full headsets, or whether there will be a breakout moment where price, quality, library, and a killer app hit a critical combination.

Bitcoin underwent one of its periodic ‘bubbles’, complete with the classic accusations that this time Bitcoin will surely go to zero, the fee spikes mean Bitcoin will never scale (“nobody goes there anymore, it’s too popular”), people can’t use it to pay for anything, it’s a clear scam because of various people’s foolishness like taking out mortgages to gamble on further increases, Coinbase is run by fools & knaves, random other altcoins have bubbled too & will doubtless replace Bitcoin soon, Bitcoin has failed to achieve any libertarian goals and is now a plaything of the rich, people who were wrong about Bitcoin every time from $1 in 2011 to now will claim to be right morally, the PoW security is wasteful, etc—one could copy-paste most articles or comments from the last bubble (or the one before that, or the one before that) into this one with no change other than the numbers. As such, while I have benefited from it, there is little worth saying about it other than to note its existence with bemusement, and to reflect on how far Bitcoin & cryptocurrencies have come since I first began using them in 2011: even if Bitcoin goes to zero now, it has unleashed an incredible Cambrian explosion of cryptography applications and economics crossovers. Cryptoeconomists are going to spend decades digesting proof-of-work, proof-of-stake, slashing, Truthcoin/HiveMind/Augur, zk-SNARKs and zk-STARKs, Mimblewimble, TrueBit, scriptless scripts & other applications of Schnorr signatures, Turing-complete contracts, observed cryptomarkets like the DNMs… You can go through Tim May’s Cyphernomicon and find that each section corresponds to a project made possible only via Bitcoin’s influence. Bitcoin had more influence in its first 5 years than Chaum’s digital cash has had in 30. Cryptography will never be the same. The future’s so bright I gotta wear mirrorshades.

A short note on politics: Donald Trump’s presidency and its backlash, in the form of Girardian scapegoating (sexual-harassment scandals & social-media purges), have received truly disproportionate coverage and have become almost an addiction. They have distracted from important issues and from important facts, like 2017 being one of the best years in human history, the many scientific & technological breakthroughs like genetic engineering or AI, and global & US economic growth. Objectively, Trump’s first year has been largely a non-event; a few things were accomplished, like packing the federal courts and passing a bizarre tax bill, but overall not much happened, and Trump has not lived up to the apocalyptic predictions & hysteria. If the next 3 years are similar to 2017, one would have to admit that Trump as president turned out better than George W. Bush!

Books

Nonfiction:

  1. Selected Non-Fictions, Jorge Luis Borges (July selection)

  2. Site Reliability Engineering: How Google Runs Production Systems (January review)

  3. The Playboy Interview: Volume II

  4. Artificial Life, Levy

  5. Possible Worlds and Other Essays, J.B.S. Haldane 1927

  6. Annual Berkshire Hathaway letters of Warren Buffett

  7. Tokyo: A Certain Style

  8. The Grand Strategy of the Roman Empire: From the First Century CE to the Third, Luttwak 2016

  9. Moon Dust: In Search of the Men Who Fell to Earth, Smith 2005

Fiction:

  1. Unsong, Scott Alexander

  2. Unforgotten Dreams: Poems by the Zen monk Shōtetsu, Steven D. Carter

  3. The Anubis Gates, Tim Powers (January review)

  4. Sunset in a Spiderweb: Sijo Poetry of Ancient Korea, Baron & Kim 1974

TV/movies

Nonfiction movies:

  1. Amy (2015; June review)

  2. The Great Happiness Space (November review)

Fiction:

Anime:

  1. The Tale of the Princess Kaguya (2013; review)

  2. Kubo and the Two Strings (2016; January review)

  3. Fullmetal Alchemist: Brotherhood (May review)

  4. Zootopia