
Scientific stagnation

Shrinking marginal returns to science and technology

One common objection to the idea of a Technological Singularity is that any exponential increase in something like computing power may be offset by a corresponding increase in the difficulty of the remaining scientific & practical problems, leading to essentially a stalemate and an indefinite continuation of the status quo.

This is definitely a concern, and one widely discussed under various terms like “the great stagnation”, “picking the low-hanging fruit”, and “diminishing marginal returns”. I think there’s a lot of force to the S-curve view of things and it’s one of the most plausible reasons that there might be no Singularity in a sense we’d recognize now.

There are a few difficulties in grappling with this proposal:

  1. it’s not clear that this defeats all forms of Singularity; for example, right now humans are essentially completely un-selfmodifying. A healthy genius today is smart in basically the same ways as a healthy genius 1500 years ago. Sure, we have things like Google and computers to help them out a lot, but the difference doesn’t seem that big. When we finally get full AI, it’s not clear that it will be a simple extrapolation of current humans, subject to the same diminishing returns, and so essentially business as usual. It may well be wildly different and an entirely new S-curve of its own, which will peter out, if it ever does, in a wildly different post-human regime. This is a way in which marginal returns could be diminishing and the S-curve model of technological improvement true, and yet a Singularity happen just as predicted by the Vingean and Yudkowskian schools.

  2. on its own terms, it makes few unconditional predictions: maybe problem difficulty is increasing exponentially, but the resources being devoted to the problem are increasing exponentially too - different exponents, or different constant factors. There’s a billion Chinese ‘coming online’ for R&D, one might say, and that compensates for a lot.

  3. the literature is big, the proxies for return on investments poor, and it’s hard to get a grasp on it all.

I was reading up on the subject and wrote a little bit here, but I eventually realized I couldn’t keep up with all the relevant papers or contribute anything new, so I abandoned the topic. My notes are left below for those interested in the topic.

Diminishing Marginal Returns

The Long Stagnation thesis can be summarized as: “Western civilization is experiencing a general decline in marginal returns to investment”. That is, every $1 or other resource (such as ‘trained scientist’) buys less in human well-being or technology than before, aggregated over the entire economy.

This does not imply any of the following:

  1. No exponential curves exist (rather, they are exponential curves which are part of sigmoids which have yet to level off; Moore’s law and stagnation can co-exist)

    Sudden dramatic curves can exist even amid an economy of diminishing marginal returns; to overturn the overall curve, such a spike would have to be a massive society-wide revolution that can make up for huge shortfalls in output.

  2. Any metrics in absolute numbers have ceased to increase or have begun to fall (patents can continue growing each year if the amount invested in R&D or number of researchers increases)

  3. We cannot achieve meaningful increases in standards of living or capabilities (the Internet is a major accomplishment)

  4. Specific scientific or technological goals will not be achieved (eg. AI or nanotech), or will not be achieved by certain dates

  5. The stagnation will be visible in a dramatic way (eg. barbarians looting New York City)

The stagnation thesis instead suggests that naive forecasts have under-appreciated error-bars, which cannot be reduced without careful consideration. It suggests that we may under-estimate the chance of periods of essentially no progress, like Euclidean geometry or astronomy between the first century BC and the Renaissance, periods where the forces that were able to overcome diminishing returns suddenly hit hard brick walls1.

In particular, if the stagnation thesis is true and the overall landscape of science/technology looks like broad slow gradual improvements with occasional sigmoids blasting up exponentials and puttering out, as opposed to a more techno-optimistic Kurzweilian scenario of ‘accelerating returns’ over much of the science/technology landscape, then we should assign more weight to scenarios in which technologies are ‘imbalanced’; for example, a scenario where the Moore’s law sigmoid does not flatten out until 2040 but the AI software curve continues to follow a gradual increase poses a serious ‘hardware overhang’ risk as enormous computing power sits around waiting for the first AI program just barely well-designed enough to make use of it and fix its primitive algorithms to make proper use of said computing power. An imbalance favoring either hardware or software may be disastrous: if the luck of the sigmoid draw favors hardware, then the first primitive program to come along will win all the marbles; if chance instead sends software up on a sigmoid rocket, then this incentivizes an arms race to assemble enough computing power to run one’s faithful militarized AI and win all the marbles before another actor can run their slavishly loyal AI. (Whereas in a Kurzweilian scenario of interlocking feedback loops, the AI program would be developed roughly around the same time a computer powerful enough to run it at all is developed, and any overhang potential will be limited compared to the other scenarios.) Damien Broderick:

What if, as Vernor Vinge proposed, exponentially accelerating science and technology are rushing us into a Singularity (Vinge, 1986; 1993), what I have called the Spike? Technological time will be neither an arrow nor a cycle (in Stephen Jay Gould’s phrase), but a series of upwardly accelerating logistical S-curves, each supplanting the one before it as it flattens out. Then there’s no pattern of reasoned expectation to be mapped, no knowable Chernobyl or Fukushima Daiichi to deplore in advance. Merely - opacity.

The stagnation thesis is as big as history, and has a long literature of ‘declinism’, eg. Spengler’s 1918 The Decline of the West, and so it is tempting to take the Outside View and mock it as obviously falsified - but this is where the caveats about marginal returns come into play. To give a simple example: world population in 1900 was around ~1.6 billion, and in 2000 ~6 billion, an increase by a factor of 3.75. Plausibly, the populations of educated scientists and other such people increased even more2 (the fraction of the American populace going to college in 1900 was, shall we say, smaller than in 2000). So even if the 2000 scientists were shockingly 50% less ‘productive’ than their 1900 counterparts, because there are >3.75 times as many, we will still witness >1.8x as much productivity. It would be very easy to simply compare 2000 and 1900 and say talk of stagnation is ludicrous. So we see the burden ‘marginal returns’ puts on us: we need to be constantly adjusting any absolute figures - which are hard enough to come by - for the size of the relevant population.
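To make that adjustment explicit, here is a minimal sketch of the arithmetic (the 50% per-scientist decline is the hypothetical from the paragraph above, not a measured figure):

```python
# Hypothetical worked example of population-adjusting productivity figures.
pop_1900, pop_2000 = 1.6e9, 6.0e9            # world population, 1900 vs 2000
growth = pop_2000 / pop_1900                 # ~3.75x more people (and plausibly at least that many more scientists)
per_scientist_productivity_2000 = 0.5        # assume each 2000 scientist is only 50% as 'productive' as in 1900

total_output_ratio = growth * per_scientist_productivity_2000
print(f"population ratio: {growth:.2f}x")                   # 3.75x
print(f"total output vs 1900: {total_output_ratio:.2f}x")   # ~1.88x: absolute output rises even as marginal returns fall
```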

When we seek to measure stagnation, we have several main areas of interest:

  1. pure science

    • cost:

      • persons per breakthrough or paper or other metric

      • dollars per metric

      • age at discovery

    • benefit:

      • judgement of contemporaries: citations & awards

      • judgement of posterity: historiometry

  2. commercialization

    • cost:

      • persons per result

      • R&D budgets per result

    • benefit:

      • sales

      • increases in metrics of quality of life (eg. lifespan)

  3. general economics

    • growth rates

    • productivity changes

With this schematic, we can slot results in nicely. For example, Charles Murray’s Human Accomplishment engages in historiometry, finding that on a population-adjusted (per capita) basis, the most productive period for the sciences was the late 1800s; his results would be listed under pure science, benefit, judgement of posterity, since he uses only references written after 1950 about people & results before 1950 (to control for the obvious biases). On the other hand, Tyler Cowen’s economic citations in The Great Stagnation would go into general economics, while Joseph Tainter’s Collapse of Complex Societies will straddle all 3 categories but mostly economics.

Science

Cost

All else equal, the less time it takes a scientist to make a breakthrough, the cheaper the breakthrough is - less opportunity cost to him, the sooner others can build on it, less maintenance & depreciation, less effort required to cover prerequisites and get up to speed, etc. Murray unfortunately does not give us a deep history of ‘age of greatest accomplishment’, but we can still look at the 20th century’s Nobel Prizes (eg. Chemistry ages); Jones & Weinberg 2011 matched winners with their age when they performed the award-winning work:

A study of Nobel Laureates 1901–2008 in these three fields examined the age at which scientists did their prize-winning work. Results showed that before 1905, about two-thirds of winners in all three fields did their prize-winning work before age 40, and about 20% did it before age 30. But by 2000, great achievements before age 30 nearly never occurred in any of the three fields. In physics, great achievements by age 40 only occurred in 19% of cases by the year 2000, and in chemistry, it nearly never occurred….Earlier work on creativity in the sciences has emphasized differences in the ages when creativity peaks across various scientific disciplines, assuming that those differences were stable over time, Weinberg said. But this new work suggests that the differences in the age of creativity peaks between fields like chemistry and physics are actually quite small compared to the differences in creativity peaks between time periods within each discipline…For the study, the researchers analyzed the complete set of 525 Nobel Prizes given between 1901 and 2008 in the three fields - 182 in physics, 153 in chemistry and 190 in medicine. Through extensive historical and biographical analysis, they determined the ages at which each Nobel Prize winner produced their prize-winning work. In general, there was an aging pattern over the 20th century as to when scientists made their breakthrough discoveries, although there were differences between the three fields. The most interesting case is physics, Weinberg said. In physics, there was an especially notable increase in the early 20th century in the frequency of young scientists producing prize-winning work. The proportion of physicists who did their prize-winning work by age 30 peaked in 1923 at 31%. Those that did their best work by age 40 peaked in 1934 at 78%. The proportion of physicists under age 30 or 40 producing Nobel Prize-winning work then declined throughout the rest of the century…Another reason that younger scientists may have made more significant contributions early in the 20th century is that they finished their training earlier in life. The majority of Nobel Laureates received their doctoral degrees by age 25 in the early 20th century, the researchers found. However, all three fields showed substantial declines in this tendency, with nearly no physics or chemistry laureates receiving their degrees that early in life by the end of the century. In another analysis, the researchers examined the age of studies referenced in important scientific papers in the three fields through the 20th century. They found that in the early part of the 1900s – the time when quantum mechanics made its mark – there was a strong tendency for physics to reference mostly recent work. “The question is, how much old knowledge of the field do you need to know to make important scientific contributions in your field?” Weinberg said. “The fact that physicists in the early 20th century were citing mostly recent work suggests that older scientists didn’t have any advantage – their more complete knowledge of older work wasn’t necessary to make important contributions to the field. That could be one reason why younger scientists made such a mark.” But now, physicists are more likely to cite older studies in their papers, he said. That means older scientists may have an advantage because of their depth of knowledge.

Rather, a researcher’s output tends to rise steeply in the 20’s and 30’s, peak in the late 30’s or early 40’s, and then trail off slowly through later years (Lehman, 1953; Simonton, 1991).

  • Lehman HC (1953) Age and Achievement (Princeton University Press, Princeton, NJ)

  • Simonton DK (1991) “Career landmarks in science: Individual differences and interdisciplinary contrasts”. Dev Psychol 27:119-130

Jones 2006 covered a similar upward shift in age for noted inventors.

[Why peak in the early 40s? The age-related decline in intelligence is steady from the 20s. There must be some balance between acquiring and exploiting information with one’s declining intelligence which produces a peak in the 40s before the decline kills productivity; the graph in the aging section of the DNB curiously shows the late 40s is where you hit 0 standard deviations, intelligence/memory-wise, vis-à-vis the general population]

This reasoning is explored in Jones 2005, which studies “ordinary” inventors, looking at all U.S. patents in the 1975–2000 period; [the age at invention] is rising at a rate of 6 years/century.

Jones 2005:

The estimates suggest that, on average, the great minds of the 20th Century typically became research active at age 23 at the start of the 20th Century, but only at age 31 at the end - an upward trend of 8 years. Meanwhile, there has been no compensating shift in the productivity of innovators beyond middle age.

The technological almanacs compile key advances in technology, by year, in several different categories such as electronics, energy, food & agriculture, materials, and tools & devices. The year (and therefore age) of great achievement is the year in which the key research was performed. For the technological almanacs, this is simply the year in which the achievement is listed.

The largest mass of great innovations in knowledge came in the 30’s (42%), but a substantial amount also came in the 40’s (30%), and some 14% came beyond the age of 50. Second, there are no observations of great achievers before the age of 19. Dirac and Einstein prove quite unusual, as only 7% of the sample produced a great achievement at or before the age of 26. Third, the age distribution for the Nobel Prize winners and the great inventors, which come from independent sources, are extremely similar over the entire distributions. Only 7% of individuals in the data appear in both the Nobel Prize and great inventors data sets.

While laboratory experiments do suggest that creative thinking becomes more difficult with age (eg. Reese et al, 2001), the decline in innovative output at later ages may largely be due to declining effort, which a range of sociological, psychological, institutional, and economic theories have been variously proposed to explain (see Simonton 1996 for a review).

  • Reese, H.W., Lee, L.J., Cohen, S.H., and Puckett, J.M. “Effects of Intellectual Variables, Age, and Gender on Divergent Thinking in Adulthood”, International Journal of Behavioral Development, November 2001, 25 (6), 491-500

  • Simonton “Creativity,” in The Encyclopedia of Gerontology, San Diego, CA: Academic Press, 1996

In fact, aggregate data patterns, much debated in the growth literature, have noted long-standing declines in the per-capita output of R&D workers, both in terms of patent counts and productivity growth (Machlup 1962; Evenson, 1991; Jones 1995a; Kortum, 1997). Simple calculations from aggregate data suggest that the typical R&D worker contributes approximately 30% as much to aggregate productivity gains today as she did at the opening of the 20th Century.29

29: Combining Machlup’s data on growth in knowledge producing occupations for 1900–1959 (Machlup 1962, Table X-4) with similar NSF data for 1959–1999 (National Science Foundation, 2005), we see that the total number of knowledge-producing workers in the United States has increased by a factor of approximately 19. Meanwhile, the U.S. per-capita income growth rate, which proxies for productivity growth over the long-run, suggests a 6-fold increase in productivity levels (based on a steady growth rate of 1.8%; see Jones 1995b). The average rate at which individual R&D workers contribute to productivity growth is Ȧ/L_R, or equivalently gA/L_R, where A is aggregate productivity, g is the productivity growth rate, and L_R is the aggregate number of R&D workers. The average contribution of the individual R&D worker in the year 2000 is then a fraction (A_2000/A_1900)/(L_2000/L_1900) = 6/19 (≈32%) of what it was in 1900.
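A back-of-the-envelope version of the footnote’s calculation, using only the 19-fold worker growth and 6-fold productivity gain quoted above:

```python
# Reproducing footnote 29's per-worker contribution estimate from the two quoted inputs.
worker_growth = 19        # growth factor in knowledge-producing workers, ~1900-1999 (Machlup + NSF data)
productivity_growth = 6   # implied growth factor in productivity levels at a steady 1.8%/year

# per-worker contribution in 2000 relative to 1900: (A_2000/A_1900) / (L_2000/L_1900)
relative_contribution = productivity_growth / worker_growth
print(f"{relative_contribution:.0%}")  # ~32%
```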

Figure 5 compares the estimated life-cycle curves for the year 1900 and the year 2000, using specification (3). We see that the peak ability to produce great achievements in knowledge came around age 30 in 1900 but shifted to nearly age 40 by the end of the century. An interesting aspect of this graph is the suggestion that, other things equal, lifetime innovation potential has declined.

The first analysis looks directly at evidence from Ph.D. age and shows that Ph.D. age increases substantially over the 20th Century. The second analysis harnesses world wars, as exogenous interruptions to the young career, to test the basic idea that training is an important preliminary input to innovation. I show that, while the world wars do not explain the 20th century’s age trend, they do indicate the unavoidable nature of training: lost years of training appear to be “made up” after the war. The final analysis explores cross-field, cross-time variation. I show that variations in training duration predict variations in age at great invention.

Indeed, several studies have documented upward trends in educational attainment among the general population of scientists. For example, the age at which individuals complete their doctorates rose generally across all major fields in a study of the 1967–1986 period, with the increase explained by longer periods in the doctoral program (National Research Council, 1990). The duration of doctorates as well as the frequency and duration of post-doctorates has been rising across the life-sciences since the 1960s (Tilghman et al, 1998). A study of electrical engineering over the course of the 20th century details a long-standing upward trend in educational attainment, from an initial propensity for bachelor degrees as the educational capstone to a world where Ph.D.’s are common (Terman, 1998).

  • Terman, F.E. “A Brief History of Electrical Engineering Education”, Proceedings of the IEEE, August 1998, 86 (8), 1792-1800

Most strikingly, both achievement and Ph.D. age in Physics experienced a unique decline in the early 20th century. This unusual feature, beyond reinforcing the relationship between training and achievement age, may also serve to inform more basic theories for the underlying dynamics and differences across fields.

[that there was a fall in age for this radical and revolutionary period in physics validates the general approach]

First, mean life expectancy at age 10 was already greater than 60 in 1900, while it is clear from Sections 2 and 3 that innovation potential is modest beyond 60, so that adding years of life beyond this age would have at most mild effects on the optimization.25 Related, even modest discounting would substantially limit the effect of gains felt 35+ years beyond the end of training on the marginal training decision. Next, common life expectancy changes cannot explain the unique cross-field and cross-time variation explored in §4, such as the unique behavior of physics. Moreover, Figure 7 suggests, if anything, accelerating age trends after the second world war, which is hard to explain with increased longevity, where post-war gains have slowed.

One proxy measure is research collaboration in patenting - measured as team size - which is increasing at over 10% per decade.28 A more direct measure of specialization considers the probability that an individual switches technological areas between consecutive patents. Jones 2005 shows that the probability of switching technological areas is substantially declining with time. These analyses indicate that training time, E, is rising, while measures of breadth, b, are simultaneously declining. It is then difficult to escape the conclusion that the distance to the knowledge frontier is rising.28 Large and general upward trends in research collaboration are also found in journal publications (eg. Adams et al 2004).

  • Adams, James D., Black, Grant C., Clemmons, J.R., and Stephan, Paula E. “Scientific Teams and Institutional Collaborations: Evidence from U.S. Universities, 1981–1999”, NBER Working Paper #10640, July 2004

Analogously, problems that require more experiential training have older peak ages. For instance, Jones (2006) finds that the peak age for natural scientists has drifted higher over the twentieth century. Relative to 100 years ago, more experience now needs to be accumulated to reach the cutting edge of scientific fields.

lifespan increases cannot make up for this: https://www.lesswrong.com/posts/zRKW7LotZxJi6Cyeg/living-forever-is-hard-part-2-adult-longevity

Jones, Benjamin F. “The Burden of Knowledge and the Death of the Renaissance Man: Is Innovation Getting Harder?” NBER Working Paper #11360, 2005

Upward trends in academic collaboration and lengthening doctorates, which have been noted in other research, can also be explained by the model, as can much-debated trends relating productivity growth and patent output to aggregate inventive effort. The knowledge burden mechanism suggests that the nature of innovation is changing, with negative implications for long-run economic growth.

Given this increasing educational attainment, innovators will only become more specialized if the burden of knowledge mechanism is sufficiently strong. More subtly, income arbitrage produces the surprising result that educational attainment will not vary across technological fields, regardless of variation in the burden of knowledge or innovative opportunities.

Any relation to Baumol’s cost disease?

“As Science Evolves, How Can Science Policy?”, Jones 2010; summary of all the Jones papers

First, R&D employment in leading economies has been rising dramatically, yet TFP growth has been flat (Jones, 1995b). Second, the average number of patents produced per R&D worker or R&D dollar has been falling over time across countries (Evenson 1984) and U.S. manufacturing industries (Kortum 1993). These aggregate data trends can be seen in the model as an effect of increasingly narrow expertise, where innovators are becoming less productive as individuals and are required to work in ever larger teams.

  • Jones “Time Series Tests of Endogenous Growth Models,” Quarterly Journal of Economics, 1995b, 110, 495-525

Essentially, the greater the growth in the burden of knowledge, the greater must be the growth in the value of knowledge to compensate. Articulated views of why innovation may be getting harder in the growth literature (Kortum 1997, Segerstrom 1998) have focused on a “fishing out” idea; that is, on the parameter χ. The innovation literature also tends to focus on “fishing out” themes (eg. Evenson 1991, Cockburn & Henderson, 1996). This paper offers the burden of knowledge as an alternative mechanism, one that makes innovation harder, acts similarly on the growth rate, and can explain aggregate data trends (see §4). Most importantly, the model makes specific predictions about the behavior of individual innovators, allowing one to get underneath the aggregate facts and test for a possible rising burden of knowledge using micro-data.

  • Kortum, Samuel S. “Equilibrium R&D and the Decline in the Patent-R&D Ratio: U.S. Evidence,” American Economic Review Papers and Proceedings, May 1993, 83, 450-457

  • Evenson 1991. “Patent Data by Industry: Evidence for Invention Potential Exhaustion?” Technology and Productivity: The Challenge for Economic Policy, 1991, Paris: OECD, 233-248

This result is consistent with Henderson & Cockburn’s (1996) finding that researchers in the pharmaceutical industry are having a greater difficulty in producing innovations over time.

  • Henderson, Rebecca and Cockburn, Iain. “Scale, Scope, and Spillovers: The Determinants of Research Productivity in Drug Discovery,” Rand Journal of Economics, Spring 1996, 27, 32-59.

The age at which individuals complete their doctorates rose generally across all major fields from 1967–1986, with the increase explained by longer periods in the doctoral program (National Research Council, 1990). The duration of doctorates as well as the frequency of post-doctorates has been rising across the life-sciences since the 1960s (Tilghman et al, 1998). An upward age trend has also been noted among the great inventors of the 20th Century at the age of their noted achievement (Jones, 2005), as shown in Table 1. Meanwhile, like the general trends in innovator teamwork documented here, upward trends in academic coauthorship have been documented in many academic literatures, including physics and biology (Zuckerman & Merton, 1973), chemistry (Cronin et al, 2004), mathematics (Grossman, 2002), psychology (Cronin et al, 2003), and economics (McDowell & Melvin, 1983; Hudson, 1996; Laband & Tollison, 2000). These coauthorship studies show consistent and, collectively, general upward trends, with some of the data sets going back as far as 1900.

  • National Research Council, On Time to the Doctorate: A Study of the Lengthening Time to Completion for Doctorates in Science and Engineering, Washington, DC: National Academy Press, 1990

  • Tilghman, Shirley (chair) et al. Trends in the Early Careers of Life Sciences, Washington, DC: National Academy Press, 1998

  • Zuckerman, Harriet and Merton, Robert. “Age, Aging, and Age Structure in Science,” in Merton, Robert, The Sociology of Science, Chicago, IL: University of Chicago Press, 1973, 497-559

  • Cronin et al, 2004. “Visible, Less Visible, and Invisible Work: Patterns of Collaboration in 20th Century Chemistry,” Journal of the American Society for Information Science and Technology, 2004, 55(2), 160-168

  • Grossman, Jerry. “The Evolution of the Mathematical Research Collaboration Graph,” Congressus Numerantium, 2002, 158, 202-212

  • Cronin, Blaise, Shaw, Debora, and La Barre, Kathryn. “A Cast of Thousands: Coauthorship and Subauthorship Collaboration in the 20th Century as Manifested in the Scholarly Journal Literature of Psychology and Philosophy,” Journal of the American Society for Information Science and Technology, 2003, 54(9), 855-871

  • McDowell, John, and Melvin, Michael. “The Determinants of Coauthorship: An Analysis of the Economics Literature,” Review of Economics and Statistics, February 1983, 65, 155-160

  • Hudson, John. “Trends in Multi-Authored Papers in Economics,” Journal of Economic Perspectives, Summer 1996, 10, 153-158

  • Laband, David and Tollison, Robert. “Intellectual Collaboration,” Journal of Political Economy, June 2000, 108, 632-662

Of further interest is the drop in total patent production per total researchers, which has been documented across a range of countries and industries and may go back as far as 1900 and even before (Machlup 1962). Certainly, not all researchers are engaging in patentable activities, and it is possible that much of this trend is explained by relatively rapid growth of research in basic science.22 However, the results here indicate that among those specific individuals who produce patentable innovations, the ratio of patents to individuals is in fact declining. In particular, the recent drop in patents per U.S. R&D worker, a drop of about 50% since 1975 (see Segerstrom 1998), is roughly consistent in magnitude with the rise in team size over that period.
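The ‘roughly consistent in magnitude’ claim is simple arithmetic: if teams roughly doubled in size while patents per team held steady, patents per individual inventor halve. A sketch under that round-number assumption (the 2x figure is illustrative, not from the paper):

```python
# Why rising team size mechanically depresses patents per individual inventor.
# The 2x team-size growth is a round-number assumption for illustration only.
patents_per_team = 1.0
team_size_1975, team_size_recent = 1.0, 2.0   # assumed relative team sizes

per_inventor_1975 = patents_per_team / team_size_1975
per_inventor_recent = patents_per_team / team_size_recent
print(f"decline in patents per inventor: {1 - per_inventor_recent / per_inventor_1975:.0%}")  # 50%, matching the quoted drop
```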

TODO Performance Curve Database - many sigmoids or linear graphs?

Benefit

One warning sign is citation by contemporaries. If the percentage of papers which never get cited increases, this suggests that either the research itself is no good (even a null result is worth citing as information about what we know is not the case), or fellow researchers have a reason not to cite them - reasons ranging from the malign (researchers so overloaded that they cannot keep up with the literature, which implies diminishing marginal returns, or professional jealousy, which implies the process of science is being corrupted) to the merely possibly harmful, like journal length limits. Uncitedness is most famous in the humanities, which are not necessarily of major concern, but I have collected estimates that there are multiple hard fields like chemistry where uncitedness may range up to 70+%, and there’s a troubling indication that uncitedness may be increasing in even the top scientific journals.

demand for math inelastic and collaboration not helpful?

“The Collapse of the Soviet Union and the Productivity of American Mathematicians”, by George J. Borjas and Kirk B. Doran, NBER Working Paper No. 17800, February 2012

We use unique international data on the publications, citations, and affiliations of mathematicians to examine the impact of a large post-1992 influx of Soviet mathematicians on the productivity of their American counterparts. We find a negative productivity effect on those mathematicians whose research overlapped with that of the Soviets. We also document an increased mobility rate (to lower-quality institutions and out of active publishing) and a reduced likelihood of producing “home run” papers. Although the total product of the pre-existing American mathematicians shrank, the Soviet contribution to American mathematics filled in the gap. However, there is no evidence that the Soviets greatly increased the size of the “mathematics pie.”

Ben Jones, a professor at the Kellogg School of Management, at Northwestern University, has quantified this trend. By analyzing 19.9 million peer-reviewed academic papers and 2.1 million patents from the past fifty years, he has shown that levels of teamwork have increased in more than ninety-five per cent of scientific subfields; the size of the average team has increased by about twenty per cent each decade. The most frequently cited studies in a field used to be the product of a lone genius, like Einstein or Darwin. Today, regardless of whether researchers are studying particle physics or human genetics, science papers by multiple authors receive more than twice as many citations as those by individuals. This trend was even more apparent when it came to so-called “home-run papers”-publications with at least a hundred citations. These were more than six times as likely to come from a team of scientists.

…A few years ago, Isaac Kohane, a researcher at Harvard Medical School, published a study that looked at scientific research conducted by groups in an attempt to determine the effect that physical proximity had on the quality of the research. He analyzed more than thirty-five thousand peer-reviewed papers, mapping the precise location of co-authors. Then he assessed the quality of the research by counting the number of subsequent citations. The task, Kohane says, took a “small army of undergraduates” eighteen months to complete. Once the data was amassed, the correlation became clear: when coauthors were closer together, their papers tended to be of significantly higher quality. The best research was consistently produced when scientists were working within ten metres of each other; the least cited papers tended to emerge from collaborators who were a kilometre or more apart. “If you want people to work together effectively, these findings reinforce the need to create architectures that support frequent, physical, spontaneous interactions,” Kohane says. “Even in the era of big science, when researchers spend so much time on the Internet, it’s still so important to create intimate spaces.”

https://www.newyorker.com/magazine/2012/01/30/groupthink

Commercialization

Pharmaceuticals

Drugs are perhaps the most spectacular example of a sigmoid suddenly hitting and destroying a lot of expectations. If you read the transhumanist literature from the ’80s or ’90s, even the more sober and well-informed projections, it’s striking how much faith is put into ever-new miracle drugs coming out. And why not? The War On Cancer still didn’t seem like it was going too badly - perhaps it would take more than $1 billion, but real progress was being made - and a crop of nootropics came out in the ’70s and ’80s like modafinil, followed up with famous blockbusters like Viagra, and then the Human Genome Project would triumphantly finish in a decade or so, whereupon things would really get cooking.

What went unnoticed in all this was the diminishing marginal returns. First, it turned out that pharmaceutical companies were doing a pretty good job at searching all the ‘small’ (lighter weight) chemicals:

“What Do Medicinal Chemists Actually Make? A 50-Year Retrospective”

The idea is to survey the field from a longer perspective than some of the other papers in this vein, and from a wider perspective than the papers that have looked at marketed drugs or structures reported as being in the clinic. I’m reproducing the plot for the molecular weights of the compounds, since it’s an important measure and representative of one of the trends that shows up. The prominent line is the plot of mean values, and a blue square shows that the mean for that period was statistically-significantly different than the 5-year period before it (it’s red if it wasn’t). The lower dashed line is the median. The dotted line, however, is the mean for actual launched drugs in each period with a grey band for the 95% confidence interval around it.

[Figure: Increase in average size of drugs, 1960–2004]

As a whole, the mean molecular weight of a J. Med. Chem. compound has gone up by 25% over the 50-year period, with the steepest increase coming in 1990–1994. “Why, that was the golden age of combichem”, some of you might be saying, and so it was. Since that period, though, molecular weights have just increased a small amount, and may now be leveling off. Several other measures show similar trends. “Fifty Years of Med-Chem Molecules: What Are They Telling Us?”

The more atoms in a particular drug, the more possible permutations and arrangements; a logarithmic slowdown in average weight would be expected of a search through an exponentially increasing space of possibilities. Smaller drugs are much more desirable than larger ones: they often are easier & cheaper to synthesize, more often survive passage through the gut or can pass the blood-brain barrier, etc. So there are many more large drugs than small drugs, but the small ones are much more desirable; hence, one would expect a balance between them, with no clear shift - if we were far from exploiting all the low-hanging fruit. Instead, we see a very steady trend upwards, as if there were ever fewer worthwhile small drugs to be found.

(One could make an analogy to oil field exploration: the big oil fields are easiest to find and also the best, while small ones are both hard to find and the worst; if the big ones exist, the oil companies will exploit them as much as possible and neglect the small ones; hence, a chart showing ever decreasing average size of producing oil fields smells like a strong warning sign that there are few big oil fields left.)

Simultaneously with this indication that good drugs are getting harder to find, we find that returns are diminishing to each dollar spent on drug R&D (it takes more dollars to produce one drug):

Although modern pharmaceuticals are supposed to represent the practical payoff of basic research, the R&D to discover a promising new compound now costs about 100 times more (in inflation-adjusted dollars) than it did in 1950. (It also takes nearly three times as long.) This trend shows no sign of letting up: Industry forecasts suggest that once failures are taken into account, the average cost per approved molecule will top $3.8 billion by 2015. What’s worse, even these “successful” compounds don’t seem to be worth the investment. According to one internal estimate, approximately 85 percent of new prescription drugs approved by European regulators provide little to no new benefit. We are witnessing Moore’s law in reverse.3

The average drug developed by a major pharmaceutical company costs at least $4 billion, and it can be as much as $11 billion…The drug industry has been tossing around the $1 billion number for years. It is based largely on a study (supported by drug companies) by Joseph DiMasi of Tufts University…But as Bernard Munos of the InnoThink Center for Research In Biomedical Innovation has noted, just adjusting that estimate for current failure rates results in an estimate of $4 billion in research dollars spent for every drug that is approved…Forbes (that would be Scott DeCarlo and me) took Munos’ count of drug approvals for the major pharmas and combined it with their research and development spending as reported in annual earnings filings going back fifteen years…The range of money spent is stunning. AstraZeneca has spent $12 billion in research money for every new drug approved, as much as the top-selling medicine ever generated in annual sales; Amgen spent just $3.7 billion.4

Kindler’s attempts to figure out what to do about research were even more anguished. He was right that the old Pfizer model wasn’t working. Bigger wasn’t better when it came to producing new drugs. Studies by Bernard Munos, a retired strategist at Eli Lilly (LLY), show that both massive increases in research spending and corporate mergers have failed to increase R&D productivity. Between 2000 and 2008, according to Munos, Pfizer spent $60 billion on research and generated nine drugs that won FDA approval – an average cost of $6.7 billion per product. At that rate, Munos concluded, the company’s internal pipeline simply couldn’t sustain its profits.5

But the very opposite of Moore’s Law is happening at the downstream end of the R&D pipeline. The number of new molecules approved per billion dollars of inflation-adjusted R&D has declined inexorably at 9% a year and is now 1/100th of what it was in 1950. The nine biggest drug companies spend more than $60 billion a year on R&D but are finding new therapies at such a slow rate that, as a group, they’ve little chance of recouping that money. Meanwhile, blockbuster drugs are losing patent protection at an accelerating rate. The next few years will take the industry over a “patent cliff” of $170 billion in global annual revenue. On top of this, natural selection is producing resistant disease strains that undermine the efficacy not only of existing antibiotics and antivirals but (even faster) of anti-cancer drugs. Many people believe that something is terribly wrong with the way the industry works. The problem, some think, is that science-to mix clichés-is scraping the bottom of the biological barrel after plucking the low-hanging fruit.6

“Eroom’s law”: “Diagnosing the decline in pharmaceutical R&D efficiency”; Scannell et al 2012 (Lowe):

The past 60 years have seen huge advances in many of the scientific, technological and managerial factors that should tend to raise the efficiency of commercial drug research and development (R&D). Yet the number of new drugs approved per billion US dollars spent on R&D has halved roughly every 9 years since 1950, falling around 80-fold in inflation-adjusted terms. There have been many proposed solutions to the problem of declining R&D efficiency. However, their apparent lack of impact so far and the contrast between improving inputs and declining output in terms of the number of new drugs make it sensible to ask whether the underlying problems have been correctly diagnosed. Here, we discuss four factors that we consider to be primary causes, which we call the ‘better than the Beatles’ problem; the ‘cautious regulator’ problem; the ‘throw money at it’ tendency; and the ‘basic research-brute force’ bias. Our aim is to provoke a more systematic analysis of the causes of the decline in R&D efficiency.
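Taking the abstract’s halving time at face value gives the right order of magnitude for the quoted decline (a rough sanity check, not the paper’s own estimate):

```python
# Sanity check of the Eroom's-law figures quoted above (not Scannell et al's actual fit).
halving_time = 9.0   # years for new drugs per inflation-adjusted R&D dollar to halve
period = 60.0        # "the past 60 years", roughly 1950-2010

fold_decline = 2 ** (period / halving_time)
print(f"~{fold_decline:.0f}-fold decline")  # ~100-fold, the same order as the ~80-fold quoted in the abstract
```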

similar graph: http://dl.dropbox.com/u/85192141/2008-wobbe.pdf “Figure 3. New drugs discovered per billion dollars R&D spending and annual R&D spending”

“Drug development: Raise standards for preclinical cancer research”, C. Glenn Begley & Lee M. Ellis 2012 - academic studies irreproducible

TODO

Since 2007, real median household income has declined 6.4% and is 7.1% below the median household income peak prior to the 2001 recession. (Justin Wolfers)

2010 Census data:

The U.S. Census Bureau announced today that in 2010, median household income declined, the poverty rate increased and the percentage without health insurance coverage was not statistically-significantly different from the previous year. Real median household income in the United States in 2010 was $49,445, a 2.3 percent decline from the 2009 median. The nation’s official poverty rate in 2010 was 15.1 percent, up from 14.3 percent in 2009, the third consecutive annual increase in the poverty rate. There were 46.2 million people in poverty in 2010, up from 43.6 million in 2009, the fourth consecutive annual increase and the largest number in the 52 years for which poverty estimates have been published. The number of people without health insurance coverage rose from 49.0 million in 2009 to 49.9 million in 2010, while the percentage without coverage (16.3 percent) was not statistically-significantly different from the rate in 2009.

Until this morning, the official data showed that the U.S. productivity growth accelerated during the financial crisis. Nonfarm business productivity growth supposedly went from a 1.2% annual rate in 2005–2007, to a 2.3% annual rate in 2007–2009. Many commentators suggested that this productivity gain, in the face of great disruptions, showed the flexibility of the U.S. economy.

Uh, oh. The latest revision of the national income accounts, released this morning, makes the whole productivity acceleration vanish. Nonfarm business productivity growth in the 2007-09 period has now been cut almost in half, down to only 1.4% per year. https://innovationandgrowth.wordpress.com/2011/07/29/productivity-surge-of-2007-09-melts-away-in-new-data/

I would pay up to $500 per year [for a search engine like Google]. It’s that valuable to me. What about you?

Last year three researchers at University of Michigan performed a small experiment to see if they could ascertain how much ordinary people might pay for search. Their method was to ask students inside a well-stocked university library to answer questions asked on Google, but to find the answers only using the materials in the library. They measured how long it took the students to answer a question in the stacks. On average it took 22 minutes. That’s 15 minutes longer than the 7 minutes it took to answer the same question, on average, using Google. Figuring a national average wage of $22/hour, this works out to a savings of $1.37 per search. https://kk.org/thetechnium/would-you-pay-f/

‘A survey indicated that 46 percent of Americans would be unwilling to give up television for the rest of their lives in return for a million dollars.’ Cowen on: “Would You Give Up TV for a Million Bucks?” 1992. TV Guide, October 10, pp. 10-15

How much would someone have to pay you to give up the Internet for the rest of your life? Would a million dollars be enough? Twenty million? How about a billion dollars? “When I ask my students this question, they say you couldn’t pay me enough,” says Professor Michael Cox, director of the O’Neil Center for Global Markets and Freedom at Southern Methodist University’s Cox School of Business. http://reason.tv/picks/show/would-you-give-up-the-internet

An 86-page 2010 FCC study concludes that “a representative household would be willing to pay about $59 per month for a less reliable Internet service with fast speed (“Basic”), about $85 for a reliable Internet service with fast speed and the priority feature (“Premium”), and about $98 for a reliable Internet service with fast speed plus all other activities (“Premium Plus”). An improvement to very fast speed adds about $3 per month to these estimates.” https://siepr.stanford.edu/system/files/shared/Final_Rosston_Savage_Waldman_02_04_10__1_.pdf

A study from Japan found that: “The estimated WTP for availability of e-mail and web browsing delivered over personal computers are 2,709 Yen ($35) and 2,914 Yen ($38), on a monthly basis, respectively, while average broadband access service costs approximately 4,000 Yen ($52) in Japan” https://www.mediacom.keio.ac.jp/publication/pdf2009/03_Masanori%20KONDO.pdf

The Austan Goolsbee paper, based on 2005 data, does a time study to find that the consumer surplus of the internet is about two percent of income. https://faculty.chicagobooth.edu/austan.goolsbee/research/timeuse.pdf

The productivity of U.S. workers dropped from April through June for the second consecutive quarter, leading to an increase in labor costs that may restrain gains in profits. The measure of employee output per hour fell at a 0.3 percent annual rate in the second quarter after a revised 0.6 percent drop in the prior three months, figures from the Labor Department showed today in Washington. The median estimate of 60 economists surveyed by Bloomberg News projected a 0.9 percent decrease. Expenses per employee climbed at a 2.2 percent rate.

…From the second quarter of 2010, productivity climbed 0.8 percent compared with a 1.2 percent year-over-year increase in the first quarter. Labor costs rose 1.3 percent from the year-earlier period following a 1.1 percent increase in the 12 months ended in the first quarter. Today’s productivity report incorporated revisions to prior years. Worker efficiency was revised to 4.1 percent in 2010 from a previously reported 3.9 percent. For 2009, it was revised down to 2.3 percent from 3.7 percent. Labor costs fell 2 percent in 2010, the biggest decline since records began in 1948. Gross domestic product expanded at a 1.3 percent annual pace from April through June, after a 0.4 percent rate in the previous three months, the Commerce Department said on July 29. Household spending rose at 0.1 percent pace, the weakest since the same period in 2009.

https://www.bloomberg.com/news/articles/2011-08-09/productivity-in-u-s-falls-for-second-straight-quarter-as-labor-costs-rise

Canadian manufacturing & goods productivity stagnating 2000–2010; resource extraction falling 60% since 1960! Notice this is despite an increasing total absolute extraction rate:

You can see that Mining and Extraction TFP takes a long plunge, even though Canada today prospers through selling natural resources. So what’s up? One of Gordon’s arguments against TFP is his claim that this graph implies earlier mining technologies were better than current mining technologies (unlikely), but that is a misunderstanding of what TFP measures. Think of TFP as trying to pick “the stuff we get for free through innovation.” Falling TFP in mining reflects Canada’s move from “suck it up with a straw” oil to complex, high cost extraction tar sands projects and the like. They have moved down this curve a long, long way.

Yet Canada still prospers: someone is willing to pay for all the time and trouble they put into extraction, because the other natural resource options are costlier at the relevant margin. Another way to make the point is that this graph, and the embedded story of productivity, is very bad news for someone, just not Canada, at least not so far. https://marginalrevolution.com/marginalrevolution/2011/08/is-there-a-productivity-crisis-in-canada.html

The statistical trend for growth in total economy LP ranged from 2.75 percent in early 1962 down to 1.25 percent in late 1979 and recovered to 2.45 percent in 2002. Our results on productivity trends identify a problem in the interpretation of the 2008-09 recession and conclude that at present statistical trends cannot be extended past 2007.

For the longer stretch of history back to 1891, the paper provides numerous corrections to the growth of labor quality and to capital quantity and quality, leading to significant rearrangements of the growth pattern of MFP, generally lowering the unadjusted MFP growth rates during 1928-50 and raising them after 1950. Nevertheless, by far the most rapid MFP growth in U.S. history occurred in 1928-50, a phenomenon that I have previously dubbed the “one big wave.”

Its conclusion is that over the next 20 years (2007–2027) growth in real potential GDP will be 2.4 percent (the same as in 2000-07), growth in total economy labor productivity will be 1.7 percent, and growth in the more familiar concept of NFPB sector labor productivity will be 2.05 percent. The implied forecast 1.50 percent growth rate of per-capita real GDP falls far short of the historical achievement of 2.17 percent between 1929 and 2007 and represents the slowest growth of the measured American standard of living over any two-decade interval recorded since the inauguration of George Washington.

http://www.nber.org/papers/w15834

Australia mining productivity falling:

“Everyone here also knows that it is now just about impossible to avoid the conclusion that productivity growth performance has been quite poor since at least the mid-2000s,” he said. Based on the output per hours worked, the best productivity performers over the past five years were information, media and telecommunications (up 6.1 per cent a year on average), followed by agriculture, forestry and fishing (up 3.8 per cent) and financial and insurance services (up 3.7 per cent). In contrast, mining productivity went backwards by 4.9 per cent per year on average, and electricity, gas and waste services by 5.1 per cent. https://www.theaustralian.com.au/business/economics/mining-drags-down-productivity/story-e6frg926-1226107755451

Chad Jones (Fig. 1, p. 763, and in his short, readable text Introduction to Economic Growth) has reminded economists that the number of scientists and researchers has more than doubled in the G-5 countries since 1950, while the growth rate of living standards hasn’t budged: Twice the researchers, zero effect on growth.

The reasons for retraction of 742 English language research papers retracted from the PubMed database between 2000 and 2010 were evaluated. Reasons for retraction were initially dichotomised as fraud or error and then analysed to determine specific reasons for retraction.

Results: Error was more common than fraud (73.5% of papers were retracted for error (or an undisclosed reason) vs 26.6% retracted for fraud). Eight reasons for retraction were identified; the most common reason was scientific mistake in 234 papers (31.5%), but 134 papers (18.1%) were retracted for ambiguous reasons. Fabrication (including data plagiarism) was more common than text plagiarism. Total papers retracted per year have increased sharply over the decade (r=0.96; p<0.001), as have retractions specifically for fraud (r=0.89; p<0.001). Journals now reach farther back in time to retract, both for fraud (r=0.87; p<0.001) and for scientific mistakes (r=0.95; p<0.001). Journals often fail to alert the naïve reader; 31.8% of retracted papers were not noted as retracted in any way.

https://jme.bmj.com/content/37/4/249.abstract

Below is a figure constructed using the quarterly TFP [total factor productivity] series of John Fernald at the San Francisco Fed. (extreme divergence from the exponential growth, around 1970–1973, to something that looks linear - with no acceleration in the ’90s or 2000s)

Using country-level analysis as a base, we estimated that the total gross value of Internet search across the global economy was $780 billion in 2009, equivalent to the GDP of the Netherlands or Turkey. By this estimate, each search is worth about $0.50. Of that value, $540 billion-69 percent of the total and 25 times the annual value added (profits) of search companies-flowed directly to global GDP, chiefly in the form of e-commerce, advertising revenues, and higher corporate productivity. Search accounted for 1.2 percent of US and for 0.5 percent of India’s GDP. The remaining $240 billion (31 percent) does not show up in GDP statistics. It is captured by individuals rather than companies, in the form of consumer surplus, and arises from unmeasured benefits, such as lower prices, convenience, and the time saved by swift access to information. We estimate those benefits at $20 a month for consumers in France, Germany, and the United States and at $2 to $5 a month for their counterparts in Brazil and India. https://web.archive.org/web/20110814182809/https://www.mckinseyquarterly.com/Marketing/Digital_Marketing/Measuring_the_value_of_search_2848

  • A typical Internet search for academic information takes seven minutes. Relying on physical references takes 22 minutes.44

  • A consumer generally finds time to perform ten searches online but only two searches offline for each purchase.45

  • It takes the same amount of time to do three searches in an online business directory as it does to do one in a physical directory.46

Analysis for this report suggests that knowledge workers in business each save 30 to 45 hours per year as a result of search. When it comes to price transparency, academic research shows that the more visits made to price comparison Web sites, the lower prices fall and the greater the difference between the average and minimum price for a particular good.77 Thus, price transparency has a disciplining effect on the margins retailers can expect, which benefits consumers. Preliminary research shows prices online are, on average, 10 percent lower than those offline as a result of the price transparency afforded by search tools.78 Better matching is particularly valuable to consumers when they want long-tail items. Research shows that consumers value a hard-to-find, long-tail product anywhere between 1.3 to 1.8 times the actual price of the product.79 Consumers therefore capture large amounts of surplus when they buy products in the long tail. With regard to time saved, various studies taken together suggest that consumers who search online for their purchase can save 10 to 20 hours a year.80 Using data from academic studies, we valued that time at between $0.5 and $7 per hour, based on average, after-tax income per household in each country and the assumption that a consumer’s leisure time was worth 65 percent of this figure.81,82
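Multiplying the quoted ranges together gives a sense of the implied per-consumer surplus from time saved (a rough sketch combining the report’s own figures; the report does not state this product):

```python
# Rough combination of the quoted time-savings and time-valuation ranges.
hours_saved_per_year = (10, 20)   # hours/year saved by searching online for purchases
value_per_hour = (0.5, 7.0)       # dollars/hour, i.e. 65% of average after-tax household income per hour

low = hours_saved_per_year[0] * value_per_hour[0]
high = hours_saved_per_year[1] * value_per_hour[1]
print(f"implied surplus from time saved: ${low:.0f}-${high:.0f} per consumer per year")  # $5-$140
```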

It seems like market forecasts of low real yields 30 years into the future support TGS. How long does it take for long-run money neutrality to win out? If the yield curve showed low yields 100 years out, would that dissuade those looking for a monetary solution? https://marginalrevolution.com/marginalrevolution/2011/08/capital-depreciation-as-stimulus.html

Today, very few teenagers work full time jobs, and the number of teens employed in summer jobs has decreased from ~60% in 1994 to ~40% in 2008.[29]

29. Camarota & Jensenius 2010: “A Drought of Summer Jobs: Immigration and the Long-Term Decline in Employment Among U.S.-Born Teenagers”. In: Backgrounder. Center for Immigration Studies; 2010.

http://chronopause.com/index.php/2011/08/20/interventive-gerontology-1-0-02-first-try-to-make-it-to-the-mean-diet-as-a-life-extending-tool-part-3/index.html

Here, we see that the percentage representation of teens in the U.S. workforce in 2010 is 5.1% less than the level recorded in 2002. That figure confirms that teens are indeed being displaced from the U.S. workforce at the minimum wage level….In practical terms, for the 5.1% percentage decline from 2002 through 2010 in the teen share of American federal minimum wage earners, approximately half were displaced by young adults Age 20-34 (2.7%), while the remainder were displaced by geezers Age 45-59 (2.4%). https://politicalcalculations.blogspot.com/2011/07/how-much-are-geezers-displacing-teens.html based on “the Bureau of Labor Statistics’ annual reports on the Characteristics of Minimum Wage Workers.” see also https://politicalcalculations.blogspot.com/2011/07/disappearing-teen-jobs-and-minimum-wage_14.html

“A simple decomposition of the variance of output growth across countries”, Reicher 2011:

This paper outlines a simple regression-based method to decompose the variance of an aggregate time series into the variance of its components, which is then applied to measure the relative contributions of productivity, hours per worker, and employment to cyclical output growth across a panel of countries. Measured productivity contributes more to the cycle in Europe and Japan than in the United States. Employment contributes the largest proportion of the cycle in Europe and the United States (but not Japan), which is inconsistent with the idea that higher levels of employment protection in Europe dampen cyclical employment fluctuations…In the United States, productivity only contributes about 27% of the cycle and labor input four-fifths. Meanwhile, in France and Germany, productivity contributes 43% and 38% of the cycle, respectively. Japan is more European than Europe in this regard; productivity contributes 59% of the cycle there, while Korea looks more like the United States.

I’m the author of that paper. My own interpretation of my paper is that Japan sees a lot more labor hoarding than Europe or the United States, so unemployment is a particularly bad measure of the cycle there. We’d want to look at some measure of the output gap as a cyclical indicator since labor market indicators from Japan don’t carry much information about the macro situation. BUT, Karl has a major point, which is what the second quote was about. If we look at output, our analysis is complicated by the fact that the trends which we saw through 1990 or so - convergence in productivity and unusually high hours worked per worker - have stopped. Without putting words in Tyler’s mouth, Japan picked the low-hanging fruit and now its productivity has been at 70% of that of the United States for some time now. We can’t just naively extrapolate that trend and expect a large amount of growth. Combine that with low population growth and a sharp downward trend in hours worked, and the Japanese growth slowdown since then is not surprising. https://marginalrevolution.com/marginalrevolution/2011/08/where-does-the-japanese-slowdown-come-from.html

The need for further productivity gains doesn’t really make sense to me as an explanation. Japan has low-hanging productivity fruit out the wazoo. The stereotypical salaryman stays out late “working” every night. Send the same dude home at 5:00pm and he’d get just as much done and increase productivity by 4 hours a day, easily. Or is the suggestion that Japanese culture is too resistant to this kind of change, hence productivity couldn’t grow, hence it got hit by TGS? https://marginalrevolution.com/marginalrevolution/2011/08/where-does-the-japanese-slowdown-come-from.html

The paper measures productivity growth in seventeen countries in the nineteenth and twentieth centuries. GDP per worker and capital per worker in 1985 US dollars were estimated for 1820, 1850, 1880, 1913, and 1939 by using historical national accounts to back cast Penn World Table data for 1965 and 1990. Frontier and econometric production functions are used to measure neutral technical change and local technical change. The latter includes concurrent increases in capital per worker and output per worker beyond the highest values achieved. These increases were pioneered by the rich countries of the day. An increase in the capital-labor ratio was usually followed by a half century in which rich countries raised output per worker at that higher ratio. Then the rich countries moved on to a higher capital-labor ratio, and technical progress ceased at the lower ratio they abandoned. Most of the benefits of technical progress accrued to the rich countries that pioneered it. It is remarkable that countries in 1990 with low capital-labor ratios achieved an output per worker that was no higher than countries with the same capital-labor ratio in 1820. In the course of the last two hundred years, the rich countries created the production function of the world that defines the growth possibilities of poor countries today.

“Technology and the great divergence: Global economic development since 1820”, Allen 2011

China

Education

Wikileaks diplomatic cable https://www.zerohedge.com/news/wikileaks-cable-reveals-chinese-warning-domestic-asset-bubbles-overcapacity-early-2010-bashing-

China will need to restructure its economy so that it has a significantly higher share of knowledge-based services, especially research and development. However China’s “terrible” educational system, which promotes copying and pasting over creative and independent thought, is the largest impediment the country faces on this front, our IFC contact said….

11. (SBU) However, Lai [Consul General, the head of IFC’s Chengdu office, Lai Jinchang] identified China’s “terrible” educational system as presenting a serious impediment toward achieving a shift to a more knowledge-based economy. The current system promotes copying and pasting over creative and independent thought. Lai said that the system rewards students for thinking “within a framework” in order to get the grade. He described the normal process undertaken by students when writing as essentially collecting sentences from various sources without any original thinking. He compared the writing ability of a typical Chinese PhD as paling in comparison to that of his “unskilled” staff during his decade of work with the IFC in Africa.

R&D

Publication bias stronger in China? “Local Literature Bias in Genetic Epidemiology: An Empirical Evaluation of the Chinese Literature”, 2005:

We targeted 13 gene-disease associations, each already assessed by meta-analyses, including at least 15 non-Chinese studies. We searched the Chinese Journal Full-Text Database for additional Chinese studies on the same topics. We identified 161 Chinese studies on 12 of these gene-disease associations; only 20 were PubMed-indexed (seven English full-text). Many studies (14-35 per topic) were available for six topics, covering diseases common in China. With one exception, the first Chinese study appeared with a time lag (2-21 y) after the first non-Chinese study on the topic. Chinese studies showed significantly more prominent genetic effects than non-Chinese studies, and 48% were statistically significant per se, despite their smaller sample size (median sample size 146 versus 268, p < 0.001). The largest genetic effects were often seen in PubMed-indexed Chinese studies (65% statistically significant per se). Non-Chinese studies of Asian-descent populations (27% significant per se) also tended to show somewhat more prominent genetic effects than studies of non-Asian descent (17% significant per se).

“Chinese Innovation Is a Paper Tiger: A closer look at China’s patent filings and R&D spending reveals a country that has a long way to go”, WSJ, by Anil K. Gupta and Haiyan Wang:

China’s R&D expenditure increased to 1.5% of GDP in 2010 from 1.1% in 2002, and should reach 2.5% by 2020. Its share of the world’s total R&D expenditure, 12.3% in 2010, was second only to the U.S., whose share remained steady at 34%-35%. According to the World Intellectual Property Organization, Chinese inventors filed 203,481 patent applications in 2008. That would make China the third most innovative country after Japan (502,054 filings) and the U.S. (400,769)…According to the Organization for Economic Cooperation and Development, in 2008, the most recent year for which data are available, there were only 473 triadic patent filings from China versus 14,399 from the U.S., 14,525 from Europe, and 13,446 from Japan. Starkly put, in 2010 China accounted for 20% of the world’s population, 9% of the world’s GDP, 12% of the world’s R&D expenditure, but only 1% of the patent filings with or patents granted by any of the leading patent offices outside China. Further, half of the China-origin patents were granted to subsidiaries of foreign multinationals….A 2009 survey by the China Association for Science and Technology reported that half of the 30,078 respondents knew at least one colleague who had committed academic fraud. Such a culture inhibits serious inquiry and wastes resources.

Insiders agree; a dean at Tsinghua University (first or second best university in China):

In reality, however, rampant problems in research funding - some attributable to the system and others cultural - are slowing down China’s potential pace of innovation.

Although scientific merit may still be the key to the success of smaller research grants, such as those from China’s National Natural Science Foundation, it is much less relevant for the megaproject grants from various government funding agencies…the guidelines are often so narrowly described that they leave little doubt that the “needs” are anything but national; instead, the intended recipients are obvious. Committees appointed by bureaucrats in the funding agencies determine these annual guidelines. For obvious reasons, the chairs of the committees often listen to and usually cooperate with the bureaucrats.

“Expert opinions” simply reflect a mutual understanding between a very small group of bureaucrats and their favorite scientists. This top-down approach stifles innovation and makes clear to everyone that the connections with bureaucrats and a few powerful scientists are paramount, dictating the entire process of guideline preparation. To obtain major grants in China, it is an open secret that doing good research is not as important as schmoozing with powerful bureaucrats and their favorite experts.

This problematic funding system is frequently ridiculed by the majority of Chinese researchers. And yet it is also, paradoxically, accepted by most of them. Some believe that there is no choice but to accept these conventions. This culture even permeates the minds of those who are new returnees from abroad; they quickly adapt to the local environment and perpetuate the unhealthy culture. A significant proportion of researchers in China spend too much time on building connections and not enough time attending seminars, discussing science, doing research, or training students (instead, using them as laborers in their laboratories). Most are too busy to be found in their own institutions. Some become part of the problem: They use connections to judge grant applicants and undervalue scientific merit. editorial, “China’s Research Culture”, Science

An investigation by the Chinese Association of Scientists has revealed that only about 40 percent of the funds allocated for scientific research is used on the projects they are meant for. The rest is usually spent on things that have nothing to do with research. Some research project leaders use the money to buy furniture, home appliances and, hold your breath, even apartments. In the most appalling scandal, an accountant in the National Science Foundation of China misappropriated more than 200 million yuan ($3.12 million) in eight years until he was arrested in 2004. Besides, the degree of earnestness most scientists show in their research projects nowadays is questionable. Engaging in scientific research projects funded by the State has turned out to be an opportunity for some scientists to make money. There are examples of some scientists getting research funds because of their connections with officials rather than their innovation capacity. Qian Xuesen, known as the father of China’s atomic bomb and satellites, used to say during the last few years before his death in 2009 that the biggest problem is that Chinese universities cannot cultivate top-class scientists. “Honest Research Needed”, China Daily (government paper)

Zinch China, a consulting company that advises American colleges and universities about China, published a report last year that found cheating on college applications to be “pervasive in China, driven by hyper-competitive parents and aggressive agents.”

From the survey’s introduction: “Our research indicates that 90 percent of recommendation letters are fake, 70 percent of essays are not written by the applicant, and 50 percent of high school transcripts are falsified.”

http://rendezvous.blogs.nytimes.com/2012/02/05/sneaking-into-class-from-china/

But there’s growing evidence that the innovation shortfall of the past decade is not only real but may also have contributed to today’s financial crisis. Think back to 1998, the early days of the dot-com bubble. At the time, the news was filled with reports of startling breakthroughs in science and medicine, from new cancer treatments and gene therapies that promised to cure intractable diseases to high-speed satellite Internet, cars powered by fuel cells, micromachines on chips, and even cloning. These technologies seemed to be commercializing at “Internet speed,” creating companies and drawing in enormous investments from profit-seeking venture capitalists - and ordinarily cautious corporate giants. Federal Reserve Chairman Alan Greenspan summed it up in a 2000 speech: “We appear to be in the midst of a period of rapid innovation that is bringing with it substantial and lasting benefits to our economy.” With the hindsight of a decade, one thing is abundantly clear: The commercial impact of most of those breakthroughs fell far short of expectations - not just in the U.S. but around the world. No gene therapy has yet been approved for sale in the U.S. Rural dwellers can get satellite Internet, but it’s far slower, with longer lag times, than the ambitious satellite services that were being developed a decade ago. The economics of alternative energy haven’t changed much. And while the biotech industry has continued to grow and produce important drugs - such as Avastin and Gleevec, which are used to fight cancer - the gains in health as a whole have been disappointing, given the enormous sums invested in research. As Gary P. Pisano, a Harvard Business School expert on the biotech business, observes: “It was a much harder road commercially than anyone believed.”…With far fewer breakthrough products than expected, Americans had little new to sell to the rest of the world. Exports stagnated, stuck at around 11% of gross domestic product until 2006, while imports soared. That forced the U.S. to borrow trillions of dollars from overseas. The same surges of imports and borrowing also distorted economic statistics so that growth 1998–2007, rather than averaging 2.7% per year, may have been closer to 2.3% per year.

…Even the sequencing of the human genome - an acclaimed scientific achievement - has not reduced the cost of developing profitable drugs. One indicator of the problem’s scope: 2008 was the first year that the U.S. biotech industry collectively made a profit, according to a recent report by Ernst & Young - and that performance is not expected to be repeated in 2009.

…If an innovation boom were truly happening, it would likely push up stock prices for companies in such leading-edge sectors as pharmaceuticals and information technology. Instead, the stock index that tracks the pharmaceutical, biotech, and life sciences companies in the Standard & Poor’s (MHP) 500-stock index dropped 32% from the end of 1998 to the end of 2007, after adjusting for inflation. The information technology index fell 29%. To pick out two major companies: The stock price of Merck declined 35% between the end of 1998 and the end of 2007, after adjusting for inflation, while the stock price of Cisco Systems (CSCO) was down 9%. Consider another indicator of commercially important innovation: the trade balance in advanced technology products. The Census Bureau tracks imports and exports of goods in 10 high-tech areas, including life sciences, biotech, advanced materials, and aerospace. In 1998 the U.S. had a $30 billion trade surplus in these advanced technology products; by 2007 that had flipped to a $53 billion deficit. Surprisingly, the U.S. was running a trade deficit in life sciences, an area where it is supposed to be a leader.

…The final clue: the agonizingly slow improvement in death rates by age, despite all the money thrown into health-care research. Yes, advances in health care can affect the quality of life, but one would expect any big innovation in medical care to result in a faster decline in the death rate as well. The official death-rate stats offer a mixed but mostly disappointing picture of how medical innovation has progressed since 1998. On the plus side, Americans 65 and over saw a faster decline in their death rate compared with previous decades. The bad news: Most age groups under 65 saw a slower fall in the death rate. For example, for children ages 1 to 4, the death rate fell at a 2.3% annual pace between 1998 and 2006, compared with a 4% decline in the previous decade. And surprisingly, the death rate for people in the 45-to-54 age group was slightly higher in 2006 than in 1998.

https://www.bloomberg.com/bw/magazine/content/09_24/b4135000953288.htm Mandel; the point about stock market is interesting, because a defender of no-declines could say that any failed predictions of innovations - like the ones surrounding the Human Genome Project - have been cherry-picked by declinists, but the stock market should be immune to such cherry-picking. Yet, it wasn’t.

Howard 2001, “Searching the Real World for Signs of Rising Population Intelligence”:

Howard (1999) looked at chess performance since the inaugural FIDE (international chess federation) rating list in 1970. The list is based on an objective measure of each player’s chess performance, on a scale from about 2000 to 2800. The rating changes with each game played, depending on result and opponent’s strength, and thus reflects current prowess….Since 1970, players were reaching high performance levels at progressively earlier ages. For example, the median age of the top ten dropped from the late 30s in the 1970s to the mid-20s in the 1990s. Evidence discussed in detail suggested that the trend was due to rising intelligence.

  • Howard, R. W. (1999). “Preliminary real-world evidence that average human intelligence really is rising”. Intelligence, 27, 235-250

…However, in the Soviet Union where chess was the national sport, this had been occurring since the 1920s (Charness & Gerchak, 1996). If g was not rising, the age trend should have started much earlier.

  • Charness, N., & Gerchak, Y. (1996). “Participation rates and maximal performance”. Psychological Science, 7, 46-51

…An informal study by Nunn (1999) supports the view. Using the computer program Fritz’s blundercheck mode, which scans games for serious errors, he compared the standard of play in two major tournaments across the century: Carlsbad 1911 and the Biel Interzonal 1993. Both had many of their era’s best players. Performance was much better in 1993, players making many fewer serious errors. Nunn concluded that the 1911 tournament would be considered very weak today.

Howard (1999) noted that, since 1970, chess has had an increasing number of prodigies (chess gifted children), despite fewer youngsters in the aging Western population. Some pre-1970 data relating to the Chess Olympiad and the prestigious international grandmaster title were obtained. The title itself dates back to 1914 but only in 1950 did FIDE officially award it. Table 1 shows the age records for gaining the grandmaster title from 1950, either a player’s exact age when receiving the title (if known) or age on July 1 of the year receiving it. The table shows the record being broken several times in the 1950s, but the 1957 record stood until 1991, and thereafter was repeatedly broken. The record setters in the 1950s were exceptionally talented, all except Bronstein becoming world champion.

The same age record decrease has occurred with another prestigious performance-based title, the US Chess Federation (USCF) master title. The age record has been broken several times recently, extremely young players gaining the title. In the last few years, some record-holders have been: Jordy Reynaud aged 10 years, 7 months; Vinay Bhat 10 years, 6 months, and in 1998 Hikaru Nakamura at 10 years, 2 months, only about 29 months after learning to play. However, efforts to gain longitudinal data on this title from the USCF were unsuccessful. Parenthetically, in 1998, Irina Krush set another age record by winning the US Women’s Championship aged only 14 years.

  • Nunn, J. (1999). John Nunn’s chess puzzle book. London: Gambit Publications.

…Francis, Francis & Truscott 1994 provide data on players and tournament results. Some additional data were obtained from bridge federations. Table 1 presents age records for the US Contract Bridge League life master title. There seem to be many more bridge prodigies with time, the age record steadily dropping in bridge as in chess. The present record holder reportedly only began playing bridge the year before. It is interesting to note that the USCF chess master and US bridge master age records have decreased to about the same age.

  • Francis, H. G., Francis, D. A., & Truscott, A. E. (1994). The official encyclopedia of bridge (5th ed.). Memphis, TN: American Contract Bridge League

…Fig. 2 presents median age of the players in the World Open Championship titles (consisting of two-player teams). All ages are as of January 1 of the year considered, as most available birth dates listed only the year. The event was held every 2 years. Because of the small samples, data are median age of all players on the winning teams for each decade. The trend partly parallels the trend for chess grandmasters, going downwards from the 1960s, but then it rises in the 1990s.

Fig. 3 gives median age of the six players in each winning team in the Bridge Olympiad, held every 4 years since 1960. The median age increased 1960–1972, then declined and then rose from 1982. Clearly, top Bridge Olympiad players are not getting progressively younger, with players being displaced by younger, stronger players. The trend upwards from 1964 occurred because the exact same French team won three times in a row.

…However, go has a major problematical aspect for the present study. Unlike chess and bridge, there are great barriers to entry at upper levels. Players generally must start training in elementary school and must serve a lengthy apprenticeship with a top player. They only are admitted to the ranks of professionals and to dan levels by vote of other professionals (Bozulich, 1992). This system favours the pre-eminence of older established players who could keep out young, talented players. The time required and difficulty of rising may discourage great talent.

…These are the prestigious Kisei, Tengen, Meijin, Honinbo, Judan, Gosei, and Oza titles. Most were first awarded in the 1950s. The competitions for each title usually are held every year and the winner is determined by a series of matches. Is the age of title winners dropping? Fig. 2 gives the median age of all title winners combined (“go: all”) and of first-time winners (“go: unique”) of each title in each decade. The age trend partly parallels that for chess and bridge, decreasing from the 1960s to the 1970s but then rising somewhat. However, the unique go title winners in the later decades are much younger than those in the 1950s and 1960s. There is no real go olympiad. Perhaps the closest equivalent is the annual (usually) team match between the two strongest nations, Japan and China, which ran until 1996. The span of years is fairly short. Fig. 3 presents median age of the winning team over this period. The data are quite variable, usually because the Chinese team started much younger and got older and the Japanese team got younger. The data show no clear downwards age trend.

  • Bozulich, R. (1992). The go player’s almanac. Tokyo: The Ishi Press

…Various factors varying over the decades may affect scientific productivity, masking any effects of rising g. First, funding for basic research may vary greatly, and particular fields may fall in or out of favour. Second, fields change over time, making comparisons between decades problematical. It may take much longer to reach the frontiers of knowledge in later decades, for example. In the early stages, there may be relatively few researchers and different problems to solve (Gupta & Karisiddappa, 1996). A field’s easy problems may be solved and the remaining ones be intractable. Some fields even become relatively worked out, with their major problems solved, and so productivity declines. Horgan (1996) even argues that science itself soon will be worked out. The increasing cost of equipment has meant more team work in some fields. A particle physics paper may have hundreds of authors.

  • Gupta, B. M., & Karisiddappa, C. R. (1996). “Author productivity patterns in theoretical population genetics (1900–1980)”. Scientometrics, 36, 19-41

  • Horgan, J. (1996). The end of science. Reading, MA: Addison-Wesley.

…Stephan & Levin 1992 argue that the scientific capacity of the United States has declined over the last few decades, partly because the scientific community is aging and because they say that the average quality of new scientists is declining. Science has become a less attractive career. The United States produces about a third of the world’s science but evidence suggests that intellectual talent has been shifting to more attractive fields. For example, Bowen & Schuster 1986 say that, between 1945 and 1969, 1.2 times as many Phi Beta Kappa (an elite student society) members chose careers in business, law and medicine as in academe. But in the 1970s, five times as many did. US science graduate students now often are foreigners as locals shift to better paid fields (North, 1995). In Australia, the entrance exam mark cutoffs to enter university science courses have steadily dropped over the years as fewer students apply, while top marks are needed for courses in finance, law and medicine.

I examined some Institute of Scientific Information (ISI) data 1955–1997, from the ISI’s Science Citation Index Guide in 1997, which includes lists of source publications. Fig. 4 presents numbers of articles published in each year and number of unique source authors. The latter category naturally would not include all scientists, as many PhD graduates never publish an article (Cole & Phelan, 1999). Data on author numbers 1966–1979 could not be obtained, despite repeated requests to ISI. Also, ISI’s published figure for authors in 1965, nearly double that of 1964, may be a misprint. Fig. 4 shows a huge rise in number of articles published. So, by this measure scientific productivity has risen greatly. However, the number of unique authors has also risen, while the actual productivity per unique author has declined slightly, from 0.967 in 1955 to 0.771 in 1997. This may have many causes, such as the trend to multi-author papers, rising cost of equipment, shorter career spans, and so on. The data suggest that scientific productivity has risen. Indeed, in many fields of science and in mathematics, the annual number of articles published is doubling every 10-15 years (Odlyzko, 1995). The numbers in Fig. 4 even may underestimate the growth in productivity. Competition for publication space often is severe. Many journals have high rejection rates, taking only the best of those submitted. The number of articles never published may have risen greatly, too.

  • Odlyzko, A. M. (1995). “Tragic loss or good riddance? The impending demise of traditional scholarly journals”. International Journal of Human-Computer Studies, 42, 71-122
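The “doubling every 10-15 years” figure maps onto a modest-sounding annual growth rate, which is easy to check with standard exponential-growth arithmetic (this conversion is mine, not a calculation from the paper):

```python
# Convert a doubling time T (years) into an equivalent annual growth rate: ln(2)/T.
import math

for T in (10, 15):
    rate = math.log(2) / T
    print(f"doubling every {T} years ≈ {rate:.1%} growth per year")
# 10-15 year doubling times correspond to roughly 4.6-6.9% more articles per year.
```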

“Philanthropy’s success stories”, GiveWell co-founder Holden Karnofsky:

“One exception is the Casebook for The Foundation: A Great American Secret, which lists and discusses “100 of the highest-achieving foundation initiatives” since 1900…I thoroughly examined this volume, and collected some basic notes into a spreadsheet…The most impressive cases (in my view) are mostly the earlier ones. Though the Casebook focuses on more recent philanthropy (78 of its 100 cases are post-1950), 9 of the 14 cases I found most impressive are pre-1950 (and a 10th is from 1952).

A possible explanation is that the space of doing good has become more crowded over time. For example, note that

  • Total U.S. government health spending was 0.26% of GDP in 1902 and 0.92% of GDP in 1950; by contrast, in 2009, it was 7.06% of GDP (these figures are in the spreadsheet linked above), and even most developing countries spend 2%+ of GDP in this area (source). In 1927, the Commonwealth Fund piloted a rural hospital program; there aren’t a lot of “philanthropic opportunities” that look like that today.

  • Total U.S. government education spending was 1.07% of GDP in 1902 and 3.28% of GDP in 1950; by contrast, in 2009, it was 6.16% of GDP (these figures are in the spreadsheet linked above), and even most developing countries spend 3%+ of GDP in this area (source). In 1902, the Rockefeller Foundation funded advocacy for providing public schools in the U.S. South; there aren’t a lot of “philanthropic opportunities” that look like that today.

  • More context: The Department of Education was created in 1979, the National Science Foundation was created in 1950, and the National Institutes of Health began in 1930 (but have grown significantly since; in fact one of the “success stories” in the Casebook discusses the growth of the NIH budget from $2.4 million in 1945 to $5.5 billion in 1985).”

https://gwern.net/doc/algernon/2012-woodley.pdf :

Innovation rates were obtained from Huebner (2005a), who defines this variable in terms of the number of important scientific and technological developments per year divided by the world population. This metric therefore captures the innovative capacity of populations on a yearly basis. In developing his innovation rate measures Huebner obtained a list of 7198 important events in the history of science and technology compiled by Bunch & Hellemans 2004, which spans 1455–2004. By curve-fitting these data to a Gaussian distribution, Huebner attempts to predict future innovation rates out to the 22nd century. Huebner’s historical and future world population estimates were derived from the U.S. Census Bureau (2004a, 2004b). The estimates were available on a decadal basis and were obtained from Huebner’s Fig. 1 (p. 982).
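To make the construction concrete, here is a toy Python sketch of Huebner’s procedure - innovations per decade divided by world population, then a Gaussian fitted over time. All the numbers are invented placeholders, not Bunch & Hellemans’ event counts or the Census Bureau’s population figures:

```python
# Toy version of Huebner's innovation-rate metric and Gaussian curve fit.
# The decade, event, and population numbers below are illustrative placeholders only.
import numpy as np
from scipy.optimize import curve_fit

decades     = np.array([1800, 1850, 1900, 1950, 2000], dtype=float)
innovations = np.array([ 300,  600,  900,  700,  400], dtype=float)   # events per decade (toy)
population  = np.array([ 1.0,  1.2,  1.6,  2.5,  6.1]) * 1e9          # world population (toy)

rate = innovations / population            # innovations per person per decade

def gaussian(t, a, mu, sigma):
    return a * np.exp(-(t - mu) ** 2 / (2 * sigma ** 2))

params, _ = curve_fit(gaussian, decades, rate, p0=[rate.max(), 1900, 100])
print(f"fitted peak decade: {params[1]:.0f}")
```

Extrapolating the fitted curve forward is what yields Huebner’s forecast of declining per-capita innovation; the disputes below are mostly about whether the event list and the population denominator are the right ones.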

Murray’s index [Human accomplishment: The pursuit of excellence in the arts and sciences, 800 BC to 1950] is computed on the basis of the weighted percentage of sources (ie. multiple lists of key events in the history of science and technology), which include a particular key event. Although Murray’s data are not as extensive in time as are Huebner’s, it is apparent that rate of accomplishment increases commensurately with Huebner’s index in the period from 1455 to the middle of the 19th century, and then declines towards the end of that century and into the 20th. Murray’s index was found to correlate highly with Huebner’s (r = .865, p < .01, N = 50 decades). In an earlier unpublished study, Gary (1993) computed innovation rates using Asimov’s (1994) Chronology of Science and Discovery. He found the same shaped curve as that described by both Huebner and Murray, with an innovation peak occurring at the end of the 19th century. Huebner’s index correlates strongly with Gary’s (r = .853, p < .01, N = 21 time points). It should be noted that the observation of peak innovation at the end of the 19th century dates back to the work of Sorokin (1942), thus it is concluded that Huebner’s index exhibits high convergent validity.

  • Gary, B. L. (1993). “A new timescale for placing human events, derivation of per capita rate of innovation, and a speculation on the timing of the demise of humanity”. Unpublished Manuscript

  • Sorokin, P. A. (1942). The crisis of our age: The social and cultural outlook. Boston: E. P. Dutton

To control for this Huebner’s critics suggest re-estimating innovation rates using just the innovation-generating countries. This analysis was conducted using raw decadal innovation data from Bunch & Hellemans 2004, along with data on European population growth 1455–1995 (from McEvedy & Jones [1978] and the US Census Bureau) combined with data on US population growth 1795–1995 (from various statistical abstracts of the United States available from the US Census Bureau). The resultant innovation rates were found to correlate at r = .927 (p < .01, N = 55 decades) with Huebner’s original estimates, which indicates that the innovation rate data are insensitive to decision rules concerning which set of population estimates are used. Where choice of population matters is in extrapolating future declines in innovation rate.

  • McEvedy & Jones [1978] ???

Whilst a genotypic IQ decline of between 1 and 2 points a generation does not seem large, it is important to stress the impact that such a change can have on the frequencies of those with the highest levels of IQ. A 105-109 point decline in the Western genotypic IQ mean would have decreased the proportion of the population with the sort of IQ needed for significant innovation (ie. ≥ 135) by ~55-75%. The worldwide increase in the rate of innovation 1455–1873 followed by a sharp decline is consistent not only with continued dysgenesis in the West since the latter half of the 19th century, but also with the existence of a “eugenic phase” in the population cycle (Weiss, 2008). During this phase genotypic intelligence was rising and innovators were becoming more common on a per capita basis, congruent with positive directional selection for ‘bourgeois’ traits.
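The tail arithmetic here is just the normal distribution at work and is easy to reproduce; the sketch below assumes a mean of 100 and SD of 15 and sweeps a few possible declines in the mean (it does not use the paper’s exact assumed decline):

```python
# How a drop in the population mean thins the IQ >= 135 tail, assuming a normal
# distribution with SD 15. The candidate declines are illustrative, not the paper's.
from scipy.stats import norm

def frac_above(threshold, mean, sd=15):
    return norm.sf(threshold, loc=mean, scale=sd)

baseline = frac_above(135, 100)
for decline in (4, 6, 8):
    shrink = 1 - frac_above(135, 100 - decline) / baseline
    print(f"{decline}-point decline in the mean: >=135 tail shrinks by {shrink:.0%}")
# A 4-8 point fall in the mean cuts the >=135 fraction by roughly 50-80%,
# which is the order of magnitude the passage is describing.
```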

It must be noted that total numbers of innovations are not as strongly related to genotypic IQ as are innovation rates (r = .512, p < .01, N = 55). Total numbers of innovations (which based on Bunch and Hellemans [2004] appear to have peaked in the 1960s) relate more strongly to the size of the most innovative populations. This relationship suggests that bigger populations contain more innovators, however dysgenesis is essentially ‘diluting’ the impact of innovators, such that per capita innovative capacity declines with the passage of time. This process should be apparent in the ways in which science is organized in the modern world. For example, if relative to the population as a whole high intelligence individuals are becoming scarcer, established scientists might have to resort to recruiting individuals of more mediocre ability. This might explain the tendency for contemporary scientists, more so than scientists of earlier generations, to select for conscientious and sociable workers as high conscientiousness does not require high IQ (Charlton, 2008). Consistent with Charlton’s (2008) argument, it has been found that whilst the size of scientific teams has been increasing, the relative impact of individual scientists has been decreasing (Jones, 2009; Wuchty, Jones, & Uzzi, 2007).

  • Charlton, B. G. (2008). “Why are modern scientists so dull? How science selects for perseverance and sociability at the expense of intelligence and creativity”. Medical Hypotheses, 72, 237-243

  • Jones, B. (2009). “The burden of knowledge and the ‘death of the renaissance man’: Is innovation getting harder?” The Review of Economic Studies, 76, 283-317

  • Wuchty, S., Jones, B. F., & Uzzi, B. (2007). “The increasing dominance of teams in the production of knowledge”. Science, 316, 1036–1039

Another hazard is that in the absence of a “critical mass” of sufficiently intelligent individuals engendering an appropriate level of scientific rigor, “junk science” has the potential to proliferate to an extent never before seen in free nations (cf. Cofnas, 2012).

This trend may also couple with the anti-Flynn effect, which has been observed in a number of Western nations over the last couple of decades, and is characterized by significant losses in phenotypic IQ (Flynn 2009b; Shayer & Ginsburg 2009; Sundet et al 2004; Teasdale & Owen 2008).

  • Flynn 2009b. “Requiem for nutrition as the cause of IQ gains: Raven’s gains in Britain 1938–2008”. Economics and Human Biology, 7, 18-27

  • Shayer & Ginsburg 2009. “Thirty years on—A large anti-Flynn effect? (II): 13- and 14-year olds. Piagetian tests of formal operations norms 1976–2006/7”. British Journal of Educational Psychology, 79, 409-418

  • Sundet et al 2004. “The end of the Flynn effect? A study of secular trends in mean intelligence test scores of Norwegian conscripts during half a century”. Intelligence, 32, 349-362

  • Teasdale & Owen 2008. “Secular declines in cognitive test scores: A reversal of the Flynn effect”. Intelligence, 36, 121-126

Technologies like gamete cloning, when mature enough, may permit individuals to select for IQ enhancing alleles, but would only realistically be able to raise the IQ of offspring by a point or two at best (Lee 2010).

  • Lee 2010, “Review of intelligence and how to get it: Why schools and cultures count, R.E. Nisbett 2009, Norton, New York, NY. ISBN: 9780393065053”. Personality and Individual Differences, 48, 247–255 [count me skeptical, given the existing genetic variation… we should be able to get more than a point or two]

…it is indicated that the Flynn effect hasn’t really started to take off in these nations, but that it has the potential to do so (Wicherts et al 2010). This is evidenced by observations of a nascent Flynn effect in South Africa (te Nijenhuis et al 2011), Kenya (Daley et al 2003), Dominica (Meisenberg et al 2005), Saudi Arabia (Batterjee 2011) and elsewhere. It is entirely possible therefore that many of the less developed nations are entering into the early stages of an “enhanced growth” phase in the Flynn effect, a consequence of which might be significant decreases in poverty, such as is currently occurring in Africa (Sala-i-Martin & Pinkovskiy 2010).

  • Wicherts et al 2010. “Raven’s test performance of sub-Saharan Africans: Mean level, psychometric properties, and the Flynn effect”. Learning and Individual Differences, 20, 135-151

  • te Nijenhuis, J., Murphy, R., & van Eeden, R. (2011). “The Flynn effect in South Africa”. Intelligence, 36, 456-467

  • Daley, T. C., Whaley, S. E., Sigman, M. D., Espinosa, M. P., & Neumann, C. (2003). “IQ on the rise: the Flynn effect in rural Kenyan children”. Psychological Science, 14, 215-219

  • Meisenberg, G., Lawless, E., Lambert, E., & Newton, A. (2005). “The Flynn effect in the Caribbean: Generational change of cognitive test performance in Dominica”. Mankind Quarterly, 46, 29-69.

  • Batterjee, A. (2011). “Intelligence and education: the Saudi case”. Mankind Quarterly, 52, 133-190

  • Sala-i-Martin, X., & Pinkovskiy, M. (2010). “African poverty is falling… much faster than you think!” National Bureau of Economic Research Working Paper No. 15775

http://dl.dropbox.com/u/85192141/2012-meisenberg.pdf :

The relationship between intelligence and fertility has been investigated since the early years of the 20th century. During the first half of the century, studies in Britain and the United States usually found a negative relationship between IQ and completed family size (Anastasi, 1956; Cattell, 1936, 1937; Dawson, 1932/33), although atypical results were obtained occasionally (Willoughby & Coogan, 1940). These early results were challenged by a series of studies with mainly White middle-class groups in the United States at the time of the baby boom, which reported a negligible or slightly positive relationship between IQ and number of children (Bajema, 1963, 1968; Falek, 1971; Higgins et al 1962; Waller, 1971). These results were complemented by the observation that subfertility of men in Who’s Who in America disappeared for cohorts born after 1910 (Kirk, 1957). The conclusion at that time was that dysgenic fertility for intelligence was a temporary phenomenon during the demographic transition when the more intelligent pioneered the use of contraception, but disappeared at a later stage when contraceptive habits had diffused through the entire population (Osborn & Bajema, 1972).

Although these results seemed to make the relationship between intelligence and reproduction a dead issue, studies of cohorts who reproduced after the 1960s again showed the familiar negative relationship. As early as 1978, a negative relationship was observed in married White American women. Remarkably, this negative relationship remained significant even with education and socioeconomic background controlled (Udry, 1978). However, the value of this result is ambiguous because most of the women in this sample (aged 15-44) had not yet completed their childbearing. More substantial were the findings of Vining (1982, 1986, 1995), who provided evidence for the reemergence of a dysgenic trend among those born after 1935. Vining’s conclusions were further supported by Retherford and Sewell (1988, 1989), who found a negative relationship between intelligence at age 17 and number of children at age 35 for a predominantly White sample of high school seniors in Wisconsin. Additional evidence was found in the General Social Survey (van Court & Bean, 1985; Lynn & van Court, 2004), which showed a negative relationship between word knowledge and the number of children. Lynn and van Court (2004) concluded that the relationship had been negative for all cohorts born after 1900, although it was weaker for those born 1920–1929. In these studies, the dysgenic fertility was far stronger in females than males.

…Tables 3 and 4 also show that in these models the likelihood of being married is increased by IQ and g, both directly and, to a lesser extent, indirectly through religious attendance. However, in the White groups this effect is opposed by the anti-marriage effect of education. These results contrast with a British study, which found that never-married women had higher childhood IQs than married women although married men tended to have higher IQs than never-married men (Taylor et al 2005). In an Afro-Caribbean population, however, high IQ raised the marriage rate for both males and females (Meisenberg, Lawless, Lambert, & Newton, 2006).

…Selection against high intelligence has been observed throughout most of the 20th century in Europe and the United States (Cattell, 1936, 1937; Lynn & van Court 2004; Retherford & Sewell, 1988; van Court & Bean, 1985), where it was probably present since the beginning of the fertility transition in the 19th century (Notestein, 1936; Stevenson, 1920). We can estimate that without this selection effect the average intelligence in these countries today would be up to 5 points higher than it is - about as high as the average IQ in China today (Lynn & Vanhanen, 2006), where reproductive differentials still favored wealth, literacy and presumably intelligence in the early part of the 20th century (Lamson, 1935; Notestein, 1938). In pre-industrial societies, fertility usually was highest among the wealthy classes (Clark & Hamilton, 2006; Hadeishi, 2003; Harrell, 1985; Lamson, 1935) and also among the educated, at least in the few studies that included a measure of education (Clark & Hamilton, 2006; Hadeishi, 2003; Lamson, 1935). Although selection against high educational attainment, and presumably high intelligence, is found worldwide today (Meisenberg, 2008; Weinberger, 1987), historically it presents a novel phenomenon.

During the 20th century, the small genetic decline was masked by massive environmental improvements, especially in the educational system, which caused IQ gains on the order of 10 points per generation (Flynn, 1987). This environmental effect was at least ten times greater than the decline predicted from genetic selection, and thus made genetic selection seem irrelevant. However, recent results show that this rising trend, known as the Flynn effect, is either diminishing or reversing in the most advanced societies. A marginal Flynn effect was still observed among children born between 1973 and 1995 in the United States (Rodgers & Wänström, 2007), most recent trends in Britain are ambiguous (Flynn, 2009; Shayer, Ginsburg & Coe, 2007), and military conscripts born after about 1980 in Denmark (Teasdale & Owen 2008) and Norway (Sundet, Barlaug & Torjussen, 2004) show stagnating intelligence or a slow decline.

…The implications of the present findings for the United States need to be stated clearly: Assuming an indefinite continuation of current fertility patterns, an unchanging environment and a generation time of 28 years, the IQ will decline by about 2.9 points/century as a result of genetic selection. The proportion of highly gifted people with an IQ higher than 130 will decline by 11.5% in one generation and by 37.7% in one century. Since many important outcomes, including economic wealth (Rindermann, 2008a) and democracy (Rindermann, 2008b), are favored by high intelligence, adverse long-term consequences of such a trend would be expected although short-term consequences on a time scale of less than one century are negligible.
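Those tail percentages follow directly from the normal distribution and the stated decline, so they can be checked in a few lines (assuming mean 100 and SD 15, which the quoted figures imply):

```python
# Check of the quoted tail shrinkage: a 2.9 point/century decline in the mean,
# 28-year generations, and a normal IQ distribution (mean 100, SD 15).
from scipy.stats import norm

def frac_above(cutoff, mean, sd=15):
    return norm.sf(cutoff, loc=mean, scale=sd)

base = frac_above(130, 100)
per_generation = frac_above(130, 100 - 2.9 * 28 / 100)   # one 28-year generation
per_century    = frac_above(130, 100 - 2.9)              # one century
print(f"decline per generation: {1 - per_generation / base:.1%}")   # ~12%
print(f"decline per century:    {1 - per_century / base:.1%}")      # ~38%
```

The output (~12% and ~38%) matches the quoted 11.5% and 37.7% to within rounding: the dramatic-sounding shrinkage of the >130 tail is the mechanical consequence of a small mean shift acting on a thin tail.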

Flynn effect on hollow IQ? Wicherts et al 2004: “Are intelligence tests measurement invariant over time? Investigating the nature of the Flynn effect”

Measurement invariance implies that gains over the years can be attributed to increases in the latent variables that the tests purport to measure. The studies reported contain original data of Dutch Wechsler Adult Intelligence Scale (WAIS) gains 1967–1999, Dutch Differential Aptitude Test (DAT) gains 1984–1995, gains on a Dutch children’s intelligence test (RAKIT) 1982–1993, and reanalyses of results from Must, Must, and Raudik [Intelligence 167 (2003) 1-11] and Teasdale and Owen [Intelligence 28 (2000) 115-120]. The results of multigroup confirmatory factor analyses clearly indicate that measurement invariance with respect to cohorts is untenable. Uniform measurement bias is observed in some, but not all subtests. The implications of these findings are discussed.

Selection

As you develop more drugs, your standard for safety should go up, because it becomes ever less likely that a new drug is superior to any of the old ones, while the chance that it fooled your tests remains pretty much the same.

Imagine you have a little random number generator 1-100, and you want to maximize the number you draw, but you also have, say, a 10% chance of misreading the number each time. Initially you’d keep discarding your number - pfft, a 50? pfft, a 65? I can do better than that! - but once you’ve successfully drawn a 95, then you want to start examining the numbers carefully. ‘I just drew a 96, but the odds of getting a number >95 is just 4%! It’s more likely that I just misread this 96… oh wait, it was actually 69. My bad.’

(I’m not sure how accurate this little model is, but it captures how I feel about it.)
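A quick simulation of the model exactly as stated (uniform draws, a 10% chance of an unrelated misreading) bears out the direction of the intuition, though only modestly; this is just the toy model above, not a claim about actual drug screening:

```python
# Toy model from above: true quality is uniform on 1-100, but 10% of readings are
# garbled into an unrelated uniform number. Given an apparent improvement over the
# current best, how often is the improvement real?
import random

random.seed(0)

def p_genuine(current_best, trials=200_000):
    genuine = apparent = 0
    for _ in range(trials):
        true_value = random.randint(1, 100)
        observed = random.randint(1, 100) if random.random() < 0.10 else true_value
        if observed > current_best:           # looks like a new record
            apparent += 1
            genuine += true_value > current_best
    return genuine / apparent

for best in (50, 80, 95):
    print(f"current best {best}: {p_genuine(best):.0%} of apparent improvements are real")
# With uniform misreads the spurious share only roughly doubles (from ~5% to ~10%)
# as the bar rises; the effect is stronger if misreads tend to look like good results.
```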

In general, I don’t buy the argument that the FDA is strangling all sorts of fantastic innovation through its focus on safety. If safety is costing us billions and billions every year in opportunity cost for either drug approvals or research, we ought to see countries with laxer regulations and scientific capability - like in East Asia - starting up massive pharmaceutical giants and steamrolling Western corps with the low-hanging fruit we have fastidiously turned up our noses at. We don’t observe this. We observe occasional innovations and contributions, and apparently they’re pretty active in stem cell research (which America did repress), but at the rates one would generally expect. The fall in returns is pretty huge, and if it was due solely to safety, abandoning safety ought to lead to productivity gains of 2, 3, maybe 10 times the American scientist equivalents. I think we would have heard if that were the case.

What causes diminishing returns? Dunno. It’s a pretty common phenomenon.

genetics underachieving: https://sethroberts.net/2012/03/18/genomics-confidential-the-faux-wonderland-of-iceland/

Question: Dick, would you care to comment on the relative effectiveness between giving talks, writing papers, and writing books?

Hamming: In the short-haul, papers are very important if you want to stimulate someone tomorrow. If you want to get recognition long-haul, it seems to me writing books is more contribution because most of us need orientation. In this day of practically infinite knowledge, we need orientation to find our way. Let me tell you what infinite knowledge is. Since from the time of Newton to now, we have come close to doubling knowledge every 17 years, more or less. And we cope with that, essentially, by specialization. In the next 340 years at that rate, there will be 20 doublings, i.e. a million, and there will be a million fields of specialty for every one field now. It isn’t going to happen. The present growth of knowledge will choke itself off until we get different tools. I believe that books which try to digest, coordinate, get rid of the duplication, get rid of the less fruitful methods and present the underlying ideas clearly of what we know now, will be the things the future generations will value. Public talks are necessary; private talks are necessary; written papers are necessary. But I am inclined to believe that, in the long-haul, books which leave out what’s not essential are more important than books which tell you everything because you don’t want to know everything. I don’t want to know that much about penguins is the usual reply. You just want to know the essence.

Richard Hamming, “You and Your Research”

Dysgenics

One of the more controversial explanations for diminishing returns is that the diminishing reflects the quality of the human capital: the peak quality has declined. On this view, important discoveries and inventions are disproportionately due to the smartest scientists and inventors. As the smartest cease to command a reproductive advantage, their ranks are inevitably impoverished.

This might seem to contradict the well-known Flynn effect and also fly counter to the many IQ-enhancing interventions over the past centuries such as vaccinations or iodine supplementation, except the dysgenic hypothesis refers to the genotypic potential for intelligence and only indirectly to the phenotype. That is, intelligence is a joint product of genes and environment: genes set a ceiling but the environment determines how much of the potential will be realized. So if the environment improves, more individuals will be well-nurtured - and hit their genetic ceilings. This sort of reasoning predicts that we could see an increase in population-wide averages, per the Flynn effect, and we could also see decreases in the tail for low intelligence (per the public health interventions, eg. no more iodine-related cretinism), but assuming the environment was not so terrible that no individual hit their ceilings, we’d see a truncating of the bell curve, with fewer individuals at the high end than one would otherwise predict. If the dysgenic selection effects continued, one might even see reductions in the absolute numbers of highly intelligent people.

So in this narrative, genes for intelligence cook along through history in subpar deficient environments, eking out modest fitness advantages (due to presumable costs like increased metabolism) and maintaining their presence in the gene pool, until the Industrial Revolution happens, causing the demographic transition in which suddenly richer countries begin to reproduce less, apparently due to wealth, and who are the wealthiest in those countries? The most intelligent. So even as the Industrial and Scientific Revolutions and economic growth (powered by the intelligent) all simultaneously improve the environment in a myriad of ways, the most intelligent are failing to reproduce and the genotypic ceiling begins falling even as the phenotypic average continues rising, until the trends intersect.
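A minimal simulation makes the ceiling argument concrete: model phenotypic IQ as the smaller of a genetic ceiling and what the environment permits, then watch what improving the environment does to the mean versus the upper tail. Everything here is illustrative - the distributions and parameters are assumptions, not estimates from the literature:

```python
# Toy ceiling model: phenotype = min(genetic ceiling, environmentally permitted level).
# Improving the environment raises the observed mean, but the high tail stays capped
# by the (possibly shrinking) genotypic distribution. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
ceiling = rng.normal(100, 15, n)                   # genotypic potential
for env_quality in (80, 95, 110):                  # mean IQ the environment can support
    phenotype = np.minimum(ceiling, rng.normal(env_quality, 10, n))
    mean = phenotype.mean()
    tail = (phenotype > 130).mean()
    print(f"environment {env_quality}: mean {mean:5.1f}, share above 130 {tail:.2%}")
```

As the environment improves, the mean climbs (a Flynn-like effect), but the share above 130 can never exceed the genotypic share above 130; if dysgenic selection were simultaneously shifting the ceiling distribution downward, the observed high tail would be squeezed from both sides.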

This is a complex narrative. There are multiple main points to establish, any of which could torpedo the overall thesis:

  1. that intelligence has a substantial genetic component

    If there is no genetic basis, then there can be no dysgenics.

  2. that the intelligent (and highly intelligent) have not always reproduced less and suffered fitness losses

    If we observed that the highly intelligent were always at fitness disadvantages, this implies various bizarre or falsified claims (like humans starting eons ago with IQs of 1000s), and that our basic model was completely wrong. The truth would have to be something more exotic like intelligence is determined by spontaneous mutations or the reproductive penalty is balanced by the inclusive fitness of close relatives with mediocre intelligence (perhaps some heterozygote advantage).

  3. that the intelligent (and highly intelligent) now reproduce less and their genes suffer a loss of fitness

    If intelligence is being reproductively selected for, then the pressures would not be dysgenic in this sense but eugenic. (Such opposite pressures would not explain any diminishing marginal returns and actually argue against it.)

  4. that the highly intelligent are not increasing in modern times

    Another basic sanity check like #2: if the highly intelligent are increasing in proportion, this is the opposite of what the narrative needs.

  5. that the absence of the highly intelligent could in fact explain diminishing returns

    If they turn out to be only as productive as less extreme members of the bell curve, then this discussion could be entirely moot albeit interesting: the loss of them would be offset by the gain of their equally productive but dimmer brethren. Dysgenic pressures would only matter if it began to diminish their ranks too, but this could be some sort of stable equilibrium: the dimmer occasionally give birth to brighter offspring, who do not reproduce much and also do not produce any more than their parents, all in accordance with the previous points but with no dysgenic threats to the dimmer ranks.

  • http://www.springerlink.com/content/p15h2830v4115281

  • http://www.redorbit.com/news/education/1139160/reading_writing_and_sex_the_effect_of_losing_virginity_on/

  • “Smart Teens Don’t Have Sex (or Kiss Much Either)”, Halpern et al 2000

  • http://counterpoint.mit.edu/archives/Counterpoint_V21_I3_2001_Nov.pdf

  • http://www.halfsigma.com/2006/07/smarter_people_.html

  • http://www.halfsigma.com/2006/07/sex_drive_decre.html

  • http://www.gnxp.com/blog/2007/04/intercourse-and-intelligence.php (testosterone link; see also the SMAP papers)

  • https://www.sciencedirect.com/science/article/pii/S1090513805000619

    http://statsquatch.blogspot.com/2009/02/fun-with-fertilitydata.html

  • https://en.wikipedia.org/wiki/Fertility_and_intelligence

https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.71.8846&rep=rep1&type=pdf “Exploring Scientists’ Working Timetable: Do Scientists Often Work Overtime?” Wang et al 2012; such a grind is very Conscientious, but is it good for genuine creativity? doesn’t creativity require time off and working on other things?

A novel method is proposed to monitor and record scientists’ working timetable. We record the downloads information of scientific papers real-timely from Springer round the clock, and try to explore scientists’ working habits. As our observation demonstrates, many scientists are still engaged in their research after working hours every day. Many of them work far into the night, even till next morning. In addition, research work also intrudes into their weekends. Different working time patterns are revealed. In the US, overnight work is more prevalent among scientists, while Chinese scientists mostly have busy weekends with their scientific research.

“The Graying of Academia: Will It Reduce Scientific Productivity?”, Stroebe 2010

This change resulted in a drop in retirements of older academics and has already altered the age structure at U.S. universities (Ashenfelter & Card, 2002; Clark & Ghent, 2008). On the basis of data obtained from 16,000 older faculty members at 104 colleges and universities across the United States, Ashenfelter & Card 2002 concluded that after the abolition of mandatory retirement, the percentage of 70-year-old professors continuing to work increased from 10% to 40%. In an analysis of data from the North Carolina university system, Clark & Ghent 2008 drew a similar conclusion: “Prior to 1994, the retirement rate was 59 percent for faculty age 70, 67 percent for faculty age 71 and 100 percent for faculty age 72. After the policy of mandatory retirement was removed, 24 percent of faculty age 70, 19 percent of faculty age 71, and 17 percent of faculty age 72 retired.” (pp. 156-157) As a result of such changes, the percentage of full-time faculty members age 70 or older went up threefold (to 2.1%) between the years 1995 and 2006 (Bombardieri, 2006). However, at some universities the situation is more extreme. For example, in the Harvard University Faculty of Arts and Sciences, the percentage of tenured professors age 70 years and older has increased from 0% in 1992 to 9.1% in 2006 (Bombardieri, 2006). The impact of the changing age structure has also been felt at the National Institutes of Health (NIH), where the average age of principal investigators for NIH grants has increased from 30-40 years in 1980 to 48 years in 2007.

…although creativity is moderately positively correlated with IQ up to intelligence levels that are approximately one standard deviation above the mean, the relationship becomes essentially zero for more intelligent individuals (Barron & Harrington, 1981; Feist & Barron, 2003). Thus, when IQ scores are correlated with some valid criterion of scientific distinction (eg. number of citations), the correlations approach zero (eg. Bayer & Folger, 1966; Cole & Cole, 1973). This makes it highly unlikely that a modest age-related decrease in intelligence should impair a scientist’s ability to produce high-quality research. Similar reservations apply to measures of divergent thinking, which are considered more closely related to creativity than are traditional intelligence tests (eg. Hennessey & Amabile, 2010). Although there is some evidence that age decrements in divergent thinking appear as early as in the 40s (eg. McCrae, Arenberg, & Costa, 1987), age accounts for very little variance.

The most influential theory of the association of age, cognitive ability, and scientific achievement has been suggested by Simonton (eg. 1985, 1988, 1997, 2002), undoubtedly the most important and prolific researcher in the area of the psychology of science. He developed an elegant quantitative model of the decline in creative potential, which predicts that the association between age and productivity is curvilinear and declines with career age rather than chronological age. The basic assumption of Simonton’s theory is that each creator starts off with a fixed amount of initial creative potential. This creative potential consists of “concepts, ideas, images, techniques, or other cognitions that can be subjected to free variation” (Simonton, 1997, pp. 67-68). Of the possible combinations of these, only a subset are sufficiently promising to justify further elaboration. Some of them may fail the empirical test, but some may finally be worked out into finished products that might be published. Each time individuals produce new research, they use up part of their creative potential and reduce the ideational combinations that are available to them. According to Simonton (1997), productivity increases during the first 20 years of an individual’s career, when the individual still has a rich fund of creative potential and is getting better and better at turning these ideas into publishable output. However, approximately 20 years into an individual’s career, a peak is typically reached. After that, productivity begins to decline, because the individual has used up a substantial proportion of his or her initial creative potential.
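
Simonton’s verbal model is easy to turn into a toy simulation. The sketch below (the pool size and the two rates are made-up illustrative values, not Simonton’s fitted parameters) simply drains a fixed stock of creative potential into works-in-progress and then into finished publications, which is enough to reproduce the rise to a peak roughly 20 career-years in, followed by a slow decline.

```python
# A toy depletion model of the kind Simonton describes: a fixed pool of creative
# potential is gradually activated into works-in-progress and then finished as
# publications. Pool size and rates (a, b) are illustrative assumptions only.

def career_output(potential=100.0, a=0.04, b=0.05, years=40):
    """Yearly publications from a two-stage depletion process.
    a: fraction of remaining potential activated per year
    b: fraction of works-in-progress finished (published) per year"""
    in_progress, output = 0.0, []
    for _ in range(years):
        activated = a * potential          # draw down the fixed pool of ideas
        finished = b * in_progress         # turn developed ideas into publications
        potential -= activated
        in_progress += activated - finished
        output.append(finished)
    return output

pubs = career_output()
peak = max(range(len(pubs)), key=pubs.__getitem__)
print(f"peak of {max(pubs):.2f} publications/year in career year {peak + 1}, then a slow decline")
```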

It has been argued that the differences between scientists in research productivity are too extreme to be explained merely by differences in ability or motivation (Cole & Cole, 1973). For example, in a study of the scientific output of more than 1,000 American academic psychologists, Dennis (1954) found that the most productive 10% authored 41% of all publications, whereas the bottom 10% produced less than 1%. In fact, the top half were responsible for 90% of total output, and the bottom half, for only the remaining 10%. Similarly biased distributions have been shown for other sciences as well as for the arts and humanities (Simonton, 2002). Findings such as these led Price (1963), a historian of science, to propose Price’s law. According to this law, if k is the number of researchers who have made at least one contribution to a given field, the square root of k will be responsible for half of all contributions in this field. Thus, if there are 100 contributors in a field, the top 10% will be responsible for half of the contributions to this area.
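
The arithmetic of Price’s law is worth making concrete, because the implied elite fraction shrinks as a field grows. A minimal sketch (the contributor counts are arbitrary examples):

```python
# Price's law as stated above: in a field with k contributors, roughly sqrt(k)
# of them account for ~half of all contributions.
import math

def price_top_group(k: int) -> int:
    """Number of contributors who, per Price's law, produce ~half of the field's output."""
    return round(math.sqrt(k))

for k in (100, 1_000, 10_000):
    n = price_top_group(k)
    print(f"{k:>6} contributors -> top {n} ({n / k:.1%}) produce ~50% of the work")
```

The 100-contributor case reproduces the “top 10%” example in the passage; at 10,000 contributors the same law implies the top 1%.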

For example, in a study of publications by the 60 members of the editorial board of the Journal of Counseling Psychology in 2007, Duffy, Martin, Bryan, and Raque-Bogdan (2008) found number of publications and number of citations to correlate .80. This correlation is somewhat higher than the correlations typically found for psychology, which vary between .50 and .70 (Simonton, 2002). Simonton (2002) therefore concluded “that the quality of output is a positive function of quantity of output: the more publications one produces, the higher the odds that one will get cited” (p. 45). It is interesting to note that the same relationship has been observed in brainstorming research, where the number of ideas that are produced by an individual or a group is highly correlated with the number of good ideas (eg. Diehl & Stroebe, 1987; Stroebe, Nijstad, & Rietzschel, 2010).

Because of the exponential growth of the scientific community during the last few centuries, there has always been an overrepresentation of younger scientists (Price, 1963). Thus, even if scientific achievement were unrelated to age, one would expect more eminent contributions from young rather than old scientists. The same bias arises with studies that use number of publications in top journals as their index of scientific achievement. For example, if one took the publications of 10 major scientific journals as one’s sample and then plotted the age distribution of the authors of these publications, the results would again be distorted by the fact that there are likely to have been more younger than older scientists in the population of scientists from which the successful publishers were drawn.
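
This composition effect can be shown with a back-of-the-envelope calculation: assume the field’s entering cohorts grow by a fixed percentage per year and that the chance of a landmark contribution is completely independent of age; the age distribution of landmark authors then simply mirrors the young-heavy age distribution of the field. The growth rate and career length below are assumptions chosen only for illustration.

```python
# If the scientific workforce grows steadily and landmark papers are produced at an
# age-independent rate, the share of landmarks by young scientists just equals the
# share of young scientists. Growth rate and career length are illustrative.

growth, career_years = 0.05, 40               # 5%/year growth, 40-year careers
cohort_sizes = [(1 + growth) ** -age for age in range(career_years)]
# cohort_sizes[age] = relative number of scientists `age` years into their career
total = sum(cohort_sizes)
under_15 = sum(cohort_sizes[:15]) / total
print(f"share of (age-blind) landmark papers by scientists <15 career-years in: {under_15:.0%}")
# prints roughly 60% under these assumptions
```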

The classic study of Nobel laureates was published by Zuckerman (1977). It was based on 92 Nobel Prize winners who worked in the United States and won the Nobel Prize between 1901 and 1972. She found that the average age at which these individuals did their prize-winning research was 39 years, with winners of the prize in physics doing their research at 38.6 years and winners of the prize in medicine and physiology doing it at 41.1 years. Similar results were reported by Stephan & Levin 1993, who in an update and extension of Zuckerman’s (1977) study analyzed the 414 winners of the Nobel Prize in the natural sciences in the years 1901–1992. The average age for conducting the prize-winning research for all disciplines was 37.6 years, with physicists doing their research the earliest, at 34.5 years, and medical research being conducted by somewhat older researchers, at 38.0 years. Although this is not old, it is also not precociously young. However, before one draws any conclusions, one must remember that these findings inform us only of the proportion of Nobel Prizes won by scientists of different ages. They do not tell us at which age scientists are most likely to win that prize. For this, we need to know the age distribution of the population of scientists from which the Nobel Prize winners were selected. Although Stephan & Levin 1993 failed to make such a correction, Zuckerman (1977) did, and she compared the age distribution of her laureates to that of the general population of American scientists (see Figure 1). This comparison shows that the only substantial deviations from the general population occur for the age group of 35 to 44 years, which is clearly overrepresented among the Nobel laureates, and the age group of 55 years and older, which is underrepresented. Before one concludes from this evidence that great science is really the domain of the middle-aged, one should remember that during the period considered in these studies, even American scientists were subject to compulsory retirement. Most research in the natural sciences requires monetary resources, personnel, and laboratory facilities, which may have become unavailable to older scientists after their retirement. In anticipation of this fact, many scientists in their mid-50s may have already stopped initiating projects that they expected to be unable to finish before retirement.

For example, when Harvey Lehman, one of the most prolific researchers on age and scientific achievement, tabulated the ages at which a sample of 52 deceased philosophers had written their most significant work, a single-peaked function emerged: The mean age for producing a philosophical masterwork was 41.5 years. Practically the same age curve also describes the age at which significant works were produced in psychology (Lehman, 1966). Lehman’s (1953, 1966) research can be criticized for his failure to take account of the age distribution of the population of philosophers and scientists from which he drew the sample of excellent contributions. The data were not corrected for the fact that there were likely to be many more younger than older individuals in the population of which the eminent individuals were a subsample. However, Wray (2004), who studied landmark discoveries in bacteriology between 1877 and 1899, also found that scientists 36 to 45 years of age were responsible for a disproportionate number of these discoveries, even after he corrected for the likely age distribution of scientists in the total population. In contrast, younger scientists (35 years and younger) and older scientists (46 to 65 years) were relatively underrepresented. Finally, Over (1988), who used publications in Psychological Review as his criterion for outstanding contributions (admittedly a less demanding criterion than that of landmark discoveries, even though Psychological Review is one of the top journals of our discipline), found a similar curvilinear distribution that peaked for individuals who were 12 to 17 years past their PhDs (ie. ages 38 to 45 years) and declined thereafter. However, Over (1988) argued that because 60% of American psychologists active in research between 1965 and 1980 were under 40, one could expect that about 60% of the articles appearing in Psychological Review in this period would be authored by psychologists under the age of 40. In fact, 59.9% of the articles in his sample were published by authors who were 0 to 11 years past their PhDs. Thus, despite the less demanding criterion, the curvilinear relationship between age and scientific achievement reported here is similar to that found in studies of Nobel laureates.

The pattern of findings of these early studies is similar to that found in the studies of Nobel laureates and scientists with lesser achievements, with age being curvilinearly related to scientific productivity, which reaches a peak around ages 40 to 45 and then drops off (eg. Bayer & Dutton, 1977; Cole, 1979; Dennis, 1956; Horner, Rushton, & Vernon, 1986; Kyvik, 1990; Over, 1982). This pattern was replicated in cross-sectional (Bayer & Dutton, 1977; Cole, 1979; Kyvik, 1990) and longitudinal or cross-sequential studies (Dennis, 1956; Over, 1982; Horner et al 1986) conducted in the United States (Bayer & Dutton, 1977; Cole, 1979; Horner et al 1986) and Europe (Dennis, 1956; Kyvik, 1990; Over, 1982). However, not all disciplines showed this pattern (Levin & Stephan, 1989). But the only discipline in which a discrepant pattern has been replicated repeatedly is mathematics. Several studies of samples of mathematicians resulted in a linear relationship, with neither an increase nor a decline in productivity (Cole, 1979; Stern, 1978).

Three examples of studies suffice to illustrate the typical patterns found in this research area. In one of the most extensive cross-sectional studies, Cole (1979) compared the publication rates in the years 1965–1969 of 2,460 scientists from six different disciplines, including psychology. Figure 2 presents the overall productivity for the six fields combined, as well as the overall citation rate. As the figure indicates, age is curvilinearly related to both productivity and citations. Overall, the rates for productivity and citations peaked around age 40 and then dropped off. This relationship was valid for all disciplines, except for mathematics, for which the relationship was linear, “supporting the conclusion that productivity does not differ significantly with age” (Cole, 1979, p. 965). Cole thus replicated the findings of Stern (1978), who concluded from her cross-sectional study that “the notion that younger mathematicians are, as it were, ‘physiologically’ more able to produce papers would appear to be in error. In general, we can state categorically that age explains very little, if anything, about productivity” (p. 134). Two cross-sequential studies of psychologists were conducted by Over (1982) and Horner et al 1986. Over (1982) analyzed the relationship between age and productivity of a small sample of British psychologists ranging in age from 26 to 65 years. These individuals were assessed twice, once in 1968–1970 and a second time in 1978–1980. British psychologists in general published as frequently in 1978–1980 as in 1968–1970 (ie. there was no period effect). However, both the cross-sectional and the longitudinal analyses indicated that psychologists over 45 years of age published significantly less frequently than their younger colleagues. The publication rates correlated .49 across the two times of measurement, indicating substantial stability of individual productivity. Over (1982) concluded that “a person’s previous research productivity was a far better predictor of subsequent research output than age was” (p. 519). Another cross-sequential analysis on scientific productivity was based on 1,084 American academic psychologists and was conducted by Horner et al 1986. Both the cross-sectional and the longitudinal analyses resulted in a curvilinear relationship between age and productivity. On average, the productivity at ages 35 to 44 was significantly higher than the productivity at younger and older ages. Again, the correlations between an individual’s number of publications at different periods indicated a great deal of stability. Finally, age accounted on average for only 6.9% of the variance across time (more for low than for high publishers).

The findings of these early studies allow four conclusions: (a) The overwhelming majority of studies reported an age-related decline in productivity (indicated by number of articles published), and most studies found the association to be curvilinear, with a peak somewhere around the ages of 40 to 45 years. (b) Even though there was a curvilinear relationship between age and productivity, age accounted for less than 8% of the variance in productivity. In mathematics, the relationship between age and productivity even appears to be linear, with age being unrelated to productivity. (c) In contrast, past performance was by far the best predictor of future productivity. As Simonton (2002) estimated, “Between one third to two thirds of the variance in productivity in any given period may be predicted from the individual difference observed in the previous period” (p. 86). (d) Finally, even if older researchers are somewhat less productive than their younger colleagues, the quality of their work (as reflected by citations) appears to be no less high. Over (1988) correlated the number of citations each article published in Psychological Review had received in the first five years after publication with the age of the article’s author and found that the correlation was not significantly different from zero. Similar findings were reported by Simonton (1985) in a study of the impact of the publications of 10 psychologists who had received the APA’s Award for Distinguished Scientific Contributions. He found that the ratio of high-impact publications to total output fluctuated randomly throughout their careers.

Although a recent longitudinal analysis of the association of age and productivity for 112 eminent members of the U.S. National Academy of Sciences also resulted in a nonlinear relationship (Feist, 2006), this relationship was different from that reported in most earlier studies. Three unconditional growth curve models were constructed. The best fit to the data was achieved with a cubic model, providing “population estimates on productivity that increase rapidly until approximately 20 years into one’s career, then flatten over the next 15 years, and then rise again over the last 5-year interval” (Feist, 2006, p. 29). Because these individuals started publishing their first articles between 22 and 25 years of age, they would have reached their first peak around age 45. After a 15-year leveling-off period, their productivity would increase again after age 60. A somewhat different pattern was reported by Joy (2006), who examined the publication data of 1,216 faculty members from 96 schools ranging from elite research universities to minor undergraduate colleges. Data were collected in 2004. Figure 3 presents the mean number of publications per year by career age (ie. years since receiving the PhD) of full-time faculty members at three homogeneous subgroups of institutions. In the context of the focus of this article, I restrict myself to discussing the data for the 399 faculty members associated with research universities (eg. Princeton University, the University of Massachusetts at Amherst, Northeastern University). These academics published more during the first five years of their careers than in later years; their productivity remained essentially stable for the next 25 years, with perhaps a slight increase between the 26th and 30th years of their careers. Thus, the data for faculty members at research universities (or for those at other institutions) failed to show the pattern reported in earlier studies, in which productivity reached a peak around ages 40 to 45 and then dropped off (Bayer & Dutton, 1977; Cole, 1979; Dennis, 1956; Horner et al 1986).

This study was based on 6,388 professors and researchers who had published at least one journal article over the eight-year period 2000–2007. The study used 10-year age categories, ranging from age 20 to age 70. Two different sets of data were used in compiling average productivity, namely, the average productivity of all professors and that of active professors who had published at least one journal article at the age in question. Although the association between age and productivity was curvilinear for both samples, only the total sample showed a decline after age 50. For the active professors, productivity increased to age 50 and then stayed at the same level until age 70. (There were too few older professors to extend the study beyond age 70.) Thus, these active professors sustained their productivity at a high level throughout their careers. There was also no decline in quality for the group of active professors. In fact, the average number of articles they published in high-impact journals (ie. the top 1% cited journals) rose steadily to age 70, and so did the average number of articles that were among the top 10% of highly cited articles. The findings of Gingras et al 2008 are discrepant with practically all of the early research. Given that, as noted above, the province of Quebec had already abolished compulsory retirement in 1980, this change would offer a plausible explanation for the fact that productivity did not decline for the older age group.

“Creative careers: the life cycles of Nobel laureates in economics”, Weinberg & Galenson 2005

This paper studies life cycle creativity among Nobel laureate economists. We identify two distinct life cycles of scholarly creativity. Experimental innovators work inductively, accumulating knowledge from experience. Conceptual innovators work deductively, applying abstract principles. We find that conceptual innovators do their most important work earlier in their careers than experimental laureates. For instance, our estimates imply that the probability that the most conceptual laureate publishes his single best work peaks at age 25 compared to the mid-50s for the most experimental laureate. Thus while experience benefits experimental innovators, newness to a field benefits conceptual innovators.

We measure the importance of work using citations. Citations were collected from the Web of Science, an on-line database comprising the Social Science Citation Index, the Science Citation Index, and the Arts and Humanities Citation Index. We collected the number of citations to all works in each year of each laureate’s career made between 1980 and 1999 inclusive. These data on citations to the works each laureate published in each year of his career are our units of analysis. For the purpose of the empirical analysis, laureates are included in our sample from the time they received their doctorate or from the time of their first cited publication if it preceded their doctorate or if they never earned a doctorate.

The importance of scholars depends primarily on their most important contributions. We use two methods to identify the years in which the laureates made important contributions. One method is to identify all years in which citations are above a threshold. To do this, we first estimate the mean and standard deviation of each laureate’s annual citations. We define years in which a laureate’s citations were at least two standard deviations above his mean to be his two standard deviation peaks. To estimate the year in which each laureate made his single most important contribution, we also consider the single year with the most citations for each laureate. We refer to this year as the laureate’s single best year.
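
Their two peak definitions are straightforward to express in code. The sketch below uses an invented citation series, and assumes a population standard deviation for simplicity (the paper does not specify these details here):

```python
# "Two standard deviation peaks": years whose citations are >= 2 SDs above that
# laureate's own mean; the "single best year" is simply the most-cited year.
from statistics import mean, pstdev

def peak_years(citations: dict[int, int]) -> tuple[list[int], int]:
    mu, sd = mean(citations.values()), pstdev(citations.values())
    two_sd = [yr for yr, c in citations.items() if c >= mu + 2 * sd]
    best = max(citations, key=citations.get)
    return two_sd, best

# Hypothetical career: annual citation counts to the works published each year.
fake_career = {1950 + t: c for t, c in enumerate([3, 5, 8, 40, 12, 9, 7, 6, 90, 11])}
print(peak_years(fake_career))   # -> ([1958], 1958)
```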

…Given the range of our index of 201, the implied difference in mean age of important contributions between the most experimental and most conceptual laureates is 20.5 years. The second column shows analogous results for the single best years. Here each laureate appears exactly one time and Age_ij denotes the age at which laureate i had his single best year. For the single best years, a 1 point increase in the index corresponds to a .113 year reduction in the mean age. Given the range of our index, the implied difference in mean ages of the single best years between the most experimental and most conceptual laureates is 22.7 years.
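
As a quick check on these figures: the index spans 201 points, so 201 × 0.113 ≈ 22.7 years for the single best years; the per-point slope for the two standard deviation peaks is not quoted above, but the 20.5-year spread implies roughly 20.5 / 201 ≈ 0.10 years per index point.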

…The most conceptual laureate’s probability of a two standard deviation peak is 15% in the first year of the career and it reaches a peak at age 28.8 years. For the most experimental laureate, the probability of a two standard deviation peak is less than half a percent at the beginning of the career, reaching a peak at age 56.9, close to double the age of the most conceptual laureate. By comparison, the mean laureate’s profile peaks at age 47.1.

The profiles for the single best years are beneath those for the two standard deviation peaks because there are fewer single best years than two standard deviation peaks. There is little difference in the shape of the profiles between the two standard deviation peaks and single best years for the most experimental laureates - both peak in the mid 50s. For the most conceptual laureate, the probability of a single best year is close to that of an important year at the beginning of the career, but increases less before dropping. For the most conceptual laureate, the probability of a single best year peaks at age 24.8.

“Age and Outstanding Achievement: What Do We Know After a Century of Research?”, Simonton 1988

One empirical generalization appears to be fairly secure: If one plots creative output as a function of age, productivity tends to rise fairly rapidly to a definite peak and thereafter decline gradually until output is about half the rate at the peak (see, eg. S. Cole, 1979; Dennis, 1956b, 1966; Lehman, 1953a; but see Diamond, 1986). In crude terms, if one tabulates the number of contributions (eg. publications, paintings, compositions) per time unit, the resulting longitudinal fluctuations may be described by an inverted backward-J curve (Simonton, 1977a). Expressed more mathematically, productive output, say p(t), over a career tends to be roughly approximated by a second-order polynomial of the form

p(t) = b1 + b2t + b3t^2 (1)

…In applying this equation, the independent variable, t, is not chronological age but rather career or professional age, where t = 0 at the onset of the career (see Bayer & Dutton, 1977; Lyons, 1968). However, in practice, chronological age is often used in lieu of career age, a substitution justified by their high correlation (eg. r = .87, according to Bayer & Dutton, 1977). …beyond a certain value of t, the predicted level of productivity becomes negative, a meaningless outcome if output is gauged by single contributions or items. Instead, the curve tends to approach the zero productivity rate more or less asymptotically, a tendency that implies that a third-order polynomial in time may fit the data more precisely (Simonton, 1984a). The addition of further terms would also serve to remove another fault of a simple quadratic, namely, that it implies that the pre- and postpeak slopes are roughly equal, which is seldom true in fact (cf. Diemer, 1974).
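
To see the shape Equation 1 implies (and the negative-productivity artifact just mentioned), one can evaluate the quadratic directly. The coefficients below are invented purely to produce a peak about 20 career-years in and an end-of-career rate about half the peak, roughly the pattern described above; they are not estimates from any study.

```python
# Evaluating the quadratic approximation p(t) = b1 + b2*t + b3*t^2 (Equation 1)
# with illustrative coefficients; note that the parabola eventually goes negative,
# which is why a cubic or asymptotic form is suggested for late careers.

def p(t, b1=5.0, b2=0.5, b3=-0.0125):
    """Predicted output rate at career age t."""
    return b1 + b2 * t + b3 * t**2

for t in (0, 10, 20, 30, 40, 50):
    print(f"career age {t:>2}: predicted output {p(t):6.1f}")
# peak of 10.0 at t=20, back to 5.0 (half the peak) at t=40, negative by t=50
```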

At one extreme, some fields are characterized by relatively early peaks, usually around the early 30s or even late 20s in chronological units, with somewhat steep descents thereafter, so that the output rate becomes less than one-quarter the maximum. This agewise pattern apparently holds for such endeavors as lyric poetry, pure mathematics, and theoretical physics, for example (Adams, 1946; Dennis, 1966; Lehman, 1953a; Moulin, 1955; Roe, 1972b; Simonton, 1975a; Van Heeringen & Dijkwel, 1987). At the contrary extreme, the typical trends in other endeavors may display a leisurely rise to a comparatively late peak, in the late 40s or even 50s chronologically, with a minimal if not largely absent drop-off afterward. This more elongated curve holds for such domains as novel writing, history, philosophy, medicine, and general scholarship, for instance (Adams, 1946; Richard A. Davis, 1987; Dennis, 1966; Lehman, 1953a; Simonton, 1975a). Of course, many disciplines exhibit age curves somewhat between these two outer limits, with a maximum output rate around chronological age 40 and a notable yet moderate decline thereafter (see, eg. Fulton & Trow, 1974; Hermann, 1988; McDowell, 1982; Zhao & Jiang, 1986). Output in the last years appears at about half the rate observed in the peak years. Productive contributions in psychology, as an example, tend to adopt this temporal pattern (Horner et al 1986; Lehman, 1953b; Over, 1982a, 1982b; Zusne, 1976).

It must be stressed that these interdisciplinary contrasts do not appear to be arbitrary but instead have been shown to be invariant across different cultures and distinct historical periods (Lehman, 1962). As a case in point, the gap between the expected peaks for poets and prose authors has been found in every major literary tradition throughout the world and for both living and dead languages (Simonton, 1975a). Indeed, because an earlier productive optimum means that a writer can die younger without loss to his or her ultimate reputation, poets exhibit a life expectancy, across the globe and through history, about a half dozen years less than prose writers do (Simonton, 1975a). This cross-cultural and transhistorical invariance strongly suggests that the age curves reflect underlying psychological universals rather than arbitrary sociocultural determinants.

Individual differences in lifetime output are substantial (Simonton, 1984b, chap. 5; 1988b, chap. 4). So skewed is the cross-sectional distribution of total contributions that a small percentage of the workers in any given domain is responsible for the bulk of the work. Generally, the top 10% of the most prolific elite can be credited with around 50% of all contributions, whereas the bottom 50% of the least productive workers can claim only 15% of the total work, and the most productive contributor is usually about 100 times more prolific than the least (Dennis, 1954b, 1955; also see Lotka, 1926; Price, 1963, chap. 2).

Now from a purely logical perspective, there are three distinct ways of achieving an impressive lifetime output that enables a creator to dominate an artistic or scientific enterprise. First, the individual may exhibit exceptional precocity, beginning contributions at an uncommonly early age. Second, the individual may attain a notable lifetime total by producing until quite late in life, and thereby display productive longevity. Third, the individual may boast phenomenal output rates throughout a career, without regard to the career’s onset and termination. These three components are mathematically distinct and so may have almost any arbitrary correlation whatsoever with each other, whether positive, negative, or zero, without altering their respective contributions to total productivity. In precise terms, it is clear that O = R(L − P), where O is lifetime output, R is the mean rate of output throughout the career, L is the age at which the career ended (longevity), and P is the age at which the career began (precocity). The correlations among these three variables may adopt a wide range of arbitrary values without violating this identity. For example, the difference L - P, which defines the length of a career, may be more or less constant, mandating that lifetime output results largely from the average output rate R, given that those who begin earlier, end earlier, and those who begin later, end later. Or output rates may be more or less constant, forcing the final score to be a function solely of precocity and longevity, either singly or in conjunction. In short, R, L, and P, or output rate, longevity, and precocity, comprise largely orthogonal components of O, the gauge of total contributions.

When we turn to actual empirical data, we can observe two points. First, as might be expected, precocity, longevity, and output rate are each strongly associated with final lifetime output, that is, those who generate the most contributions at the end of a career also tend to have begun their careers at earlier ages, ended their careers at later ages, and produced at extraordinary rates throughout their careers (eg. Albert, 1975; Blackburn et al 1978; Bloom, 1963; Clemente, 1973; S. Cole, 1979; Richard A. Davis, 1987; Dennis, 1954a, 1954b; Helson & Crutchfield, 1970; Lehman, 1953a; Over, 1982a, 1982b; Raskin, 1936; Roe, 1965, 1972a, 1972b; Segal, Busse, & Mansfield, 1980; R. J. Simon, 1974; Simonton, 1977c; Zhao & Jiang, 1986). Second, these three components are conspicuously linked with each other: Those who are precocious also tend to display longevity, and both precocity and longevity are positively associated with high output rates per age unit (Blackburn et al 1978; Dennis, 1954a, 1954b, 1956b; Horner et al 1986; Lehman, 1953a, 1958; Lyons, 1968; Roe, 1952; Simonton, 1977c; Zuckerman, 1977). The relation between longevity and precocity becomes particularly evident when care is first taken to control for the impact of differential life span (Dennis, 1954b). Because those who are very prolific at a precocious age can afford to die young and still end up with a respectable lifetime output, a negative relation emerges between precocity and life span, necessitating that careers be equalized on life span before the correlation coefficients are calculated (Simonton, 1977c; Zhao & Jiang, 1986).
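
The identity is trivial to compute, but writing it out makes the orthogonality point clearer: the same lifetime total can come from very different mixes of precocity, longevity, and rate. The careers below are hypothetical.

```python
# The identity O = R * (L - P) from the passage above: lifetime output equals mean
# annual rate times career length, for a few invented career profiles.

def lifetime_output(rate: float, career_end: float, career_start: float) -> float:
    """O = R * (L - P)."""
    return rate * (career_end - career_start)

careers = {
    "precocious, short-lived": dict(rate=3.0, career_start=22, career_end=45),
    "late starter, long career": dict(rate=3.0, career_start=35, career_end=75),
    "ordinary": dict(rate=1.5, career_start=28, career_end=65),
}
for name, c in careers.items():
    o = lifetime_output(c["rate"], c["career_end"], c["career_start"])
    print(f"{name:>26}: O = {o:.0f} works")
```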

When Lehman (1953a) compared tabulations of superior contributions in a wide range of creative activities against those for works of lesser merit, he concluded that the age curves obtained were indeed contingent on the quality criterion utilized in constructing the counts. For the most part, the peak productive age tended to stay relatively stable, only the peak was far more pronounced when only exceptional works were tabulated (see also Lehman, 1958, 1966a). In contrast, when the standards of excellence were loosened, the age curves flattened out appreciably, and the postpeak decline was much less conspicuous. This generalization was largely replicated by Dennis (1966).

When such precautions are taken, very different results emerge (Simonton, 1977a, 1984b, chap. 6, 1985b, 1988b, chap. 4). First, if one calculates the age curves separately for major and minor works within careers, the resulting functions are basically identical. Both follow the same second-order polynomial (as seen in Equation 1), with roughly equal parameters. Second, if the overall age trend is removed from the within-career tabulations of both quantity and quality, minor and major contributions still fluctuate together. Those periods in a creator’s life that see the most masterpieces also witness the greatest number of easily forgotten productions, on the average. Another way of saying the same thing is to note that the “quality ratio,” or the proportion of major products to total output per age unit, tends to fluctuate randomly over the course of any career. The quality ratio neither increases nor decreases with age nor does it assume some curvilinear form. These outcomes are valid for both artistic (eg. Simonton, 1977a) and scientific (eg. Simonton, 1985b) modes of creative contribution (see also Alpaugh, Renner, & Birren, 1976, p. 28). What these two results signify is that if we select the contribution rather than the age period as the unit of analysis, then age becomes irrelevant to determining the success of a particular contribution. For instance, the number of citations received by a single scientific article is not contingent upon the age of the researcher (Oromaner, 1977).

The longitudinal linkage between quantity and quality can be subsumed under the more general “constant-probability-of-success model” of creative output (Simonton, 1977a, 1984b, 1985b, 1988b, chap. 4). According to this hypothesis, creativity is a probabilistic consequence of productivity, a relationship that holds both within and across careers. Within single careers, the count of major works per age period will be a positive function of total works generated each period, yielding a quality ratio that exhibits no systematic developmental trends. And across careers, those individual creators who are the most productive will also tend, on the average, to be the most creative: Individual variation in quantity is positively associated with variation in quality.

There is abundant evidence for the application of the constant-probability-of-success model to cross-sectional contrasts in quantity and quality of output (Richard A. Davis, 1987; Simonton, 1984b, chap. 6; 1985b, 1988b, chap. 4). In the sciences, for example, the reputation of a nineteenth-century scientist in the twentieth century, as judged by entries in standard reference works, is positively correlated with the total number of publications that can be claimed (Dennis, 1954a; Simonton, 1981a; see also Dennis, 1954c). Similarly, the number of citations a scientist receives, which is a key indicator of achievement, is a positive function of total publications (Crandall, 1978; Richard A. Davis, 1987; Myers, 1970; Rushton, 1984), and total productivity even correlates positively with the citations earned by a scientist’s three best publications (J. R. Cole & S. Cole, 1973, chap. 4). Needless to say, the correlations between quantity and quality are far from perfect for either longitudinal or cross-sectional data.
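
A toy version of the constant-probability-of-success model makes the claim concrete: give every work the same fixed chance of becoming a “major” work, independent of age, and periods (or careers) with more output mechanically contain more hits while the quality ratio merely fluctuates around that constant. The hit probability and per-period output counts below are assumptions for illustration.

```python
# Constant-probability-of-success, sketched: each work is a "major" work with the
# same fixed probability q regardless of the author's age, so hit counts track raw
# output and the quality ratio shows no age trend (only sampling noise).
import random

random.seed(0)
q = 0.1                                                          # per-work hit probability
output_by_age = {30: 8, 35: 12, 40: 15, 45: 12, 50: 8, 55: 5}    # works per period (invented)

for age, n in output_by_age.items():
    hits = sum(random.random() < q for _ in range(n))
    print(f"age {age}: {n:>2} works, {hits} major, quality ratio {hits / n:.2f}")
```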

Lastly, a long rule means an abundance of events from which we can construct performance indicators (Simonton, 1984d). To illustrate these potential assets, an inquiry was made into the careers of 25 European kings and queens from over a dozen nations–such as Queen Elizabeth I, Frederick the Great, Ivan the Terrible, and Suleiman the Magnificent–that found that most objective performance indicators either decline with age or else exhibit a curvilinear inverted-U function that maximized at the 42nd year of life, this latter curve holding for some measures of military and diplomatic success (Simonton, 1984c). What made this study sensitive to longitudinal changes was the fact that none of the sampled leaders ruled fewer than 36 years, and the average reign length was 43 years, giving career durations more comparable to those found in the research on distinguished creativity.

An analogous age gap appears between revolutionaries and leaders of long-established political institutions. Although as many as half of the notable revolutionaries were younger than 35 (Rejai & Phillips, 1979), very few of the world’s political leaders attained power before age 40 (Blondel, 1980). Indeed, just as poets can die younger than prose writers and still achieve a durable reputation (Simonton, 1975a), so the predominant youthfulness of revolutionaries betrays itself in a lower life expectancy. In the Cox (1926) sample of 301 geniuses, who had an overall life span mean of 66 years, the revolutionaries averaged only 51 years, not one living to be 80 and more than 44% dying prior to age 50. These figures contrast dramatically with the statesmen in Cox’s sample who operated under more status quo conditions; their life expectancy was 70, only about 5% lived fewer than 50 years, and fully 30% survived to their 80th birthday. Furthermore, these results are enlarged by the finding that as political institutions mature, the age of their leaders increases as well (Lehman, 1953a, chap. 17). In the United States, for example, members of the House of Representatives and the Senate, House speakers, cabinet officers, Supreme Court justices, ambassadors, and army commanders have all gotten older and older since the nation’s founding, trends that cannot be explained by corresponding enhancements in general life expectancy (see also Simonton, 1985c, 1987d, chap. 4). Indeed, transhistorical data have consistently shown that the mean life span has not significantly changed over the centuries but rather has stayed close to around 65 years (see, eg. Simonton, 1975a, 1977c; Zhao & Jiang, 1986), a figure close to the “three-score years and ten” said by Solon to be the normal term of a human life way back in ancient Greece.

Whatever the specific precautions taken, once the intrusion of the compositional fallacy has been denied, the empirical results discussed earlier in this review yet persist, albeit the decline may not be so pronounced as it sometimes looks in many published data. The location of the age peak is singularly immune from this consideration, and for good cause (Lehman, 1962). The number of individuals who died before they would be expected to reach their peak age for achievement is quite small (Zhao & Jiang, 1986; cf. Bullough et al 1978). Only 11% of Cox’s (1926) sample failed to attain the 50th year, which comes about a decade after the expected peak for most activities. To be sure, poets die young, yet their age optimum is correspondingly younger. And even if the peak age for leadership sometimes occurs after the 50th year, the life expectancy of leaders is older in rough proportion. Although the concern of most researchers has been on how the compositional fallacy may introduce an artifactual decline, it is clear that it may impede accurate inferences in other ways as well. Most notably, those studies mentioned earlier that claim to have divulged saddle-shaped age functions may actually have failed to segregate distinctive achievement domains that harbor discrepant peaks (Simonton, 1984a). For example, if achievement in pure mathematics peaks at an earlier age than that in applied mathematics, then aggregating across both types of contributions will perforce generate a double-peak age curve (cf. Dennis, 1966). Hence, the errors of aggregation can be very pervasive.

Many investigators pinpointed a decline in intellectual power in the later years of life (or at least a drop in “fluid” as opposed to “crystallized” intelligence) (eg. Horn, 1982), and others reported single-peak functions and negative age slopes for certain creativity measures as well (Alpaugh & Birren, 1977; Alpaugh, Parham, Cole, & Birren, 1982; Bromley, 1956; Cornelius & Caspi, 1987; Eisenman, 1970; McCrae et al 1987; Ruth & Birren, 1985; cf. Jaquish & Ripple, 1981). Yet the defensiveness noted twice earlier in this essay may have provoked the debate that followed these published results, a controversy about whether the decreases with age were real or simply reflected some pernicious age bias. Some of the issues in this debate were the same recurrent methodological questions that plague life span developmental research, especially the potential artifact introduced by depending on cross-sectional data when inferring longitudinal trends (Kogan, 1973; Romanuik & Romanuik, 1981; Schaie & Strother, 1968).

To begin with, even if a minimal level of intelligence is requisite for achievement, beyond a threshold of around IQ 120 (the actual amount varying across fields), intellectual prowess becomes largely irrelevant in predicting individual differences in either creativity or leadership (Simonton, 1985a).

The specific relation between age and outstanding achievement is by no means a purely academic issue. Yuasa (1974) argued that by the year 2000 a decline in science in the United States is inevitable because of the shifting age structure of American scientists (see also Oromaner, 1981). More specifically, when the mean age of U.S. scientists attains the 50th year, the United States will soon be replaced by some other nation as the center of scientific activity (Zhao & Jiang, 1985). An analogous “Yuasa phenomenon” may attend achievement in other domains as well. Yet this forecast is predicated on the notion that the slope of the age function is negative after some peak in the late 30s or early 40s. Despite the considerable empirical and theoretical corroboration this postulate possesses, more documentation is necessary before this prognosis of doom (for U.S. citizens anxious for Nobel prizes) projects full force. It bears repeating that the age structure of American society very much hinges on the baby boomers, and this generation is only about a decade away from the critical age when the United States may witness itself supplanted by some upstart nation. Admittedly, enhanced knowledge may give us no means to reverse inexorable historical trends (cf. Alpaugh et al 1976), yet we can at least have the consolation of understanding why the locus of outstanding achievement strayed from our own shores (Simonton, 1984b, chap. 10).


  1. An example: Singapore’s government reportedly expects half the population to hold at least a bachelor’s degree by 2020. This is surely doable, just as the US could once expect half its population to graduate from high school, or as industrializing countries can shift their proletariat to the cities & factories. But this is a trick you can do only once! You can’t have half your population join the first half in the cities, leaving 1% to handle the mechanized agriculture - for tremendous economic growth - and then have another half move into the cities to keep the growth going, because there is no third half. Similarly for degrees. Bachelor’s, perhaps; master’s, maybe; but PhD? Not with existing populations or gene pools.↩︎

  2. One estimate of the increase, from Jones 2006, is a factor of 19x.↩︎

  3. “Trials and Errors: Why Science Is Failing Us”, Wired 2011↩︎

  4. “The Truly Staggering Cost Of Inventing New Drugs”, 2012↩︎

  5. “Inside Pfizer’s palace coup”, Fortune↩︎

  6. “Drugs That Are as Smart as Our Diseases”, WSJ↩︎