
Spaced Repetition for Efficient Learning

Efficient memorization using the spacing effect: literature review of widespread applicability, tips on use & what it’s good for.

Spaced repetition is a centuries-old psychological technique for efficient memorization & practice of skills: instead of attempting to memorize by ‘cramming’, one spaces out each review, with the intervals increasing as one learns the item and the scheduling handled by software. Because of the greater efficiency of its slow but steady approach, spaced repetition can scale to memorizing hundreds of thousands of items (while crammed items are almost immediately forgotten) and is especially useful for foreign languages & medical studies.

I review what this technique is useful for, some of the large research literature on it and the testing effect (up to ~2013, primarily), the available software tools and use patterns, and miscellaneous ideas & observations on it.

One of the most fruitful areas of computing is making up for human frailties. They do arithmetic perfectly because we can’t1. They remember terabytes because we’d forget. They make the best calendars, because they always check what there is to do today. Even if we do not remember exactly, merely remembering a reference can be just as good, like the point of reading a manual or textbook all the way through: it is not to remember everything that is in it for later but to later remember that something is in it (and skimming them, you learn the right words to search for when you actually need to know more about a particular topic).

We use any number of such neuroprosthetics2, but there are always more to be discovered. They’re worth looking for because they are so valuable: a shovel is much more effective than your hand, but a power shovel is orders of magnitude better than both - even if it requires training and expertise to use.

Spacing Effect

You can get a good deal from rehearsal,
If it just has the proper dispersal.
You would just be an ass,
To do it en masse,
Your remembering would turn out much worsal.

Ulrich Neisser3

My current favorite prosthesis is the class of software that exploits the spacing effect, a centuries-old observation in cognitive psychology, to achieve results in studying or memorization much better than conventional student techniques; it is, alas, obscure4.

The spacing effect essentially says that if you have a question (“What is the fifth letter in this random sequence you learned?”), and you can only study it, say, 5 times, then your memory of the answer (‘e’) will be strongest if you spread your 5 tries out over a long period of time - days, weeks, and months. One of the worst things you can do is blow your 5 tries within a day or two. You can think of the ‘forgetting curve’ as being like a chart of a radioactive half-life: each review bumps your memory up in strength 50% of the chart, say, but review doesn’t do much in the early days because the memory simply hasn’t decayed much! (Why does the spacing effect work, on a biological level? There are clear neurochemical differences between massed and spaced in animal models with spacing (>1 hour) enhancing long-term potentiation but not massed5, but the why and wherefore - that’s an open question; see the concept of memory traces or the sleep studies.) A graphical representation of the forgetting curve:

Stahl et al 2010; CNS Spectrums
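To make the half-life analogy concrete, here is a toy model in Haskell (an illustration only, not the actual scheduling algorithm of SuperMemo, Anki, or Mnemosyne; the 2-day starting half-life and 2.5× boost per review are arbitrary numbers): recall probability decays exponentially with a half-life, and each successful review multiplies that half-life.

-- Toy forgetting-curve model (illustration only; not any SRS's real algorithm):
-- recall probability decays exponentially, and each successful review multiplies
-- the memory's half-life ('stability', in days) by an arbitrary factor of 2.5.
recallProbability :: Double -> Double -> Double
recallProbability halfLife daysSinceReview = 0.5 ** (daysSinceReview / halfLife)

reviewBoost :: Double -> Double
reviewBoost halfLife = halfLife * 2.5

-- A memory left alone with a 2-day half-life vs. one whose half-life has grown to
-- ~31 days after 3 spaced reviews, both probed 30 days after the last review:
crammed, spaced :: Double
crammed = recallProbability 2 30                            -- ~0.00003
spaced  = recallProbability (iterate reviewBoost 2 !! 3) 30 -- ~0.51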

Even better, it’s known that active recall is a far superior method of learning compared to simply being passively exposed to information.6 Spacing also scales to huge quantities of information; gambler/financier Edward O. Thorp harnessed “spaced learning” when he was a physics grad student “in order to be able to work longer and harder”7, and Roger Craig set multiple records on the quiz show Jeopardy! 2010–2011 in part thanks to using Anki to memorize chunks of a collection of >200,000 past questions8; a later Jeopardy winner, Arthur Chu, also used spaced repetition9. Med school students (who have become a major demographic for SRS due to the extremely large amounts of factual material they are expected to memorize during medical school) usually have thousands of cards, especially if using pre-made decks (more feasible for medicine due to fairly standardized curriculums & general lack of time to make custom cards). Foreign-language learners can easily reach 10-30,000 cards; one Anki user reports a deck of >765k automatically-generated cards filled with Japanese audio samples from many sources (“Youtube videos, video games, TV shows, etc”).

A graphic might help; imagine here one can afford to review a given piece of information a few times (one is a busy person). By looking at the odds we can remember the item, we can see that cramming wins in the short term, but unexercised memories decay so fast that after not too long spacing is much superior:

Wired (original, Wozniak?); massed vs spaced (alternative)

It’s more dramatic if we look at a video visualizing decay of a corpus of memory with random review vs most-recent review vs spaced review.

If You’re so Good, Why Aren’t You Rich

Most people find the concept of programming obvious, but the doing impossible.10

Of course, the latter strategy (cramming) is precisely what students do. They cram the night before the test, and a month later can’t remember anything. So why do people do it? (I’m not innocent myself.) Why is spaced repetition so dreadfully unpopular, even among the people who try it once?11

Scumbag Brain meme: knows everything when cramming the night before the test / and forgets everything a month later

Because it does work. Sort of. Cramming is a trade-off: you trade a strong memory now for weak memory later. (Very weak12.) And tests are usually of all the new material, with occasional old questions, so this strategy pays off! That’s the damnable thing about it - its memory longevity & quality are, in sum, less than that of spaced repetition, but cramming delivers its goods now13. So cramming is a rational, if short-sighted, response, and even SRS software recognizes its utility & supports it to some degree14. (But as one might expect, if the testing is continuous and incremental, then the learning tends to also be long-lived15; I do not know if this is because that kind of testing is a disguised accidental spaced repetition system, or because the students/subjects simply study/act differently in response to small-stakes exams.) In addition to this short-term advantage, there’s an ignorance of the advantages of spacing and a subjective illusion that the gains persist1617 (cf. Son & Simon 201218, Mulligan & Peterson 2014, Bjork et al 2013, Deslauriers et al 2019); from Kornell 2009’s study of GRE vocab (emphasis added):

Across experiments, spacing was more effective than massing for 90% of the participants, yet after the first study session, 72% of the participants believed that massing had been more effective than spacing….When they do consider spacing, they often exhibit the illusion that massed study is more effective than spaced study, even when the reverse is true (Dunlosky & Nelson, 1994; Kornell & Bjork, 2008a; Simon & Bjork, 2001; Zechmeister & Shaughnessy, 1980).

As one would expect if the testing and spacing effects are real things, students who naturally test themselves and study well in advance of exams tend to have higher GPAs.19 If we interpret questions as tests, we are not surprised to see that 1-on-1 tutoring works dramatically better than regular teaching and that tutored students answer orders of magnitude more questions20.

This short-term perspective is not a good thing in the long term, of course. Knowledge builds on knowledge; one is not learning independent bits of trivia. Richard Hamming recalls in “You and Your Research” that “You observe that most great scientists have tremendous drive….Knowledge and productivity are like compound interest.”

Knowledge needs to accumulate, and flashcards with spaced repetition can aid in just that accumulation, fostering steady review even as the number of cards and intellectual prerequisites mounts into the thousands.

This long term focus may explain why explicit spaced repetition is an uncommon studying technique: the pay-off is distant & counterintuitive, the cost of self-control near & vivid. (See hyperbolic discounting.) It doesn’t help that it’s pretty difficult to figure out when one should review - the optimal point is when you’re just about to forget about it, but that’s the kicker: if you’re just about to forget about it, how are you supposed to remember to review it? You only remember to review what you remember, and what you already remember isn’t what you need to review!21

The paradox is resolved by letting a computer handle all the calculations. We can thank Hermann Ebbinghaus for investigating in such tedious detail that we can, in fact, program a computer to calculate both the forgetting curve and optimal set of reviews22. This is the insight behind spaced repetition software: ask the same question over and over, but over increasing spans of time. You start with asking it once every few days, and soon the human remembers it reasonably well. Then you expand intervals out to weeks, then months, and then years. Once the memory is formed and dispatched to long-term memory, it needs but occasional exercise to remain hale and hearty23 - I remember well the large dinosaurs made of cardboard for my 4th or 5th birthday, or the tunnel made out of boxes, even though I recollect them once or twice a year at most.
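The scheduling itself is simple enough to sketch in a few lines of Haskell. This is a minimal toy loosely in the SM-2 family, not the exact algorithm of SuperMemo, Anki, or Mnemosyne (which adjust the ease factor per card, add fuzz, and handle grades and lapses in more detail); the starting interval of 1 day and ease of 2.5 are conventional but arbitrary here:

-- Minimal sketch of an expanding-interval scheduler, loosely SM-2-flavored.
data Card = Card { interval :: Double  -- days until the next review
                 , ease     :: Double  -- multiplier applied after a successful review
                 } deriving Show

review :: Bool -> Card -> Card
review success c
  | success   = c { interval = interval c * ease c }
  | otherwise = c { interval = 1 }  -- forgotten: start over at 1 day

-- Starting from a 1-day interval with ease 2.5, six successes in a row yield
-- intervals of 1, 2.5, 6.25, 15.6, 39, 98, and 244 days:
schedule :: [Double]
schedule = map interval (take 7 (iterate (review True) (Card 1 2.5)))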

Literature Review

But don’t take my word for it - Nullius in verba! We can look at the science. Of course, if you do take my word for it, you probably just want to read about how to use it and all the nifty things you can do, so I suggest you skip all the way down to that section. Everyone else, we start at the beginning:

Background: Testing Works!

“If you read a piece of text through twenty times, you will not learn it by heart so easily as if you read it ten times while attempting to recite from time to time and consulting the text when your memory fails.” –The New Organon, Francis Bacon

The testing effect is the established psychological observation that the mere act of testing someone’s memory will strengthen the memory (regardless of whether there is feedback). Since spaced repetition is just testing on particular days, we ought to establish that testing works better than regular review or study, and that it works outside of memorizing random dates in history. To cover a few papers:

  1. Allen, G.A., Mahler, W.A., & Estes, W.K. (1969). “Effects of recall tests on long-term retention of paired associates”. Journal of Verbal Learning and Verbal Behavior, 8, 463-470

    1 test results in memories as strong a day later as studying 5 times; intervals improve retention compared to massed presentation.

  2. Karpicke & Roediger (2003). “The Critical Importance of Retrieval for Learning”

    In learning Swahili vocabulary, students were given varying routines of testing or studying or testing and studying; this resulted in similar scores during the learning phase. Students were asked to predict what percentage they’d remember (average: 50% over all groups). One week later, the students who tested remembered ~80% of the vocabulary versus ~35% for non-testing students. Some students were tested or studied more than others; diminishing returns set in quickly once the memory had formed the first day. Students reported rarely testing themselves and not testing already learned items.

    Lesson: again, testing improves memory compared to studying. Also, no student knows this.

  3. Roediger & Karpicke (2006a). “Test-Enhanced Learning: Taking Memory Tests Improves Long-Term Retention”

    Students were tested (with no feedback) on reading comprehension of a passage over 5 minutes, 2 days, and 1 week. Studying beat testing over 5 minutes, but nowhere else; students believed studying superior to testing over all intervals. At 1 week, testing scores were ~60% versus ~40%.

    Lesson: testing improves memory compared to studying. Everyone (teachers & students) ‘knows’ the opposite.

  4. Karpicke & Roediger (2006a). “Expanding retrieval promotes short-term retention, but equal interval retrieval enhances long-term retention”

    General scientific prose comprehension; from Roediger & Karpicke 2006b: “After 2 days, initial testing produced better retention than restudying (68% vs. 54%), and an advantage of testing over restudying was also observed after 1 week (56% vs. 42%).”

  5. Roediger & Karpicke (2006b). “The Power of Testing Memory: Basic Research and Implications for Educational Practice”

    Literature review; 7 studies before 1941 demonstrating testing improves retention, and 6 afterwards. See also the reviews “Spacing Learning Events Over Time: What the Research Says” & “Using spacing to enhance diverse forms of learning: Review of recent research and implications for instruction”, Carpenter et al 2012.

  6. Agarwal et al 2008, “Examining the Testing Effect with Open- and Closed-Book Tests”

    As with #2, the purer forms of testing (in this case, open-book versus closed-book testing) did better over the long run, and students were deluded about what worked best.

  7. Bangert-Drowns et al 1991. “Effects of frequent classroom testing”

    Meta-analysis of 35 studies (1929–1989) varying tests during school semesters. 29 found benefits; 5 found negatives; 1 null result. Meta-study found large benefits to testing even once, then diminishing returns.

  8. Cook 2006, “Impact of self-assessment questions and learning styles in Web-based learning: a randomized, controlled, crossover trial”; final scores were higher when the doctors (residents) learned with questions.

  9. Johnson & Kiviniemi 2009, “The Effect of Online Chapter Quizzes on Exam Performance in an Undergraduate Social Psychology Course” (“This study examined the effectiveness of compulsory, mastery-based, weekly reading quizzes as a means of improving exam and course performance. Completion of reading quizzes was related to both better exam and course performance.”); see also McDaniel et al 2012.

  10. Metsämuuronen 2013, “Effect of Repeated Testing on the Development of Secondary Language Proficiency”

  11. Meyer & Logan 2013, “Taking the Testing Effect Beyond the College Freshman: Benefits for Lifelong Learning”; verifies that the testing effect in older adults has a similar effect size as in younger adults

  12. Larsen & Butler 2013, “Test-enhanced learning”

  13. Yang et al 2021, “Testing (Quizzing) Boosts Classroom Learning: A Systematic And Meta–Analytic Review”

(One might be tempted to object that testing works for some learning styles, perhaps verbal styles. This is an unsupported assertion inasmuch as the experimental literature on learning styles is poor and the existing evidence mixed that there are such things as learning styles.24)

Subjects

The above studies often used pairs of words or words themselves. How well does the testing effect generalize?

Materials which benefited from testing:

  • foreign vocabulary (eg. Karpicke & Roediger 2003, Cepeda et al 2009, Fritz et al 200725, de la Rouviere 2012)

  • GRE materials (like vocab, Kornell 2009); prose passages on general scientific topics (Karpicke & Roediger, 2006a; Pashler et al, 2003)

  • trivia (McDaniel & Fisher 1991)

  • elementary & middle school lessons with subjects such as biographical material and science (Gates 1917; Spitzer 193926 and Vlach & Sandhofer 201227, respectively)

  • Agarwal et al 2008: short-answer tests superior on textbook passages

  • history textbooks; retention better with initial short-answer test rather than multiple choice (Nungester & Duchastel 1982)

  • LaPorte & Voss 1975 also found better retention compared to multiple-choice or recognition problems

  • Duchastel & Nungester, 1981: 6 months after testing, testing beat studying in retention of a history passage

  • Duchastel 1981: free recall decisively beat short-answer & multiple choice for reading comprehension of a history passage

  • Glover 1989: free recall self-test beat recognition or Cloze deletions; subject matter was the labels for parts of flowers

  • Kang et al 2007: prose passages; initial short answer testing produced superior results 3 days later on both multiple choice and short answer tests

  • Leeming 2002: tests in 2 psychology courses, introductory & memory/learning; “80% vs. 74% for the introductory psychology course and 89% vs. 80% for the learning and memory course”28

This covers a pretty broad range of what one might call ‘declarative’ knowledge. Extending testing to other fields is more difficult and may reduce to ‘write many frequent analyses, not large ones’ or ‘do lots of small exercises’, whatever those might mean in those fields:

A third issue, which relates to the second, is whether our proposal of testing is really appropriate for courses with complex subject matters, such as the philosophy of Spinoza, Shakespeare’s comedies, or creative writing. Certainly, we agree that most forms of objective testing would be difficult in these sorts of courses, but we do believe the general philosophy of testing (broadly speaking) would hold - students should be continually engaged and challenged by the subject matter, and there should not be merely a midterm and final exam (even if they are essay exams). Students in a course on Spinoza might be assigned specific readings and thought-provoking essay questions to complete every week. This would be a transfer-appropriate form of weekly ‘testing’ (albeit with take-home exams). Continuous testing requires students to continuously engage themselves in a course; they cannot coast until near a midterm exam and a final exam and begin studying only then.29

Downsides

Testing does have some known flaws:

  1. interference in recall - ability to remember tested items drives out ability to remember similar untested items

    Most/all studies were in laboratory settings and found relatively small effects:

    In sum, although various types of recall interference are quite real (and quite interesting) phenomena, we do not believe that they compromise the notion of test-enhanced learning. At worst, interference of this sort might dampen positive testing effects somewhat. However, the positive effects of testing are often so large that in most circumstances they will overwhelm the relatively modest interference effects.

  2. multiple choice tests can accidentally lead to ‘negative suggestion effects’ where having previously seen a falsehood as an item on the test makes one more likely to believe it.

    This is mitigated or eliminated when there’s quick feedback about the right answer (see Butler & Roediger 2008 “Feedback enhances the positive effects and reduces the negative effects of multiple-choice testing”). Solution: don’t use multiple choice; inferior in testing ability to free recall or short answers, anyway.

Neither problem seems major.

Distributed

A lot depends on when you do all your testing. Above we saw some benefits to testing a lot the moment you learn something, but the same number of tests could be spread out over time, to give us the spacing effect or spaced repetition. There are hundreds of studies involving the spacing effect:

Almost unanimously they find spacing out tests is superior to massed testing when the final test/measurement is conducted days or years later30, although the mechanism isn’t clear31. Besides all the previously mentioned studies, we can throw in:

The research literature focuses extensively on the question of what kind of spacing is best and what this implies about memory: a spacing that has static fixed intervals or a spacing which expands? This is important for understanding memory and building models of it, and would be helpful for integrating spaced repetition into classrooms (for example, Kelley & Whatson 2013’s 10 minutes studying / 10 minutes break schedule, repeating the same material 3 times, designed to trigger LTM formation on that block of material?) But for practical purposes, this is uninteresting; to sum it up, there are many studies pointing each way, and whatever difference in efficiency exists, is minimal. Most existing software follows SuperMemo in using an expanding spacing algorithm, so it’s not worth worrying about; as Mnemosyne developer Peter Bienstman says, it’s not clear the more complex algorithms really help32, and the Anki developers were concerned about the complexity, difficulty of reimplementing SM’s proprietary algorithms, lack of substantial gains, & larger errors SM3+ risks attempting to be more optimal. So too here.

For those interested, 3 of the studies that found fixed spacings better than expanding:

  1. Carpenter, S. K., & DeLosh, E. L. (2005). “Application of the testing and spacing effects to name learning”. Applied Cognitive Psychology, 19, 619-63633

  2. Logan, J. M. (2004). Spaced and expanded retrieval effects in younger and older adults. Unpublished doctoral dissertation, Washington University, St. Louis, MO

    This thesis is interesting inasmuch as Logan found that young adults did considerably worse with an expanding spacing after a day.

  3. Karpicke & Roediger, 2006a

The fixed vs expanding issue aside, a list of additional generic studies finding benefits to spaced vs massed:

Generality of Spacing Effect

We have already seen that spaced repetition is effective on a variety of academic fields and mediums. Beyond that, spacing effects can be found in:

  • various “domains (eg. learning perceptual motor tasks or learning lists of words)”42 such as spatial43

  • “across species (eg. rats, pigeons, and humans [or flies or bumblebees, and sea slugs, Carew et al 1972 & Sutton et al 2002])”

  • “across age groups [infancy44, childhood45, adulthood46, the elderly47] and individuals with different memory impairments”

  • “and across retention intervals of seconds48 [to days49] to months” (we have already seen studies using years)

The domains are limited, however. Cepeda et al 2006:

[Moss 1995, reviewing 120 articles] concluded that longer ISIs facilitate learning of verbal information (eg. spelling50) and motor skills (eg. mirror tracing); in each case, over 80% of studies showed a distributed practice benefit. In contrast, only one third of intellectual skill (eg. math computation) studies showed a benefit from distributed practice, and half showed no effect from distributed practice.

…[Donovan & Radosevich 1999] The largest effect sizes were seen in low rigor studies with low complexity tasks (eg. rotary pursuit, typing, and peg reversal), and retention interval failed to influence effect size. The only interaction Donovan and Radosevich examined was the interaction of ISI and task domain. It is important to note that task domain moderated the distributed practice effect; depending on task domain and lag, an increase in ISI either increased or decreased effect size. Overall, Donovan and Radosevich found that increasingly distributed practice resulted in larger effect sizes for verbal tasks like free recall, foreign language, and verbal discrimination, but these tasks also showed an inverse-U function, such that very long lags produced smaller effect sizes. In contrast, increased lags produced smaller effect sizes for skill tasks like typing, gymnastics, and music performance.

Skills like gymnastics and music performance raise an important point about the testing effect and spaced repetition: they are for the maintenance of memories or skills; they do not increase them beyond what was already learned. If one is a gifted amateur when one starts reviewing, one remains a gifted amateur. Ericsson covers what is necessary to improve and attain new expertise: deliberate practice51. From “The Role of Deliberate Practice”:

The view that merely engaging in a sufficient amount of practice—regardless of the structure of that practice—leads to maximal performance, has a long and contested history. In their classic studies of Morse Code operators, Bryan and Harter (1897, 1899) identified plateaus in skill acquisition, when for long periods subjects seemed unable to attain further improvements. However, with extended efforts, subjects could restructure their skill to overcome plateaus…Even very experienced Morse Code operators could be encouraged to dramatically increase their performance through deliberate efforts when further improvements were required…More generally, Thorndike (1921) observed that adults perform at a level far from their maximal level even for tasks they frequently carry out. For instance, adults tend to write more slowly and illegibly than they are capable of doing…The most cited condition [for optimal learning and improvement of performance] concerns the subjects’ motivation to attend to the task and exert effort to improve their performance…The subjects should receive immediate informative feedback and knowledge of results of their performance…In the absence of adequate feedback, efficient learning is impossible and improvement only minimal even for highly motivated subjects. Hence mere repetition of an activity will not automatically lead to improvement in, especially, accuracy of performance…In contrast to play, deliberate practice is a highly structured activity, the explicit goal of which is to improve performance. Specific tasks are invented to overcome weaknesses, and performance is carefully monitored to provide cues for ways to improve it further. We claim that deliberate practice requires effort and is not inherently enjoyable.

Motor Skills

It should be noted that reviews conflict on how much spaced repetition applies to motor skills; Lee & Genovese 1988 find benefits, while Adams 1987 and earlier do not. The difference may be that simple motor tasks benefit from spacing as suggested by Shea & Morgan 1979 (benefits to a randomized/spaced schedule), while complex ones where the subject is already operating at his limits do not benefit, suggested by Wulf & Shea 2002. Stambaugh 2009 mentions some divergent studies:

The contextual interference hypothesis (Shea and Morgan 1979, Battig 1966 [“Facilitation and interference” in Acquisition of skill]) predicted the blocked condition would exhibit superior performance immediately following practice (acquisition) but the random condition would perform better at delayed retention testing. This hypothesis is generally consistent in laboratory motor learning studies (eg. Lee & Magill 1983, Brady 2004), but less consistent in applied studies of sports skills (with a mix of positive & negative eg. Landin & Hebert 1997, Hall et al 1994, Regal 2013) and fine-motor skills (Ollis et al 2005, Ste-Marie et al 2004).

Some of the positive spaced repetition studies (from Son & Simon 2012):

Perhaps even prior to the empirical work on cognitive learning and the spacing effect, the benefits of spaced study had been apparent in an array of motor learning tasks, including maze learning (Culler 1912), typewriting (Pyle 1915), archery (Lashley 1915), and javelin throwing (Murphy 1916; see Ruch 1928, for a larger review of the motor learning tasks which reap benefits from spacing; see also Moss 1996, for a more recent review of motor learning tasks). Thus, as in the cognitive literature, the study of practice distribution in the motor domain is long established (see reviews by Adams 1987; Schmidt & Lee 2005), and most interest has centered around the impact of varying the separation of learning trials of motor skills in learning and retention of practiced skills. Lee & Genovese 1988 conducted a review and meta-analysis of studies on distribution of practice, and they concluded that massing of practice tends to depress both immediate performance and learning, where learning is evaluated at some removed time from the practice period. Their main finding was, as in the cognitive literature, that learning was relatively stronger after spaced than after massed practice (although see Ammons 1988; Christina & Shea 1988; Newell et al 1988 for criticisms of the review)…Probably the most widely cited example is Baddeley & Longman 1978’s study concerning how optimally to teach postal workers to type. They had learners practice once a day or twice a day, and for session lengths of either 1 or 2 h at a time. The main findings were that learners took the fewest cumulative hours of practice to achieve a performance criterion in their typing when they were in the most distributed practice condition. This finding provides clear evidence for the benefits of spacing practice for enhancing learning. However, as has been pointed out (Newell et al 1988; Lee & Wishart 2005), there is also trade-off to be considered in that the total elapsed time (number of days) between the beginning of practice and reaching criterion was substantially longer for the most spaced condition….The same basic results have been repeatedly demonstrated in the decades since (see reviews by Magill & Hall 1990; Lee & Simon 2004), and with a wide variety of motor tasks including different badminton serves (Goode & Magill 1986), rifle shooting (Boyce & Del Rey 1990), a pre-established skill, baseball batting (Hall et al 1994), learning different logic gate configurations (Carlson et al 1989; Carlson & Yaure 1990), for new users of automated teller machines (Jamieson & Rogers 2000), and for solving mathematical problems as might appear in a class homework (Rohrer & Taylor 2007; Le Blanc & Simon 2008; Taylor & Rohrer 2010).

In this vein, it’s interesting to note that interleaving may be helpful for tasks with a mental component as well: Hatala et al 2003, Helsdingen et al 2011, and according to Huang et al 2013 the rates at which Xbox Halo: Reach video game players advance in skill nicely match predictions from distributed practice: players who play 4–8 matches a week advance more in skill per match than players who play more (distributed), but advance slower per week than players who play many more matches / massed. (See also Stafford & Haasnoot 2016.)

Abstraction

Another potential objection is to argue52 that spaced repetition inherently hinders any kind of abstract learning and thought because related materials are not being shown together - allowing for comparison and inference - but days or months apart. Ernst A. Rothkopf: “Spacing is the friend of recall, but the enemy of induction” (Kornell & Bjork 2008, p. 585). This is plausible based on some of the early studies53, but the recent studies I know of directly examining the issue have generally found spaced repetition helped abstraction as well as general recall:

  1. Kornell & Bjork 2008a, “Learning concepts and categories: Is spacing the ‘enemy of induction’?” Psychological Science, 19, 585-592

  2. Vlach, H. A., Sandhofer, C. M., & Kornell, N. (2008). “The spacing effect in children’s memory and category induction”. Cognition, 109, 163-167

  3. Kenney 2009. “The Spacing Effect in Inductive Learning”

  4. Kornell, N., Castel, A. D., Eich, T. S., & Bjork, R. A. (2010). “Spacing as the friend of both memory and induction in younger and older adults”. Psychology and Aging, 25, 498-503

  5. Zulkiply et al 2011

  6. Vlach & Sandhofer 2012, “Distributing Learning Over Time: The Spacing Effect in Children’s Acquisition and Generalization of Science Concepts”, Child Development

  7. Zulkiply 2012, “The spacing effect in inductive learning”; includes:

  8. McDaniel et al 2013, “Effects of Spaced versus Massed Training in Function Learning”

  9. Verkoeijen & Bouwmeester 2014, “Is spacing really the ‘friend of induction’?”

  10. Rohrer et al 2014: 1, 2; Rohrer et al 2019: “A randomized controlled trial of interleaved mathematics practice”

  11. Vlach et al 2014, “Equal spacing and expanding schedules in children’s categorization and generalization”

  12. Gluckman et al, “Spacing Simultaneously Promotes Multiple Forms of Learning in Children’s Science Curriculum”

Review Summary

To bring it all together with the gist:

  • testing is effective and comes with minimal negative factors

  • expanding spacing is roughly as good as or better than (wide) fixed intervals, but expanding is more convenient and the default

  • testing (and hence spacing) is best on intellectual, highly factual, verbal domains, but may still work in many low-level domains

  • the research favors questions which force the user to use their memory as much as possible; in descending order of preference:

    1. free recall

    2. short answers

    3. multiple-choice

    4. Cloze deletion

    5. recognition

  • the research literature is comprehensive and most questions have been answered - somewhere.

  • the most common mistakes with spaced repetition are

    1. formulating poor questions and answers

    2. assuming it will help you learn, as opposed to maintain and preserve what one already learned54. (It’s hard to learn from cards, but if you have learned something, it’s much easier to then devise a set of flashcards that will test your weak points.)

Using It

One doesn’t need to use SuperMemo, of course; there are plenty of free alternatives. I like Mnemosyne (homepage) myself - free, packaged for Ubuntu Linux, easy to use, free mobile client, long track record of development and reliability (I’ve used it since ~2008). But the SRS Anki is also popular, and has advantages in being more feature-rich and having a larger & more active community (and possibly better support for East Asian language material and a better but proprietary mobile client).

OK, but what does one do with it? It’s a surprisingly difficult question, actually. It’s akin to “the tyranny of the blank page” (or blank wiki); now that I have all this power - a mechanical golem that will never forget and never let me forget whatever I chose to - what do I choose to remember?

How Much To Add

The most difficult task, beyond that of just persisting until the benefits become clear, is deciding what’s valuable enough to add in. In a 3 year period, one can expect to spend “30–40 seconds” on any given item. The long run theoretical predictions are a little hairier. Given a single item, the formula for daily time spent on it is Time = 1⁄500 × nthYear^−1.5 + 1⁄30000. During our 20th year, we would spend t = 1⁄500 × 20^−1.5 + 1⁄30000, or about 5.6 × 10^−5 minutes a day. This is the average daily time, so to recover the annual time spent, we simply multiply by 365. Suppose we were interested in how much time a flashcard would cost us over 20 years. The average daily time changes every year (the graph looks like an exponential decay, remember), so we have to run the formula for each year and sum them all; in Haskell:

sum $ map (\year -> ((1/500 * year ** (-(1.5))) + 1/30000) * 365.25) [1..20]
-- 1.8291

Which evaluates to 1.8 minutes. (This may seem too small, but one doesn’t spend much time in the first year and the time drops off quickly55.) Anki user muflax’s statistics put his per-card time at 71s, for example. But maybe Piotr Woźniak was being optimistic or we’re bad at writing flashcards, so we’ll more than double it, to 5 minutes. That’s our key rule of thumb that lets us decide what to learn and what to forget: if, over your lifetime, you will spend more than 5 minutes looking something up or will lose more than 5 minutes as a result of not knowing something, then it’s worthwhile to memorize it with spaced repetition. 5 minutes is the line that divides trivia from useful data.56 (There might seem to be thousands of flashcards that meet the 5 minute rule. That’s fine. Spaced repetition can accommodate dozens of thousands of cards. See the next section.)

To a lesser extent, one might wonder: when one is in a hurry, should one learn something with spaced repetition or with massed practice? How far away should the tests or deadlines be before abandoning spaced repetition? It’s hard to compare, since one would need specific regimens to compare for the crossover point, but for massed repetition, the average time after memorization at which one has a 50% chance of remembering the memorized item seems to be 3-5 days.57 Since there would be 2 or 3 repetitions in that period, presumably one would do better than 50% in recalling an item. 5 minutes and 5 days seems like a memorable enough rule of thumb: ‘don’t use spaced repetition if you need it sooner than 5 days or it’s worth less than 5 minutes’.

Overload

One common experience of new users to spaced repetition is to add too much stuff - trivialities and things they don’t really care about. But they soon learn the curse of Borges’s Funes the Memorious. If they don’t actually want to learn the material they put in, they will soon stop doing the daily reviews - which will cause reviews to pile up, which will be further discouraging, and so they stop. At least with physical fitness there isn’t a precisely dismaying number indicating how far behind you are! But if you have too little at the beginning, you’ll have few repetitions per day, and you’ll see little benefit from the technique itself - it looks like boring flash card review.

What to Add

I find one of the best uses for Mnemosyne is, besides the classic use of memorizing academic material such as geography or the periodic table or foreign vocabulary or Bible/Koran verses or the avalanche of medical school facts, to add in words from A Word A Day58 and Wiktionary, memorable quotes I see59, personal information such as birthdays (or license plates, a problem for me before), and so on. Quotidian uses, but all valuable to me. With a diversity of flashcards, I find my daily review interesting. I get all sorts of questions - now I’m trying to see whether a Haskell fragment is syntactically correct, now I’m pronouncing Korean hangul and listening to the answer, now I’m trying to find the Ukraine on a map, now I’m enjoying some A.E. Housman poetry, followed by a few quotes from LessWrong quote threads, and so on. Other people use it for many other things; one application that impresses me for its simple utility is memorizing names & faces of students although learning musical notes is also not bad.

The Workload

On average, when I’m studying a new topic, I’ll add 3-20 questions a day. Combined with my particular memory, I usually review about 90 or 100 items a day (out of the total >18,300). This takes under 20 minutes, which is not too bad. (I expect the time is expanded a bit by the fact that early on, my formatting guidelines were still being developed, and I hadn’t the full panoply of categories I do now - so every so often I must stop and edit categories.)

If I haven’t been studying something recently, the exponential decay of reviews slowly drops the daily review count. For example, in March 2011, I wasn’t studying many things, so for 2011-03-24–2011-03-26, my scheduled daily reviews are 73, 83, and 74; in 2012, the daily reviews are in the 40s or sometimes 50s for similar reasons, but the gradual shrinkage will continue. We can see this vividly, and we can even see a sort of analogue of the original forgetting curve, if we ask Mnemosyne 2.0 to graph the number of cards to review per day for the next year up to February 2013 (assuming no additions or missed reviews etc.):

A wildly varying but clearly decreasing graph of predicted cards per day

If Mnemosyne weren’t using spaced repetition, it would be hard to keep up with 18,300+ flashcards. But because it is using spaced repetition, keeping up is easy.

Nor is 18.3k extraordinary. Many users have decks in the 6–7k range, Mnemosyne developer Peter Bienstman has >8.5k & Patrick Kenny >27k, Hugh Chen has a 73k+ deck, and in #anki, they tell me of one user who triggered bugs with his >200k deck. 200,000 may be a bit much, but for regular humans, some amount smaller seems possible—it’s interesting to compare SRS decks to the feat of memorizing Paradise Lost or to the Muslim title of ‘hafiz’, one who has memorized the ~80,000 words of the Koran, or the stricter ‘hafid’, one who had memorized the Koran and 100,000 hadiths as well. Other forms of memory are still more powerful.60 (I suspect that spaced repetition is involved in one of the few well-documented cases of “hyperthymesia”, Jill Price: reading Wired, she has ordinary fallible powers of memorization for surprise demands with no observed anatomical differences and is restricted to “her own personal history and certain categories like television and airplane crashes”; further, she is a packrat with obsessive-compulsive traits who keeps >50,000 pages of detailed diaries—perhaps due to a childhood trauma—and associates daily events nigh-involuntarily with past events. Marcus says the other instances of hyperthymesia resemble Price.)

When to Review

When should one review? In the morning? In the evening? Any old time? The studies demonstrating the spacing effect do not control or vary the time of day, so in one sense, the answer is: it doesn’t matter - if it did matter, there would be considerable variance in how effective the effect is based on when a particular study had its subjects do their reviews.

So one reviews at whatever time is convenient. Convenience makes one more likely to stick with it, and sticking with it overpowers any temporary improvement.

If one is not satisfied with that answer, then on general considerations, one ought to review before bedtime & sleep. Memory consolidation seems to be related, and sleep is known to powerfully influence what memories enter long-term memory, strengthening memories of material learned close to bedtime and increasing creativity; interrupting sleep without affecting total sleep time or quality still damages memory formation in mice61. So reviewing before bedtime would be best. (Other mental exercises show improvement when trained before bedtime; for example, dual n-back.) One possible mechanism is that the expectancy of future reviews/tests is enough to encourage memory consolidation during sleep; so if one reviews and goes to bed, presumably the expectancy is stronger than if one reviewed at breakfast and had an eventful day and forgot entirely about the reviewed flashcards. (See also the correlation between time of studying & GPA in Hartwig & Dunlosky 2012.) Neural growth may be related; from Stahl 2010:

Recent advances in our understanding of the neurobiology underlying normal human memory formation have revealed that learning is not an event, but rather a process that unfolds over time.16,17,18,[Squire 2003, Fundamental Neuroscience],20 Thus, it is not surprising that learning strategies that repeat materials over time enhance their retention.20,21,22,23,24,25,26

…Thousands of new cells are generated in this region every day, although many of these cells die within weeks of their creation.31 The survival of dentate gyrus neurons has been shown to be enhanced in animals when they are placed into learning situations.16-20 Animals that learn well retain more dentate gyrus neurons than do animals that do not learn well. Furthermore, 2 weeks after testing, animals trained in discrete spaced intervals over a period of time, rather than in a single presentation or a ‘massed trial’ of the same information, remember better.16-20 The precise mechanism that links neuronal survival with learning has not yet been identified. One theory is that the hippocampal neurons that preferentially survive are the ones that are somehow activated during the learning process.16-2062 The distribution of learning over a period of time may be more effective in encouraging neuronal survival by allowing more time for changes in gene expression and protein synthesis that extend the life of neurons that are engaged in the learning process.

…Transferring memory from the encoding stage, which occurs during alert wakefulness, into consolidation must thus occur at a time when interference from ongoing new memory formation is reduced.17,18 One such time for this transfer is during sleep, especially during non-rapid eye movement sleep, when the hippocampus can communicate with other brain areas without interference from new experiences.32,33,34 Maybe that is why some decisions are better made after a good night’s rest and also why pulling an all-nighter, studying with sleep deprivation, may allow you to pass an exam an hour later but not remember the material a day later.

Prospects: Extended Flashcards

Let’s step back for a moment. What are all our flashcards, small and large, doing for us? Why do I have a pair of flashcards for the word ‘anent’ among many others? I can just look it up.

But lookups take time compared to already knowing something. (Let’s ignore the previously discussed 5 minute rule.) If we think about this abstractly in a computer science context, we might recognize it as an old concept in algorithms & optimization discussions - the space-time tradeoff. We trade off lookup time against limited skull space.

Consider the sort of factual data already given as examples - we might one day need to know the average annual rainfall in Honolulu or Austin, but it would require too much space to memorize such data for all capitals. There are millions of English words, but in practice any more than 100,000 is excessive. More surprising is a sort of procedural knowledge. An extreme form of space-time tradeoffs in computers is when a computation is replaced by pre-calculated constants. We could take a math function and calculate its output for each possible input. Usually such a lookup table of input to output is really large. Think about how many entries would be in such a table for all possible integer multiplications between 1 and 1 billion. But sometimes the table is really small (like binary Boolean functions) or small (like trigonometric tables) or large but still useful (rainbow tables usually start in the gigabytes and easily reach terabytes).
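As a toy Haskell illustration of that tradeoff, here is the same trivial function implemented both ways: computed on demand each time, or precomputed into a lookup table (feasible only when the input domain is small enough to enumerate; the 1-1000 range here is arbitrary):

import qualified Data.Map as M

-- The space-time tradeoff in miniature: recompute every time (no extra space),
-- or precompute a lookup table (trade space for lookup time).
square :: Int -> Int
square x = x * x

squareTable :: M.Map Int Int
squareTable = M.fromList [(x, x * x) | x <- [1 .. 1000]]

squareLookup :: Int -> Maybe Int
squareLookup x = M.lookup x squareTable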

Given an infinitely large lookup table, we could replace completely the skill of, say, addition or multiplication by the lookup table. No computation. The space-time tradeoff taken to the extreme of the space side of the continuum. (We could go the other way and define multiplication or addition as the slow computation which doesn’t know any specifics like the multiplication table - as if every time you wanted to add 2+2 you had to count on 4 fingers.)

So suppose we were children who wanted to learn multiplication. SRS and Mnemosyne can’t help because multiplication is not a specific factoid? The space-time tradeoff shows us that we can de-proceduralize multiplication and turn it partly into factoids. It wouldn’t be hard for us to write a quick script or macro to generate, say, 500 random cards which ask us to multiply AB by XY, and import them to Mnemosyne.63
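A minimal sketch of such a generator (the tab-separated question/answer format and the 2-digit range are assumptions; adjust both to whatever your SRS's import function actually expects):

import Control.Monad (replicateM)
import System.Random (randomRIO)

-- Generate one random 2-digit multiplication card as a "question<TAB>answer" line.
card :: IO String
card = do
  a <- randomRIO (10, 99 :: Int)
  b <- randomRIO (10, 99 :: Int)
  return (show a ++ " * " ++ show b ++ " = ?\t" ++ show (a * b))

-- Write 500 such cards to a text file for import.
main :: IO ()
main = do
  cards <- replicateM 500 card
  writeFile "multiplication-cards.txt" (unlines cards)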

After all, which is your mind going to do - get good at multiplying 2 numbers (generate on-demand), or memorize 500 different multiplication problems (memoize)? From my experience with multiple subtle variants on a card, the mind gives up after just a few and falls back on a problem-solving approach - which is exactly what one wants to exercise, in this case. Congratulations; you have done the impossible.

From a software engineering point of view, we might want to modify or improve the cards, and 500 snippets of text would be a tad hard to update. So coolest would be a ‘dynamic card’. Add a markup type like <eval src="​"> , and then Mnemosyne feeds the src argument straight into the Python interpreter, which returns a tuple of the question text and the answer text. The question text is displayed to the user as usual, the user thinks, requests the answer, and grades himself. In Anki, Javascript is supported directly by the application in HTML <script> tags (currently inline only but Anki could presumably import libraries by default), for example for kinds of syntax highlighting, so any kind of dynamic card could be written that one wants.

So for multiplication, the dynamic card would get 2 random integers, print a question like x * y = ? and then print the result as the answer. Every so often you would get a new multiplication question, and as you get better at multiplication, you see it less often - exactly as you should. Still in a math vein, you could generate variants on formulas or programs where one version is the correct one and the others are subtly wrong; I do this by hand with my programming flashcards (especially if I make an error doing exercises, that signals a finer point to make several flashcards on), but it can be done automatically. kpreid describes one tool of his:

I have written a program (in the form of a web page) which does a specialized form of this [generating ‘damaged formulas’]. It has a set of generators of formulas and damaged formulas, and presents you with a list containing several formulas of the same type (eg. ∫ 2x dx = x^2 + C) but with one damaged (eg. ∫ 2x dx = 2x^2 + C).
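The core of the ‘damaged formula’ idea is easy to sketch (in Haskell here; kpreid’s actual page works differently, and the formula pool below is a hard-coded placeholder): keep matching lists of correct and damaged statements, swap one damaged statement into the displayed list, and ask which is wrong.

import System.Random (randomRIO)

-- Toy 'damaged formula' drill: show several statements, one damaged, and ask
-- which one is wrong. The pools here are placeholders.
correct, damaged :: [String]
correct = ["d/dx x^2 = 2x", "∫ 2x dx = x^2 + C", "sin^2 x + cos^2 x = 1"]
damaged = ["d/dx x^2 = x^2", "∫ 2x dx = 2x^2 + C", "sin^2 x + cos^2 x = 0"]

damagedFormulaCard :: IO (String, String)
damagedFormulaCard = do
  i <- randomRIO (0, length correct - 1)
  let shown = take i correct ++ [damaged !! i] ++ drop (i + 1) correct
  return ("Which formula is wrong?\n" ++ unlines shown, damaged !! i)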

This approach generalizes to anything you can generate random problems of or have large databases of examples of. Khan Academy apparently does something like this in associating large numbers of (algorithmicly-generated?) problems with each of its little modules and tracking retention of the skill in order to decide when to do further review of that module. For example, maybe you are studying Go and are interested in learning life-and-death positions. Those are things that can be generated by computer Go programs, or fetched from places like GoProblems.com. For even more examples, Go is rotationally invariant - the best move remains the same regardless of which way the board is oriented and since there is no canonical direction for the board (like in chess) a good player ought to be able to play the same no matter how the board looks - so each specific example can be mirrored in 3 other ways. Or one could test one’s ability to ‘read’ a board by writing a dynamic card which takes each example board/problem and adds some random pieces as long as some go-playing program like GNU Go says the best move hasn’t changed because of the added noise.
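For the board-symmetry trick specifically, generating the variants takes only a few lines once a position is represented as a grid; a sketch (boards as lists of rows of characters):

import Data.List (transpose)

-- Generate the rotations of a Go position so that one stored problem yields
-- several visually distinct cards with the same answer; adding the mirror
-- image (map reverse) would double this to the full 8 symmetries of the board.
type Board = [String]

rotate90 :: Board -> Board
rotate90 = map reverse . transpose

rotations :: Board -> [Board]
rotations b = take 4 (iterate rotate90 b)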

One could learn an awful lot of things this way. Programming languages could be learned this way - someone learning Haskell could take all the functions listed in the Prelude or his Haskell textbook, and ask QuickCheck to generate random arguments for the functions and ask the GHC interpreter ghci what the function and its arguments evaluate to. Games other than go, like chess, may work (a live example being Chess Tempo & Listudy, and see the experience of Dan Schmidt; or Super Smash Brothers). A fair bit of mathematics. If the dynamic card has Internet access, it can pull down fresh questions from an RSS feed or just a website; this functionality could be quite useful in a foreign language learning context with every day bringing a fresh sentence to translate or another exercise.
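A sketch of the QuickCheck idea for a single Prelude function (a fuller version would pick among many functions and shell out to ghci; here one fixed expression is evaluated in-process, and the list length and element range are arbitrary):

import Test.QuickCheck (choose, generate, vectorOf)

-- Produce a fresh Prelude exercise each review by generating random arguments.
preludeExercise :: IO (String, String)
preludeExercise = do
  n  <- generate (choose (3, 6))
  xs <- generate (vectorOf n (choose (0, 9 :: Int)))
  return ("takeWhile even " ++ show xs ++ " = ?", show (takeWhile even xs))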

With some NLP software, one could write dynamic flashcards which test all sorts of things: if one confuses verbs, the program could take a template like “$PRONOUN $VERB $PARTICLE $OBJECT % {right: caresse, wrong: caresses}” which yields flashcards like “Je caresses le chat” or “Tu caresse le chat” and one would have to decide whether it was the correct conjugation. (The dynamicism here would help prevent memorizing specific sentences rather than the underlying conjugation.) In full generality, this would probably be difficult, but simpler approaches like templates may work well enough. Jack Kinsella:

I wish there were dynamic SRS decks for language learning (or other disciplines). Such decks would count the number of times you have reviewed an instance of an underlying grammatical rule or an instance of a particular piece of vocabulary, for example its singular/plural/third person conjugation/dative form. These sophisticated decks would present users with fresh example sentences on every review, thereby preventing users from remembering specific answers and compelling them to learn the process of applying the grammatical rule afresh. Moreover, these decks would keep users entertained through novelty and would present users with tacit learning opportunities through rotating vocabulary used in non-essential parts of the example sentence. Such a system, with multiple-level review rotation, would not only prevent against overfit learning, but also increase the total amount of knowledge learned per minute, an efficiency I’d gladly invest in.

Even though these things seem like ‘skills’ and not ‘data’!
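A crude sketch of the conjugation-template idea from above (the vocabulary is a hard-coded placeholder; a real tool would draw on a conjugation table or NLP library rather than a two-entry list):

import System.Random (randomRIO)

-- Pair each pronoun with its correct and a wrong verb form, randomly emit one,
-- and ask the user to judge whether the conjugation is correct.
forms :: [(String, String, String)]  -- (pronoun, correct form, wrong form)
forms = [("Je", "caresse", "caresses"), ("Tu", "caresses", "caresse")]

conjugationCard :: IO (String, String)
conjugationCard = do
  i        <- randomRIO (0, length forms - 1)
  useRight <- randomRIO (False, True)
  let (pronoun, right, wrong) = forms !! i
      verb     = if useRight then right else wrong
      question = "Correct conjugation? " ++ pronoun ++ " " ++ verb ++ " le chat."
      answer   = if useRight then "Yes" else "No (" ++ right ++ ")"
  return (question, answer)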

Popularity

As of 2011-05-02:

Metric: Mnemosyne | Mnemododo | Anki | iSRS | AnyMemo
Homepage Alexa: 383k | 27.5m | 112k | 1,766k64
ML/forum members: 461 | 4129/215 | 129
Ubuntu installs: 7k | 9k
Debian installs: 164 | 364
Arch votes: 85 | 96
iPhone ratings: Unreleased65 | 193 | 69
Android ratings: 20 | 703 | 836
Android installs: 100-500 | 10-50k | 50-100k

SuperMemo doesn’t fall under the same ratings, but it has sold in the hundreds of thousands over its 2 decades:

Biedalak is CEO of SuperMemo World, which sells and licenses Wozniak’s invention. Today, SuperMemo World employs just 25 people. The venture capital never came through, and the company never moved to California. About 50,000 copies of SuperMemo were sold in 2006, most for less than $30 (about $44 in inflation-adjusted dollars). Many more are thought to have been pirated.66

It seems safe to estimate the combined market-share of Anki, Mnemosyne, iSRS and other SRS apps at somewhere under 50,000 users (making due allowance for users who install multiple times, those who install and abandon it, etc.). Relatively few users seem to have migrated from SuperMemo to those newer programs, so it seems fair to simply add that 50k to the other 50k and conclude that the worldwide population is somewhere around (but probably under) 100,000.

Where Was I Going With This?

Nowhere, really. Mnemosyne/SR software in general is just one of my favorite tools: it’s based on a famous effect67 discovered by science, and it exploits it elegantly68 and usefully. It’s a testament to the Enlightenment ideal of improving humanity through reason and overcoming our human flaws; the idea of SR is seductive in its mathematical rigor69. In this age where the ideals of ‘self-improvement’ and progress are so often decried, and gloom is espoused by even the common people, it’s really nice to just have a small example like this in one’s daily life, an example not yet so prosaic and boring as the lightbulb.

See Also

In the course of using Mnemosyne, I’ve written a number of scripts to generate repetitively varying cards.

  • mnemo.hs takes any newline-delimited chunk of text, like a poem, and generates every possible Cloze deletion; that is, an ABC poem will become 3 questions: _BC/ABC, A_C/ABC, AB_/ABC (a minimal sketch of the idea appears after this list)

  • mnemo2.hs works as above, but is more limited and is intended for long chunks of text where mnemo.hs would cause a combinatorial explosion of generated questions; it generates a subset: for ABCD, one gets __CD/ABCD, A__D/ABCD, and AB__/ABCD (it removes 2 lines, and iterates through the list).

  • mnemo3.hs is intended for date or name-based questions. It’ll take input like “Barack Obama is %47%.” and spit out some questions based on this: “Barack Obama is _7./47”, “Barack Obama is 4_./47” etc.

  • mnemo4.hs is intended for long lists of items. If one wants to memorize the list of US Presidents, the natural questions for flashcards goes something like “Who was the 3rd president?/Thomas Jefferson”, “Thomas Jefferson was the _rd president./3”, “Who was president after John Adams?/Thomas Jefferson”, “Who was president before James Madison?/Thomas Jefferson”.

    You note there’s repetition if you do this for each president - one asks the ordinal position of the item both ways (item -> position, position -> item), what precedes it, and what succeeds it. mnemo4.hs automates this, given a list. In order to be general, the wording is a bit odd, but it’s better than writing it all out by hand! (Example output is in the comments to the source code).
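The core of the first script is just a list comprehension; a minimal sketch (not the actual mnemo.hs, which handles formatting and escaping):

-- Given the lines of a poem, emit every possible single-line Cloze deletion
-- as a (question, answer) pair.
clozeDeletions :: [String] -> [(String, String)]
clozeDeletions ls =
  [ (unlines (take i ls ++ ["_____"] ++ drop (i + 1) ls), unlines ls)
  | i <- [0 .. length ls - 1] ]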

The reader might well be curious by this point what my Mnemosyne database looks like. I use Mnemosyne quite a bit, and as of 2020-02-02, I have 16,149 (active) cards in my deck. Said curious reader may find my cards & media at gwern.cards (52M; Mnemosyne 2.x format).

The Mnemosyne project has been collecting user-submitted spaced repetition statistical data for years. The full dataset as of 2014-01-27 is available for download by anyone who wishes to analyze it.


  1. “One does not learn computing by using a hand calculator, but one can forget arithmetic.”Perlis1982↩︎

  2. Listing other neuroprosthetics is hard. It’s an interesting idea, but as proponents of externalism like Andy Clark have found, it’s easier to feel that externalism is meaningful than to nail down a clear definition which separates a neuroprosthetic or part of one’s mind from a random tool you like or find useful. Consider whether a pencil and paper is a neuroprosthetic: clearly it is not for a child learning to write, who must carefully compose the words in his mind and put them down one after another, but it is not so clear for an adult who has been writing all his life and can doodle or write down thoughts without thinking about them and may even be surprised at what he happened to write.

    I like this definition: “a neuroprosthetic is anything whose results you use without further thought”. So in the classic example, when Otto needs to go somewhere, he never thinks “I am an amnesiac who stores locations in my notepad, and I must look up the location” - he just looks up the location. A good heuristic would be anything whose destruction leaves one feeling lost, slow, stupid, or ignorant.

    By this standard, I can think of only a few tools I use without noticeable thought:

    • keybindings such as window manager shortcuts, in particular shortcuts for Google searches; on occasion, XMonad’s Prompt gets inscrutably wedged, locking it. When this happens, I have to restart X because I Google everything and the keybinding is so engrained that not using it is unbearable. It would be like trying to write with your weak hand.

    • Google Calendar and PredictionBook: it is incredible how many followups or reminders or regularly happening tasks I can put into Google Calendar or PB. I have outsourced many habits or thoughts to them, and I no longer think of it as anything special. If either were gone, I would feel frightened - what events were passing, what beliefs falsified, what opportunities opening up (or closing!) that I had suddenly become ignorant of?

    • Evernote, for a similar reason; many of my memories have ceased to be things like “octopuses see too fast to watch TV and so only HDTV or UHDTV works for them; I read this in Orion Magazine” and become things like “octopus TV Evernote”, and if I want to know what it was about octopuses & TV, well, I’ll have to look it up in Evernote. Mnemosyne plays a similar role for me, but there the memories are much clearer on their own because of the spaced repetition.

    • my website Gwern.net; I’ve had to say many times that I don’t know what I think about something, but whatever that is, it’s on my website. (A more extreme form of the Evernote/Mnemosyne neuroprosthetic.) A commenter once wrote that reading Gwern.net felt like he was crawling around in my head. He was more right than he realized.

    ↩︎
  3. as quoted in “Retrieval practice and the maintenance of knowledge”, Bjork1988↩︎

  4. From “Close the Book. Recall. Write It Down: That old study method still works, researchers say. So why don’t professors preach it?”; The Chronicle of Higher Education

    Two psychology journals have recently published papers showing that this strategy works, the latest findings from a decades-old body of research. When students study on their own, “active recall” - recitation, for instance, or flashcards and other self-quizzing - is the most effective way to inscribe something in long-term memory. Yet many college instructors are only dimly familiar with that research…

    From “The Spacing Effect: A Case Study in the Failure to Apply the Results of Psychological Research” (Dempster 198836ya), whose title alone summarizes the situation (see also Kelley 200717ya, Making Minds: What’s Wrong with Education - and What Should We Do About It?):

    Second, it [the spacing effect] is remarkably robust. In many cases, two spaced presentations are about twice as effective as two massed presentations (eg. Hintzman, 197450ya; Melton, 1970), and the difference between them increases as the frequency of repetition increases (Underwood, 197054ya)…

    The spacing effect was known as early as 1885 when Ebbinghaus published the results of his seminal work on memory. With himself as the subject, Ebbinghaus found that for a single 12-syllable series, 68 immediately successive repetitions had the effect of making possible an errorless recital after seven additional repetitions on the following day. However, the same effect was achieved by only 38 distributed repetitions spread over 3 days. On the basis of this and other related findings, Ebbinghaus concluded that ‘with any considerable number of repetitions a suitable distribution of them over a space of time is decidedly more advantageous than the massing of them at a single time’ (Ebbinghaus, 1885/1913, p. 89)

    Son & Simon 201212ya:

    Furthermore, even after acknowledging the benefits of spacing, changing teaching practices proved to be enormously difficult. Delaney et al 201014ya wrote: “Anecdotally, high school teachers and college professors seem to teach in a linear fashion without repetition and give three or four noncumulative exams.” (p. 130). Focusing on the math domain, where one might expect a very easy-to-review-and-to-space strategy, Rohrer (200915ya) points out that mathematics textbooks usually present topics in a non-spaced, non-mixed fashion. Even much earlier, Vash (198935ya) had written: “Education policy setters know perfectly well that [spaced practice] works better [than massed practice]. They don’t care. It isn’t tidy. It doesn’t let teachers teach a unit and dust off their hands quickly with a nice sense of ‘Well, that’s done.’” (p. 1547).

    • Rohrer, D. (200915ya). “The effects of spacing and mixing practice problems”. Journal for Research in Mathematics Education, 40, 4-17

    • Vash, C. L. (198935ya). “The spacing effect: A case study in the failure to apply the results of psychological research”. American Psychologist, 44, 1547 (a comment on Dempster’s article?)

    From Psychology: An Introduction:

    In one practical demonstration of the spacing effect, Bahrick, Bahrick, Bahrick, & Bahrick (199331ya) showed that retention of foreign language vocabulary was greatly enhanced if practice sessions were spaced far apart. For example, “Thirteen retraining sessions spaced at 56 days yielded retention comparable to 26 sessions spaced at 14 days.” In other words, subjects could use half as many study sessions, if the study sessions were spread over a time period four times as long.

    ↩︎
  5. “Synaptic evidence for the efficacy of spaced learning”, Kramar et al 201212ya (“Take your time: Neurobiology sheds light on the superiority of spaced vs. massed learning”):

    The superiority of spaced vs. massed training is a fundamental feature of learning. Here, we describe unanticipated timing rules for the production of long-term potentiation (LTP) in adult rat hippocampal slices that can account for one temporal segment of the spaced trials phenomenon. Successive bouts of naturalistic theta burst stimulation of field CA1 afferents markedly enhanced previously saturated LTP if spaced apart by 1 h or longer, but were without effect when shorter intervals were used. Analyses of F-actin-enriched spines to identify potentiated synapses indicated that the added LTP obtained with delayed theta trains involved recruitment of synapses that were “missed” by the first stimulation bout. Single spine glutamate-uncaging experiments confirmed that less than half of the spines in adult hippocampus are primed to undergo plasticity under baseline conditions, suggesting that intrinsic variability among individual synapses imposes a repetitive presentation requirement for maximizing the percentage of potentiated connections. We propose that a combination of local diffusion from initially modified spines coupled with much later membrane insertion events dictate that the repetitions be widely spaced. Thus, the synaptic mechanisms described here provide a neurobiological explanation for one component of a poorly understood, ubiquitous aspect of learning.

    ↩︎
  6. There are many studies to the effect that active recall is best. Here’s one recent study, “Retrieval Practice Produces More Learning than Elaborative Studying with Concept Mapping”, Karpicke 201113ya (covered in Science Daily and the NYT):

    Educators rely heavily on learning activities that encourage elaborative studying, while activities that require students to practice retrieving and reconstructing knowledge are used less frequently. Here, we show that practicing retrieval produces greater gains in meaningful learning than elaborative studying with concept mapping. The advantage of retrieval practice generalized across texts identical to those commonly found in science education. The advantage of retrieval practice was observed with test questions that assessed comprehension and required students to make inferences. The advantage of retrieval practice occurred even when the criterial test involved creating concept maps. Our findings support the theory that retrieval practice enhances learning by retrieval-specific mechanisms rather than by elaborative study processes. Retrieval practice is an effective tool to promote conceptual learning about science.

    From “Forget What You Know About Good Study Habits”. New York Times;

    Cognitive scientists do not deny that honest-to-goodness cramming can lead to a better grade on a given exam. But hurriedly jam-packing a brain is akin to speed-packing a cheap suitcase, as most students quickly learn - it holds its new load for a while, then most everything falls out….When the neural suitcase is packed carefully and gradually, it holds its contents for far, far longer. An hour of study tonight, an hour on the weekend, another session a week from now: such so-called spacing improves later recall, without requiring students to put in more overall study effort or pay more attention, dozens of studies have found.

    “The idea is that forgetting is the friend of learning”, said Dr. Kornell. “When you forget something, it allows you to relearn, and do so effectively, the next time you see it.”

    That’s one reason cognitive scientists see testing itself - or practice tests and quizzes - as a powerful tool of learning, rather than merely assessment. The process of retrieving an idea is not like pulling a book from a shelf; it seems to fundamentally alter the way the information is subsequently stored, making it far more accessible in the future.

    In one of his own experiments, Dr. Roediger and Jeffrey Karpicke, who is now at Purdue University, had college students study science passages from a reading comprehension test, in short study periods. When students studied the same material twice, in back-to-back sessions, they did very well on a test given immediately afterward, then began to forget the material. But if they studied the passage just once and did a practice test in the second session, they did very well on one test two days later, and another given a week later.

    ↩︎
  7. The Mathematics of Gambling, Thorp 198440ya, §2 “The Wheels”, Chapter 4, pg43-44:

    It was the spring of 195569ya. I was finishing my second year of graduate physics at U.C.L.A…I changed my field of study from physics to mathematics…I attended classes and studied 50–60 hours a week, generally including Saturdays and Sundays. I had read about the psychology of learning in order to be able to work longer and harder. I found that “spaced learning” worked well: study for an hour, then take a break of at least ten minutes (shower, meal, tea, errands, etc.). One Sunday afternoon about 3 p.m., I came to the co-op dining room for a tea break…My head was bubbling with physics equations, and several of my good friends were sitting around chatting.

    ↩︎
  8. From Final Jeopardy: Man Vs. Machine and the Quest to Know Everything, by Stephen Baker, pg 214:

    The program he put together tested him on categories, gauged his strengths (sciences, NFL football) and weaknesses (fashion, Broadway shows), and then directed him toward the preparation most likely to pay off in his own match. To patch these holes in his knowledge, Craig used a free online tool called Anki, which provides electronic flash cards for hundreds of fields of study, from Japanese vocabulary to European monarchs. The program, in Craig’s words, is based on psychological research on ‘the forgetting curve’. It helps people find holes in their knowledge and determines how often they need those areas to be reviewed to keep them in mind. In going over world capitals, for example, the system learns quickly that a user like Craig knows London, Paris, and Rome, so it might spend more time reinforcing the capital of, say, Kazakhstan. (And what would be the Kazakh capital? ‘Astana’, Craig said in a flash. ‘It used to be Almaty, but they moved it.’)

    ↩︎
  9. “Our Interview With Jeopardy! Champion Arthur Chu”:

    [Chu:] …Jeopardy! is aimed at the sort of average TV viewer, so they’re not going to ask things that are pointlessly obscure…So I used a program called Anki which uses a method called “spaced repetition.” It keeps track of where you’re doing well or poorly, and pushes you to study the flashcards you don’t know as well, until you develop an even knowledge base about a particular subject, and I just made flashcards for those specific things. I memorized all the world capitals, it wasn’t that hard once I had the flashcards and was using them every day. I memorized the US State Nicknames (they’re on Wikipedia), memorized the basic important facts about the 44 US Presidents. I really focused on those. But there’s a lot more stuff to know. I went on Jeopardy! knowing that there was stuff I didn’t know. For instance, everyone laughs about sports - but I also knew that [sports clues] were the least likely to come up in Double Jeopardy and Final Jeopardy and be very important. So I decided I shouldn’t sweat it too much, I should just recognize that I didn’t know them and let that go, as long as I can get the high value clues. So that was how I prepared.

    ↩︎
  10. Alan J. Perlis, “Epigrams in Programming” (198242ya)↩︎

  11. Web developer Persol writes in August 2012:

    I actually wrote a site that did this [spaced repetition] a few months ago. I had about 4000 users who had actually gone through a complete session…As guessed, the problem is that I couldn’t get people to start forming it as a habit. There is no immediate payback. Less than 20 people out of 4000 did more than one session…Additionally, there are at least 18 competitors. Here’s the list I made at the time. Very few seem to be successful. I shut the site down about a month ago. There are numerous free competitors which don’t have any great annoyances. I wouldn’t suggest starting another of these sites unless you figured out an effective way to “gamify” it.

    …~4000 people finished a session. Many more ‘tried’ than 4000…I just couldn’t determine which users were bots that registered randomly vs users that didn’t finish the first session.

    • Tried: lots (but unknown)

    • Finished 1 session: ~4000

    • Finished >1 session: ~20 [0.5%]

    ↩︎
  12. “Play it Again: The Master Psychopharmacology Program as an Example of Interval Learning in Bite-Sized Portions”, Stahl et al 201014ya:

    Since Ebbinghaus’ time, a voluminous amount of research has confirmed this simple but important fact: the retention of new information degrades rapidly unless it is reviewed in some manner. A modern example of this loss of knowledge without repetition is a study of cardiopulmonary resuscitation (CPR) skills that demonstrated rapid decay in the year following training. By 3 years post-training only 2.4% were able to perform CPR successfully.6 Another recent study of physicians taking a tutorial they rated as very good or excellent showed mean knowledge scores increasing from 50% before the tutorial to 76% immediately afterward.7 However, score gains were only half as great 3-8 days later and incredibly, there was no [statistically-]significant knowledge retention measurable at all at 55 days.7 Similar results have been reported by us in follow-up studies of knowledge retention from continuing medical education programs.1 [Stahl SM, Davis RL. Best Practices for Medical Educators. Carlsbad, CA: NEI Press; 200915ya]

    …This may be due to the fact that lectures with assigned reading are the easiest for teachers. Also, medical learning is rarely measured immediately after a lecture or after reading new material for the first time and then measured again a few days or weeks later, so that the low retention rates of this approach may not be widely appreciated.1,4 No wonder formal medical education conferences without enabling or practice-reinforcing strategies appear to have relatively little impact on practice and healthcare outcomes.8,9,10

    ↩︎
  13. One study looking at cramming is the 199331ya “Cramming: A barrier to student success, a way to beat the system or an effective learning strategy?”, Vacha et al 199331ya, abstract:

    Tested the hypothesis that cramming is an ineffective study strategy by examining the weekly study diaries of 166 undergraduates. All subjects also completed an end-of-semester questionnaire measuring study habits. Subjects were classified in the following study patterns: ideal, confident, zealous, or crammer. Contrary to the hypothesis, results suggest that cramming is an effective approach, most widespread in courses using take-home essay examinations and major research papers. Crammers’ grades were as good as or better than those of subjects using other strategies; the longer subjects were in college, the more likely it was that they crammed. Crammers studied more hours than most students and were as interested in their courses as other students.

    Note that there is no measure of long-term retention, suggesting that people who only care about grades are rationally choosing to cram.↩︎

  14. Anki has its Cram Mode and Mnemosyne 2.0 has a cramming plugin. When an SRS doesn’t have explicit support, it’s always possible to ‘game’ the algorithm by setting one’s scores artificially low, so the SR algorithm thinks you are stupid and need to do a lot of repetitions.↩︎

  15. “Examining the examiners: Why are we so bad at assessing students?”, Newstead 200222ya:

    Conway, Cohen & Stanhope1992 looked at long term memory for the information presented on a psychology course. They found that some types of information, especially that relating to research methods, were remembered better than others. But in a follow up analysis, they found that the type of assessment used had an effect on memory. In essence, material assessed by continuous assessment was more likely to be remembered than information assessed by exams.

    ↩︎
  16. Stahl 201014ya:

    For example, simple restudying allows the learner to reexperience all of the material but actually produces poor long-term retention.25,26,35 Why do students keep studying the original materials? Certainly if this is their only choice, then restudying is a necessary tactic. Another answer may be that repeated studying falsely inflates students’ confidence in their ability to remember in the future because they sense that they understand it now, and they and their instructors may be unaware of the many studies that show poor retention on delayed testing after this form of repetition.25,26,35

    ↩︎
  17. From Kornell et al 201014ya:

    Contrary to the massing-aids-induction hypothesis, final test performance was consistently and considerably superior in the spaced condition. A large majority of participants, however, judged massing to be more effective than spacing, despite making the judgment after taking the test.

    …Metacognitive judgments-that is, judgments about one’s own memory and cognition-are often based on feelings of fluency(eg. see Benjamin, Bjork, & Schwartz, 1998; Rhodes & Castel, 2008). Because massing naturally leads to feelings of fluency and increases short-term task performance during learning, learners frequently rate spacing as less effective than massing, even when their performance shows the opposite pattern (Baddeley & Longman 197846ya; Kornell & Bjork, 200816ya; Simon & Bjork, 200123ya; Zechmeister & Shaughnessy, 198044ya). Averaged across Kornell and Bjork’s (200816ya) experiments, for example, more than 80% of participants rated massing as equally or more effective than spacing, whereas only 15% of participants actually performed better in the massed condition than in the spaced condition.

    …Such an illusion was apparent in the induction condition. Contrary to previous research, however, participants gave higher ratings for spacing than massing during repetition learning (see, eg. Simon & Bjork, 200123ya; Zechmeister & Shaughnessy, 198044ya). This outcome may have occurred because of a process of habituation: Six presentations and a total of 30 s spent studying a single painting may have come to seem inefficient and pointless. Thus, there appears to be a turning point in metacognitive ratings based on fluency: As fluency increases, metacognitive ratings increase up to a point, but as fluency continues to increase and encoding or retrieval becomes too easy, metacognitive ratings may begin to decrease.

    …In advance of their research, Kornell & Bjork 200816ya were convinced that such inductive learning would benefit from massing, yet their results showed the opposite. Undaunted, we remained convinced that spacing would be more beneficial for repetition learning than for inductive learning- especially for older adults, given their overall declines in episodic memory. The current results disconfirmed our expectations once again. If our intuitions are erroneous, despite our years spent proving and praising the spacing effect-including roughly 40 years’ worth contributed by Robert A. Bjork-those of the average student are surely mistaken as well (as the inaccuracy of the participants’ metacognitive ratings suggests). We have, perhaps, fallen victim to the illusion that making learning easy makes learning effective, rather than recognizing that spacing is a desirable difficulty (Bjork1994) that enhances inductive learning as well as repetition learning well into old age.

    ↩︎
  18. From Son & Simon 201212ya:

    Thus, while spacing may boost learning, it may be thought to be relatively inefficient in terms of study time. As we discuss later, this feeling of inefficiency may be one of the reasons that spacing is not the more popular strategy. Interestingly, in that same study (Baddeley & Longman 197846ya; and see also Pirolli & Anderson 198539ya and Woodworth & Schlosberg 195470ya [Experimental Psychology]), there was evidence of such a thing as laboring in vain. That is, exceeding a certain number of hours of practice a day (more than approximately 2h) led to no increases in learning, as might be expected. Related to the deficient-processing theory mentioned above, these results are crucial in understanding intuitively how the spacing effect works: We simply get burnt out. These data are also analogous to the cognitive literature on overlearning, which shows that while continuous study over long periods of time might seem beneficial (and even feel good) in the short-term, the benefits disappear soon afterwards (Rohrer et al 200519ya; Rohrer & Taylor 200618ya)…In the above-described Baddeley & Longman1978’s study, for example, after postal workers practiced typing in either massed or spaced study sessions, they had to indicate how satisfied they were with the training. Results showed that while spacing led to the best learning, it was the least liked. Similarly, Simon & Bjork 200123ya found that people preferred the massing strategy on a motor learning task.

    ↩︎
  19. “Study strategies of college students: Are self-testing and scheduling related to achievement?”, Hartwig & Dunlosky 201212ya:

    Previous studies, such as those by Kornell and Bjork (Psychonomic Bulletin & Review, 14:219-224, 200717ya) and Karpicke, Butler, and Roediger (Memory, 17:471-479, 200915ya), have surveyed college students’ use of various study strategies, including self-testing and rereading. These studies have documented that some students do use self-testing (but largely for monitoring memory) and rereading, but the researchers did not assess whether individual differences in strategy use were related to student achievement. Thus, we surveyed 324 undergraduates about their study habits as well as their college grade point average (GPA). Importantly, the survey included questions about self-testing, scheduling one’s study, and a checklist of strategies commonly used by students or recommended by cognitive research. Use of self-testing and rereading were both positively associated with GPA. Scheduling of study time was also an important factor: Low performers were more likely to engage in late-night studying than were high performers; massing (vs. spacing) of study was associated with the use of fewer study strategies overall; and all students-but especially low performers-were driven by impending deadlines. Thus, self-testing, rereading, and scheduling of study play important roles in real-world student achievement.

    (See also Dunlosky et al 2013.) Note the self-testing correlation excludes flashcards, a result that both the authors and I found surprising. The sleep connection is interesting, given the hypothesized link between stronger memory formation & studying before a good night’s sleep - you can hardly get a good night’s sleep if you are cramming late into the night (correlated with lower grades), but you can if you do so at a reasonable time in the evening (in time to get a solid night’s sleep).

    See also Susser & McCabe2012:

    Laboratory studies have demonstrated the long-term memory benefits of studying material in multiple distributed sessions as opposed to one massed session, given an identical amount of overall study time (ie. the spacing effect). The current study goes beyond the laboratory to investigate whether undergraduates know about the advantage of spaced study, to what extent they use it in their own studying, and what factors might influence its utilization. Results from a web-based survey indicated that participants (n = 285) were aware of the benefits of spaced study and would use a higher level of spacing under ideal compared to realistic circumstances. However, self-reported use of spacing was intermediate, similar to massing and several other study strategies, and ranked well below commonly used strategies such as rereading notes. Several factors were endorsed as important in the decision to distribute study time, including the perceived difficulty of an upcoming exam, the amount of material to learn, how heavily an exam is weighed in the course grade, and the value of the material. Further, level of metacognitive self-regulation and use of elaboration strategies were associated with higher rates of spaced study.

    ↩︎
  20. Analytic Culture in the US Intelligence Community: An Ethnographic Study, Johnston 200519ya, pg89:

    To investigate the intensity of instructional interactions, Art Graesser and Natalie Person1994 compared questioning and answering in classrooms with those in tutorial settings.5 They found that classroom groups of students ask about three questions an hour and that any single student in a classroom asks about 0.11 questions per hour. In contrast, they found that students in individual tutorial sessions asked 20-30 questions an hour and were required to answer 117-146 questions per hour. Reviews of the intensity of interaction that occurs in technology-based instruction have found even more active student response levels. [J. D. Fletcher, Technology, the Columbus Effect, and the Third Revolution in Learning.]

    Although Graesser & Person 199430ya also found that sheer number of questions was not necessarily important, suggesting diminishing marginal returns or perhaps bad question asking.↩︎

  21. “SuperMemo is based on the insight that there is an ideal moment to practice what you’ve learned. Practice too soon and you waste your time. Practice too late and you’ve forgotten the material and have to relearn it. The right time to practice is just at the moment you’re about to forget. Unfortunately, this moment is different for every person and each bit of information. Imagine a pile of thousands of flash cards. Somewhere in this pile are the ones you should be practicing right now. Which are they?” Gary Wolf, “Want to Remember Everything You’ll Ever Learn? Surrender to This Algorithm”, Wired Magazine↩︎

  22. “Make no mistake about it: Computers process numbers - not symbols. We measure our understanding (and control) by the extent to which we can arithmetize an activity.” Perlis, ibid.↩︎

  23. this exponential expansion is how an SR program can handle continual input of cards: if cards were scheduled at fixed intervals, like every other day, review would soon become quite impossible - I have >18000 items in Mnemosyne, but I don’t have time to review 9000 questions a day! (The sketch below makes the arithmetic concrete.)
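
    As a concrete illustration (my own toy model, not Mnemosyne’s actual scheduler), compare how many reviews a single card accumulates over a decade under a fixed every-other-day schedule versus under intervals that roughly double after each successful review:

```haskell
-- Toy comparison of review load per card (not Mnemosyne's actual scheduler).
-- Fixed 2-day intervals: ~days/2 reviews per card.
-- Intervals that double after each successful review: ~log2(days) reviews per card.

doublingReviewDays :: Int -> [Int]
doublingReviewDays horizon = takeWhile (<= horizon) (scanl1 (+) (iterate (* 2) 1))

fixedReviewDays :: Int -> [Int]
fixedReviewDays horizon = [2, 4 .. horizon]

main :: IO ()
main = do
  let decade = 3650
  putStrLn ("Doubling intervals: " ++ show (length (doublingReviewDays decade)) ++ " reviews in 10 years")
  putStrLn ("Every other day:    " ++ show (length (fixedReviewDays decade)) ++ " reviews in 10 years")
  -- prints 11 vs 1825
```

    In this toy model a mature card ends up being seen only once every year or two, so a steady trickle of new cards produces a roughly constant daily workload rather than an ever-growing one.

    ↩︎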

  24. See the 2008 meta-analysis, “Learning Styles: Concepts and Evidence” (APS press release); from the abstract:

    …in order to demonstrate that optimal learning requires that students receive instruction tailored to their putative learning style, the experiment must reveal a specific type of interaction between learning style and instructional method: Students with one learning style achieve the best educational outcome when given an instructional method that differs from the instructional method producing the best outcome for students with a different learning style. In other words, the instructional method that proves most effective for students with one learning style is not the most effective method for students with a different learning style.

    Our review of the literature disclosed ample evidence that children and adults will, if asked, express preferences about how they prefer information to be presented to them. There is also plentiful evidence arguing that people differ in the degree to which they have some fairly specific aptitudes for different kinds of thinking and for processing different types of information. However, we found virtually no evidence for the interaction pattern mentioned above, which was judged to be a precondition for validating the educational applications of learning styles. Although the literature on learning styles is enormous, very few studies have even used an experimental methodology capable of testing the validity of learning styles applied to education. Moreover, of those that did use an appropriate method, several found results that flatly contradict the popular meshing hypothesis.

    We conclude therefore, that at present, there is no adequate evidence base to justify incorporating learning-styles assessments into general educational practice. Thus, limited education resources would better be devoted to adopting other educational practices that have a strong evidence base, of which there are an increasing number. However, given the lack of methodologically sound studies of learning styles, it would be an error to conclude that all possible versions of learning styles have been tested and found wanting; many have simply not been tested at all.

    ↩︎
  25. Fritz, C. O., Morris, P. E., Acton, M., Etkind, R., & Voelkel, A. R (200717ya). “Comparing and combining expanding retrieval practice and the keyword mnemonic for foreign vocabulary learning”. Applied Cognitive Psychology, 21, 499-526.↩︎

  26. From Balota et al 200618ya, describing Spitzer 193985ya, “Studies in retention”:

    Spitzer (193985ya) incorporated a form of expanded retrieval in a study designed to assess the ability of sixth graders to learn science facts. Impressively, Spitzer tested over 3600 students in Iowa-the entire sixth-grade population of 91 elementary schools at the time. The students read two articles, one on peanuts and the other on bamboo, and were given a 25-item multiple choice test to assess their knowledge (such as ‘To which family of plants does bamboo belong?’). Spitzer tested a total of nine groups, manipulating both the timing of the test (administered immediately or after various delays) and the number of identical tests students received (one to three). Spitzer did not incorporate massed or equal interval retrieval conditions, but he had at least two groups that were tested on an expanding schedule of retrieval, in which the intervals between tests were separated by the passage of time (in days) rather than by intervening to-be-learned information. For example, in one of the groups, the first test was given immediately, the second test was given seven days after the first test, and the third test was given 63 days after the second test. Thus, in essence, this group was tested on a 0-7-63 day expanding retrieval schedule. Spitzer compared performance of the expanded retrieval group to a group given a single test 63 days after reading the original article. On the first (immediate) test, the expanded retrieval group correctly answered 53% of the questions. After 63 days and two previous tests, their score was still an impressive 43%. The single test group correctly answered only 25% of the original items after 63 days, giving the expanded retrieval group an 18% retention advantage. This is quite impressive, given that this large benefit remained after a 63-day retention interval. Similar beneficial effects were found in a group tested on a 0-1-21 day expanded retrieval schedule compared to a group given a single test after 21 days. Of course, this study does not decouple the effects of testing from spacing or expansion, but the results do clearly indicate considerable learning and retention using the expanded repeated testing procedure. Spitzer concluded that ‘…examinations are learning devices and should not be considered only as tools for measuring achievement of pupils’ (p. 656, italics added)

    ↩︎
  27. “Distributing Learning Over Time: The Spacing Effect in Children’s Acquisition and Generalization of Science Concepts”, Vlach & Sandhofer 201212ya:

    The spacing effect describes the robust finding that long-term learning is promoted when learning events are spaced out in time, rather than presented in immediate succession. Studies of the spacing effect have focused on memory processes rather than for other types of learning, such as the acquisition and generalization of new concepts. In this study, early elementary school children (5-7 year-olds; N = 36) were presented with science lessons on one of three schedules: massed, clumped, and spaced. The results revealed that spacing lessons out in time resulted in higher generalization performance for both simple and complex concepts. Spaced learning schedules promote several types of learning, strengthening the implications of the spacing effect for educational practices and curriculum.

    ↩︎
  28. See also Balch 200618ya, who compared spacing & massed in an introductory psychology course as well.↩︎

  29. Roediger & Karpicke 200618yab again.↩︎

  30. Balota et al 200618ya review:

    No feedback or correction was given to subjects if they made errors or omitted answers. Landauer & Bjork1978 found that the expanding-interval schedule produced better recall than equal-interval testing on a final test at the end of the session, and equal-interval testing, in turn, produced better recall than did initial massed testing. Thus, despite the fact that massed testing produced nearly errorless performance during the acquisition phase, the other two schedules produced better retention on the final test given at the end of the session. However, the difference favoring the expanding retrieval schedule over the equal-interval schedule was fairly small at around 10%. In research following up Landauer and Bjork’s (197846ya) original experiments, practically all studies have found that spaced schedules of retrieval (whether equal-interval or expanding schedules) produce better retention on a final test given later than do massed retrieval tests given immediately after presentation (eg. Cull, 200024ya; Cull, Shaughnessy, & Zechmeister, 199628ya), although exceptions do exist. For example, in Experiments 3 and 4 of Cull et al 199628ya, massed testing produced performance as good as equal-interval testing on a 5-5-5 schedule, but most other experiments have found that any spaced schedule of testing (either equal-interval or expanding) is better than a massed schedule for performance on a delayed test. However, whether expanding schedules are better than equal-interval schedules for long-term retention-the other part of Landauer and Bjork’s interesting findings-remains an open question. Balota, Duchek, and Logan (in press) have provided a thorough consideration of the relevant evidence and have shown that it is mixed at best, and that most researchers have found no difference between the two schedules of testing. That is, performance on a final test at the end of a session often shows no difference in performance between equal-interval and expanding retrieval schedules.

    Cull, for those curious (Cull, W. L. (200024ya). “Untangling the benefits of multiple study opportunities and repeated testing for cued recall”. Applied Cognitive Psychology, 14, 215-235):

    Cull (200024ya) compared expanded retrieval to equal interval spaced retrieval in a series of four experiments designed to mimic typical teaching or study strategies encountered by students. He examined the role of testing versus simply restudying the material, feedback, and various retention intervals on final test performance. Paired associates (an uncommon word paired with a common word, such as bairn-print) were presented in a manner similar to the flashcard techniques students often use to learn vocabulary words. The intervals between retrieval attempts of to-be-learned information ranged from minutes in some experiments to days in others. Interestingly, across four experiments, Cull did not find any evidence of an advantage of an expanded condition over a uniform spaced condition (ie. no [substantial] expanded retrieval effect), although both conditions consistently produced large advantages over massed presentations. He concluded that distributed testing of any kind, expanded or equal interval, can be an effective learning aid for teachers to provide for their students.

    ↩︎
  31. The Balota et al 200618ya review offers a synthesis of current theories on how massed and spaced differ, based on memory encoding:

    According to encoding variability theory, performance on a memory test is dependent upon the overlap between the contextual information available at the time of test and the contextual information available during encoding. During massed study, there is relatively little time for contextual elements to fluctuate between presentations and so this condition produces the highest performance in an immediate memory test, when the test context strongly overlaps with the same contextual information encoded during both of the massed presentations. In contrast, when there is spacing between the items, there is time for fluctuation to take place between the presentations during study, and hence there is an increased likelihood of having multiple unique contexts encoded. Because a delayed test will also allow fluctuation of context, it is better to have multiple unique contexts encoded, as in the spaced presentation format, as opposed to a single encoded context, as in the massed presentation format.

    Storm et al 201014ya did 3 experiments on reading comprehension:

    On a test 1 week later, recall was enhanced by the expanding schedule, but only when the task between successive retrievals was highly interfering with memory for the passage. These results suggest that the extent to which learners benefit from expanding retrieval practice depends on the degree to which the to-be-learned information is vulnerable to forgetting.

    ↩︎
  32. From Mnemosyne’s Principles page:

    The Mnemosyne algorithm is very similar to SM2 used in one of the early versions of SuperMemo. There are some modifications that deal with early and late repetitions, and also to add a small, healthy dose of randomness to the intervals. Supermemo now uses SM11. However, we are a bit skeptical that the huge complexity of the newer SM algorithms provides for a statistically relevant benefit. But, that is one of the facts we hope to find out with our data collection. We will only make modifications to our algorithms based on common sense or if the data tells us that there is a statistically relevant reason to do so.
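
    For concreteness, here is a simplified sketch of the SM2 scheme referred to above, following its published outline (first interval 1 day, second 6 days, then multiply by an easiness factor adjusted by each grade); Mnemosyne’s real algorithm adds the early/late-repetition handling and randomization mentioned in the quote, and the code and names here are my own illustration:

```haskell
-- Simplified SM-2-style update (after the published SM-2 description);
-- Mnemosyne's real algorithm modifies this and adds a small random spread to intervals.
data Card = Card { easiness :: Double, reps :: Int, interval :: Double }
  deriving Show

newCard :: Card
newCard = Card { easiness = 2.5, reps = 0, interval = 0 }

-- | Update a card given a grade q in 0..5 (>=3 counts as a successful recall).
review :: Int -> Card -> Card
review q (Card ef n ivl)
  | q < 3     = Card ef' 0 1              -- failed: start the repetition cycle over
  | otherwise = Card ef' (n + 1) ivl'
  where
    -- easiness-factor update, floored at 1.3
    ef'  = max 1.3 (ef + (0.1 - fromIntegral (5 - q) * (0.08 + fromIntegral (5 - q) * 0.02)))
    ivl' = case n of
             0 -> 1
             1 -> 6
             _ -> ivl * ef'

main :: IO ()
main = print (foldl (flip review) newCard [4, 4, 4, 4])
-- intervals go 1, 6, ~15, ~38 days for a card graded 4 each time
```

    In this sketch a grade below 3 resets the interval to a single day; deliberately entering low grades (the cramming trick of grading oneself artificially low mentioned in an earlier footnote) therefore forces the scheduler into many near-term repetitions.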

    ↩︎
  33. Balota et al 200618ya:

    Carpenter and DeLosh (200519ya, Exp. 2) have recently investigated face-name learning under massed, expanded (1-3-5), and equal interval (3-3-3) conditions. This study also involved study and study-and-test procedures during the acquisition phase. Carpenter and DeLosh found a large effect of spacing, but no evidence of a benefit of expanded over equal interval practice. In fact, Carpenter and DeLosh reported a reliable benefit of the equal interval condition over the expanded retrieval condition.

    ↩︎
  34. Balota et al 200618ya again:

    Rea & Modigliani 198539ya tested the effectiveness of expanded retrieval in a third-grade classroom setting. In separate conditions, students were given new multiplication problems or spelling words to learn. The problem or word was presented audiovisually once and then tested on either a massed retrieval schedule of 0-0-0-0 or an expanding schedule of 0-1-2-4, in which the intervals involved being tested on old items or learning new items. After each test trial for a given item, the item was re-presented in its entirety so students received feedback on what they were learning. Performance during the learning phase was at 100% for both spelling words and multiplication facts. On an immediate final retention test, Rea and Modigliani found a performance advantage for all items-math and spelling- practiced on an expanding schedule compared to the massed retrieval schedule. They suggested, as have others, that spacing combined with the high success rate inherent in the expanded retrieval schedule produced better retention than massed retrieval practice. However, as in Spitzer’s study, Rea and Modigliani did not test an appropriate equal interval spacing condition. Hence, their finding that expanded retrieval is superior to massed retrieval in third graders could simply reflect the superiority of spaced versus massed rehearsal-in other words, the spacing effect.

    ↩︎
  35. Balota et al 2006.↩︎

  36. Balota et al 200618ya; >1 is rare in psychology, see “One Hundred Years of Social Psychology Quantitatively Described”, Bond et al 2003↩︎

  37. Rohrer & Taylor2006↩︎

  38. Balota et al 200618ya:

    …long-term retention of information has been demonstrated over several days in some cases (eg. Camp et al, 199628ya). For example, in the latter study, Camp et al employed an expanding retrieval strategy to train 23 individuals with mild to moderate AD to refer to a daily calendar as a cue to remember to perform various personal activities (eg. take medication). Following a baseline phase to determine whether subjects would spontaneously use the calendar, spaced retrieval training was implemented by repeatedly asking the subject the question, ‘How are you going to remember what to do each day?’ at expanding time intervals. The results indicated that 20/23 subjects did learn the strategy (ie. to look at the calendar) and retained it over a 1-week period.

    ↩︎
  39. Rohrer & Taylor 200618ya warns us, though, about many of the other math studies:

    In one meta-analysis by Donovan & Radosevich 199925ya, for instance, the size of the spacing effect declined sharply as conceptual difficulty of the task increased from low (eg. rotary pursuit) to average (eg. word list recall) to high (eg. puzzle). By this finding, the benefits of spaced practise may be muted for many mathematics tasks.

    ↩︎
  40. What is especially nice about this study was that not only did it use high-quality (intelligent & motivated) college students (United States Air Force Academy), the conditions were relatively controlled - both groups had the same homework (so equal testing effect), but like Rohrer & Taylor 200618ya/200717ya, the distribution was what varied:

    The course topics, textbook, handouts, reading assignments, and graded assignments (with the exception of quiz, homework, and participation points) were identical for the treatment and control groups. The listing of homework assignments in the syllabus differed between groups. The control group was assigned daily homework related to the topic(s) presented that day in class. Peterson (197153ya) calls this the vertical model for assigning mathematics homework. The treatment group was assigned homework in accordance with a distributed organizational pattern that combines practice on current topics and reinforcement of previously covered topics. Under the distributed model, approximately 40% of the problems on a given topic were assigned the day the topic was first introduced, with an additional 20% assigned on the next lesson and the remaining 40% of problems on the topic assigned on subsequent lessons (Hirsch et al, 198341ya). In Hirsch’s research and in this study, after the initial homework assignment, problem(s) representing a given topic resurfaced on the 2nd, 4th, 7th, 12th, and 21st lesson. Consequently, treatment group homework for lesson one consisted of only one topic; homework for lessons two and three consisted of two topics; and homework for lesson four through six consisted of three topics. This pattern continued as new topics were added and was applied to all non-exam, non-laboratory lessons. As shown by Tables 1 and 2, the same homework problems were assigned to both groups with only the pattern of assignment differing. Because of the nature of the distributed practice model, homework for the treatment group contained fewer problems (relative to the control group) early in the semester with the number of problems increasing as the semester progressed. Later in the semester, homework for the treatment group contained more problems (relative to the control group)….The USAFA routinely collects study time data. After each exam, a large sample of cadets (at least 60% of the course population) anonymously reported the amount of time (in minutes) spent studying for the exam. Time spent studying was approximately equal for both groups (see Table 5). Descriptive data reveals that, for both the treatment and control group, study time for the third exam was at least 16% greater than study time for any other exam. Study time for the final exam was at least 68% greater than study time for any of the hourly exams (see Table 5)

    …The treatment produced an effect size (f 2) of 0.013 on the first exam, 0.029 on the second exam, 0.035 on the fourth exam, and 0.040 on the final course percentage grade. Although the effect sizes appear to be small, the treatment group outscored the control group in every case. A mean difference of 5.13 percentage points on the first, second, and fourth exam translates to an advantage of about a third of a letter grade for students in the treatment group. In addition, higher minimum scores earned by the treatment group may indicate that the distributed practice treatment served to eliminate the extremely low scores (refer to Table 3)….Oddly, the distributed practice treatment did not produce a [statistically-]significant effect on final exam scores. One possible cause for the disparity was the USAFA policy exempting the top performers from the final exam. Of the 16 exempted students, 11 were from the treatment group with only 5 from the control group.

    ↩︎
  41. Balch 200618ya abstract:

    Two introductory psychology classes (N = 145) participated in a counterbalanced classroom experiment that demonstrated the spacing effect and, by analogy, the benefits of distributed study. After hearing words presented twice in either a massed or distributed manner, participants recalled the words and scored their recall protocols, reliably remembering more distributed than massed words. Posttest scores on a multiple-choice quiz covering points illustrated by the experiment averaged about twice the comparable pretest scores, indicating the effectiveness of the exercise in conveying content. Students’ subjective ratings suggested that the experiment helped convince them of the benefits of distributed study.

    ↩︎
  42. See Cepeda et al 2006↩︎

  43. Commins, S., Cunningham, L., Harvey, D., and Walsh, D. (200321ya). “Massed but not spaced training impairs spatial memory”. Behavioural Brain Research 139, 215-223↩︎

  44. Galluccio & Rovee-Collier 200618ya, “Nonuniform effects of reinstatement within the time window”. Learning and Motivation, 37, 1-17.↩︎

  45. See the previous sections for many using children; one previously uncited is Toppino 199331ya, “The spacing effect in preschool children’s free recall of pictures and words”; but Toppino et al 2009 adds some interesting qualifiers to spaced repetition in the young:

    Preschoolers, elementary school children, and college students exhibited a spacing effect in the free recall of pictures when learning was intentional. When learning was incidental and a shallow processing task requiring little semantic processing was used during list presentation, young adults still exhibited a spacing effect, but children consistently failed to do so. Children, however, did manifest a spacing effect in incidental learning when an elaborate semantic processing task was used.

    ↩︎
  46. Another previously uncited study: Glenberg, A. M. (197945ya), “Component-levels theory of the effects of spacing of repetitions on recall and recognition”. Memory & Cognition, 7, 95-112.↩︎

  47. See Kornell et al 201014ya; Simone et al 2012 shows the spacing benefits but reduced in magnitude in its 56-74 year old subjects, similar to Jackson et al 2012 and Maddox2013↩︎

  48. Mammarella, N., Russo, R., & Avons, S. E. (200222ya). “Spacing effects in cued-memory tasks for unfamiliar faces and nonwords”. Memory & Cognition, 30, 1238-1251↩︎

  49. Childers, J. B., & Tomasello, M. (200222ya). ”Two-year-olds learn novel nouns, verbs, and conventional actions from massed or distributed exposures”. Developmental Psychology, 38, 967-978↩︎

  50. eg. Fishman et al 1968↩︎

  51. The famous ‘10,000 hours of practice’ figure may not be as true or important as Ericsson and publicizers like Malcolm Gladwell imply, given the high variance of expertise against time and results from sports showing smaller time investments (see also Hambrick’s corpus cutting ‘deliberate practice’ down to size); Ericsson also absurdly denies the powerful role of genetics and the necessary condition of having talent, but the insight that ‘deliberate practice’ helps talented people is probably real. One may be able to get away with 3,000 hours rather than 10,000, but one isn’t going to do that with mindless repetition or no repetitions.↩︎

  52. Gentner, D., Loewenstein, J., & Thompson, L. (200321ya). “Learning and transfer: A general role for analogical encoding”. Journal of Educational Psychology, 95, 393-40↩︎

  53. From Kornell et al 201014ya:

    The benefits of spacing seem to diminish or disappear when to-be-learned items are not repeated exactly (Appleton-Knapp, Bjork, & Wickens, 2005)…a number of studies have shown that massing, rather than spacing, promotes inductive learning. These studies have generally employed relatively simple perceptual stimuli that facilitate experimental control (Gagné, 1950; Goldstone, 1996; Kurtz & Hovland, 1956; [Whitman J. R., & Garner, W. R. (196361ya). “Concept learning as a function of the form of internal structure”. Journal of Verbal Learning & Verbal Behavior, 2, 195-202]).

    ↩︎
  54. High error rates - indicating one didn’t actually learn the card contents in the first place - seem to be connected to failures of the spacing effect; there’s some evidence that people naturally choose to mass study when they don’t yet know the material.↩︎

  55. The 20 years look like this (note the scientific notation): [0.742675, 0.27044575182838654, 0.15275979054767388, 0.10348750000000001, 7.751290630254386e-2, 6.187922936397532e-2, 5.161829250474865e-2, 4.445884397854832e-2, 3.923055555555555e-2, 3.5275438307530015e-2, 3.219809429218694e-2, 2.9748098818459235e-2, 2.7759942051635768e-2, 2.6120309801216147e-2, 2.474928593068675e-2, 2.35890625e-2, 2.2596898475825956e-2, 2.1740583401051353e-2, 2.0995431241707652e-2, 2.0342238287817983e-2]↩︎

  56. modulo things where knowing it is useful even if you don’t need it often - it can be a brick in a pyramid of knowledge; cf. page 3 of Wolf:

    The problem of forgetting might not torment us so much if we could only convince ourselves that remembering isn’t important. Perhaps the things we learn - words, dates, formulas, historical and biographical details - don’t really matter. Facts can be looked up. That’s what the Internet is for. When it comes to learning, what really matters is how things fit together. We master the stories, the schemas, the frameworks, the paradigms; we rehearse the lingo; we swim in the episteme.

    The disadvantage of this comforting notion is that it’s false. “The people who criticize memorization - how happy would they be to spell out every letter of every word they read?” asks Robert Bjork, chair of UCLA’s psychology department and one of the most eminent memory researchers. After all, Bjork notes, children learn to read whole words through intense practice, and every time we enter a new field we become children again. “You can’t escape memorization,” he says. “There is an initial process of learning the names of things. That’s a stage we all go through. It’s all the more important to go through it rapidly.” The human brain is a marvel of associative processing, but in order to make associations, data must be loaded into memory.

    ↩︎
  57. See Stephen R. Schmidt’s webpage “Theories of Forgetting”, which cites ‘Woodworth & Schlosberg (196163ya)’ when presenting a log graph of various studies’ forgetting curves.↩︎

  58. which neatly addresses the issue of such mailing lists being useless (‘who learns a word after just one exposure?’).↩︎

  59. Mnemosyne in this case constitutes both a way to learn the quotes so I can use them, and a waste book; just the other day I had 3 or 4 apposite quotes for an essay because I had entered them into Mnemosyne months or years ago.↩︎

  60. It’s well known that any speaker of a language understands many more words than they will ever use or be able to explicitly generate, that their “reading vocabulary” exceeds their “writing vocabulary”; less well-known is that on many problems, one can guess at well above random rates even while feeling unsure & ignorant, necessitating psychologists to employ forced-choice paradigms to reveal such “dark knowledge”. Even less known is the capacity of recognition memory or “implicit memory” (cf. McCollough effect); this memory can apply to things like recognizing images or text or music, typing, puzzle solving, etc. Andrew Drucker, in “Multiplying 10-digit numbers using Flickr: The power of recognition memory”, employs visual memory to calculate 9,883,603,368 × 4,288,997,768 = 42,390,752,785,149,282,624; he cites as precedent Standing1973:

    In one of the most widely-cited studies on recognition memory, Standing showed participants an epic 10,000 photographs over the course of 5 days, with 5 seconds’ exposure per image. He then tested their familiarity, essentially as described above. The participants showed an 83% success rate, suggesting that they had become familiar with about 6,600 images during their ordeal. Other volunteers, trained on a smaller collection of 1,000 images selected for vividness, had a 94% success rate.

    One sometimes sees people argue that something is insecure or unguessable or free from possible placebo effect because it involves too many objects to explicitly memorize, but as these examples make clear, recognition memory can happen quickly and store surprisingly large amounts of information. This could be used for authentication (see for example Bojinov et al 2012; HN discussion) or messaging, since recognition memory could be exploited as a sort of secure communication system. Two parties can share a set of 20,000 photographs (10,000 pairs); to send a message, have a messenger spend 5 days on 10,000 picked ones; and then to receive it, ask him to recognize which photograph he saw in each of the 10,000 pairs. The subject not only does not know what the binary message is or what it means; he can’t even produce it, since he cannot remember the photographs!

    At an 80% accuracy rate, we can even calculate how many bits of information can be entrusted to the messenger using Shannon’s noisy-channel coding theorem: each pair acts as a binary symmetric channel with error probability p = 0.2, whose capacity is 1 − H(p) = 1 + (p × log2 p + (1 − p) × log2 (1 − p)) ≈ 0.278 bits per pair, so the 10,000 pairs can reliably carry about 10,000 × 0.278 ≈ 2,781 bits, or roughly 2.8 kilobits, as the upper limit. (This message can, of course, be encrypted.)
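    To make the arithmetic concrete, here is a minimal Python sketch (mine, not from any cited source) that treats each photograph pair as one use of a binary symmetric channel with a 20% error rate:

    ```python
    # Hypothetical sketch: Shannon capacity of the photo-pair "channel",
    # modeling each pair as a binary symmetric channel (BSC) whose
    # crossover probability is the messenger's 20% recognition error rate.
    from math import log2

    def bsc_capacity(p: float) -> float:
        """Capacity in bits per use of a BSC with error probability p."""
        if p in (0.0, 1.0):
            return 1.0  # a perfectly (anti-)reliable channel carries 1 bit per use
        binary_entropy = -(p * log2(p) + (1 - p) * log2(1 - p))  # H(p)
        return 1 - binary_entropy

    pairs = 10_000   # one potential bit per photograph pair
    p_error = 0.2    # messenger mis-identifies ~20% of pairs

    print(bsc_capacity(p_error))          # ~0.278 bits per pair
    print(pairs * bsc_capacity(p_error))  # ~2,781 bits, i.e. roughly 2.8 kilobits
    ```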

    So we see that Frank Herbert was right after all: the securest way to send a message is through a “distrans” messenger! (The downside is that the implicit recognition memory decays; see Landauer1986 for adjusted estimates.)

    This system is even more interesting because the learning happens unconsciously, without volition, so the subject does not need to cooperate or even know about it (they could be exposed to key images without realizing it, such as through ‘advertising’). Further, recognition of an image also happens unconsciously, and can be observed via EEG ERPs & fMRI (and probably other neural correlates or modalities like eyetracking or galvanic skin response). Thus, messages can be stored & retrieved both unconsciously & involuntarily in brains!↩︎

  61. In this vein, I am reminded of what a former polyphasic sleeper told me:

    I’ve been polyphasic for about a year. (Not anymore; kills my memory.)…Anki reps, mostly. I found that I could do proper review sessions for about 2-3 days and would hit an impenetrable wall. I couldn’t learn a single new card and had total brain fog until I got 3 hours more sleep. That, however, would reset my adaptation. The whole effect is a bit less pronounced on Everyman, but not much. It is however easier to add sleep when you already have a core. I didn’t notice any other major mental impairment after the initial sleep deprivation.

    ↩︎
  62. For a more recent review, see Philips et al 2013.↩︎

  63. Presumably one would immediately give them all some high grade like 5 to avoid suddenly having a daily load of 500 cards for a while.↩︎

  64. Smaller is better.↩︎

  65. “For Mnemosyne 2.x, Ullrich is working on an official Mnemosyne iPhone client which will have very easy syncing.”↩︎

  66. Wired↩︎

  67. See page 4, Wolf 2008:

    The spacing effect was one of the proudest lab-derived discoveries, and it was interesting precisely because it was not obvious, even to professional teachers. The same year that Neisser revolted, Robert Bjork, working with Thomas Landauer of Bell Labs, published the results of two experiments involving nearly 700 undergraduate students. Landauer and Bjork were looking for the optimal moment to rehearse something so that it would later be remembered. Their results were impressive: The best time to study something is at the moment you are about to forget it. And yet - as Neisser might have predicted - that insight was useless in the real world.

    ↩︎
  68. When I first read of SuperMemo, I had already taken a class in cognitive psychology and was reasonably familiar with Ebbinghaus’s forgetting curve - so my reaction to its methodology was Huxley’s: “How extremely stupid not to have thought of that!”↩︎

  69. See page 7, Wolf 2008:

    And yet now, as I grin broadly and wave to the gawkers, it occurs to me that the cold rationality of his approach may be only a surface feature and that, when linked to genuine rewards, even the chilliest of systems can have a certain visceral appeal. By projecting the achievement of extreme memory back along the forgetting curve, by provably linking the distant future - when we will know so much - to the few minutes we devote to studying today, Wozniak has found a way to condition his temperament along with his memory. He is making the future noticeable. He is trying not just to learn many things but to warm the process of learning itself with a draft of utopian ecstasy.

    ↩︎