“Methods for Studying Coincidences”, 1989:
This article illustrates basic statistical techniques for studying coincidences. These include data-gathering methods (informal anecdotes, case studies, observational studies, and experiments) and methods of analysis (exploratory and confirmatory data analysis, special analytic techniques, and probabilistic modeling, both general and special purpose). We develop a version of the birthday problem general enough to include dependence, inhomogeneity, and almost and multiple matches. We review Fisher’s techniques for giving partial credit for close matches. We develop a model for studying coincidences involving newly learned words. Once we set aside coincidences having apparent causes, four principles account for large numbers of remaining coincidences: hidden cause; psychology, including memory and perception; multiplicity of endpoints, including the counting of “close” or nearly alike events as if they were identical; and the law of truly large numbers, which says that when enormous numbers of events and people and their interactions cumulate over time, almost any outrageous event is bound to occur. These sources account for much of the force of synchronicity.
[Keywords: birthday problems, extrasensory perception, Jung, Kammerer, multiple endpoints, rare events, synchronicity]
…Because of our different reading habits, we readers are exposed to the same words at different observed rates, even when the long-run rates are the same. Some words will appear relatively early in your experience, some relatively late. The model we are using supposes that the appearance of new words is like a Poisson process, so that the waiting time to first occurrence is exponential. Under this model, more than half of the words will appear before their expected time of appearance, in fact more than 60% of them (1 − 1/e ≈ 63%). On the other hand, some words will take more than twice the average time to appear, about 1/7 of them (1/e² ≈ 0.14) in the exponential model. They will look rarer than they actually are. Furthermore, because the exponential distribution is memoryless, their average time to reappearance is less than half their observed time to first appearance, and about 10% of those that took at least twice as long as they should have to occur will reappear in less than 1/20 of the time they originally took. The phenomenon that accounts for part of this variable behavior of the words is, of course, the regression effect.
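Under the exponential waiting-time assumption, the quoted fractions can be checked directly. A minimal sketch using only the standard library (the mean waiting time T drops out of every fraction):

```python
import math

# Exponential waiting time with mean T: P(X <= t) = 1 - exp(-t/T).
# None of the fractions below depend on the particular value of T.

# Fraction of words appearing before their expected time T:
p_early = 1 - math.exp(-1)            # about 0.63, the "more than 60%"

# Fraction taking more than twice the average time to appear:
p_slow = math.exp(-2)                 # about 0.135, roughly 1/7

# By memorylessness, a word that took about 2T to appear waits a fresh
# Exponential(T) time to reappear; the chance it reappears within 1/20
# of that original 2T wait (i.e. within T/10) is:
p_quick_return = 1 - math.exp(-0.1)   # about 0.095, the "about 10%"

print(round(p_early, 3), round(p_slow, 3), round(p_quick_return, 3))
# → 0.632 0.135 0.095
```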
…We now extend the model. Suppose that we are somewhat more complicated creatures, that we require k exposures to notice a word for the first time, and that k is itself a Poisson random variable…Then, the mean time until the word is noticed is (𝜆 + 1)T, where T is the average time between actual occurrences of the word and 𝜆 is the mean of k. The variance of the time is (2𝜆 + 1)T². Suppose T = 1 year and 𝜆 = 4. Then, as a normal approximation, 5% of the words will take at least time [𝜆 + 1 + 1.65(2𝜆 + 1)^1⁄2]T, or about 10 years, to be detected the first time. Assume further that, now that you are sensitized, you will detect the word the next time it appears.
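The extended model is easy to simulate. One reading consistent with the stated mean (𝜆 + 1)T and variance (2𝜆 + 1)T² is that k Poisson(𝜆)-distributed exposures go unnoticed and the (k + 1)th is noticed, so the detection time is a sum of k + 1 exponential gaps. A Monte Carlo sketch under that assumption (with a hand-rolled Poisson sampler, since Python's standard library lacks one):

```python
import math
import random

def poisson(lam, rng):
    # Knuth's multiplication method for one Poisson(lam) draw.
    limit = math.exp(-lam)
    k, prod = 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

lam, T = 4, 1.0            # Poisson mean and mean gap between occurrences (years)
rng = random.Random(0)
n = 200_000
times = []
for _ in range(n):
    k = poisson(lam, rng)  # unnoticed exposures, plus the one finally noticed
    times.append(sum(rng.expovariate(1 / T) for _ in range(k + 1)))

mean = sum(times) / n                          # theory: (lam + 1) * T = 5
var = sum((t - mean) ** 2 for t in times) / n  # theory: (2 * lam + 1) * T**2 = 9
q95 = sorted(times)[int(0.95 * n)]             # normal approx. in text: ~10 years
```

The simulated mean and variance match the formulas; the simulated 95th percentile comes out somewhat above the normal approximation's 10 years, because the distribution of detection times is right-skewed.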
On average the wait will be a year, but about 3% of these words that were so slow to be detected the first time will appear again within a month by natural variation alone. So what took 10 years to happen once happens again within a month. No wonder we are astonished. One of our graduate students learned the word “formication” on a Friday and read part of this manuscript the next Sunday, two days later, illustrating the effect and providing an anecdote. Here, sensitizing the individual, the regression effect, and the recall of notable events (with the non-recall of humdrum events) produce a situation where coincidences are noted with much higher than their expected frequency. This model can explain vast numbers of seeming coincidences.