---
title: "Leprechaun Hunting & Citogenesis"
description:
created: 2014-06-30
modified: 2021-05-15
status: finished
previous: /search
next: /replication
confidence: highly likely
importance: 3
cssExtension: dropcaps-de-zs
...
> Many claims, about history in particular, turn out to be false when traced back to their origins, and form kinds of academic urban legends. These "leprechauns" are particularly pernicious because they are often widely-repeated due to their growing [apparent trustworthiness](!W "Woozle effect"), yet difficult to research & debunk due to the difficulty of following deeply-nested chains of citations through ever more obscure sources. This page lists instances I have run into.
>
> A major source of leprechaun transmission is the frequency with which researchers do not read the papers they cite: because they do not read them, they repeat misstatements or add their own errors, further transforming the leprechaun and adding another link in the chain for anyone seeking the original source. This can be quantified by checking statements against the original paper, and by examining the spread of *typos* in citations: someone reading the original will fix a typo in the usual citation, or is unlikely to make the same typo, and so will not repeat it. Both methods indicate high rates of non-reading, explaining how leprechauns can propagate so easily.

# Leprechaun hunting and historical context

In trying to chase down references to obtain their fulltext and the original primary sources (so much easier in [the era of search engines](/search "'Internet Search Tips', Branwen 2018")), I sometimes wind up discovering that the claims as stated are blatantly false, such as [Kepler's portrait](https://arxiv.org/abs/2108.02213 "‘How a fake Kepler portrait became iconic’, Shore & Pavlík 2021"), and are the end product of a long memetic evolution (often politically-biased or [Whiggish](!W "Whig history"), and sometimes [with devastating consequences](https://www.wired.com/story/the-teeny-tiny-scientific-screwup-that-helped-covid-kill/ "The 60-Year-Old Scientific Screwup That Helped Covid Kill: All pandemic long, scientists brawled over how the virus spreads. Droplets! No, aerosols! At the heart of the fight was a teensy error with huge consequences")).

These urban legends or academic myths were dubbed "leprechauns" by Laurent Bossavit in his book [_The Leprechauns of Software Engineering: How folklore turns into fact and what to do about it_](https://www.amazon.com/Leprechauns-Software-Engineering-Laurent-Bossavit/dp/2954745509), because in tracing well-known claims about programming, at the end of the rainbow of a clear, useful, important claim repeated ad nauseam for decades, one often discovers that the basis for the claim is fool's gold, vanishing the next morning like a leprechaun's pot of gold---the original source was a terrible experiment, an anecdote, completely irrelevant, or even outright fictional.

(Not to be confused with [Replication Crisis](/replication "'The Replication Crisis: Flaws in Mainstream Science', Branwen 2010")-style issues, where claims disappear all the time, but because they were based on misleading data/analyses or deliberate fraud; with leprechauns and urban legends, it's more the cumulative effect of carelessness and 'game of telephone' effects, possibly with *some* bias as the seed of a dubious outlier claim which is [amplified due to its memetic properties](/littlewood "'Littlewood’s Law and the Global Media', Branwen 2018"){#gwern-littlewood}.)

## Leprechaun Examples

A list of examples of claims I have had the misfortune to spend time looking into which, on closer investigation, turned out to vanish with the dew:

- Supposedly [a man with hydrocephalus destroying >90% of his brain graduated with a math degree](/hydrocephalus "'Hydrocephalus and Intelligence: The Hollow Men', Branwen 2015"); this can't be *directly* shown to be false, but it traces back to popular articles, and real research on this anecdotal case was, suspiciously, never published; the weight of the evidence about it, contrasted with other hydrocephalus cases (some affected by research fraud), strongly suggests error or omission of damaging details.
- [Drapetomania](!W) is cited as an example of the Antebellum South's medicalization of slaves the better to oppress them, ignoring the fact that it was supported only by its inventor, was mocked, had no practical consequences, and was of less importance to its time than [Time Cube](!W) is to our own.
- The British science writer [Dionysius Lardner](!W) supposedly scoffed at the idea of fast trains, claiming "Rail travel at high speed is not possible because passengers, unable to breathe, would die of asphyxia"; but there is [no good source that he ever said that](https://en.wikipedia.org/wiki/Talk:Dionysius_Lardner#Did_he_actually_say_that.3F), and it seems to have been made up out of whole cloth in 1980 by someone who couldn't spell his first name right.
- [Bicycle face](!W) was claimed by an encyclopedia and a few other feminist books to be a disorder pushed by the English medical establishment to discourage women from bicycling & keep them under control; but [the scanty primary sources barely supported its existence](https://en.wikipedia.org/wiki/Talk:Bicycle_face#Serious_sourcing_issues) as an obscure concept known from a few newspaper columns, and certainly not as the misogynist tool of oppression it was depicted as. (There appear to be similar problems with Rachel Maines's claims about Victorian doctors' use of vibrators, in that the sources simply do not support her claims: ["A Failure of Academic Quality Control: _The Technology of Orgasm_", Lieberman & Schatzberg 2018](https://journalofpositivesexuality.org/wp-content/uploads/2018/08/Failure-of-Academic-Quality-Control-Technology-of-Orgasm-Lieberman-Schatzberg.pdf).)
- A feminist wrote that 'This was a time before women had the right to vote. If they did attend college at all, it was at the risk of contracting "neuralgia, uterine disease, hysteria, and other derangements of the nervous system" (according to Harvard gynecologist Edward H. Clarke)'; this was a grossly out-of-context quote which libeled a man with noble & progressive beliefs, as [I pointed out in my comment](https://madamescientist.com/2014/04/11/on-the-shoulders-of-giants/#comment-6577).
- There are many attributions to the great physicist [Lord Kelvin](!W) of a line which runs "X-rays are probably a hoax" or "X-rays are frauds" or somesuch; [a closer investigation](https://en.wikiquote.org/wiki/Talk:William_Thomson#.22X-rays_will_prove_to_be_a_hoax.22) shows that there are no primary quotations, that the real context seems to have been his reaction to sensationalized newspaper articles on the discovery of X-rays, and that in any case, he accepted X-rays as soon as he read the scientific paper describing their discovery. (He probably also did not say "Radio has no future.")
- We all know spinach has lots of iron---or does it not, or [does it not *not* have lots of iron?](https://sss.sagepub.com/content/44/4/638.long "'Academic urban legends', Rekdal 2014") Let's go deeper: [Hamblin 1981](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1507475/pdf/bmjcred00690-0047.pdf "'Fake!', Hamblin 1981") reveals that the widely-held belief that Popeye eats spinach because it contains lots of iron is a myth, and spinach has normal iron amounts: a myth ultimately caused by sloppy German chemists typoing a decimal point and uncritically repeated ever since, cited as an example of leprechauns/urban legends/errors in science... Which [Sutton 2010](https://www.erwinmayer.com/wp-content/uploads/2010/10/Sutton_Spinach_Iron_and_Popeye_March_2010.pdf) traces the versions of, ultimately finding the spinach myth to be itself a myth, with no decimal point involved at any point, and the myth coming to Hamblin, as Hamblin agrees, via [_Reader’s Digest_](https://dysology.blogspot.com/2017/12/the-spinach-popeye-iron-decimal-error.html)...

  Except [Rekdal 2014](https://journals.sagepub.com/doi/full/10.1177/0306312714535679) points out that the story was indeed published in _Reader’s Digest_---but 8 years afterward... And [Joachim Dagg](https://historiesofecology.blogspot.com/2015/10/the-real-decimal-point-error-that.html) in 2015 finds a decimal point error *elsewhere*, for the iron content of beans, which was debunked by a Bender, with the debunking passed on to Hamblin along with a confusion into spinach... But [Sutton](!W "Mike Sutton (criminologist)"), in 2018, [accuses Dagg and another](https://patrickmatthew.com/book-reviews.html) of being obsessive cyberstalkers out to discredit Sutton's work---proposing that Darwin plagiarized evolution, a revelation covered up by "Darwin cultists"---and that Dagg's interest in the spinach myth-myth is merely part of an epic multi-year harassment campaign:

  > Meanwhile, in 2018, Dagg, who like Derry cyberstalks me obsessively around the Internet eg. posting obsessive juvenile comments on the Amazon book reviews that I write etc (eg. here), writes in the Linnean Society paper in which he jealously plagiarises what he proves in his own words he prior knew (eg. in 2014 and later here) to be my original Big Data IDD "Selby cited Matthew" discovery, thanks the malicious and jealous intimidating cyberstalker Derry and his friend Mike Weale. Notably, Weale cited my original (Sutton 2014) (Selby and six other naturalists cited Matthew pre-1858) prior-published peer reviewed journal bombshell discovery in his 2015 Linnean Society paper and openly thanks me for assisting him with that paper. He also thanks Dagg in the same paper. As further proof of his absolute weird obsession with me, Dagg (here) also jealously retraces all my prior-published steps in my original and now world-famous spinach, decimal point error supermyth bust. I think he was trying - but once gain failing (here obsessing most desperately about me and my research once again) - to discredit me anyway he could, which is the usual behaviour of obsessed stalking cultists, unable to deal with the verifiable new cult-busting facts they despise, so going desperately after the reputation of their discoverer instead. His Linnean Society Journal friend Derry is totally obsessed with me for the very same reason. He too, for apparently the very same reason, tries but also fails to discredit the spinach supermythbust on his desperate pseudo-scholarly obsessive stalker site (here). What a pair of jealous and obsessive sad clowns they are.

  So, can we really trust Dagg *or* Sutton...?

- plagiarist [Johann Hari](!W) claims that prohibitionist [Harry J. Anslinger](!W) surveyed 30 scientific experts about the safety of marijuana, ignored the 29 telling him it was safe, and based his anti-marijuana campaign on the remaining 1 scientist; each point there is false.

  [In actuality after checking Hari's sources](https://www.reddit.com/r/wikipedia/comments/57lqr7/of_30_leading_scientists_whose_views_he_sought_29/d8td27r/), Anslinger did not survey them, but they had been debating the ban proposal internally & the AMA provided Anslinger excerpts of their opinions; they were generally not eminent scientists but pharmacists & drug industry representatives; the holdout did not say it was dangerous but merely described a doctor of his acquaintance who had been severely addicted to marijuana & noted that "This may be an exceptional case"; and Anslinger didn't base his campaign on it---or even mention it publicly---although he did save the exception to the Bureau files on marijuana (which is where Hari found it).

- In his book _[Fragments of an Anarchist Anthropology](!W)_, David Graeber claims Nazi rallies were "inspired by" Harvard pep rallies, but without any sources; I investigated this in more depth and concluded that the connection was [real but far more tenuous than Graeber's summary](https://en.wikipedia.org/wiki/Talk:Pep_rally#Naziism_deletion:_reliability_of_info.3F). (Graeber's books make a number of incorrect claims.)
- The Wikipedia article on shampoo cited popular science writer Mary Roach as summarizing NASA & Soviet research as indicating shampoo is necessary, while the relevant passage seems to [say the opposite](https://en.wikipedia.org/wiki/Talk:Shampoo#Theory_section:_dubious_use_of_reference).
- CS theoretician [Edsger Dijkstra](!W) is known for the quote "Computer Science is no more about computers than astronomy is about telescopes", but [it's unclear he ever said it](https://en.wikiquote.org/wiki/Talk:Edsger_W._Dijkstra#Telescope), and it may actually have been said by either [Hal Abelson](!W) or one of 3 obscure writers.
- A famous quote by Oliver Heaviside [turns out to be](https://en.wikiquote.org/w/index.php?title=Oliver_Heaviside&type=revision&diff=2949443&oldid=2746598) stitched together from no less than 4 different sources (3 different places in 2 Heaviside books, plus a later commentator describing Heaviside's philosophy of science).
- The "[Lindy effect](!W)" is the claim that a large volume of output predicts a lower probability of terminating soon & more future output (eg. in writing novels), as happens under certain statistical distributions; it was credited as originating in a [1964 _The New Republic_ magazine article by Albert Goldman](/doc/statistics/probability/1964-goldman.pdf). Obtaining a copy, however, I learned that Goldman's actual observation of "Lindy's Law" was that comedians appeared to have fixed amounts of material, and so the more output from a comedian, the more likely his TV career is about to terminate---that is, the opposite of the "Lindy effect" as defined by Nassim Nicholas Taleb.
- AI researchers like to tell the cautionary story of a neural network learning to recognize not tanks but the time of day, which happened to correlate with tank type in that set of photographs; unsurprisingly, this [probably did not happen](/tank "'The Neural Net Tank Urban Legend', Branwen 2011"). There appear to be several similar AI-related leprechauns. The infamous Microsoft [Tay](!W "Tay (bot)") bot, which was supposedly educated by 4chan into being evil, appears to have been mostly a simple 'echo' function (common in chatbots or IRC bots); the non-"repeat after me" Tay texts are generally short, generic, and cherrypicked out of tens or hundreds of thousands of responses, and it's highly unclear if Tay 'learned' anything at all in the short time that it was operational. A "racist photo cropping" algorithm on Twitter caused a ruckus in 2020, with people cherrypicking examples of 'bias' using pairs of photos where the black person was badly cropped; but Twitter did not confirm this, stated that their testing had specifically checked for that, and more extensive testing with up to 100 pairs showed roughly 50:50 crops (reminiscent of the 'gorilla' Google photo classification, where people declined to note that it also classified white people as 'seals'). An Amazon hiring algorithm was supposedly biased against women and in favor of lacrosse-playing men, except said algorithm was never used, and research on it stopped because it was exhibiting chance-level performance. And Cambridge Analytica's political ads, supposedly powered by AI, were just a giant scam, and could not possibly have had the effects attributed to them, both because [advertising has absolutely tiny effects](/banner#discussion) even with vastly more comprehensive datasets than Cambridge Analytica had access to, and because the Trump campaign fired Cambridge Analytica early on.
- A more contemporary example comes courtesy of [Mt. Gox](!W): everyone 'knew' it was started as an exchange for trading _Magic: the Gathering_ cards, until I observed that my thorough online research turned up no hard evidence of this, only endless Chinese whispers; the truth, [as revealed by founder Jed McCaleb](/doc/bitcoin/2014-mccaleb "'2014 Jed McCaleb MtGox interview', McCaleb 2014"), turned out to be rather stranger.
- Researching ["Laws of Tech: Commoditize Your Complement"](/complement "A classic pattern in technology economics, identified by Joel Spolsky, is layers of the stack attempting to become monopolies while turning other layers into perfectly-competitive markets which are commoditized, in order to harvest most of the consumer surplus; discussion and examples."), I learned that Netscape founder [Marc Andreessen's](!W "Marc Andreessen") infamous boast that web browsers would destroy the Microsoft Windows OS monopoly by reducing Windows to a "poorly debugged set of device drivers" is ascribed by Andreessen to [Robert Metcalfe](!W).
- More minorly, I've corrected a [_New York Times_ movie review](https://forum.evageeks.org/post/483397/NYT-review-of-_20_/#483397) & an [_ars technica_ computer crime article](https://arstechnica.com/tech-policy/2012/06/fbi-halted-one-child-porn-inquiry-because-tor-got-in-the-way/ "FBI halted one child porn inquiry because Tor got in the way: Feds closed 'assessment' of child porn after checking notorious site Silk Road").
- "Littlewood’s Law of Miracles" appears to have not been by [Littlewood but Freeman Dyson](/littlewood-origin "'Origin of ‘Littlewood’s Law of Miracles’', Branwen 2019") - [Richard Feynman's anecdote](/maze "‘Feynman’s Maze-Running Story’, Branwen 2014") about "Mr Young" & methodological errors in psychology studies of rats turns out to be mostly right but got details wrong, making it especially hard to find the original - [Carthage was not sown with salt](/doc/history/1986-ridley.pdf "To be Taken with a Pinch of Salt: The Destruction of Carthage") - Finally, I might mention that most discussions of [Thomas Robert Malthus](!W) are erroneous and show the speaker has not actually read _An Essay_. # Citogenesis: How Often Do Researchers Not Read The Papers They Cite?
One fertile source of leprechauns seems to be the fact that researchers do not read many of the papers they cite in their own papers. The frequency of this can be inferred from pre-digital papers, based on bibliographic errors: if a citation has mistakes in it, such that one could not have actually looked up the paper in a library or database, and those mistakes were copied from another paper, then the authors almost certainly did not read the paper (otherwise they would have fixed the mistakes when they found them out the hard way) and simply copied the citation. The empirically-measured spread of bibliographic errors suggests that researchers frequently do not read the papers they cite. This can be further confirmed by examining citations to see when citers make much more serious errors by *misdescribing* the original paper's findings; the frequency of such "quotation errors" is also high, showing that the errors involved in citation malpractice are substantial and not merely bibliographic.
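
To make the inference concrete, here is a toy Monte Carlo sketch (my own illustration with made-up parameters, `p_read` and `p_typo`, and not the actual model of Simkin & Roychowdhury discussed below): each simulated citer either reads the original, occasionally introducing a fresh typo of their own, or blindly copies the citation string of a random earlier citer, propagating whatever typo it carries. A typo recurring across multiple bibliographies is then almost always the signature of copying, and its prevalence rises with the non-reading rate:

```python
import random
from collections import Counter

def repeated_typo_share(n: int, p_read: float, p_typo: float, rng: random.Random) -> float:
    """Fraction of citations whose typo also appears in at least one other bibliography."""
    cites: list[int] = []  # each citation is a lineage id: 0 = correct string, >0 = a specific typo
    next_typo = 1
    for _ in range(n):
        if not cites or rng.random() < p_read:
            # Reader: cites from the original, with a small chance of a fresh typo.
            if rng.random() < p_typo:
                cites.append(next_typo)
                next_typo += 1
            else:
                cites.append(0)
        else:
            # Non-reader: copies a random earlier citer's string verbatim,
            # propagating whatever typo it happens to carry.
            cites.append(rng.choice(cites))
    typo_counts = Counter(c for c in cites if c != 0)
    return sum(v for v in typo_counts.values() if v > 1) / n

rng = random.Random(0)
for p_read in (0.9, 0.5, 0.2):
    share = sum(repeated_typo_share(2_000, p_read, 0.05, rng) for _ in range(50)) / 50
    print(f"readers = {p_read:.0%} -> ~{share:.1%} of citations repeat someone else's typo")
```

Comparing the observed share of repeated misprints against such a model and inverting it is, in caricature, how the estimates in the papers below are obtained: the more copies of the same mangled citation, the fewer readers there can have been.
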
In reading papers and checking citations (often while [hunting leprechauns](#leprechaun-hunting-and-historical-context) or tracing epigraphs), one quickly realizes that not every author is diligent about providing correct citation data, or even about reading the things they cite; not infrequently, a citation is far less impressive than it sounds when described, or even, once you read the original, actually shows the *opposite* of what it is cited for. Claims, phrases, and [numbers](https://databasearchitects.blogspot.com/2018/06/propagation-of-mistakes-in-papers.html "‘Propagation of Mistakes in Papers’, Neumann 2018") propagate, typically with their complexity gradually worn away as they turn into a catchy meme. (This process will be extremely familiar to anyone who has factchecked claims on social media.) It helps myths propagate, and makes claims seem far better supported than they really are. Since errors tend to be in the direction of impressive or cool or counterintuitive claims, this process and other systemic biases preferentially select for wrong claims (particularly politically-convenient ones or [extreme ones](#gwern-littlewood)). As always, there is no substitute for demanding & [finding](/search "'Internet Search Tips', Branwen 2018"){#gwern-search-2} fulltext and reading the original source for a claim rather than derivative ones.

How often do authors not read their cites? One way to check is to look for suspiciously high citation rates of difficult-to-access works: if a thesis or book is not available online or in many libraries, but it has racked up hundreds or thousands of citations, is it likely that so many time-pressed, lazy academics took the time to interlibrary-loan it from one of the only holding libraries, rather than simply cargo-culting a citation? For example, [David McClelland](!W), one of the [most cited](https://scholar.google.com/scholar?as_sdt=0%2C21&q=author%3A%22DC+McClelland%22&btnG=) psychologists of the 20th century & a critic of standardized testing such as IQ, self-published through his consulting company a number of books[^McClelland], which he cites in highly-popular articles of his (eg. [McClelland 1973](/doc/iq/1973-mcclelland.pdf "Testing for Competence Rather Than for ‘Intelligence’")/[McClelland & Boyatzis 1980](/doc/iq/1980-mcclelland.pdf "Opportunities for Counselors from the Competency Assessment Movement")/[McClelland 1994](/doc/iq/1994-mcclelland.pdf "The knowledge-testing-educational complex strikes back")); several of these books have since racked up hundreds of citations, and yet have never been republished, cannot be found anywhere online in Amazon / Google Books / Libgen / used book sellers, and do not even appear in [WorldCat](!W) (!), which suggests that *no* libraries have copies of them---one rather wonders how all of these citers managed to obtain copies to read... But individual anecdotes, however striking, don't provide an overall answer; perhaps "Achievement Motivation Theory" fans are unusually sloppy ([Barrett & Depinet 1991](/doc/iq/1991-barrett.pdf "A reconsideration of testing for competence rather than for intelligence")/[Barrett 1994](/doc/iq/1994-barrett.pdf "Empirical data say it all")/[Barrett et al 2003](/doc/iq/2003-barrett.pdf "New Concepts of Intelligence: Their Practical and Legal Implications for Employee Selection") note that if you actually read the books, McClelland's methods clearly don't work), but that doesn't mean all researchers are sloppy.

[^McClelland]: Particularly:

    - McClelland & Dailey 1972, _Improving officer selection for the Foreign Service_. Boston, MA: Hay/McBer.
    - McClelland & Dailey 1973, _Evaluating new methods of measuring the qualities needed in superior Foreign Service Officers_. Boston: McBer.
    - McClelland & Dailey 1974, _Professional competencies of human service workers_. Boston: McBer and Co.

    Only the first two books appear [available even in McClelland's posthumous Harvard papers](https://hollisarchives.lib.harvard.edu/repositories/4/archival_objects/1033884).

This might seem near-impossible to answer, but bibliographic analysis offers a cute trick. In olden times, citations and bibliographies had to be compiled by hand; this is an error-prone process, but one author may make a different error from another author citing the same paper, and one might correct any error upon reading the original. On the other hand, if you cite a paper because you blindly copied the citation from another paper and never get around to reading it, you may introduce additional errors, but you definitely won't fix any error in what you copied. So one can get an idea of how frequent non-reads are by *tracing lineages of bibliographic errors*: the more people who copy around the same wrong version of a citation (out of the total set of citations for that cite), the fewer of them can actually be reading it. Such copied errors turn out to be quite common and represent a large fraction of citations, which suggests that many papers are being cited without being read. (This would explain not only why retracted studies keep getting cited by new authors, but also the prevalence of misquotation/misrepresentation of research, and why leprechauns persist so long.) Simkin & Roychowdhury venture a guess that as many as 80% of authors citing a paper have not actually read the original (which I feel is too high, but which I also can't strongly argue against, given how often I see quote errors or omissions when I check cites).

From ["Citation Analysis"](https://tefkos.comminfo.rutgers.edu/Courses/e530/Readings/Nicolaisen%20citation%20analysis%20ARIST%202008.pdf#page=3), Nicolaisen 2007:

> Garfield (1990, p. 40) reviewed a number of studies dealing with bibliographic errors and concluded that "to err bibliographically is human." For instance, in a study of the incidence and variety of bibliographic errors in six medical journals, De Lacey, Record, and Wade (1985) found that almost a quarter of the references contained at least one mistake and 8 percent of these were judged serious enough to prevent retrieval of the article. [Moed & Vriens 1989](#moed-vriens-1989) examined discrepancies between 4,500 papers from five scientific journals and approximately 25,000 articles that cited these papers, finding that almost 10 percent of the citations in the cited reference dataset showed a discrepancy in either the title, the author name, or the page number. They concluded that one cause for the multiplication of errors seemed to be authors' copying of erroneous references from other articles. [Broadus (1983)](#broadus-1983) came to the same conclusion in a study of a 1975 textbook on sociobiology that included among its references an erroneous reference to a 1964 article (one word was incorrectly substituted in the title). By examining 148 subsequent papers that cited both the book and the article, Broadus could see how many authors repeated the book's mistaken reference. He found that 23 percent of the citing authors also listed the faulty title.
> A similar study by [Simkin & Roychowdhury 2003](#simkin-roychowdhury-2002-2) reported an almost 80-percent repetition of misprints.

One might hope that with modern technology like search engines and Libgen, this problem would be lessened, since it is so much easier to access fulltext, and bibliographic errors are so much less important when no one is actually looking up papers by page numbers in a row of bound volumes; but I suspect that if this were redone, the error rate would go down regardless of any improvements in reading rates, simply because researchers can now use tools like Zotero or Crossref to automatically retrieve bibliographic data, so the true non-reading rate simply becomes masked. And while fulltext is easier to read now, academic pressures are even stronger, and volumes of publication have only accelerated since the citation data in all of these studies, making it even more difficult for a researcher to read everything they know they should. So while these figures may be outdated, they may not be as obsolete as all that.

(And myself? Well, I can honestly say that I do not link any paper on Gwern.net without having read it; however, I have read most but not all papers I host, and I have not read most of the books I host or sometimes cite---it just takes too much time to read entire books.)

## Bibliography

Individual papers:

- ["An investigation of the validity of bibliographic citations"](/doc/statistics/bias/1983-broadus.pdf){#broadus-1983}, Broadus 1983:

  > Edward O. Wilson, in his famous work, _Sociobiology, The New Synthesis_ [9], makes reference to a pair of articles by W. D. Hamilton, but misquotes the articles' title. No less than 148 later papers make reference to both Wilson's book and Hamilton's articles, by title. Thus, there is provided an opportunity to test the charge, made by some critics, that writers frequently lift their bibliographic references from other publications without consulting the original sources. Although 23% of these citing papers made the same error as did Wilson, a further perusal of the evidence raises considerable doubt as to whether fraudulent use was intended.

  (By 'fraudulent use', Broadus seems to mean that authors did not broadly copy references indiscriminately in "wholesale borrowing" to pad out their bibliographies: eg. authors who copied the erroneous citation could have, but generally didn't, copy citations to a bunch of other Hamilton articles. He doesn't try to argue that they all read the original Hamilton paper despite their copying of the error.)

- ["Possible inaccuracies occurring in citation analysis"](/doc/statistics/bias/1989-moed.pdf){#moed-vriens-1989}, Moed & Vriens 1989:

  > Citation analysis of scientific articles constitutes an important tool in quantitative studies of science and technology. Moreover, citation indexes are used frequently in searches for relevant scientific documents. In this article we focus on the issue of reliability of citation analysis. How accurate are citation counts to individual scientific articles? What pitfalls might occur in the process of data collection? To what extent do 'random' or 'systematic' errors affect the results of the citation analysis? We present a detailed analysis of discrepancies between target articles and cited references with respect to author names, publication year, volume number, and starting page number. Our data consist of some 4500 target articles published in five scientific journals, and 25000 citations to these articles.
  > Both target and citation data were obtained from the Science Citation Index, produced by the Institute for Scientific Information. It appears that in many cases a specific error in a citation to a particular target article occurs in more than one citing publication. We present evidence that authors, in compiling reference lists, may copy references from reference lists in other articles, and that this may be one of the mechanisms underlying this phenomenon of 'multiple' variations/errors.

- ["Read before you cite!"](https://arxiv.org/abs/cond-mat/0212043){#simkin-roychowdhury-2002-2}, Simkin & Roychowdhury 2002 (further discussion: [Simkin & Roychowdhury 2006](https://rss.onlinelibrary.wiley.com/doi/full/10.1111/j.1740-9713.2006.00202.x "Do you sincerely want to be cited? Or: read before you cite")):

  > We report a method of estimating what percentage of people who cited a paper had actually read it. The method is based on a stochastic modeling of the citation process that explains empirical studies of misprint distributions in citations (which we show follows a Zipf law). Our estimate is only about 20% of citers read the original...In principle, one can argue that an author might copy a citation from an unreliable reference list, but still read the paper. A modest reflection would convince one that this is relatively rare, and cannot apply to the majority. Surely, in the pre-internet era it took almost equal effort to copy a reference as to type in one's own based on the original, thus providing little incentive to copy if someone has indeed read, or at the very least has procured access to the original. Moreover, if someone accesses the original by tracing it from the reference list of a paper with a misprint, then with a high likelihood, the misprint has been identified and will not be propagated. In the past decade with the advent of the Internet, the ease with which would-be non-readers can copy from unreliable sources, as well as would-be readers can access the original has become equally convenient, but there is no increased incentive for those who read the original to also make verbatim copies, especially from unreliable resources.

- ["Stochastic modeling of citation slips"](https://arxiv.org/abs/cond-mat/0401529), Simkin & Roychowdhury 2004:

  > We present empirical data on frequency and pattern of misprints in citations to twelve high-profile papers. We find that the distribution of misprints, ranked by frequency of their repetition, follows Zipf’s law. We propose a stochastic model of citation process, which explains these findings, and leads to the conclusion that 70-90% of scientific citations are copied from the lists of references used in other papers.

  (Simkin & Roychowdhury have some other papers which don't seem to do further empirical work on the non-reading question: ["Copied citations create renowned papers?"](https://arxiv.org/abs/cond-mat/0305150), 2003; ["A mathematical theory of citing"](/doc/statistics/bias/2007-simkin.pdf "Simkin & Roychowdhury 2007") & ["An introduction to the theory of citing"](https://arxiv.org/abs/math/0701086), 2007; ["Theory of Citing"](https://link.springer.com/chapter/10.1007/978-1-4614-0754-6_16), 2011.)
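
  The Zipf claim can be illustrated with the same toy read-or-copy process as the sketch above (again my own made-up parameters, not Simkin & Roychowdhury's actual model): because a copier picks a random earlier citation, popular misprints are proportionally more likely to be copied yet again, a preferential-attachment process, so misprint-lineage sizes develop a heavy, roughly Zipf-like tail:

  ```python
  import random
  from collections import Counter

  def misprint_lineage_sizes(n: int, p_read: float, p_typo: float,
                             rng: random.Random) -> list[int]:
      """Sizes of misprint lineages under the toy read-or-copy process, largest first."""
      cites: list[int] = []  # 0 = correct citation; >0 = a specific misprint
      next_id = 1
      for _ in range(n):
          if not cites or rng.random() < p_read:
              if rng.random() < p_typo:
                  cites.append(next_id)  # reader who makes a fresh misprint
                  next_id += 1
              else:
                  cites.append(0)        # reader who cites correctly
          else:
              cites.append(rng.choice(cites))  # copier: rich misprints get richer
      return sorted((v for k, v in Counter(cites).items() if k != 0), reverse=True)

  sizes = misprint_lineage_sizes(100_000, 0.2, 0.05, random.Random(1))
  # Zipf's law predicts frequency ~ 1/rank: doubling the rank should roughly halve the count.
  for rank in (1, 2, 4, 8, 16, 32):
      print(f"rank {rank:>2}: misprint repeated in {sizes[rank - 1]} bibliographies")
  ```
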
- ["Avoid “Laundry List” Citations I"](https://www.cs.ucr.edu/~eamonn/Keogh_SIGKDD09_tutorial.pdf#page=112), Keogh 2009: a long presentation which mentions the author's experience with seeing a citation typo he made get copied by dozens of subsequent papers: > ...In other cases I have seen papers that claim “*we introduce a novel algorithm X*”, when in fact an essentially identical algorithm appears in one of the papers they have referenced (but probably not read). - ["Avoiding erroneous citations in ecological research: read before you apply"](/doc/statistics/bias/2017-sigut.pdf), Šigut et al 2017: > The [Shannon-Wiener index](!W) is a popular nonparametric metric widely used in ecological research as a measure of species diversity. We used the [Web of Science](!W) database to examine cases where papers published 1990–2015 mislabeled this index. We provide detailed insights into causes potentially affecting use of the wrong name ‘Weaver’ instead of the correct ‘Wiener’. Basic science serves as a fundamental information source for applied research, so we emphasize the effect of the type of research (applied or basic) on the incidence of the error. Biological research, especially applied studies, increasingly uses indices, even though some researchers have strongly criticized their use. Applied research papers had a higher frequency of the wrong index name than did basic research papers. The mislabeling frequency decreased in both categories over the 25-year period, although the decrease lagged in applied research. Moreover, the index use and mistake proportion differed by region and authors’ countries of origin. Our study also provides insight into citation culture, and results suggest that almost 50% of authors have not actually read their cited sources. Applied research scientists in particular should be more cautious during manuscript preparation, carefully select sources from basic research, and read theoretical background articles before they apply the theories to their research. Moreover, theoretical ecologists should liaise with applied researchers and present their research for the broader scientific community. Researchers should point out known, often-repeated errors and phenomena not only in specialized books and journals but also in widely used and fundamental literature. ## Miscitation A few papers I found on the way, which touch on the much more serious question of how often a citation is *correctly* described/interpreted (as opposed to merely having bibliographic errors suggesting it may not have been read at all): - ["How accurate are quotations and references in medical journals?"](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1416756/pdf/bmjcred00467-0046.pdf), de Lacey et al 1985 > The accuracy of quotations and references in six medical journals published during January 1984 was assessed. The original author was misquoted in 15% of all references, and most of the errors would have misled readers. Errors in citation of references occurred in 24%, of which 8% were major errors---that is, they prevented immediate identification of the source of the reference. Inaccurate quotations and citations are displeasing for the original author, misleading for the reader, and mean that untruths become "accepted fact." ... - ["Do Authors Check Their References? A Survey of Accuracy of References in Three Public Health Journals"](/doc/statistics/bias/1987-eichorn.pdf "'Do authors check their references? 

- ["Do Authors Check Their References? A Survey of Accuracy of References in Three Public Health Journals"](/doc/statistics/bias/1987-eichorn.pdf "'Do authors check their references? A survey of accuracy of references in 3 public health journals', Eichorn & Yankauer 1987"), Eichorn & Yankauer 1987:

  > We verified a random sample of 50 references in the May 1986 issue of each of three public health journals. Thirty-one percent of the 150 references had citation errors, one out of 10 being a major error (reference not locatable). Thirty percent of the references differed from authors' use of them, with half being a major error (cited paper not related to author's contention).

- ["Accuracy of references in psychiatric literature: a survey of three journals"](https://www.cambridge.org/core/services/aop-cambridge-core/content/view/E5089C474E3DD9ED63267B88C2547468/S0955603600101266a.pdf/div-class-title-accuracy-of-references-in-psychiatric-literature-a-survey-of-three-journals-div.pdf), Lawson & Fosker 1999:

  > **Aims and method**: The prevalence of errors in reference citations and use in the psychiatric literature has not been reported as it has in other scientific literature. Fifty references randomly selected from each of three psychiatric journals were examined for accuracy and appropriateness of use by validating them against the original sources.
  >
  > **Results**: A high prevalence of errors was found, the most common being minor errors in the accuracy of citations. Major citation errors delayed access to two original articles, and three could not be traced. Eight of the references had major errors with the appropriateness of use of their quotations.
  >
  > **Clinical implications**: Errors in accuracy of references impair the processes of research and evidence-based medicine; quotation errors could mislead clinicians into making wrong treatment decisions.

- ["Secondary and Tertiary Citing: A Study of Referencing Behavior in the Literature of Citation Analysis Deriving from the Ortega Hypothesis of Cole and Cole"](/doc/statistics/bias/1995-hoerman.pdf), Hoerman & Nowicke 1995:

  > This study examines a complex network of documents and citations relating to the literature of the Ortega Hypothesis (as defined by Jonathan R. Cole and Stephen Cole), demonstrating the tenacity of errors in details of and meaning attributed to individual citations. These errors provide evidence that secondary and tertiary citing occurs in the literature that assesses individual influence through the use of citations. Secondary and tertiary citing is defined as the inclusion of a citation in a reference list without examining the document being cited. The authors suggest that, in the absence of error, it is difficult to determine the amount of secondary and tertiary citing considered normative. Therefore, to increase understanding of the relationship between citations and patterns of influence, it is recommended that large-scale studies examine additional instances of citation error.

- [Neven Sesardić](!W), _Making Sense of Heritability_ (pg135):

  > ...In my opinion, this kind of deliberate misrepresentation in attacks on hereditarianism is less frequent than sheer ignorance. But why is it that a number of people who publicly attack "Jensenism" are so poorly informed about Jensen's real views?
  > Given the magnitude of their distortions and the ease with which these misinterpretations spread, one is alerted to the possibility that at least some of these anti-hereditarians did not get their information about hereditarianism first hand, from primary sources, but only indirectly, from the texts of unsympathetic and sometimes quite biased critics.^8^ In this connection, it is interesting to note that several authors who strongly disagree with Jensen (Longino 1990; Bowler 1989; Allen 1990; Billings et al 1992; McInerney 1996; Beckwith 1993; Kassim 2002) refer to his classic paper from 1969 by citing the volume of the _Harvard Educational Review_ incorrectly as "33" (instead of "39"). What makes this mis-citation noteworthy is that the very same mistake is to be found in Gould's _Mismeasure of Man_ (in both editions). Now the fact that Gould's idiosyncratic _lapsus calami_ gets repeated in the later sources is either an extremely unlikely coincidence or else it reveals that these authors' references to Jensen's paper actually originate from their contact with Gould's text, not Jensen's.

- ["How citation distortions create unfounded authority: analysis of a citation network"](https://www.bmj.com/content/339/bmj.b2680), Greenberg 2009:

  > ...A complete citation network was constructed from all PubMed indexed English literature papers addressing the belief that β amyloid, a protein accumulated in the brain in Alzheimer’s disease, is produced by and injures skeletal muscle of patients with inclusion body myositis... The network contained 242 papers and 675 citations addressing the belief, with 220,553 citation paths supporting it. Unfounded authority was established by citation bias against papers that refuted or weakened the belief; amplification, the marked expansion of the belief system by papers presenting no data addressing it; and forms of invention such as the conversion of hypothesis into fact through citation alone. Extension of this network into text within grants funded by the National Institutes of Health and obtained through the Freedom of Information Act showed the same phenomena present and sometimes used to justify requests for funding.

- ["How accurate are citations of frequently cited papers in biomedical literature?"](https://www.biorxiv.org/content/10.1101/2020.12.10.419424.full), Pavlovic et al 2020:

  > ...Findings from feasibility study, where we collected and reviewed 1,540 articles containing 2,526 citations of 14 most cited articles in which the 1st authors were affiliated with the Faculty of Medicine University of Belgrade, were further evaluated for external confirmation in an independent verification set of articles. Verification set included 4,912 citations identified in 2,995 articles that cited 13 most cited articles published by authors affiliated with the Mayo Clinic Division of Nephrology and Hypertension (Rochester, Minnesota, USA), whose research focus is hypertension and peripheral vascular disease. Most cited articles and their citations were determined according to SCOPUS database search. A citation was defined as being accurate if the cited article supported or was in accordance with the statement by citing authors. A multilevel regression model for binary data was used to determine predictors of inaccurate citations. At least one inaccurate citation was found in 11% and 15% of articles in the feasibility study and verification set, respectively, suggesting that inaccurate citations are common in biomedical literature.
  > The main findings were similar in both sets. The most common problem was the citation of nonexistent findings (38.4%), followed by an incorrect interpretation of findings (15.4%). One fifth of inaccurate citations were due to “chains of inaccurate citations”, in which inaccurate citations appeared to have been copied from previous papers. Reviews, longer time elapsed from publication to citation, and multiple citations were associated with higher chance of citation being inaccurate....

- ["The problem of miscitation in psychological science: Righting the ship", Cobb et al 2023](/doc/statistics/bias/2023-cobb.pdf "‘The problem of miscitation in psychological science: Righting the ship’, Cobb et al 2023"){.include-annotation}