“Controlled Experiment Finds No Detectable Citation Bump from Twitter Promotion”, Trevor Branch, Isabelle M. Côté, Solomon R. David, Joshua A. Drew, Michelle LaRue, Melissa C. Márquez, E. Chris M. Parsons, D. Rabaiotti, David Shiffman, David A. Steen, Alexander L. Wild (2023-09-18):

Multiple studies across a variety of scientific disciplines have shown that the number of times a paper is shared on Twitter is correlated with the number of citations that paper receives. However, these studies were not designed to determine whether tweeting about scientific papers causes an increase in citations, or whether the correlation simply reflects that some papers have higher relevance, importance, or quality and are therefore both tweeted about more and cited more.

We, the authors of this study, are leading science communicators on Twitter from several life-science disciplines, with substantially higher follower counts than the average scientist, making us uniquely placed to address this question. We conducted a 3-year-long controlled experiment: we randomly selected 5 articles published in the same month and journal, tweeted one chosen at random, and retained the other 4 as controls.

This process was repeated for 10 articles from each of 11 journals; we recorded Altmetric scores, numbers of tweets, and citation counts before and after tweeting.

Randomization tests revealed that tweeted articles were downloaded 2.6–3.9× more often than controls immediately after tweeting, and retained statistically-significantly higher Altmetric scores (+81%) and numbers of tweets (+105%) 3 years after tweeting. However, while some tweeted papers were cited more than their respective control papers published in the same journal and month, the overall increase in citation counts after 3 years (+7% for Web of Science and +12% for Google Scholar) was not statistically-significant (p > 0.15). Therefore, while discussing science on social media has many professional and societal benefits (and has been a lot of fun), increasing the citation rate of a scientist’s papers is likely not among them.
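The randomization-test logic the authors rely on can be sketched in a few lines. This is a minimal illustration with made-up citation counts (the `tweeted` and `control` arrays are hypothetical, not the paper's data): under the null hypothesis that tweeting is irrelevant, the "tweeted" label is arbitrary, so the observed difference is compared against differences from shuffled labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-year citation counts (not the paper's data).
tweeted = np.array([12, 8, 15, 6, 20, 9, 11, 14, 7, 10])
control = np.array([10, 9, 13, 5, 18, 8, 12, 11, 6, 9])

observed = tweeted.mean() - control.mean()

# Randomization test: pool the counts, repeatedly shuffle which ones
# count as "tweeted", and recompute the difference in means.
pooled = np.concatenate([tweeted, control])
n = len(tweeted)
diffs = []
for _ in range(10_000):
    perm = rng.permutation(pooled)
    diffs.append(perm[:n].mean() - perm[n:].mean())

# Two-sided p-value: fraction of shuffled differences at least as extreme.
p = np.mean(np.abs(diffs) >= abs(observed))
print(f"observed diff = {observed:.2f}, p = {p:.3f}")
```

This test makes no distributional assumptions, which matters for citation counts (highly skewed, overdispersed), and is why the authors used it rather than a t-test.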

[Typical p-value fallacy: +7% is a meaningful & practically-important effect for an incredibly cheap & easy thing to do. That they fail to reject the null does not mean they have proven the null; they are underpowered to detect effects that are this small yet still large enough to matter. Their conclusion that “increasing citation rate…is likely not [a benefit]” is outright contradicted by their own point estimate of +7%!]
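The underpowering claim can be checked with a quick simulation. A minimal sketch, assuming the study's design counts (110 tweeted articles vs. 440 controls, from 10 trials × 11 journals with 1 tweeted + 4 controls each) and otherwise hypothetical parameters (overdispersed negative-binomial citation counts with a control mean of 10, and a true +7% effect); under these assumptions the simulated power lands well below the conventional 80%, so a null result was the likely outcome even if tweeting genuinely adds +7%:

```python
import numpy as np

rng = np.random.default_rng(0)

# Design counts from the study: 10 trials x 11 journals,
# 1 tweeted article + 4 controls per trial.
n_tweeted, n_control = 110, 440

# Hypothetical parameters (not from the paper): overdispersed
# negative-binomial citation counts, control mean 10, true +7% bump.
base_mean = 10.0
effect = 1.07
dispersion = 2.0
n_sims = 2_000

def nb_sample(mean, size):
    # Negative binomial parameterized by mean and dispersion:
    # variance = mean + mean**2 / dispersion (overdispersed vs. Poisson).
    p = dispersion / (dispersion + mean)
    return rng.negative_binomial(dispersion, p, size)

def welch_t(a, b):
    # Welch's t statistic for a difference in means.
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

rejections = 0
for _ in range(n_sims):
    tweeted = nb_sample(base_mean * effect, n_tweeted)
    control = nb_sample(base_mean, n_control)
    if abs(welch_t(tweeted, control)) > 1.96:  # two-sided, alpha = 0.05
        rejections += 1

power = rejections / n_sims
print(f"simulated power to detect a true +7% effect: {power:.0%}")
```

With so little power, "p > 0.15" is nearly guaranteed whether or not the effect is real, which is exactly why failure to reject cannot be read as evidence of no effect.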