“Reproducibility in the Social Sciences”, 2022:
Concern over social scientists’ inability to reproduce empirical research has spawned a vast and rapidly growing literature. The size and growth of this literature make it difficult for newly interested academics to come up to speed.
Here, we provide a formal text modeling approach to characterize the entirety of the field, which allows us to summarize the breadth of this literature and identify core themes. We construct and analyze text networks built from 1,947 articles to reveal differences across social science disciplines within the body of reproducibility publications and to discuss the diversity of subtopics addressed in the literature.
This field-wide view suggests that reproducibility is a heterogeneous problem with multiple sources for errors and strategies for solutions, a finding that is somewhat at odds with calls for largely passive remedies reliant on open science.
We propose an alternative rigor and reproducibility model that takes an active approach to rigor prior to publication, which may overcome some of the shortfalls of the post-publication model.
…Where Are the Sociologists? [cf. 2002] Perhaps the first takeaway from this systematic review of the literature for sociologists is just how rarely sociological work is represented. Sociology journals make up only about 2% of the journals in our corpus and published an even smaller percentage of the papers. Indeed, only 6 [0.6%] of the 985 articles published in the American Journal of Sociology, American Sociological Review, or Social Forces from 1970–2020 include “replication”, “reproducibility”, or “reanalysis” among the core search terms, based on a Web of Science search limited to these journals. Although we might expect a bias toward novelty in the most prominent journals in the field, speaking as former editors of Socius, we note that there were similarly very few submissions aimed directly at replicating prior work [excepting a special issue devoted to the topic, including the articles by 2019, 2019, and et al 2019], and when such works were submitted, the authors typically had difficulty convincing reviewers that the activity was valuable. Thus, our first observation is that sociologists seem to favor novelty over replication to such a deep extent that evaluating the depth of replication success is difficult: if nobody sees value in replicating initial work, we are unlikely to find the cases that fail.
Despite the dearth of explicit replication attempts, there are at least three good reasons to suspect that such tests might frequently fail. The first is the finding that statistical-significance tests reported in the sociological literature have distributions consistent with a publication bias favoring barely statistically significant results (2008). This is, in our opinion, sufficient smoke to suggest fire. Second, prominent comment-and-reply sequences suggest the sorts of mistakes typically uncovered in the absence of careful reproduction and, ultimately, replication. These exchanges usually focus on the data-selection (choice of cutoff dates, outliers, etc.), coding (missing-data codes, top codes), or modeling (convergence checks, etc.) decisions that are necessary to produce the findings.4 Finally, the lack of concern with replication in sociology is made clear by the contrast with the overt replication concerns evident in psychology. Although we cannot evaluate changes in rates of replication (or replication success) from this corpus as constructed, the mere existence of hundreds of papers explicitly attempting replication in psychology suggests that psychology makes room for a sort of work that is largely missing in sociology.