“Scale Coarseness As a Methodological Artifact: Correcting Correlation Coefficients Attenuated From Using Coarse Scales”, 2008-08-15:
Scale coarseness is a pervasive yet ignored methodological artifact that attenuates observed correlation coefficients relative to population coefficients. The authors describe how to disattenuate correlations that are biased by scale coarseness in primary-level as well as meta-analytic studies, and derive the sampling error variance for the corrected correlation.
Results of two Monte Carlo simulations reveal that the correction procedure is accurate and show the extent to which coarseness biases the correlation coefficient under various conditions (i.e., value of the population correlation, number of scale points per item, and number of scale items).
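A minimal Monte Carlo sketch in Python (not the authors’ code) of the effect those simulations quantify: bivariate-normal true scores are coarsened onto 5-point scales, the attenuation factor a = corr(continuous score, coarsened score) is estimated empirically, and the observed correlation is divided by a_X·a_Y. Threshold placement (equally spaced between ±1.5 SD) is an assumption, not the paper’s specification:

```python
import numpy as np

rng = np.random.default_rng(0)

def coarsen(x, points=5):
    # Equally spaced cut points between -1.5 and +1.5 SD (an assumption;
    # the paper's thresholds may differ).
    cuts = np.linspace(-1.5, 1.5, points - 1)
    return np.digitize(x, cuts) + 1  # Likert-type categories 1..points

rho, n, reps, points = 0.50, 500, 2_000, 5
cov = [[1.0, rho], [rho, 1.0]]
obs = np.empty(reps)
for i in range(reps):
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    obs[i] = np.corrcoef(coarsen(x, points), coarsen(y, points))[0, 1]

# Attenuation factor a = corr(continuous score, its coarsened version),
# estimated by simulation; the correction divides by a_X * a_Y, which is
# a**2 here since both scales use the same number of points.
z = rng.standard_normal(1_000_000)
a = np.corrcoef(z, coarsen(z, points))[0, 1]

print(f"mean observed r: {obs.mean():.3f}")         # attenuated below rho
print(f"corrected r:     {obs.mean() / a**2:.3f}")  # approximately rho = .50
```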
The authors also offer a Web-based computer program that disattenuates correlations at the primary-study level and computes the sampling error variance as well as confidence intervals for the corrected correlation.
Using this program, which implements the correction in primary-level studies, and incorporating the suggested correction in meta-analytic reviews will lead to more accurate estimates of construct-level correlation coefficients.
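The Web program itself is not reproduced here; below is a hedged sketch of what a primary-study correction could look like, assuming the corrected correlation is r_c = r/(a_X·a_Y) and using a delta-method approximation for its sampling error variance, Var(r_c) ≈ Var(r)/(a_X·a_Y)², with Var(r) = (1 − r²)²/(n − 1). The function name and the attenuation-factor values passed in are illustrative, not the paper’s tabled values:

```python
import math

def disattenuate(r, n, a_x, a_y, z=1.96):
    """Correct an observed r for scale coarseness given per-variable
    attenuation factors; returns (r_c, Var(r_c), normal-theory CI)."""
    att = a_x * a_y
    r_c = r / att
    var_r = (1 - r**2) ** 2 / (n - 1)  # sampling variance of observed r
    var_rc = var_r / att**2            # delta-method approximation
    se = math.sqrt(var_rc)
    return r_c, var_rc, (r_c - z * se, r_c + z * se)

# e.g., observed r = .30 from n = 200, two 5-point scales with
# illustrative attenuation factors of .94 each:
print(disattenuate(0.30, 200, a_x=0.94, a_y=0.94))
```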
[Keywords: scale coarseness, Likert-type scale, correlation coefficient, meta-analysis, artifact]
…although this fact is seldom acknowledged, organizational science researchers use coarse scales every time continuous constructs are measured using Likert-type (1981) or ordinal (1979) items. We are so accustomed to using these types of items that we seem to have forgotten that they are intrinsically coarse. As one 2006 discussion notes, “scales are not strictly continuous in that there is coarseness due to the category widths and the collapsing of individuals with different true scores into the same category. This is common for many psychological measures, and researchers typically assume that the coarseness is not problematic” (p. 28).
As an illustration, consider a typical Likert-type item with 5 scale points or anchors ranging from 1 (“strongly disagree”) to 5 (“strongly agree”). When one or more Likert-type items are used to assess continuous constructs, such as job satisfaction, personality, organizational commitment, and job performance, information is lost because individuals with different true scores are treated as having identical standing on the underlying construct. Specifically, all individuals with true scores around 4 are assigned a 4, all those with true scores around 3 are assigned a 3, and so forth. Differences may exist between these individuals’ true scores (e.g., 3.60 vs. 4.40, or 3.40 vs. 2.60, respectively), but these differences are lost to the coarse scale, because respondents are forced to provide scores that are systematically biased downward or upward. This information loss produces a downward bias in the observed correlation coefficient between a predictor X and a criterion Y. In short, scales built from Likert-type and ordinal items are coarse and imprecise, do not allow individuals to provide sufficiently discriminating data, and yet are used pervasively in the organizational sciences to measure constructs that are continuous in nature. This is unfortunate, given that cognitive psychologists and psychometricians have concluded that humans are able to provide data with a greater degree of discrimination and precision using scales with as many as 20 points (1960) or 25 points (1954).
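A toy sketch of the collapsing described above, assuming responses round to the nearest anchor (midpoint thresholds at 1.5, 2.5, 3.5, 4.5 are an assumption):

```python
import numpy as np

anchors = np.array([1, 2, 3, 4, 5])
cuts = (anchors[:-1] + anchors[1:]) / 2  # 1.5, 2.5, 3.5, 4.5

def respond(true_score):
    """Collapse a continuous true score onto the nearest scale anchor."""
    return int(anchors[np.digitize(true_score, cuts)])

for t in (3.60, 4.40, 2.60, 3.40):
    print(f"true score {t:.2f} -> response {respond(t)}")
# 3.60 and 4.40 both become 4; 2.60 and 3.40 both become 3:
# the within-category differences in true scores are lost.
```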
See Also:
Reliability: A Review of Psychometric Basics and Recent Marketing Practices
Statistically Controlling for Confounding Constructs Is Harder than You Think
Evaluating the Effect of Inadequately Measured Variables in Partial Correlation Analysis
Seeing The Forest From The Trees: When Predicting The Behavior Or Status Of Groups, Correlate Means