The Journal of Applied Psychology
-
Examination of the Van Iddekinge, Roth, Raymark, and Odle-Dusseau (2012) meta-analysis reveals a number of problems. The authors meta-analyzed a partial database of integrity test validities. An examination of their coded database revealed that measures coded and meta-analyzed as integrity tests often included scales that are not, in fact, integrity tests. ⋯ We found the absence of fully hierarchical moderator analyses to be a serious weakness. We also explain why empirical comparisons between test publishers and non-publishers cannot unambiguously support inferences of bias, as alternate explanations are possible, even likely. In light of the problems identified, it appears that the conclusions about integrity test validity drawn by Van Iddekinge et al. cannot be considered accurate or reliable.
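For readers unfamiliar with the term, a "fully hierarchical" moderator analysis breaks the studies into fully crossed subgroups (e.g., design by sample type by criterion) so that confounded moderators are not mistaken for one another, rather than examining each moderator one at a time across the whole database. The following minimal Python sketch illustrates the idea; the moderator levels, correlations, and sample sizes are invented for illustration and do not come from either meta-analysis.

    # Sketch of a fully hierarchical moderator breakdown: effect sizes are
    # grouped by the full cross of moderators, not by one moderator at a time.
    # All study records below are hypothetical.
    from collections import defaultdict

    studies = [
        # (design, sample, criterion, r, n) -- hypothetical values
        ("predictive", "applicant", "CWB",      0.25, 210),
        ("predictive", "applicant", "CWB",      0.31, 180),
        ("concurrent", "incumbent", "CWB",      0.18, 150),
        ("concurrent", "incumbent", "job_perf", 0.12, 300),
        ("predictive", "incumbent", "job_perf", 0.15, 120),
    ]

    cells = defaultdict(list)
    for design, sample, criterion, r, n in studies:
        cells[(design, sample, criterion)].append((r, n))

    for key, estimates in sorted(cells.items()):
        total_n = sum(n for _, n in estimates)
        # sample-size-weighted mean correlation within the fully crossed cell
        mean_r = sum(r * n for r, n in estimates) / total_n
        print(key, f"k={len(estimates)}", f"N={total_n}", f"mean r={mean_r:.2f}")

A one-moderator-at-a-time analysis would pool, say, all predictive studies regardless of criterion; the crossed breakdown shows whether an apparent design effect is really a criterion effect in disguise.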
-
Van Iddekinge, Roth, Raymark, and Odle-Dusseau's (2012) meta-analysis of pre-employment integrity test validities confirmed that such tests are meaningfully related to counterproductive work behavior. The article also offered some cautionary conclusions, which appear to stem from the limited scope of the authors' focus and the specific research procedures used. Issues discussed in this commentary include the following: (a) test publishers' provision of studies for meta-analytic consideration; (b) errors and questions in the coding of statistics from past studies; (c) debatable corrections for unreliable criterion measures (see the sketch following this abstract); (d) exclusion of laboratory, contrasted-groups, unit-level, and time-series studies of counterproductive behavior; (e) under-emphasis on the prediction of counterproductive workplace behaviors relative to job performance, training outcomes, and turnover; (f) neglect of the industry practice of deploying integrity scales alongside other valid predictors of employee outcomes; (g) the implication that integrity test publishers produce biased research results; (h) incomplete presentation of integrity tests' resistance to faking; and (i) omission of data indicating applicants' favorable response to integrity tests, the tests' lack of adverse impact, and the positive business impact of integrity testing. This commentary therefore offers an alternate perspective, addresses omissions and apparent inaccuracies, and urges a return to the use of diverse methodologies to evaluate the validity of integrity tests and other psychometric instruments.
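Item (c) refers to the standard psychometric correction for attenuation due to criterion unreliability, which in Hunter-Schmidt meta-analysis takes the textbook form (a sketch, not a quotation from either article):

    \hat{\rho}_{xy} = \frac{r_{xy}}{\sqrt{r_{yy}}}

where r_xy is the observed validity and r_yy is the criterion reliability. The dispute turns on which reliability estimate is inserted for r_yy. With a hypothetical observed r_xy = .21, assuming an interrater reliability of .49 yields a corrected validity of .21/.70 = .30, whereas assuming an internal-consistency reliability of .81 yields only .21/.90 ≈ .23. The numbers are illustrative, but they show why the choice of reliability estimate is consequential for the corrected validities both sides report.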
-
We respond to the Van Iddekinge, Roth, Raymark, and Odle-Dusseau (2012a) meta-analysis of the relationship between integrity test scores and work-related criteria, the earlier Ones, Viswesvaran, and Schmidt (1993) meta-analysis of those relationships, the Harris et al. (2012) and Ones, Viswesvaran, and Schmidt (2012) responses, and the Van Iddekinge, Roth, Raymark, and Odle-Dusseau (2012b) rebuttal. We highlight differences between the findings of the two meta-analyses by focusing on studies that used predictive designs, applicant samples, and non-self-report criteria. ⋯ Because neither meta-analysis documents in detail all of the effect size estimates it used, it is impossible to ascertain the bases for the differences in findings. We call for increased detail in meta-analytic reporting and for better information sharing among the parties producing and meta-analytically integrating validity evidence.
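To make the reporting point concrete, consider a bare-bones Hunter-Schmidt aggregation: both the weighted mean validity and the variance decomposition depend on the complete list of (r, N) pairs, so without that documentation readers cannot reproduce, let alone reconcile, the competing estimates. The Python sketch below uses invented effect sizes purely for illustration.

    # Bare-bones Hunter-Schmidt aggregation (hypothetical data): every
    # quantity below depends on the full list of (r, n) pairs, which is why
    # undocumented effect sizes make conflicting meta-analyses irreconcilable.
    effect_sizes = [(0.26, 240), (0.19, 150), (0.33, 95)]  # (r, n), invented

    total_n = sum(n for _, n in effect_sizes)
    mean_r = sum(r * n for r, n in effect_sizes) / total_n

    # observed variance of r across studies, weighted by sample size
    var_obs = sum(n * (r - mean_r) ** 2 for r, n in effect_sizes) / total_n

    # expected sampling-error variance (Hunter & Schmidt approximation)
    k = len(effect_sizes)
    avg_n = total_n / k
    var_err = (1 - mean_r ** 2) ** 2 / (avg_n - 1)

    print(f"mean r = {mean_r:.3f}, observed var = {var_obs:.4f}, "
          f"sampling-error var = {var_err:.4f}")

Changing a single (r, n) pair shifts every output, which is the sense in which full effect-size documentation is a precondition for reconciling the two meta-analyses.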