Bennett et al.'s S score
The S score is a chance-adjusted index of inter-rater agreement for categorical measurements. It estimates the agreement expected by chance under the assumption that all categories are equally probable, so that a rater assigning codes at random would select each of q categories with probability 1/q.

The same statistic has been proposed under a number of other names, including Guilford's (1963) G index, Maxwell's (1977) RE coefficient, Janson and Vegelius's (1979) C, Brennan and Prediger's (1981) kappa-n, Byrt, Bishop, and Carlin's (1993) PABAK, and Potter and Levine-Donnerstein's (1999) redefined kappa.
Zhao et al. (2012) described these assumptions using the following metaphor. Each rater places the same number of marbles for every category into an urn and, whenever classifying an item by chance, blindly draws one marble and assigns the item to the category drawn. Chance agreement therefore depends only on the number of categories, not on how often each category is actually used.
Bennett, Alpert, & Goldstein (1954) proposed the S score as a chance-adjusted index of agreement between two raters; generalized formulas that accommodate multiple raters, multiple categories, and weighted agreement are described by Gwet (2014).
- mSSCORE %Calculates S using vectorized formulas
Use these formulas with two raters and two (dichotomous) categories:
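One way to write them, using a 2×2 contingency table in which a and d count the items the two raters placed in the same category, b and c count the disagreements, and n = a + b + c + d:

```latex
p_o = \frac{a + d}{n}, \qquad
p_c = \frac{1}{2}, \qquad
S = \frac{p_o - p_c}{1 - p_c} = \frac{(a + d) - (b + c)}{n}
```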
Use these formulas with multiple raters, multiple categories, and any weighting scheme:
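One way to write the generalized formulas, following Gwet (2014): let q be the number of categories, w_kl the weight for the pair of categories k and l (the identity matrix for unweighted, nominal data), r_ik the number of raters who assigned item i to category k, r_i the number of raters who rated item i, and n' the number of items rated by at least two raters:

```latex
r^{*}_{ik} = \sum_{l=1}^{q} w_{kl}\, r_{il}, \qquad
p_o = \frac{1}{n'} \sum_{i \,:\, r_i \ge 2} \; \sum_{k=1}^{q}
      \frac{r_{ik}\,\bigl(r^{*}_{ik} - 1\bigr)}{r_i\,(r_i - 1)}, \qquad
p_c = \frac{1}{q^2} \sum_{k=1}^{q} \sum_{l=1}^{q} w_{kl}, \qquad
S = \frac{p_o - p_c}{1 - p_c}
```

A minimal MATLAB sketch of these generalized formulas (an illustrative implementation, not the repository's vectorized mSSCORE; the function and variable names here are assumptions), where CODES is an n-by-r matrix of category labels 1 through q with one row per item, one column per rater, and NaN for missing ratings:

```matlab
function S = s_score_sketch(CODES, W)
% S_SCORE_SKETCH  Illustrative S score for multiple raters, categories, and weights.
%   CODES is an n-by-r matrix (items by raters) of category labels 1..q,
%   with NaN marking missing ratings. W is an optional q-by-q weight matrix;
%   identity weights (nominal data) are used if it is omitted.

q = max(CODES(:));                    % number of categories (could also be supplied explicitly)
if nargin < 2, W = eye(q); end        % identity weights by default

po_num = 0;
n_prime = 0;                          % items rated by two or more raters
for i = 1:size(CODES, 1)
    ratings = CODES(i, ~isnan(CODES(i, :)));
    ri = numel(ratings);
    if ri < 2, continue; end          % items with fewer than two ratings are skipped
    n_prime = n_prime + 1;
    rik = histcounts(ratings, 1:(q + 1));   % raters per category for item i (1-by-q)
    rstar = W * rik';                        % weighted category counts (q-by-1)
    po_num = po_num + sum(rik' .* (rstar - 1)) / (ri * (ri - 1));
end

po = po_num / n_prime;               % observed (weighted) agreement
pc = sum(W(:)) / q^2;                % chance agreement under equally probable categories
S  = (po - pc) / (1 - pc);           % chance-adjusted S score
end
```

With two raters, identity weights, and two categories, this reduces to the simplified formulas above.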
- Bennett, E. M., Alpert, R., & Goldstein, A. C. (1954). Communication through limited response questioning. The Public Opinion Quarterly, 18(3), 303–308.
- Guilford, J. P. (1963). Preparation of item scores for the correlations between persons in a Q factor analysis. Educational and Psychological Measurement, 23(1), 13–22.
- Maxwell, A. E. (1977). Coefficients of agreement between observers and their interpretation. The British Journal of Psychiatry, 130, 79–83.
- Janson, S., & Vegelius, J. (1979). On generalizations of the G index and the phi coefficient to nominal scales. Multivariate Behavioral Research, 14(2), 255–269.
- Brennan, R. L., & Prediger, D. J. (1981). Coefficient Kappa: Some uses, misuses, and alternatives. Educational and Psychological Measurement, 41(3), 687–699.
- Byrt, T., Bishop, J., & Carlin, J. B. (1993). Bias, prevalence and kappa. Journal of Clinical Epidemiology, 46, 423–429.
- Potter, W. J., & Levine-Donnerstein, D. (1999). Rethinking validity and reliability in content analysis. Journal of Applied Communication Research, 27(3), 258–284.
- Zhao, X., Liu, J. S., & Deng, K. (2012). Assumptions behind inter-coder reliability indices. In C. T. Salmon (Ed.), Communication Yearbook (pp. 418–480). Routledge.
- Gwet, K. L. (2014). Handbook of inter-rater reliability: The definitive guide to measuring the extent of agreement among raters (4th ed.). Gaithersburg, MD: Advanced Analytics.