BIOSTATISTICS

Year: 2016 | Volume: 2 | Issue: 2 | Page: 217-219
Understanding the calculation of the kappa statistic: A measure of inter-observer reliability
Sidharth S Mishra, Nitika
Department of Community Medicine, School of Public Health, Postgraduate Institute of Medical Education and Research, Chandigarh, India
Correspondence Address:
Nitika, School of Public Health, Postgraduate Institute of Medical Education and Research, Chandigarh, India
Source of Support: None, Conflict of Interest: None
DOI: 10.4103/2455-5568.196883
It is common practice to assess the consistency of diagnostic ratings in terms of "agreement beyond chance." The kappa coefficient is a popular index of agreement for binary and categorical ratings. This article provides a stepwise approach to calculating the unweighted kappa statistic, supplemented with an example. The aim is to help health care personnel better understand the purpose of the kappa statistic and how to calculate it.
The following core competencies are addressed in this article: Medical knowledge.
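
For readers who want to see the arithmetic, the unweighted (Cohen's) kappa is kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement between the two raters and p_e is the proportion of agreement expected by chance from each rater's marginal totals. The short Python sketch below works through this calculation for a hypothetical 2 x 2 table of counts; the numbers are illustrative only and are not taken from the article's worked example.

    # Minimal sketch of the unweighted kappa calculation for two raters giving a
    # binary rating, using a hypothetical 2 x 2 table of counts (illustrative only).
    table = [[20, 5],   # rater A "yes": rater B "yes", rater B "no"
             [10, 15]]  # rater A "no":  rater B "yes", rater B "no"

    n = sum(sum(row) for row in table)  # total number of subjects rated

    # Observed agreement: proportion of subjects rated identically by both raters.
    p_o = (table[0][0] + table[1][1]) / n

    # Chance-expected agreement, computed from each rater's marginal totals.
    row_totals = [sum(row) for row in table]
    col_totals = [table[0][j] + table[1][j] for j in range(2)]
    p_e = sum(row_totals[i] * col_totals[i] for i in range(2)) / n ** 2

    # Kappa: agreement beyond chance, scaled by the maximum possible
    # agreement beyond chance.
    kappa = (p_o - p_e) / (1 - p_e)
    print(round(kappa, 3))  # 0.4 for the counts above

With these illustrative counts, observed agreement is 0.70 and chance agreement is 0.50, giving kappa = 0.40, i.e., moderate agreement beyond chance.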