BIOSTATISTICS
Year : 2016  |  Volume : 2  |  Issue : 2  |  Page : 217-219

Understanding the calculation of the kappa statistic: A measure of inter-observer reliability


Department of Community Medicine, School of Public Health, Postgraduate Institute of Medical Education and Research, Chandigarh, India

Correspondence Address:
Nitika
School of Public Health, Postgraduate Institute of Medical Education and Research, Chandigarh
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/2455-5568.196883


It is common practice to assess the consistency of diagnostic ratings in terms of "agreement beyond chance." The kappa coefficient is a popular index of agreement for binary and categorical ratings. This article focuses on the calculation of the unweighted kappa statistic, providing a stepwise approach supplemented with an example. The aim is to help health care personnel better understand the purpose of the kappa statistic and how to calculate it. The following core competencies are addressed in this article: Medical knowledge.
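
For context, the unweighted (Cohen's) kappa is defined as kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e is the agreement expected by chance from the raters' marginal totals. The short Python sketch below works through that arithmetic for a hypothetical 2x2 agreement table; the counts and variable names are illustrative and are not taken from the article itself.

# Hypothetical 2x2 agreement table for two raters (illustrative counts only):
#                 Rater B: yes   Rater B: no
# Rater A: yes        a = 40        b = 10
# Rater A: no         c = 5         d = 45
a, b, c, d = 40, 10, 5, 45
n = a + b + c + d

# Observed agreement: proportion of cases where both raters give the same rating
p_o = (a + d) / n

# Chance-expected agreement, from the marginal totals of each rater
p_both_yes = ((a + b) / n) * ((a + c) / n)   # both rate "yes" by chance
p_both_no = ((c + d) / n) * ((b + d) / n)    # both rate "no" by chance
p_e = p_both_yes + p_both_no

# Unweighted (Cohen's) kappa: agreement beyond chance
kappa = (p_o - p_e) / (1 - p_e)
print(f"p_o = {p_o:.3f}, p_e = {p_e:.3f}, kappa = {kappa:.3f}")

With these illustrative counts, p_o = 0.85, p_e = 0.50, and kappa = 0.70, i.e., 70% of the agreement not attributable to chance is achieved.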


