Article Open access Peer-reviewed

Intercoder Reliability for Validating Conclusions Drawn from Open-Ended Interview Data

2000; SAGE Publishing; Volume: 12; Issue: 3; Language: English

DOI

10.1177/1525822x0001200301

ISSN

1552-3969

Authors

Karen S. Kurasaki

Topic(s)

Computational and Text Analysis Methods

Abstract

Intercoder reliability is a measure of agreement among multiple coders for how they apply codes to text data. Intercoder reliability can be used as a proxy for the validity of constructs that emerge from the data. Popular methods for establishing intercoder reliability involve presenting predetermined text segments to coders. Using this approach, researchers run the risk of altering meanings by lifting text from its original context, or making interpretations about the length of codable text. This article describes a set of procedures that was used to develop and assess intercoder reliability with free-flowing text data, in which the coders themselves determined the length of codable text segments. Content analysis of open-ended interview data collected from twenty third-generation Japanese American men and women generated an intercoder reliability of more than .80 for fifteen of the seventeen themes, an average agreement of .90 across all themes, and consistency among the coders in how they segmented coded text. The findings suggest that these procedures may be useful for validating the conclusions drawn from other qualitative studies using text data.
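The per-theme agreement figures the abstract reports (above .80 for most themes, .90 on average) can be computed as simple proportions of coding decisions on which two coders agree. The sketch below is not the article's code; it is a minimal illustration, with hypothetical theme names and data, of computing percent agreement per theme and averaging across themes when each coder records presence (1) or absence (0) of a theme for each text segment.

```python
# Minimal sketch (hypothetical data): per-theme intercoder agreement as the
# proportion of segments on which two coders made the same presence/absence
# judgment, plus the mean agreement across all themes.

def percent_agreement(coder_a, coder_b):
    """Proportion of segments where two coders agree (0/1 codes)."""
    if len(coder_a) != len(coder_b):
        raise ValueError("coders must rate the same segments")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def average_agreement(theme_codes):
    """Per-theme scores and their mean; theme_codes maps theme -> (a, b)."""
    scores = {t: percent_agreement(a, b) for t, (a, b) in theme_codes.items()}
    return scores, sum(scores.values()) / len(scores)

# Hypothetical example: two coders, two themes, five segments each.
themes = {
    "theme_one": ([1, 1, 0, 1, 0], [1, 1, 0, 1, 1]),  # agree on 4 of 5
    "theme_two": ([0, 1, 1, 0, 0], [0, 1, 1, 0, 0]),  # agree on all 5
}
per_theme, overall = average_agreement(themes)
print(per_theme["theme_two"])  # 1.0
print(overall)                 # 0.9
```

Note that raw percent agreement does not correct for chance agreement; chance-corrected statistics such as Cohen's kappa are often reported alongside it for this reason.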
