Usability of quality measures for online health information: Can commonly used technical quality criteria be reliably assessed?
2005; Elsevier BV; Volume: 74; Issue: 7-8; Language: English
DOI: 10.1016/j.ijmedinf.2005.02.002
ISSN: 1872-8243
Authors: Elmer V. Bernstam, Smitha Sagaram, Muhammad F. Walji, Craig Johnson, Funda Meric‐Bernstam
Topic(s): Mobile Health and mHealth Applications
Abstract:
Purpose: Many criteria have been developed to rate the quality of online health information. To evaluate quality effectively, consumers must use quality criteria that can be reliably assessed; however, few instruments have been validated for inter-rater agreement. We therefore assessed the degree to which two raters could reliably score 22 popularly cited quality criteria on a sample of 42 complementary and alternative medicine Web sites.
Methods: We determined the degree of inter-rater agreement by calculating the percentage agreement, Cohen's kappa, and the prevalence- and bias-adjusted kappa (PABAK).
Results: Our uncalibrated analysis showed poor inter-rater agreement on eight of the 22 quality criteria. We therefore created operational definitions for each criterion, decreased the number of assessment choices, and defined where to look for the information. As a result, 18 of the 22 quality criteria were reliably assessed (inter-rater agreement ≥ 0.6).
Conclusions: We conclude that even with precise definitions, some commonly used quality criteria cannot be reliably assessed. However, inter-rater agreement can be improved with precise operational definitions.
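The abstract names three agreement statistics without defining them. The sketch below is a minimal illustration of their standard definitions for two raters, not the authors' actual analysis code; the function name agreement_measures, the example ratings, and the "disclosure of authorship" criterion are all hypothetical.

```python
import numpy as np

def agreement_measures(r1, r2):
    """Inter-rater agreement for two raters scoring the same items.

    Returns observed percentage agreement, Cohen's kappa, and the
    prevalence- and bias-adjusted kappa (PABAK) over k rating categories.
    """
    r1, r2 = np.asarray(r1), np.asarray(r2)
    categories = np.union1d(r1, r2)
    k = len(categories)

    # Observed agreement: fraction of items both raters scored identically.
    p_o = float(np.mean(r1 == r2))

    # Chance agreement: for each category, the product of the two raters'
    # marginal probabilities of using it, summed over categories.
    p_e = float(sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories))

    # Cohen's kappa: agreement beyond chance, normalized by its maximum.
    kappa = (p_o - p_e) / (1.0 - p_e) if p_e < 1.0 else 1.0

    # PABAK assumes equal category prevalence and no rater bias, which
    # fixes the chance term at 1/k: PABAK = (k * p_o - 1) / (k - 1).
    # For binary ratings this reduces to 2 * p_o - 1.
    pabak = (k * p_o - 1.0) / (k - 1.0)

    return p_o, kappa, pabak

# Hypothetical example: two raters judging a "disclosure of authorship"
# criterion as present (1) or absent (0) on ten sites.
rater1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater2 = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
print(agreement_measures(rater1, rater2))  # (0.8, kappa ≈ 0.52, PABAK = 0.6)
```

In this made-up example the PABAK of 0.6 would just meet the paper's reliability threshold (inter-rater agreement ≥ 0.6), illustrating why PABAK can exceed Cohen's kappa when category prevalence is skewed.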