Editor's Spotlight/Take 5: Readability of Orthopaedic Patient-reported Outcome Measures: Is There a Fundamental Failure to Communicate?
2017; Lippincott Williams & Wilkins; Volume: 475; Issue: 8; Language: English
10.1007/s11999-017-5383-9
ISSN: 1528-1132
Abstract

When the purpose of treatment is to help the patient by reducing symptoms and improving quality of life, there are some questions that only the patient can answer. As we seek each patient's answers, it turns out that the way we ask our questions may be as important as the questions themselves. How good are we at asking our patients these questions? Do our patients even understand them? A recent study of orthopaedic patient-reported outcome measures (PROMs) reported that most such materials are written at a level that is incomprehensible to the average adult reader [2]. The problem of hard-to-understand content is, in fact, even more widespread than that. In one study, 97% of the reading material on the American Academy of Orthopaedic Surgeons website was above the 6th-grade reading level [4]. Similar findings have been reported across disciplines and in other kinds of patient-education materials [3, 5-7, 10, 13]. With increasing emphasis on PROMs, the incomprehensibility of medical information written for our patients calls into question the reliability of the feedback we receive.

These concerns prompted Dr. Brent A. Ponce and his team at the University of Alabama at Birmingham to study the readability of PROMs used in orthopaedic surgery. They compared their results against both the Centers for Medicare & Medicaid Services' (CMS) and the NIH's recommendations for reading grade levels. Given the reporting to the contrary [2, 4], Dr. Ponce's group offers a surprising finding: The vast majority of PROMs are, in fact, written at an acceptable grade level. However, a small number of PROMs remain beyond the reading comprehension of most patients. Dr. Ponce and his coauthors went beyond analysis; they showed how deliberate editing can improve the readability of medical information for patients. Editing improved all of the difficult-to-read PROMs, and their suggestions are generalizable to printed information for patients. Using simple steps and available tools, we can apply the findings of Dr. Ponce's work to the material we produce for our patients, from the development of future PROMs to printed discharge instructions. Please join me for the Take-5 interview with Brent A. Ponce MD, as we explore the important topic of communication with patients.

Take-5 Interview with Brent A. Ponce MD, senior author of “Readability of Orthopaedic Patient-reported Outcome Measures: Is There a Fundamental Failure to Communicate?”

M. Daniel Wongworawat MD: You are familiar with previous reports [2, 4] showing that orthopaedic patient material often exceeds the reading level of those patients. Why do you think your findings are so different?

Brent A. Ponce MD: Assessing readability is challenging, especially in medicine. Physicians generally communicate as they would with peers instead of adopting the perspective of a patient who may not have the same education. Readability algorithms were specifically designed for use in the military, educational, or business sectors, not in medicine. Additionally, the currently used readability algorithms have been around for many years, and they each use different formulas that emphasize different items to assess text readability. Using a lone metric, as prior orthopaedic studies did, fails to consider the numerous accepted, alternative ways to assess readability. For example, one algorithm may use the number of “complex words” in a text to determine grade level, while another may be based upon sentence length. Neither is wrong, but neither really considers all components of the text. To use a baseball analogy, Dave Kingman hit a lot of home runs, but he is not in the Hall of Fame; many other players with fewer home runs are enshrined at Cooperstown. This is because home runs are but one metric for assessing performance on the baseball field; a more-comprehensive view generally is called for. Our study attempted to combine the readability metrics for a more accurate assessment of readability. While we acknowledge that this has some inherent problems, we feel it is the best method for readability assessment with the tools currently available. Interestingly, our study highlighted a need for the academic community to develop better readability metrics for medical texts.
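For readers who want to experiment with this combined approach, here is a minimal Python sketch of the general idea: compute several classic published grade-level formulas (Flesch-Kincaid, SMOG, and Coleman-Liau are used here as examples) and take their median. The exact set of algorithms the study combined is described in the manuscript, and the syllable counter below is a rough heuristic, so the numbers will only approximate any published median grade level.

import re
import statistics

def count_syllables(word):
    # Crude heuristic: one syllable per group of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def tokenize(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return sentences, words

def flesch_kincaid_grade(text):
    # FKGL = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences, words = tokenize(text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

def smog_grade(text):
    # SMOG = 1.0430*sqrt(polysyllables * 30/sentences) + 3.1291
    # (SMOG was calibrated on 30-sentence samples; very short texts skew it.)
    sentences, words = tokenize(text)
    poly = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * (poly * 30 / len(sentences)) ** 0.5 + 3.1291

def coleman_liau_grade(text):
    # CLI = 0.0588*L - 0.296*S - 15.8, with L = letters per 100 words
    # and S = sentences per 100 words.
    sentences, words = tokenize(text)
    letters = sum(sum(c.isalpha() for c in w) for w in words)
    L = letters / len(words) * 100
    S = len(sentences) / len(words) * 100
    return 0.0588 * L - 0.296 * S - 15.8

def median_grade_level(text):
    # Median across several formulas, in the spirit of the paper's MGL.
    return statistics.median([flesch_kincaid_grade(text),
                              smog_grade(text),
                              coleman_liau_grade(text)])

print(round(median_grade_level(
    "Your knee gives way at times with light sports or modest work."), 1))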
Dr. Wongworawat: You have done a remarkable job of editing those instruments that are too difficult to read. What approaches can you highlight for us?

Dr. Ponce: The take-away point regarding PROM editing is that readability improvement is possible and predictable. Previous studies [11, 12] have shown that editing is beneficial and effective with other forms of patient-health documents, so, logically, we wanted to know whether this success could be demonstrated with PROMs as well. Not all steps were necessary for all PROMs. While forming shorter sentences and using the active voice were useful in some PROMs, these steps were not universally needed. In contrast, substituting smaller, shorter, simpler words was a broadly applicable step that yielded satisfactory results. If someone wished to edit PROMs in one step, removing difficult, technical language would likely be the most-appropriate action. Lastly, it should be noted that these editing steps are not original; they have been recommended by CMS [8], and we simply followed those recommendations.

Dr. Wongworawat: Applying your approaches to editing documents to improve readability, show us an example where you applied those approaches (sentence length, active voice, and so on) and highlight the before and after.

Dr. Ponce: In the example below, we modify an original sentence with three editing techniques: (1) using the active voice, (2) making the sentences shorter, and (3) removing technical terms. The median grade level (MGL) using these steps goes from the 9th grade to just below the 5th grade.

Original (MGL 9.1): Occasional giving way with light sports or moderate work. Able to compensate but limits vigorous activities, sports, or heavy work not able to cut or twist suddenly.

Active Voice (MGL 7.3): Your knee occasionally gives way with light sports or moderate work. You are able to compensate, but with limits of vigorous activities, sports, or heavy work. You are not able to cut or twist suddenly.

Shorter Sentences (MGL 7.0): Your knee occasionally gives way with light sports or moderate work. You are able to compensate. You limit vigorous activities, sports, or work. You cannot cut or twist suddenly.

Removal of Technical Terms (MGL 4.7): Your knee gives way at times with light sports or modest work. You are able to adapt. You limit robust activities, sports, or work. You cannot cut or twist quickly.

Yes, arguments may arise over the words chosen as replacements, but overall, we believe that the final version is more easily “readable,” and in turn understood, than the original. We also attempted to show this in Appendix 2 of the manuscript.
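A quick way to check edits like these on your own material is to run the before-and-after text through an off-the-shelf readability library. The sketch below uses the third-party Python package textstat as an illustration; it is not a tool named in the study, and its scores will not exactly reproduce the MGL values quoted above, since implementations of these formulas differ in their details.

# pip install textstat
import textstat

original = ("Occasional giving way with light sports or moderate work. "
            "Able to compensate but limits vigorous activities, sports, or "
            "heavy work not able to cut or twist suddenly.")
edited = ("Your knee gives way at times with light sports or modest work. "
          "You are able to adapt. You limit robust activities, sports, or work. "
          "You cannot cut or twist quickly.")

for label, text in (("original", original), ("edited", edited)):
    print(label,
          "FKGL:", textstat.flesch_kincaid_grade(text),
          "Gunning Fog:", textstat.gunning_fog(text))

In practice, the absolute grade level matters less than the direction of change: a successful edit should score consistently lower across formulas.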
Dr. Wongworawat: There are many readability tests to choose from, and you compared reading grade levels between these tests. For the writer, what practical advice do you have for choosing one or two? Which readability test should I choose if I want to analyze material that I have written for my patients?

Dr. Ponce: There are numerous readability tests available, but a common question is which test is most applicable to the healthcare field. Many articles have debated this topic, drawing differing conclusions [1, 9, 14]. The Gunning Fog Index (GFI) seems to be an applicable measure for assessing medical texts. Developed for use with business publications and journals, the GFI assesses average sentence length and the percentage of complex words (those with three or more syllables). However, it is essential to pair it with another algorithm that measures different aspects of PROMs. For this, the Automated Readability Index, developed by the US Air Force for the assessment of technical documents, would be an appropriate complement, since it uses average sentence length and average word length in its calculations. Pairing the two allows the assessment of multiple aspects of a PROM via two unique readability algorithms with partial overlap for retrospective quality control (see the sketch following this interview). Additionally, as stated in the paper, we caution against using one lone algorithm because of the skew that can result, as seen in prior reports concerning PROMs.

Dr. Wongworawat: MGLs are related to sentence and word structure, but having a low MGL does not necessarily mean better understandability. What are the limits of computer analysis, and when is it necessary to test written material with real people?

Dr. Ponce: This is a subtle but critical point, as understandability and readability are not synonymous. The readability equations cannot quantify a reader's ability to comprehend the text, only how easily it can be read. Understandability incorporates subjective variables such as font size, sentence syntax, and the patient's ability to digest what was read. Equations cannot quantify this because they do not test understanding. While understandability is an individualized measure that varies between patients, readability is a defined calculation used to assess document complexity on a broad scale (a 40,000-foot view, so to speak). We believe that once issues with readability are addressed, understandability should then be tested.
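As referenced above, here is a minimal sketch of the GFI/ARI pairing Dr. Ponce recommends, implemented directly from the published formulas; the regex tokenizer and vowel-group syllable heuristic are simplifications, so results will differ slightly from dedicated readability software.

import re

def split_text(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return sentences, words

def gunning_fog(text):
    # GFI = 0.4 * (words/sentences + 100 * complex_words/words),
    # where complex words have three or more syllables.
    sentences, words = split_text(text)
    complex_words = sum(
        1 for w in words if len(re.findall(r"[aeiouy]+", w.lower())) >= 3)
    return 0.4 * (len(words) / len(sentences) + 100 * complex_words / len(words))

def automated_readability_index(text):
    # ARI = 4.71 * (characters/words) + 0.21 * (words/sentences) - 21.43
    sentences, words = split_text(text)
    chars = sum(len(w) for w in words)
    return 4.71 * chars / len(words) + 0.21 * len(words) / len(sentences) - 21.43

text = ("You limit robust activities, sports, or work. "
        "You cannot cut or twist quickly.")
print("GFI:", round(gunning_fog(text), 1),
      "ARI:", round(automated_readability_index(text), 1))

Because the two formulas weight different text features (syllable complexity for the GFI, character counts for the ARI), a large disagreement between them is itself a useful signal that a document deserves a closer look.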