Article (open access)

Interrater reliability between in-person and telemedicine evaluations in obstructive sleep apnea

2021; American Academy of Sleep Medicine; Language: English

10.5664/jcsm.9220

ISSN: 1550-9397

Authors

Michael Yurcheshen, Wilfred R. Pigeon, Carolina Z. Marcus, Jonathan A. Marcus, Susan Messing, Kevin Nguyen, Jennifer Marsella


Abstract

Published online: July 1, 2021. https://doi.org/10.5664/jcsm.9220

Correspondence: Michael E. Yurcheshen, MD, FAASM, UR Sleep Center, 2337 South Clinton Avenue, Rochester, NY 14610.

Author affiliations: University of Rochester School of Medicine and Dentistry, Rochester, New York (Yurcheshen, Pigeon, C. Z. Marcus, J. A. Marcus, Messing, Marsella); Saddleback Medical Center, Laguna Hills, California (Nguyen).

ABSTRACT

Study Objectives: We examined how telemedicine evaluation compares to face-to-face evaluation in identifying risk for sleep-disordered breathing.

Methods: This was a randomized interrater reliability study of 90 participants referred to a university sleep center. Participants were evaluated by a clinician investigator seeing the patient in person, then randomized to a second clinician investigator who performed a patient evaluation online via audio-video conferencing.
The primary comparator was pretest probability for obstructive sleep apnea.

Results: The primary outcome comparing pretest probability for obstructive sleep apnea showed a weighted kappa value of 0.414 (standard error 0.090, P = .002), suggesting moderate agreement between the 2 raters. Kappa values for our secondary outcomes varied widely but were lower for physical exam findings than for historical elements.

Conclusions: Evaluation of pretest probability for obstructive sleep apnea via telemedicine has moderate interrater correlation with in-person assessment. The low interrater reliability for physical exam elements suggests that telemedicine assessment for obstructive sleep apnea could be hampered by a suboptimal physical exam. Employing standardized scales for obstructive sleep apnea during telemedicine evaluations may help with risk stratification and ultimately lead to more tailored clinical management.

Citation: Yurcheshen ME, Pigeon W, Marcus CZ, et al. Interrater reliability between in-person and telemedicine evaluations in obstructive sleep apnea. J Clin Sleep Med. 2021;17(7):1435–1440.

BRIEF SUMMARY

Current Knowledge/Study Rationale: Telemedicine is a promising technology that is now widely used in the practice of sleep medicine. The accuracy of telemedicine compared to in-person assessment in evaluating patients with obstructive sleep apnea is still unknown. The current study was an interrater reliability study comparing a telemedicine evaluator to an in-person evaluator in assessing pretest probability for obstructive sleep apnea in a community population.

Study Impact: The telemedicine and in-person investigators had moderate agreement in evaluating pretest probability for mild, moderate, or severe obstructive sleep apnea.
The current study underscores the need to consider standardized processes that optimize telemedicine and support an online clinician's ability to accurately assess for sleep-disordered breathing.

INTRODUCTION

Telemedicine is changing health care delivery across clinical medicine. There is increasing demand for sleep medicine specialists, and online consultants have the potential to deliver care despite distance, transportation concerns, or pandemic conditions. Patients, sleep medicine providers, insurance carriers, and industry all have a stake in the development of telemedicine services.

Obstructive sleep apnea (OSA) is a common and expensive medical condition, estimated to afflict 2% to 20% of the United States population, at an annual cost of nearly 150 billion dollars to the US economy.1–3 Suspicion for OSA is based largely on clinical history, with some additional detail gained from physical examination. Survey tools can also play a role in assessing for OSA.4–7 Because the condition is common, and because it can be evaluated by history and examination suited to remote video-audio communication, OSA is a good candidate for telemedicine evaluation. Based on this promise, industry groups are looking to optimize this technology.8

Telemedicine consultation has been studied in various disease processes, including Parkinson disease, emergency ocular disorders, and acute pediatric illnesses.9–13 This type of assessment has led to substantial satisfaction with medical care, as well as reduced travel time and distance.9 Regarding telemedicine management of OSA, the literature lacks large, community-based outcome studies, but a number of studies suggest promise in the technology.
In 1 study, over 60% of OSA patients felt comfortable engaging in virtual consultations.14 Likewise, a community population showed equal satisfaction with in-person and telemedicine evaluations for their sleep condition.15 One study showed a small decrease in continuous positive airway pressure compliance among participants managed by teleconsultation.16 In contrast, participants using continuous positive airway pressure who were recruited from a veteran population had similar functional outcomes with telemedicine compared to in-person management.17

These studies are encouraging but have limitations. Some lacked a control group, and others had limited numbers of participants. Even the most compelling of these studies has limited generalizability, as it was conducted in a veteran population only.17 Although the sleep field is rapidly moving toward adoption of telemedicine, there are still gaps in our understanding of this care model. For instance, there are no community studies examining the accuracy of remote assessments compared to the gold standard of in-person evaluation.18 Many treatment decisions, including the decision to order at-home or in-lab sleep testing, are based on a patient's pretest probability for sleep apnea.19 Telemedicine will be most useful if patient assessments are sufficiently accurate to drive this type of clinical decision-making.

The present randomized clinical trial aimed to compare telemedicine to in-person assessment for sleep-disordered breathing. We hypothesized that there would be high interrater reliability between telemedicine and in-person assessors in determining pretest probability for OSA.
Secondary aims included interrater reliability for historical and physical exam elements suggestive of this condition, as well as a comparison between raters in interpreting home sleep apnea testing.

METHODS

Study design and randomization

We conducted a randomized, blinded, interrater reliability study comparing the impressions of a clinician seeing a patient in person to those of a clinician seeing the same patient via telemedicine. Participants were recruited between March 2017 and January 2019. The study design is outlined in Figure 1. The design compares assessments between in-person and telemedicine physicians, rather than between a single telemedicine rater at different time points or between 2 different telemedicine raters. This design was chosen as a compromise, given the rapid pace of clinical assessment and testing and the impracticality of purposely delaying a repeat assessment to minimize evaluator memory bias.

Figure 1: Study design.

Three American Board of Medical Specialties sleep-board eligible/certified clinicians (M.E.Y., C.Z.M., J.M.) underwent group training to familiarize themselves with the study protocol and outcome assessments. During this training, the raters reviewed 20 theoretical case histories to reach consensus about pretest probability for OSA. Training included a description of the physical exam but did not include simulated video footage of the oral cavity. We elected to use 3 different raters to expedite recruitment (each rater was responsible for recruiting roughly one-third of the cohort) and to minimize bias that may have stemmed from using only 2 raters.

Consenting participants were asked to complete a demographic profile and an Epworth Sleepiness Scale during their in-person clinical encounter. The in-person investigator reviewed the patient's electronic medical record and conducted a history and physical examination.
The investigator completed a clinical impression battery that included responses relating to the primary and secondary endpoints. Upon consent, participants were randomized to 1 of 2 other raters using a randomized block design of size 4, and an online clinical encounter was conducted through an audiovisual conferencing application (Zoom; San Jose, CA). The telemedicine encounter was scheduled within 5 business days. This tele-evaluation included record review, history, and a brief, noninvasive examination of the oral cavity using the patient's web-enabled camera and an incandescent light source. The telemedicine investigator was blinded to the in-person assessment, and blinding was verified by a third-party audit of a sample of 5 study participants. The participant completed a repeat Epworth Sleepiness Scale around the time of the telemedicine encounter and completed a survey questionnaire regarding their experience with telemedicine.

For participants who completed home sleep apnea testing (HSAT), a type 3 home sleep testing device [Braebon Medibyte (Kanata, ON, Canada) or Respironics Alice NightOne (Murrysville, PA)] was used. The devices recorded the following signals: snoring, body position, heart rate, oxygen saturation, airflow (nasal pressure), and thoracoabdominal movement. The studies were scored by an experienced New York State-licensed sleep technologist who was unaware of participant randomization. Respiratory scoring was performed in accordance with published criteria.20 Each study was interpreted independently by both the in-person and telemedicine clinicians, and an impression of mild, moderate, or severe sleep apnea (or uninterpretable study) was recorded.

The protocol for this study was approved by the University of Rochester Institutional Review Board, complied with the ethical principles of the Declaration of Helsinki, and was in accordance with the International Conference on Harmonization Good Clinical Practice guideline.
All participants provided written informed consent before study enrollment.

Participants

Eligible participants were recruited from general referrals to the University of Rochester Sleep Center. Men and women 30–70 years old referred for any reason were asked to participate. Potential participants needed sufficient computer literacy, access to high-speed internet (minimum 384 kbps), and a computing device with an appropriate video camera (minimum 640 × 480 resolution at 30 frames/s) and microphone. Patients with dementia, severe psychiatric or developmental illness, or complete hearing or visual loss, or who were not fluent in English, were excluded. Participants conducted the telemedicine evaluation in their domicile or another private area. Participants were offered a $25 gift card if they completed both phases of the study. All participants were also provided an incandescent penlight, by which the evaluator examined the oral cavity. This incandescent light source helped to minimize "white out" contrast issues in the oral cavity seen when brighter LED lighting is used.

Measures

The study's primary outcome was pretest probability for sleep apnea (high, moderate, or low). Secondary outcome measures included interrater reliability for both historical and physical exam findings, including snoring volume (none, mild, moderate, severe, not sure), witnessed apneas (yes, no, not sure), degree of daytime sleepiness (none, mild, moderate, severe), modified Mallampati class (1–4) of airway crowding, overjet distance of the maxillary-mandibular relationship (< 0 mm, 0–3 mm, > 3 mm), tonsil size (0–4+), and severity of sleep apnea based on HSAT (inconclusive, mild, moderate, severe).

Statistical methods

Sample size: A sample size of 80 participants was selected based on 80% power at a significance level of .05 to detect a kappa of 0.60 in a test of kappa > 0.4, assuming an overall clinic population risk profile of 30% low, 30% moderate, and 40% high pretest probability for OSA.
We recruited 90 participants for the study, based on an assumed 10% dropout.

Statistical analysis: Kappa and weighted kappa were evaluated for in-person vs telemedicine evaluations. Weighted kappa used the Cicchetti-Allison weighting scheme. For this study, there were no missing data from our raters. When score values are relatively evenly distributed along the ordinal scale, percent agreement and kappa demonstrate concordance; when some categories are very sparsely populated, kappa values can drop precipitously. To evaluate the amount of agreement, we followed the suggestions of Landis and Koch, where < 0 is poor agreement, 0.0–0.20 is slight agreement, 0.21–0.40 is fair agreement, 0.41–0.60 is moderate agreement, 0.61–0.80 is substantial agreement, and 0.81–1.00 is almost perfect agreement.21

RESULTS

Ninety participants enrolled in the study, and 58 completed the entire protocol. The CONSORT (Consolidated Standards of Reporting Trials) diagram in Figure 2 depicts participant progress throughout the study. Based on the post-hoc audit of the electronic health record, none of the telemedicine investigators accessed the evaluation of the primary investigator during the embargoed timeframe between the 2 evaluations. One participant discovered the HSAT results and shared them with the telemedicine investigator. Since the telemedicine investigator did not access the in-person investigator's assessments, we decided to include this participant in the analysis. The only reason for noncompletion was loss to follow-up, with the participant not showing up for the telemedicine appointment (32 participants). Thirty-seven participants underwent HSAT evaluation. No evaluation was stopped for technical concerns.

Figure 2: CONSORT (Consolidated Standards of Reporting Trials) diagram. HSAT = home sleep apnea testing.

Participant characteristics are included in Table 1. The 58 completers had a mean age of 49.9 years.
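As background to the agreement figures reported in this section, the statistic described in the statistical analysis (Cohen's weighted kappa with Cicchetti-Allison linear or quadratic weights, interpreted against the Landis and Koch benchmarks) can be sketched as follows. This is a minimal illustration with hypothetical ratings, not the study data; the function names are our own.

```python
from collections import Counter

def weighted_kappa(r1, r2, categories, weights="linear"):
    """Cohen's weighted kappa for two raters on an ordinal scale.

    Cicchetti-Allison (linear) weights: w_ij = 1 - |i - j| / (k - 1);
    quadratic weights: w_ij = 1 - (|i - j| / (k - 1)) ** 2.
    """
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)

    def w(i, j):
        d = abs(i - j) / (k - 1)
        return 1 - (d * d if weights == "quadratic" else d)

    # Observed weighted agreement across rating pairs.
    joint = Counter((idx[a], idx[b]) for a, b in zip(r1, r2))
    po = sum(w(i, j) * count / n for (i, j), count in joint.items())
    # Chance-expected weighted agreement from each rater's marginals.
    m1, m2 = Counter(idx[a] for a in r1), Counter(idx[b] for b in r2)
    pe = sum(w(i, j) * (m1[i] / n) * (m2[j] / n)
             for i in range(k) for j in range(k))
    return (po - pe) / (1 - pe)

def landis_koch(kappa):
    """Qualitative agreement benchmarks of Landis and Koch (1977)."""
    if kappa < 0:
        return "poor"
    for cut, label in [(0.20, "slight"), (0.40, "fair"), (0.60, "moderate"),
                       (0.80, "substantial"), (1.00, "almost perfect")]:
        if kappa <= cut:
            return label
    return "almost perfect"

# Hypothetical pretest-probability ratings for 8 participants (not study data).
scale = ["low", "moderate", "high"]
in_person = ["high", "moderate", "low", "high", "moderate", "low", "high", "high"]
telemed = ["high", "high", "low", "moderate", "moderate", "low", "high", "moderate"]
kw = weighted_kappa(in_person, telemed, scale, weights="quadratic")
print(round(kw, 3), landis_koch(kw))
```

For context, the study's primary result (quadratic weighted kappa 0.414) maps to "moderate" under these benchmarks, and the HSAT-interpretation result (0.899) maps to "almost perfect".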
Of enrolled participants, 11% were African American, 4% were Hispanic, and 1% were Asian American. Of completing participants, 7% were African American, 4% were Hispanic, and 0% were Asian American. The baseline Epworth Sleepiness Scale score was 9.4.

Table 1: Participant characteristics.

  Characteristic                         All (n = 90)    Completers (n = 58, 65%)
  Age (mean ± SD), y                     48.84 ± 8.04    49.93 ± 8.10
  Female, %                              53.3            55.2
  Race/ethnicity, %
    Asian                                1.1             0.0
    Black/African American               11.1            6.9
    Hispanic                             4.4             3.4
    White/Caucasian                      83.3            89.7
  Income, %
    < $50,000                            20.0            19.0
    $50,000–100,000                      33.3            37.9
    > $100,000                           36.7            34.5
    Preferred not to answer              10.0            8.6
  Average business days to enrollment    n/a             4.98 ± 2.98
  Average days to enrollment             n/a             6.48 ± 4.68

  n/a = not available, SD = standard deviation.

Of those who completed both appointments, 13 participants completed their telemedicine appointment later than the 5-business-day target window (ranging from 6–24 days). Any delay in the telemedicine appointment was due to patient factors/preferences and rescheduling.

Statistics about patient and evaluator satisfaction with the process, timing, and duration of the telemedicine appointment are in preparation for another manuscript and are not reported here.

Interrater reliability

The results for our primary and secondary outcomes are included in Table 2, including percentage agreement, kappa, and weighted kappa where appropriate.

Table 2: Interrater reliability of in-person vs telemedicine physician evaluations for obstructive sleep apnea.

  Evaluation                                          % Agreement   Kappa (SE)      Weighted Kappa (SE)   P Value
  Pretest probability of OSA (low, moderate, high)    53.44         0.286 (.100)    0.414 (.090)          .002
  Level of daytime sleepiness                         44.83         0.225 (.092)    0.391 (.083)          .004
  Snoring at maximum level                            63.16         0.354 (.107)    0.353 (.109)          .0002
  Apneas witnessed by third party                     77.43         0.628 (.086)    0.702 (.079)          < .0001
  Apneas witnessed by third party (Y/N)               91.30         0.825 (.084)    —                     < .0001
  Modified Mallampati score                           48.28         0.179 (.093)    0.200 (.091)          .067
  Tonsils (Y/N)                                       64.29         –0.134 (.105)   —                     .326
  Overjet                                             82.46         –0.069 (.022)   –0.044 (.014)         .496
  After reviewing home sleep apnea test
    (mild, moderate, severe)                          91.89         0.872 (.070)    0.899 (.056)          < .0001

  OSA = obstructive sleep apnea, SE = standard error, Y/N = yes/no.

Based on a sample size of all protocol completers, the quadratic weighted kappa value was 0.414 [standard error (SE) 0.090, P = .002] in determining pretest probability of OSA based on clinical evaluation. This value is consistent with moderate agreement between evaluators. In a post-hoc analysis, when the sample categories were compressed to high (combining the high- and moderate-risk categories) and low pretest probability, the linear weighted kappa value was calculated at 0.28 (SE 0.17, 95% confidence interval 0–0.62). In this case, the value suggests fair agreement between evaluators.

Of our secondary clinical endpoints, the historical element of witnessed apneas had the highest weighted kappa (0.702, SE 0.079, P < .0001), and assessment of the physical exam finding of overjet had the lowest weighted kappa but did not reach statistical significance (–0.044, SE 0.014, P = .496).

Regarding home sleep testing, based on a sample size of 37 participants (all HSAT completers with dual impressions), we calculated a weighted kappa value of 0.899 (SE 0.056, P < .0001) in determining the severity of OSA based on home sleep testing. This value is consistent with almost perfect agreement between evaluators.

DISCUSSION

Although telemedicine is a promising tool for the delivery of sleep care, its accuracy compared to in-person evaluation has been uncertain. The present study is the first to evaluate the accuracy of telemedicine in determining pretest probability for obstructive sleep apnea in a community population.

Our results show moderate agreement between an in-person and a telemedicine evaluator in determining pretest probability for obstructive sleep apnea. A much higher level of agreement was noted for our secondary endpoint of witnessed apneas, but agreement was low for all of the elements of physical examination.
The agreement between raters in ultimately determining the degree of sleep apnea when reviewing HSAT results was almost perfect, according to published criteria.21

Tele-sleep-medicine is becoming increasingly visible, and adoption is happening quickly in the midst of the coronavirus pandemic. In a mid-decade review of patient attitudes toward sleep telemedicine, 63% of respondents surveyed stated they would be comfortable with, or willing to try, telemedicine visits for their sleep appointments.14 When considering the shortage of sleep medicine providers, and the time, expense, and safety of traveling to and conducting in-person appointments, telemedicine evaluation is helping to improve access. It is important that the sleep field continue to optimize both the technology and the clinical standards for this tool.

Pretest probability for sleep apnea was selected as the primary endpoint for this study because it is a major determinant in developing evaluation and management plans.19 Uncomplicated patients with moderate or high pretest probability for significant OSA may be well suited for home testing, whereas patients with low pretest probability may not be referred for testing at all. Still other patients may have a significant pretest probability for mild sleep apnea, for whom in-lab testing may be more appropriate. As such, pretest probability drives clinical decision-making.

Formulation of pretest probability is driven by both (1) history and (2) physical exam. Analysis of our primary aim suggests reasonable but imperfect interrater reliability in deciding pretest probability for OSA, on par with interrater reliability for some other medical conditions but lower than for others.22–23 The reliability of our primary aim stands in contrast to the substantial agreement in one of our historical elements (witnessed apneas) and the poor or unclear agreement in the physical exam findings (modified Mallampati class, overjet distance, tonsil size).
These results suggest that uncertainty introduced by the physical exam may have tempered the clinical picture generated by history.

In our study, once participants had home sleep testing, there was excellent agreement in determining the severity of sleep apnea based on HSAT. This result supports a similar finding in a veteran population and suggests that a telemedicine provider is unlikely to miss sleep apnea on HSAT once appropriate patients are identified.17

The results suggest both challenges and a way forward in developing evaluation and management plans via telemedicine. To minimize uncertainty introduced by the telemedicine assessment, a standardized, protocol-driven approach with predictive survey tools (ie, STOP-Bang, Berlin Questionnaire) could help stratify patients by risk.4–7 Although these tools were not employed in this study, they have value in predicting sleep apnea, can be administered remotely, and might increase the accuracy of telemedicine evaluations. More recently, structured interviews to assess for a wide variety of sleep disorders have been developed.24–25 These too could play a role in telemedicine assessment, although they are more time intensive than the aforementioned sleep-apnea-focused questionnaires.

Consideration should also be given to optimizing the physical examination portion of the remote assessment. There is renewed interest in the role of the physical exam in sleep medicine and in how best to unify descriptions of the airway.26 For telemedicine, additional considerations might include better lighting, higher-resolution cameras, or the use of in-person patient presenters. New technologies will certainly play a role in improving the remote physical exam.27 Further, telemedicine evaluation opens unconventional avenues for a clinician, not least of which is the "physical examination" of the patient's sleep environment.
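As an illustration of how a standardized tool such as STOP-Bang could be scored during a remote encounter, the following sketch counts positive items and applies the commonly published risk cutoffs. The item names and helper functions are hypothetical conveniences, not part of this study's protocol; the eight items and thresholds follow the widely used STOP-Bang questionnaire (Chung et al.).

```python
# Illustrative sketch of remote STOP-Bang risk stratification (hypothetical
# field names; item definitions and cutoffs per the published questionnaire).
STOP_BANG_ITEMS = (
    "snoring_loudly",       # S: snoring loud enough to be heard through a door
    "daytime_tiredness",    # T: frequent tiredness, fatigue, or sleepiness
    "observed_apneas",      # O: breathing pauses observed during sleep
    "high_blood_pressure",  # P: treated or untreated hypertension
    "bmi_over_35",          # B: body mass index > 35 kg/m^2
    "age_over_50",          # A: age > 50 years
    "neck_over_40cm",       # N: neck circumference > 40 cm
    "male_sex",             # G: male
)

def stop_bang_score(answers):
    """Number of positive STOP-Bang items (0-8)."""
    return sum(bool(answers.get(item)) for item in STOP_BANG_ITEMS)

def osa_risk(score):
    """Commonly used stratification: 0-2 low, 3-4 intermediate, 5-8 high."""
    if score <= 2:
        return "low"
    return "intermediate" if score <= 4 else "high"

# Example telemedicine intake for a hypothetical patient.
patient = {"snoring_loudly": True, "daytime_tiredness": True,
           "observed_apneas": False, "high_blood_pressure": True,
           "bmi_over_35": False, "age_over_50": True,
           "neck_over_40cm": False, "male_sex": True}
score = stop_bang_score(patient)
print(score, osa_risk(score))  # 5 positive items -> "high"
```

Because every item is either patient-reported or obtainable from self-measurement, such a score could be collected before or during a video visit without any physical examination.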
An evaluator may be able to glean information via telemedicine that he or she could not directly appreciate in the office (ie, an easy chair in the bedroom, continuous positive airway pressure equipment at the bedside, etc). This capability was not directly studied in this trial.

There were several limitations to our study. The most significant was the absence of an in-person-to-in-person or telemedicine-to-telemedicine comparison for interrater reliability. The published literature includes some limited examples of in-person interrater reliability for sleep disorders. One study, using a structured interview template, showed a kappa coefficient of 0.73 for obstructive sleep apnea, substantially better than our weighted kappa of 0.414.24 Another study, using a different structured interview template, assessed kappa as a secondary outcome measure and had a result closer to ours (kappa for obstructive sleep apnea = 0.38).25 Still, these studies employed designs different from the present study, so a direct comparison is imperfect.

Likewise, an interrater reliability study could have been designed with delayed repeated encounters using a single rater, or with timely assessments between 2 raters, with both assessments employing telemedicine.28 Given the practical concerns of scheduling a patient for 2 different assessments before sleep testing, a design that employed 2 different raters across 2 different settings (in-person vs telemedicine) was used. This type of mixed methodology has been used by other telemedicine interrater reliability studies.22–23,29–30 We imagine the introduction of an extra variable (2 different raters in 2 different settings) negatively impacted our kappa coefficients, and we might expect that our values would have been even stronger if both raters had utilized the telemedicine platform. Other limitations include individual evaluator practices.
Temporal dispersion between the 2 evaluations was also a concern; we tried to minimize this by encouraging a 5-business-day timeframe. We also had a sample size smaller than anticipated due to a relatively high number of patients who were lost to follow-up. The "no shows" for the second visit were higher than expected, despite confirmation of adequate technology, multiple attempts to schedule the teleassessment by phone and email, and a small financial incentive. Perhaps a post-pandemic, telemedicine-familiar mindset, or a higher financial incentive, could have aided retention. We do not feel that this no-show rate is necessarily reflective of missed telemedicine appointments in clinical practice. In addition, although our raters received standardized case training for the protocol, this training did not include video review of the airway. This shortcoming may have impacted our raters' impressions. Due to study design limitations, the in-person evaluation always predated the telemedicine evaluation, potentially introducing patient bias into the telemedicine assessment. Also, the number of participants in this study was too small to determine with confidence whether 2 of the raters were more closely aligned than the other 2 pairings. Last, the population recruited for this study was mostly White and of higher socioeconomic status, although the impact of this demographic is uncertain.

The present study demonstrated a promising signal in determining the accuracy of telemedicine encounters for OSA. Ultimately, outcome and cost-analysis studies are needed to determine the utility of this promising technology.

DISCLOSURE STATEMENT

All authors have seen and approved the manuscript. Work for this study was performed at the UR Sleep Center of the University of Rochester in Rochester, NY. The study was funded by a grant from the American Academy of Sleep Medicine Foundation (AASM Foundation grant #163-FP-17). Drs. Yurcheshen, C. Marcus, J. Marcus, and Messing received financial support from this grant. Dr. Yurcheshen has served as a clinical trials consultant for Jazz Pharmaceuticals and Harmony Biosciences; none of these consulting activities involve the subject matter of this present study. Dr. Pigeon has been a subinvestigator on observational trials funded by Pfizer, Inc., and by AbbVie, Inc., that are unrelated to this manuscript. He is an employee of the US Department of Veterans Affairs (VA); the views or opinions expressed herein do not necessarily represent those of the VA or the US government. Drs. C. Marcus, J. Marcus, Marsella, Messing, and Nguyen report no conflicts of interest.

ABBREVIATIONS

HSAT: home sleep apnea test
OSA: obstructive sleep apnea
SE: standard error

REFERENCES

1. Young T, Palta M, Dempsey J, Skatrud J, Weber S, Badr S. The occurrence of sleep-disordered breathing among middle-aged adults. N Engl J Med. 1993;328(17):1230–1235. https://doi.org/10.1056/NEJM199304293281704
2. Frost & Sullivan. Hidden Health Crisis Costing America Billions: Underdiagnosing and Undertreating Obstructive Sleep Apnea Draining Healthcare System. Darien, IL: American Academy of Sleep Medicine; 2016. https://aasm.org/advocacy/initiatives/economic-impact-obstructive-sleep-apnea/. Accessed April 5, 2021.
3. Peppard PE, Young T, Barnet JH, et al. Increased prevalence of sleep-disordered breathing in adults. Am J Epidemiol. 2013;177(9):1006–1014. https://doi.org/10.1093/aje/kws342
4. Vana KD, Silva GE, Goldberg R. Predictive abilities of the STOP-Bang and Epworth Sleepiness Scale in identifying sleep clinic patients at high risk for obstructive sleep apnea. Res Nurs Health. 2013;36(1):84–94. https://doi.org/10.1002/nur.21512
5. Netzer NC, Stoohs RA, Netzer CM, Clark K, Strohl KP. Using the Berlin Questionnaire to identify patients at risk for the sleep apnea syndrome. Ann Intern Med. 1999;131(7):485–491.
https://doi.org/10.7326/0003-4819-131-7-199910050-00002
6. Fenton ME, Heathcote K, Bryce R, et al. The utility of the elbow sign in the diagnosis of OSA. Chest. 2014;145(3):518–524. https://doi.org/10.1378/chest.13-1046
7. Prasad KT, Sehgal IS, Agarwal R, Nath Aggarwal A, Behera D, Dhooria S. Assessing the likelihood of obstructive sleep apnea: a comparison of nine screening questionnaires. Sleep Breath. 2017;21(4):909–917. https://doi.org/10.1007/s11325-017-1495-4
8. Singh J, Badr MS, Diebert W, et al. American Academy of Sleep Medicine (AASM) position paper for the use of telemedicine for the diagnosis and treatment of sleep disorders. J Clin Sleep Med. 2015;11(10):1187–1198. https://doi.org/10.5664/jcsm.5098
9. Bowman RJ, Kennedy C, Kirwan JF, Sze P, Murdoch IE. Reliability of telemedicine for diagnosing and managing eye problems in accident and emergency departments. Eye (Lond). 2003;17(6):743–746. https://doi.org/10.1038/sj.eye.6700489
10. Siew L, Hsiao A, McCarthy P, Agarwal A, Lee E, Chen L. Reliability of telemedicine in the assessment of seriously ill children. Pediatrics. 2016;137(3):e20150712. https://doi.org/10.1542/peds.2015-0712
11. Dorsey ER, Venkataraman V, Grana MJ, et al. Randomized controlled clinical trial of "virtual house calls" for Parkinson disease. JAMA Neurol. 2013;70(5):565–570. https://doi.org/10.1001/jamaneurol.2013.123
12. Venkataraman V, Donohue SJ, Biglan KM, Wicks P, Dorsey ER. Virtual visits for Parkinson disease: a case series. Neurol Clin Pract. 2014;4(2):146–152. https://doi.org/10.1212/01.CPJ.0000437937.63347.5a
13. Schneider R, Dorsey ER, Biglan K. Telemedicine care for nursing home residents with Parkinsonism. J Am Geriatr Soc. 2016;64(1):218–220. https://doi.org/10.1111/jgs.13909
14. Kelly JM, Schwamm LH, Bianchi MT. Sleep telemedicine: a survey study of patient preferences. ISRN Neurol. 2012;2012:135329. https://doi.org/10.5402/2012/135329
15. Parikh R, Touvelle MN, Wang H, Zallek SN. Sleep telemedicine: patient satisfaction and treatment adherence. Telemed J E Health. 2011;17(8):609–614. https://doi.org/10.1089/tmj.2011.0025
16. Coma-Del-Corral MJ, Alonso-Alvarez ML, Allende M, et al. Reliability of telemedicine in the diagnosis and treatment of sleep apnea syndrome. Telemed J E Health. 2013;19(1):7–12. https://doi.org/10.1089/tmj.2012.0007
17. Fields BG, Behari PP, McCloskey S, et al. Remote ambulatory management of veterans with obstructive sleep apnea. Sleep. 2016;39(3):501–509. https://doi.org/10.5665/sleep.5514
18. Zia S, Fields BG. Sleep telemedicine: an emerging field's latest frontier. Chest. 2016;149(6):1556–1565. https://doi.org/10.1016/j.chest.2016.02.670
19. Rosen IM, Kirsch DB, Chervin RD, et al.; American Academy of Sleep Medicine Board of Directors. Clinical use of a home sleep apnea test: an American Academy of Sleep Medicine position statement. J Clin Sleep Med. 2017;13(10):1205–1207. https://doi.org/10.5664/jcsm.6774
20. Berry RB, Quan SF, Abreu AR, et al.; for the American Academy of Sleep Medicine. The AASM Manual for the Scoring of Sleep and Associated Events: Rules, Terminology and Technical Specifications. Version 2.6. Darien, IL: American Academy of Sleep Medicine; 2020.
21. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33(1):159–174. https://doi.org/10.2307/2529310
22. Awadallah M, Janssen F, Körber B, Breuer L, Scibor M, Handschu R. Telemedicine in general neurology: interrater reliability of clinical neurological examination via audio-visual telemedicine. Eur Neurol. 2018;80:289–294. https://doi.org/10.1159/000497157
23. Handschu R, Littmann R, Reulbach U, et al. Telemedicine in emergency evaluation of acute stroke: interrater agreement in remote video examination with a novel multimedia system. Stroke. 2003;34(12):2842–2846. https://doi.org/10.1161/01.STR.0000102043.70312.E9
24. Taylor DJ, Wilkerson AK, Pruiksma KE, et al.; STRONG STAR Consortium. Reliability of the structured clinical interview for DSM-5 sleep disorders module. J Clin Sleep Med. 2018;14(3):459–464. https://doi.org/10.5664/jcsm.7000
25. Merikangas KR, Zhang J, Emsellem H, et al. The structured diagnostic interview for sleep patterns and disorders: rationale and initial evaluation. Sleep Med. 2014;15(5):530–535. https://doi.org/10.1016/j.sleep.2013.10.011
26. Yu JL, Rosen I. Utility of the modified Mallampati grade and Friedman tongue position in the assessment of obstructive sleep apnea. J Clin Sleep Med. 2020;16(2):303–308. https://doi.org/10.5664/jcsm.8188
27. Teslong. NTE390W/430W Wi-Fi Digital Otoscope for iPhone/iPad/Android. https://www.teslong.com/Ear-Otoscope/WiFi-Ear-Otoscope. Accessed April 5, 2021.
28. Russell TG, Martin-Khan M, Khan A, Wade V. Method-comparison studies in telehealth: study design and analysis considerations. J Telemed Telecare. 2017;23(9):797–802. https://doi.org/10.1177/1357633X17727772
29. Cabana F, Boissy P, Tousignant M, Moffet H, Corriveau H, Dumais R. Interrater agreement between telerehabilitation and face-to-face clinical outcome measurements for total knee arthroplasty. Telemed J E Health. 2010;16(3):293–298. https://doi.org/10.1089/tmj.2009.0106
30. Basaran A, Ozlu O, Das K. Telemedicine in burn patients: reliability and patient preference. Burns. 2020.
https://doi.org/10.1016/j.burns.2020.11.015

Volume 17, Issue 7, July 1, 2021. ISSN (print): 1550-9389; ISSN (online): 1550-9397.
Submitted for publication November 4, 2020; submitted in final revised form February 19, 2021; accepted for publication February 19, 2021; published online July 1, 2021.
© 2021 American Academy of Sleep Medicine.
Keywords: sleep-disordered breathing, clinical study, telehealth, interrater reliability, telemedicine, obstructive sleep apnea.
