Editorial Open Access Peer Reviewed

Machine Learning and Artificial Intelligence in Neurosurgery: Status, Prospects, and Challenges

2021; Lippincott Williams & Wilkins; Volume: 89; Issue: 2; Language: English

10.1093/neuros/nyab170

ISSN

1524-4040

Authors

T. Forcht Dagi, Fred G. Barker, Jacob Glass

Topic(s)

Anatomy and Medical Technology

Abstract

INTRODUCTION AND OVERVIEW: THE ROAD TO ARTIFICIAL INTELLIGENCE

"Create a model that is as sophisticated as the problem requires – but not more so." (Craig MacDonald1)

The purpose of this article is to introduce artificial intelligence (AI), machine learning (ML), and related technologies to neurosurgeons, to review their current status, and to comment on the trajectory for their incorporation into neurosurgery. On the order of 25 studies explicitly utilizing AI technologies have been published in the neurosurgical literature to date. This is only the beginning.2-6

The conceptual origins of these technologies can be traced to ancient legendary androids, humanoid automatons, and mechanical imitations of animals, some of which were endowed with a form of intelligence and even will. The history leading to AI is summarized in Table 1. Mechanical calculators have been deployed throughout recorded history. Charles Babbage, a Cambridge mathematician and inventor, conceived and designed the first computers between 1833 and 1871 to address mathematical calculation errors, but they were never successfully built. In fact, his efforts were later described as a "false dawn."7 A number of other computing machines were created prior to World War II and proved essential to the war effort.8 One problem that had to be resolved before computers could be commercialized was command and data storage. Another was cost. In 1950, Engineering Research Associates introduced magnetic drum storage, and in 1952 both UNIVAC and IBM introduced magnetic tape.9 Both advances were seminally important.

In 1950, A.M. Turing published a landmark paper exploring whether computers might be construed as thinking, and he proposed a test to determine whether they could.10,11 A wide-reaching debate ensued about whether he was right and whether, in any case, machine intelligence (as it was then called, and the term the British retained in preference to AI) was worth pursuing at all.

The introduction of the term "artificial intelligence" is ascribed to John McCarthy of Dartmouth, Marvin Minsky of Harvard, Nathaniel Rochester of IBM, and Claude Shannon of Bell Telephone Laboratories, who submitted a proposal for a "2 month, 10 man" workshop in 1955. When the meeting took place at Dartmouth in the summer of 1956, Allen Newell, Cliff Shaw, and Herbert Simon presented Logic Theorist, a program funded by the Research and Development (RAND) Corporation and designed to mimic human problem-solving skills. This served as proof of concept for AI and is generally acknowledged to have launched the field. Research programs emerged at the Massachusetts Institute of Technology, Stanford, Carnegie Mellon University, and other institutions. Many were supported by the Defense Advanced Research Projects Agency in the US Department of Defense.12

As the field evolved, so did debates about whether AI should pursue the simulation of human reasoning or whether the simulation of human performance sufficed. Those interested in simulating human reasoning were called neats, and those focused on performance were known as scruffies.13,14 By the late 1960s, researchers began applying AI-based tools to the natural and physical sciences. This work crossed multiple scientific disciplines.
One early collaboration involved the geneticist and Nobel Laureate Joshua Lederberg, the computer scientist and Turing Award Laureate Edward Feigenbaum, the chemist Carl Djerassi (recipient of the National Medals of Science and Technology and a developer of the oral contraceptive pill), and the mathematician, computer scientist, and philosopher Bruce Buchanan. The project was DENDRAL, commissioned by the National Aeronautics and Space Administration to encode the knowledge of expert chemists in order to infer the structures of organic compounds from mass-spectral data.15 DENDRAL was significant not only for the huge volume of data processed, but also for its emphasis on knowledge engineering (KE), a complementary field that became central to AI. KE included knowledge acquisition, validation, and representation (mapping and encoding into a knowledge base); inferencing (inferring answers for the user from the knowledge stored); and explaining what information was needed or how conclusions were reached.16

TABLE 1. Milestones Along the Path From Robots to AI in Medicine
Pre-1946: Automatic machines and calculating devices, but not AI; wondrous ancient automata described
1920s: The word "robot" replaces the word "automaton"
1928: Eric, a battery-powered, aluminum-skinned robot with 11 electromagnets and a motor that could move its hands and head and be controlled remotely or by voice, presented at the Model Engineer's Society in London
1930s: Industrial robots introduced in the United States
1939: Elektro, a 7-foot-tall, walking, talking, voice-controlled, humanoid robot weighing 120 kg, presented at the World's Fair; it could smoke, speak 700 words, and move its head and arms
1949: Manchester Mark 1, the first stored-program computer, installed; named "The Electronic Brain"
1950: Alan Turing asks "Can machines think?"
1955: Logic Theorist, the first AI program, presented and funded by the RAND Corporation
1956: Dartmouth Summer Research Project on Artificial Intelligence
1963: DARPA funds AI research at the Massachusetts Institute of Technology
1965: Edward Feigenbaum introduces expert systems at Stanford (the Heuristic Programming Project)
1968: The famed science fiction writer Arthur C. Clarke predicts that by 2001, machines will be smarter than humans
1970s: Automated, computer-assisted EKG readings
1973: Image analysis of digitized retinal angiography
1973: Expert system assistance for renal disease
1978: Minsky and others predict no more than 3 to 8 years before human intelligence is surpassed by computers
1978: CASNET introduced for expert system computer-assisted diagnosis of glaucoma
1981: The PC is introduced with the PC DOS operating system
1980s: Early investigation of machine vision adaptations to medical image analysis
1983: Two expert medical systems, Internist-I and Caduceus, introduced
1988: Computer-assisted resection of subcortical lesions
1988: Automated computer-assisted detection of peripheral lung lesions
1990: Human Genome Project begins
1997: An IBM computer defeats Garry Kasparov in chess
1997: Dragon Systems introduces the first public speech recognition system
1998: ImageChecker computer-assisted diagnostic system for mammography introduced
2000: Proliferation of cheap storage and increasing computer power
2000: Introduction of DL for medical applications
2004: Early reports of computer-assisted diagnosis of retinal disease
2007: IBM Watson introduced
2010: Passage of the Patient Protection and Affordable Care Act; EMRs proliferate
2010: Computer-assisted diagnosis in endoscopy
2011: Digital assistants introduced commercially
2012: Computer-assisted segmentation of sectional brain images
2012: Computer-assisted brain tumor grading
2017: Chatbots introduced for patient intake
2018: AI trials for gastroenterology diagnosis begin
2018: FDA approves Viz.AI, an AI-assisted clinical decision support system for stroke triage
2020: Stacked neural networks applied to EKG interpretation
EKG, electrocardiogram; EMRs, electronic medical records.

Public and corporate interest in intelligent computing systems expanded during the 1980s as access to computers proliferated. AI was expected to revolutionize society. Health care was a principal focus of these expectations.17-23 Experts bickered about whether medical informatics ought to be considered a field of computer science, engineering, or biomedicine. The intellectual identity of, and control over, the field mattered more than might otherwise be imagined. Edward Shortliffe, editor of the Journal of Biomedical Informatics and an early and influential contributor to AI, curtailed the debate by proposing that AI be classified as a form of biomedical informatics. This was the view that prevailed, and it allowed for effective cross-disciplinary collaboration.14,24,25

AI nonetheless suffered from inadequate computer storage and inadequate computing power. Faith in the ability of AI to deliver solutions diminished, and AI was widely perceived to have failed. A period characterized as the "AI winter" descended. The field of AI became notably self-reflective, perhaps because of the sociological and ethical implications of reproducing human intelligence.14,26-28

AI was revived by the advent of the Human Genome Project in 1990. The Genome Project was accompanied by a mandate for tools to handle an unprecedented eruption of data.29 The demands of an enormous data flow reinvigorated interest in data analytics and catalyzed the development of AI-related technologies.30-33

In 1997, scientists at the National Aeronautics and Space Administration described the problem of datasets too large to be stored in a computer's main memory, a restriction that severely limited the extent of data processing possible.34 The scientists introduced the term "big data" to describe datasets of this magnitude. In 2001, Douglas B. Laney of the Gartner Group enlarged on this concept, characterizing big data as "high-volume, high-velocity and/or high-variety information assets that demand… innovative forms of… processing…" Volume, velocity, and variety came to be known as the 3 "Vs," alluding to the amount of low-density, unstructured data (volume); the rate at which data must be processed (velocity); and the diversity of data encountered (variety).35 Three more "Vs" were added later: value, variability, and veracity.36,37 The challenge of confronting big data is balanced by the promise of unearthing novel insights that are not otherwise accessible.

DEFINITIONS AND TECHNOLOGIES

The terms data mining (DM), algorithm, AI, ML, deep learning (DL), neural networks, and expert systems are defined and described in Table 2.38-42 The terms may sometimes overlap or be defined with slight variation.
TABLE 2. Definitions of Essential Terms
Artificial intelligence (AI): Describes the ability of computer systems to perform tasks that would otherwise normally require human intelligence.
Algorithm: A set of rules or processes to be followed in making calculations or in other problem-solving operations.
Data mining (DM): A field of computer science focused on the properties of datasets. It can extract rules for algorithms, for example, from data. A prerequisite for other forms of data processing.
Machine learning (ML): A subset of AI that allows systems to create algorithms capable of modifying themselves by reading structured data without human intervention after being trained, and of improving from experience without being programmed explicitly. Almost always requires structured data. If outputs are wrong, the algorithms need to be retrained by humans.
Deep learning (DL): A subset of ML that uses neural networks and multiple layers of algorithms to reiterate a task and learn progressively in order to gradually improve outcomes. Depends on adequate but not necessarily structured data. Mimics human learning more closely. May still produce flawed outputs if the quality of the data is insufficient.
Neural network (NN): Also known as an artificial neural network; describes a series of algorithms that aim to recognize underlying relationships in a set of data through processes that mimic the way the human brain operates. The algorithms adapt appropriately to changing inputs. Sometimes construed as the next evolutionary stage of ML.
Expert system: Software programmed using AI techniques to offer advice or make decisions in areas such as medical diagnosis, where the judgment of human experts is emulated.

ML constitutes a subfield of AI. DL is sometimes classified as a subfield of ML (and therefore of AI) and sometimes of neural networks. Neural networks are the basis for DL algorithms. The "deep" in "deep learning" refers to the depth of the node layers in a neural network. A node is the equivalent of a neuron, with multiple dendrites providing inputs for processing (translation) and an axon transmitting the output. A DL algorithm by definition must have at least 3 layers of nodes.43,44 Effective approaches may draw from AI, ML, DL, or a combination, depending on the specific application.

We adopt the term "Intelligent Computing Systems" (ICS) to refer to AI, ML, DL, neural networks, and expert systems as a group, but refer to each technology by name when discussing it individually. While the earliest clinical applications of ICS revolved around expert systems for decision support, their ambit has broadened as technologies have evolved.17-23 The number of publications invoking medical ICS burgeoned from 596 in 2010 to 12 422 in 2019.23 About 51% of all papers in the field involve or invoke DM (not ICS, strictly speaking, but closely related) and ML.28,45-50

DM AND KNOWLEDGE EXTRACTION

DM is defined formally as a field of computer science focused on the properties of datasets, enabling the examination of large datasets to elicit correlations and patterns that may not be evident otherwise.
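In practical terms, DM of this kind often begins as exploratory analysis of a tabular dataset. The following minimal sketch is illustrative only: it assumes the pandas library, a hypothetical registry extract, and invented column names, and it is not drawn from the studies cited in this review.

```python
# A minimal sketch of data mining in the sense defined above: scanning a
# tabular dataset for correlations and patterns that may not be evident
# on inspection. The file name and column names are hypothetical.
import pandas as pd

# Load a (hypothetical) registry extract of neurosurgical cases, with columns
# such as age, tumor_volume_cc, blood_loss_ml, los_days, readmit_30d.
df = pd.read_csv("tumor_registry.csv")

# 1. Pairwise correlations among numeric variables: a first pass at
#    "eliciting correlations" from a large dataset.
numeric = df.select_dtypes("number")
corr = numeric.corr(method="spearman")
print(corr["los_days"].sort_values(ascending=False))  # what tracks with length of stay?

# 2. Simple pattern discovery: group-wise summaries that may expose structure
#    worth modeling later (the output of DM becomes an input for ML).
print(df.groupby("readmit_30d")[["age", "tumor_volume_cc", "blood_loss_ml"]].median())
```

The specific calls matter less than the workflow they illustrate: the dataset is interrogated for structure, and any promising correlation becomes a candidate input for the ML methods discussed below.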
DM has the same practical meaning as "knowledge extraction," a term used to refer to the automated or semiautomated extraction of useful information from structured and unstructured sources.51 Knowledge extraction was trademarked as "Database Mining" by the Hecht-Nielsen Neurocomputer Corporation in San Diego in the 1980s, but similar terms had appeared earlier.52-54 It also became known by an alternative designation, "Knowledge Discovery in Data" (KDD), introduced in 1989 and framed as follows: "The basic problem addressed by the KDD process is one of mapping low-level data (which are typically too voluminous to understand and digest easily) into other forms that might be more compact (for example, a short report), more abstract (for example, a descriptive approximation or model of the process that generated the data), or more useful (for example, a predictive model for estimating the value of future cases). At the core of the process is the application of specific data-mining methods for pattern discovery and extraction."55,56 The term KDD was initially preferred by academics, but it has been largely eclipsed by the later term, "data mining" (see Table 3).57

TABLE 3. Differences Between DM and ML120
1. Scope: DM explores the properties of datasets; large datasets are analyzed to elicit or confirm correlations and patterns of significance that may be useful inherently or applied to the prediction of outcomes or actions. The output of DM is an input for ML. ML is a branch of AI that automatically improves the accuracy of the algorithms on which it depends to analyze inputs, without being explicitly programmed to do so; it depends on DM methods.
2. Methodology: DM is a method of eliciting useful information from complex datasets. ML automatically uses training datasets to improve its complex data-processing algorithms.
3. Uses: DM is a research instrument primarily directed at eliciting information. ML is a tool primarily directed at predicting outcomes.
4. Method: DM is typically deployed to analyze data in batches. ML algorithms typically run continuously; changes in input data patterns can be incorporated without reprogramming or human interference.
5. Nature: DM requires human direction to choose and apply techniques to extract information. ML is designed to proceed automatically.
6. Learning capability: DM is a manual technique insofar as it requires analysis to be initiated by a human being. ML uses the same techniques as DM to learn automatically and to adapt to changes.
7. Implementation: DM involves building models to which particular correlation and pattern evaluation techniques are applied. ML uses AI, neural networks, and automated algorithms to accomplish its objectives.
8. Data needs: Relative to ML, DM can produce results from smaller volumes of data. ML requires large amounts of data presented in a standardized format.

ALGORITHMS AND AI

AI depends on the deployment of algorithmic software routines designed for data analysis and developed on the basis of large sets of structured and curated data. During development, algorithms are "taught" on training datasets and then tested against testing datasets. Algorithm outputs are then assessed, after which the algorithm can be rejected, modified, or adopted. Poor training sets and misleading inputs can lead to misleading results. Algorithms can be standalone or embedded in devices, and either locked or unlocked. Locked algorithms do not change with use and cannot be modified by the user.
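Before turning to unlocked algorithms, the locked/unlocked distinction can be made concrete in code. The sketch below is illustrative only and assumes scikit-learn with synthetic data; it simply contrasts a model that is frozen after development with one that keeps updating itself as new cases arrive (online learning via partial_fit).

```python
# Minimal illustration (not a regulatory definition): a "locked" model is
# trained once and never altered in use; an "unlocked" model keeps adapting.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
# Synthetic development data standing in for a curated training dataset.
X_dev, y_dev = rng.normal(size=(500, 4)), rng.integers(0, 2, 500)

# Locked algorithm: fit during development, then frozen. Deployment only calls predict().
locked_model = SGDClassifier(random_state=0).fit(X_dev, y_dev)

# Unlocked algorithm: the same estimator, but it continues to "learn" from
# each new batch encountered after deployment.
unlocked_model = SGDClassifier(random_state=0)
unlocked_model.partial_fit(X_dev, y_dev, classes=np.array([0, 1]))

for _ in range(10):                              # simulated post-deployment data stream
    X_new, y_new = rng.normal(size=(50, 4)), rng.integers(0, 2, 50)
    locked_model.predict(X_new)                  # behavior fixed at approval time
    unlocked_model.partial_fit(X_new, y_new)     # behavior drifts as new data are processed
```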
Unlocked algorithms may "learn" or "evolve" as new data are processed and may be modified by the user. "Learning" means that the algorithm is capable of self-modification and improvement. "Learning" is an attractive technical accomplishment, but it presents a challenge from the regulatory perspective, because the algorithm first presented for approval may be quite different from the product eventually put into commercial use.

Algorithms for medical use must be tested prior to use by means of protocols called "verification and validation."58 The process of verification and validation is outlined in Table 4. Satisfactory validation and verification do not necessarily equate to clinical utility.

TABLE 4. Software Testing: Verification and Validation121
Process: Verification is static; validation is dynamic.
Purpose: Verification determines whether the software meets design requirements and finds bugs early in development; validation determines whether the final software product actually meets the intended needs and desired use(s) in the appropriate context.
Examines: Verification examines the application and software architecture, specifications, code, programming, and the accompanying database; validation comprises system testing and user acceptance testing, with additional debugging and nonfunctional testing.
Focuses on: Verification focuses on software design and architecture, code, and programming; validation focuses on the final product.
Does not focus on: Verification does not focus on the final product; validation does not focus on software specifications and design, completeness, code, and programming.
Executes software: Not essential for verification; essential for validation.
Reports on: Verification reports on the completeness and integrity of the design; validation reports on functionality, including black box testing (testing a system with no prior knowledge of its inner workings) and white box testing (testing the inner workings of the system).
Sequence: Verification always leads; validation always follows.

Most applications of AI in medicine use either numerical or image-based data as inputs. The AI algorithms are based on models that process the data to produce either a numerical expression (often a probability) or a classification. Thus, an algorithm might be designed to express the likelihood that a group of 30-μm calcified spicules on a mammogram is consistent with a carcinoma, whether the mammogram is abnormal, or both. Calculated likelihoods reflect a form of probabilistic thinking that evolved in 19th-century Britain and became very important in the management of clinical uncertainty. In many respects, calculated likelihoods are the basis for what is commonly referred to as clinical judgment.59-61 In practice, the performance of AI on a diagnostic task is commonly tested against a physician's judgment under like circumstances. This test of equivalence determines its clinical validity, its clinical "capability," its clinical value, and, therefore, its utility. Calculated likelihoods are an essential aspect of predictive analytics.

ML

ML is a branch of AI that develops self-learning and self-improving algorithms. The term "machine learning" was first used by Arthur Samuel in 1959 to describe algorithms offering "computers the ability to learn without being explicitly programmed." His computers learned to play checkers.62,63 ML is the basis for many familiar advances, including digital assistants, spam detectors, image analysis, and self-driving cars.64 While ML may appear to be a desirable attribute of all algorithms, it is not always optimal for commercial devices, as noted earlier. ML does not guarantee correct inferences or conclusions.

DL

DL is a subset of ML. It has shown great promise in fields ranging from genomic alignments to protein folding and voice recognition.
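The sketch below ties these ideas together. It is illustrative only, uses synthetic data rather than any clinical dataset, and assumes scikit-learn: a small multilayer ("deep") network is taught on a training dataset, assessed against a held-out testing dataset, and asked for a calculated likelihood for a new case, mirroring the train-test-assess cycle and probability outputs described above.

```python
# Illustrative sketch only: a small "deep" model (input layer, two hidden
# layers, output layer) trained and tested as described above. Synthetic data
# stand in for curated clinical data; nothing here is a validated medical device.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Synthetic, structured dataset: 1000 "cases" with 20 numeric features each.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Training dataset for "teaching"; testing dataset for assessment.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# Two hidden layers of nodes between input and output: the "depth" in deep learning.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=42)
model.fit(X_train, y_train)

# Assessment on the held-out testing dataset; on this basis the algorithm
# could be rejected, modified, or adopted.
test_probabilities = model.predict_proba(X_test)[:, 1]
print(f"Test AUC: {roc_auc_score(y_test, test_probabilities):.2f}")

# The output for one new case is a calculated likelihood, not a certainty.
new_case = X_test[:1]
print(f"Estimated probability of the positive class: {model.predict_proba(new_case)[0, 1]:.2f}")
```

In a regulated setting, a loop of this kind would sit inside the verification and validation process outlined in Table 4, and the reported probability would still require clinical interpretation.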
Many observers believe that DL is the next advance in ML and that it will have an important role in medicine.46 Thus far, however, the applications of DL have been early-stage and largely translational.65-68 Like AI and other forms of ML, DL is a black box whose internal logic may be elusive.

INTELLECTUAL PROPERTY PROTECTION

The software and algorithms in ICS may be protected by patents, copyrights, trade secrets, trademarks, and service marks. An overview of the intellectual property protection available for software in the US is provided in Table 5.69-71 The options available for software protection within the European Union are nearly identical, differing mainly in the details.72

TABLE 5. Types of Intellectual Property Protection

Patent
Term (USA): 20 years from the filing date of the earliest US or international (PCT) application to which priority is claimed (excluding provisional applications).
Disclosure: Yes; sufficient to enable or teach an expert, or an individual "schooled in the art," to replicate the invention.
Detail: Protects an invention. The invention must be actually or constructively reduced to practice.
Conditions: Must be patentable (ie, invented and not discovered), novel, nonobvious, useful, and reduced to practice rather than simply conceptual. Any new or useful process, machine, manufacture, or composition of matter, or any new and useful improvement, is patentable. A discovery (as opposed to an invention) may not be patented; nonengineered bacteria and other discoveries found in nature cannot be patented.
Notes: Some software, particularly software used with or in medical devices, may qualify for both patent and copyright protection.

Copyright
Term (USA): Lifetime of the author plus 70 years.
Disclosure: The work must have achieved expression.
Detail: Protects an original or literary work of authorship.
Conditions: Must be reduced to a fixed and tangible state; must be original; must constitute an expression and not just an idea. Covered works include music scores and recordings, computer software, writings, visual art, choreography, movies and photographs, and recent architecture.
Notes: The case around software is complicated and may differ in the European Union from the US. "Originality," the "fair use" doctrine, and the "idea-expression distinction" can become controversial, and the "fair use" doctrine can complicate copyright protection.

Trademark or service mark
Term (USA): Unending, but the mark must be provably in use.
Disclosure: None other than the mark itself, although details about the entity it is meant to protect will be required.
Detail: Protects any word, name, symbol, device, or any combination thereof, used or intended to be used to identify and distinguish the goods or services of one seller or provider from those of others, and to indicate the source of the goods or services, brand names, and logos on goods and services.
Conditions: Intended for brand recognition.
Notes: Although not absolutely required, federal registration is advisable in the US for US-based trademark and service mark holders. The trademark is not enforced or policed by any agency.

PCT, Patent Cooperation Treaty.

CLINICAL APPLICATIONS OF ICS

Many studies have underscored the quality and value of ICS in the curation and interpretation of large datasets and in clinical decision support.
AI has been successfully deployed for applications ranging from image interpretation to cytology, the management of blood glucose levels, diabetic retinopathy and glaucoma screening, rare syndrome diagnosis in pediatric neurology, decision support in oncology and emergency medicine, and clinical trials.31,73-82 The potential to improve outcomes and reduce costs is widely heralded. The National Academy of Medicine points to imaging and signal detection as the most advanced applications.23 The use of AI in personalized medicine and health-care policy is also increasingly pursued.83-85

REGULATION

The introduction of ICS into medical devices has been slower than anticipated. A review published in 2020 reported that the ML- and AI-based devices granted premarketing approval by the US Food and Drug Administration (FDA) included only 30 (46.9%) radiological devices, 16 (25.0%) cardiological devices, and 10 (15.6%) internal medicine devices. No explicitly neurosurgical devices were approved.86 At least part of the problem is associated with the complexity and ambiguity of the FDA process. The FDA is charged with regulating software that functions as a medical device, that may be deployed on mobile platforms or other general-purpose computing platforms, or that has a role in the function or control of a hardware device.87 Most ICS intended for commercial distribution will be subject to regulation by the FDA. Other health-care software programs, including those intended for research purposes, may be regulated differently, or even not at all.88

Unlike medical device manufacturers, software developers are not accustomed to a rigorous external regulatory review process. They prefer to introduce new products in phases reflecting the software release life cycle. Alpha products are in the early testing stage. Later-stage beta products may have the form of a finished product but remain filled with "bugs," or errors. Subsequent numbered releases, called "versions," are still not guaranteed to be error-free even after they are launched commercially. This approach differs from that pursued by the medical device industry, where errors and bugs are taken much more seriously and commercial distribution is delayed until foreseeable problems are resolved.

Source code, the instructions written by a programmer in a computer programming language, is a critical component of some computer programs. It can be read by human beings before being compiled into object code in machine language, which is readable only by the computer. Source code can be held proprietary to protect the program and prevent modifications. Proprietary source code is generally locked and licensed with provisions that enjoin the user from attempting to discover or modify it. Open-source, unlocked software, in contrast, is available under a public license and is designed specifically to allow developers to collaborate to improve it. Software developers are highly concerned about the ease with which their products may be pirated or corrupted if source code is published, and they widely believe that the health-care regulatory establishment is insufficiently sensitive to that vulnerability.

Ambiguity in the regulatory process may be one of the major factors in the slowness of ICS-based medical products to enter clinical use. Whether designed to be standalone or embedded in a medical device, algorithms must generally be deconvoluted to qualify for premarketing approval. This is a complicated process for developers and may be perceived as a threat to intellectual property protection.
To overcome the problems associated with unlocked algorithms and devices incorporating ML, the FDA has announced "a total product lifecycle-based regulatory framework." This framework would allow for modifications to algorithms on the basis of real-world learning and adaptation while still ensuring adherence to accepted safety and effectiveness standards.89 All of the approval pathways offered by the FDA have been pursued by device developers. A total of 55 (85.9%) devices using ML and AI gained approval through the 510(k) pathway, 8 (12.5%) via the de novo pathway, and 1 (1.6%) through the premarket approval (PMA) pathway.86 Some companies have chosen the PMA pathway in order to set a rigorous standard and discourage competitors.

ICS IN NEUROSURGERY – LOOKING TO THE FUTURE

As of 2018, only 23 neurosurgical studies reported using ML or AI.2-5,90 This number may not be fully representative of the research underway, however, because many papers directly relevant to the neurosurgeon, or involving neurosurgeons, appear outside the neurosurgical literature.91-102 The reasons for this relatively modest number may relate to difficulties in sourcing, standardizing, and collating neurosurgical data, as well as to the preferred research paradigms in the field.103

Several clinical applications, including neuronavigation, image processing, 3-dimensional modeling and printing, prosthetic manufacturing, stereotactic radiosurgery, and clinical trials management, have made use of ICS. Nonclinical applications for scheduling, workflow, billing, and practice management are commercially available. Almost all can be sourced from companies that provide turnkey installations.

It is entirely conceivable to have automated websites and chatbots welcome patients and elicit a history; AI and DM applications verify the history against medical records; expert systems support diagnostic tests; advanced processing solutions and modeling systems offer accuracy and precision in neuroimaging; robotic and neuronavigation technologies help in minimally invasive surgery; neuronavigation, machine vision, and image fusion direct the resection of mass lesions and, later, precision radiotherapy; machine vision and expert pathology systems offer histological analysis in vivo, or at least at the point of surgery; anesthesia safety systems and smart devices protect against intraoperative and medical error; smart respirators optimize postoperative ventilation; closed-loop, ML-directed medication delivery devices provide tight glycemic and blood pressure control against designated target values; monitoring instrumentation is connected to the medical record, with intelligent alert systems and full instrumental interoperability; and AI technologies support personalized treatment protocols and clinical trials. Table 6 lists some principal investigational and noninvestigational clinical applications already available, and nonclinical applications under development.104-107
TABLE 6. Clinical and Nonclinical Uses of ICS in Neurosurgery
Clinical applications: automated cytology; frozen section screening; computer-assisted radiological review; image fusion applications; radiosurgical planning; robotics; allergy screening; medication allergy screening; electronic medical records analysis; personalized implants; electrophysiological monitoring; neuro-intensive care decision support; tight glycemic control systems; surgical modeling.
Practice management (nonclinical) applications: trend analysis; clinical trials management; preoperative communication; postoperative follow-up; FAQ systems; informed consent; human resource management; revenue cycle management; quality management systems; chatbots for websites; patient communications; scheduling; workflow optimization; selected writing tasks.
FAQ, frequently asked questions.

ETHICAL ISSUES SURROUNDING ICS

Ethical issues surrounding ICS reflect both philosophical and scientific concerns.108 The major philosophical issues relate to privacy, security, error, and transparency. The potential problems of privacy, security, and error are self-evident. Transparency becomes an issue because the logic of the algorithms used in ICS is not necessarily in evidence; it is currently very difficult to determine why a DL model produces a specific result. Consequently, there are concerns that hidden biases may affect clinical recommendations. Questions have also been raised about the need for informed consent when ICS are deployed.109-111 The full ethical implications of ICS implementation are only beginning to be appreciated. They are easily overlooked or erroneously framed as a form of technical error with technical solutions, rather than as fundamental issues extending beyond technology.112-116 While this topic is beyond the purview of the current review, it is likely to earn increasing scrutiny as ICS proliferate.117-119

CONCLUSION

The role of ICS in neurosurgery has yet to be fully established. Nevertheless, there are several areas in which ICS show particular promise, including big data analytics, decision support, diagnostic image manipulation and interpretation, prosthetics, robotics, clinical trials management, and error prevention. It is important to specify and describe the algorithms that are used. ICS are subject to standards of safety, validity, reproducibility, usability, and reliability in the same way as other medical devices. Ethical concerns surrounding the use of AI in medicine center on privacy, security, transparency, and biases intentionally or unwittingly written into algorithms. These issues, together with a fuller understanding of the errors that may potentially occur, will require ongoing attention as AI becomes more widely deployed.

Funding

This study did not receive any funding or financial support.

Disclosures

The authors have no personal, financial, or institutional interest in any of the drugs, materials, or devices described in this article. Dr Glass received consulting fees from GLG in the past year.
