Using Emerging Technologies such as Virtual Reality and the World Wide Web to Contribute to a Richer Understanding of the Brain
1997; Wiley; Volume: 820; Issue: 1; Language: English
10.1111/j.1749-6632.1997.tb46198.x
ISSN: 1749-6632
Topic(s): Functional Brain Connectivity Studies
Abstract: The way in which people interact with computers (computer interface technology) is undergoing rapid change. Computer input and output initially consisted of punch cards and typed correspondence. Greater spontaneity was afforded by the display of, and interaction with, text on a computer screen. During the past 10 years, the increasing acceptance of the mouse and the Xerox PARC 2D interface has allowed spatial navigation through the computer's operating system, as well as providing greater standardization for software applications. With the advent of three-dimensional hardware acceleration, the computer's interface and applications now allow navigation through, and interaction with, three-dimensional worlds. Furthermore, the explosive growth of the Internet is providing a better means of sharing and distributing information between machines, and ultimately between the virtual environments that reside in those machines. Improvement in computer interface technology has immense implications for neuroscience research. Computers, when used for quantitative and/or qualitative analysis, can aid understanding of the structure and function of the brain. However, we must also be cognizant of the fact that the strengths and weaknesses of the computer as an analytical tool can change the focus of our investigations. Sir William Osler stated, "When your only tool is a hammer, you treat everything as a nail." Because the computer is changing so quickly, recognizing its strengths, knowing its limitations, and identifying areas of rapid advancement will be important. So rather than limiting our investigations, as would happen with the metaphorical hammer, we need to match the computer's capabilities with our investigative and scientific reporting needs.
Neuroscience research involves a range of scientific endeavors: analyzing static histological images, describing electrophysiological events, tracking receptor binding, and now the arena called "functional imaging." Given our increasingly comprehensive understanding of the dynamic complexity of the brain, we require increasingly sophisticated computer hardware and software both to contribute to and to communicate our understanding of the brain. Just as scientific studies are producing massive amounts of data about the structure and function of the brain, we are seeing dramatic increases in the capabilities for the display of, and interaction with, multi-dimensional data on the personal computer. Initially these capabilities were developed for high-end applications in computer-aided design, the military, and flight simulation, but they are now driven by the computer game industry. Virtual reality software and hardware, as well as networking applications deploying 3-D graphics (via the virtual reality modeling language, VRML), are trends that will contribute to a richer understanding of the brain and enable sharing and exploration of conceptual models in ways never before possible. New technology can provide more complete and concise descriptions of neuroscience models, as well as capabilities that can transform the scientist's observations into models that can be explored in new ways by both the researcher and other interested scientists. The technology also offers new opportunities and challenges for collaboration and sharing of information to build increasingly comprehensive models of the brain. Using these technologies, multi-modal data can be displayed simultaneously, enabling different types of data to be merged to create a more comprehensive understanding of the brain. For example, the dynamic processes observed by the electrophysiologist can be combined with the receptor binding studies and histological information obtained by other researchers.
This capability provides an opportunity both for scientific discovery (detecting new patterns and relationships between investigations) and for misinterpretation (observing patterns that are artifacts of the techniques employed). Hence, an understanding of the process by which such synthetic images are created, as well as the continued development of multi-disciplinary standards for data labeling, storage, normalization, and retrieval, becomes increasingly critical. The costs associated with providing educational courses and skill certification by traditional means continue to increase. With the growing popularity of the World Wide Web, we see significant opportunities for online training and reporting of data. Internet-based educational experiences can afford an immediacy and speed of worldwide access to data and interpretations that have never before been possible. But while this capacity for information retrieval has increased, it is also important to maintain the standards of peer review and the editorial functions that traditional publications have deployed; otherwise much misinformation will be promulgated. The World Wide Web can provide benefits beyond the unprecedented decrease in the cost, and increase in the speed, of the distribution of information; the technology can also display information previously unavailable in any format. The growth of Web-related technologies is pushing the development of other new communications technologies faster than ever. For example, three-dimensional interactive graphics have heretofore been available only on high-end workstations, such as those manufactured by Evans & Sutherland, Silicon Graphics, and others. But now we are seeing a more rapid evolution of software technologies on the network than we saw with standalone workstation applications. First, the hypertext markup language (HTML) became a standard for displaying graphics, text, and links on the Internet.
Within the past two years, technologies such as the virtual reality modeling language (VRML) have begun providing standard methods for interacting with three-dimensional representations over the network. Personal computer manufacturers are racing to include three-dimensional graphics capabilities on the desktop as a standard feature (just as audio capabilities are now becoming ubiquitous). Furthermore, efforts are under way to develop dedicated Internet boxes that include advanced 3-D chip sets (developed in the game industry) and are intended to streamline Internet connections with dedicated hardware/software solutions. Bandwidth is also growing with the increasing utilization of digital phone lines by modem manufacturers. In addition, several promising telecommunications technologies may increase our communications capabilities hundreds- or thousands-fold. Current contenders include asymmetric digital subscriber lines (ADSL), which utilize current phone lines to deliver up to eight megabits per second. Because this communication is "asymmetrical," these technologies allow faster communication to your computer than from your computer to the communication hubs. From the cable companies, a technology referred to as "cable modems" promises to provide a similar asymmetric communication service to your PC via the cable companies' coaxial lines. Finally, digital satellite connections to the PC can also allow rapid download of information. With greater bandwidth, the possibility of "telementoring," or teaching scientific methods over networks, comes closer to being a practical enterprise. These increases in technology and bandwidth afford a new paradigm for scientific reporting, training, databases, and conferences. To rephrase Osler, the tools that we have can limit or expand our understanding.
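The practical consequence of an asymmetric link can be seen with a back-of-envelope calculation. The sketch below assumes ADSL's nominal 8 Mbit/s downstream ceiling and an illustrative 640 kbit/s upstream rate (the upstream figure is an assumption for the example, not taken from the text), and ignores protocol overhead and latency.

```python
# Idealized transfer times over an asymmetric link such as ADSL.
# Rates are in bits per second; the downstream figure is ADSL's nominal
# maximum, and the upstream figure is an assumed typical value.

def transfer_seconds(size_bytes, rate_bits_per_s):
    """Idealized transfer time, ignoring overhead and latency."""
    return size_bytes * 8 / rate_bits_per_s

DOWNSTREAM = 8_000_000   # 8 Mbit/s toward the subscriber
UPSTREAM = 640_000       # assumed 640 kbit/s back toward the hub

dataset = 50 * 1024 * 1024  # a hypothetical 50 MB imaging dataset

print(f"download: {transfer_seconds(dataset, DOWNSTREAM):.1f} s")
print(f"upload:   {transfer_seconds(dataset, UPSTREAM):.1f} s")
```

The same dataset that downloads in under a minute takes more than ten minutes to send back, which is why asymmetric services suit consumption of remote data far better than contribution to it.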
The scientific models available for describing the immense number of biochemical, genetic, and environmental interrelationships in the brain are still relatively primitive. One of the most challenging applications of computer science has been the construction of a financial "nervous system." Most of the large supercomputers sold today are used for calculating and verifying financial transactions. When you put your credit card through a gas pump, grocery, or retail store card reader, considerable computing power is required to verify, approve, register, or deny the millions of transactions occurring simultaneously. Financial transactions are driving much computer development, and networks are facilitating unprecedented communication between these servers. I cannot help but think that the growing capabilities of these multiprocessing environments will one day let us simulate a few moments of human nervous system activity. In the recent past, a research laboratory's analytical capabilities were defined in part by the size and caliber of its available computing resources. Now a new model is emerging. As networks become faster and computers become more of a commodity, we see the emergence of computing as a subscription service: resources such as hard drives, memory, and CPU performance can all be "rented" from a provider on the network. Furthermore, there is strong interest in Sun Microsystems' Java programming language, designed from the ground up to support on-demand transmission of the snippets of software needed to enable the widest possible range of customization of software environments. Exchange of information and software tools in the scientific community will rise to new levels. In view of the inherent demands of intricacy, breadth, and constant change that exist in neuroscience research, these advances should be most welcome.
The power of one's investigative tools will depend more and more on the subscription databases and subscription computer resources available to a researcher or team of researchers. Instead of worrying about the cost of a new workstation, researchers will be more concerned with whether a particular query is "computationally expensive," taking up more of the network's resources and presumably costing more. "Agent" technology (software programs that allow an end-user to automatically search databases) will become more and more sophisticated. These agents may be built using Java, and additional ones can be purchased to allow increased flexibility and functionality. Ultimately, these software creations will serve the role of a librarian and research assistant, and may even have some "interpretive skills" to assess the relative significance of a particular abstract in one's field of interest. Multiple agents may be deployed simultaneously to build a highly sophisticated and interwoven computer model. Another area of growth in the computer science arena that will have an astonishing effect on neuroscience research is entertainment. Computer game development is pushing the graphics industry to achieve price/performance ratios that were inconceivable a few years ago. This year, the fruit of a collaboration between Silicon Graphics and Nintendo will be unveiled: a game machine called the "Ultra-64." This machine, which has many of the advanced graphics features previously found only on $100,000-plus visualization workstations, will be available at a price of $250. Because the home game console market represents an industry with more than $6 billion in annual revenue, large research and development budgets are possible in this area. The implications game machines have for neuroscience research are manifold. To successfully develop a virtual environment that is realistic for an end-user, one must have an understanding of sensory physiology.
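The "interpretive skills" envisioned for such agents can be illustrated with a deliberately simple sketch: an agent that ranks abstracts by how many of a researcher's interest terms they mention. The scoring rule, function names, and sample data below are all hypothetical and chosen only for illustration; a working agent would query remote databases and use far richer relevance models.

```python
# A minimal sketch of a literature "agent" that ranks abstracts by
# relevance to a researcher's stated interests. All names and data
# here are illustrative, not a real database API.

def relevance(abstract, interests):
    """Crude relevance score: fraction of interest terms present."""
    text = abstract.lower()
    hits = sum(1 for term in interests if term.lower() in text)
    return hits / len(interests)

def rank_abstracts(abstracts, interests):
    """Return abstracts sorted from most to least relevant."""
    return sorted(abstracts, key=lambda a: relevance(a, interests), reverse=True)

interests = ["receptor binding", "plasticity", "hippocampus"]
abstracts = [
    "A study of synaptic plasticity in the hippocampus.",
    "Receptor binding assays in cortical tissue.",
    "A survey of network bandwidth trends.",
]
for a in rank_abstracts(abstracts, interests):
    print(f"{relevance(a, interests):.2f}  {a}")
```

Even this toy version shows the division of labor the text anticipates: the researcher states interests once, and the agent does the filtering that a librarian or research assistant would otherwise perform.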
The way in which the mind detects motion and perceives detail is critical to the construction of a realistic game architecture. A visualization computer (whether a game machine or a supercomputer) has a finite amount of graphics capability, usually defined by its pixel-fill rate (the rapidity with which the computer can draw on a screen) and its polygons-per-second count (the speed with which the computer can represent three-dimensional geometry on the screen). As a result, many compromises must be made in order to create realistic imaging. Such compromises include representing objects with less detail when they are farther from the viewer or obstructed by other objects, and deploying many other tricks in both visual and behavioral representation to create an environment that can fool the eye. One result of the increased realism of these systems is their use in psychiatry, specifically for treating agoraphobia. As the systems become more and more powerful, it may be possible to construct real-time stimulation/feedback experiments in which the interaction between subtle changes in synthetic environments will be mapped to changes in parameters derived from biophysical studies, and vice versa. The use of computer games to study functional plasticity of the brain has already been attempted, and an example of how computer games are helping us understand the functional plasticity of the brain as a result of experience has recently been reported. In the studies thus far, it has been shown that after training with computer games designed to hone their temporal processing skills for acoustic stimuli, language-learning-impaired children improved their game scores and also raised their performance on standardized tests using normal, unmodified speech. This demonstrates that the processing of sensory inputs can be modified and that these impaired children can be trained through the use of computer games.
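The distance-based compromise described above, drawing distant objects with less detail to stay within a polygon budget, is now conventionally called level-of-detail (LOD) selection. The sketch below shows the idea in miniature; the distance thresholds and polygon counts are assumptions chosen purely for illustration.

```python
# Distance-based level-of-detail (LOD) selection: objects farther from
# the viewer are drawn with fewer polygons so the renderer stays within
# its polygon budget. Thresholds and counts are illustrative.
import math

LOD_LEVELS = [
    (10.0, 5000),   # within 10 units: full-detail mesh (~5000 polygons)
    (50.0, 1000),   # within 50 units: reduced mesh
    (200.0, 100),   # within 200 units: coarse mesh
]
FAR_POLYGONS = 10   # beyond the last threshold: a billboard-like stand-in

def polygons_for(viewer, obj):
    """Pick a polygon budget for one object based on viewer distance."""
    d = math.dist(viewer, obj)
    for max_dist, polys in LOD_LEVELS:
        if d <= max_dist:
            return polys
    return FAR_POLYGONS

viewer = (0.0, 0.0, 0.0)
for obj in [(5, 0, 0), (30, 0, 0), (100, 0, 0), (500, 0, 0)]:
    print(obj, polygons_for(viewer, obj))
```

Combined with occlusion tricks (skipping objects hidden behind others entirely), a scheme like this is what lets a fixed pixel-fill and polygon budget still "fool the eye."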
There will almost certainly be new methods for human–computer interfaces beyond those commonly in use today. And these new interfaces may very well lead to a time in the near future in which double-blind studies will confirm the use of specialized games as accepted treatments for a wide range of neurological disorders.