The Internet
1998; Lippincott Williams & Wilkins; Volume: 89; Issue: 4; Language: English
10.1097/00000542-199810000-00024
ISSN 1528-1175
Topic(s): Health Sciences Research and Education
Abstract

(Ruskin) Associate Professor of Anesthesiology.

Dennis M. Fisher, M.D., Editor

This article is accompanied by an Editorial View. Please see: Eisenach JC, Todd MM: The Internet: Where do we want to go tomorrow? Anesthesiology 1998; 89:817-9.

This article appears in full text with live hypertext links on the Anesthesiology Web Site. Go to the following address, and then scroll down to find the title link for this article: http://www.anesthesiology.org/tocs/v89n4-TOC.cfm

THE Internet is probably the largest revolution in the computer industry since the advent of the personal computer (PC) and is a valuable clinical tool for physicians and other health care personnel. It can be used to communicate with colleagues around the world, to obtain information (including practice guidelines, abstracts, and journal articles), and to arrange travel and meetings. [1]

Although the technology on which the Internet is based was developed in the mid-1960s, it was not widely available until almost 1990, and access was primarily limited to universities, government agencies, and the computer industry. Until recently, complex software and restricted access required Internet users to have a sophisticated understanding of computers, networking, and programming. New technology has, however, made the Internet available to nearly anyone with access to a PC and a modem (a device that connects the PC to other computers over telephone lines). [2] Getting information on the Internet is now as easy as inserting a disk, clicking a “Setup” icon, and then pointing to a topic with a mouse (a small device that can be moved around a desktop to select an item). Sophisticated multimedia documents, including video, sound, and pictures, covering every topic from fiberoptic intubation to the stock market, are accessible nearly anywhere in the world. [3,4]

The Internet's history begins in the mid-1960s, during the Cold War, when the United States Department of Defense relied on a network of powerful supercomputers to control ballistic missiles and other weapons. It was determined that in the event of a war, it might be necessary to change this network rapidly or add new computers on short notice (e.g., by parachuting computers into an area and connecting them by radio). As a result, the military, through the Advanced Research Projects Agency (ARPA), began to investigate new, simple, and reliable ways to connect computers. [4]

The central assumption of this new technology was that physical connections were unreliable, i.e., a connection could be lost at any time. To avoid this problem, each computer would keep a constantly updated list of its neighbors and would be able to find alternative pathways to a particular destination if a connection were severed. To provide reliable communication in this environment, two closely interacting protocols were developed: the “Internet Protocol” (IP), which moves small packets of information from one computer to another, and the “Transmission Control Protocol” (TCP), which breaks large blocks of data into small chunks and reassembles them on the other end. These intertwined protocols are commonly referred to as “TCP/IP.” Sending a file across the Internet using TCP/IP can be compared with mailing a long letter, one page at a time. TCP separates the letter into individual pages, numbering each page; IP is the envelope that contains each page and gets it to its destination. TCP is not the only method of sending information across the Internet, but it is the most commonly used. [4]
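The letter analogy can be made concrete with a short sketch. The Python fragment below is an illustration only; the message, chunk size, and shuffling are invented for the example. It splits a block of data into small numbered chunks and reassembles them after they arrive out of order, which is essentially the service TCP provides on top of IP's packet-by-packet delivery.

```python
# Illustration only: split a message into numbered "packets" and
# reassemble them, mimicking what TCP does over IP's unordered delivery.
import random

MESSAGE = b"A long letter sent across the Internet, one page at a time."
CHUNK_SIZE = 8  # bytes per "page" (real TCP segments are much larger)

# "TCP" on the sending side: break the data into small numbered chunks.
packets = [(seq, MESSAGE[i:i + CHUNK_SIZE])
           for seq, i in enumerate(range(0, len(MESSAGE), CHUNK_SIZE))]

# "IP" in the middle: individual packets may arrive in any order.
random.shuffle(packets)

# "TCP" on the receiving side: sort by sequence number and reassemble.
reassembled = b"".join(chunk for _, chunk in sorted(packets))
assert reassembled == MESSAGE
print(reassembled.decode())
```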
Personnel at institutions connected to the ARPANet (the Department of Defense TCP/IP network) quickly learned that linking computers permitted research reports, programs, data files, and other information to be shared nearly instantaneously with colleagues at remote locations. Subsequently, other government agencies used the TCP/IP protocol to connect their computers. The National Science Foundation (NSF) created NSFNet, a TCP/IP network, to link its five supercomputers and to provide remote users with access to its resources. The National Aeronautics and Space Administration (NASA) created the NASA Science Network.

By the early 1980s, the biggest impediment to growth of the Internet was bureaucracy. The US government set strict “appropriate use” policies that governed what information could be sent and how each network could be used. Administrators were encumbered by the regulations and by the careful record keeping that was required. To solve this problem, Congress passed a law combining ARPANet, the NASA Science Network, and NSFNet into the National Research and Education Network, administered by the NSF. The Internet was born.

Researchers and scientists at universities and federal agencies, who soon discovered that TCP/IP networks were easy to connect and could expand without disrupting existing networks, were the initial users of the Internet. To promote widespread use of the new network, the NSF created policies that encouraged institutions to make access available to individual users. In 1993, commercial Internet service providers (ISPs) were allowed to sell access to the general public, and the number of Internet users increased rapidly. Shortly thereafter, the development of advanced Internet services, in particular the World Wide Web (WWW), greatly simplified use of the Internet and further increased the rate of growth. Nearly all of the Internet is currently privately funded and maintained by telecommunications and computer companies, educational institutions, and other organizations; the government now funds only those sections that it uses.

A network is a group of computers that are connected so that information can be shared between them. A group of computers sharing a word processor or spreadsheet in an office and computers that retrieve laboratory results from a hospital nursing station are both examples of a network. Computers can be connected using ordinary telephone wire, coaxial cable (like that used for cable television systems), or fiberoptic cable, which consists of thin glass strands that carry information using bursts of light. A computer network in an office or hospital is frequently called a “local area network” (LAN) because all of the computers are on the same floor or in the same building. A “wide area network” (WAN), or “internetwork,” is a group of LANs that have been connected by telephone wire, radio waves, or satellites.

The Internet is not a single network but rather a network of networks that spans the globe. There is no single cable or piece of equipment that one can point to and say, “This is the Internet.” Each computer on the Internet is connected to an institutional network (a LAN) that is in turn connected to an ISP. The ISP connects the individual LAN to a regional network that may span a few square miles, an entire city, or a large part of the country. These larger regional networks are then interconnected to form even larger national or international networks.
A piece of information travels from one computer to another by “hopping” from network to network until it reaches its destination. Because the individual sections are connected together at more than one point, information can automatically find new pathways to a specific destination if part of the network is not functioning properly. This allows TCP/IP networks to be expanded as needed without disrupting service; the Internet is never closed for construction. [5]

No single entity owns, maintains, or even plans the Internet. The United States government, through the NSF and other organizations, formerly funded the Internet. Many telecommunications companies, large and small, now work in cooperation with nonprofit organizations and volunteers to maintain the network. The ultimate authority for determining the future of the Internet belongs to the Internet Society (ISOC), a volunteer organization that promotes global Internet access and use. ISOC appoints another group of volunteers, the Internet Architecture Board, to approve new standards, allocate Internet addresses (a number that locates a specific computer on the network), and formulate long-range policies. ISOC also periodically appoints members to the Internet Engineering Task Force, which in turn creates “Working Groups” that solve specific short-term problems and handle technical issues. The only permanent entity that has been assigned a specific responsibility is the InterNIC, a nonprofit organization. The InterNIC assigns network addresses and names, which must be controlled by one group so that every address is unique.

Many people believe that the Internet is “free.” This is not true, although some schools and companies do not pass their costs on to employees or students. Academic institutions and corporations pay for their connection to a regional network; in the case of educational or nonprofit institutions, part of this cost may be offset through federal subsidies. People can obtain an Internet connection for home use either from a commercial ISP, such as MindSpring or AT&T, or through an online service, such as Prodigy or America Online, which provides Internet access as one of many features.

Most Internet services, such as the WWW (see below), use client-server technology, a model that consists of two discrete pieces of software that work together to provide a flexible, powerful information retrieval system. The server program accepts requests for information and responds by sending data in a standard format that is independent of the type of computer. The server can be any computer on the Internet, from a small desktop computer to a mainframe that fills an entire room. The client program formulates requests for information based on user input and displays the results (e.g., a document, picture, sound, and so on) on the user's computer. Each document must be retrieved each time it is used; updates are therefore reflected immediately. Most WWW clients are easily customized by the user, who controls what typestyles will be used for headings and highlighted text and what “helper application,” or viewing program, will be used for a given file. This arrangement gives the designers of Internet resources a considerable amount of freedom because the document need not be designed for a specific computer.
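A minimal sketch of this client-server exchange is shown below, using Python sockets; the document name, its contents, the request format, and the use of a local loopback connection are assumptions made for the example rather than the protocol of any particular Internet service. The server answers any client with the same bytes, and the client simply formulates a request and displays whatever comes back.

```python
# Minimal client-server sketch: the server returns a "document" for any
# request; the client asks for one and displays the reply.
import socket
import threading

DOCS = {"index.html": "Welcome to the example server."}  # hypothetical content

# Set up the listening socket first so the client cannot connect too early.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))      # port 0: let the operating system pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def serve_once() -> None:
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024).decode().strip()   # e.g. "GET index.html"
        name = request.split()[-1]
        reply = DOCS.get(name, "document not found")
        conn.sendall(reply.encode())                 # same reply regardless of client type

def fetch(name: str) -> str:
    with socket.socket() as cli:
        cli.connect(("127.0.0.1", port))
        cli.sendall(f"GET {name}".encode())          # the client formulates the request...
        return cli.recv(1024).decode()               # ...and displays whatever comes back

server = threading.Thread(target=serve_once)
server.start()
print(fetch("index.html"))
server.join()
srv.close()
```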
Each computer connected to the Internet has a “hostname” that is assigned by the administrator of the local network. A hostname identifies a particular computer on the Internet, just as a street address identifies a particular house in a city. A hostname actually consists of several different names, divided by periods, or “dots,” and usually reveals the institution at which the computer is located (or the Internet service provider through which the computer is connected to the network) and possibly the function or owner of the computer. For example, the name of the GASNet server (an Internet resource for anesthesiologists) is gasnet.med.yale.edu. Working from right to left, edu indicates that the computer is located at an educational institution; yale is the name of the institution. Each name to the left of yale has been assigned by the network administrator at Yale University. At this particular institution, all computers at the medical school have been placed in a “subnetwork” named med. The leftmost part, gasnet, is the name of the computer itself.

Uniform resource locators (URLs) provide a system for identifying the name of each computer, each file of interest, and the exact method of retrieving the file. Developed primarily for the WWW, URLs are now commonly used to describe most resources on the Internet. A URL consists of three parts: a code identifying the transfer protocol to be used, the hostname of the computer being accessed, and the path and file to be retrieved. For example, http://gasnet.med.yale.edu/index.html is the URL for the anesthesiology section of the WWW Virtual Library (a list of hundreds of Internet resources for anesthesiologists). Working from left to right, http indicates that the address is for the WWW (it actually stands for hypertext transfer protocol; other protocols, such as ftp, telnet, and gopher, are described below). A colon and two forward slashes always follow the protocol name. The next part is the name of the computer, in this case gasnet.med.yale.edu. The last part, index.html, is the name of the specific file to be retrieved. Sometimes the part after the hostname is empty, in which case a default (or index) file is usually returned.
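The three parts of a URL can be separated programmatically. The short Python sketch below applies the standard library's urlparse to the GASNet address quoted above; the address is taken from the text and may no longer be in service.

```python
# Split a URL into the three parts described above: protocol, hostname, and path.
from urllib.parse import urlparse

url = "http://gasnet.med.yale.edu/index.html"   # example URL from the text
parts = urlparse(url)

print(parts.scheme)    # 'http'  - the transfer protocol
print(parts.hostname)  # 'gasnet.med.yale.edu' - the computer being accessed
print(parts.path)      # '/index.html' - the file to be retrieved

# The hostname itself reads right to left: edu -> yale -> med -> gasnet.
print(parts.hostname.split("."))   # ['gasnet', 'med', 'yale', 'edu']
```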
The Internet by itself is of academic concern only to researchers in computer networking; it is the information contained within the Internet that makes it important for everyone else. Resources, including practice guidelines, lectures, weather reports, and online journals, are accessed with a variety of computer programs and are referred to collectively as Internet services. [6]

Internet services can be divided into two broad classifications. “Basic services” are primarily text-based and were the first applications to be developed for the Internet; they include a terminal program (telnet), the file transfer protocol (FTP), and electronic mail (e-mail). “Advanced services” include Gopher and the WWW and have existed for only a few years. They take advantage of the high-speed connections currently available, as well as the graphical user interface provided by Microsoft Windows (Microsoft, Redmond, WA), the Apple Macintosh computer (Apple Computer, Cupertino, CA), and other operating systems. Even more sophisticated services, such as interactive documents, teleconferencing, and video on demand, are being introduced at a rapid pace.

Electronic mail (e-mail; Figure 1) is probably the most frequently used Internet service. [6] For many people, e-mail is a primary method of exchanging messages, sending documents, or arranging meetings and appointments with someone in the next office or on another continent. Although e-mail was originally designed to transmit plain text messages, modern software allows word processing files, spreadsheets, and even pictures and sound to be sent. There are many e-mail programs available today, and although they differ in screen appearance, features, and the exact sequence of commands necessary to send a message, they share many common features.

Sending e-mail is relatively straightforward and in many ways resembles mailing a letter. The user types in a message, adds the recipient's address (see below) and “attachments” such as a picture or sound, and then clicks the “Send” button with the mouse or presses the appropriate key on the keyboard. When e-mail was designed, it was intended for short, plain text messages. As it became popular, however, users wanted to send other information, such as formatted word processor documents, programs, sounds, and pictures. Because this information is represented differently from plain text, it may need to be converted to a format that resembles plain text before it is mailed and then converted back to its original format on the receiving end. This conversion is done automatically by most e-mail programs, but older software may require the use of a pair of programs named uuencode and uudecode.

As with postal mail, the e-mail address of each recipient must be specified. E-mail addresses typically consist of two parts: the username and the hostname, which are separated by the @ (at) sign (e.g., ruskin@gasnet.med.yale.edu). The username refers to the user identification of the intended recipient, which is usually assigned by the system administrator. Some institutions, such as universities and large companies, maintain e-mail directories for their own employees. There are also several directories on the Internet (e.g., Four11, http://www.four11.com). If a person moves from one e-mail service to another, the address to which his or her e-mail must be delivered will change. Many users, however, have one or more e-mail “aliases,” which forward e-mail to another address.

The Internet contains many “mailhosts,” whose responsibility is to store and forward e-mail to its destination. The e-mail format most commonly used on the Internet is the “simple mail transfer protocol” (SMTP), which defines how mail messages are formatted and the mechanism by which they are delivered. Some e-mail software packages, such as Microsoft Mail, do not adhere to this standard, and proprietary software is required to use them on the Internet.
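The sketch below gives a rough idea of how a mail program hands a plain text message to an SMTP mailhost, using Python's standard smtplib; the mailhost name and both addresses are placeholders, and a real mailhost would typically require authentication before accepting mail.

```python
# Sketch of handing a plain-text message to an SMTP mailhost.
# The mailhost name and both addresses are placeholders.
from email.message import EmailMessage
import smtplib

msg = EmailMessage()
msg["From"] = "sender@example.edu"          # username@hostname, as described above
msg["To"] = "colleague@example.org"
msg["Subject"] = "Meeting next week"
msg.set_content("Can we discuss the case conference on Tuesday?")

# Connect to the institution's mailhost and hand the message over via SMTP.
with smtplib.SMTP("mailhost.example.edu") as smtp:
    smtp.send_message(msg)
```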
Mailing lists permit users to broadcast information to a few dozen subscribers or to many thousands. Mailing lists can consist of a simple list of e-mail addresses maintained by an individual, or they can be managed by a “list processor,” a program that maintains a database of e-mail addresses in a centralized location and automatically forwards any message addressed to the list to the individual subscribers. Mailing lists can be moderated (each message is approved by the list manager before distribution) or unmoderated (messages are forwarded automatically on receipt). Subscription to a mailing list may also be restricted to a specific group (e.g., members of the American Society of Anesthesiologists [ASA]). The GASNet Anesthesiology Discussion Group currently has 2,200 subscribers; its members are located in countries around the world, and topics of discussion range from political developments to questions about patient care or research. [7] Many societies offer mailing lists to facilitate communication among their members or to broadcast news of important events. For example, the ASA uses a mailing list to keep its members informed of developing news.

USENET News (Figure 2) resembles a bulletin board that contains information on thousands of categories covering many topics. It can be used for scientific research, to get information about a product or service, or to obtain help with computer hardware or software. Newsgroups can be either moderated, in which a person (the moderator) reviews each message before distribution, or unmoderated, in which messages are transmitted without review. It is important to remember that most groups are not restricted: anybody can read and respond to any message posted to the group. It is often best to read the messages in a group before participating in a discussion to gain a sense of what is normally posted.

USENET postings resemble e-mail messages because they contain “from” and “subject” lines, but they are posted to the newsgroup instead of being addressed to an individual recipient. Many newsreaders resemble e-mail programs. Some examples of useful groups include the following:

alt.med.equipment: Medical equipment
comp.os.ms-windows.win95: Information about Windows 95
sci.engr.biomed: Biomedical engineering
bit.med.resp-care.world: Discussions about respiratory therapy
rec.food.restaurants: Where to eat at the next ASA meeting

One of the earliest applications of the Internet was to connect remote computers so that the keyboard and monitor of the client computer acted as if they were a terminal connected directly to the host computer. Telnet allows the user to access the host computer as if he or she were seated at a terminal that is directly connected to it. The most common uses for telnet include connecting to literature search databases and receiving other character-based information, such as the weather. Telnet can also be used to retrieve e-mail from a mainframe computer. Figure 2 and Figure 3 are examples of telnet sessions.

Occasionally the instructions for a specific resource indicate a connection to a specific port (e.g., for weather reports, telnet to downwind.sprl.umich.edu port 3000, or telnet://downwind.sprl.umich.edu:3000). This port number tells the computer which program to run. Most resources that previously used telnet now use the WWW.
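The port-number example above can be reproduced with a few lines of Python that open a raw connection to a host and port, much as a telnet client does, and print the character-based text the server sends back; the weather server named in the text is used as the example host and may no longer be in service.

```python
# Connect to a host and port the way a telnet client would, and print the
# character-based text the server sends. The host and port are taken from
# the weather example in the text and may no longer exist.
import socket

HOST, PORT = "downwind.sprl.umich.edu", 3000

with socket.create_connection((HOST, PORT), timeout=10) as conn:
    data = conn.recv(4096)          # read the first screenful of text
    print(data.decode(errors="replace"))
```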
The file transfer protocol (FTP) (Figure 3) is used to exchange files such as word processor documents, experimental data, or even computer programs over the Internet. FTP has been supplanted to some extent by the WWW, which is much easier to use, but it is still a convenient way for two people to exchange information and is commonly used to obtain new software or updates. FTP might be used, for example, to download National Institutes of Health grant guidelines or a new program (transfer the information from a server to the user's computer) or to upload a completed grant application (transfer it from a local PC to another computer).

Nearly all computers require a user identification and password to allow access. Many computers, however, distribute publicly available files (e.g., a new program or a video clip) using “anonymous FTP,” which allows limited access to people who do not have an account on that system. Anonymous FTP can be accessed by responding to a request for a username with the word “anonymous.” “Netiquette” (Internet etiquette) dictates that an e-mail address should be given as the password, even though none is required. Most anonymous FTP hosts allow files to be downloaded only, although some computers allow files to be uploaded to a single directory (frequently called “incoming”).

Gopher and the WWW are two of the so-called “advanced” Internet services and are designed to be intuitive and easy to use. Both systems allow the user to obtain information without having to know how it is stored or where it is located. The Internet Gopher presents a menu from which the user can select an item by clicking it with a mouse or by entering a number. The list can include text, pictures, or other files, and it can also contain links to other Gopher servers.

The WWW (Figure 4) was initially developed at the European Laboratory for Particle Physics (CERN) in 1992 as a method of publishing and distributing abstracts and papers in the physical sciences. [8] Between 1992 and the present, its content has grown from a few thousand pages to more than 21 million pages of information. [9] The WWW client runs on the user's computer and presents information using an intuitive, graphical user interface that resembles a printed page in appearance and can include text, pictures, sounds, and video clips. Documents can contain highlighted references (hyperlinks) to information on the original computer or on any other Web server on the Internet; these are referred to as hypertext documents. When the user clicks on a highlighted word or phrase with the mouse, the new file is automatically loaded and displayed.

Hyperlinks provide easy access to the wealth of information on the Internet and have made the WWW the centerpiece of the information superhighway. They make it possible for a document to be retrieved without knowing its composition or even where it is physically located. Hyperlinks may be located anywhere in the document and are indicated by highlighted or underlined words that can be clicked with the mouse to activate the linked document. Hypermedia documents (multimedia documents that contain hyperlinks) reside on thousands of WWW servers located around the world. A session on the WWW may take the user from the East Coast to the West Coast of the United States or to Europe, Asia, or Australia, all within a few seconds. [10] Many WWW clients can also access other Internet services, such as Gopher, Telnet, FTP, and Usenet, allowing a single program to be used for most tasks.
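A short sketch of how a WWW client retrieves a hypertext document and finds the hyperlinks within it is given below, using Python's standard urllib and html.parser; the URL is a placeholder, and a real browser of course does far more (rendering, helper applications, and so on).

```python
# Fetch a Web page over HTTP and list the hyperlinks it contains.
# The URL is a placeholder; any reachable page would do.
from html.parser import HTMLParser
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":   # anchor tags mark hyperlinks in a hypertext document
            self.links.extend(value for name, value in attrs if name == "href")

with urlopen("http://www.example.com/") as response:   # placeholder URL
    page = response.read().decode(errors="replace")

collector = LinkCollector()
collector.feed(page)
print(collector.links)   # URLs of the documents this page links to
```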
Faster Internet connections and improved data compression techniques, combined with inexpensive hardware and software, make it possible to send real-time audio and video using only a desktop PC or Macintosh computer. Internet broadcasting makes it possible to receive everything from live broadcasts of sporting events to ASA safety videos over an ordinary modem. Physicians now use inexpensive software to hold “virtual meetings” and even to consult on procedures in remote locations. Such virtual meetings enable physicians to share case conferences and everyday educational activities, including with colleagues in developing countries.

The three methods most commonly used to obtain an Internet connection are a direct connection provided at an institution such as a hospital, an online service, and a commercial ISP. [2,6,9] Physicians working in a hospital whose network is connected to the Internet may already have access at the hospital. Institutions may connect their computers in one of several ways. The most common method (