TEXTS FOR INDIVIDUAL READING
2ND YEAR, IPM
№ 1. VIRUSES AND VACCINES 3301
№ 2. COMPUTER CRIMES 3083
№ 3. INPUT DEVICES 3302
№ 4. Hardware (computer) 7324
№ 5. Computer Security 8370
№ 6. Virus (computer) 11.114
№ 7. Network (computer science) 7581
№ 8. Operating System 4563
№ 9. User Interface 9538
№ 10. Multimedia 6357
№ 11. Keyboard 3073
№ 12. Programming Language 9849
№ 13. Monitor (computer) 3564
№ 14. Central Processing Unit 8846
№ 15. Integrated Circuit 3900
№ 1. VIRUSES AND VACCINES
1. The terms viruses and vaccines have entered the jargon of the computer industry to describe some of the bad things that can happen to computer systems and programs. One such unpleasant occurrence was the attack of the Michelangelo virus on March 6, 1991. So from now on you need to check your IBM or IBM-compatible personal computer for the presence of Michelangelo before March 6 every year, or you risk losing all the data on your hard disk when you turn on your machine that day. And Macintosh users need to do the same for another intruder, the Jerusalem virus, before each Friday the 13th, or risk a similar fate for their data.
2. A virus, as its name suggests, is contagious. It is a set of illicit instructions that infects other programs and may spread rapidly. The Michelangelo virus went worldwide within a year. Some types of viruses include the worm, a program that spreads by replicating itself; the bomb, a program intended to sabotage a computer by triggering damage based on certain conditions usually at a later date; and the Trojan horse, a program that covertly places illegal, destructive instructions in the middle of an otherwise legitimate program. A virus may be dealt with by means of a vaccine, or antivirus, program, a computer program that stops the spread of and often eradicates the virus.
3. Transmitting a Virus. A programmer secretly inserts a few unauthorized instructions in a personal computer operating system program. The illicit instructions lie dormant until three events occur together: 1. the disk with the infected operating system is in use; 2. a disk in another drive contains another copy of the operating system and some data files; and 3. a command, such as COPY or DIR, from the infected operating system references a data file. Under these circumstances, the virus instructions are now inserted into the other operating system. Thus the virus has spread to another disk, and the process can be repeated again and again. In fact, each newly infected disk becomes a virus carrier.
4. Damage from Viruses. We have explained how the virus is transmitted; now we come to the interesting part: the consequences. In this example, the virus instructions add 1 to a counter each time the virus is copied to another disk. When the counter reaches 4, the virus erases all data files. But this is not the end of the destruction, of course; three other disks have also been infected. Although viruses can be destructive, some are quite benign; one simply displays a peace message on the screen on a given date. Others may merely be a nuisance, like the Ping-Pong virus that bounces a "Ping-Pong ball" around your screen while you are working. But a few could result in disaster for your disk, as in the case of Michelangelo.
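The counter-and-trigger mechanism described above is easy to picture in code. Below is a minimal Python sketch of the idea only; the names and threshold are illustrative assumptions, not real virus code, and it simply shows how a payload can stay dormant until a copy counter reaches a limit.

COPY_THRESHOLD = 4  # the text's example: the payload fires once four copies have been made

def copy_to_disk(counter):
    """Simulate infecting one more disk; return the new count and whether the payload fires."""
    counter += 1
    return counter, counter >= COPY_THRESHOLD

copies = 0
for disk in ["A", "B", "C", "D"]:
    copies, triggered = copy_to_disk(copies)
    print("disk", disk, "- copies so far:", copies, "- payload fires:", triggered)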
5. Prevention. A word about prevention is in order. Although there are programs called vaccines that can prevent virus activity, protecting your computer from viruses depends more on common sense than on building a "fortress" around the machine. Although there have been occasions where commercial software was released with a virus, these situations are rare. Viruses tend to show up most often on free software acquired from friends. Even commercial bulletin board systems, once considered the most likely suspects in transferring viruses, have cleaned up their act and now assure their users of virus-free environments. But not all bulletin board systems are run professionally. So you should always test diskettes you share with others by putting their write-protection tabs in place. If an attempt is made to write to such a protected diskette, a warning message appears on the screen. It is not easy to protect hard disks, so many people use antivirus programs. Before any diskette can be used with a computer system, the antivirus program scans the diskette for infection. The drawback is that once you buy this type of software, you must continuously pay the price for upgrading as new viruses are discovered.
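To give an idea of the kind of scanning such an antivirus program performs, here is a hedged Python sketch: it checks every file on a diskette against a small table of known virus signatures. The signature table, paths, and function names are assumptions made for illustration, not any real product's data.

from pathlib import Path

# Hypothetical signature table: virus name -> byte pattern it is known to contain.
SIGNATURES = {
    "ExampleVirus": b"\xde\xad\xbe\xef",
}

def infections_in(path):
    data = path.read_bytes()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

def scan_diskette(root):
    for path in Path(root).rglob("*"):
        if path.is_file():
            for name in infections_in(path):
                print(path, "is infected with", name)

# scan_diskette("A:/")  # example call on a mounted diskette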
(3301)
Notes:
contagious - infectious, spreading easily;
covertly - secretly, invisibly;
eradicate - to remove, to wipe out;
illicit instruction - a forbidden, unlawful instruction;
benign - harmless;
nuisance - an inconvenience, an annoyance;
bulletin board systems - electronic message boards.
№ 2. COMPUTER CRIMES
1. More and more, the operations of our businesses, governments, and financial institutions are controlled by information that exists only inside computer memories. Anyone clever enough to modify this information for his own purposes can reap substantial rewards. Even worse, a number of people who have done this and been caught at it have managed to get away without punishment.
These facts have not been lost on criminals or would-be criminals. A recent Stanford Research Institute study of computer abuse was based on 160 case histories, which probably are just the proverbial tip of the iceberg. After all, we only know about the unsuccessful crimes. How many successful ones have gone undetected is anybody's guess.
2. Here are a few areas in which computer criminals have found the pickings all too easy.
Banking. All but the smallest banks now keep their accounts on computer files. Someone who knows how to change the numbers in the files can transfer funds at will. For instance, one programmer was caught having the computer transfer funds from other people's accounts to his wife's checking account. Often, traditionally trained auditors don't know enough about the workings of computers to catch what is taking place right under their noses.
3. Business. A company that uses computers extensively offers many opportunities to both dishonest employees and clever outsiders. For instance, a thief can have the computer ship the company's products to addresses of his own choosing. Or he can have it issue checks to him or his confederates for imaginary supplies or services. People have been caught doing both.
4. Credit Cards. There is a trend toward using cards similar to credit cards to gain access to funds through cash-dispensing terminals. Yet, in the past, organized crime has used stolen or counterfeit credit cards to finance its operations. Banks that offer after-hours or remote banking through cash-dispensing terminals may find themselves unwillingly subsidizing organized crime.
5. Theft of Information. Much personal information about individuals is now stored in computer files. An unauthorized person with access to this information could use it for blackmail. Also, confidential information about a company's products or operations can be stolen and sold to unscrupulous competitors.
6. Software Theft. The software for a computer system is often more expensive than the hardware. Yet this expensive software is all too easy to copy. Crooked computer experts have devised a variety of tricks for getting these expensive programs printed out, punched on cards, recorded on tape, or otherwise delivered into their hands. This crime has even been perpetrated from remote terminals that access the computer over the telephone.
7. Theft of Time-Sharing Services. When the public is given access to a system, some members of the public often discover how to use the system in unauthorized ways. For example, there are the "phone breakers" who avoid long distance telephone charges by sending over their phones control signals that are identical to those used by the telephone company. Since time-sharing systems often are accessible to anyone who dials the right telephone number, they are subject to the same kinds of manipulation.
Of course, most systems use account numbers and passwords to restrict access to authorized users. But unauthorized persons have proved to be adept at obtaining this information and using it for their own benefit. For instance, when a police computer system was demonstrated to a school class, a precocious student noted the access codes being used; later, all the student's teachers turned up on a list of wanted criminals.
(3083)
Notes:
to reap - to gather, to collect (rewards);
computer abuse - misuse of computers;
confederate - an accomplice, a partner in crime;
cash-dispensing terminals - cash machines (ATMs);
counterfeit - forged, fake;
to be adept - to be skilled, experienced;
precocious - unusually advanced for one's age.
№ 3. INPUT DEVICES
1. There are several devices used for inputting information into the computer: a keyboard, coordinate input devices such as manipulators (a mouse, a track ball), touch panels and graphics tablets, scanners, digital cameras, TV tuners, sound cards, etc.
When personal computers first became popular, the most common device used to transfer information from the user to the computer was the keyboard. It enables the input of numerical and text data. A standard keyboard has 104 keys, plus three light indicators in the upper right corner that report the current operating mode.
2. Later, as more advanced graphics began to develop, users found that a keyboard did not provide adequate control over the graphics and text shown on the display. Manipulators, the mouse and the track ball, appeared; they are usually used when working with a graphical interface. Each software program uses these buttons differently.
The mouse is an optical-mechanical input device with two or three buttons. Moving the mouse moves the cursor across the screen, thus simplifying the user's orientation on the display. The mouse's primary functions are to help the user draw, point, and select images on the computer display by moving the mouse across a flat surface.
3. In general, software programs require the user to press one or more buttons, sometimes keeping them depressed, or to double-click them in order to issue commands and to draw or erase images. When you move the mouse across a flat surface, the ball located on the bottom of the mouse turns two rollers: one tracks the mouse's vertical movements, the other tracks horizontal movements. The rotating ball glides easily, giving the user good control over textual and graphical images.
4. In portable computers touch panels, or touch pads, are used instead of manipulators. A touch panel senses the placement of a user's finger and can be used to execute commands or access files. Moving a finger along the surface of the touch pad is translated into cursor movement across the screen.
Graphics tablets (digitizers) are used for drawing and for inputting handwritten text. You can draw and add notes and signs to electronic documents by means of a special pen. The quality of a graphics tablet is characterized by its resolution, that is, the number of lines per inch, and by its ability to respond to the pressure of the pen.
A scanner is used for the optical input of images (photographs, pictures, slides) and texts and for converting them into digital form.
5. Digital video cameras have become widespread recently. They have a microphone used to input sounds, such as the human voice, which can activate computer commands in conjunction with voice recognition software. Sound cards, used together with digital video cameras, convert sound from analog to digital form; they are also able to synthesize sounds. Video cameras make it possible to capture video images and photographs directly in digital format, and digital cameras make it possible to take high-quality photos.
Other input devices include the joystick, a rod-like device often used by people who play computer games, and special game ports.
6. Now some engineers use a light pen to modify a technical drawing on a computer display screen. Light pens are electronic pointers that allow users to modify designs on-screen. The hand-held pointer contains sensors that send signals to the computer whenever light is recorded. The computer's screen is not lit up all at once, but traced row by row by an electron beam sixty times every second. Because of this, the computer is able to determine the pen's position by noting exactly when the pen detects the electron beam passing its tip. Light pens are often used in computer-aided design and computer-aided manufacture (CAD and CAM) technology because of the flexibility they provide.
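The timing trick described above can be sketched in a few lines of Python. The screen size, refresh rate, and function name below are assumed values chosen only to show how the moment the pen sees the beam maps to a row and column on the screen.

ROWS, COLS = 480, 640      # assumed screen resolution
REFRESH_HZ = 60            # the beam repaints the whole screen 60 times per second
FRAME_TIME = 1.0 / REFRESH_HZ
ROW_TIME = FRAME_TIME / ROWS

def pen_position(seconds_since_frame_start):
    """Convert the moment the pen saw the beam into an approximate (row, column)."""
    row = int(seconds_since_frame_start / ROW_TIME)
    col = int((seconds_since_frame_start % ROW_TIME) / ROW_TIME * COLS)
    return row, col

print(pen_position(0.00834))  # a detection about halfway through the frame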
(3302)
Notes:
track ball - a device for moving the pointer on the display screen;
to issue - to give out, to produce (a command);
to glide - to slide, to move smoothly;
to convert - to transform from one form to another;
in conjunction with - together with;
an electron beam - a focused stream of electrons.
№ 4. Hardware (computer)
I. INTRODUCTION
Hardware (computer), equipment involved in the function of a computer. Computer hardware consists of the components that can be physically handled. The function of these components is typically divided into three main categories: input, output, and storage. Components in these categories connect to microprocessors, specifically, the computer's central processing unit (CPU), the electronic circuitry that provides the computational ability and control of the computer, via wires or circuitry called a bus.
Software, on the other hand, is the set of instructions a computer uses to manipulate data, such as a word-processing program or a video game. These programs are usually stored and transferred via the computer's hardware to and from the CPU. Software also governs how the hardware is utilized; for example, how information is retrieved from a storage device. The interaction between the input and output hardware is controlled by software called the Basic Input Output System software (BIOS).
Although microprocessors are still technically considered to be hardware, portions of their function are also associated with computer software. Since microprocessors have both hardware and software aspects, they are often referred to as firmware.
II. INPUT HARDWARE
Input hardware consists of external devices, that is, components outside of the computer's CPU, that provide information and instructions to the computer. A light pen is a stylus with a light-sensitive tip that is used to draw directly on a computer's video screen or to select information on the screen by pressing a clip in the light pen or by pressing the light pen against the surface of the screen. The pen contains light sensors that identify which portion of the screen it is passed over. A mouse is a pointing device designed to be gripped by one hand. It has a detection device (usually a ball) on the bottom that enables the user to control the motion of an on-screen pointer, or cursor, by moving the mouse on a flat surface. As the device moves across the surface, the cursor moves across the screen. To select items or choose commands on the screen, the user presses a button on the mouse. A joystick is a pointing device composed of a lever that moves in multiple directions to navigate a cursor or other graphical object on a computer screen. A keyboard is a typewriter-like device that allows the user to type in text and commands to the computer. Some keyboards have special function keys or integrated pointing devices, such as a trackball or touch-sensitive regions that let the user's finger motions move an on-screen cursor.
An optical scanner uses light-sensing equipment to convert images such as a picture or text into electronic signals that can be manipulated by a computer. For example, a photograph can be scanned into a computer and then included in a text document created on that computer. The two most common scanner types are the flatbed scanner, which is similar to an office photocopier, and the handheld scanner, which is passed manually across the image to be processed. A microphone is a device for converting sound into signals that can then be stored, manipulated, and played back by the computer. A voice recognition module is a device that converts spoken words into information that the computer can recognize and process.
A modem, which stands for modulator-demodulator, is a device that connects a computer to a telephone line or cable television network and allows information to be transmitted to or received from another computer. Each computer that sends or receives information must be connected to a modem. The digital signal sent from one computer is converted by the modem into an analog signal, which is then transmitted by telephone lines or television cables to the receiving modem, which converts the signal back into a digital signal that the receiving computer can understand.
III. OUTPUT HARDWARE
Output hardware consists of external devices that transfer information from the computer's CPU to the computer user. A video display, or screen, converts information generated by the computer into visual information. Displays commonly take one of two forms: a video screen with a cathode ray tube (CRT) or a video screen with a liquid crystal display (LCD). A CRT-based screen, or monitor, looks similar to a television set. Information from the CPU is displayed using a beam of electrons that scans a phosphorescent surface that emits light and creates images. An LCD-based screen displays visual information on a flatter and smaller screen than a CRT-based video monitor. LCDs are frequently used in laptop computers.
Printers take text and image from a computer and print them on paper. Dot-matrix printers use tiny wires to impact upon an inked ribbon to form characters. Laser printers employ beams of light to draw images on a drum that then picks up fine black particles called toner. The toner is fused to a page to produce an image. Inkjet printers fire droplets of ink onto a page to form characters and pictures.
IV. STORAGE HARDWARE
Storage hardware provides permanent storage of information and programs for retrieval by the computer. The two main types of storage devices are disk drives and memory. There are several types of disk drives: hard, floppy, magneto-optical, and compact. Hard disk drives store information in magnetic particles embedded in a disk. Usually a permanent part of the computer, hard disk drives can store large amounts of information and retrieve that information very quickly. Floppy disk drives also store information in magnetic particles embedded in removable disks that may be floppy or rigid. Floppy disks store less information than a hard disk drive and retrieve the information at a much slower rate. Magneto-optical disc drives store information on removable discs that are sensitive to both laser light and magnetic fields. They can typically store as much information as hard disks, but they have slightly slower retrieval speeds. Compact disc drives store information on pits burned into the surface of a disc of reflective material (see CD-ROM). CD-ROMs can store about as much information as a hard drive but have a slower rate of information retrieval. A digital video disc (DVD) looks and works like a CD-ROM but can store more than 15 times as much information.
Memory refers to the computer chips that store information for quick retrieval by the CPU. Random access memory (RAM) is used to store the information and instructions that operate the computer's programs. Typically, programs are transferred from storage on a disk drive to RAM. RAM is also known as volatile memory because the information within the computer chips is lost when power to the computer is turned off. Read-only memory (ROM) contains critical information and software that must be permanently available for computer operation, such as the operating system that directs the computer's actions from start up to shut down. ROM is called nonvolatile memory because the memory chips do not lose their information when power to the computer is turned off.
Some devices serve more than one purpose. For example, floppy disks may also be used as input devices if they contain information to be used and processed by the computer user. In addition, they can be used as output devices if the user wants to store the results of computations on them.
V. HARDWARE CONNECTIONS
To function, hardware requires physical connections that allow components to communicate and interact. A bus provides a common interconnected system composed of a group of wires or circuitry that coordinates and moves information between the internal parts of a computer. A computer bus consists of two channels, one that the CPU uses to locate data, called the address bus, and another to send the data to that address, called the data bus. A bus is characterized by two features: how much information it can manipulate at one time, called the bus width, and how quickly it can transfer these data.
A serial connection is a wire or set of wires used to transfer information from the CPU to an external device such as a mouse, keyboard, modem, scanner, and some types of printers. This type of connection transfers only one piece of data at a time, and is therefore slow. The advantage to using a serial connection is that it provides effective connections over long distances.
A parallel connection uses multiple sets of wires to transfer blocks of information simultaneously. Most scanners and printers use this type of connection. A parallel connection is much faster than a serial connection, but it is limited to distances of less than 3 m (10 ft) between the CPU and the external device.
7324
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
№ 5. Computer Security
I. INTRODUCTION
Computer Security, techniques developed to safeguard information and information systems stored on computers. Potential threats include the destruction of computer hardware and software and the loss, modification, theft, unauthorized use, observation, or disclosure of computer data.
Computers and the information they contain are often considered confidential systems because their use is typically restricted to a limited number of users. This confidentiality can be compromised in a variety of ways. For example, computers and computer data can be harmed by people who spread computer viruses and worms. A computer virus is a set of computer program instructions that attaches itself to programs in other computers. The viruses are often parts of documents that are transmitted as attachments to e-mail messages. A worm is similar to a virus but is a self-contained program that transports itself from one computer to another through networks. Thousands of viruses and worms exist and can quickly contaminate millions of computers.
People who intentionally create viruses are computer experts often known as hackers. Hackers also violate confidentiality by observing computer monitor screens and by impersonating authorized users of computers in order to gain access to the users' computers. They invade computer databases to steal the identities of other people by obtaining private, identifying information about them. Hackers also engage in software piracy and deface Web sites on the Internet. For example, they may insert malicious or unwanted messages on a Web site, or alter graphics on the site. They gain access to Web sites by impersonating Web site managers.
Malicious hackers are increasingly developing powerful software crime tools such as automatic computer virus generators, Internet eavesdropping sniffers, password guessers, vulnerability testers, and computer service saturators. For example, an Internet eavesdropping sniffer intercepts Internet messages sent to other computers. A password guesser tries millions of combinations of characters in an effort to guess a computer's password. Vulnerability testers look for software weaknesses. These crime tools are also valuable security tools used for testing the security of computers and networks.
An increasingly common hacker tool that has gained widespread public attention is the computer service saturator, used in denial-of-service attacks, which can shut down a selected or targeted computer on the Internet by bombarding the computer with more requests than it can handle. This tool first searches for vulnerable computers on the Internet where it can install its own software program. Once installed, the compromised computers act like “zombies” sending usage requests to the target computer. If thousands of computers become infected with the software, then all would be sending usage requests to the target computer, overwhelming its ability to handle the requests for service.
A variety of simple techniques can help prevent computer crimes, such as protecting computer screens from observation, keeping printed information and computers in locked facilities, backing up copies of data files and software, and clearing desktops of sensitive information and materials. Increasingly, however, more sophisticated methods are needed to prevent computer crimes. These include using encryption techniques, establishing software usage permissions, mandating passwords, and installing firewalls and intrusion detection systems. In addition, controls within application systems and disaster recovery plans are also necessary.
II. BACKUP
Storing backup copies of software and data and having backup computer and communication capabilities are important basic safeguards because the data can then be restored if it was altered or destroyed by a computer crime or accident. Computer data should be backed up frequently and should be stored nearby in secure locations in case of damage at the primary site. Transporting sensitive data to storage locations should also be done securely.
III. ENCRYPTION
Another technique to protect confidential information is encryption. Computer users can scramble information to prevent unauthorized users from accessing it. Authorized users can unscramble the information when needed by using a secret code called a key. Without the key the scrambled information would be impossible or very difficult to unscramble. A more complex form of encryption uses two keys, called the public key and the private key, and a system of double encryption. Each participant possesses a secret, private key and a public key that is known to potential recipients. Both keys are used to encrypt, and matching keys are used to decrypt the message. However, the advantage over the single-key method lies with the private keys, which are never shared and so cannot be intercepted. The public key verifies that the sender is the one who transmitted it. The keys are modified periodically, further hampering unauthorized unscrambling and making the encrypted information more difficult to decipher.
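As a toy illustration of single-key scrambling (not of the two-key public/private scheme, and not a secure cipher), the Python sketch below applies the same secret key both to scramble and to unscramble a message. The XOR scheme and all names are illustrative assumptions; real systems use algorithms such as AES.

from itertools import cycle

def xor_scramble(data, key):
    """Apply the key byte by byte; applying the same key a second time restores the original."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"secret-key"
message = b"transfer 100 to account 42"
scrambled = xor_scramble(message, key)
restored = xor_scramble(scrambled, key)
assert restored == message
print(scrambled)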
IV. APPROVED USERS
Another technique to help prevent abuse and misuse of computer data is to limit the use of computers and data files to approved persons. Security software can verify the identity of computer users and limit their privileges to use, view, and alter files. The software also securely records their actions to establish accountability. Military organizations give access rights to classified, confidential, secret, or top-secret information according to the corresponding security clearance level of the user. Other types of organizations also classify information and specify different degrees of protection.
V. PASSWORDS
Passwords are confidential sequences of characters that allow approved persons to make use of specified computers, software, or information. To be effective, passwords must be difficult to guess and should not be found in dictionaries. Effective passwords contain a variety of characters and symbols that are not part of the alphabet. To thwart imposters, computer systems usually limit the number of attempts and restrict the time it takes to enter the correct password.
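A minimal Python sketch of the attempt-limiting idea is shown below; the function names, the limit of three attempts, and the example passwords are assumptions made for illustration. Real systems also store passwords in hashed form rather than as plain text and may add delays or lockouts.

MAX_ATTEMPTS = 3  # assumed limit

def login(stored_password, read_attempt):
    """Allow at most MAX_ATTEMPTS password entries, then refuse further tries."""
    for attempt_no in range(1, MAX_ATTEMPTS + 1):
        if read_attempt() == stored_password:
            return True
        print("wrong password, attempt", attempt_no, "of", MAX_ATTEMPTS)
    print("too many attempts: access refused")
    return False

attempts = iter(["guess1", "guess2", "R3d$quirrel!"])   # scripted example input
print(login("R3d$quirrel!", lambda: next(attempts)))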
A more secure method is to require possession and use of tamper-resistant plastic cards with microprocessor chips, known as “smart cards,” which contain a stored password that automatically changes after each use. When a user logs on, the computer reads the card's password, as well as another password entered by the user, and matches these two respectively to an identical card password generated by the computer and the user's password stored in the computer in encrypted form. Use of passwords and "smart cards" is beginning to be reinforced by biometrics, identification methods that use unique personal characteristics, such as fingerprints, retinal patterns, facial characteristics, or voice recordings.
VI. FIREWALLS
Computers connected to communication networks, such as the Internet, are particularly vulnerable to electronic attack because so many people have access to them. These computers can be protected by using firewall computers or software placed between the networked computers and the network. The firewall examines, filters, and reports on all information passing through the network to ensure its appropriateness. These functions help prevent saturation of input capabilities that otherwise might deny usage to legitimate users, and they ensure that information received from an outside source is expected and does not contain computer viruses.
VII. INTRUSION DETECTION SYSTEMS
Security software called intrusion detection systems may be used in computers to detect unusual and suspicious activity and, in some cases, stop a variety of harmful actions by authorized or unauthorized persons. Abuse and misuse of sensitive system and application programs and data such as password, inventory, financial, engineering, and personnel files can be detected by these systems.
VIII. APPLICATION SAFEGUARDS
The most serious threats to the integrity and authenticity of computer information come from those who have been entrusted with usage privileges and yet commit computer fraud. For example, authorized persons may secretly transfer money in financial networks, alter credit histories, sabotage information, or commit bill payment or payroll fraud. Modifying, removing, or misrepresenting existing data threatens the integrity and authenticity of computer information. For example, omitting sections of a bad credit history so that only the good credit history remains violates the integrity of the document. Entering false data to complete a fraudulent transfer or withdrawal of money violates the authenticity of banking information. These crimes can be prevented by using a variety of techniques. One such technique is checksumming. Checksumming sums the numerically coded word contents of a file before and after it is used. If the sums are different, then the file has been altered. Other techniques include authenticating the sources of messages, confirming transactions with those who initiate them, segregating and limiting job assignments to make it necessary for more than one person to be involved in committing a crime, and limiting the amount of money that can be transferred through a computer.
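Here is a small Python sketch of the checksumming technique just described, under assumed names: it sums the byte values of a file before and after use and reports the file as altered if the two sums differ.

import os
import tempfile

def checksum(path):
    """A very simple checksum: the sum of all byte values in the file."""
    with open(path, "rb") as f:
        return sum(f.read())

# Demonstration on a temporary file standing in for, say, a payroll file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"salary records")
    path = f.name

before = checksum(path)
with open(path, "ab") as f:          # simulate an unauthorized modification
    f.write(b" plus a fraudulent entry")
after = checksum(path)

print("file altered" if before != after else "file unchanged")
os.remove(path)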
IX. DISASTER RECOVERY PLANS
Organizations and businesses that rely on computers need to institute disaster recovery plans that are periodically tested and upgraded. This is because computers and storage components such as diskettes or hard disks are easy to damage. A computer's memory can be erased, or flooding, fire, or other forms of destruction can damage the computer's hardware. Computers, computer data, and components should be installed in safe and locked facilities.
8370
Contributed By:
Donn B. Parker
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
№ 6. Virus (computer)
I. INTRODUCTION
Virus (computer), self-duplicating computer program that interferes with a computer's hardware or operating system (the basic software that runs the computer). Viruses are designed to duplicate or replicate themselves and to avoid detection. Like any other computer program, a virus must be executed for it to function; that is, it must be located in the computer's memory, and the computer must then follow the virus's instructions. These instructions are called the payload of the virus. The payload may disrupt or change data files, display an irrelevant or unwanted message, or cause the operating system to malfunction.
II. HOW INFECTIONS OCCUR
Computer viruses activate when the instructions, or executable code, that run programs are opened. Once a virus is active, it may replicate by various means and tries to infect the computer's files or the operating system. For example, it may copy parts of itself to floppy disks, to the computer's hard drive, into legitimate computer programs, or it may attach itself to e-mail messages and spread across computer networks by infecting other shared drives. Infection is much more frequent in PCs than in professional mainframe systems because programs on PCs are exchanged primarily by means of floppy disks, e-mail, or over unregulated computer networks.
Viruses operate, replicate, and deliver their payloads only when they are run. Therefore, if a computer is simply attached to an infected computer network or downloads an infected program, it will not necessarily become infected. Typically a computer user is not likely to knowingly run potentially harmful computer code. However, viruses often trick the computer's operating system or the computer user into running the viral program.
Some viruses have the ability to attach themselves to otherwise legitimate programs. This attachment may occur when the legitimate program is created, opened, or modified. When that program is run, so is the virus. Viruses can also reside on portions of the hard disk or floppy disk that load and run the operating system when the computer is started, and such viruses thereby are run automatically. In computer networks, some viruses hide in the software that allows the user to log on (gain access to) the system.
With the widespread use of e-mail and the Internet, viruses can spread quickly. Viruses attached to e-mail messages can infect an entire local network in minutes.
III. TYPES OF VIRUSES
There are five categories of viruses: parasitic or file viruses, bootstrap sector, multi-partite, macro, and script viruses.
Parasitic or file viruses infect executable files or programs in the computer. These files are often identified by the extension .exe in the name of the computer file. File viruses leave the contents of the host program unchanged but attach to the host in such a way that the virus code is run first. These viruses can be either direct-action or resident. A direct-action virus selects one or more programs to infect each time it is executed. A resident virus hides in the computer's memory and infects a particular program when that program is executed.
Bootstrap-sector viruses reside on the first portion of the hard disk or floppy disk, known as the boot sector. These viruses replace either the programs that store information about the disk's contents or the programs that start the computer. Typically, these viruses spread by means of the physical exchange of floppy disks.
Multi-partite viruses combine the abilities of the parasitic and the bootstrap-sector viruses, and so are able to infect either files or boot sectors. These types of viruses can spread if a computer user boots from an infected diskette or accesses infected files.
Other viruses infect programs that contain powerful macro languages (programming languages that let the user create new features and utilities). These viruses, called macro viruses, are written in macro languages and automatically execute when the legitimate program is opened.
Script viruses are written in script programming languages, such as VBScript (Visual Basic Script) and JavaScript. These script languages can be seen as a special kind of macro language and are even more powerful because most are closely related to the operating system environment. The "ILOVEYOU" virus, which appeared in 2000 and infected an estimated 1 in 5 personal computers, is a famous example of a script virus.
IV. ANTI-VIRAL TACTICS
A. Preparation and Prevention
Computer users can prepare for a viral infection by creating backups of legitimate original software and data files regularly so that the computer system can be restored if necessary. Viral infection can be prevented by obtaining software from legitimate sources or by using a quarantined computer, that is, a computer not connected to any network, to test new software. However, the best prevention may be the installation of current and well-designed antiviral software. Such software can prevent a viral infection and thereby help stop its spread.
B. Virus Detection
Several types of antiviral software can be used to detect the presence of a virus. Scanning software can recognize the characteristics of a virus's computer code and look for these characteristics in the computer's files. Because new viruses must be analyzed as they appear, scanning software must be updated periodically to be effective. Other scanners search for common features of viral programs and are usually less reliable. Most antiviral software uses both on-demand and on-access scanners. On-demand scanners are launched only when the user activates them. On-access scanners, on the other hand, are constantly monitoring the computer for viruses but are always in the background and are not visible to the user. The on-access scanners are seen as the proactive part of an antivirus package and the on-demand scanners are seen as reactive. On-demand scanners usually detect a virus only after the infection has occurred and that is why they are considered reactive.
Antivirus software is usually sold as packages containing many different software programs that are independent of one another and perform different functions. When installed or packaged together, antiviral packages provide complete protection against viruses. Within most antiviral packages, several methods are used to detect viruses. Checksumming, for example, uses mathematical calculations to compare the state of executable programs before and after they are run. If the checksum has not changed, then the system is uninfected. Checksumming software can detect an infection only after it has occurred, however. As this technology is dated and some viruses can evade it, checksumming is rarely used today.
Most antivirus packages also use heuristics (problem-solving by trial and error) to detect new viruses. This technology observes a program's behavior and evaluates how closely it resembles a virus. It relies on experience with previous viruses to predict the likelihood that a suspicious file is an as-yet unidentified or unclassified new virus.
Other types of antiviral software include monitoring software and integrity-shell software. Monitoring software is different from scanning software. It detects illegal or potentially damaging viral activities such as overwriting computer files or reformatting the computer's hard drive. Integrity-shell software establishes layers through which any command to run a program must pass. Checksumming is performed automatically within the integrity shell, and infected programs, if detected, are not allowed to run.
C. Containment and Recovery
Once a viral infection has been detected, it can be contained by immediately isolating computers on networks, halting the exchange of files, and using only write-protected disks. In order for a computer system to recover from a viral infection, the virus must first be eliminated. Some antivirus software attempts to remove detected viruses, but sometimes with unsatisfactory results. More reliable results are obtained by turning off the infected computer; restarting it from a write-protected floppy disk; deleting infected files and replacing them with legitimate files from backup disks; and erasing any viruses on the boot sector.
V. VIRAL STRATEGIES
The authors of viruses have several strategies to circumvent antivirus software and to propagate their creations more effectively. So-called polymorphic viruses make variations in the copies of themselves to elude detection by scanning software. A stealth virus hides from the operating system when the system checks the location where the virus resides, by forging results that would be expected from an uninfected system. A so-called fast-infector virus infects not only programs that are executed but also those that are merely accessed. As a result, running antiviral scanning software on a computer infected by such a virus can infect every program on the computer. A so-called slow-infector virus infects files only when the files are modified, so that it appears to checksumming software that the modification was legitimate. A so-called sparse-infector virus infects only on certain occasions; for example, it may infect every tenth program executed. This strategy makes it more difficult to detect the virus.
By using combinations of several virus-writing methods, virus authors can create more complex new viruses. Many virus authors also tend to use new technologies when they appear. The antivirus industry must move rapidly to update its antiviral software and contain outbreaks of such new viruses.
VI. VIRUSLIKE COMPUTER PROGRAMS
There are other harmful computer programs that can be part of a virus but are not considered viruses because they do not have the ability to replicate. These programs fall into three categories: Trojan horses, logic bombs, and deliberately harmful or malicious software programs that run within Web browsers, application programs such as Internet Explorer and Netscape that display Web sites.
A Trojan horse is a program that pretends to be something else. A Trojan horse may appear to be something interesting and harmless, such as a game, but when it runs it may have harmful effects. The term comes from the classic Greek story of the Trojan horse found in Homer's Iliad.
A logic bomb infects a computer's memory, but unlike a virus, it does not replicate itself. A logic bomb delivers its instructions when it is triggered by a specific condition, such as when a particular date or time is reached or when a combination of letters is typed on a keyboard. A logic bomb has the ability to erase a hard drive or delete certain files.
Malicious software programs that run within a Web browser often appear in Java applets and ActiveX controls. Although these applets and controls improve the usefulness of Web sites, they also increase a vandal's ability to interfere with unprotected systems. Because those controls and applets require that certain components be downloaded to a user's personal computer (PC), activating an applet or control might actually download malicious code.
A. History
In 1949 Hungarian American mathematician John von Neumann, at the Institute for Advanced Study in Princeton, New Jersey, proposed that it was theoretically possible for a computer program to replicate. This theory was tested in the 1950s at Bell Laboratories when a game called Core Wars was developed, in which players created tiny computer programs that attacked, erased, and tried to propagate on an opponent's system.
In 1983 American electrical engineer Fred Cohen, at the time a graduate student, coined the term virus to describe a self-replicating computer program. In 1985 the first Trojan horses appeared, posing as a graphics-enhancing program called EGABTR and as a game called NUKE-LA. A host of increasingly complex viruses followed.
The so-called Brain virus appeared in 1986 and spread worldwide by 1987. In 1988 two new viruses appeared: Stone, the first bootstrap-sector virus, and the Internet worm, which crossed the United States overnight via computer network. A computer worm is a subset of a virus. However, instead of infecting files or operating systems, worms replicate from computer to computer by spreading entire copies of themselves. The Dark Avenger virus, the first fast infector, appeared in 1989, followed by the first polymorphic virus in 1990.
Computer viruses grew more sophisticated in the 1990s. In 1995 the first macro language virus, WinWord Concept, was created. In 1999 the Melissa macro virus, spread by e-mail, disabled e-mail servers around the world for several hours, and in some cases several days. Regarded by some as the most prolific virus ever, Melissa cost corporations millions of dollars due to computer downtime and lost productivity.
The VBS_LOVELETTER script virus, also known as the Love Bug and the ILOVEYOU virus, unseated Melissa as the world's most prevalent and costly virus when it struck in May 2000. By the time the outbreak was finally brought under control, losses were estimated at US$10 billion, and the Love Bug is said to have infected 1 in every 5 PCs worldwide.
11.114
Contributed By:
Eddy Willems
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
№ 7. Network (computer science)
I. INTRODUCTION
Network (computer science), techniques, physical connections, and computer programs used to link two or more computers. Network users are able to share files, printers, and other resources; send electronic messages; and run programs on other computers.
A network has three layers of components: application software, network software, and network hardware. Application software consists of computer programs that interface with network users and permit the sharing of information, such as files, graphics, and video, and resources, such as printers and disks. One type of application software is called client-server. Client computers send requests for information or requests to use resources to other computers, called servers, that control data and applications. Another type of application software is called peer-to-peer. In a peer-to-peer network, computers send messages and requests directly to one another without a server intermediary.
Network software consists of computer programs that establish protocols, or rules, for computers to talk to one another. These protocols are carried out by sending and receiving formatted instructions of data called packets. Protocols make logical connections between network applications, direct the movement of packets through the physical network, and minimize the possibility of collisions between packets sent at the same time.
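The sketch below, in Python, shows one way such a packet might be represented and how a message could be split into packets. The field names and the packet size are illustrative assumptions, not any real protocol's layout.

from dataclasses import dataclass

@dataclass
class Packet:
    source: str        # address of the sending computer
    destination: str   # address of the receiving computer
    sequence: int      # position of this packet within the whole message
    payload: bytes     # the piece of data being carried

def split_into_packets(message, src, dst, size=4):
    """Cut a message into fixed-size pieces, each wrapped in a Packet."""
    count = (len(message) + size - 1) // size
    return [Packet(src, dst, i, message[i * size:(i + 1) * size]) for i in range(count)]

for p in split_into_packets(b"hello network", "10.0.0.1", "10.0.0.2"):
    print(p)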
Network hardware is made up of the physical components that connect computers. Two important components are the transmission media that carry the computer's signals, typically on wires or fiber-optic cables, and the network adapter, which accesses the physical media that link computers, receives packets from network software, and transmits instructions and requests to other computers. Transmitted information is in the form of binary digits, or bits (1s and 0s), which the computer's electronic circuitry can process.
II. NETWORK CONNECTIONS
A network has two types of connections: physical connections that let computers directly transmit and receive signals and logical, or virtual, connections that allow computer applications, such as word processors, to exchange information. Physical connections are defined by the medium used to carry the signal, the geometric arrangement of the computers (topology), and the method used to share information. Logical connections are created by network protocols and allow data sharing between applications on different types of computers, such as an Apple Macintosh and an International Business Machines Corporation (IBM) personal computer (PC), in a network. Some logical connections use client-server application software and are primarily for file and printer sharing. The Transmission Control Protocol/Internet Protocol (TCP/IP) suite, originally developed by the United States Department of Defense, is the set of logical connections used by the Internet, the worldwide consortium of computer networks. TCP/IP, based on peer-to-peer application software, creates a connection between any two computers.
A. Media
The medium used to transmit information limits the speed of the network, the effective distance between computers, and the network topology. Copper wires and coaxial cable provide transmission speeds of a few thousand bits per second for long distances and about 100 million bits per second (Mbps) for short distances. Optical fibers carry 100 million to 1 billion bits of information per second over long distances.
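To make these rates concrete, the short Python calculation below estimates how long a 10-megabyte file would take to transfer at each kind of speed quoted above; the specific figure chosen for "a few thousand" bits per second is an assumption.

FILE_BITS = 10 * 1_000_000 * 8   # a 10-megabyte file expressed in bits

rates = [
    ("copper wire over a long distance", 5_000),          # "a few thousand" bps, assumed value
    ("copper or coaxial cable, short distance", 100_000_000),
    ("optical fiber", 1_000_000_000),
]

for name, bits_per_second in rates:
    print(name + ":", round(FILE_BITS / bits_per_second, 2), "seconds")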
B. Topology
Common topologies used to arrange computers in a network are point-to-point, bus, star, and ring. Point-to-point topology is the simplest, consisting of two connected computers. The bus topology is composed of a single link connected to many computers. All computers on this common connection receive all signals transmitted by any attached computer. The star topology connects many computers to a common hub computer. This hub can be passive, repeating any input to all computers similar to the bus topology, or it can be active, selectively switching inputs to specific destination computers. The ring topology uses multiple links to form a circle of computers. Each link carries information in one direction. Information moves around the ring in sequence from its source to its destination (see Computer Architecture).
Local area networks (LANs), which connect computers separated by short distances, such as in an office or a university campus, commonly use bus, star, or ring topologies. Wide area networks (WANs), which connect distant equipment across the country or internationally, often use special leased telephone lines as point-to-point links.
C. Sharing Information
When computers share physical connections to transmit information packets, a set of Media Access Control (MAC) protocols are used to allow information to flow smoothly through the network. An efficient MAC protocol ensures that the transmission medium is not idle if computers have information to transmit. It also prevents collisions due to simultaneous transmission that would waste media capacity. MAC protocols also allow different computers fair access to the medium.
One type of MAC is Ethernet, which is used by bus or star network topologies. An Ethernet-linked computer first checks if the shared medium is in use. If not, the computer transmits. Since two computers can both sense an idle medium and send packets at the same time, transmitting computers continue to monitor the shared connection and stop transmitting information if a collision occurs. Ethernet can transmit information at a rate of 10 Mbps.
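The listen-before-talk behaviour described above can be sketched as a toy simulation. The Python fragment below uses assumed names and random chances (no real timing or hardware) just to show the sense, transmit, and back-off cycle.

import random

def try_to_send(station, medium_busy, max_attempts=5):
    """Sense the medium, transmit when idle, and back off after a collision."""
    for attempt in range(max_attempts):
        if medium_busy:
            print(station + ": medium busy, waiting")
            medium_busy = False                     # assume the medium becomes idle by the next check
            continue
        collision = random.random() < 0.3           # chance that another station sent at the same moment
        if collision:
            backoff = random.randint(0, 2 ** attempt)
            print(station + ": collision detected, backing off for", backoff, "slots")
        else:
            print(station + ": frame transmitted")
            return True
    return False

try_to_send("station-A", medium_busy=True)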
Computers also can use Token Ring MAC protocols, which pass a special message called a token through the network. This token gives the computer permission to send a packet of information through the network. If a computer receives the token, it sends a packet, or, if it has no packet to send, it passes the token to the next computer. Since there is only one token in the network, only one computer can transmit information at a time.
III. NETWORK OPERATION AND MANAGEMENT
Network management and system administration are critical for a complex system of interconnected computers and resources to remain operating. A network manager is the person or team of people responsible for configuring the network so that it runs efficiently. For example, the network manager might need to connect computers that communicate frequently to reduce interference with other computers. The system administrator is the person or team of people responsible for configuring the computer and its software to use the network. For example, the system administrator may install network software and configure a server's file system so client computers can access shared files.
Networks are subject to hacking, or illegal access, so shared files and resources must be protected. A network intruder could eavesdrop on packets being sent across a network or send fictitious messages. For sensitive information, data encryption (scrambling data using mathematical equations) renders captured packets unreadable to an intruder. Most servers also use authentication schemes to ensure that a request to read or write files or to use resources is from a legitimate client and not from an intruder (see Computer Security).
IV. FUTURE TECHNOLOGIES AND TRENDS
The wide use of notebook and other portable computers drives advances in wireless networks. Wireless networks use either infrared or radio-frequency transmissions to link these mobile computers to networks. Infrared wireless LANs work only within a room, while wireless LANs based on radio-frequency transmissions can penetrate most walls. Wireless LANs have capacities from less than 1 Mbps to 8 Mbps and operate at distances up to a few hundred meters. Wireless communication for WANs uses cellular telephone networks, satellite transmissions, or dedicated equipment to provide regional or global coverage, but such links have transmission rates of only 2,000 to 19,000 bits per second.
New networks must also meet the growing demand for faster transmission speeds, especially for sound and video applications. One recently developed network, called an Asynchronous Transfer Mode (ATM) network, has speeds of up to 625 Mbps and can be used by either LANs or WANs.
In February 1996 Fujitsu Ltd., Nippon Telephone and Telegraph Corporation, and a team of researchers from AT&T succeeded in transmitting information through an optical fiber at a rate of 1 trillion bits per second, the equivalent of transmitting 300 years of newspapers in a single second. This was accomplished by simultaneously sending different wavelengths of light, each carrying separate information, through the optical fiber. If it can be integrated into a network, this new technology will make it easy, inexpensive, and incredibly fast to send information, such as video and memory-sensitive three-dimensional images.
7581
Contributed By:
Scott F. Midkiff
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
№ 8. Operating System
I. INTRODUCTION
Operating System (OS), in computer science, the basic software that controls a computer. The operating system has three major functions: It coordinates and manipulates computer hardware, such as computer memory, printers, disks, keyboard, mouse, and monitor; it organizes files on a variety of storage media, such as floppy disk, hard drive, compact disc, digital video disc, and tape; and it manages hardware errors and the loss of data.
II. HOW AN OS WORKS
Operating systems control different computer processes, such as running a spreadsheet program or accessing information from the computer's memory. One important process is interpreting commands, enabling the user to communicate with the computer. Some command interpreters are text oriented, requiring commands to be typed in or to be selected via function keys on a keyboard. Other command interpreters use graphics and let the user communicate by pointing and clicking on an icon, an on-screen picture that represents a specific command. Beginners generally find graphically oriented interpreters easier to use, but many experienced computer users prefer text-oriented command interpreters.
Operating systems are either single-tasking or multitasking. The more primitive single-tasking operating systems can run only one process at a time. For instance, when the computer is printing a document, it cannot start another process or respond to new commands until the printing is completed.
All modern operating systems are multitasking and can run several processes simultaneously. In most computers, however, there is only one central processing unit (CPU; the computational and control unit of the computer), so a multitasking OS creates the illusion of several processes running simultaneously on the CPU. The most common mechanism used to create this illusion is time-slice multitasking, whereby each process is run individually for a fixed period of time. If the process is not completed within the allotted time, it is suspended and another process is run. This exchanging of processes is called context switching. The OS performs the “bookkeeping” that preserves a suspended process. It also has a mechanism, called a scheduler, that determines which process will be run next. The scheduler runs short processes quickly to minimize perceptible delay. The processes appear to run simultaneously because the user's sense of time is much slower than the processing speed of the computer.
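A compact Python sketch of time-slice scheduling is given below; the process names, time requirements, and slice length are invented for illustration, and the deque simply models the scheduler's ready queue.

from collections import deque

TIME_SLICE = 2   # arbitrary units of CPU time per turn

def round_robin(processes):
    """Run each process for one time slice at a time until all are finished."""
    queue = deque(processes.items())          # (name, remaining time) pairs: the ready queue
    while queue:
        name, remaining = queue.popleft()
        ran = min(TIME_SLICE, remaining)
        remaining -= ran
        print("ran", name, "for", ran, "units;", remaining, "remaining")
        if remaining > 0:
            queue.append((name, remaining))   # context switch: suspend and requeue
        else:
            print(name, "finished")

round_robin({"editor": 3, "print_job": 5, "mail": 1})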
Operating systems can use a technique known as virtual memory to run processes that require more main memory than is actually available. To implement this technique, space on the hard drive is used to mimic the extra memory needed. Accessing the hard drive is more time-consuming than accessing main memory, however, so performance of the computer slows.
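A toy model of the idea might look like the following Python sketch, in which a small "RAM" of three page frames spills the least recently used page to "disk" when it is full; the page numbers are invented, and real systems manage paging with hardware support.

    # Toy virtual memory: three page frames of "RAM", overflow on "disk".
    RAM_FRAMES = 3
    ram, disk = [], []

    def access(page):
        if page in ram:
            ram.remove(page)                 # already in main memory
        else:
            if page in disk:
                disk.remove(page)            # bring the page back from disk
            if len(ram) == RAM_FRAMES:
                victim = ram.pop(0)          # evict the least recently used
                disk.append(victim)          # ... by writing it to disk
        ram.append(page)                     # most recently used at the end

    for page in [1, 2, 3, 4, 2, 5]:
        access(page)
    print("RAM:", ram, "  Disk:", disk)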
III. CURRENT OPERATING SYSTEMS
Operating systems commonly found on personal computers include UNIX, Macintosh OS, and Windows. UNIX, developed in 1969 at AT&T Bell Laboratories, is a popular operating system among academic computer users. Its popularity is due in large part to the growth of the interconnected computer network known as the Internet. Software for the Internet was initially designed for computers that ran UNIX. Variations of UNIX include SunOS (distributed by Sun Microsystems, Inc.), Xenix (distributed by Microsoft Corporation), and Linux (available for download free of charge and distributed commercially by companies such as Red Hat, Inc.). UNIX and its clones support multitasking and multiple users. Its file system provides a simple means of organizing disk files and lets users control access to their files. The commands in UNIX are not readily apparent, however, and mastering the system is difficult. Consequently, although UNIX is popular with professionals, it is not the operating system of choice for the general public.
Instead, windowing systems with graphical interfaces, such as Windows and the Macintosh OS, which make computer technology more accessible, are widely used in personal computers (PCs). However, graphical systems generally have the disadvantage of requiring more hardware, such as faster CPUs, more memory, and higher-quality monitors, than do command-oriented operating systems.
IV. FUTURE TECHNOLOGIES
Operating systems continue to evolve. A recently developed type of OS called a distributed operating system is designed for a connected, but independent, collection of computers that share resources such as hard drives. In a distributed OS, a process can run on any computer in the network (presumably a computer that is idle) to increase that process's performance. All basic OS functions, such as maintaining file systems, ensuring reasonable behavior, and recovering data in the event of a partial failure, become more complex in distributed systems.
Research is also being conducted that would replace the keyboard with a means of using voice or handwriting for input. Currently these types of input are imprecise because people pronounce and write words very differently, making it difficult for a computer to recognize the same input from different users. However, advances in this field have led to systems that can recognize a small number of words spoken by a variety of people. In addition, software has been developed that can be taught to recognize an individual's handwriting.
4563
Contributed By:
Mark Allen Weiss
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
№ 9. User Interface
I. INTRODUCTION
User Interface, in computer science, components humans use to communicate with computers. A computer user directs the function of a computer with instructions called input. Input is entered by various devices, such as a keyboard, and is translated into electronic signals that a computer can process. These signals pass along circuit pathways known as buses and are coordinated and controlled by the central processing unit (CPU), the computer circuitry that performs arithmetic and logical functions, and by software known as the operating system. Once the CPU has performed the commands directed by the user, it may communicate the results by sending electronic signals, called output, back along the bus to one or more output devices, such as a printer or video display monitor.
In addition to a computer's speed, the usability of the software and the ergonomic design of the physical components are important considerations. Usability is the ease with which a person learns to use an application, as well as how efficient and effective the application is. Ergonomics determines how people function in relation to their environment and, with respect to computers, how to make input and output devices easy, comfortable, and efficient to use. For example, curved ergonomic keyboards prevent wrists from bending at unnatural angles, making the user more comfortable and input faster.
II. INPUT AND OUTPUT DEVICES
A variety of devices are used to enter data. Most personal computers (PCs) include a keyboard because it is easy to use and efficient for everyday tasks such as word processing. A mouse, trackball, and joystick are other input devices that help the user point, select, and move objects on a video display monitor. Handwriting can be entered on a computer's screen using light pens, wands that contain sensors to translate the user's motions into data. Touch screens in which infrared light sensors detect a user's fingers are used in environments where keyboards are unsuitable, such as cash dispensing machines. Sound and speech recognition are popular for some applications, but these input devices are still imperfect and usually understand and respond to only a small vocabulary of commands.
The most familiar output devices are printers and color video display monitors. Audio output is also common, as well as sophisticated connections to synthesizers that produce a wide range of musical sounds (see MIDI).
III. COMMAND AND GRAPHICAL INTERFACE
Dialog between the user and the computer is usually accomplished by command-line or graphical user interfaces (GUIs). Command-line interfaces require the user to type brief commands on a keyboard to direct the computer's actions. GUIs use windows to organize files and applications represented by icons (small pictures) and menus that list commands. The user directly manipulates these visual objects on the video display monitor by pointing, highlighting, and dragging or by moving them with a mouse or trackball.
GUIs are easier to learn than command-line interfaces because typed commands must be memorized and tend to vary between different computer systems. Entering commands with a GUI is slower, however, so GUIs usually have optional command-line equivalents as a quick alternative for more experienced users.
IV. SPECIAL SYSTEMS
Some users require special interfaces. Visually impaired people, for example, use screen readers to translate individual lines of text from the screen into speech, and printers that produce text in the Braille system. Adapting graphical interfaces for the visually impaired is more difficult, although some word processors provide menus, windows, and icons with auditory properties that make sounds when the cursor passes over them or when the cursor passes into off-screen areas. Some systems, however, have yet to be adequately developed for the visually impaired, such as Web browsers, the visual interfaces that access the global information database known as the World Wide Web.
Virtual reality (VR) provides users with the illusion of being in a three-dimensional (3D) world. There are two types of VR systems: immersive and nonimmersive. Immersive systems involve wearing a head-mounted display or helmet and data gloves that translate the user's hand motions into data the computer can process. This VR interface enables the user to directly experience a simulated environment. The user can turn, pick up, throw, or push computer-generated objects using gestures similar to those they would normally use. In VR, users are aware of the simulated environment and their actions through visual, auditory, and some tactile sensations. Immersive VR is used for applications such as pilot training systems, computer games, and medical training. Nonimmersive VR systems display alternate environments for the user to navigate through but do not require users to wear specialized equipment. Instead, users rely on conventional devices such as video display monitors, keyboards, and a mouse to manipulate the simulated environment.
V. FUTURE INTERFACES
A wealth of information is now available to computer users. However, not all of the information is useful, and finding exactly what is needed can be difficult. Two approaches being developed to deal with this information surplus are intelligent agents and empowering users. Intelligent agents (often portrayed as an animated helpful person or creature on the computer screen) act independently within a computer system to carry out a limited set of tasks. For example, an agent could be used to sift electronic mail and provide a signal when an important message has arrived. Empowering users puts powerful, easy-to-use browsing, searching, and sifting tools under the direct command of the user.
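A drastically simplified version of such an agent might be sketched as follows in Python; the keyword list and the messages are invented purely for illustration.

    # Toy mail-sifting agent: flag messages that contain "important" words.
    IMPORTANT_WORDS = {"urgent", "deadline", "exam"}

    inbox = [
        "Lunch on Friday?",
        "URGENT: project deadline moved to Monday",
        "Weekly newsletter digest",
    ]

    for message in inbox:
        words = set(message.lower().replace(":", " ").split())
        if words & IMPORTANT_WORDS:
            print("Important ->", message)
        else:
            print("         ->", message)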
9538
Contributed By:
Jennifer Preece
Richard Jacques
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
№ 10. Multimedia
I. INTRODUCTION
Multimedia, in computer science, the presentation of information using the combination of text, sound, pictures, animation, and video. Common multimedia computer applications include games, learning software, and reference materials, such as this encyclopedia. Most multimedia applications include predefined associations, known as hyperlinks, that enable users to switch between media elements and topics.
Thoughtfully presented multimedia can enhance the scope of presentation in ways that are similar to the roving associations made by the human mind. Connectivity provided by hyperlinks transforms multimedia from static presentations with pictures and sound into an endlessly varying and informative interactive experience.
Multimedia applications are computer programs; typically they are stored on compact discs (CD-ROMs). They may also reside on the World Wide Web, which is the media-rich component of the international communication network known as the Internet. Multimedia documents found on the World Wide Web are called Web pages. Linking information together with hyperlinks is accomplished by special computer programs or computer languages. The computer language used to create Web pages is called HyperText Markup Language (HTML).
Multimedia applications usually require more computer memory and processing power than the same information represented by text alone. For instance, a computer running multimedia applications must have a fast central processing unit (CPU), which is the electronic circuitry that provides the computational ability and control of the computer. A multimedia computer also requires extra electronic memory to help the CPU in making calculations and to enable the video screen to draw complex images. The computer also needs a high capacity hard disk to store and retrieve multimedia information, and a compact disc drive to play CD-ROM applications. Finally, a multimedia computer must have a keyboard and a pointing device, such as a mouse or a trackball, so that the user can direct the associations between multimedia elements.
II. VISUAL ELEMENTS
The larger, sharper, and more colorful an image is, the harder it is to present and manipulate on a computer screen. Photographs, drawings, and other still images must be changed into a format that the computer can manipulate and display. Such formats include bit-mapped graphics and vector graphics.
Bit-mapped graphics store, manipulate, and represent images as rows and columns of tiny dots. In a bit-mapped graphic, each dot has a precise location described by its row and column, much like each house in a city has a precise address. Some of the most common bit-mapped graphics formats are called Graphical Interchange Format (GIF), Tagged Image File Format (TIFF), and Windows Bitmap (BMP).
Vector graphics use mathematical formulas to recreate the original image. In a vector graphic, the dots are not defined by a row-and-column address; rather they are defined by their spatial relationships to one another. Because their dot components are not restricted to a particular row and column, vector graphics can reproduce images more easily, and they generally look better on most video screens and printers. Common vector graphics formats are Encapsulated Postscript (EPS), Windows Metafile Format (WMF), Hewlett-Packard Graphics Language (HPGL), and Macintosh graphics file format (PICT).
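The contrast between the two approaches can be sketched in a few lines of Python; the structures below are simplified illustrations, not the actual layout of GIF, BMP, or EPS files.

    # Bit-mapped image: one value per dot, addressed by row and column.
    bitmap = [
        [0, 0, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 1, 0],
    ]
    print("dot at row 1, column 2:", bitmap[1][2])

    # Vector image: shapes described mathematically, not dot by dot.
    vector = [
        ("line",   {"start": (0, 0), "end": (10, 10)}),
        ("circle", {"centre": (5, 5), "radius": 3}),
    ]
    for shape, parameters in vector:
        print(shape, parameters)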
Obtaining, formatting, and editing video elements require special computer components and programs. Video files can be quite large, so they are usually reduced in size using compression, a technique that identifies a recurring set of information, such as one hundred black dots in a row, and replaces it with a single piece of information to save space in the computer's storage systems. Common video compression formats are Audio Video Interleave (AVI), Quicktime, and Motion Picture Experts Group (MPEG or MPEG2). These formats can shrink video files by as much as 95 percent, but they introduce varying degrees of fuzziness in the images.
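The "hundred black dots in a row" example is the idea behind run-length encoding, shown here as a minimal Python sketch; real video formats such as MPEG use far more sophisticated techniques.

    # Run-length encoding: replace a run of identical values with a pair
    # (value, count) so that long runs take almost no space.
    def rle_encode(dots):
        encoded = []
        for dot in dots:
            if encoded and encoded[-1][0] == dot:
                encoded[-1][1] += 1          # extend the current run
            else:
                encoded.append([dot, 1])     # start a new run
        return encoded

    row = ["black"] * 100 + ["white"] * 3
    print(rle_encode(row))                   # [['black', 100], ['white', 3]]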
Animation can also be included in multimedia applications to add motion to images. Animations are particularly useful to simulate real-world situations, such as the flight of a jet airplane. Animation can also enhance existing graphics and video elements by adding special effects such as morphing, the blending of one image seamlessly into another (see Computer Graphics).
III. SOUND ELEMENTS
Sound, like visual elements, must be recorded and formatted so the computer can understand and use it in presentations. Two common types of audio format are Waveform (WAV) and Musical Instrument Digital Interface (MIDI). WAV files store actual sounds, much as music CDs and tapes do. WAV files can be large and may require compression. MIDI files do not store the actual sounds, but rather instructions that enable devices called synthesizers to reproduce the sounds or music. MIDI files are much smaller than WAV files, but the quality of the sound reproduction is not nearly as good.
IV. ORGANIZATIONAL ELEMENTS
Multimedia elements included in a presentation require a framework that encourages the user to learn and interact with the information. Interactive elements include pop-up menus, small windows that appear on the computer screen with a list of commands or multimedia elements for the user to choose. Scroll bars, usually located on the side of the computer screen, enable the user to move to another portion of a large document or picture.
The integration of the elements of a multimedia presentation is enhanced by hyperlinks. Hyperlinks creatively connect the different elements of a multimedia presentation using colored or underlined text or a small picture, called an icon, on which the user points the cursor and clicks on a mouse. For example, an article on President John F. Kennedy might include a paragraph on his assassination, with a hyperlink on the words “the Kennedy funeral.” The user clicks on the hyperlinked text and is transferred to a video presentation of the Kennedy funeral. The video is accompanied by a caption with embedded hyperlinks that take the user to a presentation on funeral practices of different cultures, complete with sounds of various burial songs. The songs, in turn, have hyperlinks to a presentation on musical instruments. This chain of hyperlinks may lead users to information they would never have encountered otherwise.
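A chain of hyperlinks like the one just described can be modelled as a simple lookup table; in the Python sketch below, the anchor texts and link targets are taken from the example above, and the follow function is of course invented.

    # Each hyperlink maps its anchor text to the presentation it opens.
    links = {
        "the Kennedy funeral": "video of the Kennedy funeral",
        "funeral practices": "presentation on funeral practices of different cultures",
        "burial songs": "presentation on musical instruments",
    }

    def follow(anchor_text):
        return links.get(anchor_text, "no link defined for this text")

    print(follow("the Kennedy funeral"))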
V. MULTIMEDIA APPLICATIONS
Multimedia has had an enormous impact on education. For example, medical schools use multimedia-simulated operations that enable prospective surgeons to perform operations on a computer-generated "virtual" patient. Similarly, students in engineering schools use interactive multimedia presentations of circuit design to learn the basics of electronics and to immediately implement, test, and manipulate the circuits they design on the computer. Even in elementary schools, students use simple yet powerful multimedia authoring tools to create multimedia presentations that enhance reports and essays.
Multimedia is also used in commercial applications. For instance, some amusement arcades offer multimedia games that allow players to race Indy cars or battle each other from the cockpits of make-believe giant robots. Architects use multimedia presentations to give clients tours of houses that have yet to be built. Mail-order businesses provide multimedia catalogues that allow prospective buyers to browse virtual showrooms.
6357
Contributed By:
William Ditto
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
№ 11. Keyboard
Keyboard, in computer science, a keypad device with buttons or keys that a user presses to enter data characters and commands into a computer. Keyboards emerged from the combination of typewriter and computer-terminal technology. They are one of the fundamental pieces of personal computer (PC) hardware, along with the central processing unit (CPU), the monitor or screen, and the mouse or other cursor device.
The most common English-language key pattern for typewriters and keyboards is called QWERTY, after the layout of the first six letters in the top row of its keys (from left to right). In the late 1860s, American inventor and printer Christopher Sholes invented the modern form of the typewriter. Sholes created the QWERTY keyboard layout by separating commonly used letters so that typists would type slower and not jam their mechanical typewriters. Subsequent generations of typists have learned to type using QWERTY keyboards, prompting manufacturers to maintain this key orientation on typewriters.
Computer keyboards copied the QWERTY key layout and have followed the precedent set by typewriter manufacturers of keeping this convention. Modern keyboards connect with the computer CPU by cable or by infrared transmitter. When a key on the keyboard is pressed, a numeric code is sent to the keyboard's driver software and to the computer's operating system software. The driver translates this data into a specialized command that the computer's CPU and application programs understand. In this way, users may enter text, commands, numbers, or other data. The term character is generally reserved for letters, numbers, and punctuation, but may also include control codes, graphical symbols, mathematical symbols, and graphic images.
Almost all standard English-language keyboards have keys for each character of the American Standard Code for Information Interchange (ASCII) character set, as well as various function keys. Most computers and applications today use seven or eight data bits for each character. Other character sets include ISO Latin 1, Kanji, and Unicode. Each character is represented by a unique number understood by the computer. For example, ASCII code 65 is equal to the letter A. The function keys generate short, fixed sequences of character codes that instruct application programs running on the computer to perform certain actions. Often, keyboards also have directional buttons for moving the screen cursor, separate numeric pads for entering numeric and arithmetic data, and a switch for turning the computer on and off. Some keyboards, including most for laptop computers, also incorporate a trackball, mouse pad, or other cursor-directing device. No standard exists for positioning the function, numeric, and other buttons on a keyboard relative to the QWERTY and other typewriting keys. Thus layouts vary on keyboards.
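The correspondence between characters and their numeric codes is easy to check; in Python, for example:

    # Each character corresponds to a numeric code.
    print(ord("A"))      # 65, the ASCII code for the letter A
    print(chr(65))       # 'A', the character whose code is 65
    print(ord("a"))      # 97, lowercase letters have different codes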
An alternative keyboard design not yet widely used but broadly acknowledged for its speed advantages is the Dvorak keyboard. In the 1930s, American educators August Dvorak and William Dealy designed this key set so that the letters that make up most words in the English language are in the middle row of keys and are easily reachable by a typist's fingers. Common letter combinations are also positioned so that they can be typed quickly. Most keyboards are arranged in rectangles, left to right around the QWERTY layout. Newer, innovative keyboard designs are more ergonomic in shape. These keyboards have separated banks of keys and are less likely to cause carpal tunnel syndrome, a disorder often caused by excessive typing on less ergonomic keyboards.
3073
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
№ 12. Programming Language
I. INTRODUCTION
Programming Language, in computer science, artificial language used to write a sequence of instructions (a computer program) that can be run by a computer. Similar to natural languages, such as English, programming languages have a vocabulary, grammar, and syntax. However, natural languages are not suited for programming computers because they are ambiguous, meaning that their vocabulary and grammatical structure may be interpreted in multiple ways. The languages used to program computers must have simple logical structures, and the rules for their grammar, spelling, and punctuation must be precise.
Programming languages vary greatly in their sophistication and in their degree of versatility. Some programming languages are written to address a particular kind of computing problem or for use on a particular model of computer system. For instance, programming languages such as Fortran and COBOL were written to solve certain general types of programming problems: Fortran for scientific applications, and COBOL for business applications. Although these languages were designed to address specific categories of computer problems, they are highly portable, meaning that they may be used to program many types of computers. Other languages, such as machine languages, are designed to be used by one specific model of computer system, or even by one specific computer in certain research applications. The most commonly used programming languages are highly portable and can be used to effectively solve diverse types of computing problems. Languages like C, PASCAL, and BASIC fall into this category.
II. LANGUAGE TYPES
Programming languages can be classified as either low-level languages or high-level languages. Low-level programming languages, or machine languages, are the most basic type of programming languages and can be understood directly by a computer. Machine languages differ depending on the manufacturer and model of computer. High-level languages are programming languages that must first be translated into a machine language before they can be understood and processed by a computer. Examples of high-level languages are C, C++, PASCAL, and Fortran. Assembly languages are intermediate languages that are very close to machine language and do not have the level of linguistic sophistication exhibited by other high-level languages, but must still be translated into machine language.
A. Machine Languages
In machine languages, instructions are written as sequences of 1s and 0s, called bits, that a computer can understand directly. An instruction in machine language generally tells the computer four things: (1) where to find one or two numbers or simple pieces of data in the main computer memory (Random Access Memory, or RAM), (2) a simple operation to perform, such as adding the two numbers together, (3) where in the main memory to put the result of this simple operation, and (4) where to find the next instruction to perform. While all executable programs are eventually read by the computer in machine language, they are not all programmed in machine language. It is extremely difficult to program directly in machine language because the instructions are sequences of 1s and 0s. A typical instruction in a machine language might read 10010 1100 1011 and mean add the contents of storage register A to the contents of storage register B.
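The following Python sketch shows how such a bit pattern might be taken apart into an operation code and two register fields; the field widths and code values are invented for illustration, since real machine languages differ by manufacturer and model.

    # Decode a made-up machine instruction: a 5-bit operation code
    # followed by two 4-bit register fields (widths chosen arbitrarily).
    instruction = "10010 1100 1011"
    opcode, first, second = instruction.split()

    OPCODES = {"10010": "ADD"}                 # invented opcode table
    REGISTERS = {"1100": "A", "1011": "B"}     # invented register codes

    print(OPCODES[opcode], REGISTERS[first], "into", REGISTERS[second])
    # prints: ADD A into B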
B. High-Level Languages
High-level languages are relatively sophisticated sets of statements utilizing words and syntax from human language. They are more similar to normal human languages than assembly or machine languages and are therefore easier to use for writing complicated programs. These programming languages allow larger and more complicated programs to be developed faster. However, high-level languages must be translated into machine language by another program called a compiler before a computer can understand them. For this reason, programs written in a high-level language may take longer to execute and use up more memory than programs written in an assembly language.
C. Assembly Language
Computer programmers use assembly languages to make machine-language programs easier to write. In an assembly language, each statement corresponds roughly to one machine-language instruction. An assembly language statement is composed with the aid of easy-to-remember commands. The command to add the contents of storage register A to the contents of storage register B might be written ADD B,A in a typical assembly language statement. Assembly languages share certain features with machine languages. For instance, it is possible to manipulate specific bits in both assembly and machine languages. Programmers use assembly languages when it is important to minimize the time it takes to run a program, because the translation from assembly language to machine language is relatively simple. Assembly languages are also used when some part of the computer has to be controlled directly, such as individual dots on a monitor or the flow of individual characters to a printer.
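Translating such a statement back into bits is the assembler's job. The toy Python translator below uses the same invented encoding as the machine-language example above; it is a sketch of the idea, not a real assembler.

    # Toy assembler: translate "ADD B,A" into the invented encoding above.
    OPCODES = {"ADD": "10010"}
    REGISTERS = {"A": "1100", "B": "1011"}

    def assemble(statement):
        mnemonic, operands = statement.split(maxsplit=1)
        destination, source = [r.strip() for r in operands.split(",")]
        return " ".join([OPCODES[mnemonic],
                         REGISTERS[source], REGISTERS[destination]])

    print(assemble("ADD B,A"))    # prints: 10010 1100 1011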
III. CLASSIFICATION OF HIGH-LEVEL LANGUAGES
High-level languages are commonly classified as procedure-oriented, functional, object-oriented, or logic languages. The most common high-level languages today are procedure-oriented languages. In these languages, one or more related blocks of statements that perform some complete function are grouped together into a program module, or procedure, and given a name such as “procedure A.” If the same sequence of operations is needed elsewhere in the program, a simple statement can be used to refer back to the procedure. In essence, a procedure is just a mini-program. A large program can be constructed by grouping together procedures that perform different tasks. Procedural languages allow programs to be shorter and easier for the computer to read, but they require the programmer to design each procedure to be general enough to be used in different situations.
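In a procedural language, a block of statements is written once, given a name, and referred to wherever it is needed. A minimal Python illustration:

    # A procedure: a named block of statements, reusable from anywhere.
    def print_heading(title):
        print("=" * len(title))
        print(title)
        print("=" * len(title))

    print_heading("Sales Report")        # used here...
    print_heading("Inventory Report")    # ...and referred to again here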
Functional languages treat procedures like mathematical functions and allow them to be processed like any other data in a program. This allows a much higher and more rigorous level of program construction. Functional languages also allow variables (symbols for data that can be specified and changed by the user as the program is running) to be given values only once. This simplifies programming by reducing the need to be concerned with the exact order of statement execution, since a variable does not have to be redeclared, or restated, each time it is used in a program statement. Many of the ideas from functional languages have become key parts of many modern procedural languages.
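Treating functions as data that can be passed around like any other value might be sketched as follows (Python is used here only as a convenient illustration):

    # A function passed to another function, like any other piece of data.
    def double(x):
        return 2 * x

    def apply_to_all(function, values):
        return [function(value) for value in values]

    print(apply_to_all(double, [1, 2, 3]))    # prints: [2, 4, 6]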
Object-oriented languages are outgrowths of functional languages. In object-oriented languages, the code used to write the program and the data processed by the program are grouped together into units called objects. Objects are further grouped into classes, which define the attributes objects must have. A simple example of a class is the class Book. Objects within this class might be Novel and Short Story. Objects also have certain functions associated with them, called methods. The computer accesses an object through the use of one of the object's methods. The method performs some action on the data in the object and returns this value to the computer. Classes of objects can also be further grouped into hierarchies, in which objects of one class can inherit methods from another class. The structure provided in object-oriented languages makes them very useful for complicated programming tasks.
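The Book example from the paragraph might be sketched as follows; Python is used for illustration, and Novel is shown as a subclass of Book so that inheritance of methods is also visible, a slight variation on the paragraph's wording.

    # A class defines the attributes and methods its objects must have.
    class Book:
        def __init__(self, title):
            self.title = title            # an attribute of every Book object

        def describe(self):               # a method, accessed via the object
            return self.title + " is a book"

    class Novel(Book):                    # Novel inherits from the class Book
        def describe(self):
            return self.title + " is a novel"

    print(Book("A Short History of Computing").describe())
    print(Novel("War and Peace").describe())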
Logic languages use logic as their mathematical base. A logic program consists of sets of facts and if-then rules, which specify how one set of facts may be deduced from others, for example:
If the statement X is true, then the statement Y is false.
In the execution of such a program, an input statement can be logically deduced from other statements in the program. Many artificial intelligence programs are written in such languages.
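A toy rule engine written in an ordinary language can mimic this style of deduction. The Python sketch below repeatedly applies invented if-then rules to a set of facts until nothing new can be deduced; real logic languages such as PROLOG work quite differently internally.

    # Facts plus if-then rules; new facts are deduced until nothing changes.
    facts = {"X is true"}
    rules = [
        ({"X is true"}, "Y is false"),
        ({"Y is false"}, "Z must be checked"),
    ]

    deduced_something = True
    while deduced_something:
        deduced_something = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)         # deduce a new fact
                deduced_something = True

    print(facts)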
IV. LANGUAGE STRUCTURE AND COMPONENTS
Programming languages use specific types of statements, or instructions, to provide functional structure to the program. A statement in a program is a basic sentence that expresses a simple idea; its purpose is to give the computer a basic instruction. Statements define the types of data allowed, how data are to be manipulated, and the ways that procedures and functions work. Programmers use statements to manipulate common components of programming languages, such as variables and macros (mini-programs within a program).
Statements known as data declarations give names and properties to elements of a program called variables. Variables can be assigned different values within the program. The properties variables can have are called types, and they include such things as what possible values might be saved in the variables, how much numerical accuracy is to be used in the values, and how one variable may represent a collection of simpler values in an organized fashion, such as a table or array. In many programming languages, a key data type is a pointer. Variables that are pointers do not themselves have values; instead, they have information that the computer can use to locate some other variable; that is, they point to another variable.
An expression is a piece of a statement that describes a series of computations to be performed on some of the program's variables, such as X + Y/Z, in which the variables are X, Y, and Z and the computations are addition and division. An assignment statement assigns a variable a value derived from some expression, while conditional statements specify expressions to be tested and then used to select which other statements should be executed next.
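In a language such as Python, these kinds of statements look like this (the variable names follow the X, Y, Z example above):

    # An assignment statement: the variable receives a value computed
    # from an expression (division first, then addition).
    X, Y, Z = 4, 10, 2
    result = X + Y / Z            # the expression X + Y/Z from the text

    # A conditional statement: an expression is tested, and the result
    # selects which statements are executed next.
    if result > 5:
        print("result is large:", result)
    else:
        print("result is small:", result)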
Procedure and function statements define certain blocks of code as procedures or functions that can then be returned to later in the program. These statements also define the kinds of variables and parameters the programmer can choose and the type of value that the code will return when an expression accesses the procedure or function. Many programming languages also permit minitranslation programs called macros. Macros translate segments of code that have been written in a language structure defined by the programmer into statements that the programming language understands.
V. HISTORY
Programming languages date back almost to the invention of the digital computer in the 1940s. The first assembly languages emerged in the late 1950s with the introduction of commercial computers. The first procedural languages were developed in the late 1950s to early 1960s: Fortran (FORmula TRANslation), created by John Backus, and then COBOL (COmmon Business Oriented Language), created by Grace Hopper. The first functional language was LISP (LISt Processing), written by John McCarthy in the late 1950s. Although heavily updated, all three languages are still widely used today.
In the late 1960s, the first object-oriented languages, such as SIMULA, emerged. Logic languages became well known in the mid 1970s with the introduction of PROLOG, a language used to program artificial intelligence software. During the 1970s, procedural languages continued to develop with ALGOL, BASIC, PASCAL, C, and Ada. SMALLTALK was a highly influential object-oriented language that led to the merging of object-oriented and procedural languages in C++ and more recently in JAVA. Although pure logic languages have declined in popularity, variations have become vitally important in the form of relational languages for modern databases, such as SQL (Structured Query Language).
9849
Contributed By:
Peter M. Kogge
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
№ 13. Monitor (computer)
Monitor (computer), in computer science, device connected to a computer that displays information on a screen. Modern computer monitors can display a wide variety of information, including text, icons (pictures representing commands), photographs, computer rendered graphics, video, and animation.
Most computer monitors use a cathode-ray tube (CRT) as the display device. A CRT is a glass tube that is narrow at one end and opens to a flat screen at the other end. The CRTs used for monitors have rectangular screens, but other types of CRTs may have circular or square screens. The narrow end of the CRT contains a single electron gun for a monochrome, or single-color, monitor, and three electron guns for a color monitor, one electron gun for each of the three primary colors of light: red, green, and blue. The display screen is covered with tiny phosphor dots that emit light when struck by electrons from an electron gun.
Monochrome monitors have only one type of phosphor dot while color monitors have three types of phosphor dots, each emitting either red, green, or blue light. One red, one green, and one blue phosphor dot are grouped together into a single unit called a picture element, or pixel. A pixel is the smallest unit that can be displayed on the screen. Pixels are arranged together in rows and columns and are small enough that they appear connected and continuous to the eye.
Electronic circuitry within the monitor controls an electromagnet that scans and focuses electron beams onto the display screen, illuminating the pixels. Image intensity is controlled by the number of electrons that hit a particular pixel. The more electrons that hit a pixel, the more light the pixel emits. The pixels, illuminated by each pass of the beams, create images on the screen. Variety of color and shading in an image is produced by carefully controlling the intensity of the electron beams hitting each of the dots that make up the pixels. The speed at which the electron beams repeat a single scan over the pixels is known as the refresh rate. Refresh rates are usually about 60 times a second.
Monochrome monitors display one color for text and pictures, such as white, green, or amber, against a dark color, such as black, for the background. Gray-scale monitors are a type of monochrome monitor that can display between 16 and 256 different shades of gray.
Manufacturers describe the quality of a monitor's display by dot pitch, which is the amount of space between the centers of adjacent pixels. Smaller dot pitches mean the pixels are more closely spaced and the monitor will yield sharper images. Most monitors have dot pitches that range from 0.22 mm (0.008 in) to 0.39 mm (0.015 in).
The screen size of monitors is measured by the distance from one corner of the display to the diagonally opposite corner. A typical size is 38 cm (15 in), with most monitors ranging in size from 22.9 cm (9 in) to 53 cm (21 in). Standard monitors are wider than they are tall and are called landscape monitors. Monitors that have greater height than width are called portrait monitors.
The amount of detail, or resolution, that a monitor can display depends on the size of the screen, the dot pitch, and on the type of display adapter used. The display adapter is a circuit board that receives formatted information from the computer and then draws an image on the monitor, displaying the information to the user. Display adapters follow various standards governing the amount of resolution they can obtain. Most color monitors are compatible with Video Graphics Array (VGA) standards, which are 640 by 480 pixels (640 pixels on each of 480 rows), or about 300,000 pixels. VGA yields 16 colors, but most modern monitors display far more colors and are considered high resolution in comparison. Super VGA (SVGA) monitors have 1024 by 768 pixels (about 800,000) and are capable of displaying more than 60,000 different colors. Some SVGA monitors can display more than 16 million different colors.
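The pixel counts quoted above are simple multiplications of the screen dimensions, as a quick check shows:

    # Pixel counts for the display standards mentioned above.
    vga = 640 * 480           # 307,200 pixels, "about 300,000"
    svga = 1024 * 768         # 786,432 pixels, "about 800,000"
    print(vga, svga)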
A monitor is one type of computer display, defined by its CRT screen. Other types of displays include flat, laptop computer screens that often use liquid-crystal displays (LCDs). Other thin, flat-screen monitors that do not employ CRTs are currently being developed.
3564
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
№ 14. Central Processing Unit
I. INTRODUCTION
Central Processing Unit (CPU), in computer science, microscopic circuitry that serves as the main information processor in a computer. A CPU is generally a single microprocessor made from a wafer of semiconducting material, usually silicon, with millions of electrical components on its surface. On a higher level, the CPU is actually a number of interconnected processing units that are each responsible for one aspect of the CPU's function. Standard CPUs contain processing units that interpret and implement software instructions, perform calculations and comparisons, make logical decisions (determining if a statement is true or false based on the rules of Boolean algebra), temporarily store information for use by another of the CPU's processing units, keep track of the current step in the execution of the program, and allow the CPU to communicate with the rest of the computer.
II. HOW A CPU WORKS
A. CPU Function
A CPU is similar to a calculator, only much more powerful. The main function of the CPU is to perform arithmetic and logical operations on data taken from memory or on information entered through some device, such as a keyboard, scanner, or joystick. The CPU is controlled by a list of software instructions, called a computer program. Software instructions entering the CPU originate in some form of memory storage device such as a hard disk, floppy disk, CD-ROM, or magnetic tape. These instructions then pass into the computer's main random access memory (RAM), where each instruction is given a unique address, or memory location. The CPU can access specific pieces of data in RAM by specifying the address of the data that it wants.
As a program is executed, data flow from RAM through an interface unit of wires called the bus, which connects the CPU to RAM. The data are then decoded by a processing unit called the instruction decoder that interprets and implements software instructions. From the instruction decoder the data pass to the arithmetic/logic unit (ALU), which performs calculations and comparisons. Data may be stored by the ALU in temporary memory locations called registers, where they may be retrieved quickly. The ALU performs specific operations such as addition, multiplication, and conditional tests on the data in its registers, sending the resulting data back to RAM or storing it in another register for further use. During this process, a unit called the program counter keeps track of each successive instruction to make sure that the program instructions are followed by the CPU in the correct order.
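The cycle just described (fetch an instruction, decode it, execute it, and advance the program counter) can be imitated by a toy simulator. The Python sketch below uses an invented three-instruction set and is only a model of the idea, not of any real CPU.

    # Toy CPU: a program in "RAM", two registers, and a program counter.
    ram = [
        ("LOAD", "A", 7),        # put 7 into register A
        ("LOAD", "B", 5),        # put 5 into register B
        ("ADD",  "A", "B"),      # A = A + B
        ("HALT",),
    ]
    registers = {"A": 0, "B": 0}
    program_counter = 0

    while True:
        instruction = ram[program_counter]     # fetch
        program_counter += 1
        opcode = instruction[0]                # decode
        if opcode == "LOAD":                   # execute
            registers[instruction[1]] = instruction[2]
        elif opcode == "ADD":                  # the "ALU" at work
            registers[instruction[1]] += registers[instruction[2]]
        elif opcode == "HALT":
            break

    print(registers)    # {'A': 12, 'B': 5}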
B. Branching Instructions
The program counter in the CPU usually advances sequentially through the instructions. However, special instructions called branch or jump instructions allow the CPU to abruptly shift to an instruction location out of sequence. These branches are either unconditional or conditional. An unconditional branch always jumps to a new, out of order instruction stream. A conditional branch tests the result of a previous operation to see if the branch should be taken. For example, a branch might be taken only if the result of a previous subtraction produced a negative result. Data that are tested for conditional branching are stored in special locations in the CPU called flags.
C. Clock Pulses
The CPU is driven by one or more repetitive clock circuits that send a constant stream of pulses throughout the CPU's circuitry. The CPU uses these clock pulses to synchronize its operations. The smallest increments of CPU work are completed between sequential clock pulses. More complex tasks take several clock periods to complete. Clock pulses are measured in hertz, or number of pulses per second. For instance, a 2-gigahertz (2-GHz) processor has 2 billion clock pulses passing through it per second. Clock pulses are a measure of the speed of a processor.
D. Fixed-Point and Floating-Point Numbers
Most CPUs handle two different kinds of numbers: fixed-point and floating-point numbers. Fixed-point numbers have a specific number of digits on either side of the decimal point. This restriction limits the range of values that are possible for these numbers, but it also allows for the fastest arithmetic. Floating-point numbers are numbers that are expressed in scientific notation, in which a number is represented as a decimal number multiplied by a power of ten. Scientific notation is a compact way of expressing very large or very small numbers and allows a wide range of digits before and after the decimal point. This is important for representing graphics and for scientific work, but floating-point arithmetic is more complex and can take longer to complete. Performing an operation on a floating-point number may require many CPU clock periods. A CPU's floating-point computation rate is therefore less than its clock rate. Some computers use a special floating-point processor, called a coprocessor, that works in parallel to the CPU to speed up calculations using floating-point numbers. This coprocessor has become standard on many personal computer CPUs, such as Intel's Pentium chip.
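The trade-off between the two formats is easy to see in practice. In the Python sketch below, whole-number arithmetic stands in for fixed-point arithmetic (an approximation, since Python integers are not a true fixed-point type), while float values show both the range and the limited precision of floating-point numbers.

    # Floating point: scientific notation gives an enormous range of values.
    avogadro = 6.02e23           # 6.02 multiplied by 10 to the power 23
    print(avogadro)

    # ...but limited precision: this sum is not stored exactly.
    print(0.1 + 0.2)             # prints 0.30000000000000004

    # Whole-number (fixed-point style) arithmetic is exact and fast.
    print(3 * 7)                 # prints 21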
III. HISTORY
A. Early Computers
In the first computers, CPUs were made of vacuum tubes and electric relays rather than microscopic transistors on computer chips. These early computers were immense and needed a great deal of power compared to today's microprocessor-driven computers. The first general-purpose electronic computer, the ENIAC (Electronic Numerical Integrator And Computer), was introduced in 1946 and filled a large room. About 18,000 vacuum tubes were used to build ENIAC's CPU and input/output circuits. Between 1946 and 1956 all computers had bulky CPUs that consumed massive amounts of energy and needed continual maintenance, because the vacuum tubes burned out frequently and had to be replaced.
B. The Transistor
A solution to the problems posed by vacuum tubes came in 1947, when American physicists John Bardeen, Walter Brattain, and William Shockley first demonstrated a revolutionary new electronic switching and amplifying device called the transistor. The transistor had the potential to work faster and more reliably and to consume much less power than a vacuum tube. Despite the overwhelming advantages transistors offered over vacuum tubes, it took nine years before they were used in a commercial computer. The first commercially available computer to use transistors in its circuitry was the UNIVAC (UNIVersal Automatic Computer), delivered to the United States Air Force in 1956.
C. The Integrated Circuit
Development of the computer chip started in 1958 when Jack Kilby of Texas Instruments demonstrated that it was possible to integrate the various components of a CPU onto a single piece of silicon. These computer chips were called integrated circuits (ICs) because they combined multiple electronic circuits on the same chip. Subsequent design and manufacturing advances allowed transistor densities on integrated circuits to increase tremendously. The first ICs had only tens of transistors per chip compared to the tens of millions of transistors per chip common on today's CPUs.
In 1967 Fairchild Semiconductor introduced a single integrated circuit that contained all the arithmetic logic functions for an eight-bit processor. (A bit is the smallest unit of information used in computers. Multiples of a bit are used to describe the largest-size piece of data that a CPU can manipulate at one time.) However, a fully working integrated circuit computer required additional circuits to provide register storage, data flow control, and memory and input/output paths. Intel Corporation accomplished this in 1971 when it introduced the Intel 4004 microprocessor. Although the 4004 could only manage four-bit arithmetic, it was powerful enough to become the core of many useful hand calculators at the time. In 1975 Micro Instrumentation Telemetry Systems introduced the Altair 8800, the first personal computer kit to feature an eight-bit microprocessor. Because microprocessors were so inexpensive and reliable, computing technology rapidly advanced to the point where individuals could afford to buy a small computer. The concept of the personal computer was made possible by the advent of the microprocessor CPU. In 1978 Intel introduced the first of its x86 CPUs, the 8086 16-bit microprocessor. Although 32-bit microprocessors are most common today, microprocessors are becoming increasingly sophisticated, with many 64-bit CPUs available. High-performance processors can run with internal clock rates that exceed 3 GHz, or 3 billion clock pulses per second.
IV. CURRENT DEVELOPMENTS
The competitive nature of the computer industry and the demand for faster, more cost-effective computing continue to drive the development of faster CPUs. The minimum transistor size that can be manufactured using current technology is fast approaching the theoretical limit. In the standard technique for microprocessor design, ultraviolet (short-wavelength) light is used to expose a light-sensitive covering on the silicon chip. Various methods are then used to etch the base material along the pattern created by the light. These etchings form the paths that electricity follows in the chip. The theoretical limit for transistor size using this type of manufacturing process is approximately equal to the wavelength of the light used to expose the light-sensitive covering. By using light of shorter wavelength, greater detail can be achieved and smaller transistors can be manufactured, resulting in faster, more powerful CPUs. Printing integrated circuits with X-rays, which have a much shorter wavelength than ultraviolet light, may provide further reductions in transistor size that will translate to improvements in CPU speed.
Many other avenues of research are being pursued in an attempt to make faster CPUs. New base materials for integrated circuits, such as composite layers of gallium arsenide and gallium aluminum arsenide, may contribute to faster chips. Alternatives to the standard transistor-based model of the CPU are also being considered. Experimental ideas in computing may radically change the design of computers and the concept of the CPU in the future. These ideas include quantum computing, in which single atoms hold bits of information; molecular computing, where certain types of problems may be solved using recombinant DNA techniques; and neural networks, which are computer systems with the ability to learn.
8846
Contributed By:
Peter M. Kogge
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
№ 15. Integrated Circuit
Integrated Circuit, tiny electronic circuit used to perform a specific electronic function, such as amplification; it is usually combined with other components to form a more complex system. It is formed as a single unit by diffusing impurities into single-crystal silicon, which then serves as a semiconductor material, or by etching the silicon by means of electron beams. Several hundred identical integrated circuits (ICs) are made at a time on a thin wafer several centimeters wide, and the wafer is subsequently sliced into individual ICs called chips. In large-scale integration (LSI), as many as 5000 circuit elements, such as resistors and transistors, are combined in a square of silicon measuring about 1.3 cm (.5 in) on a side. Hundreds of these integrated circuits can be arrayed on a silicon wafer 8 to 15 cm (3 to 6 in) in diameter. Larger-scale integration can produce a silicon chip with millions of circuit elements. Individual circuit elements on a chip are interconnected by thin metal or semiconductor films, which are insulated from the rest of the circuit by thin dielectric layers. Chips are assembled into packages containing external electrical leads to facilitate insertion into printed circuit boards for interconnection with other circuits or components.
During recent years, the functional capability of ICs has steadily increased, and the cost of the functions they perform has steadily decreased. This has produced revolutionary changes in electronic equipment: vastly increased functional capability and reliability combined with great reductions in size, physical complexity, and power consumption. Computer technology, in particular, has benefited greatly. The logic and arithmetic functions of a small computer can now be performed on a single VLSI (very large-scale integration) chip called a microprocessor, and the complete logic, arithmetic, and memory functions of a small computer can be packaged on a single printed circuit board, or even on a single chip. Such a device is called a microcomputer.
In consumer electronics, ICs have made possible the development of many new products, including personal calculators and computers, digital watches, and video games. They have also been used to improve or lower the cost of many existing products, such as appliances, televisions, radios, and high-fidelity equipment. They have been applied in the automotive field for diagnostics and pollution control, and they are used extensively in industry, medicine, traffic control (both air and ground), environmental monitoring, and communications.
3900
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.
Command
Command, in computer science, an instruction that initiates a computer function, operation, or program. All computer programs respond to some sort of specific commands, with more complicated programs generally having a larger quantity and variety of commands. Commands for similar functions differ depending on the computer system and operating system being used.
Commands are an integral part of the interface between the user and computer. Computers usually employ one of two types of user interfaces: command-line interfaces or graphical user interfaces (GUIs). Command-line interfaces are based on text. An example of a command-line interface is Microsoft Corporation's Disk Operating System (MS-DOS). In programs that respond to commands through a command-line interface, the user must enter an exact command, usually in the form of a keyword, into the computer. An example is the command MEM in MS-DOS, which displays information about the amount and the various types of memory in the main computer memory.
Graphical user interfaces, such as the Apple Computer Inc. operating system and Microsoft's Windows 95 operating system, enable the user to enter commands into the computer by single- or double-clicking a mouse button once an appropriate icon or keyword has been selected. The icon or keyword is selected from a menu, from the desktop, or from a window.
Once a user enters a command into the computer, the command is read by the computer's operating system. The operating system is the most important program running on a computer because it performs basic functions, such as memory allocation, and allows other computer programs to run. One of the functions of the operating system is to interpret commands. This function is performed by a program running within each computer's operating system called the command interpreter. The command interpreter reads the commands from the user or from a file and executes them.
Microsoft ® Encarta ® 2006. © 1993-2005 Microsoft Corporation. All rights reserved.