
[Figure: reflection of light at a surface, showing the incident ray, the reflected ray, the normal at point O, the angle of incidence and the angle of reflection]

Additional Reading

TEXT 1

Age of Thinking Machines

Which fields of modern science and technology might prove decisive for the future development of new forms of movement of matter? One of them is the electronic computer — the most remarkable achievement of the twentieth century, which marked the emergence of man into the era of the automation of mental work. Several lines of its further development can already be distinguished.

Firstly, an increase in the volume of the machine memory. The memory system of the future machines will store the entire wealth of knowledge accumulated by man in all the sciences, in culture and in every aspect of human life. The potential in this field is endless.

Secondly, micro-miniaturization. This is associated with the progress of radioelectronics. In the designing of electronic computers we have passed from radio-valves to transistors, which are smaller in size, and to solid circuits, in which appropriate treatment imparts the necessary electrophysical properties to various portions of a tiny piece of synthetic crystal.

Micro-miniaturization makes it possible to construct extremely complex computers of small size and weight, but involving a tremendous number of elements.

The man-machine relationship is most important, too. In the comparatively near future, a global system of machines, a single automated system of communication, and a single system of scientific, technical and other data supply will be developed. All these systems will be correlated, will have intricate structures and specific properties. The development of such systems and of their theory is an essential trend of modern thought.

Finally, the electronic machines will affect radically the work of scientists. The machine will by itself accumulate, process and supply new data. By this method some new types of codes have been found. Their discovery necessitated the testing and comparison of approximately one thousand million operations. Thanks to the high rate of processing, the computers found the new codes independently, while man only fed in the requirements and the algorithms, i.e. set the search pattern.

The machines which are already available possess prerequisites of new properties, and serious thought should be focussed on the development of these emerging properties along the channels most beneficial for man.

Computing machines will radically change the work of man. They will independently accumulate information, process it and draw conclusions. Also important will be such trends as making machines self-improving, self-programming, and self-managing. Thanks to these properties, the machine will possess the ability to modify its structure, its organization and the interaction of its elements to a certain extent.

A global system with global tasks may be developed in the comparatively near future. This will call for combining within a single system electronic computers, information and other machines, a system of communications, etc.

TEXT 2

ATOMS AND NUCLEAR FUELS

An atom of matter is like an incredibly tiny solar system. It has a central sun, called the nucleus, round which move electrons, the planets of the system. The diameter of the nucleus is only about a ten-thousandth of that of the atom, yet in spite of its infinitesimally small size the nucleus is itself a complex body built up from particles called protons and neutrons. Practically the whole mass of the atom is concentrated in the nucleus, for the mass of a planetary electron is less than a thousandth part of the mass of a proton or a neutron.

When matter undergoes a chemical reaction, such as burning, for example, the planetary electrons of its atoms are rearranged and as a consequence energy is released. This energy appears generally as heat, as in a coal fire, or as light, as in a gas flame. In these reactions the nuclei of the atoms taking part are undisturbed. In some circumstances, however, it is possible to produce a reaction in which a nucleus is disturbed or even broken up, and when this is done much more energy may be released than is possible when only the planetary electrons are involved. Unlike chemical reactions, nuclear reactions cannot generally be made to spread from one atom to the next; each nucleus has to be treated individually. But there is one exception to this rule, the reaction called nuclear fission, which is the cornerstone of nuclear power. Fission is produced when a nucleus of certain elements is struck by a neutron; the nucleus absorbs the neutron, its equilibrium is disturbed, and it is split into two more or less equal parts. In this process of splitting, energy is released and also more than one fresh neutron (actually about 2.5 neutrons per fission on the average); the latter are most important because they cause further fissions in neighbouring atoms, and these in their turn release more neutrons to cause yet another generation of fissions, and so on. In this way there is produced a self-sustaining reaction, a nuclear fire.

There are many fuels in which an ordinary fire can burn - coal, oil, gas, wood, even metals - but only one naturally occurring material will sustain a nuclear fire. That is the element uranium, a metal heavier than lead. Questions often asked at this point are: "How do you light one of these nuclear fires? Do you have to switch on your reactor?" The answer is that uranium is throwing off neutrons continuously, even before it is put into the reactor, but the nuclear fire begins only when there is a sufficient quantity of uranium assembled in the favourable conditions of the reactor to sustain a chain reaction. This quantity is called the critical size. A point to be remembered is that only a small fraction of the atoms in uranium will "burn," in the nuclear sense. Uranium consists of a mixture of two kinds of atoms, one of which is a little lighter than the other. These different atoms are called isotopes of uranium; they behave similarly in ordinary chemical reactions, but their nuclei differ in a way that leads to different behaviour when they are hit by neutrons. The lighter isotope, uranium-235, is easily split by neutrons of low energy but the heavier isotope, uranium-238, is not. Uranium-235 is in fact the part that burns. Now uranium as it occurs in nature contains only 0.7 per cent of uranium-235; all the rest is uranium-238. So only 0.7 per cent of natural uranium is nuclear fuel.

Uranium-238, however, can be changed into an isotope of another element which will undergo fission and therefore "burn". Once again the neutron plays an essential part. When a neutron hits a uranium-238 nucleus it is absorbed; as a result, and after some internal rearrangement accompanied by the emission of particles, the nucleus is transmuted into a nucleus of an isotope of the element plutonium, plutonium-239. This new material is an even better nuclear fuel than uranium-235. In a similar manner a third nuclear fuel can be made by exposing the element thorium to neutrons; the thorium isotope of mass 232 absorbs neutrons and is transmuted into uranium-233, a fissile isotope of uranium which does not exist in nature.

These three then, uranium-235, plutonium-239 and uranium-233, are the fuels of the atomic age; the essential raw materials from which they are extracted or made by nuclear transmutation are natural uranium and thorium. The striking thing about these nuclear fuels, compared with ordinary chemical fuels, is the enormous amount of energy that is released for each pound of fuel burnt. Thus a pound of uranium, if all the atoms in it were made to undergo fission, would release as much energy as 3,000,000 pounds (or 1,300 tons) of coal. Such complete utilization of uranium has yet to be realized in practice, though the fact that non-fissile uranium-238 can be transmuted into fissile plutonium implies that it is theoretically possible.
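
The coal-equivalence figure can be checked roughly from first principles. A minimal sketch, assuming about 200 MeV released per fission and a heating value for coal of roughly 29 MJ/kg (both assumed values, not taken from the text):

```python
# Rough check of the "1 lb of uranium ~ 3,000,000 lb of coal" comparison.
# Assumed figures (not from the text): ~200 MeV per fission, ~29 MJ/kg coal.

AVOGADRO = 6.022e23          # atoms per mole
MEV_TO_J = 1.602e-13         # joules per MeV
LB_TO_KG = 0.4536            # kilograms per pound

energy_per_fission_J = 200 * MEV_TO_J          # ~3.2e-11 J
atoms_per_lb_U = LB_TO_KG / 0.235 * AVOGADRO   # U-235 molar mass ~235 g/mol

energy_per_lb_U = atoms_per_lb_U * energy_per_fission_J   # ~3.7e13 J
coal_energy_per_lb = 29e6 * LB_TO_KG                      # ~1.3e7 J per lb of coal

print(f"Energy per lb of uranium fully fissioned: {energy_per_lb_U:.2e} J")
print(f"Equivalent pounds of coal: {energy_per_lb_U / coal_energy_per_lb:.2e}")
```

The result comes out near three million pounds of coal, in line with the figure quoted above.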

The energy released in fission is imparted in the first instance to the two fragments into which the nucleus is split, causing them to move apart with great speed. No way of using the energy of these fragments directly has yet been devised, but their motion through the uranium heats the metal, and this heat can be removed and converted to mechanical energy by a steam engine or gas turbine. In short, atomic energy is obtained by burning an uncommon fuel in an uncommon way and then using the heat obtained in a quite ordinary manner.


TEXT 3

The neutron

One characteristic of the atomic nuclei is their relatively great mass. There is only one known particle of comparable mass which is not an atomic nucleus, namely, the neutron. Its mass is closely equal to the mass of the proton. (Actually the neutron is about 0.1 per cent heavier than the proton.) The property that sets the neutron apart from the atomic nuclei is that it does not carry any charge and therefore does not attract electrons and does not surround itself with an electronic shell. Neutrons are produced in some close nuclear collisions, that is, in collisions in which nuclei get into contact with each other.

The only interaction of neutrons with atomic nuclei is one of short range which is of the same type as the forces giving rise to anomalous alpha-particle scattering. Thus a neutron must as a general rule get to the surface of a nucleus in order that it should be deflected. The only established interaction of neutrons with electrons is a weak force of the magnetic type. Further short-range interactions do not exist or are extremely small. It can be shown that owing to the small mass of the electrons these weak forces are particularly ineffective in the interaction of free electrons with neutrons. The probability of electronic excitation by neutron impact is very small. But electrons in atomic orbits can influence the path of a neutron with higher probability if during a collision the electrons do not become excited. In this case the electrons act as parts of the atom and can be said to possess effectively the mass of the whole atom. In such collisions the weak magnetic interaction was detected. All interactions of neutrons with electrons seem, however, to be of small importance in our discussion.

The most important interaction of neutrons with matter remains the collisions with atomic nuclei. According to a geometrical picture these collisions ought to have a cross section of the order of 10⁻²⁴ cm². This could lead to a mean free path of a neutron in a solid which may be longer than a centimeter. As a general rule neutrons do penetrate solids, as is suggested by this long free path. This fact is a very direct illustration of the small extension of nuclear particles.
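
The mean free path follows from λ = 1/(nσ). A short sketch, assuming a nuclear number density of about 8 × 10²² per cm³, typical of a dense solid (an assumed value):

```python
# Mean free path of a neutron in a solid, lambda = 1 / (n * sigma).
# Assumed number density ~8e22 nuclei/cm^3; sigma ~1e-24 cm^2 as in the text.

n_nuclei = 8e22        # nuclei per cm^3 (assumption)
sigma = 1e-24          # cm^2, geometric cross section from the text

mean_free_path_cm = 1.0 / (n_nuclei * sigma)
print(f"Mean free path: {mean_free_path_cm:.1f} cm")   # ~12 cm, i.e. longer than a centimetre
```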

TEXT 4

ALGEBRAIC LANGUAGE

Use of letters. - Letters are used to express the general properties of numbers. Suppose that we want to express briefly in a written form that the product of two numbers remains unaltered when we interchange the positions of the multiplicand and the multiplier. Then, representing one of the numbers by the letter a and the other by the letter b, we shall be able to write the equality: a×b=b×a, or, shortly: ab=ba, having agreed once and for all that the multiplication sign is understood between any two letters written side by side, if no other sign is indicated. Consequently, letters are always used to express that a certain property is peculiar to numbers in general and not to any particular numbers.

Letters of the Latin alphabet are generally used to represent numbers.

Algebraic expression. - An algebraic expression is an expression in which several numbers represented by letters (or by letters and figures) are connected by means of signs indicating the operations to which the numbers must be subjected and the order of these operations.

Such are, for instance, the expressions:

For the sake of brevity we shall often simply say “expression” instead of “algebraic expression”.

To evaluate an expression when the numerical values of the letters are given is to substitute the numerical equivalents for the letters and perform the operations indicated in the expression. The number obtained is known as the numerical value of the algebraic expression for the given numerical equivalents of the letters. Hence, the numerical value of the expression ap/100 when p=3 and a=520 is

520×3/100 = 5.2×3 = 15.6

Order of operations. - With regard to the order in which the operations indicated in an algebraic expression should be performed, it was agreed upon to perform the operations of the higher order first, i.e., involution and evolution, then multiplication and division, and, finally, addition and subtraction.

Thus, if we have the expression 3a²b - b³/c + d, we must, when evaluating it, first perform the involution (square the number a and cube the number b), then the multiplication and division (multiply 3 by a² and the result obtained by b; divide b³ by c) and, finally, the subtraction and addition (subtract b³/c from 3a²b and add d to the result).
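
The same order of operations can be followed step by step in code; a minimal sketch, using illustrative values for a, b, c and d (the expression is the one reconstructed above):

```python
# Evaluating 3*a**2*b - b**3/c + d in the stated order:
# involution first, then multiplication/division, then addition/subtraction.
# Sample values are illustrative only.

a, b, c, d = 2, 3, 9, 1

a_squared = a ** 2               # involution: square a        -> 4
b_cubed = b ** 3                 # involution: cube b          -> 27
product = 3 * a_squared * b      # multiplication: 3 * a^2 * b -> 36
quotient = b_cubed / c           # division: b^3 / c           -> 3.0
result = product - quotient + d  # subtraction, then addition  -> 34.0

print(result)                        # 34.0
print(3 * a**2 * b - b**3 / c + d)   # Python applies the same precedence rules
```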

Notion of Values which may be taken in two opposite senses. - Problem.- At midnight a thermometer read 2° and at noon 5°. How many degrees did the temperature change between midnight and noon?

The conditions of this problem are not sufficiently clear; we must know whether the reading at midnight was 2° below or above 0°, and the same must be mentioned for the noon reading. If, e.g., both at midnight and at noon the temperature was above 0°, then during the given period of time the temperature rose from 2° to 5°, i.e. by 3°; while if the temperature at midnight was 2° below zero and at noon it was 5° above zero, the temperature rose 2 + 5, i.e. 7°, and so on.

In this problem we deal with a quantity having a direction: the number of degrees may be read either upwards or downwards from zero. The temperature above 0° (heat) is known as positive and is recorded as the number of degrees preceded by the + sign, and the temperature below 0° (cold) is known as negative and is recorded as  the number of degrees preceded by the - sign (there will be no misunderstanding if the first reading is taken without any sign at all).

Now let us formulate our problem, for instance, as follows: At midnight a thermometer read -2° and at noon it read +5°. What is the change in temperature between midnight and noon? As it is, the problem has a definite answer: the temperature rose 2 + 5, i.e. 7°.
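
With signed numbers the answer drops out in one line; a minimal sketch of this second formulation:

```python
# Change in temperature between the two readings, treating readings
# below zero as negative numbers.

midnight = -2   # degrees, 2 below zero
noon = +5       # degrees, 5 above zero

change = noon - midnight
print(change)   # 7 -> the temperature rose 7 degrees
```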

TEXT 5

Radioactivity

Radioactivity is a manifestation of nuclear instability. Unstable nuclei boil, erupt, emit particles, blow up, or transform themselves by some means, usually into another nuclear species. This process is called radioactive decay. We have already mentioned various aspects of the phenomena encountered.

The most common forms of radioactive decay involve the emission of α-, β-, or γ-rays. The energies of emitted particles are often of the order of MeV, but higher and lower energies are also found. In the range of energies around 1 MeV, γ-rays are more penetrating than β-rays, and β-rays are more penetrating than α-rays.

The decay process is describable in terms of a single constant, called the decay constant. With one exception, this decay constant is unaffected by pressure, temperature, chemical composition, or other such factors that can be varied in the surroundings of radioactive nuclei.
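
The decay constant λ fixes the whole history of a sample through N(t) = N₀e^(−λt), with half-life T½ = ln 2/λ. A short sketch with illustrative numbers (none of them taken from the text):

```python
# Exponential decay governed by a single decay constant lambda:
# N(t) = N0 * exp(-lambda * t), half-life T_half = ln(2) / lambda.

import math

half_life = 5.0                      # arbitrary time units (assumption)
decay_constant = math.log(2) / half_life

N0 = 1.0e6                           # initial number of nuclei (assumption)
for t in (0, 5, 10, 15):
    N = N0 * math.exp(-decay_constant * t)
    print(f"t = {t:>2}: N = {N:,.0f}")   # the count halves every 5 units of time
```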

Radioactive substances are often classified according to whether they occur on the earth in a radioactive state without further treatment, or whether they are produced by bombardment in man-made apparatus. The former are called "naturally" radioactive, the latter "artificially" radioactive.

As one might expect, naturally occurring radioactive substances are either very long-lived, or are the products of long-lived substances. Short-lived substances, unless replenished, have decayed, and so have disappeared from the earth. This raises interesting questions regarding the origin of matter. For example, to what extent are stable nuclei the products of radioactive decay? Or, what limits can the study of natural radioactivity set on the age of the earth?

Artificially prepared radioactive substances are also of great interest. Their general availability has opened up whole new chapters in the exploration of matter.

Radioactivity is, moreover, an aspect of one of the most fascinating manifestations of nature — the metamorphosis of one thing into something else, the creation of something new. We might be able to say a good deal concerning the transformation of the lump of sugar in the cup of tea; the emission of a quantum of light is perhaps less familiar, and the transformation of a nucleus is a process that we are just inventing the words to describe.

TEXT 6

HOW RADAR WORKS

The design of a radar begins with consideration of its intended use, that is, the function to be performed by the radar as a whole. The uses generally divide into three categories:

1. Warning and surveillance of activity, including identification.

2. Aids to the direction of weapons, that is, gunfire control and searchlight control.

3. Observation of terrain echoes or beacons for navigation and control of bombing.

There is nothing mysterious or complex about radiolocation. It rests on the foundations of ordinary radio theory, and is a technique based on the transmission, reception, and interpretation of radio-frequency pulses. Considered as a whole, it must be admitted that even the most elementary of radar equipment is difficult to visualize, but this is simply due to the fact that so many (normally) curious circuits and pieces of apparatus are gathered together under one roof. No particular circuit or detail of the equipment is in itself especially difficult to understand, and once the elements are known the complete assembly is no longer mentally unmanageable.

The word "radar" is derived from the phrase "radio direction-finding and range", and it may be more expressive than the older "radiolocation", or it may not. Finding the position of an aircraft or a ship by means of radio covers a very wide field of electronic application, covers, in fact, the whole area of radio direction-finding (R. D. F.) from the elementary bearing-loop to the principle of the reflected pulse which represents the latest principle of the technique. In this article the term will be used to cover only those methods of detection which depend upon the reflected pulse, the characteristic (by popular opinion) which distinguishes radar from all other methods oi position-finding in that no co-operation is required on the part of the target. We shall  not dwell, therefore, upon the older and more  familiar methods which depend upon the reception at two or more points of a signal transmitted by the body under location itself.

The actual equipments in use which employ the reflected-pulse principle vary greatly in physical appearance, but their basic principles are the same.

First, let us tabulate and briefly analyse the problem to be met. The aim of radar is to find the position of a target with respect to a fixed point on the ground - say the position of an aeroplane or a barrage balloon with respect to the radar equipment situated in a field a mile or so away. Three quantities must be measured in order to define the position of the aeroplane or the barrage balloon: first, the slant range, the length of the most direct line drawn from the radar site to the target; second, the angle of bearing, i.e. which point of the compass the target occupies; third, the angle of elevation. Fig. 6 should make these points clear for you. When the target is an aeroplane, these three quantities are continuously varying, so that the problem of position-finding is somewhat complicated by the fact that the radar equipment has to "follow" as well as find. In the case of a barrage balloon, things are not quite so difficult, and the three important factors may be found at leisure.

Fig. 6
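
The slant range itself comes directly from the reflected-pulse principle: the pulse travels out and back, so R = ct/2. A minimal sketch with an assumed echo delay:

```python
# Slant range from the round-trip delay of a reflected pulse.
# The 100-microsecond delay below is an illustrative value, not from the text.

C = 3.0e8                      # speed of light, m/s
echo_delay_s = 100e-6          # time between transmission and echo (assumption)

slant_range_m = C * echo_delay_s / 2   # divide by 2: the pulse goes out and back
print(f"Slant range: {slant_range_m / 1000:.1f} km")   # 15.0 km
```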

TEXT 7

Quantum electronics

Quantum electronics was born when a new method was proposed for generating and amplifying radio waves by the use of quantum micro-systems, molecules, atoms and so on. This method has proved very fruitful and has given good results.

On the basis of radio-frequency quantum generators, clocks have been made that measure time with an accuracy of one second per 300 years. Modern scientific achievements make possible the manufacture of clocks which measure time with an even higher accuracy, namely one second per tens of thousands of years. Such superprecise generators can be applied in aerial and sea navigation.

Of no less importance are the quantum amplifiers; they considerably increase the sensitivity of radio receivers. Quantum amplifiers operate at temperatures close to absolute zero, and radio receivers developed on the basis of quantum amplifiers are tens and even hundreds of times more sensitive than conventional receivers. This considerable increase in sensitivity opens up a great future for radar, radio navigation, space radio communication, radio astronomy and other fields of science and technology.

Perhaps the most interesting thing about semiconductor lasers is that they can transform electrical energy directly into light wave energy. They do this with an efficiency approaching one hundred per cent. The development of powerful highly-efficient semiconductor lasers will considerably raise the power efficiency of a number of technological processes. Calculations and experiments show that even superhard substances such as diamonds, rubies, hard alloys and so on can be worked profitably by means of ruby lasers.

Semiconductor quantum generators occupy a special place among the optical quantum generators. The size of a ruby crystal laser comes to tens of centimetres. Ruby crystals about ten centimetres long can intensify light ten times. The same results can be obtained from semiconductor crystals only a few microns long.
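
The comparison of lengths can be restated as gain per unit length, using the usual exponential small-signal model G = e^(gL). A short sketch, taking "a few microns" as 3 µm (an assumed value):

```python
# Gain per unit length implied by the text: a 10 cm ruby rod intensifies
# light ten times; a semiconductor crystal a few microns long does the same.

import math

gain = 10.0                       # tenfold intensification (from the text)

ruby_length_cm = 10.0             # from the text
semi_length_cm = 3e-4             # "a few microns", assumed 3 um

g_ruby = math.log(gain) / ruby_length_cm     # ~0.23 per cm
g_semi = math.log(gain) / semi_length_cm     # ~7700 per cm

print(f"Ruby gain coefficient:          {g_ruby:.2f} per cm")
print(f"Semiconductor gain coefficient: {g_semi:.0f} per cm")
```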

A semiconductor laser is a few tenths of a millimetre long, whereas the density of its radiation is hundreds of thousands of times as great as that of the best ruby lasers. Semiconductor lasers operate in both pulsed and continuous regimes. It is very easy to control the generator oscillations and to modulate its radiation by simply changing its feed current. The high-frequency radiation of optical generators makes it possible to transmit an enormous flow of information. This is of great significance for the advancement of communication techniques. The small dimensions of the semiconductor laser make it especially suitable for use in superspeed computers.

Theoretical calculations have shown that devices similar to semiconductor lasers can also transform the energy of light radio waves into electrical energy with an efficiency of close to 100 per cent. This means that electric power may be transmitted over considerable distances with negligible losses without the use of transmission lines. The high efficiency of semiconductor lasers opens up possibilities of developing extremely economical sources of light.

TEXT 8

SONIC TECHNIQUES FOR INDUSTRY

It is apparent that a new area of technology, based on the use of sound waves, is taking shape. The term "sonics" was given to this new technology, which includes the analysis, testing, and processing of materials and products by the use of mechanical vibrating energy. All applications of sonics are based on the same physical principles, the particular frequency that is best suited being determined by the special requirements and limitations of the task.

We shall see that the phenomenon of acoustic vibration can be utilized in many ways. With sound waves we can "sonograph" (as with light waves we photograph) the inner structure of bodies that are opaque to light. Sound waves can penetrate many solids and liquids more readily than X-rays or other forms of electromagnetic energy. Thus sound can expose a tiny crack embedded many feet deep in metal, where detection by any other means might be impossible. Similarly, ultrasonic pulse techniques are now being used in medicine for the early diagnosis of different diseases.

By acoustic techniques we can measure the elastic constants of solid materials, as well as analyse the residual stresses or structural changes. The molecular arrangements within many organic liquids can be found from measurements of sound velocity or absorption. The rates of energy transfer among gas molecules and the chemical affinity of gaseous mixtures can be determined by using sound waves.

As soon as we can measure a process, we have within reach a means of controlling it. Indeed acoustic instrumentation offers extensive but practically unexplored opportunities in the automatic control of industrial processes. The geometry of metal parts, the quality of cast metals and laminated plastics, the temperature in the combustion chamber of gasoline engines, the composition of compounds in liquid or gas, the flow velocity of liquids and gases - these and many other processes may, in time, come under the watchful ear of acoustics.

In the above-mentioned applications, the sound is used as a measuring stick or flashlight - the amounts of power are small and incidental. In another class of applications, large amounts of acoustic power are employed to do useful work. Vibrational energy for example is already used to drill rock and to machine complicated profiles in one single operation. Sound has become a powerful method for the cleaning of precision parts and may find important applications in electrochemistry. Acting on fumes, dusts and smokes, sound can speed up the collection of particles.

Here are some of the technical fields in which the sonic and ultrasonic engineering may find wide application: oil-well drilling, liquid processing, machining, engraving and welding, underwater signalling, cleaning of metal parts, information storage, molecular analysis and some others. The frequency range covered by these applications is extremely wide and their realization therefore involves widely different acoustic engineering practices.

Most of the applications listed have today reached the stage of successful operations, that is, the usefulness to industry of these techniques and instruments has been widely recognized, the development of reliable equipment is more or less completed, and the manufacture and maintenance of the equipment have proved to be economical.

We now come to the other application of sonics - namely, the processing of materials. It has been found that intense vibrations affect colloidal distribution, equalize electrolytic concentrations, and speed up aging processes. By absorption in a certain medium, intense vibrations may produce local heating effects, as, for example, in the use of ultrasonics in medical therapy.

A particularly powerful phenomenon is cavitation. This is the breakdown of cohesion of a liquid that is exposed to high tensile forces as the sound wave passes through it.

Under the influence of cavitation, steel surfaces may be pitted, oxide layers removed, bacteria disintegrated, or high polymers depolymerized. One particularly successful application of surface cavitation is in ultrasonic drilling; another is in the soldering of aluminium.

Progress during recent years has been encouraging and still more valuable contributions of sonics to industry may well be expected.

TEXT 9

Semiconductors

Different semiconductors develop greater or lesser voltages. In all semiconductors the voltage increases with the difference in temperature between the hot and cold ends. The voltage across a given semiconductor when one of its ends is warmer than the other is the measure of its characteristic thermoelectric power, which is expressed in volts per degree centigrade. Semiconductors display thermoelectric power some hundreds of times greater than that of metals, but since metals develop only a few millionths of a volt, the thermoelectric power of semiconductors is still very small. Even when the difference in temperature of the two ends of a semiconductor is several hundred degrees, the semiconductor develops only tenths of a volt. This, however, is enough to make thermoelectricity useful.

Semiconductors possess still another thermoelectric advantage not found in metals at all. In some types of semiconductor material the voltage differential between the hot and the cold end is set up not by the flow of negatively charged electrons but by the flows of positively charged "holes" vacated by electrons. As a result, the cold end in such a semiconductor becomes positively charged. The two types of semiconductors are designated as "n-type" (hot end positive) and "p-type" (cold end positive). In both types, of course, the direction of the current (electron flow) is from the positive to the negative end, as inside a battery.

Let us now construct a thermoelectric circuit to generate an electric current. We take two semiconductors of opposite types, an n-type and a p-type, and join them at their hot ends. Between their cold ends we place a conductor through which we wish to pass a current. This conductor may be the armature of an electric motor, a lamp, an electrolytic bath to reduce aluminium, or any other device using an electric current. Let us assume that a high temperature is maintained at the hot junction, and that the cold ends of the semiconductors are maintained at a lower temperature. The current produced in the n-type semiconductor flows from the hot to the cold end, while that in the p-type semiconductor flows from the cold end to the hot. The current thus flows around the whole circuit, including the electrical device. Such a thermoelectric cell, it is true, yields only tenths of a volt, not the 100 to 200 volts used in the home. To obtain these voltages in a thermoelectric generator we need only join hundreds of individual thermoelectric cells together.
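
A rough sketch of that series stack, assuming a thermoelectric power of 300 µV per degree and a temperature difference of 500 degrees (both assumed, illustrative values):

```python
# How many cells in series are needed to reach household voltage when each
# thermocouple yields only tenths of a volt?

seebeck_v_per_deg = 300e-6      # V per degree per cell (assumption)
delta_T = 500                   # degrees between hot and cold ends (assumption)
target_voltage = 120            # V, within the 100-200 V range mentioned in the text

volts_per_cell = seebeck_v_per_deg * delta_T          # 0.15 V, i.e. tenths of a volt
cells_needed = target_voltage / volts_per_cell
print(f"Voltage per cell: {volts_per_cell:.2f} V")
print(f"Cells in series needed: {cells_needed:.0f}")  # ~800, i.e. hundreds of cells
```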

The quality of a thermoelectric cell, however, is not determined only by the voltage it will produce. Two other factors must be taken into account: its electrical and thermal conductivity. If the voltage it produces is to be delivered as useful current, then it must have high electrical conductivity. On the other hand, if a thermoelectric cell is to convert a high percentage of the heat energy into electrical energy, it must have low thermal conductivity. The principal deficiency of thermoelectric cells, as contrasted with other heat engines, is that most of the heat supplied to the hot end flows directly and wastefully, by heat conduction, to the cold end. Thus the ratio between the useful electrical output and the heat input in a thermoelectric cell is low.


TEXT 10

Microwave Power Transistors

Most microwave transistors are silicon, planar, epitaxially diffused n-p-n structures with emitter geometries designed to increase the ratio of active to physical area. The two most widely used emitter geometries are the interdigitated geometry and the overlay geometry. In an early interdigitated structure the emitters and bases are built like a set of interlocking combs. The emitter and base areas are controlled by masking and diffusion. The oxide deposit, formed with silicon heated to a high temperature, masks the transistor against either an n- or p-type impurity. This oxide is removed by the usual photoetching techniques in areas where diffusion is required in a base or emitter. With photoetching techniques, the emitter and base strip width and separation can be controlled to one micron.

Overlay structure differs from interdigitated structure in three ways: pattern, composition and metallization. In a modern overlay transistor structure many small, separate emitter sites are used instead of the continuous emitter strip. This arrangement provides a substantial increase in overall emitter periphery without requiring an increase in physical area of the device, and thus raises the device power-frequency capability. As for composition, in addition to the standard base and emitter diffusions, an extra diffused region is made in the base to serve as a conductor grid. This p+ region offers three advantages: (1) it distributes base current uniformly over all of the separate emitter sites, (2) it reduces distances between emitter and base, and (3) it reduces the base resistance and the contact resistance between the base and the aluminium metallization material. The term "overlay" is derived from the fact that the emitter metallization lies over the base instead of adjacent to it, as in the interdigitated structure. The emitter current is carried in the metal conductors that cross over the base. The actual base and emitter areas beneath the pattern are insulated from one another by a silicon dioxide layer.
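
The periphery advantage of the overlay pattern is purely geometric: splitting one emitter strip into many small sites raises the periphery-to-area ratio. A toy sketch with invented dimensions (not taken from any real device):

```python
# Compare one long emitter strip with the same total area split into
# many small square sites; the periphery roughly doubles in this example.

strip_length, strip_width = 100.0, 2.0               # microns (assumption)
strip_area = strip_length * strip_width               # 200 um^2
strip_periphery = 2 * (strip_length + strip_width)    # 204 um

site_side = 2.0                                       # microns per square site (assumption)
n_sites = int(strip_area / site_side**2)              # same total emitter area: 50 sites
sites_periphery = n_sites * 4 * site_side             # 400 um

print(f"Strip: area {strip_area} um^2, periphery {strip_periphery} um")
print(f"Sites: area {n_sites * site_side**2} um^2, periphery {sites_periphery} um")
```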

The design of microwave power transistors has diverged from that of small-signal transistors. The important performance criteria in microwave power-amplifier circuits are power output, power gain, and efficiency. Transistors suitable for power amplification must deliver power efficiently with sufficient gain at the frequency range of interest.

The power-output capability of a transistor is determined by current- and voltage-handling capabilities of the device at the frequency range of interest. The current-handling capability of the transistor is limited by its emitter periphery and epitaxial-layer resistivity. The voltage-handling capability of the device is limited by the break-down voltages which are, in turn, limited by the resistivity of the epitaxial layer and by the penetration of the junction.

In general, all RF power transistors have operating voltage restrictions, and only current-handling differentiates power transistors from small-signal units. At high current levels the emitter current of the transistor is concentrated at the emitter-base edge; therefore, transistor current-handling can be increased by the use of emitter geometries which have high emitter-periphery-to-emitter-area ratios and by the use of improved techniques in the growth of collector substrate material. Transistors for large-signal applications should be designed so that the peak currents do not cause base widening which would limit the current handling of the device. Base-width widening is severe in transistors in which the collector side of the collector-base junction has a lower carrier concentration and higher resistivity than the base side of the junction. However, the need for low-resistivity material in the collector to handle high currents without base widening severely limits the break-down voltages, as discussed previously. As a result, the use of a different-resistivity epitaxial layer for different operation voltages is becoming common.

Transistor efficiency is determined with the device operating under signal-bias conditions at which the collector-to-base junction is reverse-biased and the emitter-to-base junction is partially forward-biased by the input drive signal. The collector efficiency of a transistor amplifier is defined as the ratio of the RF power output at the frequency of interest to the dc input power. Therefore, high efficiency implies that circuit loss be minimum and that the ratio of the transistor output resistance (the parallel equivalent resistance) to its collector load resistance be maximum. Thus, the transistor parameter which limits the collector efficiency is the output admittance. The output admittance of a transistor pellet consists of two parts: an equivalent parallel output resistance, which approaches 1/ω1C0 at microwave frequencies under small-signal conditions, and an output capacitance C0. In a common-emitter circuit, C0 is essentially the output capacitance because the impedance level at the base is low relative to the impedance level at the transistor output. The output capacitance effectively represents the transistor junction capacitance in series with a resistance. If the collector resistivity is increased, the effective output capacitance and the collector-base break-down voltage are both increased. In a power transistor, junction and epitaxial thickness variations cause variations in C0 with Vcb. Thus, the dynamic output capacitance is a function of voltage swing and power level. It can be shown that the average 1/ω1C0 under maximum voltage swing is equal to 2Ccb, where Ccb is measured at the voltage value of Vcb. To a first approximation, the large-signal output resistance can be assumed to be inversely proportional to Ccb. Because the ratio of the transistor output resistance to its collector load resistance determines the collector efficiency, a transistor with high output resistance and, hence, low Ccb is essential.

TEXT 11

RADIO WAVES

Electrical energy that has escaped into free space exists in the form of electromagnetic waves. These waves, which are commonly called radio waves, travel with the velocity of light and consist of magnetic and electrostatic fields at right angles to each other and also at right angles to the direction of travel.

One half of the electrical energy contained in the wave exists in the form of electrostatic energy, while the remaining half is in the form of magnetic energy.

The essential properties of a radio wave are the frequency, intensity, direction of travel, and plane of polarization. The radio waves produced by an alternating current will vary in intensity with the frequency of the current and will therefore be alternately positive and negative.

The distance occupied by one complete cycle of such an alternating wave is equal to the velocity of the wave divided by the number of cycles that are sent out each second and is called the wave length.

The relation between wave length λ in meters and frequency f in cycles per second is therefore λ = 300 000 000/f.

The quantity 300 000 000 is the velocity of light in meters per second. The frequency is ordinarily expressed in kilocycles, abbreviated KC; or in megacycles, abbreviated MC. A low-frequency wave has a long wave length while a high frequency corresponds to a short wave length.
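
A minimal sketch of this wavelength-frequency relation, using a few illustrative frequencies:

```python
# Wavelength from frequency: lambda = 300,000,000 / f (f in cycles per second).

C = 300_000_000   # velocity of light, m/s

for f_hz, label in [(300_000, "300 kc"), (3_000_000, "3 Mc"), (300_000_000, "300 Mc")]:
    print(f"{label}: wavelength = {C / f_hz:,.0f} m")
# 300 kc -> 1,000 m;  3 Mc -> 100 m;  300 Mc -> 1 m (low frequency, long wave)
```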

The strength of a radio wave is measured in terms of the voltage stress produced in space by the electrostatic field of the wave and is usually expressed in microvolts stress per meter.

Since the actual stress produced at any point by an alternating wave varies sinusoidally from instant to instant, it is customary to consider  the intensity of such a wave to be the effective value of the stress, which is 0.707 times the maximum stress in the atmosphere during the cycle. The strength of the wave measured in terms of microvolts per meter of stress in space is exactly the same voltage that the magnetic flux of the wave induces in a conductor 1 meter long when sweeping across this conductor with the velocity of light.

Thus the strength of a wave is not only the dielectric stress produced in space by the electrostatic field, but it also represents the voltage that the magnetic field of the wave will induce in cutting across a conductor.

In fact, the voltage stress produced by the wave can be considered as resulting from the movement of the magnetic flux of the same wave.

The minimum field strength required to give satisfactory reception of a wave depends upon a number of factors, such as frequency, type of signal involved, and amount of interference present. Under some conditions radio waves having signal strengths as low as 0.1 μV per meter are usable. Occasionally signal strengths as great as 5,000 to 30,000 μV per meter are required to ensure entirely satisfactory reception at all times.

In most cases the weakest useful signal strength lies somewhere between these extremes.

A plane parallel to the mutually perpendicular lines of electrostatic and electromagnetic flux is called the wave front.

The wave always travels in a direction at right angles to the wave front, but whether it goes forward or backward depends upon the relative direction of the lines of electromagnetic and electrostatic flux.

If the direction of either the magnetic or electrostatic flux is reversed, the direction of travel is reversed; but reversing both sets of flux has no effect.

The direction of the electrostatic lines of flux is called the direction of polarization of the wave. If the electrostatic flux lines are vertical the wave is vertically polarized; when the electrostatic flux lines are horizontal and the electromagnetic flux lines are vertical, the wave is horizontally polarized.

TEXT 12

BRIEF ANALYSIS OF THE TELEVISION SYSTEM

In addition to the picture impulses, special signals are sent out by the television transmitter for the purpose of synchronizing the picture at the receiver with that being picked up by the camera.

At the television receiver, the picture and audio signals are picked up simultaneously by a single antenna. The voltages induced in the receiving antenna are fed into the r-f stage of the receiver, and the picture carrier and the sound carrier are converted by the superheterodyne conversion method into two separate i-f signals, one corresponding to the sound carrier and the other to the video or picture carrier with its associated sidebands. Two separate i-f amplifier channels are employed, one for the picture signal and the other for the sound signal.

The sound i-f signal, after suitable amplification, is demodulated by an FM detector. After sufficient amplification by the audio amplifier, the sound signal is reproduced by the loudspeaker in the usual way.

The picture i-f signal is amplified by several stages having wideband frequency characteristics and is then fed into the video (picture) detector, where the i-f signal is then demodulated in the same fundamental manner as in an ordinary sound receiver. The video (picture) signal which appears in the output of the detector is then amplified in a video amplifier, which corresponds to the audio amplifier in a sound receiver except that it must pass a much wider range of frequencies.

In place of the loudspeaker used in the sound system, a device is used which converts the varying amplitude of the video signals into corresponding variations of light which reproduce the original scene.

The picture-reproducing device is a cathode-ray tube, similar in many respects to the ordinary cathode-ray tube used in service-shop oscilloscopes. The cathode-ray tube may be called a picture tube, because the desired picture is reproduced on the face of this tube. Without going into the details of the cathode-ray tube at this time, we shall assume that it consists of a glass envelope, a source of electrons which are formed into a beam, a control grid for varying the intensity of the electron beam, a deflection system for deflecting the beam, and a screen coated with a fluorescent material that emits light upon impact by the electron beam.

The fundamental action of the cathode-ray tube in reproducing a picture consists in the electron beam's moving horizontally and vertically simultaneously so as to cover the entire area of the picture-tube screen at the same time that the intensity of the spot is being varied by the video signal which is applied between the grid and cathode elements of the cathode-ray tube. The control grid of the picture tube controls the intensity of the beam which strikes the screen in exactly the same way that the control grid of an amplifier tube controls the plate current. In this way, each portion of the picture tube is made to have the proper degree of light or shade to reproduce the original scene.

The brief description of the action taking place in the television receiver omitted the necessary scanning and synchronizing action which locks or synchronizes the picture at the receiver with that at the television camera. However, in the receiver, the synchronizing and scanning action takes place between the output of the video amplifier and the picture tube as shown by the block marked "synchronizing and scanning" in Fig. 9.

BASIC STRUCTURE OF A PICTURE

Before considering the process of converting the image into an electrical signal, it is necessary to consider the elements that make up a picture.

The basic structure of any picture consists of small areas of light or shade, known as picture elements.

The amount of detail in the picture depends on the size and number of elements making up the picture. For fine detail, the picture elements should be small and numerous, as in an ordinary photograph, and should not be visible except on very close inspection. However, if the picture elements are few and large, they will show quite plainly, as evidenced by the halftone engravings employed in newspapers.

The number and size of picture elements required for satisfactory visual representation depend upon two factors: the amount of detail desired, and the distance at which the picture is to be viewed.

It has been found that if a television picture has approximately 200,000 picture elements it will have adequate detail. However, the distance at which it should be viewed for a pleasing picture depends upon the size of the screen. If a picture containing approximately 200,000 elements is enlarged, say to twice its size, then the picture elements will be larger, and if viewed close to the screen, it will not appear as pleasing as it would if viewed from a greater distance where the individual picture elements seem to blend into one complete picture. It is for this reason that the picture on a large screen appears much better at some distance from the screen.
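
To see how element size grows with screen size, here is a rough sketch assuming a 4:3 picture (the aspect ratio and screen widths are assumptions, not figures from the text):

```python
# Size of a single picture element for ~200,000 elements on screens of
# different widths; larger screens give larger, more visible elements.

elements = 200_000

for width_cm in (30, 60):                       # a small screen and one of twice the width
    height_cm = width_cm * 3 / 4                # 4:3 picture (assumption)
    area_per_element = width_cm * height_cm / elements
    side_mm = (area_per_element ** 0.5) * 10
    print(f"{width_cm} cm wide screen: element ~{side_mm:.2f} mm across")
```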

TEXT 13

OPERATING SYSTEMS

Types of Computer Operation

Computers vary considerably in size, capability and type of application. Similarly, there is a wide variety of ways in which they can be operated. Each type of computer operation requires a different type of operating system.

Most microcomputers and some minicomputers can only process one program at a time. This is a single program operation, and it requires only a simple operating system. The operating system supervises the loading and running of each program and the input and output of data. Any errors occurring are reported.

Next in complexity is batch processing. A number of programs are batched together, and then run as a group. Although the programs are actually run one at a time, input and output from various programs can overlap to some extent. Programs are normally queued up for batch processing, and the operating system starts the next program in the queue as soon as sufficient computing resources are available for it.
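
A toy sketch of that queueing behaviour, with invented program names and memory sizes standing in for "computing resources":

```python
# Batch scheduling in miniature: queued programs are started in order
# whenever enough of a single resource (memory) is free.

from collections import deque

total_memory = 100
free_memory = total_memory
queue = deque([("payroll", 40), ("stock-update", 50), ("report", 30)])
running = []

while queue and queue[0][1] <= free_memory:
    name, size = queue.popleft()
    free_memory -= size
    running.append(name)

print("Running:", running)                       # ['payroll', 'stock-update']
print("Still queued:", [n for n, _ in queue])    # ['report'] waits for resources
```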

Similar to batch processing, but much more sophisticated, is multiprogramming. At any one time, a number of programs are on the computer at various stages of completion. Resources are allocated to programs according to the requirements of the programs, and in order to maximise the usage of the different resources of the computer.

A particular type of multiprogramming, which is becoming increasingly popular, is transaction processing. Transaction processing is designed for systems which must run large numbers of fairly small programs very frequently, where each program run deals with a single transaction such as a withdrawal from a cash terminal.

The Nature of an Operating System

Like the question "What is a computer? ", the question "What is an operating system? " can be answered at several levels.

Firstly, an operating system is a program, or set of programs. Operating systems vary in size from very small to very large, but all are pieces of software. In the past, almost all operating systems were written in a low level language. Currently, many operating systems are partly or completely written in a high level language.

Secondly, an operating system is, by virtue of its name, a system. It is a collection of parts, working together towards some common goals. The goals, or objectives, of an operating system are discussed below.

Thirdly, a computer may be regarded as a set of devices, or resources, which provide a number of services, such as input, processing, storage and output. The operating system of the computer may be regarded as the manager of these resources. It controls the way in which these resources are put to work.

Finally, an operating system is the lowest layer of software on a computer. It acts directly on the "raw" hardware of the computer. It supports other layers of software such as compilers and applications programs. Part of the task of an operating system is to "cushion" users from the complexities of direct use of the computer hardware.

TEXT 14

SUPERCONDUCTIVITY AT ROOM TEMPERATURE

Several years ago an experiment was performed at the Massachusetts Institute of Technology that demonstrated the possibility of constructing a perpetual-motion machine. An electric current was induced to flow around a small ring of metal. The ring was then set aside. A year later the current was found to be still circulating in the material of the ring; what is more, it had not diminished by a measurable amount during this period! Although physicists object instinctively to the idea of perpetual motion and refer to such currents euphemistically as "persistent currents", they are obviously extremely persistent currents.

The secret of this extraordinary phenomenon is of course that the metal must be kept very cold — in fact, within a few degrees of absolute zero (—273 degrees Centigrade). Below a characteristic "transition temperature" certain metals spontaneously enter what is known as the superconducting state, in which a stream of electrons can flow without encountering any resistance in the form of friction. Since friction is the cause of the failure of all mechanical perpetual-motion machines, its total absence in this case allows the initial current to persist indefinitely without any further input of energy, thereby violating the traditional doctrine of the impossibility of perpetual motion.

Actually the phenomenon of superconductivity is not at all rare. Since its discovery by the Dutch physicist Heike Kamerlingh Onnes more than 50 years ago many different metals and several hundred alloys composed of these metals have been identified as superconductors. As might well be expected, the technological potential of perpetual-motion machines based on the principle of superconductivity is virtually unlimited. Lossless power transmission, enormously powerful electromagnets, more efficient motors, amplifiers, particle accelerators and even computers are just a few of the serious proposals for the exploitation of superconductivity that have been put forward in the past 50 years. The main drawback of all these schemes involves the very low temperatures typically associated with superconductors; the complex and bulky refrigeration equipment required to maintain such metals in the superconducting state makes most of the proposed applications as yet economically unfeasible. The hope that the problem of refrigeration might some day be circumvented by the discovery of superconductors with higher transition temperatures has led to the investigation of a large number of alloy combinations of the known superconducting metals. Although many new superconducting alloys have been found, the outlook for high-temperature metallic superconductors is not bright. The highest transition temperature recorded so far is only 18.2 degrees Kelvin (degrees Centigrade above absolute zero), which is still well below the temperature range accessible to simple refrigeration systems. Moreover, this work has yielded a considerable amount of statistical evidence that suggests that it is extremely unlikely that an alloy will ever be found with a transition temperature appreciably higher than about 20 degrees K.

What about the possibility of discovering some other substance — perhaps a nonmetallic one — that would be superconducting at higher temperatures? As a matter of fact it is an especially opportune time to investigate such a possibility in view of the great theoretical advances that have been made in recent years towards understanding the superconducting state. Of particular interest is the possibility of synthesizing an organic substance that would mimic the essential properties of a superconducting metal. Calculations have shown that certain organic molecules should be able to exist in the superconducting state at temperatures as high as room temperature (about 300 degrees K) and perhaps even higher!

TEXT 15

Optical fibres

Optical fibres, hair-thin strands of pure glass carrying information as pulses of light, have been described as "probably the biggest breakthrough in telecommunications since the invention of the telephone". All kinds of communications can be carried along the same optical fibre cable — speech, texts, photos, drawings, music, computer data, etc. — at higher speeds than have been previously possible.

The fibres, made from glass so pure that a block of it 20 km thick would theoretically be as transparent as a window pane, have many advantages over metal wires. Small, light and easy to handle, they are made from an abundant raw material, sand. They can carry the same number of telephone calls as metal cables ten times as thick — dozens of fibres, carrying around 100,000 telephone calls, could all pass through the eye of a needle at the same time — and they are immune to electrical interference which affects the quality of calls. An optical fibre cable the thickness of a finger could bring a hundred TV channels to a receiver.

The tiny strands are playing a key role in the digital revolution  which is sweeping through modern telecommunications. The telecommunications network developed for the telephone used a system which turned the air pressure waves created by speech into continuous and variable "analogues" of electrical waves and turned them back to speech at the receiver. Expensive conversion equipment or separate networks were needed to handle a text, TV or computer data. In the digital world, however, all forms of information are translated into bits, the standard international language of today's computers, and represented as pulses of light. Information in this form can be processed easily and sent anywhere in seconds in a single multi-purpose network. Optical fibres are ideal for digital working and open the door to a host of services not possible on an analogue system.
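
A minimal sketch of that analogue-to-digital step: a single speech sample is quantized and sent as a pattern of bits (the 8-bit resolution and the sample values are illustrative choices, not from the text):

```python
# Sketch of digital encoding: a sample between -1 and +1 is quantized
# to 8 bits, which can then be sent as a pattern of light pulses.

def to_8bit_pulses(sample: float) -> str:
    """Map a sample in [-1.0, 1.0] to an 8-bit code, returned as a bit string."""
    level = round((sample + 1.0) / 2.0 * 255)   # 0..255
    return format(level, "08b")

for s in (-1.0, -0.25, 0.0, 0.5, 1.0):
    print(f"{s:+.2f} -> {to_8bit_pulses(s)}")
```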

Each strand of fibre consists of an inner core to channel the light and an outer cladding to keep it in by reflecting it back along the core. To make the glass for the fibres, the ingredients are deposited as gases on the inside of a hollow silica tube at temperatures of around 2000°C. The tube is collapsed under intense heat to form a solid glass rod about 1 cm in diameter which already has the structure of the fibre which will be drawn from it. The rod is then loaded into a furnace, drawn into fibre and coated with resin to protect it and increase its flexibility. Tiny crystals the size of a grain of salt are used to produce the light which carries information along the fibres. This light passes through a lens into the fibre. At the other end a receiver reverses the process and turns each light pulse into an electrical signal. Optical fibres will have countless applications in tomorrow's "information society".

TEXT 16

RELIABILITY OF MISSILES AND SPACE VEHICLES

Reliability is above all a design parameter; it must be thought of as a physical property of a device which behaves in accordance with certain physical laws. In other words, reliability starts with engineering and is a basic property which must be designed into the equipment by engineers. It is true that there are other major factors which influence the performance in the final application, such as manufacturing, quality control, and handling and checkout in the field. If the manufacturing process is not carried out with the proper precision and skill, if the inspection and testing in the factory are not done with proper care, and if the field crews at the launch site do not check out, test, and launch the vehicle in accordance with proper procedures, the net result will certainly be mission failures. To be sure, no amount of manufacturing precision, care in the inspection and testing, and proficiency of the launch crews can make a missile or space mission succeed if the basic design is not right in the first place.

Although reliability is one of the primary parameters in determining the capability of the missile or space system to perform its overall mission, it must nevertheless be kept in balance with other systems parameters. Therefore, as part of the systems design, a trade-off between reliability and other systems parameters such as weight, accuracy, speed, and orbital precision must be made. Considerable gain in over-all system effectiveness can sometimes be obtained by sacrificing some accuracy or performance of the system for the sake of an improvement in reliability. Conversely, gains may also be realized by sacrificing some reliability in favour of improvements in accuracy and reduction of weight. The important point here is that a balance must be struck between reliability and other systems parameters.

To illustrate the severity of the reliability problem in satellites and space vehicles the Table presents some relative reliability requirements for a typical subsystem, say, a 25-watt UHF (ultra-high frequency) transmitter which might be used in any one of three applications.

Although the mean time to failure (MTTF) for the transmitter in a missile application is only slightly higher than the MTTF required in an aircraft application, the MTTF requirements for space are several orders of magnitude greater than those for either missile or aircraft.

Hence, the resulting reliability problem is different in nature and much more severe in the case of space vehicles.

Typical Reliability Requirements for Electronic Subsystem: 25-watt UHF Transmitter

Application    Mission Time                Reliability (Probability of    Mean Time to
                                           No Failure during Mission)     Failure (MTTF)
Aircraft       8 hr, without maintenance   0.92                           100 hr
Missile        1.75 hr                     0.99                           175 hr
Satellite A    1 month                     0.96                           25 months (18,000 hr)
Satellite B    1 yr                        0.96                           25 yr (216,000 hr)
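The text does not state how mission time, reliability and MTTF are related, but the tabulated figures are consistent with the usual constant-failure-rate (exponential) model, under which the probability of completing a mission of duration t is R(t) = exp(-t/MTTF). A minimal Python sketch under that assumption:

    import math

    def mission_reliability(mission_time_hr, mttf_hr):
        # Constant-failure-rate (exponential) model: R(t) = exp(-t / MTTF).
        return math.exp(-mission_time_hr / mttf_hr)

    print(round(mission_reliability(8, 100), 2))        # aircraft, 8 hr mission     -> 0.92
    print(round(mission_reliability(1.75, 175), 2))     # missile, 1.75 hr mission   -> 0.99
    print(round(mission_reliability(730, 18000), 2))    # satellite A, about 1 month -> 0.96
    print(round(mission_reliability(8760, 216000), 2))  # satellite B, 1 yr          -> 0.96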

TEXT 17

RELIABILITY OF ELECTRONIC SYSTEMS

With the growth in complexity of electronic systems, perhaps the most crucial problem to be faced, and one particularly critical in military applications, is that of reliability.

The probability that any complete electronic system will function as intended is found by multiplying together the probabilities of the individual components and connections making up the system. Indeed, with the present state of development of reliable electronic devices, the soldered connections in a system—which outnumber the components many times over — are probably more of a collective hazard than are the components themselves.

As a simple numerical example, suppose a system is composed of only seven components and connections, each having a probability of survival of 90 per cent. Connected together, these seven probabilities give a combined probability of survival for the entire system of less than 50 per cent (0.9 × 0.9 × 0.9 × 0.9 × 0.9 × 0.9 × 0.9 ≈ 0.48).
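A minimal sketch of the same calculation in Python; the seven-element case is the one given above, while the larger counts are purely illustrative of how quickly the product falls as the number of elements grows:

    # Probability that a series system survives: the product of the survival
    # probabilities of all its components and connections.
    def system_reliability(p_element, n_elements):
        return p_element ** n_elements

    print(round(system_reliability(0.9, 7), 2))   # the example above                     -> 0.48
    print(system_reliability(0.999, 10_000))      # illustrative: 10,000 parts at 99.9%   -> about 0.000045
    print(system_reliability(0.9999, 10_000))     # illustrative: 10,000 parts at 99.99%  -> about 0.37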

Of course, in practice, the individual probabilities will be higher than 90 per cent. But when the calculation is extended to the thousands of components and tens of thousands of soldered joints composing the electronic systems of modern aircraft or missiles, it is obvious that these individual probabilities must be extremely high if the entire system is to have any real chance of carrying out its intended function. Some idea of the magnitude of the problem can be gained from this comparison: the B-17 or B-29 of World War II used some 2,000 individual electronic components; today, the B-58 requires nearly 100,000 in systems of much greater complexity.

In recent years, therefore, a great deal of attention has been given to this problem of increasing the reliability of electronic components and circuit construction. However, the probability formula shows that a reduction in the number of components and connections raises the over-all probability of system survival much faster than does improvement in the probabilities of the individual parts of the system. For this reason, modern electronics research has as one of its major objectives a reduction in the number of components and connections required to perform a given function. In this respect, molecular electronics is particularly attractive. If a given function is performed within a single, solid block of material, interconnections are eliminated completely.

Text 18

PROPAGATION OF LIGHT

Velocity of Light. – Light is a transverse wave motion. It travels through empty space, as well as through such transparent substances as glass, air, and water. Its velocity, which is 186,000 miles per sec, is so great that in 1 sec it would travel more than seven times around the earth at the equator. Light travels from the sun to the earth in a little over 8 min, but it requires 4 years for light to travel from the nearest star to the earth. If the North Star were obliterated, the earth would continue to receive light from it for about 44 years.

Frequency and Wave Length. – The relation between frequency, velocity, and wave length is the same for light waves as it is for sound waves. Waves of yellow light have been found to have a wave length equal to about 0.000059 cm. The wave length of light is often expressed in angstrom units. One angstrom unit = 10⁻⁸ cm.
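The text does not carry the relation through, but as a quick check (assuming the velocity of light is about 3 × 10^10 cm per sec, i.e. the 186,000 miles per sec given above):

    # Rough check of the frequency of yellow light from the wavelength given above.
    velocity_cm_per_s = 3.0e10   # assumed value, ~186,000 miles per sec
    wavelength_cm = 0.000059     # yellow light, from the text
    print(f"{velocity_cm_per_s / wavelength_cm:.1e}")  # about 5.1e+14 vibrations per second
    print(round(wavelength_cm / 1e-8))                 # about 5,900 angstrom units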

Fig. 32. Rectilinear propagation of light.

Rectilinear Propagation of Light. – Under ordinary circumstances light travels in straight lines and does not appreciably bend around objects. That light travels in straight lines may be shown by placing a candle or other source of light behind a screen having in it a small hole (Fig. 32). In front of this screen AB are placed two screens CD and GF, each with a small hole at the center. When these screens are so adjusted that the eye E can see the source of light S distinctly, it will be found that the straight line joining S and E passes through the holes in the screens. This shows that light from S to E comes in a straight line.

Sources of Light. – The sun is the chief source of light and heat, but there are many artificial sources. Any body heated to a sufficiently high temperature becomes a source of light.

As the temperature of a body is raised, the body at first emits only invisible radiation. When it becomes red-hot, visible radiations begin to be emitted. The higher the temperature, the greater is the amount of both heat and light waves that are emitted, and the percentage of visible radiations becomes larger and larger as the temperature of the source of radiations is increased. For this reason, the modern tungsten lamp is much more efficient than the old carbon incandescent lamp. Tungsten has a very high melting point, and when it is surrounded by nitrogen or kept in a vacuum, it can be heated to a high temperature and its efficiency thus made high.

REFLECTION AND REFRACTION OF LIGHT

Laws of Reflection. – When a beam of light, traveling in a homogeneous medium, comes to a second medium, some of the light is reflected. At a polished or silvered surface, nearly all the light is reflected. At the surface of clear glass, only a small part of it is reflected. The greater part of it enters the glass and passes through. In Fig. 33, let AB represent the reflecting surface, MP the perpendicular or normal to this surface, OP the incident ray, and PN the reflected ray. The angle OPM between the incident ray and the normal to the surface is called the angle of incidence. The angle MPN between the reflected ray and the normal to the surface is called the angle of reflection. Reflection at such a surface occurs according to the following two laws:

First Law of Reflection. The incident ray, the reflected ray, and the normal to the surface lie in the same plane.

Second Law of Reflection. The angle of incidence is equal to the angle of reflection.

Refraction. – Experiments have shown that light travels with the greatest speed in a vacuum and that it travels with different speeds in different mediums. When it passes obliquely from one medium to another in which it has a different velocity, there occurs a change in the direction of propagation of the light. This bending of the ray of light when passing from one medium to another is known as refraction.

Refraction can be illustrated by taking a cup which is opaque (Fig. 34) and placing a coin on the bottom of it at the point B, so that the far edge of the coin can just be seen when the eye is at E. If now, without moving the eye, water is poured into the cup, the coin will come completely into view. The ray BA, as it leaves the water, is bent away from the normal NA. Other rays are bent in a similar manner, and an image of the coin is formed at C, so that the depth of the coin below the surface of the water seems to have been lessened. Here it is seen that rays coming from the water into the air are bent away from the normal. The rays are always bent away from the normal when they enter a medium in which their velocity is greater than it was in the medium from which they came.

Fig. 33. Reflection of light from a plane mirror. The angle of incidence is equal to the angle of reflection.

TEXT 19

NOTIONS OF INTELLIGENCE

1    It is quite possible to set out an approximate scale of intelligence: most people are more intelligent than most chimpanzees, a word processor is a more intelligent machine than a typewriter, etc. Nevertheless there is no scientific definition of intelligence. Intelligence is related to the ability to recognise patterns, draw reasoned conclusions, analyse complex systems into simple elements and resolve contradictions, yet it is more than all of these. Intelligence is at a higher level than information and knowledge, but below the level of wisdom; it contains an indefinable "spark" which enables new insights to be gained, new theories to be formulated and new knowledge to be established.

2    Intelligence can also be examined from the point of view of language. Information can easily be represented as words, numbers or some other symbols. Knowledge is generally expressed in a language or as mathematics. Intelligence is at the upper limit of language: instances of patterns or deductive reasoning can be written down, and certain general principles can be stated. However, the creative "spark" of intelligence is almost impossible to express in language.

3    The only widely accepted definition of artificial intelligence is based on a test devised by Alan Turing in 1950:

Suppose there are two identical terminals in a room, one connected to a computer, and the other operated remotely by a person. If someone using the two terminals is unable to decide which is connected to the computer, and which is operated by the person, then the computer can be credited with intelligence.

4    The definition of artificial intelligence which follows from this test is:

Artificial intelligence is the science of making machines do things that would require intelligence if done by people.

5    No computer system has come anywhere near to passing the Turing test in general terms. Nevertheless, progress has been made in a number of specific fields. It would take a very good chess player in the 1980s to be able to tell whether he or she were playing against a computer or a human opponent. Most car drivers are unaware which parts of their cars have been assembled by robots, and which by manual workers.

6    Conventional data processing is based on information; artificial intelligence is based on knowledge. A central problem for artificial intelligence is an adequate representation of knowledge on a computer. On the one hand, the representation must be "rich" enough to be of practical use. On the other hand, it must be simple enough for processing by a computer.

Fig. 7.1. A semantic network.

7    A widely used technique of knowledge representation is the semantic network. As shown in Figure 7.1, a semantic network shows a set of relationships between objects. It is a flexible method of representation, allowing new objects and new relationships to be added to a knowledge base. Accordingly, semantic networks are often used in computer systems which have some form of learning capacity.
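Figure 7.1 itself is not reproduced here, so the facts below are only a hypothetical illustration of the technique: a semantic network held in Python as a set of (object, relation, object) triples, to which new objects and relationships can be added and which can then be queried:

    # Minimal sketch of a semantic network as (subject, relation, object) triples.
    # The facts are invented examples, not taken from Figure 7.1.
    network = {
        ("canary", "is-a", "bird"),
        ("bird", "is-a", "animal"),
        ("bird", "has", "wings"),
    }

    # New objects and relationships can simply be added to the knowledge base.
    network.add(("canary", "can", "sing"))

    def related(subject, relation):
        # Return every object linked to 'subject' by 'relation'.
        return [obj for (subj, rel, obj) in network if subj == subject and rel == relation]

    print(related("bird", "is-a"))   # ['animal']
    print(related("canary", "can"))  # ['sing']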

TEXT 20

EXPERT SYSTEMS

An expert system is a computer system which is able to draw reasoned conclusions from a body of knowledge in a particular field, and communicate to the user the line of reasoning by which it has reached a conclusion.

Objectives of Expert Systems

The purpose of an expert system is to provide reasoned advice at a comparable level to that provided by a human expert. This capability has two main aims: to enhance the abilities of leading experts in certain fields, and to make a high level of expertise available to less highly qualified practitioners.

The first aim takes note of the fact that some areas of human expertise, such as the diagnosis and treatment of cancer, are so complex that even the leading experts can benefit from the systematic, logical approach provided by a computer. A computer system will take into consideration all the knowledge at its disposal in every case and will follow known lines of reasoning exhaustively, no matter how complex they are. These capabilities complement the skills of a human expert, which are generally based on a mixture of knowledge, experience, insight and intuition.

The second aim attempts to raise the level of skill of professionals who are not themselves leading experts. A large number of medical practitioners fall into this category, particularly in developing countries. When expert systems become widely available, the skills of these practitioners should be significantly enhanced.

In some expert systems, the expert knowledge is fixed into the system when it is constructed. In others, there is a built-in ability to learn from experience, including from mistakes made by the system.
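A minimal sketch of the first case, a system whose knowledge is fixed when it is constructed: a tiny forward-chaining rule base in Python that both draws a conclusion and records the line of reasoning it can report to the user. The rules and facts are invented for illustration and are not taken from any system mentioned in the text:

    # Hypothetical rule base: IF all conditions hold THEN add the conclusion.
    rules = [
        ({"fever", "rash"}, "suspected_measles"),
        ({"suspected_measles"}, "refer_to_specialist"),
    ]

    facts = {"fever", "rash"}
    reasoning = []  # the "line of reasoning" shown to the user

    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                reasoning.append(f"{sorted(conditions)} -> {conclusion}")
                changed = True

    print(facts)
    for step in reasoning:
        print(step)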

Applications of Expert Systems

A small number of expert systems are in use at present. These are mainly in the following fields:

•  Medicine:  Expert systems are in use for diagnosis and the planning of treatment in specialised fields. These include certain types of cancer, kidney diseases and some viral infections. Expert systems are also used to plan and monitor experiments, particularly in genetics. Expert systems for use by general practitioners in diagnosis and treatment are under investigation, but none are in widespread use at present.

•  Geological Prospecting:  Expert systems have already proved their worth in oil prospecting, and are now being used for other minerals.

•  Designing Computer Configurations:  Digital Equipment Corporation uses an expert system to design the computer configuration required when an order for a VAX minicomputer is placed. The expert system ensures that a compatible set of equipment is delivered, which meets the requirements of the customer.

•  Chemistry:  The analysis of chemical structures from mass spectrometer data is often done with the aid of an expert system.

•  Legal Advice:  Expert systems which give general legal advice, and assist in such matters as making Social Security claims, are at present under development.



