The invention:
A key idea in the late Industrial Revolution, the
interchangeability of parts made possible mass production of
identical products.
The people behind the invention:
Henry M. Leland (1843-1932), president of Cadillac Motor Car
Company in 1908, known as a master of precision
Frederick Bennett, the British agent for Cadillac Motor Car
Company who convinced the Royal Automobile Club to run
the standardization test at Brooklands, England
Henry Ford (1863-1947), founder of Ford Motor Company who
introduced the moving assembly line into the automobile
industry in 1913
Instant photography
The invention: Popularly known by its Polaroid tradename, a camera
capable of producing finished photographs immediately after
its film was exposed.
The people behind the invention:
Edwin Herbert Land (1909-1991), an American physicist and
chemist
Howard G. Rogers (1915- ), a senior researcher at Polaroid
and Land’s collaborator
William J. McCune (1915- ), an engineer and head of the
Polaroid team
Ansel Adams (1902-1984), an American photographer and
Land’s technical consultant
The Daughter of Invention
Because he was a chemist and physicist interested primarily in
research relating to light and vision, and to the materials that affect
them, it was inevitable that Edwin Herbert Land should be drawn
into the field of photography. Land founded the Polaroid Corporation
in 1937. During the summer of 1943, while Land and his wife
were vacationing in Santa Fe, New Mexico, with their three-year-old
daughter, Land stopped to take a picture of the child. After the
picture was taken, his daughter asked to see it. When she was told
she could not see the picture immediately, she asked how long it
would be. Within an hour after his daughter’s question, Land had
conceived a preliminary plan for designing the camera, the film,
and the physical chemistry of what would become the instant camera.
Such a device would, he hoped, produce a picture immediately
after exposure.
Within six months, Land had solved most of the essential problems
of the instant photography system. He and a small group of associates
at Polaroid secretly worked on the project. Howard G. Rogers
was Land’s collaborator in the laboratory. Land conferred the
responsibility for the engineering and mechanical phase of the project
on William J. McCune, who led the team that eventually designed the original camera and the machinery that produced both
the camera and Land’s new film.
The first Polaroid Land camera—the Model 95—produced photographs
measuring 8.25 by 10.8 centimeters; there were eight pictures
to a roll. Rather than being black-and-white, the original Polaroid
prints were sepia-toned (producing a warm, reddish-brown color).
The reasons for the sepia coloration were chemical rather than aesthetic;
as soon as Land’s researchers could devise a workable formula
for sharp black-and-white prints (about ten months after the camera
was introduced commercially), they replaced the sepia film.
A Sophisticated Chemical Reaction
Although the mechanical process involved in the first demonstration
camera was relatively simple, this process was merely
the means by which a highly sophisticated chemical reaction—
the diffusion transfer process—was produced.
In the basic diffusion transfer process, when an exposed negative
image is developed, the undeveloped portion corresponds
to the opposite aspect of the image, the positive. Almost all self-processing
instant photography materials operate according to
three phases—negative development, diffusion transfer, and
positive development. These occur simultaneously, so that positive
image formation begins instantly. With black-and-white materials,
the positive was originally completed in about sixty seconds; with
color materials (introduced later), the process took somewhat longer.
The basic phenomenon of silver in solution diffusing from one
emulsion to another was first observed in the 1850’s, but no practical
use of this action was made until 1939. The photographic use of
diffusion transfer for producing normal continuous-tone images
was investigated actively from the early 1940’s by Land and his associates.
The instant camera using this method was demonstrated
in 1947 and marketed in 1948.
The fundamentals of photographic diffusion transfer are simplest
in a black-and-white peel-apart film. The negative sheet is exposed
in the camera in the normal way. It is then pulled out of the
camera, or film pack holder, by a paper tab. Next, it passes through a
set of rollers, which press it face-to-face with a sheet of receiving material included in the film pack. Simultaneously, the rollers rupture
a pod of reagent chemicals, spreading the reagent evenly
between the two layers. The reagent contains a strong alkali and a
silver halide solvent, both of which diffuse into the negative emulsion. There the alkali activates the developing agent, which immediately
reduces the exposed halides to a negative image. At the
same time, the solvent dissolves the unexposed halides. The silver
in the dissolved halides forms the positive image.
Impact
The Polaroid Land camera had a tremendous impact on the photographic
industry as well as on the amateur and professional photographer.
Ansel Adams, who was known for his monumental,
ultrasharp black-and-white panoramas of the American West, suggested
to Land ways in which the tonal value of Polaroid film could
be enhanced, as well as new applications for Polaroid photographic
technology.
Soon after it was introduced, Polaroid photography became part
of the American way of life and changed the face of amateur photography
forever. By the 1950’s, Americans had become accustomed
to the world of recorded visual information through films, magazines,
and newspapers; they also had become enthusiastic picture-takers
as a result of the growing trend for simpler and more convenient
cameras. By allowing these photographers not only to record
their perceptions but also to see the results almost immediately, Polaroid
brought people closer to the creative process.
Infrared photography
The invention: The first application of color to infrared photography,
which performs tasks not possible for ordinary photography.
The person behind the invention:
Sir William Herschel (1738-1822), a pioneering English
astronomer
Invisible Light
Photography developed rapidly in the nineteenth century when it
became possible to record the colors and shades of visible light on
sensitive materials. Visible light is a form of radiation that consists of
electromagnetic waves, which also make up other forms of radiation
such as X rays and radio waves. Visible light occupies the range of
wavelengths from about 400 nanometers (1 nanometer is 1 billionth
of a meter) to about 700 nanometers in the electromagnetic spectrum.
Infrared radiation occupies the range from about 700 nanometers
to about 1,350 nanometers in the electromagnetic spectrum. Infrared
rays cannot be seen by the human eye, but they behave in the
same way that rays of visible light behave; they can be reflected, diffracted
(broken), and refracted (bent).
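As a rough illustration of these ranges, the short Python sketch below places a wavelength, given in nanometers, into the bands just described; the 400, 700, and 1,350 nanometer cutoffs are the ones quoted above, and the sketch is only an illustration of the arithmetic.

def classify_wavelength(nm):
    """Place a wavelength (in nanometers) into the bands described above."""
    if 400 <= nm < 700:
        return "visible light"
    if 700 <= nm <= 1350:
        return "infrared"
    return "outside the visible and infrared bands discussed here"

for wavelength in (550, 900):
    print(wavelength, "nm:", classify_wavelength(wavelength))
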
Sir William Herschel, a British astronomer, discovered infrared
rays in 1800 by measuring the temperature of the heat that they produced.
The term “infrared,” which was probably first used in 1800,
indicates rays with wavelengths longer than
those at the red end (the long-wavelength end) of the spectrum of visible light but
shorter than those of microwaves, which lie still farther along the
electromagnetic spectrum. Infrared film is therefore sensitive to the
infrared radiation that the human eye cannot see or record. Dyes that
were sensitive to infrared radiation were discovered early in the
twentieth century, but they were not widely used until the 1930’s. Because
these dyes produced only black-and-white images, their usefulness
to artists and researchers was limited. After 1930, however, a
tidal wave of infrared photographic applications appeared.
The Development of Color-Sensitive Infrared Film
In the early 1940’s, military intelligence used infrared viewers for
night operations and for gathering information about the enemy. One
device that was commonly used for such purposes was called a
“snooper scope.” Aerial photography with black-and-white infrared
film was used to locate enemy hiding places and equipment. The images
that were produced, however, often lacked clear definition.
The development in 1942 of the first color-sensitive infrared film,
Ektachrome Aero Film, became possible when researchers at the
Eastman Kodak Company’s laboratories solved some complex chemical
and physical problems that had hampered the development of
color infrared film up to that point. Regular color film is sensitive to
all visible colors of the spectrum; infrared color film is sensitive to
violet, blue, and red light as well as to infrared radiation. Typical
color film has three layers of emulsion, which are sensitized to blue,
green, and red. Infrared color film, however, has its three emulsion
layers sensitized to green, red, and infrared. Infrared wavelengths
are recorded as reds of varying densities, depending on the intensity
of the infrared radiation. The more infrared radiation there is,
the darker the color of the red that is recorded.
In infrared photography, a filter is placed over the camera lens to
block the unwanted rays of visible light. The filter blocks visible and
ultraviolet rays but allows infrared radiation to pass. All three layers
of infrared film are sensitive to blue, so a yellow filter is used. All
blue radiation is absorbed by this filter.
In regular photography, color film consists of three basic layers:
the top layer is sensitive to blue light, the middle layer is sensitive to
green, and the third layer is sensitive to red. Exposing the film to
light causes a latent image to be formed in the silver halide crystals
that make up each of the three layers. In infrared photography, color
film consists of a top layer that is sensitive to infrared radiation, a
middle layer sensitive to green, and a bottom layer sensitive to red.
“Reversal processing” produces blue in the infrared-sensitive layer,
yellow in the green-sensitive layer, and magenta in the red-sensitive
layer. The blue, yellow, and magenta layers of the film produce the
“false colors” that accentuate the various levels of infrared radiation
shown as red in a color transparency, slide, or print. The color of the dye formed in each layer bears a particular relationship to the color of light to which the layer is sensitive. If the
relationship is not complementary, the resulting colors will be false.
This means that objects whose colors appear to be similar to the
human eye will not necessarily be recorded as similar colors on infrared
film. A red rose with healthy green leaves will appear on infrared
color film as being yellow with red leaves, because the chlorophyll
contained in the plant leaf reflects infrared radiation and
causes the green leaves to be recorded as red. Infrared radiation
from about 700 nanometers to about 900 nanometers on the electromagnetic
spectrum can be recorded by infrared color film. Above
900 nanometers, infrared radiation exists as heat patterns that must
be recorded by nonphotographic means.
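The layer-by-layer differences described in this section can be collected into a small lookup table. The short Python sketch below simply restates, as data, the sensitivities and reversal-processing dye colors given above; it is an illustrative summary, not a description of any actual film-processing software.

# Emulsion-layer sensitivities, top to bottom, as described above.
conventional_film = ["blue", "green", "red"]
infrared_film = ["infrared", "green", "red"]

# Dye produced in each infrared-film layer by reversal processing (per the text).
reversal_dye = {"infrared": "blue", "green": "yellow", "red": "magenta"}

for sensitivity in infrared_film:
    print(f"Layer sensitive to {sensitivity} light -> {reversal_dye[sensitivity]} dye")
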
Impact
Infrared photography has proved to be valuable in many of the
sciences and the arts. It has been used to create artistic images that
are often unexpected visual explosions of everyday views. Because
infrared radiation penetrates haze easily, infrared films are often
used in mapping areas or determining vegetation types. Many
cloud-covered tropical areas would be impossible to map without
infrared photography. False-color infrared film can differentiate between
healthy and unhealthy plants, so it is widely used to study insect
and disease problems in plants. Medical research uses infrared
photography to trace blood flow, to detect and monitor tumor growth,
and to study many other physiological functions that are invisible
to the human eye.
Some forms of cancer can be detected by infrared analysis before
any other tests are able to perceive them. Infrared film is used in
criminology to photograph illegal activities in the dark and to study
evidence at crime scenes. Powder burns around a bullet hole, which
are often invisible to the eye, show clearly on infrared film. In addition,
forgeries in documents and works of art can often be seen
clearly when photographed on infrared film. Archaeologists have
used infrared film to locate ancient sites that are invisible in daylight.
Wildlife biologists also document the behavior of animals at
night with infrared equipment.
In vitro plant culture
The invention: Method for propagating plants in artificial media
that has revolutionized agriculture.
The people behind the invention:
Georges Michel Morel (1916-1973), a French physiologist
Philip Cleaver White (1913- ), an American chemist
Plant Tissue Grows “In Glass”
In the mid-1800’s, biologists began pondering whether a cell isolated
from a multicellular organism could live separately if it were
provided with the proper environment. In 1902, with this question in
mind, the German plant physiologist Gottlieb Haberlandt attempted
to culture (grow) isolated plant cells under sterile conditions on an artificial
growth medium. Although his cultured cells never underwent
cell division under these “in vitro” (in glass) conditions, Haberlandt
is credited with originating the concept of cell culture.
Subsequently, scientists attempted to culture plant tissues and
organs rather than individual cells and tried to determine the medium
components necessary for the growth of plant tissue in vitro.
In 1934, Philip White grew the first organ culture, using tomato
roots. The discovery of plant hormones, which are compounds that
regulate growth and development, was crucial to the successful culture
of plant tissues; in 1939, Roger Gautheret, P. Nobécourt, and
White independently reported the successful culture of plant callus
tissue. “Callus” is an irregular mass of dividing cells that often results
from the wounding of plant tissue. Plant scientists were fascinated
by the perpetual growth of such tissue in culture and spent
years establishing optimal growth conditions and exploring the nutritional
and hormonal requirements of plant tissue.
Plants by the Millions
A lull in botanical research occurred during World War II, but
immediately afterward there was a resurgence of interest in applying
tissue culture techniques to plant research. Georges Morel, a plant physiologist at the National Institute for Agronomic Research
in France, was one of many scientists during this time who
had become interested in the formation of tumors in plants as well
as in studying various pathogens such as fungi and viruses that
cause plant disease.
To further these studies, Morel adapted existing techniques in order
to grow tissue from a wider variety of plant types in culture, and
he continued to try to identify factors that affected the normal
growth and development of plants. Morel was successful in culturing
tissue from ferns and was the first to culture monocot plants.
Monocots have certain features that distinguish them from the other
classes of seed-bearing plants, especially with respect to seed structure.
More important, the monocots include the economically important
species of grasses (the major plants of range and pasture)
and cereals.
For these cultures, Morel utilized a small piece of the growing tip
of a plant shoot (the shoot apex) as the starting tissue material. This
tissue was placed in a glass tube, supplied with a medium containing
specific nutrients, vitamins, and plant hormones, and allowed
to grow in the light. Under these conditions, the apex tissue grew
roots and buds and eventually developed into a complete plant.
Morel was able to generate whole plants from pieces of the shoot
apex that were only 100 to 250 micrometers in length.
Morel also investigated the growth of parasites such as fungi and
viruses in dual culture with host-plant tissue. Using results from
these studies and culture techniques that he had mastered, Morel
and his colleague Claude Martin regenerated virus-free plants from
tissue that had been taken from virally infected plants. Tissues from
certain tropical species, dahlias, and potato plants were used for the
original experiments, but after Morel adapted the methods for the
generation of virus-free orchids, plants that had previously been
difficult to propagate by any means, the true significance of his
work was recognized.
Morel was the first to recognize the potential of the in vitro culture
methods for the mass propagation of plants. He estimated that several
million plants could be obtained in one year from a single small
piece of shoot-apex tissue. Plants generated in this manner were
clonal (genetically identical organisms prepared from a single plant). With other methods of plant propagation, there is often a great variation
in the traits of the plants produced, but as a result of Morel’s
ideas, breeders could select for some desirable trait in a particular
plant and then produce multiple clonal plants, all of which expressed
the desired trait. The methodology also allowed for the production of
virus-free plant material, which minimized both the spread of potential
pathogens during shipping and losses caused by disease.
Consequences
Variations on Morel’s methods are used to propagate plants used
for human food consumption; plants that are sources of fiber, oil,
and livestock feed; forest trees; and plants used in landscaping and
in the floral industry. In vitro stocks are preserved under deep-freeze
conditions, and disease-free plants can be proliferated quickly
at any time of the year after shipping or storage.
The in vitro multiplication of plants has been especially useful
for species such as coconut and certain palms that cannot be propagated
by other methods, such as by sowing seeds or grafting, and
has also become important in the preservation and propagation of rare plant species that might otherwise have become extinct. Many
of these plants are sources of pharmaceuticals, oils, fragrances, and
other valuable products.
The capability of regenerating plants from tissue culture has also
been crucial in basic scientific research. Plant cells grown in culture
can be studied more easily than can intact plants, and scientists have
gained an in-depth understanding of plant physiology and biochemistry
by using this method. This information and the methods
of Morel and others have made possible the genetic engineering and
propagation of crop plants that are resistant to disease or disastrous
environmental conditions such as drought and freezing. In vitro
techniques have truly revolutionized agriculture.
IBM Model 1401 Computer
The invention: A relatively small, simple, and inexpensive computer
that is often credited with having launched the personal
computer age.
The people behind the invention:
Howard H. Aiken (1900-1973), an American mathematician
Charles Babbage (1792-1871), an English mathematician and
inventor
Herman Hollerith (1860-1929), an American inventor
Computers: From the Beginning
Computers evolved into their modern form over a period of
thousands of years as a result of humanity’s efforts to simplify the
process of counting. Two counting devices that are considered to be
very simple, early computers are the abacus and the slide rule.
These calculating devices are representative of digital and analog
computers, respectively, because an abacus counts discrete things,
while the slide rule represents numbers as continuous lengths.
The first modern computer, which was planned by Charles Babbage
in 1833, was never built. It was intended to perform complex
calculations with a data processing/memory unit that was controlled
by punched cards. In 1944, Harvard University’s Howard H.
Aiken and the International Business Machines (IBM) Corporation
built such a computer—the huge, punched-tape-controlled Automatic
Sequence Controlled Calculator, or Mark I ASCC, which
could perform complex mathematical operations in seconds. During
the next fifteen years, computer advances produced digital computers
that used binary arithmetic for calculation, incorporated
simplified components that decreased the sizes of computers, had
much faster calculating speeds, and were transistorized.
Although practical computers had become much faster than
they had been only a few years earlier, they were still huge and extremely
expensive. In 1959, however, IBM introduced the Model
1401 computer. Smaller, simpler, and much cheaper than the multimillion-dollar computers that were available, the IBM Model 1401
computer was also relatively easy to program and use. Its low cost,
simplicity of operation, and very wide use have led many experts
to view the IBM Model 1401 computer as beginning the age of the
personal computer.
Computer Operation and IBM’s Model 1401
Modern computers are essentially very fast calculating machines
that are capable of sorting, comparing, analyzing, and outputting information,
as well as storing it for future use. Many sources credit
Aiken’s Mark I ASCC as being the first modern computer to be built.
This huge, five-ton machine used thousands of relays to perform complex
mathematical calculations in seconds. Soon after its introduction,
other companies produced computers that were faster and more versatile
than the Mark I. The computer development race was on.
All these early computers utilized the decimal system for calculations
until it was found that binary arithmetic, whose numbers are
combinations of the binary digits 1 and 0, was much more suitable
for the purpose. The advantage of the binary system is that the electronic
switches that make up a computer (tubes, transistors, or
chips) can be either on or off; in the binary system, the on state can
be represented by the digit 1, the off state by the digit 0. Strung together
correctly, binary numbers, or digits, can be inputted rapidly
and used for high-speed computations. In fact, the computer term
bit is a contraction of the phrase “binary digit.”
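To make the idea of switches and binary digits concrete, the short Python sketch below converts an ordinary decimal number into a string of bits, each of which can be read as a switch that is either on (1) or off (0); it illustrates only the arithmetic, not any particular machine.

def to_binary(n):
    """Return the binary digits of a non-negative integer as a string of 1s and 0s."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # each remainder is one bit: on (1) or off (0)
        n //= 2
    return "".join(reversed(bits))

print(to_binary(13))    # -> "1101": on, on, off, on
print(format(13, "b"))  # Python's built-in conversion agrees
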
A computer consists of input and output devices, a storage device
(memory), arithmetic and logic units, and a control unit. In
most cases, a central processing unit (CPU) combines the logic,
arithmetic, memory, and control aspects. Instructions are loaded
into the memory via an input device, processed, and stored. Then,
the CPU issues commands to the other parts of the system to carry
out computations or other functions and output the data as needed.
Most output is printed as hard copy or displayed on cathode-ray
tube monitors, or screens.
The early modern computers—such as the Mark I ASCC—were
huge because their information circuits were large relays or tubes.
Computers became smaller and smaller as the tubes were replaced first with transistors, then with simple integrated circuits, and then
with silicon chips. Each technological changeover also produced
more powerful, more cost-effective computers.
In the 1950’s, with reliable transistors available, IBM began the
development of two types of computers that were completed by
about 1959. The larger version was the Stretch computer, which was
advertised as the most powerful computer of its day. Customized
for each individual purchaser (for example, the Atomic Energy
Commission), a Stretch computer cost $10 million or more. Some innovations
in Stretch computers included semiconductor circuits,
new switching systems that quickly converted various kinds of data
into one language that was understood by the CPU, rapid data readers,
and devices that seemed to anticipate future operations.
Consequences
The IBM Model 1401 was the first computer sold in very large
numbers. It led IBM and other companies to seek to develop less expensive,
more versatile, smaller computers that would be sold to
small businesses and to individuals. Six years after the development
of the Model 1401, other IBM models—and those made by
other companies—became available that were more compact and
had larger memories. The search for compactness and versatility
continued. A major development was the invention of integrated
circuits by Jack S. Kilby of Texas Instruments; these integrated circuits
became available by the mid-1960’s. They were followed by
even smaller “microprocessors” (computer chips) that became available
in the 1970’s. Computers continued to become smaller and more
powerful.
Input and storage devices also decreased rapidly in size. At first,
the punched cards invented by Herman Hollerith, founder of the
Tabulation Machine Company (which later became IBM), were read
by bulky readers. In time, less bulky magnetic tapes and more compact
readers were developed, after which magnetic disks and compact
disc drives were introduced.
Many other advances have been made. Modern computers can
talk, create art and graphics, compose music, play games, and operate
robots. Further advancement is expected as societal needs change. Many experts believe that it was the sale of large numbers
of IBM Model 1401 computers that began the trend.
20 July 2009
Hydrogen bomb
The invention: Popularly known as the “H-Bomb,” the hydrogen
bomb differs from the original atomic bomb in using fusion,
rather than fission, to create a thermonuclear explosion almost a
thousand times more powerful.
The people behind the invention:
Edward Teller (1908- ), a Hungarian-born theoretical
physicist
Stanislaw Ulam (1909-1984), a Polish-born mathematician
Crash Development
A few months before the 1942 creation of the Manhattan Project,
the United States-led effort to build the atomic (fission) bomb, physicist
Enrico Fermi suggested to Edward Teller that such a bomb
could release more energy by the process of heating a mass of the
hydrogen isotope deuterium and igniting the fusion of hydrogen
into helium. Fusion is the process whereby two atomic nuclei come together
to form a larger nucleus, and this process usually occurs only in stars,
such as the Sun. Physicists Hans Bethe, George Gamow, and Teller
had been studying fusion since 1934 and knew of the tremendous
energy that could be released by this process—even more energy
than the fission (atom-splitting) process that would create the atomic
bomb. Initially, Teller dismissed Fermi’s idea, but later in 1942, in
collaboration with Emil Konopinski, he concluded that a hydrogen
bomb, or superbomb, could be made.
For practical considerations, it was decided that the design of the
superbomb would have to wait until after the war. In 1946, a secret
conference on the superbomb was held in Los Alamos, New Mexico,
that was attended by, among other Manhattan Project veterans,
Stanislaw Ulam and Klaus Emil Julius Fuchs. Supporting the investigation
of Teller’s concept, the conferees requested a more complete
mathematical analysis of his own admittedly crude calculations
on the dynamics of the fusion reaction. In 1947, Teller believed
that these calculations might take years. Two years later, however, the Soviet explosion of an atomic bomb convinced Teller that America’s
Cold War adversary was hard at work on its own superbomb.
Even when new calculations cast further doubt on his designs,
Teller began a vigorous campaign for crash development of the hydrogen
bomb, or H-bomb.
The Superbomb
Scientists knew that fusion reactions could be induced by the explosion
of an atomic bomb. The basic problem was simple and formidable:
How could fusion fuel be heated and compressed long
enough to achieve significant thermonuclear burning before the
atomic fission explosion blew the assembly apart? A major part of
the solution came from Ulam in 1951. He proposed using the energy
from an exploding atomic bomb to induce significant thermonuclear
reactions in adjacent fusion fuel components.
This arrangement, in which the A-bomb (the primary) is physically
separated from the H-bomb’s (the secondary’s) fusion fuel, became
known as the “Teller-Ulam configuration.” All H-bombs are
cylindrical, with an atomic device at one end and the other components
filling the remaining space. Energy from the exploding primary
could be transported by X rays and would therefore affect the
fusion fuel at near light speed—before the arrival of the explosion.
Frederick de Hoffman’s work verified and enriched the new concept.
In the revised method, moderated X rays from the primary irradiate
a reactive plastic medium surrounding concentric and generally
cylindrical layers of fusion and fission fuel in the secondary.
Instantly, the plastic becomes a hot plasma that compresses and
heats the inner layer of fusion fuel, which in turn compresses a central
core of fissile plutonium to supercriticality. Thus compressed,
and bombarded by fusion-produced, high-energy neutrons, the fission
element expands rapidly in a chain reaction from the inside
out, further compressing and heating the surrounding fusion fuel,
releasing more energy and more neutrons that induce fission in a
fuel casing-tamper made of normally stable uranium 238.
With its equipment to refrigerate the hydrogen isotopes, the device
created to test Teller’s new concept weighed more than sixty
tons. During Operation Ivy, it was tested at Elugelab in the Marshall Islands on November 1, 1952. Exceeding the expectations of all concerned
and vaporizing the island, the explosion equaled 10.4 million
tons of trinitrotoluene (TNT), which meant that it was about
seven hundred times more powerful than the atomic bomb dropped
on Hiroshima, Japan, in 1945. A version of this device weighing
about 20 tons was prepared for delivery by specially modified Air
Force B-36 bombers in the event of an emergency during wartime.
In development at Los Alamos before the 1952 test was a device
weighing only about 4 tons, a “dry bomb” that did not require refrigeration
equipment or liquid fusion fuel; when sufficiently compressed
and heated in its molded-powder form, the new fusion fuel
component, lithium-6 deutride, instantly produced tritium, an isotope
of hydrogen. This concept was tested during Operation Castle
at Bikini atoll in 1954 and produced a yield of 15 million tons of TNT,
the largest-ever nuclear explosion created by the United States.
Consequences
Teller was not alone in believing that the world could produce
thermonuclear devices capable of causing great destruction. Months
before Fermi suggested to Teller the possibility of explosive thermonuclear
reactions on Earth, Japanese physicist Tokutaro Hagiwara
had proposed that a uranium 235 bomb could ignite significant fusion
reactions in hydrogen. The Soviet Union successfully tested an
H-bomb dropped from an airplane in 1955, one year before the
United States did so.
Teller became the scientific adviser on nuclear affairs of many
presidents, from Dwight D. Eisenhower to Ronald Reagan. The
widespread blast and fallout effects of H-bombs assured the mutual
destruction of the users of such weapons. During the Cold War
(from about 1947 to 1991), both the United States and the Soviet
Union possessed H-bombs. “Testing” these bombs made each side
aware of how powerful the other side was. Everyone wanted to
avoid nuclear war. It was thought that no one would try to start a
war that would end in the world’s destruction. This theory was
called deterrence: The United States wanted to let the Soviet Union
know that it had just as many bombs as the Soviet Union, or more, so that the
leaders of the Soviet Union would be deterred from starting a war.
Teller knew that the availability of H-bombs on both sides was
not enough to guarantee that such weapons would never be used. It
was also necessary to make the Soviet Union aware of the existence
of the bombs through testing. He consistently advised against U.S.
participation with the Soviet Union in a moratorium (period of
waiting) on nuclear weapons testing. Largely based on Teller’s urging
that underground testing be continued, the United States rejected
a total moratorium in favor of the 1963 Atmospheric Test Ban
Treaty.
During the 1980’s, Teller, among others, convinced President
Reagan to embrace the Strategic Defense Initiative (SDI). Teller argued
that SDI components, such as the space-based “Excalibur,” a
nuclear bomb-powered X-ray laser weapon proposed by the Lawrence
Livermore National Laboratory, would make thermonuclear
war not unimaginable, but theoretically impossible.
Hovercraft
The invention: A vehicle requiring no surface contact for traction
that moves freely over a variety of surfaces—particularly
water—while supported on a self-generated cushion of air.
The people behind the invention:
Christopher Sydney Cockerell (1910- ), a British engineer
who built the first hovercraft
Ronald A. Shaw (1910- ), an early pioneer in aerodynamics
who experimented with hovercraft
Sir John Isaac Thornycroft (1843-1928), a Royal Navy architect
who was the first to experiment with air-cushion theory
Air-Cushion Travel
The air-cushion vehicle was first conceived by Sir John Isaac
Thornycroft of Great Britain in the 1870’s. He theorized that if a
ship had a plenum chamber (a box open at the bottom) for a hull
and it were pumped full of air, the ship would rise out of the water
and move faster, because there would be less drag. The main problem
was keeping the air from escaping from under the craft.
In the early 1950’s, Christopher Sydney Cockerell was experimenting
with ways to reduce both the wave-making and frictional
resistance that craft had to water. In 1953, he constructed a punt
with a fan that supplied air to the bottom of the craft, which could
thus glide over the surface with very little friction. The air was contained
under the craft by specially constructed side walls. In 1955,
the first true “hovercraft,” as Cockerell called it, was constructed of
balsa wood. It weighed only 127 grams and traveled over water at a
speed of 13 kilometers per hour.
On November 16, 1956, Cockerell successfully demonstrated
his model hovercraft at the patent agent’s office in London. It was
immediately placed on the “secret” list, and Saunders-Roe Ltd.
was given the first contract to build hovercraft in 1957. The first experimental
piloted hovercraft, the SR.N1, which had a weight of
3,400 kilograms and could carry three people at the speed of 25 knots, was completed on May 28, 1959, and publicly demonstrated
on June 11, 1959.
Ground Effect Phenomenon
In a hovercraft, a jet airstream is directed downward through a
hole in a metal disk, which forces the disk to rise. The downward jet
produces a reaction of its own that pushes the disk away from the surface.
Some of the air hitting the ground bounces back against the disk to
add further lift. This is called the “ground effect.” The ground effect
is such that the greater the under-surface area of the hovercraft, the
greater the reverse thrust of the air that bounces back. This makes
the hovercraft a mechanically efficient machine because it provides
three functions.
First, the ground effect reduces friction between the craft and the
earth’s surface. Second, it acts as a spring suspension to reduce
some of the vertical acceleration effects that arise from travel over
an uneven surface. Third, it provides a safe and comfortable ride at
high speed, whatever the operating environment. The air cushion
can distribute the weight of the hovercraft over almost its entire area
so that the cushion pressure is low.
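A rough, back-of-the-envelope calculation shows why the cushion pressure is low: the craft's weight is spread over the whole cushion area. In the Python sketch below, the 3,400-kilogram figure is the SR.N1 weight quoted earlier in this article, while the cushion area is an assumed value chosen only for illustration.

# Rough cushion-pressure estimate: pressure = weight / cushion area.
mass_kg = 3400.0        # SR.N1 weight quoted earlier in this article
g = 9.81                # gravitational acceleration, in meters per second squared
cushion_area_m2 = 20.0  # assumed cushion area, for illustration only

pressure_pa = mass_kg * g / cushion_area_m2
print(f"Cushion pressure: about {pressure_pa / 1000:.1f} kilopascals")
# Roughly 1.7 kPa -- far below, for example, the roughly 200 kPa in a typical car tire.
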
The basic elements of the air-cushion vehicle are a hull, a propulsion
system, and a lift system. The hull, which accommodates the
crew, passengers, and freight, contains both the propulsion and lift
systems. The propulsion and lift systems can be driven by the same
power plant or by separate power plants. Early designs used only
one unit, but this proved to be a problem when adequate power was
not achieved for movement and lift. Better results are achieved
when two units are used, since far more power is used to lift the vehicle
than to propel it.
For lift, high-speed centrifugal fans are used to drive the air
through jets that are located under the craft. A redesigned aircraft
propeller is used for propulsion. Rudderlike fins and an air fan that
can be swiveled to provide direction are placed at the rear of the
craft.
Several different air systems can be used, depending on whether
a skirt system is used in the lift process. The plenum chamber system,
the peripheral jet system, and several types of recirculating air systems have all been successfully tried without skirting. Avariety
of rigid and flexible skirts have also proved to be satisfactory, depending
on the use of the vehicle.
Skirts are used to hold the air for lift. Skirts were once hung like curtains around hovercraft. Instead of simple curtains to contain the air,
there are now complicated designs that contain the cushion, duct the
air, and even provide a secondary suspension. The materials used in
the skirting have also changed from a rubberized fabric to pure rubber
and nylon and, finally, to neoprene, a lamination of nylon and plastic.
The three basic types of hovercraft are the amphibious, nonamphibious,
and semiamphibious models. The amphibious type can
travel over water and land, whereas the nonamphibious type is restricted
to water travel. The semiamphibious model is also restricted
to water travel but may terminate travel by nosing up on a prepared
ramp or beach. All hovercraft contain built-in buoyancy tanks in the
side skirting as a safety measure in the event that a hovercraft must
settle on the water. Most hovercraft are equipped with gas turbines
and use either propellers or water-jet propulsion.
Impact
Hovercraft are used primarily for short passenger ferry services.
Great Britain was the only nation to produce a large number of hovercraft.
The British built larger and faster craft and pioneered their
successful use as ferries across the English Channel, where they
could reach speeds of 111 kilometers per hour (60 knots) and carry
more than four hundred passengers and almost one hundred vehicles.
France and the former Soviet Union have also effectively demonstrated
hovercraft river travel, and the Soviets have experimented
with military applications as well.
The military adaptations of hovercraft have been more diversified.
Beach landings have been performed effectively, and the United
States used hovercraft for river patrols during the Vietnam War.
Other uses also exist for hovercraft. They can be used as harbor pilot
vessels and for patrolling shores in a variety of police- and customs-
related duties. Hovercraft can also serve as flood-rescue craft
and fire-fighting vehicles. Even a hoverfreighter is being considered.
The air-cushion theory in transport systems is rapidly developing.
It has spread to trains and smaller people movers in many
countries. Their smooth, rapid, clean, and efficient operation makes
hovercraft attractive to transportation designers around the world.
Holography
The invention: A lensless system of three-dimensional photography
that was one of the most important developments in twentieth
century optical science.
The people behind the invention:
Dennis Gabor (1900-1979), a Hungarian-born inventor and
physicist who was awarded the 1971 Nobel Prize in Physics
Emmett Leith (1927- ), a radar researcher who, with Juris
Upatnieks, produced the first laser holograms
Juris Upatnieks (1936- ), a radar researcher who, with
Emmett Leith, produced the first laser holograms
Easter Inspiration
The development of photography in the early 1900’s made possible
the recording of events and information in ways unknown before
the twentieth century: the photographing of star clusters, the
recording of the emission spectra of heated elements, the storing of
data in the form of small recorded images (for example, microfilm),
and the photographing of microscopic specimens, among other
things. Because of its vast importance to the scientist, the science of
photography has developed steadily.
An understanding of the photographic and holographic processes
requires some knowledge of the wave behavior of light. Light is an
electromagnetic wave that, like a water wave, has an amplitude and a
phase. The amplitude corresponds to the wave height, while the
phase indicates which part of the wave is passing a given point at a
given time. A cork floating in a pond bobs up and down as waves
pass under it. The position of the cork at any time depends on both
amplitude and phase: The phase determines on which part of the
wave the cork is floating at any given time, and the amplitude determines
how high or low the cork can be moved. Waves from more
than one source arriving at the cork combine in ways that depend on
their relative phases. If the waves meet in the same phase, they add
and produce a large amplitude; if they arrive out of phase, they subtract and produce a small amplitude. The total amplitude, or intensity,
depends on the phases of the combining waves.
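The in-phase and out-of-phase cases described here can be checked numerically. The Python sketch below superposes two waves of equal amplitude and reports the peak of their sum, first when they are exactly in phase and then when they are half a cycle out of phase; the amplitude and sampling are arbitrary choices made only for the illustration.

import math

def peak_of_sum(amplitude, phase_difference, steps=1000):
    """Peak amplitude of the sum of two equal waves separated by a phase difference (radians)."""
    peak = 0.0
    for i in range(steps):
        t = 2 * math.pi * i / steps
        total = amplitude * math.sin(t) + amplitude * math.sin(t + phase_difference)
        peak = max(peak, abs(total))
    return peak

print(peak_of_sum(1.0, 0.0))      # in phase: the waves add, peak is about 2.0
print(peak_of_sum(1.0, math.pi))  # out of phase: the waves cancel, peak is about 0.0
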
Dennis Gabor, the inventor of holography, was intrigued by the
way in which the photographic image of an object was stored by a
photographic plate but was unable to devote any consistent research
effort to the question until the 1940’s. At that time, Gabor was involved
in the development of the electron microscope. On Easter
morning in 1947, as Gabor was pondering the problem of how to
improve the electron microscope, the solution came to him. He
would attempt to take a poor electron picture and then correct it optically.
The process would require coherent electron beams—that is,
electron waves with a definite phase.
This two-stage method was inspired by the work of Lawrence
Bragg. Bragg had formed the image of a crystal lattice by diffracting
the photographic X-ray diffraction pattern of the original lattice.
This double diffraction process is the basis of the holographic process.
Bragg’s method was limited because of his inability to record
the phase information of the X-ray photograph. Therefore, he could
study only those crystals for which the phase relationship of the reflected
waves could be predicted.
Waiting for the Laser
Gabor devised a way of capturing the phase information after he
realized that adding coherent background to the wave reflected from
an object would make it possible to produce an interference pattern
on the photographic plate. When the phases of the two waves are
identical, a maximum intensity will be recorded; when they are out of
phase, a minimum intensity is recorded. Therefore, what is recorded
in a hologram is not an image of the object but rather the interference
pattern of the two coherent waves. This pattern looks like a collection
of swirls and blank spots. The hologram (or photograph) is then illuminated
by the reference beam, and part of the transmitted light is a
replica of the original object wave. When viewing this object wave,
one sees an exact replica of the original object.
The major impediment at the time in making holograms using
any form of radiation was a lack of coherent sources. For example,
the coherence of the mercury lamp used by Gabor and his assistant Ivor Williams was so short that they were able to make holograms of
only about a centimeter in diameter. The early results were rather
poor in terms of image quality and also had a double image. For this
reason, there was little interest in holography, and the subject lay almost
untouched for more than ten years.
Interest in the field was rekindled after the laser (light amplification
by stimulated emission of radiation) was developed in 1962.
Emmett Leith and Juris Upatnieks, who were conducting radar research
at the University of Michigan, published the first laser holograms
in 1963. The laser was an intense light source with a very
long coherence length. Its monochromatic nature improved the resolution
of the images greatly. Also, there was no longer any restriction
on the size of the object to be photographed.
The availability of the laser allowed Leith and Upatnieks to propose
another improvement in holographic technique. Before 1964,
holograms were made of only thin transparent objects. A small region
of the hologram bore a one-to-one correspondence to a region
of the object. Only a small portion of the image could be viewed at
one time without the aid of additional optical components. Illuminating
the transparency diffusely allowed the whole image to be
seen at one time. This development also made it possible to record
holograms of diffusely reflected three-dimensional objects. Gabor
had seen from the beginning that this should make it possible to create
three-dimensional images.
After the early 1960’s, the field of holography developed very
quickly. Because holography is different from conventional photography,
the two techniques often complement each other. Gabor saw
his idea blossom into a very important technique in optical science.
Impact
The development of the laser and the publication of the first laser
holograms in 1963 caused a blossoming of the new technique in
many fields. Soon, techniques were developed that allowed holograms
to be viewed with white light. It also became possible for holograms
to reconstruct multicolored images. Holographic methods
have been used to map terrain with radar waves and to conduct surveillance
in the fields of forestry, agriculture, and meteorology.
By the 1990’s, holography had become a multimillion-dollar industry,
finding applications in advertising, as an art form, and in security
devices on credit cards, as well as in scientific fields. An alternate
form of holography, also suggested by Gabor, uses sound
waves. Acoustical imaging is useful whenever the medium around
the object to be viewed is opaque to light rays—for example, in
medical diagnosis. Holography has affected many areas of science,
technology, and culture.
Heat pump
The invention:
A device that warms and cools buildings efficiently
and cheaply by moving heat from one area to another.
The people behind the invention:
T. G. N. Haldane, a British engineer
Lord Kelvin (William Thomson, 1824-1907), a British
mathematician, scientist, and engineer
Sadi Carnot (1796-1832), a French physicist and
thermodynamicist
Heart-lung machine
The invention: The first artificial device to oxygenate and circulate
blood during surgery, the heart-lung machine began the era of
open-heart surgery.
The people behind the invention:
John H. Gibbon, Jr. (1903-1974), a cardiovascular surgeon
Mary Hopkinson Gibbon (1905- ), a research technician
Thomas J. Watson (1874-1956), chairman of the board of IBM
T. L. Stokes and J. B. Flick, researchers in Gibbon’s laboratory
Bernard J. Miller (1918- ), a cardiovascular surgeon and
research associate
Cecelia Bavolek, the first human to undergo open-heart surgery
successfully using the heart-lung machine
A Young Woman’s Death
In the first half of the twentieth century, cardiovascular medicine
had many triumphs. Effective anesthesia, antiseptic conditions, and
antibiotics made surgery safer. Blood-typing, anti-clotting agents,
and blood preservatives made blood transfusion practical. Cardiac
catheterization (feeding a tube into the heart), electrocardiography,
and fluoroscopy (visualizing living tissues with an X-ray machine)
made the nonsurgical diagnosis of cardiovascular problems possible.
As of 1950, however, there was no safe way to treat damage or defects
within the heart. To make such a correction, this vital organ’s
function had to be interrupted. The problem was to keep the body’s
tissues alive while working on the heart. While some surgeons practiced
so-called blind surgery, in which they inserted a finger into the
heart through a small incision without observing what they were attempting
to correct, others tried to reduce the body’s need for circulation
by slowly chilling the patient until the heart stopped. Still other
surgeons used “cross-circulation,” in which the patient’s circulation
was connected to a donor’s circulation. All these approaches carried
profound risks of hemorrhage, tissue damage, and death.
In February of 1931, Gibbon witnessed the death of a young woman whose lung circulation was blocked by a blood clot. Because
her blood could not pass through her lungs, she slowly lost
consciousness from lack of oxygen. As he monitored her pulse and
breathing, Gibbon thought about ways to circumvent the obstructed
lungs and straining heart and provide the oxygen required. Because
surgery to remove such a blood clot was often fatal, the woman’s
surgeons operated only as a last resort. Though the surgery took
only six and one-half minutes, she never regained consciousness.
This experience prompted Gibbon to pursue what few people then
considered a practical line of research: a way to circulate and oxygenate
blood outside the body.
A Woman’s Life Restored
Gibbon began the project in earnest in 1934, when he returned to
the laboratory of Edward D. Churchill at Massachusetts General
Hospital for his second surgical research fellowship. He was assisted
by Mary Hopkinson Gibbon. Together, they developed, using
cats, a surgical technique for removing blood from a vein, supplying
the blood with oxygen, and returning it to an artery using tubes inserted
into the blood vessels. Their objective was to create a device
that would keep the blood moving, spread it over a very thin layer
to pick up oxygen efficiently and remove carbon dioxide, and avoid
both clotting and damaging blood cells. In 1939, they reported that
prolonged survival after heart-lung bypass was possible in experimental
animals.
World War II (1939-1945) interrupted the progress of this work; it
was resumed by Gibbon at Jefferson Medical College in 1944. Shortly
thereafter, he attracted the interest of Thomas J. Watson, chairman of
the board of the International Business Machines (IBM) Corporation,
who provided the services of IBM’s experimental physics laboratory
and model machine shop as well as the assistance of staff engineers.
IBM constructed and modified two experimental machines
over the next seven years, and IBM engineers contributed significantly
to the evolution of a machine that would be practical in humans.
Gibbon’s first attempt to use the pump-oxygenator in a human
being was in a fifteen-month-old baby. This attempt failed, not because of a malfunction or a surgical mistake but because of a misdiagnosis.
The child died following surgery because the real problem
had not been corrected by the surgery.
On May 6, 1953, the heart-lung machine was first used successfully
on Cecelia Bavolek. In the six months before surgery, Bavolek
had been hospitalized three times for symptoms of heart failure
when she tried to engage in normal activity. While her circulation
was connected to the heart-lung machine for forty-five minutes, the
surgical team headed by Gibbon was able to close an opening between
her atria and establish normal heart function. Two months
later, an examination of the defect revealed that it was fully closed;
Bavolek resumed a normal life. The age of open-heart surgery had
begun.
Consequences
The heart-lung bypass technique alone could not make open-heart
surgery truly practical. When it was possible to keep tissues
alive by diverting blood around the heart and oxygenating it, other
questions already under investigation became even more critical:
how to prolong the survival of bloodless organs, how to measure
oxygen and carbon dioxide levels in the blood, and how to prolong
anesthesia during complicated surgery. Thus, following the first
successful use of the heart-lung machine, surgeons continued to refine
the methods of open-heart surgery.
The heart-lung apparatus set the stage for the advent of “replacement
parts” for many types of cardiovascular problems. Cardiac
valve replacement was first successfully accomplished in 1960 by
placing an artificial ball valve between the left atrium and ventricle.
In 1957, doctors performed the first coronary bypass surgery, grafting
sections of a leg vein into the heart’s circulation system to divert
blood around clogged coronary arteries. Likewise, the first successful
heart transplant (1967) and the controversial Jarvik-7 artificial
heart implantation (1982) required the ability to stop the heart and
keep the body’s tissues alive during time-consuming and delicate
surgical procedures. Gibbon’s heart-lung machine paved the way
for all these developments.
Hearing aid
The invention: Miniaturized electronic amplifier worn inside the
ears of hearing-impaired persons.
The organization behind the invention:
Bell Labs, the research and development arm of the American
Telephone and Telegraph Company
Trapped in Silence
Until the middle of the twentieth century, people who experienced
hearing loss had little hope of being able to hear sounds without the
use of large, awkward, heavy appliances. For many years, the only
hearing aids available were devices known as ear trumpets. The ear
trumpet tried to compensate for hearing loss by funneling more
sound energy into the ear canal. A wide, bell-like
mouth similar to the bell of a musical trumpet narrowed to a tube that
the user placed in his or her ear. Ear trumpets helped a little, but they
could not truly increase the volume of the sounds heard.
Beginning in the nineteenth century, inventors tried to develop
electrical devices that would serve as hearing aids. The telephone
was actually a by-product of Alexander Graham Bell’s efforts to
make a hearing aid. Following the invention of the telephone, electrical
engineers designed hearing aids that employed telephone
technology, but those hearing aids were only a slight improvement
over the old ear trumpets. They required large, heavy battery packs
and used a carbon microphone similar to the transmitter in a telephone.
More sensitive than purely physical devices such as the ear trumpet,
they could transmit a wider range of sounds but could not amplify
them as effectively as electronic hearing aids now do.
Transistors Make Miniaturization Possible
Two types of hearing aids exist: body-worn and head-worn.
Body-worn hearing aids permit the widest range of sounds to be
heard, but because of the devices’ larger size, many hearing-impaired persons do not like to wear them. Head-worn hearing
aids, especially those worn completely in the ear, are much less conspicuous.
In addition to in-ear aids, the category of head-worn hearing
aids includes both hearing aids mounted in eyeglass frames and
those worn behind the ear.
All hearing aids, whether head-worn or body-worn, consist of
four parts: a microphone to pick up sounds, an amplifier, a receiver,
and a power source. The microphone gathers sound waves and converts
them to electrical signals; the amplifier boosts, or increases,
those signals; and the receiver then converts the signals back into
sound waves. In effect, the hearing aid is a miniature radio. After
the receiver converts the signals back to sound waves, those waves
are directed into the ear canal through an earpiece or ear mold. The
ear mold generally is made of plastic and is custom fitted from an
impression taken from the prospective user’s ear.
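In rough terms, the microphone-amplifier-receiver chain applies a gain to the incoming sound. The Python sketch below is a deliberately simplified model of that chain, not a description of any real hearing-aid circuit: it treats the sound as a list of samples, boosts them by a fixed gain, and limits them to the maximum level the receiver can reproduce; the sample values and gain are assumed for illustration.

def amplify(samples, gain, limit=1.0):
    """Boost microphone samples by a fixed gain, clipping at the receiver's output limit."""
    amplified = []
    for s in samples:
        boosted = s * gain
        # The receiver cannot reproduce levels beyond its maximum output.
        amplified.append(max(-limit, min(limit, boosted)))
    return amplified

quiet_input = [0.01, -0.02, 0.05, -0.08, 0.03]  # assumed, very quiet signal
print(amplify(quiet_input, gain=20))            # louder copy, with the largest peak clipped
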
Effective head-worn hearing aids could not be built until the
electronic circuit was developed in the early 1950’s. The same invention—
the transistor—that led to small portable radios and tape
players allowed engineers to create miniaturized, inconspicuous
hearing aids. Depending on the degree of amplification required,
the amplifier in a hearing aid contains three or more transistors.
Transistors first replaced vacuum tubes in devices such as radios
and phonographs, and then engineers realized that they could be
used in devices for the hearing-impaired.
The research at Bell Labs that led to the invention of the transistor
arose out of military research during World War II. The vacuum tubes
used in, for example, radar installations to amplify the strength of electronic
signals were big, were fragile because they were made of
blown glass, and gave off high levels of heat when they were used.
Transistors, however, made it possible to build solid-state, integrated
circuits. These are made from crystals of semiconducting materials such as germanium
or arsenic alloys and therefore are much less fragile than glass. They
are also extremely small (in fact, some integrated circuits are barely
visible to the naked eye) and give off no heat during use.
The number of transistors in a hearing aid varies depending upon
the amount of amplification required. The first transistor is the most
important for the listener in terms of the quality of sound heard. If the
frequency response is set too high—that is, if the device is too sensitive—the listener will be bothered by distracting background noise.
Theoretically, there is no limit on the amount of amplification that a
hearing aid can be designed to provide, but there are practical limits.
The higher the amplification, the more power is required to operate
the hearing aid. This is why body-worn hearing aids can convey a
wider range of sounds than head-worn devices can. It is the power
source—not the electronic components—that is the limiting factor. A
body-worn hearing aid includes a larger battery pack than can be
used with a head-worn device. Indeed, despite advances in battery
technology, the power requirements of a head-worn hearing aid are
such that a 1.4-volt battery that could power a wristwatch for several
years will last only a few days in a hearing aid.
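The arithmetic behind that comparison is simply capacity divided by current draw. The figures in the sketch below are assumptions chosen to illustrate the orders of magnitude involved; they are not specifications from the text.

    # Illustrative battery-life arithmetic; all figures are assumed, not from the source.
    def battery_life_hours(capacity_mah, draw_ma):
        """Rough battery life: capacity in milliamp-hours divided by draw in milliamps."""
        return capacity_mah / draw_ma

    cell_capacity_mah = 70.0  # assumed capacity of a small 1.4-volt button cell
    watch_draw_ma = 0.002     # a quartz watch draws on the order of a few microamps
    aid_draw_ma = 1.0         # a hearing-aid amplifier draws on the order of a milliamp

    print(f"in a watch: about {battery_life_hours(cell_capacity_mah, watch_draw_ma) / (24 * 365):.1f} years")
    print(f"in a hearing aid: about {battery_life_hours(cell_capacity_mah, aid_draw_ma) / 24:.1f} days")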
Consequences
The invention of the electronic hearing aid made it possible for
many hearing-impaired persons to participate in a hearing world.
Prior to the invention of the hearing aid, hearing-impaired children
often were unable to participate in routine school activities or function
effectively in mainstream society. Instead of being able to live at
home with their families and enjoy the same experiences that were
available to other children their age, often they were forced to attend
special schools operated by the state or by charities.
Hearing-impaired people were singled out as being different and
were limited in their choice of occupations. Although not every
hearing-impaired person can be helped to hear with a hearing aid—
particularly in cases of total hearing loss—the electronic hearing aid
has ended restrictions for many hearing-impaired people. Hearing-impaired
children are now included in public school classes, and
hearing-impaired adults can now pursue occupations from which
they were once excluded.
Today, many deaf and hearing-impaired persons have chosen to
live without the help of a hearing aid. They believe that they are not
disabled but simply different, and they point out that their “disability”
often allows them to appreciate and participate in life in unique
and positive ways. For them, the use of hearing aids is a choice, not a
necessity. For those who choose, hearing aids make it possible to
participate in the hearing world.
Hard disk
The invention: A large-capacity, permanent magnetic storage device
built into most personal computers.
The people behind the invention:
Alan Shugart (1930- ), an engineer who first developed the
floppy disk
Philip D. Estridge (1938?-1985), the director of IBM’s product
development facility
Thomas J. Watson, Jr. (1914-1993), the chief executive officer of
IBM
The Personal Oddity
When the International Business Machines (IBM) Corporation
introduced its first microcomputer, called simply the IBM PC (for
“personal computer”), the occasion was less a dramatic invention
than the confirmation of a trend begun some years before. A number
of companies had introduced microcomputers before IBM; one
of the best known at that time was Apple Computer’s Apple II, for
which software for business and scientific use was quickly developed.
Nevertheless, the microcomputer was quite expensive and
was often looked upon as an oddity, not as a useful tool.
Under the leadership of Thomas J. Watson, Jr., IBM, which had
previously focused on giant mainframe computers, decided to develop
the PC. A design team headed by Philip D. Estridge was assembled
in Boca Raton, Florida, and it quickly developed its first,
pacesetting product. It is an irony of history that IBM anticipated
selling only one hundred thousand or so of these machines, mostly
to scientists and technically inclined hobbyists. Instead, IBM’s product
sold exceedingly well, and its design parameters, as well as its
operating system, became standards.
The earliest microcomputers used a cassette recorder as a means
of mass storage; a floppy disk drive capable of storing approximately
160 kilobytes of data was initially offered only as an option.
While home hobbyists were accustomed to using a cassette recorder for storage purposes, such a system was far too slow and awkward
for use in business and science. As a result, virtually every IBM PC
sold was equipped with at least one 5.25-inch floppy disk drive.
Memory Requirements
All computers require memory of two sorts in order to carry out
their tasks. One type of memory is main memory, or random access
memory (RAM), which is used by the computer’s central processor
to store the data it is using while operating. The memory used for this function is typically built of silicon-based integrated circuits, which have the advantage of speed (allowing the processor to fetch or store data quickly) but the disadvantage of losing, or “forgetting,” data when the electric current is turned off. Further,
such memory generally is relatively expensive.
To reduce costs, another type of memory—long-term storage
memory, known also as “mass storage”—was developed. Mass
storage devices include magnetic media (tape or disk drives) and
optical media (such as the compact disc read-only memory, or CD-ROM).
While the speed with which data may be retrieved from or
stored in such devices is rather slow compared to the central processor’s
speed, a disk drive—the most common form of mass storage
used in PCs—can store relatively large amounts of data quite inexpensively.
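The trade-off can be made concrete with a short experiment. The sketch below is an illustration rather than a benchmark: it touches the same megabyte of data once directly in main memory and once through a file, and the exact timings depend on the machine and on operating-system caching. The point is only that the in-memory pass involves no input or output at all, while the file written to mass storage survives after the program has exited.

    # A rough, machine-dependent illustration of main memory versus mass storage.
    import os
    import tempfile
    import time

    data = b"x" * (1024 * 1024)                 # one megabyte held in RAM

    t0 = time.perf_counter()
    checksum = sum(data[::4096])                # touch the data directly in memory
    ram_seconds = time.perf_counter() - t0

    path = os.path.join(tempfile.gettempdir(), "mass_storage_demo.bin")
    with open(path, "wb") as f:                 # write the same data out to disk
        f.write(data)

    t0 = time.perf_counter()
    with open(path, "rb") as f:                 # read it back through the filesystem
        checksum = sum(f.read()[::4096])
    disk_seconds = time.perf_counter() - t0

    print(f"memory pass: {ram_seconds:.6f} s, file pass: {disk_seconds:.6f} s")
    # The file at `path` persists after the program ends; the in-memory copy does not.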
Early floppy disk drives (so called because the magnetically
treated material on which data are recorded is made of a very flexible
plastic) held 160 kilobytes of data using only one side of the
magnetically coated disk (about eighty pages of normal, double-spaced,
typewritten information). Later developments increased
storage capacities to 360 kilobytes by using both sides of the disk
and later, with increasing technological ability, 1.44 megabytes (millions
of bytes). In contrast, mainframe computers, which are typically
connected to large and expensive tape drive storage systems,
could store gigabytes (thousands of megabytes) of information.
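A rough arithmetic check of the eighty-page figure, using an assumed page size of about 2,000 characters (roughly 250 double-spaced typewritten words), runs as follows.

    # Rough check of the eighty-page claim; the characters-per-page figure is an assumption.
    bytes_per_page = 2000                  # ~250 words x ~8 characters, spaces included
    floppy_bytes = 160 * 1000              # a single-sided 160-kilobyte floppy disk
    print(floppy_bytes // bytes_per_page)  # 80 pages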
While such capacities seem large, the needs of business and scientific
users soon outstripped available space. Since even the mailing
list of a small business or a scientist’s mathematical model of a
chemical reaction easily could require greater storage potential than early PCs allowed, the need arose for a mass storage device that
could accommodate very large files of data.
The answer was the hard disk drive, also known as a “fixed disk
drive,” reflecting the fact that the disk itself is not only rigid but also
permanently installed inside the machine. As early as 1955, IBM had conceived of a fixed, hard magnetic disk as a means of storing computer data, and, under the direction of Alan Shugart in the 1960’s, the floppy disk was developed as well.
As the engineers of IBM’s facility in Boca Raton refined the idea
of the original PC to design the new IBM PC XT, it became clear that
chief among the needs of users was the availability of large-capacity
storage devices. The decision was made to add a 10-megabyte
hard disk drive to the PC. On March 8, 1983, less than two years after
the introduction of its first PC, IBM introduced the PC XT. Like
the original, it was an evolutionary design, not a revolutionary one.
The inclusion of a hard disk drive, however, signaled that mass storage
devices in personal computers had arrived.
Consequences
Above all else, any computer provides a means for storing, ordering,
analyzing, and presenting information. If the personal computer
is to become the information appliance some have suggested
it will be, the ability to manipulate very large amounts of data will
be of paramount concern. Hard disk technology was greeted enthusiastically in the marketplace, and demand for hard drives has grown steadily as their quality has improved and their prices have dropped.
It is easy to understand one reason for such eager acceptance:
convenience. Floppy-bound computer users find themselves frequently
changing (or “swapping”) their disks in order to allow programs
to find the data they need. Moreover, there is a limit to how
much data a single floppy disk can hold. The advantage of a hard
drive is that it allows users to keep seemingly unlimited amounts of
data and programs stored in their machines and readily available.
Also, hard disk drives are capable of finding files and transferring
their contents to the processor much more quickly than a
floppy drive. A user may thus create exceedingly large files, keep them on hand at all times, and manipulate data more quickly than
with a floppy. Finally, while a hard drive is a slow substitute for
main memory, it allows users to enjoy the benefits of larger memories
at significantly lower cost.
The introduction of the PC XT with its 10-megabyte hard drive
was a milestone in the development of the PC. Over the next two decades,
the size of computer hard drives increased dramatically. By
2001, few personal computers were sold with hard drives with less
than three gigabytes of storage capacity, and hard drives with more
than thirty gigabytes were becoming the standard. Indeed, for less
money than a PC XT cost in the mid-1980’s, one could buy a fully
equipped computer with a hard drive holding sixty gigabytes—a
storage capacity equivalent to six thousand 10-megabyte hard drives.
Gyrocompass
The invention: The first practical navigational device that enabled
ships and submarines to stay on course without relying on the
earth’s unreliable magnetic poles.
The people behind the invention:
Hermann Anschütz-Kaempfe (1872-1931), a German inventor
and manufacturer
Jean-Bernard-Léon Foucault (1819-1868), a French experimental
physicist and inventor
Elmer Ambrose Sperry (1860-1930), an American engineer and
inventor
From Toys to Tools
A gyroscope consists of a rapidly spinning wheel mounted in a
frame that enables the wheel to tilt freely in any direction. The
wheel’s angular momentum allows it to maintain its “attitude”
even when the whole device is turned or rotated.
These devices have been used to solve problems arising in such
areas as sailing and navigation. For example, a gyroscope aboard a
ship maintains its orientation even while the ship is rolling. Among
other things, this allows the extent of the roll to be measured accurately.
Moreover, the spin axis of a free gyroscope can be adjusted to
point toward true north. It will (with some exceptions) stay that
way despite changes in the direction of a vehicle in which it is
mounted. Gyroscopic effects were employed in the design of various
objects long before the theory behind them was formally
known. A classic example is a child’s top, which balances, seemingly
in defiance of gravity, as long as it continues to spin. Boomerangs
and flying disks derive stability and accuracy from the spin
imparted by the thrower. Likewise, the accuracy of rifles improved
when barrels were manufactured with internal spiral grooves that
caused the emerging bullet to spin.
In 1852, the French inventor Jean-Bernard-Léon Foucault built
the first gyroscope, a measuring device consisting of a rapidly spinning
wheel mounted within concentric rings that allowed the wheel to move freely about two axes. This device, like the Foucault pendulum,
was used to demonstrate the rotation of the earth around its
axis, since the spinning wheel, which is not fixed, retains its orientation
in space while the earth turns under it. The gyroscope had a related and interesting property: as it continued to spin, the earth’s rotation caused its axis to turn gradually until it was oriented
parallel to the earth’s axis, that is, in a north-south direction. It
is this property that enables the gyroscope to be used as a compass.
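The stubbornness of a spinning wheel can be stated compactly with the standard textbook relations, which are added here as a sketch and are not part of the original account: a rotor with moment of inertia I spinning at angular speed ω carries angular momentum L = Iω, and an applied torque τ does not simply tip the wheel but changes the direction of L, so the spin axis precesses at a rate Ω.

\[
L = I\omega, \qquad \tau = \frac{dL}{dt}, \qquad \Omega \approx \frac{\tau}{I\omega}.
\]

The faster the rotor spins, the larger Iω becomes and the slower the precession for a given torque, which is why a rapidly spinning wheel holds its orientation so well; a gyrocompass turns the same effect to advantage, arranging a small steady torque so that the earth’s rotation slowly draws the spin axis toward the north-south meridian.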
When Magnets Fail
In 1904, Hermann Anschütz-Kaempfe, a German manufacturer
working in the Kiel shipyards, became interested in the navigation
problems of submarines used in exploration under the polar ice cap.
By 1905, efficient working submarines were a reality, and it was evident
to all major naval powers that submarines would play an increasingly
important role in naval strategy.
Submarine navigation posed problems, however, that could not
be solved by instruments designed for surface vessels. A submarine
needs to orient itself under water in three dimensions; it has no automatic
horizon with respect to which it can level itself. Navigation
by means of stars or landmarks is impossible when the submarine is
submerged. Furthermore, in an enclosed metal hull containing machinery
run by electricity, a magnetic compass is worthless. To a
lesser extent, increasing use of metal, massive moving parts, and
electrical equipment had also rendered the magnetic compass unreliable
in conventional surface battleships.
It made sense for Anschütz-Kaempfe to use the gyroscopic effect
to design an instrument that would enable a ship to maintain its
course while under water. Yet producing such a device would not be
easy. First, it needed to be suspended in such a way that it was free to
turn in any direction with as little mechanical resistance as possible.
At the same time, it had to be able to resist the inevitable pitching and
rolling of a vessel at sea. Finally, a continuous power supply was required
to keep the gyroscopic wheels spinning at high speed.
The original Anschütz-Kaempfe gyrocompass consisted of a pair
of spinning wheels driven by an electric motor. The device was connected
to a compass card visible to the ship’s navigator. Motor, gyroscope, and suspension system were mounted in a frame that allowed
the apparatus to remain stable despite the pitch and roll of the ship.
In 1906, the German navy installed a prototype of the Anschütz-
Kaempfe gyrocompass on the battleship Undine and subjected it to
exhaustive tests under simulated battle conditions, sailing the ship
under forced draft and suddenly reversing the engines, changing the
position of heavy turrets and other mechanisms, and firing heavy
guns. In conditions under which a magnetic compass would have
been worthless, the gyrocompass proved a satisfactory navigational
tool, and the results were impressive enough to convince the German
navy to undertake installation of gyrocompasses in submarines and
heavy battleships, including the battleship Deutschland.
Elmer Ambrose Sperry, a New York inventor intimately associated
with pioneer electrical development, was independently working on a design for a gyroscopic compass at about the same time.
In 1907, he patented a gyrocompass consisting of a single rotor
mounted within two concentric shells, suspended by fine piano
wire from a frame mounted on gimbals. The rotor of the Sperry
compass operated in a vacuum, which enabled it to rotate more
rapidly. The Sperry gyrocompass was in use on larger American
battleships and submarines on the eve of World War I (1914-1918).
Impact
The ability to navigate submerged submarines was of critical
strategic importance in World War I. Initially, the German navy
had an advantage both in the number of submarines at its disposal
and in their design and maneuverability. The German U-boat fleet
declared all-out war on Allied shipping, and, although their efforts
to blockade England and France were ultimately unsuccessful, the
tremendous toll they inflicted helped maintain the German position
and prolong the war. To a submarine fleet operating throughout
the Atlantic and in the Caribbean, as well as in near-shore European
waters, effective long-distance navigation was critical.
Gyrocompasses were standard equipment on submarines and
battleships and, increasingly, on larger commercial vessels during
World War I, World War II (1939-1945), and the period between the
wars. The devices also found their way into aircraft, rockets, and
guided missiles. Although the compasses were made more accurate
and easier to use, the fundamental design differed little from that invented
by Anschütz-Kaempfe.
05 July 2009
Geothermal power
The invention: Energy generated from the earth’s natural hot
springs.
The people behind the invention:
Prince Piero Ginori Conti (1865-1939), an Italian nobleman and
industrialist
Sir Charles Parsons (1854-1931), an English engineer
B. C. McCabe, an American businessman
Developing a Practical System
The first successful use of geothermal energy was at Larderello in
northern Italy. The Larderello geothermal field, located near the city
of Pisa about 240 kilometers northwest of Rome, contains many hot
springs and fumaroles (steam vents). In 1777, these springs were
found to be rich in boron, and in 1818, Francesco de Larderel began
extracting the useful mineral borax from them. Shortly after 1900,
Prince Piero Ginori Conti, director of the Larderello borax works,
conceived the idea of using the steam for power production. An experimental
electrical power plant was constructed at Larderello in
1904 to provide electric power to the borax plant. After this initial
experiment proved successful, a 250-kilowatt generating station
was installed in 1913 and commercial power production began.
As the Larderello field grew, additional geothermal sites throughout
the region were prospected and tapped for power. Power production
grew steadily until the 1940’s, when production reached
130 megawatts; however, the Larderello power plants were destroyed
late in World War II (1939-1945). After the war, the generating
plants were rebuilt, and they were producing more than 400
megawatts by 1980.
The Larderello power plants encountered many of the technical
problems that were later to concern other geothermal facilities. For
example, hydrogen sulfide in the steam was highly corrosive to copper,
so the Larderello power plant used aluminum for electrical connections
much more than did conventional power plants of the time. Also, the low pressure of the steam in early wells at Larderello
presented problems. The first units simply used the steam to drive a generator and vented the spent steam into the atmosphere. A system of this sort, called a “noncondensing system,” is useful for small generators but is not efficient enough to produce large amounts of power.
Most steam engines derive power not only from the pressure of
the steam but also from the vacuum created when the steam is condensed
back to water. Geothermal systems that generate power
from condensation, as well as direct steam pressure, are called “condensing
systems.” Most large geothermal generators are of this
type. Condensation of geothermal steam presents special problems
not present in ordinary steam engines: There are other gases present
that do not condense. Instead of a vacuum, condensation of steam
contaminated with other gases would result in only a limited drop
in pressure and, consequently, very low efficiency.
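A rough way to see why condensing matters is to compare the ideal (Carnot) efficiency ceilings of the two arrangements. The temperatures in the sketch below are assumptions chosen for illustration, not Larderello’s operating figures; real plants fall well below these ceilings, and non-condensable gases raise the effective exhaust pressure and temperature further.

    # Carnot efficiency ceiling, 1 - T_cold / T_hot, with temperatures converted to kelvins.
    def carnot_limit(t_hot_c, t_cold_c):
        return 1.0 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

    t_steam_c = 200.0     # assumed wellhead steam temperature
    t_vent_c = 100.0      # noncondensing: exhaust vented at the atmospheric boiling point
    t_condenser_c = 46.0  # condensing: exhaust condensed under a partial vacuum

    print(f"noncondensing ceiling: {carnot_limit(t_steam_c, t_vent_c):.0%}")       # about 21%
    print(f"condensing ceiling:    {carnot_limit(t_steam_c, t_condenser_c):.0%}")  # about 33%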
Initially, the operators of Larderello tried to use the steam to heat
boilers that would, in turn, generate pure steam. Eventually, a device
was developed that removed most of the contaminating gases from
the steam. Although later wells at Larderello and other geothermal
fields produced steam at greater pressure, these engineering innovations
improved the efficiency of any geothermal power plant.
Expanding the Idea
In 1913, the English engineer Sir Charles Parsons proposed drilling
an extremely deep (12-kilometer) hole to tap the earth’s deep
heat. Power from such a deep hole would not come from natural
steam as at Larderello but would be generated by pumping fluid
into the hole and generating steam (as hot as 500 degrees Celsius) at
the bottom. In modern terms, Parsons proposed tapping “hot dry rock”
geothermal energy. (No such plant has been commercially operated
yet, but research is being actively pursued in several countries.)
The first use of geothermal energy in the United States was for direct
heating. In 1890, the municipal water company of Boise, Idaho,
began supplying hot water from a geothermal well. Water was
piped from the well to homes and businesses along appropriately
named Warm Springs Avenue. At its peak, the system served more than four hundred customers, but as cheap natural gas became
available, the number declined.
Although Larderello was the first successful geothermal electric
power plant, the modern era of geothermal electric power began
with the opening of the Geysers Geothermal Field in California.
Early attempts began in the 1920’s, but it was not until 1955 that B.
C. McCabe, a Los Angeles businessman, leased 14.6 square kilometers
in the Geysers area and founded the Magma Power Company.
The first 12.5-megawatt generator was installed at the Geysers in
1960, and production increased steadily from then on. The Geysers
surpassed Larderello as the largest producing geothermal field in
the 1970’s, and more than 1,000 megawatts were being generated by
1980. By the end of 1980, geothermal plants had been installed in
thirteen countries, with a total capacity of almost 2,600 megawatts,
and projects with a total capacity of more than 15,000 megawatts
were being planned in more than twenty countries.
Impact
Geothermal power has many attractive features. Because the
steam is naturally heated and under pressure, generating equipment
can be simple, inexpensive, and quickly installed. Equipment
and installation costs are offset by savings in fuel. It is economically
practical to install small generators, a fact that makes geothermal
plants attractive in remote or underdeveloped areas. Most important
to a world faced with a variety of technical and environmental
problems connected with fossil fuels, geothermal power does not
deplete fossil fuel reserves, produces little pollution, and contributes
little to the greenhouse effect.
Despite its attractive features, geothermal power has some limitations.
Geologic settings suitable for easy geothermal power production
are rare; there must be a hot rock or magma body close to
the surface. Although it is technically possible to pump water from
an external source into a geothermal well to generate steam, most
geothermal sites require a plentiful supply of natural underground
water that can be tapped as a source of steam. In contrast, fossil-fuel
generating plants can be at any convenient location.
Genetically engineered insulin
The invention: Artificially manufactured human insulin (Humulin)
as a medication for people suffering from diabetes.
The people behind the invention:
Irving S. Johnson (1925- ), an American zoologist who was
vice president of research at Eli Lilly Research Laboratories
Ronald E. Chance (1934- ), an American biochemist at Eli
Lilly Research Laboratories
What Is Diabetes?
Carbohydrates (sugars and related chemicals) are the main food
and energy source for humans. In wealthy countries such as the
United States, more than 50 percent of the food people eat is made
up of carbohydrates, while in poorer countries the carbohydrate
content of diets is higher, from 70 to 90 percent.
Normally, most carbohydrates that a person eats are used (or metabolized)
quickly to produce energy. Carbohydrates not needed for
energy are either converted to fat or stored as a glucose polymer
called “glycogen.” Most adult humans carry about a pound of body
glycogen; this substance is broken down to produce energy when it
is needed.
Certain diseases prevent the proper metabolism and storage of
carbohydrates. The most common of these diseases is diabetes mellitus,
usually called simply “diabetes.” It is found in more than seventy
million people worldwide. Diabetic people cannot produce or
use enough insulin, a hormone secreted by the pancreas. When their
condition is not treated, the eyes may deteriorate to the point of
blindness. The kidneys may stop working properly, blood vessels
may be damaged, and the person may fall into a coma and die. In
fact, diabetes is the third most common killer in the United States.
Most of the problems surrounding diabetes are caused by high levels
of glucose in the blood. Cataracts often form in diabetics, as excess
glucose is deposited in the lens of the eye.
Important symptoms of diabetes include constant thirst, excessive urination, and large amounts of sugar in the blood and in the
urine. The glucose tolerance test (GTT) is the best way to find out
whether a person is suffering from diabetes. People given a GTT are
first told to fast overnight. In the morning their blood glucose level
is measured; then they are asked to drink about a fourth of a pound
of glucose dissolved in water. During the next four to six hours, the
blood glucose level is measured repeatedly. In nondiabetics, glucose
levels do not rise above a certain amount during a GTT, and the
level drops quickly as the glucose is assimilated by the body. In diabetics,
the blood glucose levels rise much higher and do not drop as
quickly. The extra glucose then shows up in the urine.
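The way such a test might be read can be sketched in a few lines. The threshold values below are commonly cited clinical cutoffs included only as assumptions for illustration; they are not taken from the text, and an actual diagnosis is a physician’s job.

    # Illustrative GTT interpretation; the thresholds (mg/dL) are assumptions, not from the source.
    def interpret_gtt(fasting_mg_dl, two_hour_mg_dl):
        if fasting_mg_dl >= 126 or two_hour_mg_dl >= 200:
            return "consistent with diabetes: glucose stays high"
        if two_hour_mg_dl >= 140:
            return "impaired glucose tolerance"
        return "normal: glucose is assimilated quickly"

    print(interpret_gtt(fasting_mg_dl=90, two_hour_mg_dl=115))
    print(interpret_gtt(fasting_mg_dl=135, two_hour_mg_dl=260))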
Treating Diabetes
Until the 1920’s, diabetes could be controlled only through a diet
very low in carbohydrates, and this treatment was not always successful.
Then Sir Frederick G. Banting and Charles H. Best found a
way to prepare purified insulin from animal pancreases and gave it
to patients. This gave diabetics their first chance to live a fairly normal
life. Banting and his coworkers won the 1923 Nobel Prize in
Physiology or Medicine for their work.
The usual treatment for diabetics became regular shots of insulin.
Drug companies took the insulin from the pancreases of cattle and
pigs slaughtered by the meat-packing industry. Unfortunately, animal
insulin has two disadvantages. First, about 5 percent of diabetics
are allergic to it and can have severe reactions. Second, the world
supply of animal pancreases goes up and down depending on how
much meat is being bought. Between 1970 and 1975, the supply of
insulin fell sharply as people began to eat less red meat, yet the
numbers of diabetics continued to increase. So researchers began to
look for a better way to supply insulin.
Studying pancreases of people who had donated their bodies to
science, researchers found that human insulin did not cause allergic
reactions. Scientists realized that it would be best to find a chemical
or biological way to prepare human insulin, and pharmaceutical
companies worked hard toward this goal. Eli Lilly and Company
was the first to succeed, and on May 14, 1982, it filed a new drug application
with the Food and Drug Administration (FDA) for the human insulin preparation it named “Humulin.”
Humulin is made by genetic engineering. Irving S. Johnson, who
worked on the development of Humulin, described Eli Lilly’s method
for producing Humulin. The common bacterium Escherichia coli
is used. Two strains of the bacterium are produced by genetic engineering:
The first strain is used to make a protein called an “A
chain,” and the second strain is used to make a “B chain.” After the
bacteria are harvested, the A and B chains are removed and purified
separately. Then the two chains are combined chemically. When
they are purified once more, the result is Humulin, which has been
proved by Ronald E. Chance and his Eli Lilly coworkers to be chemically,
biologically, and physically identical to human insulin.
Consequences
The FDA and other regulatory agencies around the world approved
genetically engineered human insulin in 1982. Humulin
does not trigger allergic reactions, and its supply does not fluctuate.
It has brought an end to the fear that there would be a worldwide
shortage of insulin.
Humulin is important as well in being the first genetically engineered
industrial chemical. It began an era in which such advanced
technology could be a source for medical drugs, chemicals used in
farming, and other important industrial products. Researchers hope
that genetic engineering will help in the understanding of cancer
and other diseases, and that it will lead to ways to grow enough
food for a world whose population continues to rise.
29 June 2009
Genetic “fingerprinting”
The invention: A technique for using the unique characteristics of
each human being’s DNA to identify individuals, establish connections
among relatives, and identify criminals.
The people behind the invention:
Alec Jeffreys (1950- ), an English geneticist
Victoria Wilson (1950- ), an English geneticist
Swee Lay Thein (1951- ), a biochemical geneticist
Microscopic Fingerprints
In 1985, Alec Jeffreys, a geneticist at the University of Leicester in
England, developed a method of deoxyribonucleic acid (DNA)
analysis that provides a visual representation of the human genetic
structure. Jeffreys’s discovery had an immediate, revolutionary impact
on problems of human identification, especially the identification
of criminals. Whereas earlier techniques, such as conventional
blood typing, provide evidence that is merely exclusionary (indicating
only whether a suspect could or could not be the perpetrator of a
crime), DNA fingerprinting provides positive identification.
For example, under favorable conditions, the technique can establish
with virtual certainty whether a given individual is a murderer
or rapist. The applications are not limited to forensic science;
DNA fingerprinting can also establish definitive proof of parenthood
(paternity or maternity), and it is invaluable in providing
markers for mapping disease-causing genes on chromosomes. In
addition, the technique is utilized by animal geneticists to establish
paternity and to detect genetic relatedness between social groups.
DNA fingerprinting (also referred to as “genetic fingerprinting”)
is a sophisticated technique that must be executed carefully to produce
valid results. The technical difficulties arise partly from the
complex nature of DNA. DNA, the genetic material responsible for
heredity in all higher forms of life, is an enormously long, double-stranded
molecule composed of four different units called “bases.”
The bases on one strand of DNA pair with complementary bases on the other strand. A human being contains twenty-three pairs of
chromosomes; one member of each chromosome pair is inherited
from the mother, the other from the father. The order, or sequence, of
bases forms the genetic message, which is called the “genome.” Scientists
did not know the sequence of bases in any sizable stretch of
DNA prior to the 1970’s because they lacked the molecular tools to
split DNA into fragments that could be analyzed. This situation
changed with the advent of biotechnology in the mid-1970’s.
The door to DNA analysis was opened with the discovery of bacterial
enzymes called “DNA restriction enzymes.” A restriction enzyme
binds to DNA whenever it finds a specific short sequence of
base pairs (analogous to a code word), and it splits the DNA at a defined
site within that sequence. A single enzyme finds millions of
cutting sites in human DNA, and the resulting fragments range in
size from tens of base pairs to hundreds or thousands. The fragments
are exposed to a radioactive DNA probe, which can bind to
specific complementary DNA sequences in the fragments. X-ray
film detects the radioactive pattern. The developed film, called an
“autoradiograph,” shows a pattern of DNA fragments, which is
similar to a bar code and can be compared with patterns from
known subjects.
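The cutting-and-sizing step lends itself to a short sketch. The code below is an illustration of the idea rather than laboratory software: it uses the real recognition site of one well-known enzyme, EcoRI (GAATTC, cut after the G), on a made-up stretch of sequence, and reports the fragment lengths that would appear as bands on the autoradiograph.

    # Illustrative restriction digest: cut at every recognition site, report fragment lengths.
    def digest(dna, site="GAATTC", cut_offset=1):
        cuts, start = [], 0
        while True:
            i = dna.find(site, start)
            if i == -1:
                break
            cuts.append(i + cut_offset)   # EcoRI cuts between the G and the first A
            start = i + 1
        bounds = [0] + cuts + [len(dna)]
        return [bounds[k + 1] - bounds[k] for k in range(len(bounds) - 1)]

    sample = "ATCGGAATTCTTAGGCCGAATTCATTTGAATTCGG"   # invented sequence for illustration
    print(digest(sample))                            # [5, 13, 10, 7] -- the band pattern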
The Presence of Minisatellites
The uniqueness of a DNA fingerprint depends on the fact that,
with the exception of identical twins, no two human beings have
identical DNA sequences. Of the three billion base pairs in human
DNA, many will differ from one person to another.
In 1985, Jeffreys and his coworkers, Victoria Wilson at the University
of Leicester and Swee Lay Thein at the John Radcliffe Hospital
in Oxford, discovered a way to produce a DNA fingerprint.
Jeffreys had found previously that human DNA contains many repeated
minisequences called “minisatellites.” Minisatellites consist
of sequences of base pairs repeated in tandem, and the number of
repeated units varies widely from one individual to another. Every
person, with the exception of identical twins, has a different number
of tandem repeats and, hence, different lengths of minisatellite
DNA. By using two labeled DNA probes to detect two different minisatellite sequences, Jeffreys obtained a unique fragment band
pattern that was completely specific for an individual.
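A toy sketch of the idea follows. The repeat unit here is invented, but it shows how two people who share the same minisatellite core can carry different numbers of tandem copies, and therefore yield fragments of different lengths from the same locus.

    # Illustrative minisatellite comparison; the repeat unit and sequences are invented.
    def tandem_repeat_count(dna, unit):
        """Longest run of back-to-back copies of `unit` found anywhere in `dna`."""
        best = 0
        for i in range(len(dna)):
            n = 0
            while dna.startswith(unit, i + n * len(unit)):
                n += 1
            best = max(best, n)
        return best

    unit = "AGGGCTGGAGG"                     # a made-up minisatellite core unit
    person_a = "TTAC" + unit * 7 + "GGTA"    # seven tandem copies at this locus
    person_b = "TTAC" + unit * 11 + "GGTA"   # eleven tandem copies at the same locus

    print(tandem_repeat_count(person_a, unit), tandem_repeat_count(person_b, unit))  # 7 11
    print(len(person_a), len(person_b))      # different lengths, hence different band positions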
The power of the technique derives from the law of chance, which states that the probability of several independent events all occurring together is the product of their separate probabilities. As Jeffreys discovered,
the likelihood of two unrelated people having completely
identical DNA fingerprints is extremely small—less than one in ten
trillion. Given the population of the world, it is clear that the technique
can distinguish any one person from everyone else. Jeffreys
called his band patterns “DNA fingerprints” because of their ability
to individualize. As he stated in his landmark research paper, published
in the English scientific journal Nature in 1985, probes to
minisatellite regions of human DNA produce “DNA ‘fingerprints’
which are completely specific to an individual (or to his or her identical
twin) and can be applied directly to problems of human identification,
including parenthood testing.”
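The arithmetic behind that claim is nothing more than repeated multiplication. The per-band probability and band count in the sketch below are assumptions for illustration rather than Jeffreys’s published figures, but they show how quickly independent matches compound into near-impossibility.

    # Multiplication rule for independent events; both figures are illustrative assumptions.
    p_single_band = 0.25   # assumed chance that one band matches between unrelated people
    band_count = 20        # assumed number of scorable bands in a fingerprint
    p_full_match = p_single_band ** band_count
    print(f"{p_full_match:.1e}")   # about 9.1e-13, roughly one chance in a trillion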
Consequences
In addition to being used in human identification, DNA fingerprinting
has found applications in medical genetics. In the search
for a cause, a diagnostic test for, and ultimately the treatment of an
inherited disease, it is necessary to locate the defective gene on a human
chromosome. Gene location is accomplished by a technique
called “linkage analysis,” in which geneticists use marker sections
of DNA as reference points to pinpoint the position of a defective
gene on a chromosome. The minisatellite DNA probes developed
by Jeffreys provide a potent and valuable set of markers that are of
great value in locating disease-causing genes. Soon after its discovery,
DNA fingerprinting was used to locate the defective genes responsible
for several diseases, including fetal hemoglobin abnormality
and Huntington’s disease.
Genetic fingerprinting also has had a major impact on genetic
studies of higher animals. Because DNA sequences are conserved in
evolution, humans and other vertebrates have many sequences in
common. This commonality enabled Jeffreys to use his probes to
human minisatellites to bind to the DNA of many different vertebrates, ranging from mammals to birds, reptiles, amphibians, and
fish; this made it possible for him to produce DNA fingerprints of
these vertebrates. In addition, the technique has been used to discern
the mating behavior of birds, to determine paternity in zoo primates,
and to detect inbreeding in imperiled wildlife. DNA fingerprinting
can also be applied to animal breeding problems, such as
the identification of stolen animals, the verification of semen samples
for artificial insemination, and the determination of pedigree.
The technique is not foolproof, however, and results may be far
from ideal. Especially in the area of forensic science, there was a
rush to use the tremendous power of DNA fingerprinting to identify
a purported murderer or rapist, and the need for scientific standards
was often neglected. Some problems arose because forensic
DNA fingerprinting in the United States is generally conducted in
private, unregulated laboratories. In the absence of rigorous scientific
controls, the DNA fingerprint bands of two completely unknown
samples cannot be matched precisely, and the results may be
unreliable.