12 August 2009
Laser-diode recording process
The invention: Video and audio playback system that uses a low-power
laser to decode information digitally stored on reflective
discs.
The organization behind the invention:
The Philips Corporation, a Dutch electronics firm
The Development of Digital Systems
Since the advent of the computer age, it has been the goal of
many equipment manufacturers to provide reliable digital systems
for the storage and retrieval of video and audio programs. A need
for such devices was perceived for several reasons. Existing storage
media (movie film and 12-inch, vinyl, long-playing records) were
relatively large and cumbersome to manipulate and were prone to
degradation, breakage, and unwanted noise. Thus, during the late
1960’s, two different methods for storing video programs on disc
were invented. A mechanical system was demonstrated by the
Telefunken Company, while the Radio Corporation of America
(RCA) introduced an electrostatic device (a device that used static
electricity). The first commercially successful system, however, was
developed during the mid-1970’s by the Philips Corporation.
Philips devoted considerable resources to creating a digital video
system, read by light beams, which could reproduce an entire feature-
length film from one 12-inch videodisc. An integral part of this
innovation was the fabrication of a device small enough and fast
enough to read the vast amounts of greatly compacted data stored
on the 12-inch disc without introducing unwanted noise. Although
Philips was aware of the other formats, the company opted to use an
optical scanner with a small “semiconductor laser diode” to retrieve
the digital information. The laser diode is only a fraction of a millimeter
in size, operates quite efficiently with high amplitude and relatively
low power (0.1 watt), and can be used continuously. Because
this configuration operates at a high frequency, its information-carrying
capacity is quite large.
Although the digital videodisc system (called “laservision”) works
well, the low level of noise and the clear images offered by this system
were masked by the low quality of the conventional television
monitors on which they were viewed. Furthermore, the high price
of the playback systems and the discs made them noncompetitive
with the videocassette recorders (VCRs) that were then capturing
the market for home systems. VCRs had the additional advantage
that programs could be recorded or copied easily. The Philips Corporation
turned its attention to utilizing this technology in an area
where low noise levels and high quality would be more readily apparent—
audio disc systems. By 1979, the company had perfected the basic
compact disc (CD) system, which soon revolutionized the world of
stereophonic home systems.
Reading Digital Discs with Laser Light
Digital signals (signals composed of numbers) are stored on
discs as “pits” impressed into the plastic disc and then coated with a
thin reflective layer of aluminum. A laser beam, manipulated by
delicate, fast-moving mirrors, tracks and reads the digital information
as changes in light intensity. These data are then converted to a
varying electrical signal that contains the video or audio information.
The data are then recovered by means of a sophisticated
pickup that consists of the semiconductor laser diode, a polarizing
beam splitter, an objective lens, a collective lens system, and a
photodiode receiver. The beam from the laser diode is focused by a
collimator lens (a lens that collects and focuses light) and then
passes through the polarizing beam splitter (PBS). This device acts
like a one-way mirror mounted at 45 degrees to the light path. Light
from the laser passes through the PBS as if it were a window, but the
light emerges in a polarized state (which means that the vibration of
the light takes place in only one plane). For the beam reflected from
the CD surface, however, the PBS acts like a mirror, since the reflected
beam has an opposite polarization. The light is thus deflected
toward the photodiode detector. The objective lens is needed
to focus the light onto the disc surface. On the outer surface of the
transparent disc, the main spot of light has a diameter of 0.8 millimeter,
which narrows to only 0.0017 millimeter at the reflective surface. At the surface, the spot is about three times the size of the microscopic
pits (0.0005 millimeter).
The data encoded on the disc determine the relative intensity of
the reflected light, on the basis of the presence or absence of pits.
When the reflected laser beam enters the photodiode, a modulated
light beam is changed into a digital signal that becomes an analog
(continuous) audio signal after several stages of signal processing
and error correction.
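The pickup’s job, as described above, reduces to reading a pattern of
pits as changes in reflected light intensity and turning that pattern
back into bits. The short Python sketch below illustrates only that
idea; the pit pattern, reflectivity values, and threshold are invented
for illustration, and real discs use a more elaborate channel code in
which pit edges, not the pits themselves, carry the data.

# Toy model of an optical pickup: pits return less light than the flat
# "lands" between them, and the photodiode recovers the stored bits by
# thresholding the measured intensity.

stored_bits = [1, 0, 0, 1, 1, 0, 1]      # hypothetical data on the disc

PIT_REFLECTIVITY = 0.3    # assumed relative intensity over a pit
LAND_REFLECTIVITY = 1.0   # assumed relative intensity over a land

def reflected_intensity(bit):
    """Light returned to the photodiode for one position on the track."""
    return PIT_REFLECTIVITY if bit == 1 else LAND_REFLECTIVITY

def photodiode_decode(intensities, threshold=0.65):
    """Convert the modulated light back into a digital signal."""
    return [1 if level < threshold else 0 for level in intensities]

signal = [reflected_intensity(b) for b in stored_bits]
recovered = photodiode_decode(signal)
assert recovered == stored_bits
print(recovered)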
Consequences
The development of the semiconductor laser diode and associated
circuitry for reading stored information has made CD audio
systems practical and affordable. These systems can offer the quality
of a live musical performance with a clarity that is undisturbed
by noise and distortion. Digital systems also offer several other significant
advantages over analog devices. The dynamic range (the
difference between the softest and the loudest signals that can be
stored and reproduced) is considerably greater in digital systems. In
addition, digital systems can be copied precisely; the signal is not
degraded by copying, as is the case with analog systems. Finally,
error-correcting codes can be used to detect and correct errors in
transmitted or reproduced digital signals, allowing greater precision
and a higher-quality output sound.
Besides laser video systems, there are many other applications
for laser-read CDs. Compact disc read-only memory (CD-ROM) is
used to store computer text. One standard CD can store 500 megabytes
of information, which is about twenty times the storage of a
hard-disk drive on a typical home computer. Compact disc systems
can also be integrated with conventional televisions (called CD-V)
to present twenty minutes of sound and five minutes of sound with
picture. Finally, CD systems connected with a computer (CD-I) mix
audio, video, and computer programming. These devices allow the
user to stop at any point in the program, request more information,
and receive that information as sound with graphics, film clips, or
as text on the screen.
Laser
The invention: Taking its name from the acronym for light amplification
by the stimulated emission of radiation, a laser is a
beam of electromagnetic radiation that is monochromatic, highly
directional, and coherent. Lasers have found multiple applications
in electronics, medicine, and other fields.
The people behind the invention:
Theodore Harold Maiman (1927- ), an American physicist
Charles Hard Townes (1915- ), an American physicist who
was a cowinner of the 1964 Nobel Prize in Physics
Arthur L. Schawlow (1921-1999), an American physicist,
cowinner of the 1981 Nobel Prize in Physics
Mary Spaeth (1938- ), the American inventor of the tunable
laser
Coherent Light
Laser beams differ from other forms of electromagnetic radiation
in consisting of a single wavelength, being highly directional,
and having waves whose crests and troughs are aligned. A laser
beam launched from Earth has produced a spot a few kilometers
wide on the Moon, nearly 400,000 kilometers away. Ordinary light
would have spread much more and produced a spot several times
wider than the Moon. Laser light can also be concentrated so as to
yield an enormous intensity of energy, more than that of the surface
of the Sun, an impossibility with ordinary light.
In order to appreciate the difference between laser light and ordinary
light, one must examine how light of any kind is produced. An
ordinary light bulb contains atoms of gas. For the bulb to light up,
these atoms must be excited to a state of energy higher than their
normal, or ground, state. This is accomplished by sending a current
of electricity through the bulb; the current jolts the atoms into the
higher-energy state. This excited state is unstable, however, and the
atoms will spontaneously return to their ground state by ridding
themselves of excess energy.
As these atoms emit energy, light is produced. The light emitted
by a lamp full of atoms is disorganized and emitted in all directions
randomly. This type of light, common to all ordinary sources, from
fluorescent lamps to the Sun, is called “incoherent light.”
Laser light is different. The excited atoms in a laser emit their excess
energy in a unified, controlled manner. The atoms remain in the
excited state until there are a great many excited atoms. Then, they
are stimulated to emit energy, not independently, but in an organized
fashion, with all their light waves traveling in the same direction,
crests and troughs perfectly aligned. This type of light is called
“coherent light.”
Theory to Reality
In 1958, Charles Hard Townes of Columbia University, together
with Arthur L. Schawlow, explored the requirements of the laser in
a theoretical paper. In the Soviet Union, F. A. Butayeva and V. A.
Fabrikant had amplified light in 1957 using mercury; however, their
work was not published for two years, and then not in a
scientific journal. The work of the Soviet scientists, therefore, received virtually no attention in the Western world.
In 1960, Theodore Harold Maiman constructed the first laser in
the United States using a single crystal of synthetic pink ruby,
shaped into a cylindrical rod about 4 centimeters long and 0.5 centimeter
across. The ends, polished flat and made parallel to within
about a millionth of a centimeter, were coated with silver to make
them mirrors.
It is a property of stimulated emission that stimulated light
waves will be aligned exactly (crest to crest, trough to trough, and
with respect to direction) with the radiation that does the stimulating.
From the group of excited atoms, one atom returns to its ground state, emitting light. That light hits one of the other excited atoms and
stimulates it to fall to its ground state and emit light. The two light
waves are exactly in step. The light from these two atoms hits other
excited atoms, which respond in the same way, “amplifying” the total
sum of light.
If the first atom emits light in a direction parallel to the length of
the crystal cylinder, the mirrors at both ends bounce the light waves
back and forth, stimulating more light and steadily building up an
increasing intensity of light. The mirror at one end of the cylinder is
constructed to let through a fraction of the light, enabling the light to
emerge as a straight, intense, narrow beam.
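The buildup of light between the two mirrors, and the fraction that
leaks through the partially transmitting mirror as the output beam, can
be pictured numerically. The Python sketch below is a rough
illustration with invented gain and transmission figures, not a model
of Maiman’s ruby rod.

# Crude picture of light bouncing between the two mirrors of a laser
# cavity: each pass through the excited medium multiplies the intensity
# by a gain factor, and one mirror lets a small fraction escape as the
# output beam.

GAIN_PER_PASS = 1.05     # assumed amplification from stimulated emission
OUTPUT_FRACTION = 0.02   # assumed transmission of the partially silvered mirror

intensity = 1.0          # arbitrary starting intensity from one spontaneous emission
output = 0.0

for round_trip in range(200):
    intensity *= GAIN_PER_PASS           # stimulated emission adds aligned light
    escaped = intensity * OUTPUT_FRACTION
    output += escaped                    # the narrow beam that emerges
    intensity -= escaped                 # the rest is reflected back for another pass

print(f"intensity inside the cavity after 200 passes: {intensity:.1f}")
print(f"total light emitted through the mirror: {output:.1f}")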
Consequences
When the laser was introduced, it was an immediate sensation. In
the eighteen months following Maiman’s announcement that he had
succeeded in producing a working laser, about four hundred companies
and several government agencies embarked on work involving
lasers. Activity centered on improving lasers, as well as on exploring
their applications. At the same time, there was equal activity in publicizing
the near-miraculous promise of the device, in applications covering
the spectrum from “death” rays to sight-saving operations. A
popular film in the James Bond series, Goldfinger (1964), showed the
hero under threat of being sliced in half by a laser beam—an impossibility
at the time the film was made because of the low power-output
of the early lasers.
In the first decade after Maiman’s laser, there was some disappointment.
Successful use of lasers was limited to certain areas of
medicine, such as repairing detached retinas, and to scientific applications,
particularly in connection with standards: The speed of
light was measured with great accuracy, as was the distance to the
Moon. By 1990, partly because of advances in other fields, essentially
all the laser’s promise had been fulfilled, including the death
ray and James Bond’s slicer. Yet the laser continued to find its place
in technologies not envisioned at the time of the first laser. For example,
lasers are now used in computer printers, in compact disc
players, and even in arterial surgery.
10 August 2009
Laminated glass
The invention: Two sheets of glass bonded together by a thin layer of
plastic sandwiched between them.
The people behind the invention:
Edouard Benedictus (1879-1930), a French artist
Katherine Burr Blodgett (1898-1979), an American physicist
The Quest for Unbreakable Glass
People have been fascinated for centuries by the delicate transparency
of glass and the glitter of crystals. They have also been frustrated
by the brittleness and fragility of glass. When glass breaks, it
forms sharp pieces that can cut people severely. During the 1800’s
and early 1900’s, a number of people demonstrated ways to make
“unbreakable” glass. In 1855 in England, the first “unbreakable”
glass panes were made by embedding thin wires in the glass. The
embedded wire grid held the glass together when it was struck or
subjected to the intense heat of a fire. Wire glass is still used in windows
that must be fire resistant. The concept of embedding the wire
within a glass sheet so that the glass would not shatter was a predecessor
of the concept of laminated glass.
A series of inventors in Europe and the United States worked on
the idea of using a durable, transparent inner layer of plastic between
two sheets of glass to prevent the glass from shattering when it was
dropped or struck by an impact. In 1899, Charles E. Wade of Scranton,
Pennsylvania, obtained a patent for a kind of glass that had a sheet or
netting of mica fused within it to bind it. In 1902, Earnest E. G. Street
of Paris, France, proposed coating glass battery jars with pyroxylin
plastic (celluloid) so that they would hold together if they cracked. In
Swindon, England, in 1905, John Crewe Wood applied for a patent
for a material that would prevent automobile windshields from shattering
and injuring people when they broke. He proposed cementing
a sheet of material such as celluloid between two sheets of glass.
When the window was broken, the inner material would hold the
glass splinters together so that they would not cut anyone.
Remembering a Fortuitous Fall
In his patent application, Edouard Benedictus described himself
as an artist and painter. He was also a poet, musician, and
philosopher who was descended from the philosopher Baruch
Benedictus Spinoza; he seemed an unlikely contributor to the
progress of glass manufacture. In 1903, Benedictus was cleaning his laboratory when he dropped a glass bottle that held a nitrocellulose
solution. The solvents, which had evaporated during the
years that the bottle had sat on a shelf, had left a strong celluloid
coating on the glass. When Benedictus picked up the bottle, he was
surprised to see that it had not shattered: It was starred, but all the
glass fragments had been held together by the internal celluloid
coating. He looked at the bottle closely, labeled it with the date
(November, 1903) and the height from which it had fallen, and put
it back on the shelf.
One day some years later (the date is uncertain), Benedictus became
aware of vehicular collisions in which two young women received
serious lacerations from broken glass. He wrote a poetic account
of a daydream he had while he was thinking intently about
the two women. He described a vision in which the faintly illuminated
bottle that had fallen some years before but had not shattered
appeared to float down to him from the shelf. He got up, went into
his laboratory, and began to work on an idea that originated with his
thoughts of the bottle that would not splinter.
Benedictus found the old bottle and devised a series of experiments
that he carried out until the next evening. By the time he had
finished, he had made the first sheet of Triplex glass, for which he
applied for a patent in 1909. He also founded the Société du Verre
Triplex (The Triplex Glass Society) in that year. In 1912, the Triplex
Safety Glass Company was established in England. The company
sold its products for military equipment in World War I, which began
two years later.
Triplex glass was the predecessor of laminated glass. Laminated
glass is composed of two or more sheets of glass with a thin
layer of plastic (usually polyvinyl butyral, although Benedictus
used pyroxylin) laminated between the glass sheets using pressure
and heat. The plastic layer will yield rather than rupture when subjected
to loads and stresses. This prevents the glass from shattering
into sharp pieces. Because of this property, laminated glass is also
known as “safety glass.”
Impact
Even after the protective value of laminated glass was known, the product was not widely used for some years. There were a number
of technical difficulties that had to be solved, such as the discoloring
of the plastic layer when it was exposed to sunlight; the relatively
high cost; and the cloudiness of the plastic layer, which
obscured vision—especially at night. Nevertheless, the expanding
automobile industry and the corresponding increase in the number
of accidents provided the impetus for improving the qualities and
manufacturing processes of laminated glass. In the early part of the
century, almost two-thirds of all injuries suffered in automobile accidents
involved broken glass.
Laminated glass is used in many applications in which safety is
important. It is typically used in all windows in cars, trucks, ships,
and aircraft. Thick sheets of bullet-resistant laminated glass are
used in banks, jewelry displays, and military installations. Thinner
sheets of laminated glass are used as security glass in museums, libraries,
and other areas where resistance to break-in attempts is
needed. Many buildings have large ceiling skylights that are made
of laminated glass; if the glass is damaged, it will not shatter, fall,
and hurt people below. Laminated glass is used in airports, hotels,
and apartments in noisy areas and in recording studios to reduce
the amount of noise that is transmitted. It is also used in safety goggles
and in viewing ports at industrial plants and test chambers.
Edouard Benedictus’s recollection of the bottle that fell but did not
shatter has thus helped make many situations in which glass is used
safer for everyone.
Iron lung
The invention: A mechanical respirator that saved the lives of victims
of poliomyelitis.
The people behind the invention:
Philip Drinker (1894-1972), an engineer who made many
contributions to medicine
Louis Shaw (1886-1940), a respiratory physiologist who
assisted Drinker
Charles F. McKhann III (1898-1988), a pediatrician and
founding member of the American Board of Pediatrics
A Terrifying Disease
Poliomyelitis (polio, or infantile paralysis) is an infectious viral
disease that damages the central nervous system, causing paralysis
in many cases. Its effect results from the destruction of neurons
(nerve cells) in the spinal cord. In many cases, the disease produces
crippled limbs and the wasting away of muscles. In others, polio results
in the fatal paralysis of the respiratory muscles. It is fortunate
that use of the Salk and Sabin vaccines beginning in the 1950’s has
virtually eradicated the disease.
In the 1920’s, poliomyelitis was a terrifying disease. Paralysis of
the respiratory muscles caused rapid death by suffocation, often
within only a few hours after the first signs of respiratory distress
had appeared. In 1929, Philip Drinker and Louis Shaw, both of Harvard
University, reported the development of a mechanical respirator
that would keep those afflicted with the disease alive for indefinite
periods of time. This device, soon nicknamed the “iron lung,”
helped thousands of people who suffered from respiratory paralysis
as a result of poliomyelitis or other diseases.
Development of the iron lung arose after Drinker, then an assistant
professor in Harvard’s Department of Industrial Hygiene, was
appointed to a Rockefeller Institute commission formed to improve
methods for resuscitating victims of electric shock. The best-known
use of the iron lung—treatment of poliomyelitis—was a result of
numerous epidemics of the disease that occurred from 1898 until the 1920’s, each leaving thousands of Americans paralyzed.
The concept of the iron lung reportedly arose from Drinker’s observation
of physiological experiments carried out by Shaw and
Drinker’s brother, Cecil. The experiments involved the placement
of a cat inside an airtight box—a body plethysmograph—with the
cat’s head protruding from an airtight collar. Shaw and Cecil Drinker
then measured the volume changes in the plethysmograph to identify
normal breathing patterns. Philip Drinker then placed cats paralyzed
by curare inside plethysmographs and showed that they
could be kept breathing artificially by use of air from a hypodermic
syringe connected to the device.
Next, they proceeded to build a human-sized, plethysmograph-like
machine, with a five-hundred-dollar grant from the New York
Consolidated Gas Company. This was done by a tinsmith and the
Harvard Medical School machine shop.
Breath for Paralyzed Lungs
The first machine was tested on Drinker and Shaw, and after several
modifications were made, a workable iron lung was made
available for clinical use. This machine consisted of a metal cylinder
large enough to hold a human being. One end of the cylinder, which
contained a rubber collar, slid out on casters along with a stretcher
on which the patient was placed. Once the patient was in position
and the collar was fitted around the patient’s neck, the stretcher was
pushed back into the cylinder and the iron lung was made airtight.
The iron lung then “breathed” for the patient by using an electric
blower to alternately remove and replace air inside the machine.
In the human chest, inhalation occurs when the diaphragm contracts
and powerful muscles (which are paralyzed in poliomyelitis
sufferers) expand the rib cage. This lowers the air pressure in the
lungs and allows inhalation to occur. In exhalation, the diaphragm
and chest muscles relax, and air is expelled as the chest cavity returns
to its normal size. In cases of respiratory paralysis treated with
an iron lung, the air coming into or leaving the iron lung alternately
compressed the patient’s chest, producing artificial exhalation, and
then allowed it to expand so that the chest could fill with air. In this
way, iron lungs “breathed” for the patients using them.
Careful examination of each patient was required to allow technicians
to adjust the rate of operation of the machine. A cooling system
and ports for drainage lines, intravenous lines, and the other
apparatus needed to maintain a wide variety of patients were included
in the machine.
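The alternating pressure cycle described above can be pictured with a
few lines of Python. This is a toy model with made-up numbers, meant
only to show the negative-pressure principle: when the blower lowers
the pressure in the tank below atmospheric pressure, the chest expands
and air is drawn in; when the pressure rises again, air is pushed out.

import math

# Toy model of one iron-lung setting: tank pressure swings around
# atmospheric pressure, and the sign of the difference decides whether
# the patient inhales or exhales at that moment.

ATMOSPHERIC = 0.0        # work in pressure differences (arbitrary units)
SWING = 1.0              # assumed amplitude of the blower's pressure swing
BREATHS_PER_MINUTE = 15  # assumed machine setting

for second in range(8):
    phase = 2 * math.pi * BREATHS_PER_MINUTE * second / 60.0
    tank_pressure = ATMOSPHERIC - SWING * math.sin(phase)
    if tank_pressure < ATMOSPHERIC:
        action = "inhale (chest expands)"
    else:
        action = "exhale (chest compressed)"
    print(f"t={second}s  tank pressure={tank_pressure:+.2f}  {action}")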
The first person treated in an iron lung was an eight-year-old girl
afflicted with respiratory paralysis resulting from poliomyelitis. The
iron lung kept her alive for five days. Unfortunately, she died from
heart failure as a result of pneumonia. The next iron lung patient, a
Harvard University student, was confined to the machine for several
weeks and later recovered enough to resume a normal life.
The Internet
The invention:
A worldwide network of interlocking computer
systems, developed out of a U.S. government project to improve
military preparedness.
The people behind the invention:
Paul Baran, a researcher for the RAND Corporation
Vinton G. Cerf (1943- ), an American computer scientist
regarded as the “father of the Internet”
Internal combustion engine
The invention: The most common type of engine in automobiles
and many other vehicles, the internal combustion engine is characterized
by the fact that it burns its liquid fuel internally—in
contrast to engines, such as the steam engine, that burn fuel in external
furnaces.
The people behind the invention:
Sir Harry Ralph Ricardo (1885-1974), an English engineer
Oliver Thornycroft (1885-1956), an engineer and works manager
Sir David Randall Pye (1886-1960), an engineer and
administrator
Sir Robert Waley Cohen (1877-1952), a scientist and industrialist
The Internal Combustion Engine: 1900-1916
By the beginning of the twentieth century, internal combustion
engines were almost everywhere. City streets in Berlin, London,
and New York were filled with automobile and truck traffic; gasoline-
and diesel-powered boat engines were replacing sails; stationary
steam engines for electrical generation were being edged out by
internal combustion engines. Even aircraft use was at hand: To
progress from the Wright brothers’ first manned flight in 1903 to the
fighting planes of World War I took only a little more than a decade.
The internal combustion engines of the time, however, were
primitive in design. They were heavy (10 to 15 pounds per output
horsepower, as opposed to 1 to 2 pounds today), slow (typically
1,000 revolutions per minute or less, as opposed to 2,000 to
5,000 today), and extremely inefficient in extracting the energy content
of their fuel. These were not major drawbacks for stationary applications,
or even for road traffic that rarely went faster than 30 or
40 miles per hour, but the advent of military aircraft and tanks demanded
that engines be made more efficient.
Engine and Fuel Design
Harry Ricardo, son of an architect and grandson (on his mother’s
side) of an engineer, was a central figure in the necessary redesign of
internal combustion engines. As a schoolboy, he built a coal-fired
steam engine for his bicycle, and at Cambridge University he produced
a single-cylinder gasoline motorcycle, incorporating many of
his own ideas, which won a fuel-economy competition when it traveled
almost 40 miles on a quart of gasoline. He also began development
of a two-cycle engine called the “Dolphin,” which later was
produced for use in fishing boats and automobiles. In fact, in 1911,
Ricardo took his new bride on their honeymoon trip in a Dolphin-powered
car.
The impetus that led to major engine research came in 1916
when Ricardo was an engineer in his family’s firm. The British
government asked for newly designed tank engines, which had to
operate in the dirt and mud of battle, at a tilt of up to 35 degrees,
and could not give off telltale clouds of blue oil smoke. Ricardo
solved the problem with a special piston design and with air circulation
around the carburetor and within the engine to keep the oil
cool.
Design work on the tank engines turned Ricardo into a full-fledged
research engineer. In 1917, he founded his own company,
and a remarkable series of discoveries quickly followed. He investigated
the problem of detonation of the fuel-air mixture in the internal
combustion cylinder. The mixture is supposed to be ignited
by the spark plug at the top of the compression stroke, with a controlled
flame front spreading at a rate about equal to the speed of
the piston head as it moves downward in the power stroke. Some
fuels, however, detonated (ignited spontaneously throughout the
entire fuel-air mixture) as a result of the compression itself, causing
loss of fuel efficiency and damage to the engine.
With the cooperation of Robert Waley Cohen of Shell Petroleum,
Ricardo evaluated chemical mixtures of fuels and found that paraffins
(such as n-heptane, the current low-octane standard) detonated
readily, but aromatics such as toluene were nearly immune to detonation.
He established a “toluene number” rating to describe the
tendency of various fuels to detonate; this number was replaced in the 1920’s by the “octane number” devised by Thomas Midgley at
the Delco laboratories in Dayton, Ohio.
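The idea behind a knock rating such as Ricardo’s toluene number or the
later octane number is comparison against reference blends: a fuel is
assigned the percentage of the knock-resistant reference component in
the blend that knocks to the same degree under test. The Python sketch
below illustrates only that matching step; the knock readings are
hypothetical placeholders, not measured data.

# Rate a test fuel by finding the reference blend that knocks the same
# way.  The dictionary maps the percentage of knock-resistant reference
# fuel in a blend to a hypothetical knock reading from a test engine.

reference_blends = {0: 9.0, 25: 7.1, 50: 5.0, 75: 2.8, 100: 1.0}

def rating(test_knock):
    """Interpolate between the two reference blends that bracket the reading."""
    points = sorted(reference_blends.items())
    for (lo_pct, lo_knock), (hi_pct, hi_knock) in zip(points, points[1:]):
        if hi_knock <= test_knock <= lo_knock:  # knock falls as the percentage rises
            fraction = (lo_knock - test_knock) / (lo_knock - hi_knock)
            return lo_pct + fraction * (hi_pct - lo_pct)
    raise ValueError("reading outside the reference range")

print(rating(5.0))   # a fuel that knocks like the 50 percent blend rates 50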
The fuel work was carried out in an experimental engine designed
by Ricardo that allowed direct observation of the flame front
as it spread and permitted changes in compression ratio while the
engine was running. Three principles emerged from the investigation:
the fuel-air mixture should be admitted with as much turbulence
as possible, for thorough mixing and efficient combustion; the
spark plug should be centrally located to prevent distant pockets of
the mixture from detonating before the flame front reaches them;
and the mixture should be kept as cool as possible to prevent detonation.
These principles were then applied in the first truly efficient side-valve
(“L-head”) engine—that is, an engine with the valves in a
chamber at the side of the cylinder, in the engine block, rather than
overhead, in the engine head. Ricardo patented this design, and after
winning a patent dispute in court in 1932, he received royalties
or consulting fees for it from engine manufacturers all over the
world.
Impact
The side-valve engine was the workhorse design for automobile
and marine engines until after World War II. With its valves actuated
directly by a camshaft in the crankcase, it is simple, rugged,
and easy to manufacture. Overhead valves with overhead camshafts
are the standard in automobile engines today, but the side-valve
engine is still found in marine applications and in small engines
for lawn mowers, home generator systems, and the like. In its
widespread use and its decades of employment, the side-valve engine
represents a scientific and technological breakthrough in the
twentieth century.
Ricardo and his colleagues, Oliver Thornycroft and D. R. Pye,
went on to create other engine designs—notably, the sleeve-valve
aircraft engine that was the basic pattern for most of the great British
planes of World War II and early versions of the aircraft jet engine.
For his technical advances and service to the government, Ricardo
was elected a Fellow of the Royal Society in 1929, and he was
knighted in 1948.
03 August 2009
Interchangeable parts
The invention:
A key idea in the late Industrial Revolution, the
interchangeability of parts made possible mass production of
identical products.
The people behind the invention:
Henry M. Leland (1843-1932), president of Cadillac Motor Car
Company in 1908, known as a master of precision
Frederick Bennett, the British agent for Cadillac Motor Car
Company who convinced the Royal Automobile Club to run
the standardization test at Brooklands, England
Henry Ford (1863-1947), founder of Ford Motor Company who
introduced the moving assembly line into the automobile
industry in 1913
Instant photography
The invention: Popularly known by its Polaroid tradename, a camera
capable of producing finished photographs immediately after
its film was exposed.
The people behind the invention:
Edwin Herbert Land (1909-1991), an American physicist and
chemist
Howard G. Rogers (1915- ), a senior researcher at Polaroid
and Land’s collaborator
William J. McCune (1915- ), an engineer and head of the
Polaroid team
Ansel Adams (1902-1984), an American photographer and
Land’s technical consultant
The Daughter of Invention
Because he was a chemist and physicist interested primarily in
research relating to light and vision, and to the materials that affect
them, it was inevitable that Edwin Herbert Land should be drawn
into the field of photography. Land founded the Polaroid Corporation
in 1929. During the summer of 1943, while Land and his wife
were vacationing in Santa Fe, New Mexico, with their three-year-old
daughter, Land stopped to take a picture of the child. After the
picture was taken, his daughter asked to see it. When she was told
she could not see the picture immediately, she asked how long it
would be. Within an hour after his daughter’s question, Land had
conceived a preliminary plan for designing the camera, the film,
and the physical chemistry of what would become the instant camera.
Such a device would, he hoped, produce a picture immediately
after exposure.
Within six months, Land had solved most of the essential problems
of the instant photography system. He and a small group of associates
at Polaroid secretly worked on the project. Howard G. Rogers
was Land’s collaborator in the laboratory. Land conferred the
responsibility for the engineering and mechanical phase of the project
on William J. McCune, who led the team that eventually designed the original camera and the machinery that produced both
the camera and Land’s new film.
The first Polaroid Land camera—the Model 95—produced photographs
measuring 8.25 by 10.8 centimeters; there were eight pictures
to a roll. Rather than being black-and-white, the original Polaroid
prints were sepia-toned (producing a warm, reddish-brown color).
The reasons for the sepia coloration were chemical rather than aesthetic;
as soon as Land’s researchers could devise a workable formula
for sharp black-and-white prints (about ten months after the camera
was introduced commercially), they replaced the sepia film.
A Sophisticated Chemical Reaction
Although the mechanical process involved in the first demonstration
camera was relatively simple, this process was merely
the means by which a highly sophisticated chemical reaction—
the diffusion transfer process—was produced.
In the basic diffusion transfer process, when an exposed negative
image is developed, the undeveloped portion corresponds
to the opposite aspect of the image, the positive. Almost all self-processing
instant photography materials operate according to
three phases—negative development, diffusion transfer, and
positive development. These occur simultaneously, so that positive
image formation begins instantly. With black-and-white materials,
the positive was originally completed in about sixty seconds; with
color materials (introduced later), the process took somewhat longer.
The basic phenomenon of silver in solution diffusing from one
emulsion to another was first observed in the 1850’s, but no practical
use of this action was made until 1939. The photographic use of
diffusion transfer for producing normal, continuous-tone images
was investigated actively from the early 1940’s by Land and his associates.
The instant camera using this method was demonstrated
in 1947 and marketed in 1948.
The fundamentals of photographic diffusion transfer are simplest
in a black-and-white peel-apart film. The negative sheet is exposed
in the camera in the normal way. It is then pulled out of the
camera, or film pack holder, by a paper tab. Next, it passes through a
set of rollers, which press it face-to-face with a sheet of receiving material included in the film pack. Simultaneously, the rollers rupture
a pod of reagent chemicals that are spread evenly by the rollers
between the two layers. The reagent contains a strong alkali and a
silver halide solvent, both of which diffuse into the negative emulsion. There the alkali activates the developing agent, which immediately
reduces the exposed halides to a negative image. At the
same time, the solvent dissolves the unexposed halides. The silver
in the dissolved halides forms the positive image.
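A rough way to see why diffusion transfer yields a positive is to
track, for each point on the film, how much silver halide was exposed
(and so develops in place as the negative) and how much remains
unexposed (and so is dissolved, migrates to the receiving sheet, and
builds the positive). The Python sketch below is a deliberate
simplification with arbitrary exposure values, not Polaroid’s actual
chemistry.

# Each value is the fraction of silver halide exposed at one point of a
# strip (0 = deep shadow, 1 = full highlight).  Exposed halide develops
# where the light struck; the rest is dissolved by the reagent and
# diffuses across to darken the corresponding point of the positive.

exposure = [0.0, 0.2, 0.5, 0.8, 1.0]            # hypothetical image strip

negative_density = [e for e in exposure]        # silver developed in place
positive_density = [1.0 - e for e in exposure]  # silver transferred across

for e, neg, pos in zip(exposure, negative_density, positive_density):
    print(f"exposure {e:.1f} -> negative {neg:.1f}, positive {pos:.1f}")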
Impact
The Polaroid Land camera had a tremendous impact on the photographic
industry as well as on the amateur and professional photographer.
Ansel Adams, who was known for his monumental,
ultrasharp black-and-white panoramas of the American West, suggested
to Land ways in which the tonal value of Polaroid film could
be enhanced, as well as new applications for Polaroid photographic
technology.
Soon after it was introduced, Polaroid photography became part
of the American way of life and changed the face of amateur photography
forever. By the 1950’s, Americans had become accustomed
to the world of recorded visual information through films, magazines,
and newspapers; they also had become enthusiastic picturetakers
as a result of the growing trend for simpler and more convenient
cameras. By allowing these photographers not only to record
their perceptions but also to see the results almost immediately, Polaroid
brought people closer to the creative process.
Infrared photography
The invention: The first application of color to infrared photography,
which performs tasks not possible for ordinary photography.
The person behind the invention:
Sir William Herschel (1738-1822), a pioneering English
astronomer
Invisible Light
Photography developed rapidly in the nineteenth century when it
became possible to record the colors and shades of visible light on
sensitive materials. Visible light is a form of radiation that consists of
electromagnetic waves, which also make up other forms of radiation
such as X rays and radio waves. Visible light occupies the range of
wavelengths from about 400 nanometers (1 nanometer is 1 billionth
of a meter) to about 700 nanometers in the electromagnetic spectrum.
Infrared radiation occupies the range from about 700 nanometers
to about 1,350 nanometers in the electromagnetic spectrum. Infrared
rays cannot be seen by the human eye, but they behave in the
same way that rays of visible light behave; they can be reflected, diffracted
(broken), and refracted (bent).
Sir William Herschel, a British astronomer, discovered infrared
rays in 1800 by measuring the heating effect that they produced.
The term “infrared,” which was probably first used in 1800,
was used to indicate rays that had wavelengths that were longer than
those on the red end (the high end) of the spectrum of visible light but
shorter than those of the microwaves, which appear higher on the
electromagnetic spectrum. Infrared film is therefore sensitive to the
infrared radiation that the human eye cannot see or record. Dyes that
were sensitive to infrared radiation were discovered early in the
twentieth century, but they were not widely used until the 1930’s. Because
these dyes produced only black-and-white images, their usefulness
to artists and researchers was limited. After 1930, however, a
tidal wave of infrared photographic applications appeared.
The Development of Color-Sensitive Infrared Film
In the early 1940’s, military intelligence used infrared viewers for
night operations and for gathering information about the enemy. One
device that was commonly used for such purposes was called a
“snooper scope.” Aerial photography with black-and-white infrared
film was used to locate enemy hiding places and equipment. The images
that were produced, however, often lacked clear definition.
The development in 1942 of the first color-sensitive infrared film,
Ektachrome Aero Film, became possible when researchers at the
Eastman Kodak Company’s laboratories solved some complex chemical
and physical problems that had hampered the development of
color infrared film up to that point. Regular color film is sensitive to
all visible colors of the spectrum; infrared color film is sensitive to
violet, blue, and red light as well as to infrared radiation. Typical
color film has three layers of emulsion, which are sensitized to blue,
green, and red. Infrared color film, however, has its three emulsion
layers sensitized to green, red, and infrared. Infrared wavelengths
are recorded as reds of varying densities, depending on the intensity
of the infrared radiation. The more infrared radiation there is,
the darker the color of the red that is recorded.
In infrared photography, a filter is placed over the camera lens to
block the unwanted rays of visible light. The filter blocks visible and
ultraviolet rays but allows infrared radiation to pass. All three layers
of infrared film are sensitive to blue, so a yellow filter is used. All
blue radiation is absorbed by this filter.
In regular photography, color film consists of three basic layers:
the top layer is sensitive to blue light, the middle layer is sensitive to
green, and the third layer is sensitive to red. Exposing the film to
light causes a latent image to be formed in the silver halide crystals
that make up each of the three layers. In infrared photography, color
film consists of a top layer that is sensitive to infrared radiation, a
middle layer sensitive to green, and a bottom layer sensitive to red.
“Reversal processing” produces blue in the infrared-sensitive layer,
yellow in the green-sensitive layer, and magenta in the red-sensitive
layer. The blue, yellow, and magenta layers of the film produce the
“false colors” that accentuate the various levels of infrared radiation
shown as red in a color transparency, slide, or print.
In conventional color film, the dye formed in each emulsion layer bears
a complementary relationship to the color of light to which the layer
is sensitive. If the
relationship is not complementary, the resulting colors will be false.
This means that objects whose colors appear to be similar to the
human eye will not necessarily be recorded as similar colors on infrared
film. A red rose with healthy green leaves will appear on infrared
color film as being yellow with red leaves, because the chlorophyll
contained in the plant leaf reflects infrared radiation and
causes the green leaves to be recorded as red. Infrared radiation
from about 700 nanometers to about 900 nanometers on the electromagnetic
spectrum can be recorded by infrared color film. Above
900 nanometers, infrared radiation exists as heat patterns that must
be recorded by nonphotographic means.
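The displaced layer sensitivities described above are what give
infrared photographs their “false color” look: brightness recorded in
the green, red, and infrared bands ends up rendered in different output
colors, which is why strongly infrared-reflective foliage prints as
red. The Python fragment below applies the common false-color
convention (infrared shown as red, red as green, green as blue); it
illustrates that convention with invented pixel values and is not a
model of the film’s dye chemistry.

# A scene point measured in three bands, each on a 0-255 scale.
# Healthy leaves reflect strongly in the near infrared.

leaf = {"green": 90, "red": 40, "infrared": 230}    # hypothetical readings
brick = {"green": 60, "red": 120, "infrared": 50}

def false_color(pixel):
    """Shift each band down one slot: infrared -> red, red -> green, green -> blue."""
    return (pixel["infrared"], pixel["red"], pixel["green"])  # (R, G, B)

print("leaf renders as RGB", false_color(leaf))    # red-dominated: looks red
print("brick renders as RGB", false_color(brick))  # far less red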
Impact
Infrared photography has proved to be valuable in many of the
sciences and the arts. It has been used to create artistic images that
are often unexpected visual explosions of everyday views. Because
infrared radiation penetrates haze easily, infrared films are often
used in mapping areas or determining vegetation types. Many
cloud-covered tropical areas would be impossible to map without
infrared photography. False-color infrared film can differentiate between
healthy and unhealthy plants, so it is widely used to study insect
and disease problems in plants. Medical research uses infrared
photography to trace blood flow, detect and monitor tumor growth,
and to study many other physiological functions that are invisible
to the human eye.
Some forms of cancer can be detected by infrared analysis before
any other tests are able to perceive them. Infrared film is used in
criminology to photograph illegal activities in the dark and to study
evidence at crime scenes. Powder burns around a bullet hole, which
are often invisible to the eye, show clearly on infrared film. In addition,
forgeries in documents and works of art can often be seen
clearly when photographed on infrared film. Archaeologists have
used infrared film to locate ancient sites that are invisible in daylight.
Wildlife biologists also document the behavior of animals at
night with infrared equipment.
28 July 2009
In vitro plant culture
The invention: Method for propagating plants in artificial media
that has revolutionized agriculture.
The people behind the invention:
Georges Michel Morel (1916-1973), a French physiologist
Philip Cleaver White (1913- ), an American chemist
Plant Tissue Grows “In Glass”
In the mid-1800’s, biologists began pondering whether a cell isolated
from a multicellular organism could live separately if it were
provided with the proper environment. In 1902, with this question in
mind, the German plant physiologist Gottlieb Haberlandt attempted
to culture (grow) isolated plant cells under sterile conditions on an artificial
growth medium. Although his cultured cells never underwent
cell division under these “in vitro” (in glass) conditions, Haberlandt
is credited with originating the concept of cell culture.
Subsequently, scientists attempted to culture plant tissues and
organs rather than individual cells and tried to determine the medium
components necessary for the growth of plant tissue in vitro.
In 1934, Philip White grew the first organ culture, using tomato
roots. The discovery of plant hormones, which are compounds that
regulate growth and development, was crucial to the successful culture
of plant tissues; in 1939, Roger Gautheret, P. Nobécourt, and
White independently reported the successful culture of plant callus
tissue. “Callus” is an irregular mass of dividing cells that often results
from the wounding of plant tissue. Plant scientists were fascinated
by the perpetual growth of such tissue in culture and spent
years establishing optimal growth conditions and exploring the nutritional
and hormonal requirements of plant tissue.
Plants by the Millions
A lull in botanical research occurred during World War II, but
immediately afterward there was a resurgence of interest in applying
tissue culture techniques to plant research. Georges Morel, a plant physiologist at the National Institute for Agronomic Research
in France, was one of many scientists during this time who
had become interested in the formation of tumors in plants as well
as in studying various pathogens such as fungi and viruses that
cause plant disease.
To further these studies, Morel adapted existing techniques in order
to grow tissue from a wider variety of plant types in culture, and
he continued to try to identify factors that affected the normal
growth and development of plants. Morel was successful in culturing
tissue from ferns and was the first to culture monocot plants.
Monocots have certain features that distinguish them from the other
classes of seed-bearing plants, especially with respect to seed structure.
More important, the monocots include the economically important
species of grasses (the major plants of range and pasture)
and cereals.
For these cultures, Morel utilized a small piece of the growing tip
of a plant shoot (the shoot apex) as the starting tissue material. This
tissue was placed in a glass tube, supplied with a medium containing
specific nutrients, vitamins, and plant hormones, and allowed
to grow in the light. Under these conditions, the apex tissue grew
roots and buds and eventually developed into a complete plant.
Morel was able to generate whole plants from pieces of the shoot
apex that were only 100 to 250 micrometers in length.
Morel also investigated the growth of parasites such as fungi and
viruses in dual culture with host-plant tissue. Using results from
these studies and culture techniques that he had mastered, Morel
and his colleague Claude Martin regenerated virus-free plants from
tissue that had been taken from virally infected plants. Tissues from
certain tropical species, dahlias, and potato plants were used for the
original experiments, but after Morel adapted the methods for the
generation of virus-free orchids, plants that had previously been
difficult to propagate by any means, the true significance of his
work was recognized.
Morel was the first to recognize the potential of the in vitro culture
methods for the mass propagation of plants. He estimated that several
million plants could be obtained in one year from a single small
piece of shoot-apex tissue. Plants generated in this manner were
clonal (genetically identical organisms prepared from a single plant).
With other methods of plant propagation, there is often a great variation
in the traits of the plants produced, but as a result of Morel’s
ideas, breeders could select for some desirable trait in a particular
plant and then produce multiple clonal plants, all of which expressed
the desired trait. The methodology also allowed for the production of
virus-free plant material, which minimized both the spread of potential
pathogens during shipping and losses caused by disease.
Consequences
Variations on Morel’s methods are used to propagate plants used
for human food consumption; plants that are sources of fiber, oil,
and livestock feed; forest trees; and plants used in landscaping and
in the floral industry. In vitro stocks are preserved under deep-freeze
conditions, and disease-free plants can be proliferated quickly
at any time of the year after shipping or storage.
The in vitro multiplication of plants has been especially useful
for species such as coconut and certain palms that cannot be propagated
by other methods, such as by sowing seeds or grafting, and
has also become important in the preservation and propagation of rare plant species that might otherwise have become extinct. Many
of these plants are sources of pharmaceuticals, oils, fragrances, and
other valuable products.
The capability of regenerating plants from tissue culture has also
been crucial in basic scientific research. Plant cells grown in culture
can be studied more easily than can intact plants, and scientists have
gained an in-depth understanding of plant physiology and biochemistry
by using this method. This information and the methods
of Morel and others have made possible the genetic engineering and
propagation of crop plants that are resistant to disease or disastrous
environmental conditions such as drought and freezing. In vitro
techniques have truly revolutionized agriculture.
IBM Model 1401 Computer
The invention: A relatively small, simple, and inexpensive computer
that is often credited with having launched the personal
computer age.
The people behind the invention:
Howard H. Aiken (1900-1973), an American mathematician
Charles Babbage (1792-1871), an English mathematician and
inventor
Herman Hollerith (1860-1929), an American inventor
Computers: From the Beginning
Computers evolved into their modern form over a period of
thousands of years as a result of humanity’s efforts to simplify the
process of counting. Two counting devices that are considered to be
very simple, early computers are the abacus and the slide rule.
These calculating devices are representative of digital and analog
computers, respectively, because an abacus counts numbers of things,
while the slide rule measures continuous lengths.
The first modern computer, which was planned by Charles Babbage
in 1833, was never built. It was intended to perform complex
calculations with a data processing/memory unit that was controlled
by punched cards. In 1944, Harvard University’s Howard H.
Aiken and the International Business Machines (IBM) Corporation
built such a computer—the huge, punched-tape-controlled Automatic
Sequence Controlled Calculator, or Mark I ASCC, which
could perform complex mathematical operations in seconds. During
the next fifteen years, computer advances produced digital computers
that used binary arithmetic for calculation, incorporated
simplified components that decreased the sizes of computers, had
much faster calculating speeds, and were transistorized.
Although practical computers had become much faster than
they had been only a few years earlier, they were still huge and extremely
expensive. In 1959, however, IBM introduced the Model
1401 computer. Smaller, simpler, and much cheaper than the multimillion-dollar computers that were available, the IBM Model 1401
computer was also relatively easy to program and use. Its low cost,
simplicity of operation, and very wide use have led many experts
to view the IBM Model 1401 computer as beginning the age of the
personal computer.
Computer Operation and IBM’s Model 1401
Modern computers are essentially very fast calculating machines
that are capable of sorting, comparing, analyzing, and outputting information,
as well as storing it for future use. Many sources credit
Aiken’s Mark I ASCC as being the first modern computer to be built.
This huge, five-ton machine used thousands of relays to perform complex
mathematical calculations in seconds. Soon after its introduction,
other companies produced computers that were faster and more versatile
than the Mark I. The computer development race was on.
All these early computers utilized the decimal system for calculations
until it was found that binary arithmetic, whose numbers are
combinations of the binary digits 1 and 0, was much more suitable
for the purpose. The advantage of the binary system is that the electronic
switches that make up a computer (tubes, transistors, or
chips) can be either on or off; in the binary system, the on state can
be represented by the digit 1, the off state by the digit 0. Strung together
correctly, binary numbers, or digits, can be inputted rapidly
and used for high-speed computations. In fact, the computer term
bit is a contraction of the phrase “binary digit.”
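The role of binary digits can be made concrete with a few lines of
Python: any count can be rewritten as a string of the digits 1 and 0,
and each digit corresponds to one switch that is either on or off. This
is a minimal sketch; the helper name is ours, not IBM terminology.

# Represent a decimal count as binary digits ("bits"), each of which
# could be held by a single on/off switch: a tube, a transistor, or a
# spot on a chip.

def to_bits(value):
    """Binary digits of a non-negative integer, most significant first."""
    if value == 0:
        return [0]
    bits = []
    while value > 0:
        bits.append(value % 2)   # remainder gives the lowest-order digit
        value //= 2
    return bits[::-1]

print(to_bits(13))    # [1, 1, 0, 1] -> switches set on, on, off, on
print(to_bits(1401))  # the model number, as a longer row of switch settings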
A computer consists of input and output devices, a storage device
(memory), arithmetic and logic units, and a control unit. In
most cases, a central processing unit (CPU) combines the logic,
arithmetic, memory, and control aspects. Instructions are loaded
into the memory via an input device, processed, and stored. Then,
the CPU issues commands to the other parts of the system to carry
out computations or other functions and output the data as needed.
Most output is printed as hard copy or displayed on cathode-ray
tube monitors, or screens.
The early modern computers—such as the Mark I ASCC—were
huge because their information circuits were large relays or tubes.
Computers became smaller and smaller as the tubes were replaced first with transistors, then with simple integrated circuits, and then
with silicon chips. Each technological changeover also produced
more powerful, more cost-effective computers.
In the 1950’s, with reliable transistors available, IBM began the
development of two types of computers that were completed by
about 1959. The larger version was the Stretch computer, which was
advertised as the most powerful computer of its day. Customized
for each individual purchaser (for example, the Atomic Energy
Commission), a Stretch computer cost $10 million or more. Some innovations
in Stretch computers included semiconductor circuits,
new switching systems that quickly converted various kinds of data
into one language that was understood by the CPU, rapid data readers,
and devices that seemed to anticipate future operations.
Consequences
The IBM Model 1401 was the first computer sold in very large
numbers. It led IBM and other companies to seek to develop less expensive,
more versatile, smaller computers that would be sold to
small businesses and to individuals. Six years after the development
of the Model 1401, other IBM models—and those made by
other companies—became available that were more compact and
had larger memories. The search for compactness and versatility
continued. A major development was the invention of integrated
circuits by Jack S. Kilby of Texas Instruments; these integrated circuits
became available by the mid-1960’s. They were followed by
even smaller “microprocessors” (computer chips) that became available
in the 1970’s. Computers continued to become smaller and more
powerful.
Input and storage devices also decreased rapidly in size. At first,
the punched cards invented by Herman Hollerith, founder of the
Tabulating Machine Company (which later became IBM), were read
by bulky readers. In time, less bulky magnetic tapes and more compact
readers were developed, after which magnetic disks and compact
disc drives were introduced.
Many other advances have been made. Modern computers can
talk, create art and graphics, compose music, play games, and operate
robots. Further advancement is expected as societal needs change. Many experts believe that it was the sale of large numbers
of IBM Model 1401 computers that began the trend.
20 July 2009
Hydrogen bomb
The invention: Popularly known as the “H-Bomb,” the hydrogen
bomb differs from the original atomic bomb in using fusion,
rather than fission, to create a thermonuclear explosion almost a
thousand times more powerful.
The people behind the invention:
Edward Teller (1908- ), a Hungarian-born theoretical
physicist
Stanislaw Ulam (1909-1984), a Polish-born mathematician
Crash Development
A few months before the 1942 creation of the Manhattan Project,
the United States-led effort to build the atomic (fission) bomb, physicist
Enrico Fermi suggested to Edward Teller that such a bomb
could release more energy by the process of heating a mass of the
hydrogen isotope deuterium and igniting the fusion of hydrogen
into helium. Fusion is the process whereby two atoms come together
to form a larger atom, and this process usually occurs only in stars,
such as the Sun. Physicists Hans Bethe, George Gamow, and Teller
had been studying fusion since 1934 and knew of the tremendous
energy that could be released by this process—even more energy
than the fission (atom-splitting) process that would create the atomic
bomb. Initially, Teller dismissed Fermi’s idea, but later in 1942, in
collaboration with Emil Konopinski, he concluded that a hydrogen
bomb, or superbomb, could be made.
For practical considerations, it was decided that the design of the
superbomb would have to wait until after the war. In 1946, a secret
conference on the superbomb was held in Los Alamos, New Mexico,
that was attended by, among other Manhattan Project veterans,
Stanislaw Ulam and Klaus Emil Julius Fuchs. Supporting the investigation
of Teller’s concept, the conferees requested a more complete
mathematical analysis of his own admittedly crude calculations
on the dynamics of the fusion reaction. In 1947, Teller believed
that these calculations might take years. Two years later, however, the Soviet explosion of an atomic bomb convinced Teller that America's
Cold War adversary was hard at work on its own superbomb.
Even when new calculations cast further doubt on his designs,
Teller began a vigorous campaign for crash development of the hydrogen
bomb, or H-bomb.
The Superbomb
Scientists knew that fusion reactions could be induced by the explosion
of an atomic bomb. The basic problem was simple and formidable:
How could fusion fuel be heated and compressed long
enough to achieve significant thermonuclear burning before the
atomic fission explosion blew the assembly apart? A major part of
the solution came from Ulam in 1951. He proposed using the energy
from an exploding atomic bomb to induce significant thermonuclear
reactions in adjacent fusion fuel components.
This arrangement, in which the A-bomb (the primary) is physically
separated from the H-bomb’s (the secondary’s) fusion fuel, became
known as the “Teller-Ulam configuration.” All H-bombs are
cylindrical, with an atomic device at one end and the other components
filling the remaining space. Energy from the exploding primary
could be transported by X rays and would therefore affect the
fusion fuel at near light speed—before the arrival of the explosion.
Frederick de Hoffman’s work verified and enriched the new concept.
In the revised method, moderated X rays from the primary irradiate
a reactive plastic medium surrounding concentric and generally
cylindrical layers of fusion and fission fuel in the secondary.
Instantly, the plastic becomes a hot plasma that compresses and
heats the inner layer of fusion fuel, which in turn compresses a central
core of fissile plutonium to supercriticality. Thus compressed,
and bombarded by fusion-produced, high-energy neutrons, the fission
element expands rapidly in a chain reaction from the inside
out, further compressing and heating the surrounding fusion fuel,
releasing more energy and more neutrons that induce fission in a
fuel casing-tamper made of normally stable uranium 238.
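For reference, the energy-releasing fusion reactions at work in such a device are standard results of nuclear physics rather than figures drawn from this account. The principal reactions of the hydrogen isotopes deuterium and tritium are

$$ {}^{2}\mathrm{H} + {}^{3}\mathrm{H} \rightarrow {}^{4}\mathrm{He} + n + 17.6\ \mathrm{MeV}, \qquad {}^{2}\mathrm{H} + {}^{2}\mathrm{H} \rightarrow {}^{3}\mathrm{He} + n + 3.3\ \mathrm{MeV}, $$

each releasing far more energy per unit mass of fuel than a fission reaction does.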
With its equipment to refrigerate the hydrogen isotopes, the device
created to test Teller’s new concept weighed more than sixty
tons. During Operation Ivy, it was tested at Elugelab in the Marshall Islands on November 1, 1952. Exceeding the expectations of all concerned
and vaporizing the island, the explosion equaled 10.4 million
tons of trinitrotoluene (TNT), which meant that it was about
seven hundred times more powerful than the atomic bomb dropped
on Hiroshima, Japan, in 1945. A version of this device weighing
about 20 tons was prepared for delivery by specially modified Air
Force B-36 bombers in the event of an emergency during wartime.
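Putting the reported figures side by side shows where the comparison comes from; the Hiroshima yield of roughly 15 kilotons is a commonly cited outside estimate, not a number given in this account:

$$ \frac{10.4 \times 10^{6}\ \text{tons of TNT}}{\approx 1.5 \times 10^{4}\ \text{tons of TNT}} \approx 700. $$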
In development at Los Alamos before the 1952 test was a device
weighing only about 4 tons, a “dry bomb” that did not require refrigeration
equipment or liquid fusion fuel; when sufficiently compressed
and heated in its molded-powder form, the new fusion fuel
component, lithium-6 deuteride, instantly produced tritium, an isotope
of hydrogen. This concept was tested during Operation Castle
at Bikini atoll in 1954 and produced a yield of 15 million tons of TNT,
the largest-ever nuclear explosion created by the United States.
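The advantage of the "dry" fuel can be summarized by one more standard reaction (again general nuclear physics, not a detail from this account): neutrons released by the fission stages convert the lithium-6 in lithium-6 deuteride into tritium on the spot,

$$ {}^{6}\mathrm{Li} + n \rightarrow {}^{4}\mathrm{He} + {}^{3}\mathrm{H} + 4.8\ \mathrm{MeV}, $$

so the tritium needed for fusion no longer has to be carried as a refrigerated liquid.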
Consequences
Teller was not alone in believing that the world could produce
thermonuclear devices capable of causing great destruction. Months
before Fermi suggested to Teller the possibility of explosive thermonuclear
reactions on Earth, Japanese physicist Tokutaro Hagiwara
had proposed that a uranium 235 bomb could ignite significant fusion
reactions in hydrogen. The Soviet Union successfully tested an
H-bomb dropped from an airplane in 1955, one year before the
United States did so.
Teller became the scientific adviser on nuclear affairs of many
presidents, from Dwight D. Eisenhower to Ronald Reagan. The
widespread blast and fallout effects of H-bombs assured the mutual
destruction of the users of such weapons. During the Cold War
(from about 1947 to 1991), both the United States and the Soviet
Union possessed H-bombs. “Testing” these bombs made each side
aware of how powerful the other side was. Everyone wanted to
avoid nuclear war. It was thought that no one would try to start a
war that would end in the world’s destruction. This theory was
called deterrence: The United States wanted to let the Soviet Union
know that it had just as many bombs as, or more than, the Soviet Union did, so that the
leaders of the Soviet Union would be deterred from starting a war.
Teller knew that the availability of H-bombs on both sides was
not enough to guarantee that such weapons would never be used. It
was also necessary to make the Soviet Union aware of the existence
of the bombs through testing. He consistently advised against U.S.
participation with the Soviet Union in a moratorium (period of
waiting) on nuclear weapons testing. Largely based on Teller’s urging
that underground testing be continued, the United States rejected
a total moratorium in favor of the 1963 Atmospheric Test Ban
Treaty.
During the 1980’s, Teller, among others, convinced President
Reagan to embrace the Strategic Defense Initiative (SDI). Teller argued
that SDI components, such as the space-based “Excalibur,” a
nuclear bomb-powered X-ray laser weapon proposed by the Lawrence
Livermore National Laboratory, would make thermonuclear
war not unimaginable, but theoretically impossible.
19 July 2009
Hovercraft
The invention: A vehicle requiring no surface contact for traction
that moves freely over a variety of surfaces—particularly
water—while supported on a self-generated cushion of air.
The people behind the invention:
Christopher Sydney Cockerell (1910- ), a British engineer
who built the first hovercraft
Ronald A. Shaw (1910- ), an early pioneer in aerodynamics
who experimented with hovercraft
Sir John Isaac Thornycroft (1843-1928), a Royal Navy architect
who was the first to experiment with air-cushion theory
Air-Cushion Travel
The air-cushion vehicle was first conceived by Sir John Isaac
Thornycroft of Great Britain in the 1870’s. He theorized that if a
ship had a plenum chamber (a box open at the bottom) for a hull
and it were pumped full of air, the ship would rise out of the water
and move faster, because there would be less drag. The main problem
was keeping the air from escaping from under the craft.
In the early 1950’s, Christopher Sydney Cockerell was experimenting
with ways to reduce both the wave-making and frictional
resistance that craft had to water. In 1953, he constructed a punt
with a fan that supplied air to the bottom of the craft, which could
thus glide over the surface with very little friction. The air was contained
under the craft by specially constructed side walls. In 1955,
the first true “hovercraft,” as Cockerell called it, was constructed of
balsa wood. It weighed only 127 grams and traveled over water at a
speed of 13 kilometers per hour.
On November 16, 1956, Cockerell successfully demonstrated
his model hovercraft at the patent agent’s office in London. It was
immediately placed on the “secret” list, and Saunders-Roe Ltd.
was given the first contract to build hovercraft in 1957. The first experimental
piloted hovercraft, the SR.N1, which had a weight of
3,400 kilograms and could carry three people at the speed of 25 knots, was completed on May 28, 1959, and publicly demonstrated
on June 11, 1959.
Ground Effect Phenomenon
In a hovercraft, a jet airstream is directed downward through a
hole in a metal disk, which forces the disk to rise. The downward jet
produces a reaction force of its own that pushes the disk away from the surface,
and some of the air striking the ground bounces back against the disk to
add further lift. This is called the "ground effect." The greater the
under-surface area of the hovercraft, the greater the thrust of the air
that bounces back. The ground effect makes the hovercraft a mechanically
efficient machine because it serves three functions.
First, the ground effect reduces friction between the craft and the
earth’s surface. Second, it acts as a spring suspension to reduce
some of the vertical acceleration effects that arise from travel over
an uneven surface. Third, it provides a safe and comfortable ride at
high speed, whatever the operating environment. The air cushion
can distribute the weight of the hovercraft over almost its entire area
so that the cushion pressure is low.
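A rough calculation illustrates how low the cushion pressure is. The pressure is approximately the craft's weight divided by the cushion area; using the SR.N1's 3,400-kilogram weight from above and an assumed cushion area of 50 square meters (a round number chosen only for illustration),

$$ p \approx \frac{mg}{A} \approx \frac{3{,}400\ \mathrm{kg} \times 9.8\ \mathrm{m/s^{2}}}{50\ \mathrm{m^{2}}} \approx 670\ \mathrm{Pa}, $$

which is well under one percent of atmospheric pressure.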
The basic elements of the air-cushion vehicle are a hull, a propulsion
system, and a lift system. The hull, which accommodates the
crew, passengers, and freight, contains both the propulsion and lift
systems. The propulsion and lift systems can be driven by the same
power plant or by separate power plants. Early designs used only
one unit, but this proved to be a problem when adequate power was
not achieved for movement and lift. Better results are achieved
when two units are used, since far more power is used to lift the vehicle
than to propel it.
For lift, high-speed centrifugal fans are used to drive the air
through jets that are located under the craft. A redesigned aircraft
propeller is used for propulsion. Rudderlike fins and an air fan that
can be swiveled to provide direction are placed at the rear of the
craft.
Several different air systems can be used, depending on whether
a skirt system is used in the lift process. The plenum chamber system,
the peripheral jet system, and several types of recirculating air systems have all been successfully tried without skirting. A variety
of rigid and flexible skirts have also proved to be satisfactory, depending
on the use of the vehicle.
Skirts are used to hold in the air for lift; early skirts were hung like curtains around the hovercraft. Instead of such simple curtains to contain the air,
there are now complicated designs that contain the cushion, duct the
air, and even provide a secondary suspension. The materials used in
the skirting have also changed from a rubberized fabric to pure rubber
and nylon and, finally, to neoprene, a lamination of nylon and plastic.
The three basic types of hovercraft are the amphibious, nonamphibious,
and semiamphibious models. The amphibious type can
travel over water and land, whereas the nonamphibious type is restricted
to water travel. The semiamphibious model is also restricted
to water travel but may terminate travel by nosing up on a prepared
ramp or beach. All hovercraft contain built-in buoyancy tanks in the
side skirting as a safety measure in the event that a hovercraft must
settle on the water. Most hovercraft are equipped with gas turbines
and use either propellers or water-jet propulsion.
Impact
Hovercraft are used primarily for short passenger ferry services.
Great Britain was the only nation to produce a large number of hovercraft.
The British built larger and faster craft and pioneered their
successful use as ferries across the English Channel, where they
could reach speeds of 111 kilometers per hour (about 60 knots) and carry
more than four hundred passengers and almost one hundred vehicles.
France and the former Soviet Union have also effectively demonstrated
hovercraft river travel, and the Soviets have experimented
with military applications as well.
The military adaptations of hovercraft have been more diversified.
Beach landings have been performed effectively, and the United
States used hovercraft for river patrols during the Vietnam War.
Other uses also exist for hovercraft. They can be used as harbor pilot
vessels and for patrolling shores in a variety of police- and customs-
related duties. Hovercraft can also serve as flood-rescue craft
and fire-fighting vehicles. Even a hoverfreighter is being considered.
The air-cushion theory in transport systems is rapidly developing.
It has spread to trains and smaller people movers in many
countries. The smooth, rapid, clean, and efficient operation of air-cushion
vehicles makes them attractive to transportation designers around the world.
16 July 2009
Holography
The invention: A lensless system of three-dimensional photography
that was one of the most important developments in twentieth
century optical science.
The people behind the invention:
Dennis Gabor (1900-1979), a Hungarian-born inventor and
physicist who was awarded the 1971 Nobel Prize in Physics
Emmett Leith (1927- ), a radar researcher who, with Juris
Upatnieks, produced the first laser holograms
Juris Upatnieks (1936- ), a radar researcher who, with
Emmett Leith, produced the first laser holograms
Easter Inspiration
The development of photography in the early 1900’s made possible
the recording of events and information in ways unknown before
the twentieth century: the photographing of star clusters, the
recording of the emission spectra of heated elements, the storing of
data in the form of small recorded images (for example, microfilm),
and the photographing of microscopic specimens, among other
things. Because of its vast importance to the scientist, the science of
photography has developed steadily.
An understanding of the photographic and holographic processes
requires some knowledge of the wave behavior of light. Light is an
electromagnetic wave that, like a water wave, has an amplitude and a
phase. The amplitude corresponds to the wave height, while the
phase indicates which part of the wave is passing a given point at a
given time. A cork floating in a pond bobs up and down as waves
pass under it. The position of the cork at any time depends on both
amplitude and phase: The phase determines on which part of the
wave the cork is floating at any given time, and the amplitude determines
how high or low the cork can be moved. Waves from more
than one source arriving at the cork combine in ways that depend on
their relative phases. If the waves meet in the same phase, they add
and produce a large amplitude; if they arrive out of phase, they subtract and produce a small amplitude. The total amplitude—and hence the intensity—
depends on the phases of the combining waves.
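In the standard notation of wave optics (a general result, not one stated in the text), two waves of amplitudes $A_1$ and $A_2$ meeting with a phase difference $\Delta\phi$ combine to give an intensity

$$ I \propto A_{1}^{2} + A_{2}^{2} + 2A_{1}A_{2}\cos\Delta\phi, $$

which is largest when $\Delta\phi = 0$ (the waves are in phase) and smallest when $\Delta\phi = \pi$ (they are completely out of phase).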
Dennis Gabor, the inventor of holography, was intrigued by the
way in which the photographic image of an object was stored by a
photographic plate but was unable to devote any consistent research
effort to the question until the 1940’s. At that time, Gabor was involved
in the development of the electron microscope. On Easter
morning in 1947, as Gabor was pondering the problem of how to
improve the electron microscope, the solution came to him. He
would attempt to take a poor electron picture and then correct it optically.
The process would require coherent electron beams—that is,
electron waves with a definite phase.
This two-stage method was inspired by the work of Lawrence
Bragg. Bragg had formed the image of a crystal lattice by diffracting
the photographic X-ray diffraction pattern of the original lattice.
This double diffraction process is the basis of the holographic process.
Bragg’s method was limited because of his inability to record
the phase information of the X-ray photograph. Therefore, he could
study only those crystals for which the phase relationship of the reflected
waves could be predicted.
Waiting for the Laser
Gabor devised a way of capturing the phase information after he
realized that adding coherent background to the wave reflected from
an object would make it possible to produce an interference pattern
on the photographic plate. When the phases of the two waves are
identical, a maximum intensity will be recorded; when they are out of
phase, a minimum intensity is recorded. Therefore, what is recorded
in a hologram is not an image of the object but rather the interference
pattern of the two coherent waves. This pattern looks like a collection
of swirls and blank spots. The hologram (or photograph) is then illuminated
by the reference beam, and part of the transmitted light is a
replica of the original object wave. When viewing this object wave,
one sees an exact replica of the original object.
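The same idea can be written compactly in the notation used in holography textbooks (a general formulation, not the source's own): if $O$ is the wave reflected from the object and $R$ is the coherent reference wave, the plate records the intensity

$$ I = |O + R|^{2} = |O|^{2} + |R|^{2} + OR^{*} + O^{*}R, $$

and when the developed hologram is illuminated by $R$ alone, the transmitted light contains a term proportional to $|R|^{2}O$—a reconstruction of the original object wave—together with a conjugate image and undiffracted light.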
The major impediment at the time in making holograms using
any form of radiation was a lack of coherent sources. For example,
the coherence of the mercury lamp used by Gabor and his assistant Ivor Williams was so short that they were able to make holograms of
only about a centimeter in diameter. The early results were rather
poor in terms of image quality and also had a double image. For this
reason, there was little interest in holography, and the subject lay almost
untouched for more than ten years.
Interest in the field was rekindled after the laser (light amplification
by stimulated emission of radiation) was developed in 1962.
Emmett Leith and Juris Upatnieks, who were conducting radar research
at the University of Michigan, published the first laser holograms
in 1963. The laser was an intense light source with a very
long coherence length. Its monochromatic nature improved the resolution
of the images greatly. Also, there was no longer any restriction
on the size of the object to be photographed.
The availability of the laser allowed Leith and Upatnieks to propose
another improvement in holographic technique. Before 1964,
holograms were made of only thin transparent objects. A small region
of the hologram bore a one-to-one correspondence to a region
of the object. Only a small portion of the image could be viewed at
one time without the aid of additional optical components. Illuminating
the transparency diffusely allowed the whole image to be
seen at one time. This development also made it possible to record
holograms of diffusely reflected three-dimensional objects. Gabor
had seen from the beginning that this should make it possible to create
three-dimensional images.
After the early 1960’s, the field of holography developed very
quickly. Because holography is different from conventional photography,
the two techniques often complement each other. Gabor saw
his idea blossom into a very important technique in optical science.
Impact
The development of the laser and the publication of the first laser
holograms in 1963 caused a blossoming of the new technique in
many fields. Soon, techniques were developed that allowed holograms
to be viewed with white light. It also became possible for holograms
to reconstruct multicolored images. Holographic methods
have been used to map terrain with radar waves and to conduct surveillance
in the fields of forestry, agriculture, and meteorology. By the 1990's, holography had become a multimillion-dollar industry,
finding applications in advertising, as an art form, and in security
devices on credit cards, as well as in scientific fields. An alternate
form of holography, also suggested by Gabor, uses sound
waves. Acoustical imaging is useful whenever the medium around
the object to be viewed is opaque to light rays—for example, in
medical diagnosis. Holography has affected many areas of science,
technology, and culture.
13 July 2009
Heat pump
The invention: A device that warms and cools buildings efficiently
and cheaply by moving heat from one area to another.
The people behind the invention:
T. G. N. Haldane, a British engineer
Lord Kelvin (William Thomson, 1824-1907), a British
mathematician, scientist, and engineer
Sadi Carnot (1796-1832), a French physicist and
thermodynamicist
Heart-lung machine
The invention: The first artificial device to oxygenate and circulate
blood during surgery, the heart-lung machine began the era of
open-heart surgery.
The people behind the invention:
John H. Gibbon, Jr. (1903-1974), a cardiovascular surgeon
Mary Hopkinson Gibbon (1905- ), a research technician
Thomas J. Watson (1874-1956), chairman of the board of IBM
T. L. Stokes and J. B. Flick, researchers in Gibbon’s laboratory
Bernard J. Miller (1918- ), a cardiovascular surgeon and
research associate
Cecelia Bavolek, the first human to undergo open-heart surgery
successfully using the heart-lung machine
A Young Woman’s Death
In the first half of the twentieth century, cardiovascular medicine
had many triumphs. Effective anesthesia, antiseptic conditions, and
antibiotics made surgery safer. Blood-typing, anti-clotting agents,
and blood preservatives made blood transfusion practical. Cardiac
catheterization (feeding a tube into the heart), electrocardiography,
and fluoroscopy (visualizing living tissues with an X-ray machine)
made the nonsurgical diagnosis of cardiovascular problems possible.
As of 1950, however, there was no safe way to treat damage or defects
within the heart. To make such a correction, this vital organ’s
function had to be interrupted. The problem was to keep the body’s
tissues alive while working on the heart. While some surgeons practiced
so-called blind surgery, in which they inserted a finger into the
heart through a small incision without observing what they were attempting
to correct, others tried to reduce the body’s need for circulation
by slowly chilling the patient until the heart stopped. Still other
surgeons used “cross-circulation,” in which the patient’s circulation
was connected to a donor’s circulation. All these approaches carried
profound risks of hemorrhage, tissue damage, and death.
In February of 1931, Gibbon witnessed the death of a young woman whose lung circulation was blocked by a blood clot. Because
her blood could not pass through her lungs, she slowly lost
consciousness from lack of oxygen. As he monitored her pulse and
breathing, Gibbon thought about ways to circumvent the obstructed
lungs and straining heart and provide the oxygen required. Because
surgery to remove such a blood clot was often fatal, the woman’s
surgeons operated only as a last resort. Though the surgery took
only six and one-half minutes, she never regained consciousness.
This experience prompted Gibbon to pursue what few people then
considered a practical line of research: a way to circulate and oxygenate
blood outside the body.
A Woman’s Life Restored
Gibbon began the project in earnest in 1934, when he returned to
the laboratory of Edward D. Churchill at Massachusetts General
Hospital for his second surgical research fellowship. He was assisted
by Mary Hopkinson Gibbon. Together, they developed, using
cats, a surgical technique for removing blood from a vein, supplying
the blood with oxygen, and returning it to an artery using tubes inserted
into the blood vessels. Their objective was to create a device
that would keep the blood moving, spread it over a very thin layer
to pick up oxygen efficiently and remove carbon dioxide, and avoid
both clotting and damaging blood cells. In 1939, they reported that
prolonged survival after heart-lung bypass was possible in experimental
animals.
World War II (1939-1945) interrupted the progress of this work; it
was resumed by Gibbon at Jefferson Medical College in 1944. Shortly
thereafter, he attracted the interest of Thomas J. Watson, chairman of
the board of the International Business Machines (IBM) Corporation,
who provided the services of IBM’s experimental physics laboratory
and model machine shop as well as the assistance of staff engineers.
IBM constructed and modified two experimental machines
over the next seven years, and IBM engineers contributed significantly
to the evolution of a machine that would be practical in humans.
Gibbon’s first attempt to use the pump-oxygenator in a human
being was in a fifteen-month-old baby. This attempt failed, not because of a malfunction or a surgical mistake but because of a misdiagnosis.
The child died following surgery because the real problem
had not been corrected by the surgery.
On May 6, 1953, the heart-lung machine was first used successfully
on Cecelia Bavolek. In the six months before surgery, Bavolek
had been hospitalized three times for symptoms of heart failure
when she tried to engage in normal activity. While her circulation
was connected to the heart-lung machine for forty-five minutes, the
surgical team headed by Gibbon was able to close an opening between
her atria and establish normal heart function. Two months
later, an examination of the defect revealed that it was fully closed;
Bavolek resumed a normal life. The age of open-heart surgery had
begun.
Consequences
The heart-lung bypass technique alone could not make open-heart
surgery truly practical. When it was possible to keep tissues
alive by diverting blood around the heart and oxygenating it, other
questions already under investigation became even more critical:
how to prolong the survival of bloodless organs, how to measure
oxygen and carbon dioxide levels in the blood, and how to prolong
anesthesia during complicated surgery. Thus, following the first
successful use of the heart-lung machine, surgeons continued to refine
the methods of open-heart surgery.
The heart-lung apparatus set the stage for the advent of “replacement
parts” for many types of cardiovascular problems. Cardiac
valve replacement was first successfully accomplished in 1960 by
placing an artificial ball valve between the left atrium and ventricle.
In 1957, doctors performed the first coronary bypass surgery, grafting
sections of a leg vein into the heart’s circulation system to divert
blood around clogged coronary arteries. Likewise, the first successful
heart transplant (1967) and the controversial Jarvik-7 artificial
heart implantation (1982) required the ability to stop the heart and
keep the body’s tissues alive during time-consuming and delicate
surgical procedures. Gibbon’s heart-lung machine paved the way
for all these developments.
09 July 2009
Hearing aid
The invention: Miniaturized electronic amplifier worn inside the
ears of hearing-impaired persons.
The organization behind the invention:
Bell Labs, the research and development arm of the American
Telephone and Telegraph Company
Trapped in Silence
Until the middle of the twentieth century, people who experienced
hearing loss had little hope of being able to hear sounds without the
use of large, awkward, heavy appliances. For many years, the only
hearing aids available were devices known as ear trumpets. The ear
trumpet tried to compensate for hearing loss by increasing the number
of sound waves funneled into the ear canal. A wide, bell-like
mouth similar to the bell of a musical trumpet narrowed to a tube that
the user placed in his or her ear. Ear trumpets helped a little, but they
could not truly increase the volume of the sounds heard.
Beginning in the nineteenth century, inventors tried to develop
electrical devices that would serve as hearing aids. The telephone
was actually a by-product of Alexander Graham Bell’s efforts to
make a hearing aid. Following the invention of the telephone, electrical
engineers designed hearing aids that employed telephone
technology, but those hearing aids were only a slight improvement
over the old ear trumpets. They required large, heavy battery packs
and used a carbon microphone similar to the receiver in a telephone.
More sensitive than purely physical devices such as the ear trumpet,
they could transmit a wider range of sounds but could not amplify
them as effectively as electronic hearing aids now do.
Transistors Make Miniaturization Possible
Two types of hearing aids exist: body-worn and head-worn.
Body-worn hearing aids permit the widest range of sounds to be
heard, but because of the devices’ larger size, many hearing impaired persons do not like to wear them. Head-worn hearing
aids, especially those worn completely in the ear, are much less conspicuous.
In addition to in-ear aids, the category of head-worn hearing
aids includes both hearing aids mounted in eyeglass frames and
those worn behind the ear.
All hearing aids, whether head-worn or body-worn, consist of
four parts: a microphone to pick up sounds, an amplifier, a receiver,
and a power source. The microphone gathers sound waves and converts
them to electrical signals; the amplifier boosts, or increases,
those signals; and the receiver then converts the signals back into
sound waves. In effect, the hearing aid is a miniature radio. After
the receiver converts the signals back to sound waves, those waves
are directed into the ear canal through an earpiece or ear mold. The
ear mold generally is made of plastic and is custom fitted from an
impression taken from the prospective user’s ear.
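The microphone-amplifier-receiver chain can be sketched, purely as an illustration, in a few lines of Python; the gain, clipping limit, and sample values below are hypothetical, and the hearing aids described here were analog rather than digital devices.

def amplify(samples, gain, limit=1.0):
    """Boost normalized microphone samples by a fixed gain, clipping at the receiver's output limit."""
    return [max(-limit, min(limit, s * gain)) for s in samples]

# Hypothetical microphone samples, normalized to the range -1.0 to 1.0.
microphone_samples = [0.01, -0.02, 0.015, -0.005]

# The amplified signal that the receiver would convert back into sound waves.
receiver_samples = amplify(microphone_samples, gain=30.0)
print(receiver_samples)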
Effective head-worn hearing aids could not be built until the
transistor became widely available in the early 1950's. The same invention
that led to small portable radios and tape
players allowed engineers to create miniaturized, inconspicuous
hearing aids. Depending on the degree of amplification required,
the amplifier in a hearing aid contains three or more transistors.
Transistors first replaced vacuum tubes in devices such as radios
and phonographs, and then engineers realized that they could be
used in devices for the hearing-impaired.
The research at Bell Labs that led to the invention of the transistor
grew out of military research during World War II. The vacuum tubes
used in, for example, radar installations to amplify the strength of electronic
signals were big, were fragile because they were made of
blown glass, and gave off high levels of heat when they were used.
Transistors, however, made it possible to build solid-state integrated
circuits. These are made from crystals of semiconductor materials such as
germanium, silicon, or gallium arsenide and therefore are much less fragile
than glass. They are also extremely small (in fact, some integrated circuits
are barely visible to the naked eye) and give off very little heat during use.
The number of transistors in a hearing aid varies depending upon
the amount of amplification required. The first transistor is the most
important for the listener in terms of the quality of sound heard. If the
frequency response is set too high—that is, if the device is too sensitive—the listener will be bothered by distracting background noise.
Theoretically, there is no limit on the amount of amplification that a
hearing aid can be designed to provide, but there are practical limits.
The higher the amplification, the more power is required to operate
the hearing aid. This is why body-worn hearing aids can convey a
wider range of sounds than head-worn devices can. It is the power
source—not the electronic components—that is the limiting factor. A
body-worn hearing aid includes a larger battery pack than can be
used with a head-worn device. Indeed, despite advances in battery
technology, the power requirements of a head-worn hearing aid are
such that a 1.4-volt battery that could power a wristwatch for several
years will last only a few days in a hearing aid.
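A back-of-the-envelope comparison shows why; the capacity and current figures below are assumed, order-of-magnitude values rather than specifications from the text. A button cell holding about 200 milliampere-hours powers a wristwatch drawing roughly one microampere for

$$ \frac{200\ \mathrm{mAh}}{0.001\ \mathrm{mA}} = 200{,}000\ \mathrm{hours} \approx 23\ \mathrm{years}, $$

but powers a hearing aid drawing on the order of one milliampere for only

$$ \frac{200\ \mathrm{mAh}}{1\ \mathrm{mA}} = 200\ \mathrm{hours}, $$

roughly a week of continuous use.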
Consequences
The invention of the electronic hearing aid made it possible for
many hearing-impaired persons to participate in a hearing world.
Prior to the invention of the hearing aid, hearing-impaired children
often were unable to participate in routine school activities or function
effectively in mainstream society. Instead of being able to live at
home with their families and enjoy the same experiences that were
available to other children their age, often they were forced to attend
special schools operated by the state or by charities.
Hearing-impaired people were singled out as being different and
were limited in their choice of occupations. Although not every
hearing-impaired person can be helped to hear with a hearing aid—
particularly in cases of total hearing loss—the electronic hearing aid
has ended restrictions for many hearing-impaired people. Hearingimpaired
children are now included in public school classes, and
hearing-impaired adults can now pursue occupations from which
they were once excluded.
Today, many deaf and hearing-impaired persons have chosen to
live without the help of a hearing aid. They believe that they are not
disabled but simply different, and they point out that their “disability”
often allows them to appreciate and participate in life in unique
and positive ways. For them, the use of hearing aids is a choice, not a
necessity. For those who choose, hearing aids make it possible to
participate in the hearing world.