16 July 2009
Holography
The invention: A lensless system of three-dimensional photography
that was one of the most important developments in twentieth
century optical science.
The people behind the invention:
Dennis Gabor (1900-1979), a Hungarian-born inventor and
physicist who was awarded the 1971 Nobel Prize in Physics
Emmett Leith (1927- ), a radar researcher who, with Juris
Upatnieks, produced the first laser holograms
Juris Upatnieks (1936- ), a radar researcher who, with
Emmett Leith, produced the first laser holograms
Easter Inspiration
The development of photography in the early 1900’s made possible
the recording of events and information in ways unknown before
the twentieth century: the photographing of star clusters, the
recording of the emission spectra of heated elements, the storing of
data in the form of small recorded images (for example, microfilm),
and the photographing of microscopic specimens, among other
things. Because of its vast importance to the scientist, the science of
photography has developed steadily.
An understanding of the photographic and holographic processes
requires some knowledge of the wave behavior of light. Light is an
electromagnetic wave that, like a water wave, has an amplitude and a
phase. The amplitude corresponds to the wave height, while the
phase indicates which part of the wave is passing a given point at a
given time. A cork floating in a pond bobs up and down as waves
pass under it. The position of the cork at any time depends on both
amplitude and phase: The phase determines on which part of the
wave the cork is floating at any given time, and the amplitude determines
how high or low the cork can be moved. Waves from more
than one source arriving at the cork combine in ways that depend on
their relative phases. If the waves meet in the same phase, they add
and produce a large amplitude; if they arrive out of phase, they subtract and produce a small amplitude. The resulting amplitude, and hence the intensity,
depends on the phases of the combining waves.
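A short worked equation may make the cork example concrete; the symbols (amplitude A, angular frequency ω, phase difference Δφ) are ours, not the article's. Two equal waves arriving at the same point combine as

\[
A\cos(\omega t) + A\cos(\omega t + \Delta\phi) = 2A\cos\!\left(\frac{\Delta\phi}{2}\right)\cos\!\left(\omega t + \frac{\Delta\phi}{2}\right),
\]

so the combined amplitude is 2A when the waves meet in phase (Δφ = 0) and zero when they arrive completely out of phase (Δφ = π).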
Dennis Gabor, the inventor of holography, was intrigued by the
way in which the photographic image of an object was stored by a
photographic plate but was unable to devote any consistent research
effort to the question until the 1940’s. At that time, Gabor was involved
in the development of the electron microscope. On Easter
morning in 1947, as Gabor was pondering the problem of how to
improve the electron microscope, the solution came to him. He
would attempt to take a poor electron picture and then correct it optically.
The process would require coherent electron beams—that is,
electron waves with a definite phase.
This two-stage method was inspired by the work of Lawrence
Bragg. Bragg had formed the image of a crystal lattice by diffracting
the photographic X-ray diffraction pattern of the original lattice.
This double diffraction process is the basis of the holographic process.
Bragg’s method was limited because of his inability to record
the phase information of the X-ray photograph. Therefore, he could
study only those crystals for which the phase relationship of the reflected
waves could be predicted.
Waiting for the Laser
Gabor devised a way of capturing the phase information after he
realized that adding coherent background to the wave reflected from
an object would make it possible to produce an interference pattern
on the photographic plate. When the phases of the two waves are
identical, a maximum intensity will be recorded; when they are out of
phase, a minimum intensity is recorded. Therefore, what is recorded
in a hologram is not an image of the object but rather the interference
pattern of the two coherent waves. This pattern looks like a collection
of swirls and blank spots. The hologram (or photograph) is then illuminated
by the reference beam, and part of the transmitted light is a
replica of the original object wave. When viewing this object wave,
one sees an exact replica of the original object.
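In the standard notation of holography textbooks (ours, not the article's), the recording and reconstruction steps can be summarized in one line. With object wave $O$ and reference wave $R$, the plate records the intensity

\[
I = |O + R|^{2} = |O|^{2} + |R|^{2} + O R^{*} + O^{*} R .
\]

Illuminating the developed plate with the reference beam again transmits a term proportional to $|R|^{2} O$, a replica of the original object wave, which is what the viewer perceives as the object.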
The major impediment at the time in making holograms using
any form of radiation was a lack of coherent sources. For example,
the coherence length of the mercury lamp used by Gabor and his assistant Ivor Williams was so short that they were able to make holograms of
only about a centimeter in diameter. The early results were rather
poor in terms of image quality and also had a double image. For this
reason, there was little interest in holography, and the subject lay almost
untouched for more than ten years.
Interest in the field was rekindled after the laser (light amplification
by stimulated emission of radiation) was developed in 1960.
Emmett Leith and Juris Upatnieks, who were conducting radar research
at the University of Michigan, published the first laser holograms
in 1963. The laser was an intense light source with a very
long coherence length. Its monochromatic nature improved the resolution
of the images greatly. Also, there was no longer any restriction
on the size of the object to be photographed.
The availability of the laser allowed Leith and Upatnieks to propose
another improvement in holographic technique. Before 1964,
holograms were made of only thin transparent objects. A small region
of the hologram bore a one-to-one correspondence to a region
of the object. Only a small portion of the image could be viewed at
one time without the aid of additional optical components. Illuminating
the transparency diffusely allowed the whole image to be
seen at one time. This development also made it possible to record
holograms of diffusely reflected three-dimensional objects. Gabor
had seen from the beginning that this should make it possible to create
three-dimensional images.
After the early 1960’s, the field of holography developed very
quickly. Because holography is different from conventional photography,
the two techniques often complement each other. Gabor saw
his idea blossom into a very important technique in optical science.
Impact
The development of the laser and the publication of the first laser
holograms in 1963 caused a blossoming of the new technique in
many fields. Soon, techniques were developed that allowed holograms
to be viewed with white light. It also became possible for holograms
to reconstruct multicolored images. Holographic methods
have been used to map terrain with radar waves and to conduct surveillance
in the fields of forestry, agriculture, and meteorology.
By the 1990’s, holography had become a multimillion-dollar industry,
finding applications in advertising, as an art form, and in security
devices on credit cards, as well as in scientific fields. An alternate
form of holography, also suggested by Gabor, uses sound
waves. Acoustical imaging is useful whenever the medium around
the object to be viewed is opaque to light rays—for example, in
medical diagnosis. Holography has affected many areas of science,
technology, and culture.
13 July 2009
Heat pump
The invention:
A device that warms and cools buildings efficiently
and cheaply by moving heat from one area to another.
The people behind the invention:
T. G. N. Haldane, a British engineer
Lord Kelvin (William Thomson, 1824-1907), a British
mathematician, scientist, and engineer
Sadi Carnot (1796-1832), a French physicist and
thermodynamicist
Heart-lung machine
The invention: The first artificial device to oxygenate and circulate
blood during surgery, the heart-lung machine began the era of
open-heart surgery.
The people behind the invention:
John H. Gibbon, Jr. (1903-1974), a cardiovascular surgeon
Mary Hopkinson Gibbon (1905- ), a research technician
Thomas J. Watson (1874-1956), chairman of the board of IBM
T. L. Stokes and J. B. Flick, researchers in Gibbon’s laboratory
Bernard J. Miller (1918- ), a cardiovascular surgeon and
research associate
Cecelia Bavolek, the first human to undergo open-heart surgery
successfully using the heart-lung machine
A Young Woman’s Death
In the first half of the twentieth century, cardiovascular medicine
had many triumphs. Effective anesthesia, antiseptic conditions, and
antibiotics made surgery safer. Blood-typing, anti-clotting agents,
and blood preservatives made blood transfusion practical. Cardiac
catheterization (feeding a tube into the heart), electrocardiography,
and fluoroscopy (visualizing living tissues with an X-ray machine)
made the nonsurgical diagnosis of cardiovascular problems possible.
As of 1950, however, there was no safe way to treat damage or defects
within the heart. To make such a correction, this vital organ’s
function had to be interrupted. The problem was to keep the body’s
tissues alive while working on the heart. While some surgeons practiced
so-called blind surgery, in which they inserted a finger into the
heart through a small incision without observing what they were attempting
to correct, others tried to reduce the body’s need for circulation
by slowly chilling the patient until the heart stopped. Still other
surgeons used “cross-circulation,” in which the patient’s circulation
was connected to a donor’s circulation. All these approaches carried
profound risks of hemorrhage, tissue damage, and death.
In February of 1931, Gibbon witnessed the death of a young woman whose lung circulation was blocked by a blood clot. Because
her blood could not pass through her lungs, she slowly lost
consciousness from lack of oxygen. As he monitored her pulse and
breathing, Gibbon thought about ways to circumvent the obstructed
lungs and straining heart and provide the oxygen required. Because
surgery to remove such a blood clot was often fatal, the woman’s
surgeons operated only as a last resort. Though the surgery took
only six and one-half minutes, she never regained consciousness.
This experience prompted Gibbon to pursue what few people then
considered a practical line of research: a way to circulate and oxygenate
blood outside the body.
A Woman’s Life Restored
Gibbon began the project in earnest in 1934, when he returned to
the laboratory of Edward D. Churchill at Massachusetts General
Hospital for his second surgical research fellowship. He was assisted
by Mary Hopkinson Gibbon. Together, they developed, using
cats, a surgical technique for removing blood from a vein, supplying
the blood with oxygen, and returning it to an artery using tubes inserted
into the blood vessels. Their objective was to create a device
that would keep the blood moving, spread it over a very thin layer
to pick up oxygen efficiently and remove carbon dioxide, and avoid
both clotting and damaging blood cells. In 1939, they reported that
prolonged survival after heart-lung bypass was possible in experimental
animals.
World War II (1939-1945) interrupted the progress of this work; it
was resumed by Gibbon at Jefferson Medical College in 1944. Shortly
thereafter, he attracted the interest of Thomas J. Watson, chairman of
the board of the International Business Machines (IBM) Corporation,
who provided the services of IBM’s experimental physics laboratory
and model machine shop as well as the assistance of staff engineers.
IBM constructed and modified two experimental machines
over the next seven years, and IBM engineers contributed significantly
to the evolution of a machine that would be practical in humans.
Gibbon’s first attempt to use the pump-oxygenator in a human
being was in a fifteen-month-old baby. This attempt failed, not because of a malfunction or a surgical mistake but because of a misdiagnosis.
The child died following surgery because the real problem
had not been corrected by the surgery.
On May 6, 1953, the heart-lung machine was first used successfully
on Cecelia Bavolek. In the six months before surgery, Bavolek
had been hospitalized three times for symptoms of heart failure
when she tried to engage in normal activity. While her circulation
was connected to the heart-lung machine for forty-five minutes, the
surgical team headed by Gibbon was able to close an opening between
her atria and establish normal heart function. Two months
later, an examination of the defect revealed that it was fully closed;
Bavolek resumed a normal life. The age of open-heart surgery had
begun.
Consequences
The heart-lung bypass technique alone could not make open-heart
surgery truly practical. When it was possible to keep tissues
alive by diverting blood around the heart and oxygenating it, other
questions already under investigation became even more critical:
how to prolong the survival of bloodless organs, how to measure
oxygen and carbon dioxide levels in the blood, and how to prolong
anesthesia during complicated surgery. Thus, following the first
successful use of the heart-lung machine, surgeons continued to refine
the methods of open-heart surgery.
The heart-lung apparatus set the stage for the advent of “replacement
parts” for many types of cardiovascular problems. Cardiac
valve replacement was first successfully accomplished in 1960 by
placing an artificial ball valve between the left atrium and ventricle.
In 1957, doctors performed the first coronary bypass surgery, grafting
sections of a leg vein into the heart’s circulation system to divert
blood around clogged coronary arteries. Likewise, the first successful
heart transplant (1967) and the controversial Jarvik-7 artificial
heart implantation (1982) required the ability to stop the heart and
keep the body’s tissues alive during time-consuming and delicate
surgical procedures. Gibbon’s heart-lung machine paved the way
for all these developments.
09 July 2009
Hearing aid
The invention: Miniaturized electronic amplifier worn inside the
ears of hearing-impaired persons.
The organization behind the invention:
Bell Labs, the research and development arm of the American
Telephone and Telegraph Company
Trapped in Silence
Until the middle of the twentieth century, people who experienced
hearing loss had little hope of being able to hear sounds without the
use of large, awkward, heavy appliances. For many years, the only
hearing aids available were devices known as ear trumpets. The ear
trumpet tried to compensate for hearing loss by increasing the amount
of sound energy funneled into the ear canal. A wide, bell-like
mouth similar to the bell of a musical trumpet narrowed to a tube that
the user placed in his or her ear. Ear trumpets helped a little, but they
could not truly increase the volume of the sounds heard.
Beginning in the nineteenth century, inventors tried to develop
electrical devices that would serve as hearing aids. The telephone
was actually a by-product of Alexander Graham Bell’s efforts to
make a hearing aid. Following the invention of the telephone, electrical
engineers designed hearing aids that employed telephone
technology, but those hearing aids were only a slight improvement
over the old ear trumpets. They required large, heavy battery packs
and used a carbon microphone similar to the transmitter in a telephone.
More sensitive than purely physical devices such as the ear trumpet,
they could transmit a wider range of sounds but could not amplify
them as effectively as electronic hearing aids now do.
Transistors Make Miniaturization Possible
Two types of hearing aids exist: body-worn and head-worn.
Body-worn hearing aids permit the widest range of sounds to be
heard, but because of the devices’ larger size, many hearing-impaired persons do not like to wear them. Head-worn hearing
aids, especially those worn completely in the ear, are much less conspicuous.
In addition to in-ear aids, the category of head-worn hearing
aids includes both hearing aids mounted in eyeglass frames and
those worn behind the ear.
All hearing aids, whether head-worn or body-worn, consist of
four parts: a microphone to pick up sounds, an amplifier, a receiver,
and a power source. The microphone gathers sound waves and converts
them to electrical signals; the amplifier boosts, or increases,
those signals; and the receiver then converts the signals back into
sound waves. In effect, the hearing aid is a miniature radio. After
the receiver converts the signals back to sound waves, those waves
are directed into the ear canal through an earpiece or ear mold. The
ear mold generally is made of plastic and is custom fitted from an
impression taken from the prospective user’s ear.
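The microphone-amplifier-receiver chain described above can be sketched in a few lines of code. This is only a toy model assuming digitized samples and a single fixed gain stage; the hearing aids discussed in this article were analog transistor circuits, and the names and numbers below are invented for illustration.

```python
def amplify(samples, gain, limit=1.0):
    """Boost microphone samples by a fixed gain, clipping at the
    receiver's maximum output level (all values illustrative)."""
    boosted = []
    for s in samples:
        v = s * gain
        # A real receiver cannot exceed its maximum output, so clip.
        boosted.append(max(-limit, min(limit, v)))
    return boosted

# Example: a quiet microphone signal boosted by a gain of 20.
mic_samples = [0.01, -0.02, 0.015, -0.005]
print(amplify(mic_samples, gain=20))
```

Higher gain multiplies each sample by a larger factor; the article's point is that, in hardware, that extra amplification is what drives up the power drawn from the battery.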
Effective head-worn hearing aids could not be built until miniature
transistor circuits became available in the early 1950’s. The same invention—
the transistor—that led to small portable radios and tape
players allowed engineers to create miniaturized, inconspicuous
hearing aids. Depending on the degree of amplification required,
the amplifier in a hearing aid contains three or more transistors.
Transistors first replaced vacuum tubes in devices such as radios
and phonographs, and then engineers realized that they could be
used in devices for the hearing-impaired.
The research at Bell Labs that led to the invention of the transistor
arose out of military research during World War II. The vacuum tubes
used in, for example, radar installations to amplify the strength of electronic
signals were big, were fragile because they were made of
blown glass, and gave off high levels of heat when they were used.
Transistors, however, made it possible to build solid-state integrated
circuits. These are made from crystals of semiconductors such as germanium
or gallium arsenide and therefore are much less fragile than glass. They
are also extremely small (in fact, some integrated circuits are barely
visible to the naked eye) and give off very little heat during use.
The number of transistors in a hearing aid varies depending upon
the amount of amplification required. The first transistor is the most
important for the listener in terms of the quality of sound heard. If the
frequency response is set too high—that is, if the device is too sensitive—the listener will be bothered by distracting background noise.
Theoretically, there is no limit on the amount of amplification that a
hearing aid can be designed to provide, but there are practical limits.
The higher the amplification, the more power is required to operate
the hearing aid. This is why body-worn hearing aids can convey a
wider range of sounds than head-worn devices can. It is the power
source—not the electronic components—that is the limiting factor. A
body-worn hearing aid includes a larger battery pack than can be
used with a head-worn device. Indeed, despite advances in battery
technology, the power requirements of a head-worn hearing aid are
such that a 1.4-volt battery that could power a wristwatch for several
years will last only a few days in a hearing aid.
Consequences
The invention of the electronic hearing aid made it possible for
many hearing-impaired persons to participate in a hearing world.
Prior to the invention of the hearing aid, hearing-impaired children
often were unable to participate in routine school activities or function
effectively in mainstream society. Instead of being able to live at
home with their families and enjoy the same experiences that were
available to other children their age, often they were forced to attend
special schools operated by the state or by charities.
Hearing-impaired people were singled out as being different and
were limited in their choice of occupations. Although not every
hearing-impaired person can be helped to hear with a hearing aid—
particularly in cases of total hearing loss—the electronic hearing aid
has ended restrictions for many hearing-impaired people. Hearing-impaired
children are now included in public school classes, and
hearing-impaired adults can now pursue occupations from which
they were once excluded.
Today, many deaf and hearing-impaired persons have chosen to
live without the help of a hearing aid. They believe that they are not
disabled but simply different, and they point out that their “disability”
often allows them to appreciate and participate in life in unique
and positive ways. For them, the use of hearing aids is a choice, not a
necessity. For those who choose, hearing aids make it possible to
participate in the hearing world.
Hard disk
The invention: A large-capacity, permanent magnetic storage device
built into most personal computers.
The people behind the invention:
Alan Shugart (1930- ), an engineer who first developed the
floppy disk
Philip D. Estridge (1938?-1985), the director of IBM’s product
development facility
Thomas J. Watson, Jr. (1914-1993), the chief executive officer of
IBM
The Personal Oddity
When the International Business Machines (IBM) Corporation
introduced its first microcomputer, called simply the IBM PC (for
“personal computer”), the occasion was less a dramatic invention
than the confirmation of a trend begun some years before. A number
of companies had introduced microcomputers before IBM; one
of the best known at that time was Apple Computer’s Apple II, for
which software for business and scientific use was quickly developed.
Nevertheless, the microcomputer was quite expensive and
was often looked upon as an oddity, not as a useful tool.
Under the leadership of Thomas J. Watson, Jr., IBM, which had
previously focused on giant mainframe computers, decided to develop
the PC. A design team headed by Philip D. Estridge was assembled
in Boca Raton, Florida, and it quickly developed its first,
pacesetting product. It is an irony of history that IBM anticipated
selling only one hundred thousand or so of these machines, mostly
to scientists and technically inclined hobbyists. Instead, IBM’s product
sold exceedingly well, and its design parameters, as well as its
operating system, became standards.
The earliest microcomputers used a cassette recorder as a means
of mass storage; a floppy disk drive capable of storing approximately
160 kilobytes of data was initially offered only as an option.
While home hobbyists were accustomed to using a cassette recorder for storage purposes, such a system was far too slow and awkward
for use in business and science. As a result, virtually every IBM PC
sold was equipped with at least one 5.25-inch floppy disk drive.
Memory Requirements
All computers require memory of two sorts in order to carry out
their tasks. One type of memory is main memory, or random access
memory (RAM), which is used by the computer’s central processor
to store data it is using while operating. The type of memory used
for this function is built typically of silicon-based integrated circuits
that have the advantage of speed (to allow the processor to fetch or
store the data quickly), but the disadvantage of possibly losing or
“forgetting” data when the electric current is turned off. Further,
such memory generally is relatively expensive.
To reduce costs, another type of memory—long-term storage
memory, known also as “mass storage”—was developed. Mass
storage devices include magnetic media (tape or disk drives) and
optical media (such as the compact disc read-only memory, or CD-ROM).
While the speed with which data may be retrieved from or
stored in such devices is rather slow compared to the central processor’s
speed, a disk drive—the most common form of mass storage
used in PCs—can store relatively large amounts of data quite inexpensively.
Early floppy disk drives (so called because the magnetically
treated material on which data are recorded is made of a very flexible
plastic) held 160 kilobytes of data using only one side of the
magnetically coated disk (about eighty pages of normal, double-spaced,
typewritten information). Later developments increased
storage capacities to 360 kilobytes by using both sides of the disk
and later, with increasing technological ability, 1.44 megabytes (millions
of bytes). In contrast, mainframe computers, which are typically
connected to large and expensive tape drive storage systems,
could store gigabytes (thousands of megabytes) of information.
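Using the article's own figure of roughly eighty double-spaced pages per 160 kilobytes (about two kilobytes of text per page), the capacities mentioned in this section translate into page counts as follows; the two-kilobyte page size is an assumption derived from that figure, and the byte counts are approximate.

```python
# Rough page counts implied by the article's "80 pages per 160 KB" figure.
BYTES_PER_PAGE = 2 * 1024  # assumed: about 2 KB of text per double-spaced page

capacities = {
    "single-sided floppy (160 KB)": 160 * 1024,
    "double-sided floppy (360 KB)": 360 * 1024,
    "high-density floppy (1.44 MB)": 1_440_000,
    "PC XT hard disk (10 MB)": 10_000_000,
}

for name, size_bytes in capacities.items():
    print(f"{name}: about {size_bytes // BYTES_PER_PAGE:,} pages")
```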
While such capacities seem large, the needs of business and scientific
users soon outstripped available space. Since even the mailing
list of a small business or a scientist’s mathematical model of a
chemical reaction easily could require greater storage potential than early PCs allowed, the need arose for a mass storage device that
could accommodate very large files of data.
The answer was the hard disk drive, also known as a “fixed disk
drive,” reflecting the fact that the disk itself is not only rigid but also
permanently installed inside the machine. In 1955, IBM had envisioned
the notion of a fixed, hard magnetic disk as a means of storing
computer data, and, under the direction of Alan Shugart in the
1960’s, the floppy disk was developed as well.
As the engineers of IBM’s facility in Boca Raton refined the idea
of the original PC to design the new IBM PC XT, it became clear that
chief among the needs of users was the availability of large-capacity
storage devices. The decision was made to add a 10-megabyte
hard disk drive to the PC. On March 8, 1983, less than two years after
the introduction of its first PC, IBM introduced the PC XT. Like
the original, it was an evolutionary design, not a revolutionary one.
The inclusion of a hard disk drive, however, signaled that mass storage
devices in personal computers had arrived.
Consequences
Above all else, any computer provides a means for storing, ordering,
analyzing, and presenting information. If the personal computer
is to become the information appliance some have suggested
it will be, the ability to manipulate very large amounts of data will
be of paramount concern. Hard disk technology was greeted enthusiastically
in the marketplace, and the demand for hard drives has
seen their numbers increase as their quality increases and their
prices drop.
It is easy to understand one reason for such eager acceptance:
convenience. Floppy-bound computer users find themselves frequently
changing (or “swapping”) their disks in order to allow programs
to find the data they need. Moreover, there is a limit to how
much data a single floppy disk can hold. The advantage of a hard
drive is that it allows users to keep seemingly unlimited amounts of
data and programs stored in their machines and readily available.
Also, hard disk drives are capable of finding files and transferring
their contents to the processor much more quickly than a
floppy drive. A user may thus create exceedingly large files, keep them on hand at all times, and manipulate data more quickly than
with a floppy. Finally, while a hard drive is a slow substitute for
main memory, it allows users to enjoy the benefits of larger memories
at significantly lower cost.
The introduction of the PC XT with its 10-megabyte hard drive
was a milestone in the development of the PC. Over the next two decades,
the size of computer hard drives increased dramatically. By
2001, few personal computers were sold with hard drives with less
than three gigabytes of storage capacity, and hard drives with more
than thirty gigabytes were becoming the standard. Indeed, for less
money than a PC XT cost in the mid-1980’s, one could buy a fully
equipped computer with a hard drive holding sixty gigabytes—a
storage capacity equivalent to six thousand 10-megabyte hard drives.
Gyrocompass
The invention: The first practical navigational device that enabled
ships and submarines to stay on course without relying on the
earth’s unreliable magnetic poles.
The people behind the invention:
Hermann Anschütz-Kaempfe (1872-1931), a German inventor
and manufacturer
Jean-Bernard-Léon Foucault (1819-1868), a French experimental
physicist and inventor
Elmer Ambrose Sperry (1860-1930), an American engineer and
inventor
From Toys to Tools
A gyroscope consists of a rapidly spinning wheel mounted in a
frame that enables the wheel to tilt freely in any direction. The
wheel’s angular momentum allows it to maintain its “attitude”
even when the whole device is turned or rotated.
These devices have been used to solve problems arising in such
areas as sailing and navigation. For example, a gyroscope aboard a
ship maintains its orientation even while the ship is rolling. Among
other things, this allows the extent of the roll to be measured accurately.
Moreover, the spin axis of a free gyroscope can be adjusted to
point toward true north. It will (with some exceptions) stay that
way despite changes in the direction of a vehicle in which it is
mounted. Gyroscopic effects were employed in the design of various
objects long before the theory behind them was formally
known. A classic example is a child’s top, which balances, seemingly
in defiance of gravity, as long as it continues to spin. Boomerangs
and flying disks derive stability and accuracy from the spin
imparted by the thrower. Likewise, the accuracy of rifles improved
when barrels were manufactured with internal spiral grooves that
caused the emerging bullet to spin.
In 1852, the French inventor Jean-Bernard-Léon Foucault built
the first gyroscope, a measuring device consisting of a rapidly spinning
wheel mounted within concentric rings that allowed the wheel to move freely about two axes. This device, like the Foucault pendulum,
was used to demonstrate the rotation of the earth around its
axis, since the spinning wheel, which is not fixed, retains its orientation
in space while the earth turns under it. The gyroscope had a related
interesting property: As it continued to spin, the force of the
earth’s rotation caused its axis to rotate gradually until it was oriented
parallel to the earth’s axis, that is, in a north-south direction. It
is this property that enables the gyroscope to be used as a compass.
When Magnets Fail
In 1904, Hermann Anschütz-Kaempfe, a German manufacturer
working in the Kiel shipyards, became interested in the navigation
problems of submarines used in exploration under the polar ice cap.
By 1905, efficient working submarines were a reality, and it was evident
to all major naval powers that submarines would play an increasingly
important role in naval strategy.
Submarine navigation posed problems, however, that could not
be solved by instruments designed for surface vessels. A submarine
needs to orient itself under water in three dimensions; it has no automatic
horizon with respect to which it can level itself. Navigation
by means of stars or landmarks is impossible when the submarine is
submerged. Furthermore, in an enclosed metal hull containing machinery
run by electricity, a magnetic compass is worthless. To a
lesser extent, increasing use of metal, massive moving parts, and
electrical equipment had also rendered the magnetic compass unreliable
in conventional surface battleships.
It made sense for Anschütz-Kaempfe to use the gyroscopic effect
to design an instrument that would enable a ship to maintain its
course while under water. Yet producing such a device would not be
easy. First, it needed to be suspended in such a way that it was free to
turn in any direction with as little mechanical resistance as possible.
At the same time, it had to be able to resist the inevitable pitching and
rolling of a vessel at sea. Finally, a continuous power supply was required
to keep the gyroscopic wheels spinning at high speed.
The original Anschütz-Kaempfe gyrocompass consisted of a pair
of spinning wheels driven by an electric motor. The device was connected
to a compass card visible to the ship’s navigator. Motor, gyroscope, and suspension system were mounted in a frame that allowed
the apparatus to remain stable despite the pitch and roll of the ship.
In 1906, the German navy installed a prototype of the Anschütz-
Kaempfe gyrocompass on the battleship Undine and subjected it to
exhaustive tests under simulated battle conditions, sailing the ship
under forced draft and suddenly reversing the engines, changing the
position of heavy turrets and other mechanisms, and firing heavy
guns. In conditions under which a magnetic compass would have
been worthless, the gyrocompass proved a satisfactory navigational
tool, and the results were impressive enough to convince the German
navy to undertake installation of gyrocompasses in submarines and
heavy battleships, including the battleship Deutschland.
Elmer Ambrose Sperry, a New York inventor intimately associated
with pioneer electrical development, was independently working on a design for a gyroscopic compass at about the same time.
In 1907, he patented a gyrocompass consisting of a single rotor
mounted within two concentric shells, suspended by fine piano
wire from a frame mounted on gimbals. The rotor of the Sperry
compass operated in a vacuum, which enabled it to rotate more
rapidly. The Sperry gyrocompass was in use on larger American
battleships and submarines on the eve of World War I (1914-1918).
Impact
The ability to navigate submerged submarines was of critical
strategic importance in World War I. Initially, the German navy
had an advantage both in the number of submarines at its disposal
and in their design and maneuverability. The German U-boat fleet
declared all-out war on Allied shipping, and, although their efforts
to blockade England and France were ultimately unsuccessful, the
tremendous toll they inflicted helped maintain the German position
and prolong the war. To a submarine fleet operating throughout
the Atlantic and in the Caribbean, as well as in near-shore European
waters, effective long-distance navigation was critical.
Gyrocompasses were standard equipment on submarines and
battleships and, increasingly, on larger commercial vessels during
World War I, World War II (1939-1945), and the period between the
wars. The devices also found their way into aircraft, rockets, and
guided missiles. Although the compasses were made more accurate
and easier to use, the fundamental design differed little from that invented
by Anschütz-Kaempfe.
05 July 2009
Geothermal power
The invention: Energy generated from the earth’s natural hot
springs.
The people behind the invention:
Prince Piero Ginori Conti (1865-1939), an Italian nobleman and
industrialist
Sir Charles Parsons (1854-1931), an English engineer
B. C. McCabe, an American businessman
Developing a Practical System
The first successful use of geothermal energy was at Larderello in
northern Italy. The Larderello geothermal field, located near the city
of Pisa about 240 kilometers northwest of Rome, contains many hot
springs and fumaroles (steam vents). In 1777, these springs were
found to be rich in boron, and in 1818, Francesco de Larderel began
extracting the useful mineral borax from them. Shortly after 1900,
Prince Piero Ginori Conti, director of the Larderello borax works,
conceived the idea of using the steam for power production. An experimental
electrical power plant was constructed at Larderello in
1904 to provide electric power to the borax plant. After this initial
experiment proved successful, a 250-kilowatt generating station
was installed in 1913 and commercial power production began.
As the Larderello field grew, additional geothermal sites throughout
the region were prospected and tapped for power. Power production
grew steadily until the 1940’s, when production reached
130 megawatts; however, the Larderello power plants were destroyed
late in World War II (1939-1945). After the war, the generating
plants were rebuilt, and they were producing more than 400
megawatts by 1980.
The Larderello power plants encountered many of the technical
problems that were later to concern other geothermal facilities. For
example, hydrogen sulfide in the steam was highly corrosive to copper,
so the Larderello power plant used aluminum for electrical connections
much more than did conventional power plants of the time. Also, the low pressure of the steam in early wells at Larderello
presented problems. The first installations simply used the steam to drive
a generator and vented the spent steam into the atmosphere. A system
of this sort, called a “noncondensing system,” is useful for small
generators but is not efficient enough to produce large amounts of power.
Most steam engines derive power not only from the pressure of
the steam but also from the vacuum created when the steam is condensed
back to water. Geothermal systems that generate power
from condensation, as well as direct steam pressure, are called “condensing
systems.” Most large geothermal generators are of this
type. Condensation of geothermal steam presents special problems
not present in ordinary steam engines: There are other gases present
that do not condense. Instead of a vacuum, condensation of steam
contaminated with other gases would result in only a limited drop
in pressure and, consequently, very low efficiency.
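One way to state the problem (our gloss, not the article's wording): once the steam condenses, the pressure in the condenser cannot fall below the combined partial pressures of the gases that remain, so by Dalton's law

\[
p_{\text{condenser}} \approx p_{\text{CO}_2} + p_{\text{H}_2\text{S}} + \dots,
\]

and the useful pressure drop across the turbine, and with it the efficiency, shrinks accordingly.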
Initially, the operators of Larderello tried to use the steam to heat
boilers that would, in turn, generate pure steam. Eventually, a device
was developed that removed most of the contaminating gases from
the steam. Although later wells at Larderello and other geothermal
fields produced steam at greater pressure, these engineering innovations
improved the efficiency of any geothermal power plant.
Expanding the Idea
In 1913, the English engineer Sir Charles Parsons proposed drilling
an extremely deep (12-kilometer) hole to tap the earth’s deep
heat. Power from such a deep hole would not come from natural
steam as at Larderello but would be generated by pumping fluid
into the hole and generating steam (as hot as 500 degrees Celsius) at
the bottom. In modern terms, Parsons proposed tapping “hot dry rock”
geothermal energy. (No such plant has been commercially operated
yet, but research is being actively pursued in several countries.)
The first use of geothermal energy in the United States was for direct
heating. In 1890, the municipal water company of Boise, Idaho,
began supplying hot water from a geothermal well. Water was
piped from the well to homes and businesses along appropriately
named Warm Springs Avenue. At its peak, the system served more than
available, the number declined.
Although Larderello was the first successful geothermal electric
power plant, the modern era of geothermal electric power began
with the opening of the Geysers Geothermal Field in California.
Early attempts began in the 1920’s, but it was not until 1955 that B.
C. McCabe, a Los Angeles businessman, leased 14.6 square kilometers
in the Geysers area and founded the Magma Power Company.
The first 12.5-megawatt generator was installed at the Geysers in
1960, and production increased steadily from then on. The Geysers
surpassed Larderello as the largest producing geothermal field in
the 1970’s, and more than 1,000 megawatts were being generated by
1980. By the end of 1980, geothermal plants had been installed in
thirteen countries, with a total capacity of almost 2,600 megawatts,
and projects with a total capacity of more than 15,000 megawatts
were being planned in more than twenty countries.
Impact
Geothermal power has many attractive features. Because the
steam is naturally heated and under pressure, generating equipment
can be simple, inexpensive, and quickly installed. Equipment
and installation costs are offset by savings in fuel. It is economically
practical to install small generators, a fact that makes geothermal
plants attractive in remote or underdeveloped areas. Most important
to a world faced with a variety of technical and environmental
problems connected with fossil fuels, geothermal power does not
deplete fossil fuel reserves, produces little pollution, and contributes
little to the greenhouse effect.
Despite its attractive features, geothermal power has some limitations.
Geologic settings suitable for easy geothermal power production
are rare; there must be a hot rock or magma body close to
the surface. Although it is technically possible to pump water from
an external source into a geothermal well to generate steam, most
geothermal sites require a plentiful supply of natural underground
water that can be tapped as a source of steam. In contrast, fossil-fuel
generating plants can be at any convenient location.
Genetically engineered insulin
The invention: Artificially manufactured human insulin (Humulin)
as a medication for people suffering from diabetes.
The people behind the invention:
Irving S. Johnson (1925- ), an American zoologist who was
vice president of research at Eli Lilly Research Laboratories
Ronald E. Chance (1934- ), an American biochemist at Eli
Lilly Research Laboratories
What Is Diabetes?
Carbohydrates (sugars and related chemicals) are the main food
and energy source for humans. In wealthy countries such as the
United States, more than 50 percent of the food people eat is made
up of carbohydrates, while in poorer countries the carbohydrate
content of diets is higher, from 70 to 90 percent.
Normally, most carbohydrates that a person eats are used (or metabolized)
quickly to produce energy. Carbohydrates not needed for
energy are either converted to fat or stored as a glucose polymer
called “glycogen.” Most adult humans carry about a pound of body
glycogen; this substance is broken down to produce energy when it
is needed.
Certain diseases prevent the proper metabolism and storage of
carbohydrates. The most common of these diseases is diabetes mellitus,
usually called simply “diabetes.” It is found in more than seventy
million people worldwide. Diabetic people cannot produce or
use enough insulin, a hormone secreted by the pancreas. When their
condition is not treated, the eyes may deteriorate to the point of
blindness. The kidneys may stop working properly, blood vessels
may be damaged, and the person may fall into a coma and die. In
fact, diabetes is the third most common killer in the United States.
Most of the problems surrounding diabetes are caused by high levels
of glucose in the blood. Cataracts often form in diabetics, as excess
glucose is deposited in the lens of the eye.
Important symptoms of diabetes include constant thirst, excessive urination, and large amounts of sugar in the blood and in the
urine. The glucose tolerance test (GTT) is the best way to find out
whether a person is suffering from diabetes. People given a GTT are
first told to fast overnight. In the morning their blood glucose level
is measured; then they are asked to drink about a fourth of a pound
of glucose dissolved in water. During the next four to six hours, the
blood glucose level is measured repeatedly. In nondiabetics, glucose
levels do not rise above a certain amount during a GTT, and the
level drops quickly as the glucose is assimilated by the body. In diabetics,
the blood glucose levels rise much higher and do not drop as
quickly. The extra glucose then shows up in the urine.
Treating Diabetes
Until the 1920’s, diabetes could be controlled only through a diet
very low in carbohydrates, and this treatment was not always successful.
Then Sir Frederick G. Banting and Charles H. Best found a
way to prepare purified insulin from animal pancreases and gave it
to patients. This gave diabetics their first chance to live a fairly normal
life. Banting and his coworkers won the 1923 Nobel Prize in
Physiology or Medicine for their work.
The usual treatment for diabetics became regular shots of insulin.
Drug companies took the insulin from the pancreases of cattle and
pigs slaughtered by the meat-packing industry. Unfortunately, animal
insulin has two disadvantages. First, about 5 percent of diabetics
are allergic to it and can have severe reactions. Second, the world
supply of animal pancreases goes up and down depending on how
much meat is being bought. Between 1970 and 1975, the supply of
insulin fell sharply as people began to eat less red meat, yet the
numbers of diabetics continued to increase. So researchers began to
look for a better way to supply insulin.
Studying pancreases of people who had donated their bodies to
science, researchers found that human insulin did not cause allergic
reactions. Scientists realized that it would be best to find a chemical
or biological way to prepare human insulin, and pharmaceutical
companies worked hard toward this goal. Eli Lilly and Company
was the first to succeed, and on May 14, 1982, it filed a new drug application
with the Food and Drug Administration (FDA) for the human insulin preparation it named “Humulin.”
Humulin is made by genetic engineering. Irving S. Johnson, who
worked on the development of Humulin, described Eli Lilly’s method
for producing Humulin. The common bacterium Escherichia coli
is used. Two strains of the bacterium are produced by genetic engineering:
The first strain is used to make a protein called an “A
chain,” and the second strain is used to make a “B chain.” After the
bacteria are harvested, the A and B chains are removed and purified
separately. Then the two chains are combined chemically. When
they are purified once more, the result is Humulin, which has been
proved by Ronald E. Chance and his Eli Lilly coworkers to be chemically,
biologically, and physically identical to human insulin.
Consequences
The FDA and other regulatory agencies around the world approved
genetically engineered human insulin in 1982. Humulin
does not trigger allergic reactions, and its supply does not fluctuate.
It has brought an end to the fear that there would be a worldwide
shortage of insulin.
Humulin is important as well in being the first genetically engineered
industrial chemical. It began an era in which such advanced
technology could be a source for medical drugs, chemicals used in
farming, and other important industrial products. Researchers hope
that genetic engineering will help in the understanding of cancer
and other diseases, and that it will lead to ways to grow enough
food for a world whose population continues to rise.
29 June 2009
Genetic “fingerprinting”
The invention: A technique for using the unique characteristics of
each human being’s DNA to identify individuals, establish connections
among relatives, and identify criminals.
The people behind the invention:
Alec Jeffreys (1950- ), an English geneticist
Victoria Wilson (1950- ), an English geneticist
Swee Lay Thein (1951- ), a biochemical geneticist
Microscopic Fingerprints
In 1985, Alec Jeffreys, a geneticist at the University of Leicester in
England, developed a method of deoxyribonucleic acid (DNA)
analysis that provides a visual representation of the human genetic
structure. Jeffreys’s discovery had an immediate, revolutionary impact
on problems of human identification, especially the identification
of criminals. Whereas earlier techniques, such as conventional
blood typing, provide evidence that is merely exclusionary (indicating
only whether a suspect could or could not be the perpetrator of a
crime), DNA fingerprinting provides positive identification.
For example, under favorable conditions, the technique can establish
with virtual certainty whether a given individual is a murderer
or rapist. The applications are not limited to forensic science;
DNA fingerprinting can also establish definitive proof of parenthood
(paternity or maternity), and it is invaluable in providing
markers for mapping disease-causing genes on chromosomes. In
addition, the technique is utilized by animal geneticists to establish
paternity and to detect genetic relatedness between social groups.
DNA fingerprinting (also referred to as “genetic fingerprinting”)
is a sophisticated technique that must be executed carefully to produce
valid results. The technical difficulties arise partly from the
complex nature of DNA. DNA, the genetic material responsible for
heredity in all higher forms of life, is an enormously long, double-stranded
molecule composed of four different units called “bases.”
The bases on one strand of DNA pair with complementary bases on the other strand. A human being contains twenty-three pairs of
chromosomes; one member of each chromosome pair is inherited
from the mother, the other from the father. The order, or sequence, of
bases forms the genetic message, which is called the “genome.” Scientists
did not know the sequence of bases in any sizable stretch of
DNA prior to the 1970’s because they lacked the molecular tools to
split DNA into fragments that could be analyzed. This situation
changed with the advent of biotechnology in the mid-1970’s.
The door to DNA analysis was opened with the discovery of bacterial
enzymes called “DNA restriction enzymes.” A restriction enzyme
binds to DNA whenever it finds a specific short sequence of
base pairs (analogous to a code word), and it splits the DNA at a defined
site within that sequence. A single enzyme finds millions of
cutting sites in human DNA, and the resulting fragments range in
size from tens of base pairs to hundreds or thousands. The fragments
are separated by size and exposed to a radioactive DNA probe, which can bind to
specific complementary DNA sequences in the fragments. X-ray
film detects the radioactive pattern. The developed film, called an
“autoradiograph,” shows a pattern of DNA fragments, which is
similar to a bar code and can be compared with patterns from
known subjects.
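A toy sketch of the cutting step may help. The recognition sequence used here (GAATTC, the site of the widely used enzyme EcoRI) is real, but the DNA string is invented and the single-strand, single-point cutting is a simplification.

```python
def cut_at_sites(dna, site):
    """Split a DNA string wherever a recognition sequence occurs,
    returning the fragments (a toy model of a restriction digest)."""
    fragments = []
    current = ""
    i = 0
    while i < len(dna):
        if dna.startswith(site, i):
            # The enzyme cuts at a defined point within the site;
            # here we simply end the fragment after the site.
            fragments.append(current + site)
            current = ""
            i += len(site)
        else:
            current += dna[i]
            i += 1
    if current:
        fragments.append(current)
    return fragments

# Invented sequence, purely for illustration.
print(cut_at_sites("ATTGAATTCGGCCGAATTCTTAA", "GAATTC"))
# -> ['ATTGAATTC', 'GGCCGAATTC', 'TTAA']
```

The differing fragment lengths are what the radioactive probe and X-ray film ultimately turn into the bar-code-like band pattern.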
The Presence of Minisatellites
The uniqueness of a DNA fingerprint depends on the fact that,
with the exception of identical twins, no two human beings have
identical DNA sequences. Of the three billion base pairs in human
DNA, many will differ from one person to another.
In 1985, Jeffreys and his coworkers, Victoria Wilson at the University
of Leicester and Swee Lay Thein at the John Radcliffe Hospital
in Oxford, discovered a way to produce a DNA fingerprint.
Jeffreys had found previously that human DNA contains many repeated
minisequences called “minisatellites.” Minisatellites consist
of sequences of base pairs repeated in tandem, and the number of
repeated units varies widely from one individual to another. Every
person, with the exception of identical twins, has a different number
of tandem repeats and, hence, different lengths of minisatellite
DNA. By using two labeled DNA probes to detect two different minisatellite sequences, Jeffreys obtained a unique fragment band
pattern that was completely specific for an individual.
The power of the technique derives from the law of chance,
which indicates that the probability (chance) that two or more unrelated
events will occur simultaneously is calculated as the multiplication
product of the two separate probabilities. As Jeffreys discovered,
the likelihood of two unrelated people having completely
identical DNA fingerprints is extremely small—less than one in ten
trillion. Given the population of the world, it is clear that the technique
can distinguish any one person from everyone else. Jeffreys
called his band patterns “DNA fingerprints” because of their ability
to individualize. As he stated in his landmark research paper, published
in the English scientific journal Nature in 1985, probes to
minisatellite regions of human DNA produce “DNA ‘fingerprints’
which are completely specific to an individual (or to his or her identical
twin) and can be applied directly to problems of human identification,
including parenthood testing.”
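A small worked example of the multiplication rule described above; the per-band match probability and the number of bands are illustrative assumptions, not Jeffreys's actual figures.

```python
# Multiplication rule for independent events: if each band matches an
# unrelated person with probability p, the chance that all n bands
# match by coincidence is p ** n.  Numbers below are illustrative only.
p_single_band = 0.25   # assumed chance that one band matches by chance
n_bands = 22           # assumed number of scored bands

p_all_match = p_single_band ** n_bands
print(f"Chance of a full coincidental match: about 1 in {1 / p_all_match:,.0f}")
```

With these made-up numbers the odds already fall to roughly one in eighteen trillion, the same order of magnitude as the figure quoted above.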
Consequences
In addition to being used in human identification, DNA fingerprinting
has found applications in medical genetics. In the search
for a cause, a diagnostic test for, and ultimately the treatment of an
inherited disease, it is necessary to locate the defective gene on a human
chromosome. Gene location is accomplished by a technique
called “linkage analysis,” in which geneticists use marker sections
of DNA as reference points to pinpoint the position of a defective
gene on a chromosome. The minisatellite DNA probes developed
by Jeffreys provide a potent and valuable set of markers that are of
great value in locating disease-causing genes. Soon after its discovery,
DNA fingerprinting was used to locate the defective genes responsible
for several diseases, including fetal hemoglobin abnormality
and Huntington’s disease.
Genetic fingerprinting also has had a major impact on genetic
studies of higher animals. Because DNA sequences are conserved in
evolution, humans and other vertebrates have many sequences in
common. This commonality enabled Jeffreys to use his probes to
human minisatellites to bind to the DNA of many different vertebrates, ranging from mammals to birds, reptiles, amphibians, and
fish; this made it possible for him to produce DNA fingerprints of
these vertebrates. In addition, the technique has been used to discern
the mating behavior of birds, to determine paternity in zoo primates,
and to detect inbreeding in imperiled wildlife. DNA fingerprinting
can also be applied to animal breeding problems, such as
the identification of stolen animals, the verification of semen samples
for artificial insemination, and the determination of pedigree.
The technique is not foolproof, however, and results may be far
from ideal. Especially in the area of forensic science, there was a
rush to use the tremendous power of DNA fingerprinting to identify
a purported murderer or rapist, and the need for scientific standards
was often neglected. Some problems arose because forensic
DNA fingerprinting in the United States is generally conducted in
private, unregulated laboratories. In the absence of rigorous scientific
controls, the DNA fingerprint bands of two completely unknown
samples cannot be matched precisely, and the results may be
unreliable.
Geiger counter
The invention: The first electronic device able to detect and measure
radioactivity in atomic particles.
The people behind the invention:
Hans Geiger (1882-1945), a German physicist
Ernest Rutherford (1871-1937), a British physicist
Sir John Sealy Edward Townsend (1868-1957), an Irish physicist
Sir William Crookes (1832-1919), an English physicist
Wilhelm Conrad Röntgen (1845-1923), a German physicist
Antoine-Henri Becquerel (1852-1908), a French physicist
Discovering Natural Radiation
When radioactivity was discovered and first studied, the work
was done with rather simple devices. In the 1870’s, Sir William
Crookes learned how to create a very good vacuum in a glass tube.
He placed electrodes in each end of the tube and studied the passage
of electricity through the tube. This simple device became known as
the “Crookes tube.” In 1895, Wilhelm Conrad Röntgen was experimenting
with a Crookes tube. It was known that when electricity
went through a Crookes tube, one end of the glass tube might glow.
Certain mineral salts placed near the tube would also glow. In order
to observe carefully the glowing salts, Röntgen had darkened the
room and covered most of the Crookes tube with dark paper. Suddenly,
a flash of light caught his eye. It came from a mineral sample
placed some distance from the tube and shielded by the dark paper;
yet when the tube was switched off, the mineral sample went dark.
Experimenting further, Röntgen became convinced that some ray
from the Crookes tube had penetrated the mineral and caused it to
glow. Since light rays were blocked by the black paper, he called the
mystery ray an “X ray,” with “X” standing for unknown.
Antoine-Henri Becquerel heard of the discovery of X rays and, in
February, 1896, set out to discover if glowing minerals themselves
emitted X rays. Some minerals, called “phosphorescent,” begin to
glow when activated by sunlight. Becquerel’s experiment involved wrapping photographic film in black paper and setting various
phosphorescent minerals on top and leaving them in the sun. He
soon learned that phosphorescent minerals containing uranium
would expose the film.
A series of cloudy days, however, brought a great surprise. Anxious
to continue his experiments, Becquerel decided to develop film
that had not been exposed to sunlight. He was astonished to discover
that the film was deeply exposed. Some emanations must be
coming from the uranium, he realized, and they had nothing to do
with sunlight. Thus, natural radioactivity was discovered by accident
with a simple piece of photographic film.
Rutherford and Geiger
Ernest Rutherford joined the world of international physics at
about the same time that radioactivity was discovered. Studying the
“Becquerel rays” emitted by uranium, Rutherford eventually distinguished
three different types of radiation, which he named “alpha,”
“beta,” and “gamma” after the first three letters of the Greek alphabet.
He showed that alpha particles, the least penetrating of the three, are
the nuclei of helium atoms (a group of two protons and two neutrons
tightly bound together). It was later shown that beta particles are electrons.
Gamma rays, which are far more penetrating than either alpha
or beta particles, were shown to be similar to X rays, but with higher
energies.
Rutherford became director of the associated research laboratory
at Manchester University in 1907. Hans Geiger became an assistant.
At this time, Rutherford was trying to prove that alpha particles
carry a double positive charge. The best way to do this was to measure
the electric charge that a stream of alpha particles would bring
to a target. By dividing that charge by the total number of alpha particles
that fell on the target, one could calculate the charge of a single
alpha particle. The problem lay in counting the particles and in
proving that every particle had been counted.
Basing their design upon work done by Sir John Sealy Edward
Townsend, a former colleague of Rutherford, Geiger and Rutherford
constructed an electronic counter. It consisted of a long brass
tube sealed at both ends from which most of the air had been pumped. A thin wire, insulated from the brass, was suspended
down the middle of the tube. This wire was connected to batteries
producing about thirteen hundred volts and to an electrometer, a
device that could measure the voltage of the wire. This voltage
could be increased until a spark jumped between the wire and the
tube. If the voltage was turned down a little, the tube was ready to
operate. An alpha particle entering the tube would ionize (knock
some electrons away from) at least a few atoms. These electrons
would be accelerated by the high voltage and, in turn, would ionize
more atoms, freeing more electrons. This process would continue
until an avalanche of electrons struck the central wire and the electrometer
registered the voltage change. Since the tube was nearly
ready to arc because of the high voltage, every alpha particle, even if
it had very little energy, would initiate a discharge. The most complex
of the early radiation detection devices—the forerunner of the
Geiger counter—had just been developed. The two physicists reported
their findings in February, 1908.
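The discharge process described above amounts to exponential multiplication of the first few electrons freed by the alpha particle. The sketch below illustrates that growth in the simplest terms, assuming a constant Townsend ionization coefficient and purely illustrative numbers; the real counter had a strongly non-uniform field around its central wire, so this is a back-of-the-envelope picture rather than a model of the 1908 instrument.

import math

def avalanche_electrons(seed_electrons, alpha_per_cm, drift_cm):
    # Townsend growth law: n(d) = n0 * exp(alpha * d), where alpha is the
    # number of ionizing collisions each electron makes per centimeter.
    return seed_electrons * math.exp(alpha_per_cm * drift_cm)

# Illustrative values only (assumptions, not measurements from the counter):
seed = 5        # electrons knocked loose directly by one alpha particle
alpha = 8.0     # ionizing collisions per electron per centimeter of drift
drift = 2.5     # centimeters of gas between the particle track and the wire

print(f"{avalanche_electrons(seed, alpha, drift):.2e} electrons reach the wire")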
Impact
Their first measurements showed that one gram of radium
emitted 34 thousand million alpha particles per second. Soon, the
number was refined to 32.8 thousand million per second. Next,
Geiger and Rutherford measured the amount of charge emitted
by radium each second. Dividing this number by the previous
number gave them the charge on a single alpha particle. Just as
Rutherford had anticipated, the charge was double that of a hydrogen
ion (a proton). This proved to be the most accurate determination
of the fundamental charge until the American physicist
Robert Andrews Millikan conducted his classic oil-drop experiment
in 1911.
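The determination itself is a single division: the charge arriving at the target each second, divided by the number of alpha particles arriving each second, gives the charge of one particle. The figures below use the emission rate quoted above together with the modern value of the elementary charge; the current is back-calculated to be consistent with those numbers, so it stands in for the historical measurement rather than reproducing it.

ELEMENTARY_CHARGE = 1.602e-19   # coulombs (modern value)

alphas_per_second = 32.8e9      # emission rate quoted above for one gram of radium
measured_current = 1.05e-8      # amperes; illustrative value consistent with
                                # a doubly charged particle, not Geiger's figure

charge_per_alpha = measured_current / alphas_per_second
print(charge_per_alpha)                       # about 3.2e-19 coulomb
print(charge_per_alpha / ELEMENTARY_CHARGE)   # about 2, a double positive charge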
Another fundamental result came from a careful measurement of
the volume of helium emitted by radium each second. Using that
value, other properties of gases, and the number of helium nuclei
emitted each second, they were able to calculate Avogadro’s number
more directly and accurately than had previously been possible.
(Avogadro’s number enables one to calculate the number of atoms
in a given amount of material.)
The true Geiger counter evolved when Geiger replaced the central
wire of the tube with a needle whose point lay just inside a thin
entrance window. This counter was much more sensitive to alpha
and beta particles and also to gamma rays. By 1928, with the assistance
of Walther Müller, Geiger made his counter much more efficient,
responsive, durable, and portable. There are probably few radiation
facilities in the world that do not have at least one Geiger
counter or one of its compact modern relatives.
26 June 2009
Gas-electric car
The invention:
A hybrid automobile with both an internal combustion engine and an electric motor.
The people behind the invention:
Victor Wouk - an American engineer
Tom Elliott - executive vice president of American Honda Motor Company
Hiroyuki Yoshino - president and chief executive officer of Honda Motor Company
Fujio Cho - president of Toyota Motor Corporation
23 June 2009
Fuel cell
The invention: An electrochemical cell that directly converts energy
from reactions between oxidants and fuels, such as liquid
hydrogen, into electrical energy.
The people behind the invention:
Francis Thomas Bacon (1904-1992), an English engineer
Sir William Robert Grove (1811-1896), an English inventor
Georges Leclanché (1839-1882), a French engineer
Alessandro Volta (1745-1827), an Italian physicist
The Earth’s Resources
Because of the earth’s rapidly increasing population and the
dwindling of fossil fuels (natural gas, coal, and petroleum), there is
a need to design and develop new ways to obtain energy and to encourage
its intelligent use. The burning of fossil fuels to create energy
causes a slow buildup of carbon dioxide in the atmosphere,
creating pollution that poses many problems for all forms of life on
this planet. Chemical and electrical studies can be combined to create
electrochemical processes that yield clean energy.
Because of their very high rate of efficiency and their nonpolluting
nature, fuel cells may provide the solution to the problem of
finding sufficient energy sources for humans. The simple reaction of
hydrogen and oxygen to form water in such a cell can provide an
enormous amount of clean (nonpolluting) energy. Moreover, hydrogen
and oxygen are readily available.
Studies by Alessandro Volta, Georges Leclanché, and William
Grove preceded the work of Bacon in the development of the fuel
cell. Bacon became interested in the idea of a hydrogen-oxygen fuel
cell in about 1932. His original intent was to develop a fuel cell that
could be used in commercial applications.
The Fuel Cell Emerges
In 1800, the Italian physicist Alessandro Volta experimented
with solutions of chemicals and metals that were able to conduct electricity. He found that two pieces of metal and such a solution
could be arranged in such a way as to produce an electric current.
His creation was the first electrochemical battery, a device that produced
energy from a chemical reaction. Studies in this area were
continued by various people, and in the late nineteenth century,
Georges Leclanché invented the dry cell battery, which is now commonly
used.
The work of William Grove actually preceded that of Leclanché. His first
significant contribution was the Grove cell, an improved form of the
cells described above, which became very popular. Grove experimented
with various forms of batteries and eventually invented the
“gas battery,” which was actually the earliest fuel cell. It is worth
noting that his design incorporated separate test tubes of hydrogen
and oxygen, which he placed over strips of platinum.
After studying the design of Grove’s fuel cell, Bacon decided
that, for practical purposes, the use of platinum and other precious
metals should be avoided. By 1939, he had constructed a cell in
which nickel replaced the platinum used.
The theory behind the fuel cell can be described in the following
way. If a mixture of hydrogen and oxygen is ignited, energy is released
in the form of a violent explosion. In a fuel cell, however, the
reaction takes place in a controlled manner. Electrons lost by the hydrogen
gas flow out of the fuel cell and return to be taken up by the
oxygen in the cell. The electron flow provides electricity to any device
that is connected to the fuel cell, and the water that the fuel cell
produces can be purified and used for drinking.
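The useful work available from this controlled reaction can be estimated from standard thermodynamic data. The sketch below uses the textbook Gibbs free energy for forming liquid water, a value not given in this article, to arrive at the familiar ideal cell voltage of roughly 1.2 volts.

# Reversible voltage of a hydrogen-oxygen cell: E = -dG / (n * F),
# for the overall reaction 2 H2 + O2 -> 2 H2O.
FARADAY = 96485.0            # coulombs per mole of electrons
delta_g = -237.1e3           # joules per mole of liquid water (standard conditions)
electrons_per_water = 2      # each hydrogen molecule gives up two electrons

cell_voltage = -delta_g / (electrons_per_water * FARADAY)
print(f"ideal cell voltage = {cell_voltage:.2f} V")   # roughly 1.23 V

Because a single cell delivers only about one volt, practical units such as Bacon's forty-cell stack mentioned later in this article combine many cells in series.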
Bacon’s studies were interrupted by World War II. After the war
was over, however, Bacon continued his work. Sir Eric Keightley
Rideal of Cambridge University in England supported Bacon’s
studies; later, others followed suit. In January, 1954, Bacon wrote an
article entitled “Research into the Properties of the Hydrogen/Oxygen
Fuel Cell” for a British journal. He was surprised at the speed
with which news of the article spread throughout the scientific
world, particularly in the United States.
After a series of setbacks, Bacon demonstrated a forty-cell unit
that had increased power. This advance showed that the fuel cell
was not merely an interesting toy; it had the capacity to do useful
work. At this point, the General Electric Company (GE), an American corporation, sent a representative to England to offer employment
in the United States to senior members of Bacon’s staff. Three scientists
accepted the offer.
A high point in Bacon’s career was the announcement that the
American Pratt and Whitney Aircraft company had obtained an order
to build fuel cells for the Apollo project, which ultimately put
two men on the Moon in 1969. Toward the end of his career in 1978,
Bacon hoped that commercial applications for his fuel cells would
be found.
Impact
Because they are lighter and more efficient than batteries, fuel
cells have proved to be useful in the space program. Beginning with
the Gemini 5 spacecraft, alkaline fuel cells (in which a water solution
of potassium hydroxide, a basic, or alkaline, chemical, serves as the electrolyte)
have been used for more than ten thousand hours in space. The fuel
cells used aboard the space shuttle deliver the same amount of power
as batteries weighing ten times as much. On a typical seven-day
mission, the shuttle’s fuel cells consume 680 kilograms (1,500 pounds)
of hydrogen and generate 719 liters (190 gallons) of water that can
be used for drinking.
Major technical and economic problems must be overcome in order
to design fuel cells for practical applications, but some important
advancements have been made. A few test vehicles that use fuel cells as a source of power have been constructed. Fuel cells using
hydrogen as a fuel and oxygen to burn the fuel have been used in a
van built by General Motors Corporation. Thirty-two fuel cells are
installed below the floorboards, and tanks of liquid oxygen are carried
in the back of the van. A power plant built in New York City
contains stacks of hydrogen-oxygen fuel cells, which can be put on
line quickly in response to power needs. The Sanyo Electric Company
has developed an electric car that is partially powered by a
fuel cell.
These tremendous technical advances are the result of the single-minded
dedication of Francis Thomas Bacon, who struggled all of
his life with an experiment he was convinced would be successful.
Freeze-drying
The invention:
Method for preserving foods and other organic matter by freezing them and using a vacuum to remove their water content without damaging their solid matter.
The people behind the invention:
Earl W. Flosdorf (1904- ), an American physician
Ronald I. N. Greaves (1908- ), an English pathologist
Jacques Arsène d’Arsonval (1851-1940), a French physicist
FORTRAN programming language
The invention: The first major computer programming language,
FORTRAN supported programming in a mathematical language
that was natural to scientists and engineers and achieved unsurpassed
success in scientific computation.
The people behind the invention:
John Backus (1924- ), an American software engineer and
manager
John W. Mauchly (1907-1980), an American physicist and
engineer
Herman Heine Goldstine (1913- ), a mathematician and
computer scientist
John von Neumann (1903-1957), a Hungarian American
mathematician and physicist
Talking to Machines
Formula Translation, or FORTRAN—the first widely accepted
high-level computer language—was completed by John Backus
and his coworkers at the International Business Machines (IBM)
Corporation in April, 1957. Designed to support programming
in a mathematical language that was natural to scientists and engineers,
FORTRAN achieved unsurpassed success in scientific
computation.
Computer languages are means of specifying the instructions
that a computer should execute and the order of those instructions.
Computer languages can be divided into categories of progressively
higher degrees of abstraction. At the lowest level is binary
code, or machine code: Binary digits, or “bits,” specify in
complete detail every instruction that the machine will execute.
This was the only language available in the early days of computers,
when such machines as the ENIAC (Electronic Numerical Integrator
and Calculator) required hand-operated switches and
plugboard connections. All higher levels of language are implemented by having a program translate instructions written in the
higher language into binary machine language (also called “object
code”). High-level languages (also called “programming languages”)
are largely or entirely independent of the underlying
machine structure. FORTRAN was the first language of this type
to win widespread acceptance.
The emergence of machine-independent programming languages
was a gradual process that spanned the first decade of electronic
computation. One of the earliest developments was the invention of
“flowcharts,” or “flow diagrams,” by Herman Heine Goldstine and
John von Neumann in 1947. Flowcharting became the most influential
software methodology during the first twenty years of
computing.
Short Code was the first language to be implemented that contained
some high-level features, such as the ability to use mathematical
equations. The idea came from John W. Mauchly, and it was
implemented on the BINAC (Binary Automatic Computer) in 1949
with an “interpreter”; later, it was carried over to the UNIVAC (Universal
Automatic Computer) I. Interpreters are programs that do
not translate commands into a series of object-code instructions; instead,
they directly execute (interpret) those commands. Every time
the interpreter encounters a command, that command must be interpreted
again. “Compilers,” however, convert the entire command
into object code before it is executed.
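That distinction can be mimicked with a toy example. The sketch below uses Python's built-in compile and eval purely as modern stand-ins for the interpreters and compilers being described: the interpreted path re-analyzes the command text on every execution, while the compiled path translates it once and reuses the result.

# A command written in a "higher" language than machine code:
command = "3.0 * x ** 2 + 2.0 * x + 1.0"

def run_interpreted(source, x):
    # Interpreter-style: the text is parsed again every time it is executed.
    return eval(source, {"x": x})

# Compiler-style: translate the text once, then execute the translated form.
translated = compile(command, "<formula>", "eval")

def run_compiled(code_object, x):
    return eval(code_object, {"x": x})   # no re-parsing on later calls

for x in (0.0, 1.0, 2.0):
    assert run_interpreted(command, x) == run_compiled(translated, x)
print("same results, computed two ways")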
Much early effort went into creating ways to handle commonly
encountered problems—particularly scientific mathematical
calculations. A number of interpretive languages arose to
support these features. As long as such complex operations had
to be performed by software (computer programs), however, scientific
computation would be relatively slow. Therefore, Backus
lobbied successfully for a direct hardware implementation of these
operations on IBM’s new scientific computer, the 704. Backus then
started the Programming Research Group at IBM in order to develop
a compiler that would allow programs to be written in a
mathematically oriented language rather than a machine-oriented
language. In November of 1954, the group defined an initial version
of FORTRAN.
A More Accessible Language
Before FORTRAN was developed, a computer had to perform a
whole series of tasks to make certain types of mathematical calculations.
FORTRAN made it possible for the same calculations to be
performed much more easily. In general, FORTRAN supported constructs
with which scientists were already acquainted, such as functions
and multidimensional arrays. In defining a powerful notation
that was accessible to scientists and engineers, FORTRAN opened
up programming to a much wider community.
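No FORTRAN listings appear in this article, but the flavor of what it offered can be suggested in a modern language. The fragment below, written in Python as a stand-in, shows the two constructs just mentioned, a user-defined function evaluating a formula and a two-dimensional array, written much as a scientist would write them on paper.

import math

def larger_root(a, b, c):
    # The larger root of a*x**2 + b*x + c = 0, assuming real solutions exist.
    return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

# A two-dimensional array indexed by row and column.
grid = [[1.0, 2.0, 3.0],
        [4.0, 5.0, 6.0]]
row_sums = [sum(row) for row in grid]

print(larger_root(1.0, -3.0, 2.0))   # 2.0
print(row_sums)                      # [6.0, 15.0]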
Backus’s success in getting the IBM 704’s hardware to support
scientific computation directly, however, posed a major challenge:
Because such computation would be much faster, the object code
produced by FORTRAN would also have to be much faster. The
lower-level compilers preceding FORTRAN produced programs
that were usually five to ten times slower than their hand-coded
counterparts; therefore, efficiency became the primary design objective
for Backus. The highly publicized claims for FORTRAN met
with widespread skepticism among programmers. Much of the
team’s efforts, therefore, went into discovering ways to produce the
most efficient object code.
The efficiency of the compiler produced by Backus, combined
with its clarity and ease of use, guaranteed the system’s success. By
1959, many IBM 704 users programmed exclusively in FORTRAN.
By 1963, virtually every computer manufacturer either had delivered
or had promised a version of FORTRAN.
Incompatibilities among manufacturers were minimized by the
popularity of IBM’s version of FORTRAN; every company wanted
to be able to support IBM programs on its own equipment. Nevertheless,
there was sufficient interest in obtaining a standard for
FORTRAN that the American National Standards Institute adopted
a formal standard for it in 1966. A revised standard was adopted in
1978, yielding FORTRAN 77.
Consequences
In demonstrating the feasibility of efficient high-level languages,
FORTRAN inaugurated a period of great proliferation of programming languages. Most of these languages attempted to provide similar
or better high-level programming constructs oriented toward a
different, nonscientific programming environment. COBOL, for example,
stands for “Common Business Oriented Language.”
FORTRAN, while remaining the dominant language for scientific
programming, has not found general acceptance among nonscientists.
An IBM project established in 1963 to extend FORTRAN
found the task too unwieldy and instead ended up producing an entirely
different language, PL/I, which was delivered in 1966. In the
beginning, Backus and his coworkers believed that their revolutionary
language would virtually eliminate the burdens of coding and
debugging. Instead, FORTRAN launched software as a field of
study and an industry in its own right.
In addition to stimulating the introduction of new languages,
FORTRAN encouraged the development of operating systems. Programming
languages had already grown into simple operating systems
called “monitors.” Operating systems since then have been
greatly improved so that they support, for example, simultaneously
active programs (multiprogramming) and the networking (combining)
of multiple computers.
21 June 2009
Food freezing
The invention: It was long known that low temperatures helped to
protect food against spoiling; the invention that made frozen
food practical was a method of freezing items quickly. Clarence
Birdseye’s quick-freezing technique made possible a revolution
in food preparation, storage, and distribution.
The people behind the invention:
Clarence Birdseye (1886-1956), a scientist and inventor
Donald K. Tressler (1894-1981), a researcher at Cornell
University
Amanda Theodosia Jones (1835-1914), a food-preservation
pioneer
Feeding the Family
In 1917, Clarence Birdseye developed a means of quick-freezing
meat, fish, vegetables, and fruit without substantially changing
their original taste. His system of freezing was called by Fortune
magazine “one of the most exciting and revolutionary ideas in the
history of food.” Birdseye went on to refine and perfect his method
and to promote the frozen foods industry until it became a commercial
success nationwide.
It was during a trip to Labrador, where he worked as a fur trader,
that Birdseye was inspired by this idea. Birdseye’s new wife and
five-week-old baby had accompanied him there. In order to keep
his family well fed, he placed barrels of fresh cabbages in salt water
and then exposed the vegetables to freezing winds. Successful at
preserving vegetables, he went on to freeze a winter’s supply of
ducks, caribou, and rabbit meat.
In the following years, Birdseye experimented with many freezing
techniques. His equipment was crude: an electric fan, ice, and salt
water. His earliest experiments were on fish and rabbits, which he
froze and packed in old candy boxes. By 1924, he had borrowed
money against his life insurance and was lucky enough to find three
partners willing to invest in his new General Seafoods Company (later renamed General Foods), located in Gloucester, Massachusetts.
Although it was Birdseye’s genius that put the principles of
quick-freezing to work, he did not actually invent quick-freezing.
The scientific principles involved had been known for some time.
As early as 1842, a patent for freezing fish had been issued in England.
Nevertheless, the commercial exploitation of the freezing
process could not have happened until the end of the 1800’s, when
mechanical refrigeration was invented. Even then, Birdseye had to
overcome major obstacles.
Finding a Niche
By the 1920’s, there still were few mechanical refrigerators in
American homes. It would take years before adequate facilities for
food freezing and retail distribution would be established across the
United States. By the late 1930’s, frozen foods had, indeed, found their
role in commerce but still could not compete with canned or fresh
foods. Birdseye had to work tirelessly to promote the industry, writing
and delivering numerous lectures and articles to advance its
popularity. His efforts were helped by scientific research conducted
at Cornell University by Donald K. Tressler and by C. R. Fellers of
what was then Massachusetts State College. Also, during World
War II (1939-1945), more Americans began to accept the idea: Rationing,
combined with a shortage of canned foods, contributed to
the demand for frozen foods. The armed forces made large purchases
of these items as well.
General Foods was the first to use a system of extremely rapid
freezing of perishable foods in packages. Under the Birdseye system,
fresh foods, such as berries or lobster, were packaged snugly in convenient
square containers. Then, the packages were pressed between
refrigerated metal plates under pressure at 50 degrees below zero.
Two types of freezing machines were used. The “double belt” freezer
consisted of two metal belts that moved through a 15-meter freezing
tunnel, while a special salt solution was sprayed on the surfaces of
the belts. This double-belt freezer was used only in permanent installations
and was soon replaced by the “multiplate” freezer, which was
portable and required only 11.5 square meters of floor space compared
to the double belt’s 152 square meters.
The multiplate freezer also made it possible to apply the technique
of quick-freezing to seasonal crops. People were able to transport
these freezers easily from one harvesting field to another,
where they were used to freeze crops such as peas fresh off the vine.
The handy multiplate freezer consisted of an insulated cabinet
equipped with refrigerated metal plates. Stacked one above the
other, these plates were capable of being opened and closed to receive
food products and to compress them with evenly distributed
pressure. Each aluminum plate had internal passages through which
ammonia flowed and expanded at a temperature of -3.8 degrees
Celsius, thus causing the foods to freeze.
A major benefit of the new frozen foods was that their taste and vitamin content were not lost. Ordinarily, when food is frozen
slowly, ice crystals form, which slowly rupture food cells, thus altering
the taste of the food. With quick-freezing, however, the food
looks, tastes, and smells like fresh food. Quick-freezing also cuts
down on bacteria.
Impact
During the months between one food harvest and the next, humankind
requires trillions of pounds of food to survive. In many
parts of the world, an adequate supply of food is available; elsewhere,
much food goes to waste and many go hungry. Methods of
food preservation such as those developed by Birdseye have done
much to help those who cannot obtain proper fresh foods. Preserving
perishable foods also means that they will be available in
greater quantity and variety all year-round. In all parts of the world,
both tropical and arctic delicacies can be eaten in any season of the
year.
With the rise in popularity of frozen “fast” foods, nutritionists
began to study their effect on the human body. Research has shown
that fresh is the most beneficial. In an industrial nation with many
people, the distribution of fresh commodities is, however, difficult.
It may be many decades before scientists know the long-term effects
on generations raised primarily on frozen foods.
FM radio
The invention: A method of broadcasting radio signals by modulating
the frequency, rather than the amplitude, of radio waves,
FM radio greatly improved the quality of sound transmission.
The people behind the invention:
Edwin H. Armstrong (1890-1954), the inventor of FM radio
broadcasting
David Sarnoff (1891-1971), the founder of RCA
An Entirely New System
Because early radio broadcasts used amplitude modulation (AM)
to transmit their sounds, they were subject to a sizable amount of interference
and static. Since good AM reception relies on the amount
of energy transmitted, energy sources in the atmosphere between
the station and the receiver can distort or weaken the original signal.
This is particularly irritating for the transmission of music.
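The contrast between the two schemes can be written down directly: amplitude modulation varies the strength of the carrier with the program signal, while frequency modulation varies the carrier's instantaneous frequency and leaves its strength alone. The sketch below uses made-up carrier, tone, and deviation figures simply to make that contrast concrete; it is not a description of any broadcast standard.

import math

CARRIER_HZ = 1000.0      # illustrative carrier frequency
TONE_HZ = 50.0           # illustrative program (audio) tone
SAMPLE_RATE = 20000.0    # samples per second
FM_DEVIATION_HZ = 200.0  # peak frequency swing for the FM case

def am_sample(t, depth=0.5):
    # Amplitude modulation: the carrier's envelope follows the program signal.
    program = math.cos(2.0 * math.pi * TONE_HZ * t)
    return (1.0 + depth * program) * math.cos(2.0 * math.pi * CARRIER_HZ * t)

def fm_samples(count):
    # Frequency modulation: the carrier's phase advances at a varying rate.
    phase, samples = 0.0, []
    for i in range(count):
        t = i / SAMPLE_RATE
        program = math.cos(2.0 * math.pi * TONE_HZ * t)
        instantaneous_hz = CARRIER_HZ + FM_DEVIATION_HZ * program
        phase += 2.0 * math.pi * instantaneous_hz / SAMPLE_RATE
        samples.append(math.cos(phase))
    return samples

print(am_sample(0.001))
print(fm_samples(5))

Atmospheric noise mostly disturbs a signal's amplitude, which an FM receiver can largely ignore; that is the root of the static-free reception Armstrong later demonstrated.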
Edwin H. Armstrong provided a solution to this technological
constraint. A graduate of Columbia University, Armstrong made a
significant contribution to the development of radio with his basic
inventions for circuits for AM receivers. (Indeed, the monies Armstrong
received from his earlier inventions financed the development
of the frequency modulation, or FM, system.) Armstrong was
one among many contributors to AM radio. For FM broadcasting,
however, Armstrong must be ranked as the most important inventor.
During the 1920’s, Armstrong established his own research laboratory
in Alpine, New Jersey, across the Hudson River from New
York City. With a small staff of dedicated assistants, he carried out
research on radio circuitry and systems for nearly three decades. At
that time, Armstrong also began to teach electrical engineering at
Columbia University.
From 1928 to 1933, Armstrong worked diligently at his private
laboratory at Columbia University to construct a working model of
an FM radio broadcasting system. With the primitive limitations
then imposed on the state of vacuum tube technology, a number of Armstrong’s experimental circuits required as many as one hundred
tubes. Between July, 1930, and January, 1933, Armstrong filed
four basic FM patent applications. All were granted simultaneously
on December 26, 1933.
Armstrong sought to perfect FM radio broadcasting, not to offer
radio listeners better musical reception but to create an entirely
new radio broadcasting system. On November 5, 1935, Armstrong
made his first public demonstration of FM broadcasting in New
York City to an audience of radio engineers. An amateur station
based in suburban Yonkers, New York, transmitted these first signals.
The scientific world began to consider the advantages and
disadvantages of Armstrong’s system; other laboratories began to
craft their own FM systems.
Corporate Conniving
Because Armstrong had no desire to become a manufacturer or
broadcaster, he approached David Sarnoff, head of the Radio Corporation
of America (RCA). As the owner of the top manufacturer
of radio sets and the top radio broadcasting network, Sarnoff was
interested in all advances of radio technology. Armstrong first demonstrated
FM radio broadcasting for Sarnoff in December, 1933.
This was followed by visits from RCA engineers, who were sufficiently
impressed to recommend to Sarnoff that the company conduct
field tests of the Armstrong system.
In 1934, Armstrong, with the cooperation of RCA, set up a test
transmitter at the top of the Empire State Building, sharing facilities
with the experimental RCA television transmitter. From 1934 through
1935, tests were conducted using the Empire State facility, to mixed
reactions of RCA’s best engineers. AM radio broadcasting already
had a performance record of nearly two decades. The engineers
wondered if this new technology could replace something that had
worked so well.
This less-than-enthusiastic evaluation fueled the skepticism of
RCA lawyers and salespeople. RCA had too much invested in the
AM system, both as a leading manufacturer and as the dominant
owner of the major radio network of the time, the National Broadcasting
Company (NBC). Sarnoff was in no rush to adopt FM. To change systems would risk the millions of dollars RCA was making
as America emerged from the Great Depression.
In 1935, Sarnoff advised Armstrong that RCA would cease any
further research and development activity in FM radio broadcasting.
(Still, engineers at RCA laboratories continued to work on FM
to protect the corporate patent position.) Sarnoff declared to the
press that his company would push the frontiers of broadcasting by
concentrating on research and development of radio with pictures,
that is, television. As a tangible sign, Sarnoff ordered that Armstrong’s
FM radio broadcasting tower be removed from the top of
the Empire State Building.
Armstrong was outraged. By the mid-1930’s, the development of
FM radio broadcasting had become a mission for Armstrong. For
the remainder of his life, Armstrong devoted his considerable talents
to the promotion of FM radio broadcasting.
Impact
After the break with Sarnoff, Armstrong proceeded with plans to
develop his own FM operation. Allied with two of RCA’s biggest
manufacturing competitors, Zenith and General Electric, Armstrong
pressed ahead. In June of 1936, at a Federal Communications Commission
(FCC) hearing, Armstrong proclaimed that FM broadcasting
was the only static-free, noise-free, and uniform system—both
day and night—available. He argued, correctly, that AM radio broadcasting
had none of these qualities.
During World War II (1939-1945), Armstrong gave the military
permission to use FM with no compensation. That patriotic gesture
cost Armstrong millions of dollars when the military soon became
all FM. It did, however, expand interest in FM radio broadcasting.
World War II had provided a field test of equipment and use.
By the 1970’s, FM radio broadcasting had grown tremendously.
By 1972, one in three radio listeners tuned into an FM station some
time during the day. Advertisers began to use FM radio stations to
reach the young and affluent audiences that were turning to FM stations
in greater numbers.
By the late 1970’s, FM radio stations were outnumbering AM stations.
By 1980, nearly half of radio listeners tuned into FM stations on a regular basis. A decade later, FM radio listening accounted for
more than two-thirds of audience time. Armstrong’s predictions
that listeners would prefer the clear, static-free sounds offered by
FM radio broadcasting had come to pass by the mid-1980’s, nearly
fifty years after Armstrong had commenced his struggle to make
FM radio broadcasting a part of commercial radio.
Fluorescent lighting
The invention: A form of electrical lighting that uses a glass tube
coated with phosphor that gives off a cool bluish light and emits
ultraviolet radiation.
The people behind the invention:
Vincenzo Cascariolo (1571-1624), an Italian alchemist and
shoemaker
Heinrich Geissler (1814-1879), a German glassblower
Peter Cooper Hewitt (1861-1921), an American electrical
engineer
Celebrating the “Twelve Greatest Inventors”
On the night of November 23, 1936, more than one thousand industrialists,
patent attorneys, and scientists assembled in the main
ballroom of the Mayflower Hotel in Washington, D.C., to celebrate
the one hundredth anniversary of the U.S. Patent Office. A transport
liner over the city radioed the names chosen by the Patent Office as
America’s “Twelve Greatest Inventors,” and, as the distinguished
group strained to hear those names, “the room was flooded for a
moment by the most brilliant light yet used to illuminate a space
that size.”
Thus did The New York Times summarize the commercial introduction
of the fluorescent lamp. The twelve inventors present were
Thomas Alva Edison, Robert Fulton, Charles Goodyear, Charles
Hall, Elias Howe, Cyrus Hall McCormick, Ottmar Mergenthaler,
Samuel F. B. Morse, George Westinghouse, Wilbur Wright, and Eli
Whitney. There was, however, no name to bear the honor for inventing
fluorescent lighting. That honor is shared by many who participated
in a very long series of discoveries.
The fluorescent lamp operates as a low-pressure, electric discharge
inside a glass tube that contains a droplet of mercury and a
gas, commonly argon. The inside of the glass tube is coated with
fine particles of phosphor. When electricity is applied to the gas, the
mercury gives off a bluish light and emits ultraviolet radiation. When bathed in the strong ultraviolet radiation emitted by the mercury,
the phosphor fluoresces (emits light).
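The conversion works because an ultraviolet photon carries more energy than a visible one, so the phosphor can absorb the former and re-emit the latter. The comparison below assumes the 254-nanometer mercury resonance line, a standard figure for low-pressure mercury discharges that is not stated in this article.

PLANCK = 6.626e-34       # joule-seconds
LIGHT_SPEED = 2.998e8    # meters per second
JOULES_PER_EV = 1.602e-19

def photon_energy_ev(wavelength_nm):
    # Photon energy E = h * c / wavelength, expressed in electronvolts.
    return PLANCK * LIGHT_SPEED / (wavelength_nm * 1e-9) / JOULES_PER_EV

print(photon_energy_ev(254))   # about 4.9 eV, ultraviolet from the mercury discharge
print(photon_energy_ev(550))   # about 2.3 eV, green light near peak eye sensitivity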
The setting for the introduction of the fluorescent lamp began at
the beginning of the 1600’s, when Vincenzo Cascariolo, an Italian
shoemaker and alchemist, discovered a substance that gave off a
bluish glow in the dark after exposure to strong sunlight. The fluorescent
substance was apparently barium sulfide and was so unusual
for that time and so valuable that its formulation was kept secret
for a long time. Gradually, however, scholars became aware of
the preparation secrets of the substance and studied it and other luminescent
materials.
Further studies in fluorescent lighting were made by the German
physicist Johann Wilhelm Ritter. He observed the luminescence of
phosphors that were exposed to various “exciting” lights. In 1801,
he noted that some phosphors shone brightly when illuminated by
light that the eye could not see (ultraviolet light). Ritter thus discovered
the ultraviolet region of the light spectrum. The use of phosphors
to transform ultraviolet light into visible light was an important
step in the continuing development of the fluorescent lamp.
The British mathematician and physicist Sir George Gabriel Stokes
studied the phenomenon as well. It was he who, in 1852, termed the
afterglow “fluorescence.”
Geissler Tubes
While these advances were being made, other workers were trying
to produce a practical form of electric light. In 1706, the English
physicist Francis Hauksbee devised an electrostatic generator, which
is used to accelerate charged particles to very high levels of electrical
energy. He then connected the device to a glass “jar,” used a vacuum pump to evacuate the jar to a low pressure, and tested his
generator. In so doing, Hauksbee obtained the first human-made
electrical glow discharge by “capturing lightning” in a jar.
In 1854, Heinrich Geissler, a glassblower and apparatus maker,
opened his shop in Bonn, Germany, to make scientific instruments;
in 1855, he produced a vacuum pump that used liquid mercury as
an evacuation fluid. That same year, Geissler made the first gaseous
conduction lamps while working in collaboration with the German
scientist Julius Plücker. Plücker referred to these lamps as “Geissler
tubes.” Geissler was able to create red light with neon gas filling a
lamp and light of nearly all colors by using certain types of gas
within each of the lamps. Thus, both the neon sign business and the
science of spectroscopy were born.
Geissler tubes were studied extensively by a variety of workers.
At the beginning of the twentieth century, the practical American
engineer Peter Cooper Hewitt put these studies to use by marketing
the first low-pressure mercury vapor lamps. The lamps were quite
successful, although they required high voltage for operation, emitted
an eerie blue-green light, and shone dimly by comparison with their
eventual successor, the fluorescent lamp. At about the same time,
systematic studies of phosphors had finally begun.
By the 1920’s, a number of investigators had discovered that the
low-pressure mercury vapor discharge marketed by Hewitt was an
extremely efficient method for producing ultraviolet light, if the
mercury and rare gas pressures were properly adjusted. With a
phosphor to convert the ultraviolet light back to visible light, the
Hewitt lamp made an excellent light source.
Impact
The introduction of fluorescent lighting in 1936 presented the
public with a completely new form of lighting that had enormous
advantages of high efficiency, long life, and relatively low cost.
By 1938, production of fluorescent lamps was well under way. By
April, 1938, four sizes of fluorescent lamps in various colors had
been offered to the public and more than two hundred thousand
lamps had been sold.
During 1939 and 1940, two great expositions—the New York World’s Fair and the San Francisco International Exposition—
helped popularize fluorescent lighting. Thousands of tubular fluorescent
lamps formed a great spiral in the “motor display salon,”
the car showroom of the General Motors exhibit at the New York
World’s Fair. Fluorescent lamps lit the Polish Restaurant and hung
in vertical clusters on the flagpoles along the Avenue of the Flags at
the fair, while two-meter-long, upright fluorescent tubes illuminated
buildings at the San Francisco International Exposition.
When the United States entered World War II (1939-1945), the
demand for efficient factory lighting soared. In 1941, more than
twenty-one million fluorescent lamps were sold. Technical advances
continued to improve the fluorescent lamp. By the 1990’s,
this type of lamp supplied most of the world’s artificial lighting.