The invention: Early digital calculator designed to solve differential equations that was a forerunner of modern computers.
The people behind the invention:
Howard H. Aiken (1900-1973), Harvard University professor and architect of the Mark I
Clair D. Lake (1888-1958), a senior engineer at IBM
Francis E. Hamilton (1898-1972), an IBM engineer
Benjamin M. Durfee (1897-1980), an IBM engineer
The Human Computer
The physical world can be described by means of mathematics. In principle, one can accurately describe nature down to the smallest detail.
Mammography
The invention: The first X-ray procedure for detecting and diagnosing
breast cancer.
The people behind the invention:
Albert Salomon, the first researcher to use X-ray technology
instead of surgery to identify breast cancer
Jacob Gershon-Cohen (1899-1971), a breast cancer researcher
Studying Breast Cancer
Medical researchers have been studying breast cancer for more
than a century. At the end of the nineteenth century, however, no one
knew how to detect breast cancer until it was quite advanced. Often,
by the time it was detected, it was too late for surgery; many patients
who did have surgery died. So after X-ray technology first appeared
in 1896, cancer researchers were eager to experiment with it.
The first scientist to use X-ray techniques in breast cancer experiments
was Albert Salomon, a German surgeon. Trying to develop a
biopsy technique that could tell which tumors were cancerous and
thereby avoid unnecessary surgery, he X-rayed more than three
thousand breasts that had been removed from patients during breast
cancer surgery. In 1913, he published the results of his experiments,
showing that X rays could detect breast cancer. Different types of X-ray
images, he said, showed different types of cancer.
Though Salomon is recognized as the inventor of breast radiology,
he never actually used his technique to diagnose breast cancer.
In fact, breast cancer radiology, which came to be known as “mammography,”
was not taken up quickly by other medical researchers.
Those who did try to reproduce his research often found that their
results were not conclusive.
During the 1920’s, however, more research was conducted in Leipzig,
Germany, and in South America. Eventually, the Leipzig researchers,
led by Erwin Payr, began to use mammography to diagnose
cancer. In the 1930’s, a Leipzig researcher named W. Vogel
published a paper that accurately described differences between
cancerous and noncancerous tumors as they appeared on X-ray photographs. Researchers in the United States paid little attention to
mammography until 1926. That year, a physician in Rochester, New
York, was using a fluoroscope to examine heart muscle in a patient
and discovered that the fluoroscope could be used to make images of
breast tissue as well. The physician, Stafford L. Warren, then developed
a stereoscopic technique that he used in examinations before
surgery. Warren published his findings in 1930; his article also described
changes in breast tissue that occurred because of pregnancy,
lactation (milk production), menstruation, and breast disease. Yet
Warren’s technique was complicated and required equipment that
most physicians of the time did not have. Eventually, he lost interest
in mammography and went on to other research.
Using the Technique
In the late 1930’s, Jacob Gershon-Cohen became the first clinician
to advocate regular mammography for all women to detect breast
cancer before it became a major problem. Mammography was not
very expensive, he pointed out, and it was already quite accurate. A
milestone in breast cancer research came in 1956, when Gershon-
Cohen and others began a five-year study of more than 1,300 women
to test the accuracy of mammography for detecting breast cancer.
Each woman studied was screened once every six months. Of the
1,055 women who finished the study, 92 were diagnosed with benign
tumors and 23 with malignant tumors. Remarkably, out of all
these, only one diagnosis turned out to be wrong.
During the same period, Robert Egan of Houston began tracking
breast cancer X rays. Over a span of three years, one thousand X-ray
photographs were used to make diagnoses. When these diagnoses
were compared to the results of surgical biopsies, it was confirmed
that mammography had produced 238 correct diagnoses of cancer,
out of 240 cases. Egan therefore joined the crusade for regular breast
cancer screening.
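For readers who want a sense of how strong those results were, the sketch below simply redoes the arithmetic using the counts quoted above; the percentage figures are illustrative calculations, not numbers reported in the original studies.

```python
# Back-of-the-envelope accuracy check using the counts quoted above.

# Gershon-Cohen study: 92 benign and 23 malignant tumors diagnosed,
# with only one diagnosis later proving wrong.
tumor_diagnoses = 92 + 23
wrong_diagnoses = 1
gershon_cohen_accuracy = (tumor_diagnoses - wrong_diagnoses) / tumor_diagnoses

# Egan series: 238 correct cancer diagnoses out of 240 cases.
egan_accuracy = 238 / 240

print(f"Gershon-Cohen study: {gershon_cohen_accuracy:.1%} of tumor diagnoses confirmed")
print(f"Egan series: {egan_accuracy:.1%} of diagnoses confirmed by biopsy")
```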
Once mammography was finally accepted by doctors in the late
1950’s and early 1960’s, researchers realized that they needed a way
to teach mammography quickly and effectively to those who would
use it. A study was done, and it showed that any radiologist could
conduct the procedure with only five days of training.
In the early 1970’s, the American Cancer Society and the National
Cancer Institute joined forces on a nationwide breast cancer
screening program called the “Breast Cancer Detection Demonstration
Project.” Its goal in 1971 was to screen more than 250,000
women over the age of thirty-five.
Since the 1960’s, however, some people had argued that mammography
was dangerous because it used radiation on patients. In
1976, Ralph Nader, a consumer advocate, stated that women who
were to undergo mammography should be given consent forms
that would list the dangers of radiation. In the years that followed,
mammography was refined to reduce the amount of radiation
needed to detect cancer. It became a standard tool for diagnosis, and
doctors recommended that women have a mammogram every two
or three years after the age of forty.
Impact
Radiology is not a science that concerns only breast cancer screening.
While it does provide the technical facilities necessary to practice
mammography, the photographic images obtained must be interpreted
by general practitioners, as well as by specialists. Once Gershon-Cohen had demonstrated the viability of the technique, a
means of training was devised that made it fairly easy for clinicians
to learn how to practice mammography successfully. Once all these
factors—accuracy, safety, simplicity—were in place, mammography
became an important factor in the fight against breast cancer.
The progress made in mammography during the twentieth century
was a major improvement in the effort to keep more women
from dying of breast cancer. The disease has always been one of the
primary contributors to the number of female cancer deaths that occur
annually in the United States and around the world. This high
figure stems from the fact that women had no way of detecting the
disease until tumors were in an advanced state.
Once Salomon’s procedure was utilized, physicians had a means
by which they could look inside breast tissue without engaging in
exploratory surgery, thus giving women a screening technique that
was simple and inexpensive. By 1971, a quarter million women over
age thirty-five had been screened. Twenty years later, that number
was in the millions.
Long-distance telephone
The invention: System for conveying voice signals via wires over
long distances.
The people behind the invention:
Alexander Graham Bell (1847-1922), a Scottish American
inventor
Thomas A. Watson (1854-1934), an American electrical engineer
The Problem of Distance
The telephone may be the most important invention of the nineteenth
century. The device developed by Alexander Graham Bell
and Thomas A. Watson opened a new era in communication and
made it possible for people to converse over long distances for the
first time. During the last two decades of the nineteenth century and
the first decade of the twentieth century, the American Telephone
and Telegraph (AT&T) Company continued to refine and upgrade
telephone facilities, introducing such innovations as automatic dialing
and long-distance service.
One of the greatest challenges faced by Bell engineers was to
develop a way of maintaining signal quality over long distances.
Telephone wires were susceptible to interference from electrical
storms and other natural phenomena, and electrical resistance
and radiation caused a fairly rapid drop-off in signal strength,
which made long-distance conversations barely audible or unintelligible.
By 1900, Bell engineers had discovered that signal strength could
be improved somewhat by wrapping the main wire conductor with
thinner wires called “loading coils” at prescribed intervals along
the length of the cable. Using this procedure, Bell extended long-distance
service from New York to Denver, Colorado, which was
then considered the farthest point that could be reached with acceptable
quality. The result, however, was still unsatisfactory, and
Bell engineers realized that some form of signal amplification would
be necessary to improve the quality of the signal.
A breakthrough came in 1906, when Lee de Forest invented the
“audion tube,” which could send and amplify radio waves. Bell scientists
immediately recognized the potential of the new device for
long-distance telephony and began building amplifiers that would
be placed strategically along the long-distance wire network.
Work progressed so quickly that by 1909, Bell officials were predicting
that the first transcontinental long-distance telephone service,
between New York and San Francisco, was imminent. In that
year, Bell president Theodore N. Vail went so far as to promise the
organizers of the Panama-Pacific Exposition, scheduled to open in
San Francisco in 1914, that Bell would offer a demonstration at
the exposition. The promise was risky, because certain technical
problems associated with sending a telephone signal over a 4,800-
kilometer wire had not yet been solved. De Forest’s audion tube was
a crude device, but progress was being made.
Two more breakthroughs came in 1912, when de Forest improved
on his original concept and Bell engineer Harold D. Arnold
improved it further. Bell bought the rights to de Forest’s vacuum-tube
patents in 1913 and completed the construction of the New
York-San Francisco circuit. The last connection was made at the
Utah-Nevada border on June 17, 1914.
Success Leads to Further Improvements
Bell’s long-distance network was tested successfully on June 29,
1914, but the official demonstration was postponed until January
25, 1915, to accommodate the Panama-Pacific Exposition, which
had also been postponed. On that date, a connection was established
between Jekyll Island, Georgia, where Theodore Vail was recuperating
from an illness, and New York City, where Alexander
Graham Bell was standing by to talk to his former associate Thomas
Watson, who was in San Francisco. When everything was in place,
the following conversation took place. Bell: “Hoy! Hoy! Mr. Watson?
Are you there? Do you hear me?” Watson: “Yes, Dr. Bell, I hear
you perfectly. Do you hear me well?” Bell: “Yes, your voice is perfectly
distinct. It is as clear as if you were here in New York.”
The first transcontinental telephone conversation transmitted
by wire was followed quickly by another that was transmitted via radio. Although the Bell company was slow to recognize the potential
of radio wave amplification for the “wireless” transmission
of telephone conversations, by 1909 the company had made a significant
commitment to conduct research in radio telephony. On
April 4, 1915, a wireless signal was transmitted by Bell technicians
from Montauk Point on Long Island, New York, to Wilmington,
Delaware, a distance of more than 320 kilometers. Shortly thereafter,
a similar test was conducted between New York City and
Brunswick, Georgia, via a relay station at Montauk Point. The total
distance of the transmission was more than 1,600 kilometers. Finally,
in September, 1915, Vail placed a successful transcontinental radiotelephone
call from his office in New York to Bell engineering chief
J. J. Carty in San Francisco.
Only a month later, the first telephone transmission across the
Atlantic Ocean was accomplished via radio from Arlington, Virginia,
to the Eiffel Tower in Paris, France. The signal was detectable,
although its quality was poor. It would be ten years before true
transatlantic radio-telephone service would begin.
The Bell company recognized that creating a nationwide long-distance
network would increase the volume of telephone calls simply
by increasing the number of destinations that could be reached
from any single telephone station. As the network expanded, each
subscriber would have more reason to use the telephone more often,
thereby increasing Bell’s revenues. Thus, the company’s strategy
became one of tying local and regional networks together to create
one large system.
Impact
Just as the railroads had interconnected centers of commerce, industry,
and agriculture all across the continental United States in the
nineteenth century, the telephone promised to bring a new kind of
interconnection to the country in the twentieth century: instantaneous
voice communication. During the first quarter century after
the invention of the telephone and during its subsequent commercialization,
the emphasis of telephone companies was to set up central
office switches that would provide interconnections among
subscribers within a fairly limited geographical area. Large cities were wired quickly, and by the beginning of the twentieth century
most were served by telephone switches that could accommodate
thousands of subscribers.
The development of transcontinental telephone service was a
milestone in the history of telephony for two reasons. First, it was a
practical demonstration of the almost limitless applications of this
innovative technology. Second, for the first time in its brief history,
the telephone network took on a national character. It became clear
that large central office networks, even in large cities such as New
York, Chicago, and Baltimore, were merely small parts of a much
larger, universally accessible communication network that spanned
a continent. The next step would be to look abroad, to Europe and
beyond.
Long-distance radiotelephony
The invention: The first radio transmissions from the United States
to Europe opened a new era in telecommunications.
The people behind the invention:
Guglielmo Marconi (1874-1937), Italian inventor of transatlantic
telegraphy
Reginald Aubrey Fessenden (1866-1932), an American radio
engineer
Lee de Forest (1873-1961), an American inventor
Harold D. Arnold (1883-1933), an American physicist
John J. Carty (1861-1932), an American electrical engineer
An Accidental Broadcast
The idea of commercial transatlantic communication was first
conceived by Italian physicist and inventor Guglielmo Marconi, the
pioneer of wireless telegraphy. Marconi used a spark transmitter to
generate radio waves that were interrupted, or modulated, to form
the dots and dashes of Morse code. The rapid generation of sparks
created an electromagnetic disturbance that sent radio waves of different
frequencies into the air—a broad, noisy transmission that was
difficult to tune and detect.
The inventor Reginald Aubrey Fessenden produced an alternative
method that became the basis of radio technology in the twentieth
century. His continuous radio waves kept to one frequency,
making them much easier to detect at long distances. Furthermore,
the continuous waves could be modulated by an audio signal, making
it possible to transmit the sound of speech.
Fessenden used an alternator to generate electromagnetic waves
at the high frequencies required in radio transmission. It was specially
constructed at the laboratories of the General Electric Company.
The machine was shipped to Brant Rock, Massachusetts, in
1906 for testing. Radio messages were sent to a boat cruising offshore,
and the feasibility of radiotelephony was thus demonstrated.
Fessenden followed this success with a broadcast of messages and music between Brant Rock and a receiving station constructed at
Plymouth, Massachusetts.
The equipment installed at Brant Rock had a range of about 160
kilometers. The transmission distance was determined by the strength
of the electric power delivered by the alternator, which was measured
in watts. Fessenden’s alternator was rated at 500 watts, but it
usually delivered much less power.
Yet this was sufficient to send a radio message across the Atlantic.
Fessenden had built a receiving station at Machrihanish, Scotland,
to test the operation of a large rotary spark transmitter that he
had constructed. An operator at this station picked up the voice of
an engineer at Brant Rock who was sending instructions to Plymouth.
Thus, the first radiotelephone message had been sent across
the Atlantic by accident. Fessenden, however, decided not to make
this startling development public. The station at Machrihanish was
destroyed in a storm, making it impossible to carry out further tests.
The successful transmission undoubtedly had been the result of exceptionally
clear atmospheric conditions that might never again favor
the inventor.
One of the parties following the development of the experiments
in radio telephony was the American Telephone and Telegraph
(AT&T) Company. Fessenden entered into negotiations to sell his
system to the telephone company, but, because of the financial panic
of 1907, the sale was never made.
Virginia to Paris and Hawaii
The English physicist John Ambrose Fleming had invented a two-element
(diode) vacuum tube in 1904 that could be used to generate
and detect radio waves. Two years later, the American inventor Lee
de Forest added a third element to the diode to produce his “audion”
(triode), which was a more sensitive detector. John J. Carty, head of a
research and development effort at AT&T, examined these new devices
carefully. He became convinced that an electronic amplifier, incorporating
the triode into its design, could be used to increase the
strength of telephone signals and to carry them over long distances.
On Carty’s advice, AT&T purchased the rights to de Forest’s
audion. A team of about twenty-five researchers, under the leadership of physicist Harold D. Arnold, was assigned the job of perfecting
the triode and turning it into a reliable amplifier. The improved
triode was responsible for the success of transcontinental cable telephone
service, which was introduced in January, 1915. The triode
was also the basis of AT&T’s foray into radio telephony.
Carty’s research plan called for a system with three components:
an oscillator to generate the radio waves, a modulator to add the
audio signals to the waves, and an amplifier to transmit the radio
waves. The total power output of the system was 7,500 watts,
enough to send the radio waves over thousands of kilometers.
The apparatus was installed in the U.S. Navy’s radio tower in
Arlington, Virginia, in 1915. Radio messages from Arlington were
picked up at a receiving station in California, a distance of 4,000 kilometers,
then at a station in Pearl Harbor, Hawaii, which was 7,200
kilometers from Arlington. AT&T’s engineers had succeeded in
joining the company telephone lines with the radio transmitter at
Arlington; therefore, the president of AT&T, Theodore Vail, could
pick up his telephone and talk directly with someone in California.
The next experiment was to send a radio message from Arlington
to a receiving station set up in the Eiffel Tower in Paris. After several
unsuccessful attempts, the telephone engineers in the Eiffel Tower
finally heard Arlington’s messages on October 21, 1915. The AT&T
receiving station in Hawaii also picked up the messages. The two receiving
stations had to send their reply by telegraph to the United
States because both stations were set up to receive only. Two-way
radio communication was still years in the future.
Impact
The announcement that messages had been received in Paris was
front-page news and brought about an outburst of national pride in
the United States. The demonstration of transatlantic radio telephony
was more important as publicity for AT&T than as a scientific
advance. All the credit went to AT&T and to Carty’s laboratory.
Both Fessenden and de Forest attempted to draw attention to their
contributions to long-distance radio telephony, but to no avail. The
Arlington-to-Paris transmission was a triumph for corporate public
relations and corporate research.
The development of the triode had been achieved with large
teams of highly trained scientists—in contrast to the small-scale efforts
of Fessenden and de Forest, who had little formal scientific
training. Carty’s laboratory was an example of the new type of industrial
research that was to dominate the twentieth century. The
golden days of the lone inventor, in the mold of Thomas Edison or
Alexander Graham Bell, were gone.
In the years that followed the first transatlantic radio telephone
messages, little was done by AT&T to advance the technology or to
develop a commercial service. The equipment used in the 1915 demonstration was more a makeshift laboratory apparatus than a prototype
for a new radio technology. The messages sent were short and
faint. There was a great gulf between hearing “hello” and “goodbye”
amid the static. The many predictions of a direct telephone
connection between New York and other major cities overseas were
premature. It was not until 1927 that a transatlantic radio circuit was
opened for public use. By that time, a new technological direction
had been taken, and the method used in 1915 had been superseded
by shortwave radio communication.
Laser vaporization
The invention: Technique using laser light beams to vaporize the
plaque that clogs arteries.
The people behind the invention:
Albert Einstein (1879-1955), a German-born American theoretical physicist
Theodore Harold Maiman (1927- ), inventor of the laser
Light, Lasers, and Coronary Arteries
Visible light, a type of electromagnetic radiation, is actually a
form of energy. The fact that the light beams produced by a light
bulb can warm an object demonstrates that this is the case. Light
beams are radiated in all directions by a light bulb. In contrast, the
device called the “laser” produces light that travels in the form of a
“coherent” unidirectional beam. Coherent light beams can be focused
on very small areas, generating sufficient heat to melt steel.
The term “laser” was coined in 1957 by R. Gordon Gould of
Columbia University. It stands for light amplification by stimulated
emission of radiation, the means by which laser light beams are
made. Many different materials—including solid ruby gemstones,
liquid dye solutions, and mixtures of gases—can produce such
beams in a process called “lasing.” The different types of lasers yield
light beams of different colors that have many uses in science, industry,
and medicine. For example, ruby lasers, which were developed
in 1960, are widely used in eye surgery. In 1983, a group of
physicians in Toulouse, France, used a laser for cardiovascular treatment.
They used the laser to vaporize the “atheroma” material that
clogs the arteries in the condition called “atherosclerosis.” The technique
that they used is known as “laser vaporization surgery.”
Laser Operation, Welding, and Surgery
Lasers are electronic devices that emit intense beams of light
when a process called “stimulated emission” occurs. The principles
of laser operation, including stimulated emission, were established
by Albert Einstein and other scientists in the first third of the twentieth century. In 1960, Theodore H. Maiman of the Hughes Research
Center in Malibu, California, built the first laser, using a ruby crystal
to produce a laser beam composed of red light.
All lasers are made up of three main components. The first of
these, the laser’s “active medium,” is a solid (like Maiman’s ruby
crystal), a liquid, or a gas that can be made to lase. The second component
is a flash lamp or some other light energy source that puts
light into the active medium. The third component is a pair of mirrors
that are situated on both sides of the active medium and are designed
in such a way that one mirror transmits part of the energy
that strikes it, yielding the light beam that leaves the laser.
Lasers can produce energy because light is one of many forms of
energy that are called, collectively, electromagnetic radiation (among
the other forms of electromagnetic radiation are X rays and radio
waves). These forms of electromagnetic radiation have different wavelengths;
the smaller the wavelength, the higher the energy. This energy is
carried in discrete packets called “quanta.” The emission of
light quanta from atoms that are said to be in the “excited state” produces
energy, and the absorption of quanta by unexcited atoms—
atoms said to be in the “ground state”—excites those atoms.
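The inverse relation between wavelength and quantum energy described here is the Planck relation, E = hc/λ. The short sketch below works a pair of illustrative numbers; the specific wavelengths (a ruby-laser line near 694 nm and a 0.1 nm X ray) are example values added here, not figures from the text.

```python
# Illustrative use of the Planck relation E = h*c / wavelength:
# the shorter the wavelength, the more energetic each quantum.
PLANCK = 6.626e-34       # joule-seconds
LIGHT_SPEED = 3.0e8      # meters per second

def quantum_energy(wavelength_m):
    """Energy (in joules) of a single light quantum at the given wavelength."""
    return PLANCK * LIGHT_SPEED / wavelength_m

red_ruby_light = quantum_energy(694e-9)   # ruby-laser red light, about 694 nm
x_ray = quantum_energy(0.1e-9)            # a hard X ray, about 0.1 nm

print(f"Ruby-laser quantum: {red_ruby_light:.2e} J")
print(f"X-ray quantum:      {x_ray:.2e} J  (shorter wavelength, higher energy)")
```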
The familiar light bulb spontaneously and haphazardly emits
light of many wavelengths from excited atoms. This emission occurs
in all directions and at widely varying times. In contrast, the
light reflection between the mirrors at the ends of a laser causes all
of the many excited atoms present in the active medium simultaneously
to emit light waves of the same wavelength. This process is
called “stimulated emission.”
Stimulated emission ultimately causes a laser to yield a beam of
coherent light, which means that the wavelength, emission time,
and direction of all the waves in the laser beam are the same. The
use of focusing devices makes it possible to convert an emitted laser
beam into a point source that can be as small as a few thousandths of
an inch in diameter. Such focused beams are very hot, and they can
be used for such diverse functions as cutting or welding metal objects
and performing delicate surgery. The nature of the active medium
used in a laser determines the wavelength of its emitted light
beam; this in turn dictates both the energy of the emitted quanta and
the appropriate uses for the laser.
Maiman’s ruby laser, for example, has been used since the 1960’s
in eye surgery to reattach detached retinas. This is done by focusing
the laser on the tiny retinal tear that causes a retina to become detached.
The very hot, high-intensity light beam then “welds” the
retina back into place, bloodlessly, by burning it to produce scar tissue.
The burning process has no effect on nearby tissues. Other
types of lasers have been used in surgeries on the digestive tract and
the uterus since the 1970’s.
In 1983, a group of physicians began using lasers to treat cardiovascular
disease. The original work, which was carried out by a
number of physicians in Toulouse, France, involved the vaporization
of atheroma deposits (atherosclerotic plaque) in a human artery. This very exciting event added a new method to medical science’s
arsenal of life-saving techniques.
Consequences
Since their discovery, lasers have been used for many purposes
in science and industry. Such uses include the study of the laws of
chemistry and physics, photography, communications, and surveying.
Lasers have been utilized in surgery since the mid-1960’s, and
their use has had a tremendous impact on medicine. The first type
of laser surgery to be conducted was the repair of detached retinas
via ruby lasers. This technique has become the method of choice for
such eye surgery because it takes only minutes to perform rather
than the hours required for conventional surgical methods. It is also
beneficial because the lasing of the surgical site cauterizes that site,
preventing bleeding.
In the late 1970’s, the use of other lasers for abdominal cancer
surgery and uterine surgery began and flourished. In these
forms of surgery, more powerful lasers are used. In the 1980’s,
laser vaporization surgery (LVS) began to be used to clear atherosclerotic
plaque (atheromas) from clogged arteries. This methodology
gives cardiologists a useful new tool. Before LVS was
available, surgeons dislodged atheromas by means of “transluminal
angioplasty,” which involved pushing small, fluoroscope-guided
inflatable balloons through clogged arteries.
Laser eye surgery
The invention: The first significant clinical ophthalmic application
of any laser system was the treatment of retinal tears with a
pulsed ruby laser.
The people behind the invention:
Charles J. Campbell (1926- ), an ophthalmologist
H. Christian Zweng (1925- ), an ophthalmologist
Milton M. Zaret (1927- ), an ophthalmologist
Theodore Harold Maiman (1927- ), the physicist who
developed the first laser
Monkeys and Rabbits
The term “laser” is an acronym for light amplification by the
stimulated emission of radiation. The development of the laser for
ophthalmic (eye) surgery arose from the initial concentration
of conventional light by magnifying lenses.
Within a laser, atoms are highly energized. When one of these atoms
loses its energy in the form of light, it stimulates other atoms to
emit light of the same frequency and in the same direction. A cascade
of these identical light waves is soon produced, which then oscillate
back and forth between the mirrors in the laser cavity. One
mirror is only partially reflective, allowing some of the laser light to
pass through. This light can be concentrated further into a small
burst of high intensity.
On July 7, 1960, Theodore Harold Maiman made public his discovery
of the first laser—a ruby laser. Shortly thereafter, ophthalmologists
began using ruby lasers for medical purposes.
The first significant medical uses of the ruby laser occurred in
1961, with experiments on animals conducted by Charles J. Campbell
in New York, H. Christian Zweng, and Milton M. Zaret. Zaret and his
colleagues produced photocoagulation (a thickening or drawing together
of substances by use of light) of the eyes of rabbits by flashes
from a ruby laser. Sufficient energy was delivered to cause immediate
thermal injury to the retina and iris of the rabbit. The beam also was directed to the interior of the rabbit eye, resulting in retinal coagulations.
The team examined the retinal lesions and pointed out both
the possible advantages of laser as a tool for therapeutic photocoagulation
and the potential applications in medical research.
In 1962, Zweng, along with several of his associates, began experimenting
with laser photocoagulation on the eyes of monkeys
and rabbits in order to establish parameters for the use of lasers on
the human eye.
Reflected by Blood
The vitreous humor, a transparent jelly that usually fills the vitreous
cavity of the eyes of younger individuals, commonly shrinks with age,
with myopia, or with certain pathologic conditions. As these conditions
occur, the vitreous humor begins to separate from the adjacent
retina. In some patients, the separating vitreous humor produces a
traction (pulling), causing a retinal tear to form. Through this opening in
the retina, liquefied vitreous humor can pass to a site underneath the
retina, producing retinal detachment and loss of vision.
A laser can be used to cause photocoagulation of a retinal tear. As a
result, an adhesive scar forms between the retina surrounding the
tear and the underlying layers so that, despite traction, the retina
does not detach. If more than a small area of retina has detached, the
laser often is ineffective and major retinal detachment surgery must
be performed. Thus, in the experiments of Campbell and Zweng, the
ruby laser was used to prevent, rather than treat, retinal detachment.
In subsequent experiments with humans, all patients were treated
with the experimental laser photocoagulator without anesthesia.
Although usually no attempt was made to seal holes or tears, the
diseased portions of the retina were walled off satisfactorily so that
no detachments occurred. One problem that arose involved microaneurysms.
A “microaneurysm” is a tiny aneurysm, or blood-filled
bubble extending from the wall of a blood vessel. When attempts to
obliterate microaneurysms were unsuccessful, the researchers postulated
that the color of the ruby pulse so resembled the red of blood
that the light was reflected rather than absorbed. They believed that
another lasing material emitting light in another part of the spectrum
might have performed more successfully.
Previously, xenon-arc lamp photocoagulators had been used to
treat retinal tears. The long exposure time required of these systems,
combined with their broad spectral range emission (versus
the single wavelength output of a laser), however, made the retinal
spot on which the xenon-arc could be focused too large for many
applications. Focused laser spots on the retina could be as small as
50 microns.
Consequences
The first laser in ophthalmic use by Campbell, Zweng, and Zaret,
among others, was a solid laser—Maiman’s ruby laser. While the results
they achieved with this laser were more impressive than with
the previously used xenon-arc, in the decades following these experiments,
argon gas replaced ruby as the most frequently used material
in treating retinal tears.
Argon laser energy is delivered to the area around the retinal tear
through a slit lamp or by using an intraocular probe introduced directly
into the eye. The argon wavelength is transmitted through the
clear structures of the eye, such as the cornea, lens, and vitreous.
This beam is composed of blue-green light that can be effectively
aimed at the desired portion of the eye. Nevertheless, the beam can
be absorbed by cataracts and by vitreous or retinal blood, decreasing
its effectiveness.
Moreover, while the ruby laser was found to be highly effective
in producing an adhesive scar, it was not useful in the treatment of
vascular diseases of the eye. A series of laser sources, each with different
characteristics, was considered, investigated, and used clinically
for various durations during the period that followed Campbell
and Zweng’s experiments.
Other laser types that are being adapted for use in ophthalmology
are carbon dioxide lasers for scleral surgery (surgery on the
tough, white, fibrous membrane covering the entire eyeball except
the area covered by the cornea) and eye wall resection, dye lasers to
kill or slow the growth of tumors, excimer lasers for their ability to
break down corneal tissue without heating, and pulsed erbium lasers
used to cut intraocular membranes.
Laser-diode recording process
The invention: Video and audio playback system that uses a low-power
laser to decode information digitally stored on reflective
disks.
The organization behind the invention:
The Philips Corporation, a Dutch electronics firm
The Development of Digital Systems
Since the advent of the computer age, it has been the goal of
many equipment manufacturers to provide reliable digital systems
for the storage and retrieval of video and audio programs. A need
for such devices was perceived for several reasons. Existing storage
media (movie film and 12-inch, vinyl, long-playing records) were
relatively large and cumbersome to manipulate and were prone to
degradation, breakage, and unwanted noise. Thus, during the late
1960’s, two different methods for storing video programs on disc
were invented. A mechanical system was demonstrated by the
Telefunken Company, while the Radio Corporation of America
(RCA) introduced an electrostatic device (a device that used static
electricity). The first commercially successful system, however, was
developed during the mid-1970’s by the Philips Corporation.
Philips devoted considerable resources to creating a digital video
system, read by light beams, which could reproduce an entire feature-
length film from one 12-inch videodisc. An integral part of this
innovation was the fabrication of a device small enough and fast
enough to read the vast amounts of greatly compacted data stored
on the 12-inch disc without introducing unwanted noise. Although
Philips was aware of the other formats, the company opted to use an
optical scanner with a small “semiconductor laser diode” to retrieve
the digital information. The laser diode is only a fraction of a millimeter
in size, operates quite efficiently with high amplitude and relatively
low power (0.1 watt), and can be used continuously. Because
this configuration operates at a high frequency, its information-carrying
capacity is quite large.
Although the digital videodisc system (called “laservision”) works
well, the low level of noise and the clear images offered by this system
were masked by the low quality of the conventional television
monitors on which they were viewed. Furthermore, the high price
of the playback systems and the discs made them noncompetitive
with the videocassette recorders (VCRs) that were then capturing
the market for home systems. VCRs had the additional advantage
that programs could be recorded or copied easily. The Philips Corporation
turned its attention to utilizing this technology in an area
where low noise levels and high quality would be more readily apparent—
audio disc systems. By 1979, they had perfected the basic
compact disc (CD) system, which soon revolutionized the world of
stereophonic home systems.
Reading Digital Discs with Laser Light
Digital signals (signals composed of numbers) are stored on
discs as “pits” impressed into the plastic disc and then coated with a
thin reflective layer of aluminum. A laser beam, manipulated by
delicate, fast-moving mirrors, tracks and reads the digital information
as changes in light intensity. These data are then converted to a
varying electrical signal that contains the video or audio information.
The data are then recovered by means of a sophisticated
pickup that consists of the semiconductor laser diode, a polarizing
beam splitter, an objective lens, a collective lens system, and a
photodiode receiver. The beam from the laser diode is focused by a
collimator lens (a lens that collects and focuses light) and then
passes through the polarizing beam splitter (PBS). This device acts
like a one-way mirror mounted at 45 degrees to the light path. Light
from the laser passes through the PBS as if it were a window, but the
light emerges in a polarized state (which means that the vibration of
the light takes place in only one plane). For the beam reflected from
the CD surface, however, the PBS acts like a mirror, since the reflected
beam has an opposite polarization. The light is thus deflected
toward the photodiode detector. The objective lens is needed
to focus the light onto the disc surface. On the outer surface of the
transparent disc, the main spot of light has a diameter of 0.8 millimeter,
which narrows to only 0.0017 millimeter at the reflective surface. At the surface, the spot is about three times the size of the microscopic
pits (0.0005 millimeter).
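As a quick consistency check on the dimensions quoted above, the arithmetic below confirms that the focused spot at the reflective layer is indeed "about three times" the pit size; all of the numbers come straight from the text.

```python
# Consistency check of the beam and pit dimensions quoted above (millimeters).
spot_at_outer_surface = 0.8        # beam diameter where it enters the clear disc
spot_at_reflective_layer = 0.0017  # focused spot at the aluminum data layer
pit_size = 0.0005                  # size of a microscopic pit

focusing_ratio = spot_at_outer_surface / spot_at_reflective_layer
spot_to_pit = spot_at_reflective_layer / pit_size

print(f"The beam is focused down by a factor of about {focusing_ratio:.0f}")
print(f"Spot is about {spot_to_pit:.1f} times the pit size")  # ~3.4, i.e. "about three times"
```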
The data encoded on the disc determine the relative intensity of
the reflected light, on the basis of the presence or absence of pits.
When the reflected laser beam enters the photodiode, a modulated
light beam is changed into a digital signal that becomes an analog
(continuous) audio signal after several stages of signal processing
and error correction.
Consequences
The development of the semiconductor laser diode and associated
circuitry for reading stored information has made CD audio
systems practical and affordable. These systems can offer the quality
of a live musical performance with a clarity that is undisturbed
by noise and distortion. Digital systems also offer several other significant
advantages over analog devices. The dynamic range (the
difference between the softest and the loudest signals that can be
stored and reproduced) is considerably greater in digital systems. In
addition, digital systems can be copied precisely; the signal is not
degraded by copying, as is the case with analog systems. Finally,
error-correcting codes can be used to detect and correct errors in
transmitted or reproduced digital signals, allowing greater precision
and a higher-quality output sound.
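The dynamic-range advantage can be put in rough numbers. The sketch below assumes linear n-bit samples and estimates the range in decibels as 20 log10(2^n); the 16-bit figure is the usual choice for CD audio and is an assumption here, not something stated in the text.

```python
import math

# Rough dynamic-range estimate for a digital system with n-bit linear samples.
# Assumption: dynamic range is approximately 20 * log10(2**n) decibels.
def dynamic_range_db(bits):
    return 20 * math.log10(2 ** bits)

for bits in (8, 12, 16):
    print(f"{bits:>2}-bit samples: roughly {dynamic_range_db(bits):.0f} dB")
# 16-bit samples, as used on audio CDs, give roughly 96 dB,
# far more than typical analog tape or vinyl playback can deliver.
```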
Besides laser video systems, there are many other applications
for laser-read CDs. Compact disc read-only memory (CD-ROM) is
used to store computer text. One standard CD can store 500 megabytes
of information, which is about twenty times the storage of a
hard-disk drive on a typical home computer. Compact disc systems
can also be integrated with conventional televisions (called CD-V)
to present twenty minutes of sound and five minutes of sound with
picture. Finally, CD systems connected with a computer (CD-I) mix
audio, video, and computer programming. These devices allow the
user to stop at any point in the program, request more information,
and receive that information as sound with graphics, film clips, or
as text on the screen.
Laser
The invention: Taking its name from the acronym for light amplification
by the stimulated emission of radiation, a laser produces a
beam of electromagnetic radiation that is monochromatic, highly
directional, and coherent. Lasers have found multiple applications
in electronics, medicine, and other fields.
The people behind the invention:
Theodore Harold Maiman (1927- ), an American physicist
Charles Hard Townes (1915- ), an American physicist who
was a cowinner of the 1964 Nobel Prize in Physics
Arthur L. Schawlow (1921-1999), an American physicist,
cowinner of the 1981 Nobel Prize in Physics
Mary Spaeth (1938- ), the American inventor of the tunable
laser
Coherent Light
Laser beams differ from other forms of electromagnetic radiation
in consisting of a single wavelength, being highly directional,
and having waves whose crests and troughs are aligned. A laser
beam launched from Earth has produced a spot a few kilometers
wide on the Moon, nearly 400,000 kilometers away. Ordinary light
would have spread much more and produced a spot several times
wider than the Moon. Laser light can also be concentrated so as to
yield an enormous intensity of energy, more than that of the surface
of the Sun, an impossibility with ordinary light.
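The Earth-to-Moon comparison can be restated as a divergence angle. The sketch below treats "a few kilometers" as an assumed 3 km and uses the small-angle approximation; both that figure and the Moon's roughly 3,500-kilometer diameter are illustrative values added here, not numbers from the text.

```python
# Rough divergence estimate from the Earth-to-Moon example above.
spot_width_km = 3.0        # "a few kilometers" (assumed value for illustration)
earth_moon_km = 400_000.0  # distance quoted in the text

divergence_rad = spot_width_km / earth_moon_km   # small-angle approximation
print(f"Laser beam divergence: about {divergence_rad * 1e6:.1f} microradians")

# An ordinary beam spreading to "several times wider than the Moon"
# (the Moon is roughly 3,500 km across) diverges thousands of times more.
moon_diameter_km = 3_500.0
ordinary_spot_km = 3 * moon_diameter_km
print(f"Ordinary light: spot of order {ordinary_spot_km:,.0f} km at the same distance")
```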
In order to appreciate the difference between laser light and ordinary
light, one must examine how light of any kind is produced. An
ordinary light bulb contains atoms of gas. For the bulb to light up,
these atoms must be excited to a state of energy higher than their
normal, or ground, state. This is accomplished by sending a current
of electricity through the bulb; the current jolts the atoms into the
higher-energy state. This excited state is unstable, however, and the
atoms will spontaneously return to their ground state by ridding
themselves of excess energy.
As these atoms emit energy, light is produced. The light emitted
by a lamp full of atoms is disorganized and emitted in all directions
randomly. This type of light, common to all ordinary sources, from
fluorescent lamps to the Sun, is called “incoherent light.”
Laser light is different. The excited atoms in a laser emit their excess
energy in a unified, controlled manner. The atoms remain in the
excited state until there are a great many excited atoms. Then, they
are stimulated to emit energy, not independently, but in an organized
fashion, with all their light waves traveling in the same direction,
crests and troughs perfectly aligned. This type of light is called
“coherent light.”
Theory to Reality
In 1958, Charles Hard Townes of Columbia University, together
with Arthur L. Schawlow, explored the requirements of the laser in
a theoretical paper. In the Soviet Union, F. A. Butayeva and V. A.
Fabrikant had amplified light in 1957 using mercury; however, their
work was not published for two years, and then not in a scientific
journal. The work of the Soviet scientists, therefore, received
virtually no attention in the Western world.
In 1960, Theodore Harold Maiman constructed the first laser in
the United States using a single crystal of synthetic pink ruby,
shaped into a cylindrical rod about 4 centimeters long and 0.5 centimeter
across. The ends, polished flat and made parallel to within
about a millionth of a centimeter, were coated with silver to make
them mirrors.
It is a property of stimulated emission that stimulated light
waves will be aligned exactly (crest to crest, trough to trough, and
with respect to direction) with the radiation that does the stimulating.
From the group of excited atoms, one atom returns to its ground state, emitting light. That light hits one of the other excited atoms and
stimulates it to fall to its ground state and emit light. The two light
waves are exactly in step. The light from these two atoms hits other
excited atoms, which respond in the same way, “amplifying” the total
sum of light.
If the first atom emits light in a direction parallel to the length of
the crystal cylinder, the mirrors at both ends bounce the light waves
back and forth, stimulating more light and steadily building up an
increasing intensity of light. The mirror at one end of the cylinder is
constructed to let through a fraction of the light, enabling the light to
emerge as a straight, intense, narrow beam.
Consequences
When the laser was introduced, it was an immediate sensation. In
the eighteen months following Maiman’s announcement that he had
succeeded in producing a working laser, about four hundred companies
and several government agencies embarked on work involving
lasers. Activity centered on improving lasers, as well as on exploring
their applications. At the same time, there was equal activity in publicizing
the near-miraculous promise of the device, in applications covering
the spectrum from “death” rays to sight-saving operations. A
popular film in the James Bond series, Goldfinger (1964), showed the
hero under threat of being sliced in half by a laser beam—an impossibility
at the time the film was made because of the low power-output
of the early lasers.
In the first decade after Maiman’s laser, there was some disappointment.
Successful use of lasers was limited to certain areas of
medicine, such as repairing detached retinas, and to scientific applications,
particularly in connection with standards: The speed of
light was measured with great accuracy, as was the distance to the
Moon. By 1990, partly because of advances in other fields, essentially
all the laser’s promise had been fulfilled, including the death
ray and James Bond’s slicer. Yet the laser continued to find its place
in technologies not envisioned at the time of the first laser. For example,
lasers are now used in computer printers, in compact disc
players, and even in arterial surgery.
Laminated glass
The invention: Double sheets of glass separated by a thin layer of
plastic sandwiched between them.
The people behind the invention:
Edouard Benedictus (1879-1930), a French artist
Katherine Burr Blodgett (1898-1979), an American physicist
The Quest for Unbreakable Glass
People have been fascinated for centuries by the delicate transparency
of glass and the glitter of crystals. They have also been frustrated
by the brittleness and fragility of glass. When glass breaks, it
forms sharp pieces that can cut people severely. During the 1800’s
and early 1900’s, a number of people demonstrated ways to make
“unbreakable” glass. In 1855 in England, the first “unbreakable”
glass panes were made by embedding thin wires in the glass. The
embedded wire grid held the glass together when it was struck or
subjected to the intense heat of a fire. Wire glass is still used in windows
that must be fire resistant. The concept of embedding the wire
within a glass sheet so that the glass would not shatter was a predecessor
of the concept of laminated glass.
A series of inventors in Europe and the United States worked on
the idea of using a durable, transparent inner layer of plastic between
two sheets of glass to prevent the glass from shattering when it was
dropped or struck by an impact. In 1899, Charles E. Wade of Scranton,
Pennsylvania, obtained a patent for a kind of glass that had a sheet or
netting of mica fused within it to bind it. In 1902, Earnest E. G. Street
of Paris, France, proposed coating glass battery jars with pyroxylin
plastic (celluloid) so that they would hold together if they cracked. In
Swindon, England, in 1905, John Crewe Wood applied for a patent
for a material that would prevent automobile windshields from shattering
and injuring people when they broke. He proposed cementing
a sheet of material such as celluloid between two sheets of glass.
When the window was broken, the inner material would hold the
glass splinters together so that they would not cut anyone.
Remembering a Fortuitous Fall
In his patent application, Edouard Benedictus described himself
as an artist and painter. He was also a poet, musician, and
philosopher who was descended from the philosopher Baruch
Benedictus Spinoza; he seemed an unlikely contributor to the
progress of glass manufacture. In 1903, Benedictus was cleaning his laboratory when he dropped a glass bottle that held a nitrocellulose
solution. The solvents, which had evaporated during the
years that the bottle had sat on a shelf, had left a strong celluloid
coating on the glass. When Benedictus picked up the bottle, he was
surprised to see that it had not shattered: It was starred, but all the
glass fragments had been held together by the internal celluloid
coating. He looked at the bottle closely, labeled it with the date
(November, 1903) and the height from which it had fallen, and put
it back on the shelf.
One day some years later (the date is uncertain), Benedictus became
aware of vehicular collisions in which two young women received
serious lacerations from broken glass. He wrote a poetic account
of a daydream he had while he was thinking intently about
the two women. He described a vision in which the faintly illuminated
bottle that had fallen some years before but had not shattered
appeared to float down to him from the shelf. He got up, went into
his laboratory, and began to work on an idea that originated with his
thoughts of the bottle that would not splinter.
Benedictus found the old bottle and devised a series of experiments
that he carried out until the next evening. By the time he had
finished, he had made the first sheet of Triplex glass, for which he
applied for a patent in 1909. He also founded the Société du Verre
Triplex (The Triplex Glass Society) in that year. In 1912, the Triplex
Safety Glass Company was established in England. The company
sold its products for military equipment in World War I, which began
two years later.
Triplex glass was the predecessor of laminated glass. Laminated
glass is composed of two or more sheets of glass with a thin
layer of plastic (usually polyvinyl butyral, although Benedictus
used pyroxylin) laminated between the glass sheets using pressure
and heat. The plastic layer will yield rather than rupture when subjected
to loads and stresses. This prevents the glass from shattering
into sharp pieces. Because of this property, laminated glass is also
known as “safety glass.”
Impact
Even after the protective value of laminated glass was known, the product was not widely used for some years. There were a number
of technical difficulties that had to be solved, such as the discoloring
of the plastic layer when it was exposed to sunlight; the relatively
high cost; and the cloudiness of the plastic layer, which
obscured vision—especially at night. Nevertheless, the expanding
automobile industry and the corresponding increase in the number
of accidents provided the impetus for improving the qualities and
manufacturing processes of laminated glass. In the early part of the
century, almost two-thirds of all injuries suffered in automobile accidents
involved broken glass.
Laminated glass is used in many applications in which safety is
important. It is typically used in all windows in cars, trucks, ships,
and aircraft. Thick sheets of bullet-resistant laminated glass are
used in banks, jewelry displays, and military installations. Thinner
sheets of laminated glass are used as security glass in museums, libraries,
and other areas where resistance to break-in attempts is
needed. Many buildings have large ceiling skylights that are made
of laminated glass; if the glass is damaged, it will not shatter, fall,
and hurt people below. Laminated glass is used in airports, hotels,
and apartments in noisy areas and in recording studios to reduce
the amount of noise that is transmitted. It is also used in safety goggles
and in viewing ports at industrial plants and test chambers.
Edouard Benedictus’s recollection of the bottle that fell but did not
shatter has thus helped make many situations in which glass is used
safer for everyone.
Iron lung
The invention: A mechanical respirator that saved the lives of victims
of poliomyelitis.
The people behind the invention:
Philip Drinker (1894-1972), an engineer who made many
contributions to medicine
Louis Shaw (1886-1940), a respiratory physiologist who
assisted Drinker
Charles F. McKhann III (1898-1988), a pediatrician and
founding member of the American Board of Pediatrics
A Terrifying Disease
Poliomyelitis (polio, or infantile paralysis) is an infectious viral
disease that damages the central nervous system, causing paralysis
in many cases. Its effect results from the destruction of neurons
(nerve cells) in the spinal cord. In many cases, the disease produces
crippled limbs and the wasting away of muscles. In others, polio results
in the fatal paralysis of the respiratory muscles. It is fortunate
that use of the Salk and Sabin vaccines beginning in the 1950’s has
virtually eradicated the disease.
In the 1920’s, poliomyelitis was a terrifying disease. Paralysis of
the respiratory muscles caused rapid death by suffocation, often
within only a few hours after the first signs of respiratory distress
had appeared. In 1929, Philip Drinker and Louis Shaw, both of Harvard
University, reported the development of a mechanical respirator
that would keep those afflicted with the disease alive for indefinite
periods of time. This device, soon nicknamed the “iron lung,”
helped thousands of people who suffered from respiratory paralysis
as a result of poliomyelitis or other diseases.
Development of the iron lung arose after Drinker, then an assistant
professor in Harvard’s Department of Industrial Hygiene, was
appointed to a Rockefeller Institute commission formed to improve
methods for resuscitating victims of electric shock. The best-known
use of the iron lung—treatment of poliomyelitis—was a result of
numerous epidemics of the disease that occurred from 1898 until the 1920’s, each leaving thousands of Americans paralyzed.
The concept of the iron lung reportedly arose from Drinker’s observation
of physiological experiments carried out by Shaw and
Drinker’s brother, Cecil. The experiments involved the placement
of a cat inside an airtight box—a body plethysmograph—with the
cat’s head protruding from an airtight collar. Shaw and Cecil Drinker
then measured the volume changes in the plethysmograph to identify
normal breathing patterns. Philip Drinker then placed cats paralyzed
by curare inside plethysmographs and showed that they
could be kept breathing artificially by use of air from a hypodermic
syringe connected to the device.
Next, they proceeded to build a human-sized plethysmograph-like
machine, with a five-hundred-dollar grant from the New York
Consolidated Gas Company. This was done by a tinsmith and the
Harvard Medical School machine shop.
Breath for Paralyzed Lungs
The first machine was tested on Drinker and Shaw, and after several
modifications were made, a workable iron lung was made
available for clinical use. This machine consisted of a metal cylinder
large enough to hold a human being. One end of the cylinder, which
contained a rubber collar, slid out on casters along with a stretcher
on which the patient was placed. Once the patient was in position
and the collar was fitted around the patient’s neck, the stretcher was
pushed back into the cylinder and the iron lung was made airtight.
The iron lung then “breathed” for the patient by using an electric
blower to remove and replace air alternately inside the machine.
In the human chest, inhalation occurs when the diaphragm contracts
and powerful muscles (which are paralyzed in poliomyelitis
sufferers) expand the rib cage. This lowers the air pressure in the
lungs and allows inhalation to occur. In exhalation, the diaphragm
and chest muscles relax, and air is expelled as the chest cavity returns
to its normal size. In cases of respiratory paralysis treated with
an iron lung, the air coming into or leaving the iron lung alternately
compressed the patient’s chest, producing artificial exhalation, and
then allowed it to expand so that the chest could fill with air. In this
way, iron lungs “breathed” for the patients using them.
Careful examination of each patient was required to allow technicians
to adjust the rate of operation of the machine. A cooling system
and ports for drainage lines, intravenous lines, and the other
apparatus needed to maintain a wide variety of patients were included
in the machine.
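The alternating-pressure cycle described above can be sketched in a few lines. The pressure values below are invented for illustration only; they show the idea of the cycle, not Drinker's actual operating figures.

```python
# Illustrative sketch of the iron lung's alternating-pressure cycle.
# All numbers are made up for demonstration; they are not Drinker's values.
ATMOSPHERIC = 0.0       # chamber pressure relative to the room (arbitrary units)
PARTIAL_VACUUM = -15.0  # pressure while the blower withdraws air

def breathing_cycle(breaths):
    """Yield (phase, chamber_pressure) pairs for the requested number of breaths."""
    for _ in range(breaths):
        # Blower withdraws air: chamber pressure drops, the chest expands,
        # and air flows into the patient's lungs (artificial inhalation).
        yield "inhale", PARTIAL_VACUUM
        # Air returns: pressure rises back toward atmospheric and the chest
        # is pressed back to its resting size (artificial exhalation).
        yield "exhale", ATMOSPHERIC

for phase, pressure in breathing_cycle(breaths=2):
    print(f"{phase:>6}: chamber pressure {pressure:+.0f} relative to the room")
```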
The first person treated in an iron lung was an eight-year-old girl
afflicted with respiratory paralysis resulting from poliomyelitis. The
iron lung kept her alive for five days. Unfortunately, she died from
heart failure as a result of pneumonia. The next iron lung patient, a
Harvard University student, was confined to the machine for several
weeks and later recovered enough to resume a normal life.
The Internet
The invention:
A worldwide network of interlocking computer
systems, developed out of a U.S. government project to improve
military preparedness.
The people behind the invention:
Paul Baran, a researcher for the RAND Corporation
Vinton G. Cerf (1943- ), an American computer scientist
regarded as the “father of the Internet”
Internal combustion engine
The invention: The most common type of engine in automobiles
and many other vehicles, the internal combustion engine is characterized
by the fact that it burns its liquid fuel internally—in
contrast to engines, such as the steam engine, that burn fuel in external
furnaces.
The people behind the invention:
Sir Harry Ralph Ricardo (1885-1974), an English engineer
Oliver Thornycroft (1885-1956), an engineer and works manager
Sir David Randall Pye (1886-1960), an engineer and
administrator
Sir Robert Waley Cohen (1877-1952), a scientist and industrialist
The Internal Combustion Engine: 1900-1916
By the beginning of the twentieth century, internal combustion
engines were almost everywhere. City streets in Berlin, London,
and New York were filled with automobile and truck traffic; gasoline-
and diesel-powered boat engines were replacing sails; stationary
steam engines for electrical generation were being edged out by
internal combustion engines. Even aircraft use was at hand: To
progress from the Wright brothers’ first manned flight in 1903 to the
fighting planes of World War I took only a little more than a decade.
The internal combustion engines of the time, however, were
primitive in design. They were heavy (10 to 15 pounds per output
horsepower, as opposed to 1 to 2 pounds today), slow (typically
1,000 revolutions per minute or less, as opposed to 2,000 to
5,000 today), and extremely inefficient in extracting the energy content
of their fuel. These were not major drawbacks for stationary applications,
or even for road traffic that rarely went faster than 30 or
40 miles per hour, but the advent of military aircraft and tanks demanded
that engines be made more efficient.
Engine and Fuel Design
Harry Ricardo, son of an architect and grandson (on his mother’s
side) of an engineer, was a central figure in the necessary redesign of
internal combustion engines. As a schoolboy, he built a coal-fired
steam engine for his bicycle, and at Cambridge University he produced
a single-cylinder gasoline motorcycle, incorporating many of
his own ideas, which won a fuel-economy competition when it traveled
almost 40 miles on a quart of gasoline. He also began development
of a two-cycle engine called the “Dolphin,” which later was
produced for use in fishing boats and automobiles. In fact, in 1911,
Ricardo took his new bride on their honeymoon trip in a Dolphin-powered
car.
The impetus that led to major engine research came in 1916
when Ricardo was an engineer in his family’s firm. The British
government asked for newly designed tank engines, which had to
operate in the dirt and mud of battle, at a tilt of up to 35 degrees,
and could not give off telltale clouds of blue oil smoke. Ricardo
solved the problem with a special piston design and with air circulation
around the carburetor and within the engine to keep the oil
cool.
Design work on the tank engines turned Ricardo into a full-fledged
research engineer. In 1917, he founded his own company,
and a remarkable series of discoveries quickly followed. He investigated
the problem of detonation of the fuel-air mixture in the internal
combustion cylinder. The mixture is supposed to be ignited
by the spark plug at the top of the compression stroke, with a controlled
flame front spreading at a rate about equal to the speed of
the piston head as it moves downward in the power stroke. Some
fuels, however, detonated (ignited spontaneously throughout the
entire fuel-air mixture) as a result of the compression itself, causing
loss of fuel efficiency and damage to the engine.
With the cooperation of Robert Waley Cohen of Shell Petroleum,
Ricardo evaluated chemical mixtures of fuels and found that paraffins
(such as n-heptane, the current low-octane standard) detonated
readily, but aromatics such as toluene were nearly immune to detonation.
He established a “toluene number” rating to describe the
tendency of various fuels to detonate; this number was replaced in the 1920’s by the “octane number” devised by Thomas Midgley at
the Delco laboratories in Dayton, Ohio.
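Ricardo’s toluene number, like the octane number that replaced it, rates a fuel by finding the reference blend that knocks as badly as the sample does in a test engine. The sketch below is only a schematic illustration of that blend-matching idea: the knock_intensity function is a made-up stand-in for readings from a real test engine, and its numbers have no physical meaning.

    # Schematic illustration of rating a fuel by matching it against reference blends.
    # knock_intensity() is a hypothetical stand-in for measurements on a test engine.

    def knock_intensity(reference_fraction: float) -> float:
        """Pretend knock reading for a blend containing this fraction of the
        knock-resistant reference fuel (toluene or iso-octane). Purely illustrative."""
        return 100.0 * (1.0 - reference_fraction)

    def rating(sample_knock: float) -> float:
        """Find, by bisection, the blend whose knock matches the sample's.
        The rating is the percentage of knock-resistant reference fuel in that blend."""
        low, high = 0.0, 1.0
        for _ in range(40):
            mid = (low + high) / 2.0
            if knock_intensity(mid) > sample_knock:
                low = mid   # blend knocks worse than the sample: add more reference fuel
            else:
                high = mid
        return 100.0 * (low + high) / 2.0

    if __name__ == "__main__":
        print(f"A sample that knocks like a 75/25 reference blend rates about {rating(25.0):.0f}")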
The fuel work was carried out in an experimental engine designed
by Ricardo that allowed direct observation of the flame front
as it spread and permitted changes in compression ratio while the
engine was running. Three principles emerged from the investigation:
the fuel-air mixture should be admitted with as much turbulence
as possible, for thorough mixing and efficient combustion; the
spark plug should be centrally located to prevent distant pockets of
the mixture from detonating before the flame front reaches them;
and the mixture should be kept as cool as possible to prevent detonation.
These principles were then applied in the first truly efficient side-valve
(“L-head”) engine—that is, an engine with the valves in a
chamber at the side of the cylinder, in the engine block, rather than
overhead, in the engine head. Ricardo patented this design, and after
winning a patent dispute in court in 1932, he received royalties
or consulting fees for it from engine manufacturers all over the
world.
Impact
The side-valve engine was the workhorse design for automobile
and marine engines until after World War II. With its valves actuated
directly by a camshaft in the crankcase, it is simple, rugged,
and easy to manufacture. Overhead valves with overhead camshafts
are the standard in automobile engines today, but the side-valve
engine is still found in marine applications and in small engines
for lawn mowers, home generator systems, and the like. In its
widespread use and its decades of employment, the side-valve engine
represents a scientific and technological breakthrough in the
twentieth century.
Ricardo and his colleagues, Oliver Thornycroft and D. R. Pye,
went on to create other engine designs—notably, the sleeve-valve
aircraft engine that was the basic pattern for most of the great British
planes of World War II and early versions of the aircraft jet engine.
For his technical advances and service to the government, Ricardo
was elected a Fellow of the Royal Society in 1929, and he was
knighted in 1948.
Interchangeable parts
The invention:
A key idea in the late Industrial Revolution, the
interchangeability of parts made possible mass production of
identical products.
The people behind the invention:
Henry M. Leland (1843-1932), president of Cadillac Motor Car
Company in 1908, known as a master of precision
Frederick Bennett, the British agent for Cadillac Motor Car
Company who convinced the Royal Automobile Club to run
the standardization test at Brooklands, England
Henry Ford (1863-1947), founder of Ford Motor Company who
introduced the moving assembly line into the automobile
industry in 1913
Instant photography
The invention: Popularly known by its Polaroid tradename, a camera
capable of producing finished photographs immediately after
its film was exposed.
The people behind the invention:
Edwin Herbert Land (1909-1991), an American physicist and
chemist
Howard G. Rogers (1915- ), a senior researcher at Polaroid
and Land’s collaborator
William J. McCune (1915- ), an engineer and head of the
Polaroid team
Ansel Adams (1902-1984), an American photographer and
Land’s technical consultant
The Daughter of Invention
Because he was a chemist and physicist interested primarily in
research relating to light and vision, and to the materials that affect
them, it was inevitable that Edwin Herbert Land should be drawn
into the field of photography. Land founded the Polaroid Corporation
in 1937. During the summer of 1943, while Land and his wife
were vacationing in Santa Fe, New Mexico, with their three-year-old
daughter, Land stopped to take a picture of the child. After the
picture was taken, his daughter asked to see it. When she was told
she could not see the picture immediately, she asked how long it
would be. Within an hour after his daughter’s question, Land had
conceived a preliminary plan for designing the camera, the film,
and the physical chemistry of what would become the instant camera.
Such a device would, he hoped, produce a picture immediately
after exposure.
Within six months, Land had solved most of the essential problems
of the instant photography system. He and a small group of associates
at Polaroid secretly worked on the project. Howard G. Rogers
was Land’s collaborator in the laboratory. Land conferred the
responsibility for the engineering and mechanical phase of the project
on William J. McCune, who led the team that eventually designed the original camera and the machinery that produced both
the camera and Land’s new film.
The first Polaroid Land camera—the Model 95—produced photographs
measuring 8.25 by 10.8 centimeters; there were eight pictures
to a roll. Rather than being black-and-white, the original Polaroid
prints were sepia-toned (producing a warm, reddish-brown color).
The reasons for the sepia coloration were chemical rather than aesthetic;
as soon as Land’s researchers could devise a workable formula
for sharp black-and-white prints (about ten months after the camera
was introduced commercially), they replaced the sepia film.
A Sophisticated Chemical Reaction
Although the mechanical process involved in the first demonstration
camera was relatively simple, this process was merely
the means by which a highly sophisticated chemical reaction—
the diffusion transfer process—was produced.
In the basic diffusion transfer process, when an exposed negative
image is developed, the undeveloped portion corresponds
to the opposite aspect of the image, the positive. Almost all self-processing
instant photography materials operate according to
three phases—negative development, diffusion transfer, and
positive development. These occur simultaneously, so that positive
image formation begins instantly. With black-and-white materials,
the positive was originally completed in about sixty seconds; with
color materials (introduced later), the process took somewhat longer.
The basic phenomenon of silver in solution diffusing from one
emulsion to another was first observed in the 1850’s, but no practical
use of this action was made until 1939. The photographic use of
diffusion transfer for producing normal continuous-tone images
was investigated actively from the early 1940’s by Land and his associates.
The instant camera using this method was demonstrated
in 1947 and marketed in 1948.
The fundamentals of photographic diffusion transfer are simplest
in a black-and-white peel-apart film. The negative sheet is exposed
in the camera in the normal way. It is then pulled out of the
camera, or film pack holder, by a paper tab. Next, it passes through a
set of rollers, which press it face-to-face with a sheet of receiving material included in the film pack. Simultaneously, the rollers rupture
a pod of reagent chemicals, which the same rollers spread evenly
between the two layers. The reagent contains a strong alkali and a
silver halide solvent, both of which diffuse into the negative emulsion. There the alkali activates the developing agent, which immediately
reduces the exposed halides to a negative image. At the
same time, the solvent dissolves the unexposed halides. The silver
in the dissolved halides forms the positive image.
Impact
The Polaroid Land camera had a tremendous impact on the photographic
industry as well as on the amateur and professional photographer.
Ansel Adams, who was known for his monumental,
ultrasharp black-and-white panoramas of the American West, suggested
to Land ways in which the tonal value of Polaroid film could
be enhanced, as well as new applications for Polaroid photographic
technology.
Soon after it was introduced, Polaroid photography became part
of the American way of life and changed the face of amateur photography
forever. By the 1950’s, Americans had become accustomed
to the world of recorded visual information through films, magazines,
and newspapers; they also had become enthusiastic picture-takers
as a result of the growing trend for simpler and more convenient
cameras. By allowing these photographers not only to record
their perceptions but also to see the results almost immediately, Polaroid
brought people closer to the creative process.
Infrared photography
The invention: The first application of color to infrared photography,
which performs tasks not possible for ordinary photography.
The person behind the invention:
Sir William Herschel (1738-1822), a pioneering English
astronomer
Invisible Light
Photography developed rapidly in the nineteenth century when it
became possible to record the colors and shades of visible light on
sensitive materials. Visible light is a form of radiation that consists of
electromagnetic waves, which also make up other forms of radiation
such as X rays and radio waves. Visible light occupies the range of
wavelengths from about 400 nanometers (1 nanometer is 1 billionth
of a meter) to about 700 nanometers in the electromagnetic spectrum.
Infrared radiation occupies the range from about 700 nanometers
to about 1,350 nanometers in the electromagnetic spectrum. Infrared
rays cannot be seen by the human eye, but they behave in the
same way that rays of visible light behave; they can be reflected, diffracted
(broken), and refracted (bent).
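Using the wavelength ranges quoted above (visible light from roughly 400 to 700 nanometers, photographically useful infrared from roughly 700 to 1,350 nanometers), a band can be looked up with a few lines of code. The sketch below simply restates those figures.

    # Classify a wavelength (in nanometers) using the ranges given in the text.

    def spectral_band(wavelength_nm: float) -> str:
        if wavelength_nm < 400:
            return "shorter than visible (ultraviolet or beyond)"
        if wavelength_nm <= 700:
            return "visible light"
        if wavelength_nm <= 1350:
            return "infrared (photographically relevant range)"
        return "longer-wave infrared / heat radiation"

    if __name__ == "__main__":
        for nm in (450, 650, 850, 2000):
            print(f"{nm:5d} nm -> {spectral_band(nm)}")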
Sir William Herschel, a British astronomer, discovered infrared
rays in 1800 by measuring the heating effect that they produced.
The term “infrared,” probably first used about 1800, indicates rays
with wavelengths longer than those at the red (long-wavelength) end
of the visible spectrum but shorter than those of microwaves, which
lie still higher on the electromagnetic spectrum. Infrared film is
therefore sensitive to the
infrared radiation that the human eye cannot see or record. Dyes that
were sensitive to infrared radiation were discovered early in the
twentieth century, but they were not widely used until the 1930’s. Because
these dyes produced only black-and-white images, their usefulness
to artists and researchers was limited. After 1930, however, a
tidal wave of infrared photographic applications appeared.
The Development of Color-Sensitive Infrared Film
In the early 1940’s, military intelligence used infrared viewers for
night operations and for gathering information about the enemy. One
device that was commonly used for such purposes was called a
“snooper scope.” Aerial photography with black-and-white infrared
film was used to locate enemy hiding places and equipment. The images
that were produced, however, often lacked clear definition.
The development in 1942 of the first color-sensitive infrared film,
Ektachrome Aero Film, became possible when researchers at the
Eastman Kodak Company’s laboratories solved some complex chemical
and physical problems that had hampered the development of
color infrared film up to that point. Regular color film is sensitive to
all visible colors of the spectrum; infrared color film is sensitive to
violet, blue, and red light as well as to infrared radiation. Typical
color film has three layers of emulsion, which are sensitized to blue,
green, and red. Infrared color film, however, has its three emulsion
layers sensitized to green, red, and infrared. Infrared wavelengths
are recorded as reds of varying densities, depending on the intensity
of the infrared radiation. The more infrared radiation there is,
the darker the color of the red that is recorded.
In infrared photography, a filter is placed over the camera lens to
block the unwanted rays of visible light. The filter blocks visible and
ultraviolet rays but allows infrared radiation to pass. All three layers
of infrared film are sensitive to blue, so a yellow filter is used. All
blue radiation is absorbed by this filter.
In regular photography, color film consists of three basic layers:
the top layer is sensitive to blue light, the middle layer is sensitive to
green, and the third layer is sensitive to red. Exposing the film to
light causes a latent image to be formed in the silver halide crystals
that make up each of the three layers. In infrared photography, color
film consists of a top layer that is sensitive to infrared radiation, a
middle layer sensitive to green, and a bottom layer sensitive to red.
“Reversal processing” produces blue in the infrared-sensitive layer,
yellow in the green-sensitive layer, and magenta in the red-sensitive
layer. The blue, yellow, and magenta layers of the film produce the
“false colors” that accentuate the various levels of infrared radiation
shown as red in a color transparency, slide, or print. In conventional
color film, the dye formed in each layer has a complementary
relationship to the color of light to which the layer is sensitive. If the
relationship is not complementary, the resulting colors will be false.
This means that objects whose colors appear to be similar to the
human eye will not necessarily be recorded as similar colors on infrared
film. A red rose with healthy green leaves will appear on infrared
color film as being yellow with red leaves, because the chlorophyll
contained in the plant leaf reflects infrared radiation and
causes the green leaves to be recorded as red. Infrared radiation
from about 700 nanometers to about 900 nanometers on the electromagnetic
spectrum can be recorded by infrared color film. Above
900 nanometers, infrared radiation exists as heat patterns that must
be recorded by nonphotographic means.
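The layer-by-layer behavior described above can be collected into a small lookup sketch. The dye assignments and the rose-and-leaves example come directly from the text; the code merely restates them under the document’s simplified three-layer description.

    # Restatement of the three-layer false-color scheme described in the text:
    # each emulsion layer of infrared color film and the dye it yields on reversal processing.

    INFRARED_COLOR_FILM = {
        "infrared-sensitive layer": "blue dye",
        "green-sensitive layer": "yellow dye",
        "red-sensitive layer": "magenta dye",
    }

    # Example from the text: chlorophyll in healthy leaves reflects infrared strongly,
    # so green leaves record as red, while a red rose records as yellow.
    FALSE_COLOR_EXAMPLES = {
        "healthy green leaf (strong infrared reflection)": "red",
        "red rose petal": "yellow",
    }

    if __name__ == "__main__":
        for layer, dye in INFRARED_COLOR_FILM.items():
            print(f"{layer:28s} -> {dye}")
        for subject, rendered in FALSE_COLOR_EXAMPLES.items():
            print(f"{subject:45s} appears {rendered} on infrared color film")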
Impact
Infrared photography has proved to be valuable in many of the
sciences and the arts. It has been used to create artistic images that
are often unexpected visual explosions of everyday views. Because
infrared radiation penetrates haze easily, infrared films are often
used in mapping areas or determining vegetation types. Many
cloud-covered tropical areas would be impossible to map without
infrared photography. False-color infrared film can differentiate between
healthy and unhealthy plants, so it is widely used to study insect
and disease problems in plants. Medical research uses infrared
photography to trace blood flow, to detect and monitor tumor growth,
and to study many other physiological functions that are invisible
to the human eye.
Some forms of cancer can be detected by infrared analysis before
any other tests are able to perceive them. Infrared film is used in
criminology to photograph illegal activities in the dark and to study
evidence at crime scenes. Powder burns around a bullet hole, which
are often invisible to the eye, show clearly on infrared film. In addition,
forgeries in documents and works of art can often be seen
clearly when photographed on infrared film. Archaeologists have
used infrared film to locate ancient sites that are invisible in daylight.
Wildlife biologists also document the behavior of animals at
night with infrared equipment.
In vitro plant culture
The invention: Method for propagating plants in artificial media
that has revolutionized agriculture.
The people behind the invention:
Georges Michel Morel (1916-1973), a French physiologist
Philip Cleaver White (1913- ), an American chemist
Plant Tissue Grows “In Glass”
In the mid-1800’s, biologists began pondering whether a cell isolated
from a multicellular organism could live separately if it were
provided with the proper environment. In 1902, with this question in
mind, the German plant physiologist Gottlieb Haberlandt attempted
to culture (grow) isolated plant cells under sterile conditions on an artificial
growth medium. Although his cultured cells never underwent
cell division under these “in vitro” (in glass) conditions, Haberlandt
is credited with originating the concept of cell culture.
Subsequently, scientists attempted to culture plant tissues and
organs rather than individual cells and tried to determine the medium
components necessary for the growth of plant tissue in vitro.
In 1934, Philip White grew the first organ culture, using tomato
roots. The discovery of plant hormones, which are compounds that
regulate growth and development, was crucial to the successful culture
of plant tissues; in 1939, Roger Gautheret, P. Nobécourt, and
White independently reported the successful culture of plant callus
tissue. “Callus” is an irregular mass of dividing cells that often results
from the wounding of plant tissue. Plant scientists were fascinated
by the perpetual growth of such tissue in culture and spent
years establishing optimal growth conditions and exploring the nutritional
and hormonal requirements of plant tissue.
Plants by the Millions
A lull in botanical research occurred during World War II, but
immediately afterward there was a resurgence of interest in applying
tissue culture techniques to plant research. Georges Morel, a plant physiologist at the National Institute for Agronomic Research
in France, was one of many scientists during this time who
had become interested in the formation of tumors in plants as well
as in studying various pathogens such as fungi and viruses that
cause plant disease.
To further these studies, Morel adapted existing techniques in order
to grow tissue from a wider variety of plant types in culture, and
he continued to try to identify factors that affected the normal
growth and development of plants. Morel was successful in culturing
tissue from ferns and was the first to culture monocot plants.
Monocots have certain features that distinguish them from the other
classes of seed-bearing plants, especially with respect to seed structure.
More important, the monocots include the economically important
species of grasses (the major plants of range and pasture)
and cereals.
For these cultures, Morel utilized a small piece of the growing tip
of a plant shoot (the shoot apex) as the starting tissue material. This
tissue was placed in a glass tube, supplied with a medium containing
specific nutrients, vitamins, and plant hormones, and allowed
to grow in the light. Under these conditions, the apex tissue grew
roots and buds and eventually developed into a complete plant.
Morel was able to generate whole plants from pieces of the shoot
apex that were only 100 to 250 micrometers in length.
Morel also investigated the growth of parasites such as fungi and
viruses in dual culture with host-plant tissue. Using results from
these studies and culture techniques that he had mastered, Morel
and his colleague Claude Martin regenerated virus-free plants from
tissue that had been taken from virally infected plants. Tissues from
certain tropical species, dahlias, and potato plants were used for the
original experiments, but after Morel adapted the methods for the
generation of virus-free orchids, plants that had previously been
difficult to propagate by any means, the true significance of his
work was recognized.
Morel was the first to recognize the potential of the in vitro culture
methods for the mass propagation of plants. He estimated that several
million plants could be obtained in one year from a single small
piece of shoot-apex tissue. Plants generated in this manner were
clonal (genetically identical organisms prepared from a single plant).
With other methods of plant propagation, there is often great variation
in the traits of the plants produced, but as a result of Morel’s
ideas, breeders could select for some desirable trait in a particular
plant and then produce multiple clonal plants, all of which expressed
the desired trait. The methodology also allowed for the production of
virus-free plant material, which minimized both the spread of potential
pathogens during shipping and losses caused by disease.
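Morel’s estimate of several million plants a year from one shoot tip is easy to reproduce arithmetically if each culture can be divided into a handful of new explants every few weeks. The multiplication factor and cycle length in the sketch below are illustrative assumptions, not figures from Morel’s own work.

    # Illustrative compounding behind mass clonal propagation.
    # The multiplication rate and cycle length are assumptions, not Morel's figures.

    EXPLANTS_PER_CYCLE = 4      # assumed: each culture is subdivided into 4 new explants
    CYCLE_WEEKS = 4             # assumed: one subculture cycle per month
    WEEKS_PER_YEAR = 52

    def plants_after_one_year(start: int = 1) -> int:
        cycles = WEEKS_PER_YEAR // CYCLE_WEEKS
        return start * EXPLANTS_PER_CYCLE ** cycles

    if __name__ == "__main__":
        # 4 ** 13 = 67,108,864 -- comfortably "several million" from a single shoot tip.
        print(f"{plants_after_one_year():,} plantlets from one explant in a year")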
Consequences
Variations on Morel’s methods are used to propagate plants used
for human food consumption; plants that are sources of fiber, oil,
and livestock feed; forest trees; and plants used in landscaping and
in the floral industry. In vitro stocks are preserved under deep-freeze
conditions, and disease-free plants can be proliferated quickly
at any time of the year after shipping or storage.
The in vitro multiplication of plants has been especially useful
for species such as coconut and certain palms that cannot be propagated
by other methods, such as by sowing seeds or grafting, and
has also become important in the preservation and propagation of rare plant species that might otherwise have become extinct. Many
of these plants are sources of pharmaceuticals, oils, fragrances, and
other valuable products.
The capability of regenerating plants from tissue culture has also
been crucial in basic scientific research. Plant cells grown in culture
can be studied more easily than can intact plants, and scientists have
gained an in-depth understanding of plant physiology and biochemistry
by using this method. This information and the methods
of Morel and others have made possible the genetic engineering and
propagation of crop plants that are resistant to disease or disastrous
environmental conditions such as drought and freezing. In vitro
techniques have truly revolutionized agriculture.
IBM Model 1401 Computer
The invention: A relatively small, simple, and inexpensive computer
that is often credited with having launched the personal
computer age.
The people behind the invention:
Howard H. Aiken (1900-1973), an American mathematician
Charles Babbage (1792-1871), an English mathematician and
inventor
Herman Hollerith (1860-1929), an American inventor
Computers: From the Beginning
Computers evolved into their modern form over a period of
thousands of years as a result of humanity’s efforts to simplify the
process of counting. Two counting devices that are considered to be
very simple, early computers are the abacus and the slide rule.
These calculating devices are representative of digital and analog
computers, respectively, because an abacus counts numbers of things,
while the slide rule calculates length measurements.
The first modern computer, which was planned by Charles Babbage
in 1833, was never built. It was intended to perform complex
calculations with a data processing/memory unit that was controlled
by punched cards. In 1944, Harvard University’s Howard H.
Aiken and the International Business Machines (IBM) Corporation
built such a computer—the huge, punched-tape-controlled Automatic
Sequence Controlled Calculator, or Mark I ASCC, which
could perform complex mathematical operations in seconds. During
the next fifteen years, computer advances produced digital computers
that used binary arithmetic for calculation, incorporated
simplified components that decreased the sizes of computers, had
much faster calculating speeds, and were transistorized.
Although practical computers had become much faster than
they had been only a few years earlier, they were still huge and extremely
expensive. In 1959, however, IBM introduced the Model
1401 computer. Smaller, simpler, and much cheaper than the multimillion-dollar computers that were available, the IBM Model 1401
computer was also relatively easy to program and use. Its low cost,
simplicity of operation, and very wide use have led many experts
to view the IBM Model 1401 computer as beginning the age of the
personal computer.
Computer Operation and IBM’s Model 1401
Modern computers are essentially very fast calculating machines
that are capable of sorting, comparing, analyzing, and outputting information,
as well as storing it for future use. Many sources credit
Aiken’s Mark I ASCC as being the first modern computer to be built.
This huge, five-ton machine used thousands of relays to perform complex
mathematical calculations in seconds. Soon after its introduction,
other companies produced computers that were faster and more versatile
than the Mark I. The computer development race was on.
All these early computers utilized the decimal system for calculations
until it was found that binary arithmetic, whose numbers are
combinations of the binary digits 1 and 0, was much more suitable
for the purpose. The advantage of the binary system is that the electronic
switches that make up a computer (tubes, transistors, or
chips) can be either on or off; in the binary system, the on state can
be represented by the digit 1, the off state by the digit 0. Strung together
correctly, binary numbers, or digits, can be inputted rapidly
and used for high-speed computations. In fact, the computer term
bit is a contraction of the phrase “binary digit.”
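The on/off correspondence described above is easy to see by writing a few decimal numbers as strings of binary digits; the short sketch below simply performs that conversion in Python.

    # Decimal numbers written as strings of binary digits (bits): 1 = "on", 0 = "off".

    def to_bits(n: int, width: int = 8) -> str:
        """Return n as a fixed-width string of binary digits."""
        return format(n, f"0{width}b")

    if __name__ == "__main__":
        for n in (0, 1, 2, 5, 13, 255):
            print(f"{n:3d} decimal = {to_bits(n)} binary")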
A computer consists of input and output devices, a storage device
(memory), arithmetic and logic units, and a control unit. In
most cases, a central processing unit (CPU) combines the logic,
arithmetic, memory, and control aspects. Instructions are loaded
into the memory via an input device, processed, and stored. Then,
the CPU issues commands to the other parts of the system to carry
out computations or other functions and output the data as needed.
Most output is printed as hard copy or displayed on cathode-ray
tube monitors, or screens.
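The division of labor among memory, the arithmetic unit, and the control unit can be caricatured in a few lines of code. The three-instruction machine below is a toy invented for this sketch; it is not modeled on the Model 1401’s actual instruction set.

    # Toy stored-program machine: instructions are loaded into memory, a control loop
    # fetches and executes them, and results go to an output device (here, print).
    # The instruction set is invented for illustration and is unrelated to the IBM 1401.

    def run(program):
        memory = list(program)      # instructions and data share one memory
        accumulator = 0             # a one-register "arithmetic unit"
        pc = 0                      # program counter, part of the control unit
        while pc < len(memory):
            op, arg = memory[pc]    # fetch
            pc += 1
            if op == "LOAD":        # decode and execute
                accumulator = arg
            elif op == "ADD":
                accumulator += arg
            elif op == "PRINT":
                print(accumulator)  # output device
            else:
                raise ValueError(f"unknown instruction: {op}")

    if __name__ == "__main__":
        run([("LOAD", 40), ("ADD", 2), ("PRINT", None)])   # prints 42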
The early modern computers—such as the Mark I ASCC—were
huge because their information circuits were large relays or tubes.
Computers became smaller and smaller as the tubes were replaced first with transistors, then with simple integrated circuits, and then
with silicon chips. Each technological changeover also produced
more powerful, more cost-effective computers.
In the 1950’s, with reliable transistors available, IBM began the
development of two types of computers that were completed by
about 1959. The larger version was the Stretch computer, which was
advertised as the most powerful computer of its day. Customized
for each individual purchaser (for example, the Atomic Energy
Commission), a Stretch computer cost $10 million or more. Some innovations
in Stretch computers included semiconductor circuits,
new switching systems that quickly converted various kinds of data
into one language that was understood by the CPU, rapid data readers,
and devices that seemed to anticipate future operations.
Consequences
The IBM Model 1401 was the first computer sold in very large
numbers. It led IBM and other companies to seek to develop less expensive,
more versatile, smaller computers that would be sold to
small businesses and to individuals. Six years after the development
of the Model 1401, other IBM models—and those made by
other companies—became available that were more compact and
had larger memories. The search for compactness and versatility
continued. A major development was the invention of integrated
circuits by Jack S. Kilby of Texas Instruments; these integrated circuits
became available by the mid-1960’s. They were followed by
even smaller “microprocessors” (computer chips) that became available
in the 1970’s. Computers continued to become smaller and more
powerful.
Input and storage devices also decreased rapidly in size. At first,
the punched cards invented by Herman Hollerith, founder of the
Tabulating Machine Company (which later became IBM), were read
by bulky readers. In time, less bulky magnetic tapes and more compact
readers were developed, after which magnetic disks and compact
disc drives were introduced.
Many other advances have been made. Modern computers can
talk, create art and graphics, compose music, play games, and operate
robots. Further advancement is expected as societal needs change. Many experts believe that it was the sale of large numbers
of IBM Model 1401 computers that began the trend.