BINAC computer
The invention: The world’s first electronic general-purpose digital
computer.
The people behind the invention:
John Presper Eckert (1919-1995), an American electrical engineer
John W. Mauchly (1907-1980), an American physicist
John von Neumann (1903-1957), a Hungarian American
mathematician
Alan Mathison Turing (1912-1954), an English mathematician
Computer Evolution
In the 1820’s, there was a need for error-free mathematical and
astronomical tables for use in navigation, unreliable versions of
which were being produced by human “computers.” The problem
moved English mathematician and inventor Charles Babbage to design
and partially construct some of the earliest prototypes of modern
computers, with substantial but inadequate funding from the
British government. In the 1880’s, the search by the U.S. Bureau of
the Census for a more efficient method of compiling the 1890 census
led American inventor Herman Hollerith to devise a punched-card
calculator, a machine that reduced by several years the time required
to process the data.
The emergence of modern electronic computers began during
World War II (1939-1945), when there was an urgent need in the
American military for reliable and quickly produced mathematical
tables that could be used to aim various types of artillery. The calculation
of very complex tables had progressed somewhat since
Babbage’s day, and the human computers were being assisted by
mechanical calculators. Still, the growing demand for increased accuracy
and efficiency was pushing the limits of these machines.
Finally, in 1946, following three years of intense work at the University
of Pennsylvania’s Moore School of Engineering, John Presper
Eckert and John W. Mauchly presented their solution to the problems
in the form of the Electronic Numerical Integrator and Calculator (ENIAC), the world’s first electronic general-purpose digital
computer.
The ENIAC, built under a contract with the Army’s Ballistic Research
Laboratory, became a great success for Eckert and Mauchly,
but even before it was completed, they were setting their sights on
loftier targets. The primary drawback of the ENIAC was the great
difficulty involved in programming it. Whenever the operators
needed to instruct the machine to shift from one type of calculation
to another, they had to reset a vast array of dials and switches, unplug
and replug numerous cables, and make various other adjustments
to the multiple pieces of hardware involved. Such a mode of
operation was deemed acceptable for the ENIAC because, in computing
firing tables, it would need reprogramming only occasionally.
Yet if instructions could be stored in a machine’s memory, along
with the data, such a machine would be able to handle a wide range
of calculations with ease and efficiency.
The Turing Concept
The idea of a stored-program computer had first appeared in a
paper published by English mathematician Alan Mathison Turing
in 1937. In this paper, Turing described a hypothetical machine of
quite simple design that could be used to solve a wide range of logical
and mathematical problems. One significant aspect of this imaginary
Turing machine was that the tape that would run through it
would contain both information to be processed and instructions on
how to process it. The tape would thus be a type of memory device,
storing both the data and the program as sets of symbols that the
machine could “read” and understand. Turing never attempted to
construct this machine, and it was not until 1946 that he developed a
design for an electronic stored-program computer, a prototype of
which was built in 1950.
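The tape-as-memory idea can be sketched in a few lines of Python (a toy illustration, not Turing’s own formulation): a transition table plays the role of the stored program, while the same tape holds the data being transformed. This particular machine, an invented example, simply flips every bit on its tape.

```python
def run_turing_machine(tape):
    """Flip each 0/1 on the tape, moving right, until the tape ends."""
    tape = list(tape)
    state, head = "flip", 0
    # Transition table: (state, symbol) -> (new symbol, head move, new state).
    # This table is the machine's "program," stored as data.
    table = {
        ("flip", "0"): ("1", 1, "flip"),
        ("flip", "1"): ("0", 1, "flip"),
    }
    while head < len(tape) and (state, tape[head]) in table:
        new_symbol, move, state = table[(state, tape[head])]
        tape[head] = new_symbol
        head += move
    return "".join(tape)

print(run_turing_machine("0110"))  # -> 1001
```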
In the meantime, John von Neumann, a Hungarian American
mathematician acquainted with Turing’s ideas, joined Eckert and
Mauchly in 1944 and contributed to the design of ENIAC’s successor,
the Electronic Discrete Variable Automatic Computer (EDVAC), another
project financed by the Army. The EDVAC was the first computer
designed to incorporate the concept of the stored program.
In March of 1946, Eckert and Mauchly, frustrated by a controversy
over patent rights for the ENIAC, resigned from the
Moore School. Several months later, they formed the Philadelphia-based
Electronic Control Company on the strength of a contract
from the National Bureau of Standards and the Census Bureau to
build a much grander computer, the Universal Automatic Computer
(UNIVAC). They thus abandoned the EDVAC project, which
was finally completed by the Moore School in 1952, but they incorporated
the main features of the EDVAC into the design of the
UNIVAC.
Building the UNIVAC, however, proved to be much more involved
and expensive than anticipated, and the funds provided by
the original contract were inadequate. Eckert and Mauchly, therefore,
took on several other smaller projects in an effort to raise
funds. On October 9, 1947, they signed a contract with the Northrop
Corporation of Hawthorne, California, to produce a relatively small
computer to be used in the guidance system of a top-secret missile
called the Snark, which Northrop was building for the Air Force.
This computer, the Binary Automatic Computer (BINAC), turned
out to be Eckert and Mauchly’s first commercial sale and the first
stored-program computer completed in the United States.
The BINAC was designed to be at least a preliminary version of a
compact, airborne computer. It had two main processing units.
These contained a total of fourteen hundred vacuum tubes, a drastic
reduction from the eighteen thousand used in the ENIAC. There
were also two memory units, as well as two power supplies, an input
converter unit, and an input console, which used either a typewriter
keyboard or an encoded magnetic tape (the first time such
tape was used for computer input). Because of its dual processing,
memory, and power units, the BINAC was actually two computers,
each of which would continually check its results against those of
the other in an effort to identify errors.
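The dual-unit checking scheme can be sketched as follows (a loose Python illustration of the idea, not BINAC’s actual circuitry): the same computation runs on two nominally independent units, and a result is accepted only when both agree.

```python
def unit_a(x):
    return x * x  # first processing unit

def unit_b(x):
    return x * x  # second, independent unit performing the same work

def checked_square(x):
    """Accept a result only when both units produce the same answer."""
    a, b = unit_a(x), unit_b(x)
    if a != b:
        raise RuntimeError("units disagree; result rejected")
    return a

print(checked_square(12))  # -> 144
```

A transient fault in either unit shows up as a disagreement rather than as a silently wrong answer, which is the payoff of duplicating the hardware.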
The BINAC became operational in August, 1949. Public demonstrations
of the computer were held in Philadelphia from August 18
through August 20.
Impact
The design embodied in the BINAC is the real source of its significance.
It demonstrated successfully the benefits of the dual processor
design for minimizing errors, a feature adopted in many subsequent
computers. It showed the suitability of magnetic tape as an
input-output medium. Its most important new feature was its ability
to store programs in its relatively spacious memory, the principle
that Eckert, Mauchly, and von Neumann had originally designed
into the EDVAC. In this respect, the BINAC was a direct descendant
of the EDVAC.
In addition, the stored-program principle gave electronic computers
new powers, quickness, and automatic control that, as they
have continued to grow, have contributed immensely to the aura of
intelligence often associated with their operation.
The BINAC successfully demonstrated some of these impressive
new powers in August of 1949 to eager observers from a number of
major American corporations. It helped to convince many influential
leaders of the commercial segment of society of the promise of
electronic computers. In doing so, the BINAC helped to ensure the
further evolution of computers.
See also Apple II computer; Colossus computer; ENIAC computer;
IBM Model 1401 computer; Personal computer; Supercomputer;
UNIVAC computer.
Bathysphere
The invention: The first successful chamber for manned deep-sea
diving missions.
The people behind the invention:
William Beebe (1877-1962), an American naturalist and curator
of ornithology
Otis Barton (1899- ), an American engineer
John Tee-Van (1897-1967), an American general associate with
the New York Zoological Society
Gloria Hollister Anable (1903?-1988), an American research
associate with the New York Zoological Society
Inner Space
Until the 1930’s, the vast depths of the oceans had remained
largely unexplored, although people did know something of the
ocean’s depths. Soundings and nettings of the ocean bottom had
been made many times by a number of expeditions since the 1870’s.
Diving helmets had allowed humans to descend more than 91 meters
below the surface, and the submarine allowed them to reach a
depth of nearly 120 meters. There was no firsthand knowledge,
however, of what it was like in the deepest reaches of the ocean: inner
space.
The person who gave the world the first account of life at great
depths was William Beebe. When he announced in 1926 that he was
attempting to build a craft to explore the ocean, he was already a
well-known naturalist. Although his only degrees had been honorary
doctorates, he was graduated as a special student in the Department
of Zoology of Columbia University in 1898. He began his lifelong
association with the New York Zoological Society in 1899.
It was during a trip to the Galápagos Islands off the west coast of
South America that Beebe turned his attention to oceanography. He
became the first scientist to use a diving helmet in fieldwork, swimming
in the shallow waters. He continued this shallow-water work
at the new station he established in 1928, with the permission of English authorities, on the tiny island of Nonesuch in the Bermudas.
Beebe realized, however, that he had reached the limits of the current
technology and that to study the animal life of the ocean depths
would require a new approach.
A New Approach
While he was considering various cylindrical designs for a new
deep-sea exploratory craft, Beebe was introduced to Otis Barton.
Barton, a young New Englander who had been trained as an engineer
at Harvard University, had turned to the problems of ocean
diving while doing postgraduate work at Columbia University. In
December, 1928, Barton brought his blueprints to Beebe. Beebe immediately
saw that Barton’s design was what he was looking for,
and the two went ahead with the construction of Barton’s craft.
The “bathysphere,” as Beebe named the device, weighed 2,268
kilograms and had a diameter of 1.45 meters and steel walls 3.8 centimeters
thick. The door, weighing 180 kilograms, would be fastened
over a manhole with ten bolts. Four windows, made of fused
quartz, were ordered from the General Electric Company at a cost of
$500 each. A 250-watt water spotlight lent by the Westinghouse
Company provided the exterior illumination, and a telephone lent
by the Bell Telephone Laboratory provided a means of communicating
with the surface. The breathing apparatus consisted of two oxygen
tanks that allowed 2 liters of oxygen per minute to escape into
the sphere. During the dive, the carbon dioxide and moisture were
removed, respectively, by trays containing soda lime and calcium
chloride. A winch would lower the bathysphere on a steel cable.
In early July, 1930, after several test dives, the first manned dive
commenced. Beebe and Barton descended to a depth of 244 meters.
A short circuit in one of the switches showered them with sparks
momentarily, but the descent was largely a success. Beebe and
Barton had descended farther than any human.
Two more days of diving yielded a final dive record of 435 meters
below sea level. Beebe and the other members of his staff (ichthyologist
John Tee-Van and zoologist Gloria Hollister Anable) saw many
species of fish and other marine life that previously had been seen
only after being caught in nets. These first dives proved that an undersea exploratory craft had potential value, at least for deep water.
After 1932, the bathysphere went on display at the Century of Progress
Exposition in Chicago.
In late 1933, the National Geographic Society offered to sponsor
another series of dives. Although a new record was not a stipulation,
Beebe was determined to supply one. The bathysphere was
completely refitted before the new dives.
An unmanned test dive to 920 meters was made on August 7,
1934, once again off Nonesuch Island. Minor adjustments were
made, and on the morning of August 11, the first dive commenced,
attaining a depth of 765 meters and recording a number of new scientific
observations. Several days later, on August 15, the weather
was again right for the dive.
This dive also paid rich dividends in the number of species of
deep-sea life observed. Finally, with only a few turns of cable left on
the winch spool, the bathysphere reached a record depth of 923 meters—
almost a kilometer below the ocean’s surface.
Impact
Barton continued to work on the bathysphere design for some
years. It was not until 1948, however, that his new design, the
benthoscope, was finally constructed. It was similar in basic design
to the bathysphere, though the walls were increased to withstand
greater pressures. Other improvements were made, but the essential
strengths and weaknesses remained. On August 16, 1949, Barton,
diving alone, broke the record he and Beebe had set earlier,
reaching a depth of 1,372 meters off the coast of Southern California.
The bathysphere effectively marked the end of the tethered exploration
of the deep, but it pointed the way to other possibilities.
The first advance in this area came in 1943, when undersea explorer
Jacques-Yves Cousteau and engineer Émile Gagnan developed the
Aqualung underwater breathing apparatus, which made possible
unfettered and largely unencumbered exploration down to about
60 meters. This was by no means deep diving, but it was clearly a
step along the lines that Beebe had envisioned for underwater research.
A further step came in the development of the bathyscaphe by
Auguste Piccard, the renowned Swiss physicist, who, in the 1930’s,
had conquered the stratosphere in high-altitude balloons. The bathyscaphe
was a balloon that operated in reverse. A spherical steel passenger
cabin was attached beneath a large float filled with gasoline
for buoyancy. Several tons of iron pellets held by electromagnets
acted as ballast. The bathyscaphe would sink slowly to the bottom
of the ocean, and when its passengers wished to return, the ballast
would be dumped. The craft would then slowly rise to the surface.
On September 30, 1953, Piccard touched bottom off the coast of Italy,
some 3,000 meters below sea level.
Bathyscaphe
The invention: A submersible vessel capable of exploring the
deepest trenches of the world’s oceans.
The people behind the invention:
William Beebe (1877-1962), an American biologist and explorer
Auguste Piccard (1884-1962), a Swiss-born Belgian physicist
Jacques Piccard (1922- ), a Swiss ocean engineer
Early Exploration of the Deep Sea
The first human penetration of the deep ocean was made by William
Beebe in 1934, when he descended 923 meters into the Atlantic
Ocean near Bermuda. His diving chamber was a 1.5-meter steel ball
that he named Bathysphere, from the Greek word bathys (deep) and
the word sphere, for its shape. He found that a sphere resists pressure
in all directions equally and is not easily crushed if it is constructed
of thick steel. The bathysphere weighed 2.5 metric tons. It
had no buoyancy and was lowered from a surface ship on a single
2.2-centimeter cable; a broken cable would have meant certain
death for the bathysphere’s passengers.
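A quick hydrostatic estimate shows the pressure the sphere had to resist at Beebe’s record depth. The seawater density and gravitational constant below are standard approximate values, not figures from the text.

```python
RHO = 1025.0   # kg/m^3, typical seawater density (assumed)
G = 9.81       # m/s^2, gravitational acceleration
depth = 923.0  # m, Beebe's 1934 record depth

# Hydrostatic pressure: p = rho * g * h
pressure_pa = RHO * G * depth
print(f"{pressure_pa / 101325:.0f} atmospheres")  # roughly 92
```

At over ninety times atmospheric pressure, the need for thick steel and a spherical shape becomes obvious.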
Numerous deep dives by Beebe and his engineer colleague, Otis
Barton, were the first uses of submersibles for science. Through two
small viewing ports, they were able to observe and photograph
many deep-sea creatures in their natural habitats for the first time.
They also made valuable observations on the behavior of light as
the submersible descended, noting that the green surface water became
pale blue at 100 meters, dark blue at 200 meters, and nearly
black at 300 meters. A technique called “contour diving” was particularly
dangerous. In this practice, the bathysphere was slowly
towed close to the seafloor. On one such dive, the bathysphere narrowly
missed crashing into a coral crag, but the explorers learned a
great deal about the submarine geology of Bermuda and the biology
of a coral-reef community. Beebe wrote several popular and scientific
books about his adventures that did much to arouse interest in
the ocean.
Testing the Bathyscaphe
The next important phase in the exploration of the deep ocean
was led by the Swiss physicist Auguste Piccard. In 1948, he launched
a new type of deep-sea research craft that did not require a cable and
that could return to the surface by means of its own buoyancy. He
called the craft a bathyscaphe, which is Greek for “deep boat.”
Piccard began work on the bathyscaphe in 1937, supported by a
grant from the Belgian National Scientific Research Fund. The German
occupation of Belgium early in World War II cut the project
short, but Piccard continued his work after the war. The finished
bathyscaphe was named FNRS 2, for the initials of the Belgian fund
that had sponsored the project. The vessel was ready for testing in
the fall of 1948.
The first bathyscaphe, as well as later versions, consisted of
two basic components: first, a heavy steel cabin to accommodate
observers, which looked somewhat like an enlarged version of
Beebe’s bathysphere; and second, a light container called a float,
filled with gasoline, that provided lifting power because it was
lighter than water. Enough iron shot was stored in silos to cause
the vessel to descend. When this ballast was released, the gasoline
in the float gave the bathyscaphe sufficient buoyancy to return to
the surface.
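The ballast-and-float principle can be put in rough numbers (all values here are illustrative assumptions, not the craft’s actual specifications): the lift comes from the density difference between seawater and gasoline, and dropping the iron shot tips the balance from sinking to rising.

```python
SEAWATER = 1025.0  # kg/m^3, typical seawater density (assumed)
GASOLINE = 740.0   # kg/m^3, approximate gasoline density (assumed)

def net_lift_kg(float_volume_m3, ballast_kg):
    """Net lifting mass in kg: positive means the craft rises."""
    # Buoyant lift of the float relative to the gasoline it carries.
    lift = (SEAWATER - GASOLINE) * float_volume_m3
    return lift - ballast_kg

print(net_lift_kg(80.0, 25000.0))  # loaded with iron shot: negative, sinks
print(net_lift_kg(80.0, 0.0))      # ballast dumped: positive, rises
```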
Piccard’s bathyscaphe had a number of ingenious devices. Jacques-
Yves Cousteau, inventor of the Aqualung six years earlier, contributed
a mechanical claw that was used to take samples of rocks, sediment,
and bottom creatures. A seven-barreled harpoon gun, operated
by water pressure, was attached to the sphere to capture
specimens of giant squids or other large marine animals for study.
The harpoons had electrical-shock heads to stun the “sea monsters,”
and if that did not work, the harpoon could give a lethal injection of
strychnine poison. Inside the sphere were various instruments for
measuring the deep-sea environment, including a Geiger counter
for monitoring cosmic rays. The air-purification system could support
two people for up to twenty-four hours. The bathyscaphe had a
radar mast to broadcast its location as soon as it surfaced. This was
essential because there was no way for the crew to open the sphere
from the inside.
The FNRS 2 was first tested off the Cape Verde Islands with the
assistance of the French navy. Although Piccard descended to only
25 meters, the dive demonstrated the potential of the bathyscaphe.
On the second dive, the vessel was severely damaged by waves, and
further tests were suspended. A redesigned and rebuilt bathyscaphe,
renamed FNRS 3 and operated by the French navy, descended to a
depth of 4,049 meters off Dakar, Senegal, on the west coast of Africa
in early 1954.
In August, 1953, Auguste Piccard, with his son Jacques, launched a greatly improved bathyscaphe, the Trieste, which they named for the
Italian city in which it was built. In September of the same year, the
Trieste successfully dived to 3,150 meters in the Mediterranean Sea. The
Piccards glimpsed, for the first time, animals living on the seafloor at
that depth. In 1958, the U.S. Navy purchased the Trieste and transported
it to California, where it was equipped with a new cabin designed
to enable the vessel to reach the seabed of the great oceanic
trenches. Several successful descents were made in the Pacific by
Jacques Piccard, and on January 23, 1960, Piccard, accompanied by
Lieutenant Donald Walsh of the U.S. Navy, dived a record 10,916 meters
to the bottom of the Mariana Trench near the island of Guam.
Impact
The oceans have always raised formidable barriers to humanity’s
curiosity and understanding. In 1960, two events demonstrated the
ability of humans to travel underwater for prolonged periods and to
observe the extreme depths of the ocean. The nuclear submarine
Triton circumnavigated the world while submerged, and Jacques
Piccard and Lieutenant Donald Walsh descended nearly 11 kilometers
to the bottom of the ocean’s greatest depression aboard the
Trieste. After sinking for four hours and forty-eight minutes, the
Trieste landed in the Challenger Deep of the Mariana Trench, the
deepest known spot on the ocean floor. The explorers remained on
the bottom for only twenty minutes, but they answered one of the
biggest questions about the sea: Can animals live in the immense
cold and pressure of the deep trenches? Observations of red shrimp
and flatfishes proved that the answer was yes.
The Trieste played another important role in undersea exploration
when, in 1963, it located and photographed the wreckage of the
nuclear submarine Thresher. The Thresher had mysteriously disappeared
on a test dive off the New England coast, and the Navy had
been unable to find a trace of the lost submarine using surface vessels
equipped with sonar and remote-control cameras on cables.
Only the Trieste could actually search the bottom. On its third dive,
the bathyscaphe found a piece of the wreckage, and it eventually
photographed a 3,000-meter trail of debris that led to Thresher’s hull,
at a depth of 2.5 kilometers.
These exploits showed clearly that scientific submersibles could
be used anywhere in the ocean. Piccard’s work thus opened the last
geographic frontier on Earth.
BASIC programming language
The invention: An interactive computer system and simple programming
language that made it easier for nontechnical people
to use computers.
The people behind the invention:
John G. Kemeny (1926-1992), the chairman of Dartmouth’s
mathematics department
Thomas E. Kurtz (1928- ), the director of the Kiewit
Computation Center at Dartmouth
Bill Gates (1955- ), a cofounder and later chairman of the
board and chief executive officer of the Microsoft
Corporation
The Evolution of Programming
The first digital computers were developed during World War II
(1939-1945) to speed the complex calculations required for ballistics,
cryptography, and other military applications. Computer technology
developed rapidly, and the 1950’s and 1960’s saw computer systems
installed throughout the world. These systems were very large
and expensive, requiring many highly trained people for their operation.
The calculations performed by the first computers were determined
solely by their electrical circuits. In the 1940’s, the American
mathematician John von Neumann and others pioneered the idea of
computers storing their instructions in a program, so that changes
in calculations could be made without rewiring their circuits. The
programs were written in machine language, long lists of zeros and
ones corresponding to on and off conditions of circuits. During the
1950’s, “assemblers” were introduced that used short names for
common sequences of instructions and were, in turn, transformed
into the zeros and ones intelligible to the computer. The late 1950’s
saw the introduction of high-level languages, notably Formula Translation
(FORTRAN), Common Business Oriented Language (COBOL),
and Algorithmic Language (ALGOL), which used English words to communicate instructions to the computer. Unfortunately, these
high-level languages were complicated; they required some knowledge
of the computer equipment and were designed to be used by
scientists, engineers, and other technical experts.
Developing BASIC
John G. Kemeny was chairman of the department of mathematics
at Dartmouth College in Hanover, New Hampshire. In 1962,
Thomas E. Kurtz, Dartmouth’s computing director, approached
Kemeny with the idea of implementing a computer system at Dartmouth
College. Both men were dedicated to the idea that liberal arts
students should be able to make use of computers. Although the English
commands of FORTRAN and ALGOL were a tremendous improvement
over the cryptic instructions of assembly language, they
were both too complicated for beginners. Kemeny convinced Kurtz
that they needed a completely new language, simple enough for beginners
to learn quickly, yet flexible enough for many different
kinds of applications.
The language they developed was known as the “Beginner’s All-purpose
Symbolic Instruction Code,” or BASIC. The original language
consisted of fourteen different statements. Each line of a
BASIC program was preceded by a number. Line numbers were referenced
by control flow statements, such as, “IF X = 9 THEN GOTO
200.” Line numbers were also used as an editing reference. If line 30
of a program contained an error, the programmer could make the
necessary correction merely by retyping line 30.
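The line-number mechanics described above can be sketched in Python (a toy interpreter for a hypothetical four-line program, not Dartmouth’s implementation): statements are keyed by line number, and GOTO simply changes which line executes next.

```python
# A made-up BASIC program, keyed by line number.
program = {
    10: ("LET", "X", 9),
    20: ("IF_GOTO", "X", 9, 200),  # IF X = 9 THEN GOTO 200
    30: ("LET", "X", 0),           # skipped by the jump above
    200: ("PRINT", "X"),
}

def run(program):
    env, output = {}, []
    lines = sorted(program)  # execution order follows line numbers
    i = 0
    while i < len(lines):
        stmt = program[lines[i]]
        if stmt[0] == "LET":
            env[stmt[1]] = stmt[2]
        elif stmt[0] == "IF_GOTO" and env.get(stmt[1]) == stmt[2]:
            i = lines.index(stmt[3])  # GOTO: jump to the target line
            continue
        elif stmt[0] == "PRINT":
            output.append(env[stmt[1]])
        i += 1
    return output

print(run(program))  # -> [9]; line 30 never executes
```

Editing by line number falls out of the same structure: replacing the entry for key 30 is exactly “retyping line 30.”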
Programming in BASIC was first taught at Dartmouth in the fall
of 1964. Students were ready to begin writing programs after two
hours of classroom lectures. By June of 1968, more than 80 percent of
the undergraduates at Dartmouth could write a BASIC program.
Most of them were not science majors and used their programs in
conjunction with other nontechnical courses.
Kemeny and Kurtz, and later others under their supervision,
wrote more powerful versions of BASIC that included support for
graphics on video terminals and structured programming. The creators
of BASIC, however, always tried to maintain their original design
goal of keeping BASIC simple enough for beginners.
Consequences
Kemeny and Kurtz encouraged the widespread adoption of BASIC
by allowing other institutions to use their computer system and
by placing BASIC in the public domain. Over time, they shaped BASIC
into a powerful language with numerous features added in response
to the needs of its users. What Kemeny and Kurtz had not
foreseen was the advent of the microprocessor chip in the early
1970’s, which revolutionized computer technology. By 1975, microcomputer
kits were being sold to hobbyists for well under a thousand
dollars. The earliest of these was the Altair.
That same year, prelaw student William H. Gates (1955- ) was
persuaded by a friend, Paul Allen, to drop out of Harvard University
and help create a version of BASIC that would run on the Altair.
Gates and Allen formed a company, Microsoft Corporation, to sell
their BASIC interpreter, which was designed to fit into the tiny
memory of the Altair. It was about as simple as the original Dartmouth
BASIC but had to depend heavily on the computer hardware.
Most computers purchased for home use still include a version
of Microsoft Corporation’s BASIC.
See also BINAC computer; COBOL computer language; FORTRAN
programming language; SAINT; Supercomputer.
Autochrome plate
The invention: The first commercially successful process in which
a single exposure in a regular camera produced a color image.
The people behind the invention:
Louis Lumière (1864-1948), a French inventor and scientist
Auguste Lumière (1862-1954), an inventor, physician, physicist,
chemist, and botanist
Alphonse Seyewetz, a skilled scientist and assistant of the
Lumière brothers
Adding Color
In 1882, Antoine Lumière, painter, pioneer photographer, and father
of Auguste and Louis, founded a factory to manufacture photographic
gelatin dry-plates. After the Lumière brothers took over the
factory’s management, they expanded production to include roll
film and printing papers in 1887 and also carried out joint research
that led to fundamental discoveries and improvements in photographic
development and other aspects of photographic chemistry.
While recording and reproducing the actual colors of a subject
was not possible at the time of photography’s inception (about
1822), the first practical photographic process, the daguerreotype,
was able to render both striking detail and good tonal quality. Thus,
the desire to produce full-color images, or some approximation to
realistic color, occupied the minds of many photographers and inventors,
including Louis and Auguste Lumière, throughout the
nineteenth century.
As researchers set out to reproduce the colors of nature, the first
process that met with any practical success was based on the additive
color theory expounded by the Scottish physicist James Clerk
Maxwell in 1861. He believed that any color can be created by
adding together red, green, and blue light in certain proportions.
Maxwell, in his experiments, had taken three negatives through
screens or filters of these additive primary colors. He then took
slides made from these negatives and projected the slides through the same filters onto a screen so that their images were superimposed.
As a result, he found that it was possible to reproduce the exact
colors as well as the form of an object.
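Maxwell’s additive principle is easy to sketch numerically (a minimal illustration, with colors represented as red-green-blue light intensities in the range 0 to 1): superimposing projected images amounts to adding the channels.

```python
def add_light(*colors):
    """Additively mix light sources, clamping each channel at full intensity."""
    return tuple(min(1.0, sum(channel)) for channel in zip(*colors))

red   = (1.0, 0.0, 0.0)
green = (0.0, 1.0, 0.0)
blue  = (0.0, 0.0, 1.0)

print(add_light(red, green))        # -> (1.0, 1.0, 0.0), yellow
print(add_light(red, green, blue))  # -> (1.0, 1.0, 1.0), white light
```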
Unfortunately, since colors could not be printed in their tonal
relationships on paper before the end of the nineteenth century, Maxwell’s experiment was unsuccessful. Although Frederick E.
Ives of Philadelphia, in 1892, optically united three transparencies
so that they could be viewed in proper alignment by looking through
a peephole, viewing the transparencies was still not as simple as
looking at a black-and-white photograph.
The Autochrome Plate
The first practical method of making a single photograph that
could be viewed without any apparatus was devised by John Joly of
Dublin in 1893. Instead of taking three separate pictures through
three colored filters, he took one negative through one filter minutely
checkered with microscopic areas colored red, green, and
blue. The filter and the plate were exactly the same size and were
placed in contact with each other in the camera. After the plate was
developed, a transparency was made, and the filter was permanently
attached to it. The black-and-white areas of the picture allowed
more or less light to shine through the filters; if viewed from a
proper distance, the colored lights blended to form the various colors
of nature.
In sum, the potential principles of additive color and other methods
and their potential applications in photography had been discovered
and even experimentally demonstrated by 1880. Yet a practical
process of color photography utilizing these principles could
not be produced until a truly panchromatic emulsion was available,
since making a color print required being able to record the primary
colors of the light cast by the subject.
Louis and Auguste Lumière, along with their research associate
Alphonse Seyewetz, succeeded in creating a single-plate process
based on this method in 1903. It was introduced commercially as the
autochrome plate in 1907 and was soon in use throughout the
world. This process is one of many that take advantage of the limited
resolving power of the eye. Grains or dots too small to be recognized
as separate units are accepted in their entirety and, to the
sense of vision, appear as tones and continuous color.
Impact
While the autochrome plate remained one of the most popular
color processes until the 1930’s, it was eventually superseded by
subtractive color processes. Leopold Mannes and Leopold Godowsky,
both musicians and amateur photographic researchers who eventually
joined forces with Eastman Kodak research scientists, did the
most to perfect the Lumière brothers’ advances in making color
photography practical. Their collaboration led to the introduction in
1935 of Kodachrome, a subtractive process in which a single sheet of
film is coated with three layers of emulsion, each sensitive to one
primary color. A single exposure produces a color image.
Color photography is now commonplace. The amateur market is
enormous, and the snapshot is almost always taken in color. Commercial
and publishing markets use color extensively. Even photography
as an art form, which was done in black and white through
most of its history, has turned increasingly to color.
Atomic-powered ship
The invention: The world’s first atomic-powered merchant ship
demonstrated a peaceful use of atomic power.
The people behind the invention:
Otto Hahn (1879-1968), a German chemist
Enrico Fermi (1901-1954), an Italian American physicist
Dwight D. Eisenhower (1890-1969), president of the United
States, 1953-1961
Splitting the Atom
In 1938, Otto Hahn, working at the Kaiser Wilhelm Institute for
Chemistry, discovered that bombarding uranium atoms with neutrons
causes them to split into two smaller, lighter atoms. A large
amount of energy is released during this process, which is called
“fission.” When one kilogram of uranium is fissioned, it releases the
same amount of energy as does the burning of 3,000 metric tons of
coal. The fission process also releases new neutrons.
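The coal comparison can be checked from first principles. The roughly 200 MeV released per fission and the 29-megajoule-per-kilogram heating value of coal used below are standard textbook round numbers, not figures from the original account:

```python
AVOGADRO = 6.022e23     # atoms per mole
MEV_TO_J = 1.602e-13    # joules per MeV

# ~200 MeV is released per U-235 fission (a standard textbook figure).
energy_per_fission_j = 200 * MEV_TO_J
atoms_per_kg = (1000 / 235) * AVOGADRO                 # atoms in 1 kg of U-235
fission_energy_j = atoms_per_kg * energy_per_fission_j # ~8.2e13 J per kilogram

coal_energy_j_per_tonne = 29e6 * 1000                  # ~29 MJ/kg of coal
equivalent_coal_tonnes = fission_energy_j / coal_energy_j_per_tonne
print(round(equivalent_coal_tonnes))                   # ~2,800 metric tons
```

The result is close to 3,000 metric tons, consistent with the figure quoted above.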
Enrico Fermi suggested that these new neutrons could be used to
split more uranium atoms and produce a chain reaction. Fermi and
his assistants produced the first human-made chain reaction at the
University of Chicago on December 2, 1942. Although the first use
of this new energy source was the atomic bombs that were used to
defeat Japan in World War II, it was later realized that a carefully
controlled chain reaction could produce useful energy. The submarine
Nautilus, launched in 1954, used the energy released from fission
to make steam to drive its turbines.
U.S. President Dwight David Eisenhower proposed his “Atoms
for Peace” program in December, 1953. On April 25, 1955, President
Eisenhower announced that the “Atoms for Peace” program would
be expanded to include the design and construction of an atomic-powered
merchant ship, and he signed the legislation authorizing
the construction of the ship in 1956.
Savannah’s Design and Construction
A contract to design an atomic-powered merchant ship was
awarded to George G. Sharp, Inc., on April 4, 1957. The ship was to
carry approximately one hundred passengers (later reduced to sixty
to reduce the ship’s cost) and 10,886 metric tons of cargo while making
a speed of 21 knots, about 39 kilometers per hour. The ship was
to be 181 meters long and 23.7 meters wide. The reactor was to provide
steam for a 20,000-horsepower turbine that would drive the
ship’s propeller. Most of the ship’s machinery was similar to that of
existing ships; the major difference was that steam came from a reactor
instead of a coal- or oil-burning boiler.
New York Shipbuilding Corporation of Camden, New Jersey,
won the contract to build the ship on November 16, 1957. States Marine
Lines was selected in July, 1958, to operate the ship. It was christened
Savannah and launched on July 21, 1959. The name Savannah
was chosen to honor the first ship to use steam power while crossing
an ocean. This earlier Savannah was launched in New York City
in 1818.
Ships are normally launched long before their construction is
complete, and the new Savannah was no exception. It was finally
turned over to States Marine Lines on May 1, 1962. After extensive
testing by its operators and delays caused by labor union disputes,
it began its maiden voyage from Yorktown, Virginia, to Savannah,
Georgia, on August 20, 1962. The original budget for design and
construction was $35 million, but by this time, the actual cost was
about $80 million.
Savannah’s nuclear reactor was fueled with about 7,000 kilograms
(15,400 pounds) of uranium. Uranium consists of two forms,
or “isotopes.” These are uranium 235, which can fission, and uranium
238, which cannot. Naturally occurring uranium is less than 1
percent uranium 235, but the uranium in Savannah’s reactor had
been enriched to contain nearly 5 percent of this isotope. Thus, there
was less than 362 kilograms of usable uranium in the reactor. The
ship was able to travel about 800,000 kilometers on this initial fuel
load. Three and a half million kilograms of water per hour flowed
through the reactor under a pressure of 5,413 kilograms per square
centimeter. It entered the reactor at 298.8 degrees Celsius and left at
317.7 degrees Celsius. Water leaving the reactor passed through a
heat exchanger called a “steam generator.” In the steam generator,
reactor water flowed through many small tubes. Heat passed through
the walls of these tubes and boiled water outside them. About
113,000 kilograms of steam per hour were produced in this way at a
pressure of 1,434 kilograms per square centimeter and a temperature
of 240.5 degrees Celsius.
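The fuel-load arithmetic quoted above can be sketched directly; the numbers come from the text, and the calculation itself is only illustrative:

```python
fuel_kg = 7000        # total uranium in Savannah's first reactor core
enrichment = 0.05     # "nearly 5 percent" uranium 235

u235_kg = fuel_kg * enrichment
print(u235_kg)        # 350.0 kg, consistent with "less than 362 kilograms"

# Distance per kilogram of fissionable fuel on the initial load:
km_per_kg = 800_000 / u235_kg
print(round(km_per_kg))
```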
Labor union disputes dogged Savannah’s early operations, and it
did not start its first trans-Atlantic crossing until June 8, 1964. Savannah
was never a money maker. Even in the 1960’s, the trend was toward
much bigger ships. It was announced that the ship would be
retired in August, 1967, but that did not happen. It was finally put
out of service in 1971. Later, Savannah was placed on permanent display
at Charleston, South Carolina.
Consequences
Following the United States’ lead, Germany and Japan built
atomic-powered merchant ships. The Soviet Union is believed to
have built several atomic-powered icebreakers. Germany’s Otto
Hahn, named for the scientist who first split the atom, began service
in 1968, and Japan’s Mutsu was under construction as Savannah retired.
Numerous studies conducted in the early 1970’s claimed to prove
that large atomic-powered merchant ships were more profitable
than oil-fired ships of the same size. Several conferences devoted to
this subject were held, but no new ships were built.
Although the U.S. Navy has continued to use reactors to power
submarines, aircraft carriers, and cruisers, atomic power has not
been widely used for merchant-ship propulsion. Labor union problems
such as those that haunted Savannah, high insurance costs, and
high construction costs are probably the reasons. Public opinion
after the reactor accidents at Three Mile Island (in 1979) and
Chernobyl (in 1986) is also a factor.
Atomic clock
The invention: A clock using the ammonia molecule as its oscillator
that surpasses mechanical clocks in long-term stability, precision,
and accuracy.
The person behind the invention:
Harold Lyons (1913-1984), an American physicist
Time Measurement
The accurate measurement of basic quantities, such as length,
electrical charge, and temperature, is the foundation of science. The
results of such measurements dictate whether a scientific theory is
valid or must be modified or even rejected. Many experimental
quantities change over time, but time cannot be measured directly.
It must be measured by the occurrence of an oscillation or rotation,
such as the twenty-four-hour rotation of the earth. For centuries, the
rising of the Sun was sufficient as a timekeeper, but the need for
more precision and accuracy increased as human knowledge grew.
Progress in science can be measured by how accurately time has
been measured at any given point. In 1713, the British government,
after the disastrous sinking of a British fleet in 1707 because of a miscalculation
of longitude, offered a reward of 20,000 pounds for the
invention of a ship’s chronometer (a very accurate clock). Latitude
is determined by the altitude of the Sun above the southern horizon
at noon local time, but the determination of longitude requires an
accurate clock set at Greenwich, England, time. The difference between
the ship’s clock and the local sun time gives the ship’s longitude.
This permits the accurate charting of new lands, such as those
that were being explored in the eighteenth century. John Harrison,
an English instrument maker, eventually built a chronometer that
was accurate within one minute after five months at sea. He received
his reward from Parliament in 1765.
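The longitude method described above reduces to simple arithmetic: Earth turns 360 degrees in 24 hours, so each hour of difference between the Greenwich chronometer and local solar time corresponds to 15 degrees of longitude. A minimal sketch:

```python
def longitude_deg(greenwich_time_h, local_solar_time_h):
    """Each hour of clock difference corresponds to 15 degrees of longitude
    (Earth turns 360 degrees in 24 hours). Positive = west of Greenwich."""
    return (greenwich_time_h - local_solar_time_h) * 15.0

# It is local solar noon aboard ship while the chronometer set at
# Greenwich reads 15:00 - the ship is 3 hours behind Greenwich,
# i.e. 45 degrees west.
print(longitude_deg(15.0, 12.0))   # 45.0
```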
Atomic Clocks Provide Greater Stability
A clock contains four parts: energy to keep the clock operating,
an oscillator, an oscillation counter, and a display. A grandfather
clock has weights that fall slowly, providing energy that powers the
clock’s gears. The pendulum, a weight on the end of a rod, swings
back and forth (oscillates) with a regular beat. The length of the rod
determines the pendulum’s period of oscillation. The pendulum is
attached to gears that count the oscillations and drive the display
hands.
There are limits to a mechanical clock’s accuracy and stability.
The length of the rod changes as the temperature changes, so the
period of oscillation changes. Friction in the gears changes as they
wear out. Making the clock smaller increases its accuracy, precision,
and stability. Accuracy is how close the clock is to telling the actual
time. Stability indicates how the accuracy changes over time, while
precision is the number of accurate decimal places in the display. A
grandfather clock, for example, might be accurate to ten seconds per
day and precise to a second, while having a stability of minutes per
week.
Applying an electrical signal to a quartz crystal will make the
crystal oscillate at its natural vibration frequency, which depends on
its size, its shape, and the way in which it was cut from the larger
crystal. Since the faster a clock’s oscillator vibrates, the more precise
the clock, a crystal-based clock is more precise than a large pendulum
clock. By keeping the crystal under constant temperature, the
clock is kept accurate, but it eventually loses its stability and slowly
wears out.
In 1948, Harold Lyons and his colleagues at the National Bureau
of Standards (NBS) constructed the first atomic clock, which used
the ammonia molecule as its oscillator. Such a clock is called an
atomic clock because, when it operates, a nitrogen atom vibrates.
The pyramid-shaped ammonia molecule is composed of a triangular
base; there is a hydrogen atom at each corner and a nitrogen
atom at the top of the pyramid. The nitrogen atom does not remain
at the top; if it absorbs radio waves of the right energy and frequency,
it passes through the base to produce an upside-down pyramid
and then moves back to the top. This oscillation occurs at a frequency
of 23,870 megacycles per second (1 megacycle equals 1 million
cycles).
Lyons’s clock was actually a quartz-ammonia clock, since the signal
from a quartz crystal produced radio waves of the crystal’s frequency
that were fed into an ammonia-filled tube. If the radio
waves were at 23,870 megacycles, the ammonia molecules absorbed
the waves; a detector sensed this, and it sent no correction signal to
the crystal. If radio waves deviated from 23,870 megacycles, the ammonia
did not absorb them, the detector sensed the unabsorbed radio
waves, and a correction signal was sent to the crystal. The
atomic clock’s accuracy and precision were comparable to those of a
quartz-based clock—one part in a hundred million—but the atomic
clock was more stable because molecules do not wear out.
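The correction scheme described above can be sketched as a simple feedback rule. The tolerance value and the form of the correction are illustrative assumptions, not details of Lyons’s actual circuit:

```python
TARGET_MC = 23_870  # ammonia absorption line, megacycles per second

def correction_signal(crystal_freq_mc, tolerance_mc=1):
    """Sketch of the feedback scheme: if the crystal's radio waves sit on
    the ammonia absorption line, the gas absorbs them and no correction is
    sent; if they drift, unabsorbed waves reach the detector, which steers
    the crystal back toward the line."""
    absorbed = abs(crystal_freq_mc - TARGET_MC) <= tolerance_mc
    if absorbed:
        return 0                        # on frequency: no correction
    return TARGET_MC - crystal_freq_mc  # off frequency: push back toward line

print(correction_signal(23_870))   # 0  (ammonia absorbs; no correction)
print(correction_signal(23_875))   # -5 (drifted high; correct downward)
```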
The atomic clock’s accuracy was improved by using cesium
133 atoms as the source of oscillation. These atoms oscillate at
9,192,631,770 plus or minus 20 cycles per second. They are accurate
to a billionth of a second per day and precise to nine decimal places.
A cesium clock is stable for years. Future developments in atomic
clocks may see accuracies of one part in a million billion.
Impact
The development of stable, very accurate atomic clocks has far-reaching
implications for many areas of science. Global positioning
satellites send signals to receivers on ships and airplanes. By timing
the signals, the receiver’s position is calculated to within several
meters of its true location.
Chemists are interested in finding the speed of chemical reactions,
and atomic clocks are used for this purpose. The atomic clock
led to the development of the maser (an acronym for microwave amplification
by stimulated emission of radiation), which is used to
amplify weak radio signals, and the maser led to the development
of the laser, a light-frequency maser that has more uses than can be
listed here.
Atomic clocks have been used to test Einstein’s theories of relativity
that state that time on a moving clock, as observed by a stationary
observer, slows down, and that a clock slows down near a
large mass (because of the effects of gravity). Under normal conditions
of low velocities and low mass, the changes in time are very
small, but atomic clocks are accurate and stable enough to detect
even these small changes. In such experiments, three sets of clocks
were used—one group remained on Earth, one was flown west around the earth on a jet, and the last set was flown east. By comparing
the times of the in-flight sets with the stationary set, the
predicted slowdowns of time were observed and the theories were
verified.
03 February 2009
Atomic bomb
The invention: A weapon of mass destruction created during
World War II that utilized nuclear fission to create explosions
equivalent to thousands of tons of trinitrotoluene (TNT).
The people behind the invention:
J. Robert Oppenheimer (1904-1967), an American physicist
Leslie Richard Groves (1896-1970), an American engineer and
Army general
Enrico Fermi (1901-1954), an Italian American nuclear physicist
Niels Bohr (1885-1962), a Danish physicist
Energy on a Large Scale
The first evidence of uranium fission (the splitting of uranium
atoms) was observed by German chemists Otto Hahn and Fritz
Strassmann in Berlin at the end of 1938. When these scientists discovered
radioactive barium impurities in neutron-irradiated uranium,
they wrote to their colleague Lise Meitner in Sweden. She and
her nephew, physicist Otto Robert Frisch, calculated the large release
of energy that would be generated during the nuclear fission
of certain elements. This result was reported to Niels Bohr in Copenhagen.
Meanwhile, similar fission energies were measured by Frédéric
Joliot and his associates in Paris, who demonstrated the release of
up to three additional neutrons during nuclear fission. It was recognized
immediately that if neutron-induced fission released enough
additional neutrons to cause at least one more such fission, a self-sustaining
chain reaction would result, yielding energy on a large
scale.
While visiting the United States from January to May of 1939,
Bohr derived a theory of fission with John Wheeler of Princeton
University. This theory led Bohr to predict that the common isotope
uranium 238 (which constitutes 99.3 percent of naturally occurring
uranium) would require fast neutrons for fission, but that the rarer
uranium 235 would fission with neutrons of any energy. This meant that uranium 235 would be far more suitable for use in any sort of
bomb. Uranium bombardment in a cyclotron led to the discovery of
plutonium in 1940 and the discovery that plutonium 239 was fissionable—
and thus potentially good bomb material. Uranium 238
was then used to “breed” (create) plutonium 239, which was then
separated from the uranium by chemical methods.
During 1942, the Manhattan District of the Army Corps of Engineers
was formed under General Leslie Richard Groves, an engineer
and Army general who contracted with E. I. Du Pont de
Nemours and Company to construct three secret atomic cities at a
total cost of $2 billion. At Oak Ridge, Tennessee, twenty-five thousand
workers built a 1,000-kilowatt reactor as a pilot plant. A second
city of sixty thousand inhabitants was built at Hanford, Washington,
where three huge reactors and remotely controlled plutonium-extraction
plants were completed in early 1945.
A Sustained and Awesome Roar
Studies of fast-neutron reactions for an atomic bomb were brought
together in Chicago in June of 1942 under the leadership of J. Robert
Oppenheimer. He soon became a personal adviser to Groves, who
built for Oppenheimer a laboratory for the design and construction
of the bomb at Los Alamos, New Mexico. In 1943, Oppenheimer
gathered two hundred of the best scientists in what was by now being
called the Manhattan Project to live and work in this third secret
city.
Two bomb designs were developed. A gun-type bomb called
“Little Boy” used 15 kilograms of uranium 235 in a 4,500-kilogram
cylinder about 2 meters long and 0.5 meter in diameter, in which a
uranium bullet could be fired into three uranium target rings to
form a critical mass. An implosion-type bomb called “Fat Man” had
a 5-kilogram spherical core of plutonium about the size of an orange,
which could be squeezed inside a 2,300-kilogram sphere
about 1.5 meters in diameter by properly shaped explosives to make
the mass critical in the shorter time required for the faster plutonium
fission process.
A flat scrub region 200 kilometers southeast of Alamogordo,
called Trinity, was chosen for the test site, and observer bunkers
were built about 10 kilometers from a 30-meter steel tower. On July
13, 1945, one of the plutonium bombs was assembled at the site; the
next morning, it was raised to the top of the tower. Two days later,
on July 16, after a short thunderstorm delay, the bomb was detonated
at 5:30 a.m. The resulting implosion initiated a chain reaction
of nearly 60 fission generations in about a microsecond. It produced
an intense flash of light and a fireball that expanded to a diameter of
about 600 meters in two seconds, rose to a height of more than 12 kilometers,
and formed an ominous mushroom shape. Forty seconds
later, an air blast hit the observer bunkers, followed by a sustained
and awesome roar. Measurements confirmed that the explosion had
the power of 18.6 kilotons of trinitrotoluene (TNT), nearly four
times the predicted value.
Impact
On March 9, 1945, 325 American B-29 bombers dropped 2,000
tons of incendiary bombs on Tokyo, resulting in 100,000 deaths from
the fire storms that swept the city. Nevertheless, the Japanese military
refused to surrender, and American military plans called for an
invasion of Japan, with estimates of up to a half million American
casualties, plus as many as 2 million Japanese casualties. On August
6, 1945, after authorization by President Harry S. Truman, the
B-29 Enola Gay dropped the uranium Little Boy bomb on Hiroshima
at 8:15 a.m. On August 9, the remaining plutonium Fat Man bomb
was dropped on Nagasaki. Approximately 100,000 people died at
Hiroshima (out of a population of 400,000), and about 50,000 more
died at Nagasaki. Japan offered to surrender on August 10, and after
a brief attempt by some army officers to rebel, an official announcement
by Emperor Hirohito was broadcast on August 15.
The development of the thermonuclear fusion bomb, in which
hydrogen isotopes could be fused together by the force of a fission
explosion to produce helium nuclei and almost unlimited energy,
had been proposed early in the Manhattan Project by physicist Edward
Teller. Little effort was invested in the hydrogen bomb until
after the surprise explosion of a Soviet atomic bomb in September,
1949, which had been built with information stolen from the Manhattan
Project. After three years of development under Teller’s guidance, the first successful H-bomb was exploded on November
1, 1952, obliterating the Elugelab atoll in the Marshall Islands of
the South Pacific. The arms race then accelerated until each side had
stockpiles of thousands of H-bombs.
The Manhattan Project opened a Pandora’s box of nuclear weapons
that would plague succeeding generations, but it contributed
more than merely weapons. About 19 percent of the electrical energy
in the United States is generated by about 110 nuclear reactors
producing more than 100,000 megawatts of power. More than 400
reactors in thirty countries provide 300,000 megawatts of the world’s
power. Reactors have made possible the widespread use of radioisotopes
in medical diagnosis and therapy. Many of the techniques
for producing and using these isotopes were developed by the hundreds
of nuclear physicists who switched to the field of radiation
biophysics after the war, ensuring that the benefits of their wartime
efforts would reach the public.
27 January 2009
Assembly line
The invention: A manufacturing technique pioneered in the automobile
industry by Henry Ford that lowered production costs
and helped bring automobile ownership within the reach of millions
of Americans in the early twentieth century.
The people behind the invention:
Henry Ford (1863-1947), an American carmaker
Eli Whitney (1765-1825), an American inventor
Elisha King Root (1808-1865), the developer of division of labor
Oliver Evans (1755-1819), the inventor of power conveyors
Frederick Winslow Taylor (1856-1915), an efficiency engineer
A Practical Man
Henry Ford built his first “horseless carriage” by hand in his
home workshop in 1896. In 1903, the Ford Motor Company was
born. Ford’s first product, the Model A, sold for less than one thousand
dollars, while other cars at that time were priced at five to ten
thousand dollars each. When Ford and his partners tried, in 1905, to
sell a more expensive car, sales dropped. Then, in 1907, Ford decided
that the Ford Motor Company would build “a motor car for
the great multitude.” It would be called the Model T.
The Model T came out in 1908 and was everything that Henry Ford
said it would be. Ford’s Model T was a low-priced (about $850), practical
car that came in one color only: black. In the twenty years during
which the Model T was built, the basic design never changed. Yet the
price of the Model T, or “Tin Lizzie,” as it was affectionately called,
dropped over the years to less than half that of the original Model T. As
the price dropped, sales increased, and the Ford Motor Company
quickly became the world’s largest automobile manufacturer.
The last of more than 15 million Model T’s was made in 1927. Although
it looked and drove almost exactly like the first Model T,
these two automobiles were built in an entirely different way. The
first was custom-built, while the last came off an assembly line.
At first, Ford had built his cars in the same way everyone else
did: one at a time. Skilled mechanics would work on a car from start
to finish, while helpers and runners brought parts to these highly
paid craftsmen as they were needed. After finishing one car, the mechanics
and their helpers would begin the next.
The Quest for Efficiency
Custom-built products are good when there is little demand and
buyers are willing to pay the high labor costs. This was not the case
with the automobile. Ford realized that in order to make a large
number of quality cars at a low price, he had to find a more efficient
way to build cars. To do this, he looked to the past and the work of
others. He found four ideas: interchangeable parts, continuous flow,
division of labor, and elimination of wasted motion.
Eli Whitney, the inventor of the cotton gin, was the first person to
use interchangeable parts successfully in mass production. In 1798, the
United States government asked Whitney to make several thousand
muskets in two years. Instead of finding and hiring gunsmiths to make
the muskets by hand, Whitney used most of his time and money to design
and build special machines that could make large numbers of identical parts—one machine for each part that was needed to build a
musket. These tools, and others Whitney made for holding, measuring,
and positioning the parts, made it easy for semiskilled, and even
unskilled, workers to build a large number of muskets.
Production can be made more efficient by carefully arranging the
different stages of production to create a “continuous flow.” Ford
borrowed this idea from at least two places: the meat-packing
houses of Chicago and an automatic grain mill run by Oliver Evans.
Ford’s idea for a moving assembly line came from Chicago’s
great meat-packing houses in the late 1860’s. Here, the bodies of animals
were moved along an overhead rail past a number of workers,
each of whom made a certain cut, or handled one part of the packing
job. This meant that many animals could be butchered and packaged
in a single day.
Ford looked to Oliver Evans for an automatic conveyor system.
In 1783, Evans had designed and operated an automatic grain mill
that could be run by only two workers. As one worker poured grain
into a funnel-shaped container, called a “hopper,” at one end of the
mill, a second worker filled sacks with flour at the other end. Everything
in between was done automatically, as Evans’s conveyors
passed the grain through the different steps of the milling process
without any help.
The idea of “division of labor” is simple: When one complicated
job is divided into several easier jobs, some things can be made
faster, with fewer mistakes, by workers who need fewer skills than
ever before. Elisha King Root had used this principle to make the famous
Colt “Six-Shooter.” In 1849, Root went to work for Samuel
Colt at his Connecticut factory and proved to be a manufacturing
genius. By dividing the work into very simple steps, with each step
performed by one worker, Root was able to make many more guns
in much less time.
Before Ford applied Root’s idea to the making of engines, it took
one worker one day to make one engine. By breaking down the
complicated job of making an automobile engine into eighty-four
simpler jobs, Ford was able to make the process much more efficient.
By assigning one person to each job, Ford’s company was able
to make 352 engines per day—an increase of more than 400 percent.
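The engine figures above can be made concrete. The assumption that exactly one worker staffed each of the eighty-four jobs is illustrative, since the text does not state the size of the workforce:

```python
old_rate = 1.0          # engines per worker per day under the custom method
workers = 84            # one worker per simplified job (illustrative)
engines_per_day = 352   # daily output quoted in the text

per_worker = engines_per_day / workers
print(round(per_worker, 2))   # 4.19 engines per worker-day, versus 1.0 before
```

Output per worker rises to more than four times the old rate, in line with the "more than 400 percent" figure quoted above.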
Frederick Winslow Taylor has been called the “original efficiency
expert.” His idea was that inefficiency was caused by wasted time
and wasted motion. So Taylor studied ways to eliminate wasted
motion. He proved that, in the long run, doing a job too quickly was
as bad as doing it too slowly. “Correct speed is the speed at which
men can work hour after hour, day after day, year in and year out,
and remain continuously in good health,” he said. Taylor also studied
ways to streamline workers’ movements. In this way, he was
able to keep wasted motion to a minimum.
Impact
The changeover from custom production to mass production
was an evolution rather than a revolution. Henry Ford applied the
four basic ideas of mass production slowly and with care, testing
each new idea before it was used. In 1913, the first moving assembly
line for automobiles was being used to make Model T’s. Ford was
able to make his Tin Lizzies faster than ever, and his competitors
soon followed his lead. He had succeeded in making it possible for
millions of people to buy automobiles.
Ford’s work gave a new push to the Industrial Revolution. It
showed Americans that mass production could be used to improve
quality, cut the cost of making an automobile, and improve profits.
In fact, the Model T was so profitable that in 1914 Ford was able to
double the minimum daily wage of his workers, so that they too
could afford to buy Tin Lizzies.
Although Americans account for only about 6 percent of the
world’s population, they now own about 50 percent of its wealth.
There are more than twice as many radios in the United States as
there are people. The roads are crowded with more than 180 million
automobiles. Homes are filled with the sounds and sights emanating
from more than 150 million television sets. Never have the people of
one nation owned so much. Where did all the products—radios,
cars, television sets—come from? The answer is industry, which still
depends on the methods developed by Henry Ford.
25 January 2009
Aspartame
The invention: An artificial sweetener with a comparatively natural
taste widely used in carbonated beverages.
The people behind the invention:
Arthur H. Hayes, Jr. (1933- ), a physician and commissioner of the U.S. Food
and Drug Administration (FDA)
James M. Schlatter (1942- ), an American chemist
Michael Sveda (1912- ), an American chemist and inventor
Ludwig Frederick Audrieth (1901- ), an American chemist and educator
Ira Remsen (1846-1927), an American chemist and educator
Constantin Fahlberg (1850-1910), a German chemist
Sweetness Without Calories
People have sweetened food and beverages since before recorded
history. The most widely used sweetener is sugar, or sucrose. The
only real drawback to the use of sucrose is that it is a nutritive sweetener:
In addition to adding a sweet taste, it adds calories. Because sucrose is
readily absorbed by the body, an excessive amount can be life-threatening to diabetics. This fact alone would make the development of nonsucrose
sweeteners attractive.
There are three common nonsucrose sweeteners in use around the world:
saccharin, cyclamates, and aspartame. Saccharin was the first of this group
to be discovered, in 1879. Constantin Fahlberg synthesized saccharin based
on the previous experimental work of Ira Remsen using toluene (derived from petroleum).
This product was found to be three hundred to five hundred times as sweet as
sugar, although some people could detect a bitter aftertaste.
In 1944, the chemical family of cyclamates was discovered by Ludwig Frederick Audrieth and Michael Sveda. Although these compounds are only thirty to eighty times as sweet as sugar, there was no detectable aftertaste.
By the mid-1960’s, cyclamates had replaced saccharin as the leading nonnutritive sweetener in the United States.
Although cyclamates are still in use throughout the world, in October, 1969, the FDA removed them from the list of approved food additives because of tests that indicated possible health hazards.
A Political Additive
Aspartame is the latest in artificial sweeteners that are derived from natural ingredients—in this case, two amino acids, one from milk and one from bananas. Discovered by accident in 1965 by American chemist James M. Schlatter when he licked his fingers during an experiment, aspartame is 180 times as sweet as sugar. In 1974, the FDA approved its use in dry foods such as gum and cereal and as a sugar replacement.
Shortly after its approval for this limited application, the FDA held public hearings on the safety concerns raised by John W. Olney, a professor of neuropathology at Washington University in St. Louis.
There was some indication that aspartame, when combined with the common food additive monosodium glutamate, caused brain damage in children. These fears were confirmed, but the risk of brain damage was limited to a small percentage of individuals with a rare genetic disorder.
At this point, the public debate took a political turn:
Senator William Proxmire charged FDA Commissioner Alexander M. Schmidt with public misconduct.
This controversy resulted in aspartame being taken off the market in 1975.
In 1981, the new FDA commissioner, Arthur H. Hayes, Jr., reapproved aspartame for use in the same applications: as a tabletop sweetener, as a cold-cereal additive, in chewing gum, and for other miscellaneous uses.
In 1983, the FDA approved aspartame for use in carbonated beverages, its largest application to date.
Later safety studies revealed that children with a rare metabolic disease, phenylketonuria, could not ingest this sweetener without severe health
risks because of the presence of phenylalanine in aspartame.
This condition results in a rapid buildup of phenylalanine in the blood.
Laboratories simulated this condition in rats and found that high doses of aspartame inhibited the synthesis of dopamine, a neurotransmitter.
Once this happens, an increase in the frequency of seizures can occur.
There was no direct evidence, however, that aspartame actually caused
seizures in these experiments.
Many other compounds are being tested for use as sugar replacements,
the sweetest being a relative of aspartame. This compound is seventeen
thousand to fifty-two thousand times sweeter than sugar.
Impact
The business fallout from the approval of a new low-calorie sweetener occurred
over a short span of time. In 1981, sales of this artificial sweetener by G. D. Searle and Company were $74 million.
In 1983, sales rose to $336 million and exceeded half a billion dollars
the following year.
These figures represent sales of more than 2,500 tons of this product.
In 1985, 3,500 tons of aspartame were consumed.
Clearly, this product’s introduction was a commercial success for Searle.
During this same period, the percentage of reduced calorie carbonated
beverages containing saccharin declined from 100 percent to 20 percent in an industry that had $4 billion in sales.
Universally, consumers preferred products containing aspartame; the bitter aftertaste of saccharin was rejected in favor of the new, less
powerful sweetener.
There is a trade-off in using these products. The FDA found evidence linking
both saccharin and cyclamates to an elevated incidence of cancer.
Cyclamates were banned in the United States for this reason. Public resistance
to this measure caused the agency to back away from its position.
The rationale was that, compared to other health risks associated with the consumption of sugar (especially for diabetics and overweight persons),
the chance of getting cancer was slight and therefore a risk that many people
would choose to ignore. The total domination of aspartame in the sweetener
market seems to support this assumption.
16 January 2009
Artificial satellite
The invention:
Sputnik I, the first object put into orbit around the
earth, which began the exploration of space.
The people behind the invention:
Sergei P. Korolev (1907-1966), a Soviet rocket scientist
Konstantin Tsiolkovsky (1857-1935), a Soviet schoolteacher and the founder of rocketry in the Soviet Union
Robert H. Goddard (1882-1945), an American scientist and the founder of rocketry in the United States
Wernher von Braun (1912-1977), a German-born engineer who led rocket projects in Germany and, later, in the United States
Arthur C. Clarke (1917-2008), the author of more than fifty books and the visionary behind telecommunications satellites
A Shocking Launch
In Russian, sputnik means “satellite” or “fellow traveler.”
On October 4, 1957, Sputnik 1, the first artificial satellite to orbit Earth,
was successfully placed into orbit by the Soviet Union. The launch of this
small aluminum sphere, 0.58 meter in diameter and weighing 83.6
kilograms, opened the doors to the frontiers of space.
Orbiting Earth every 96 minutes, at 28,962 kilometers per hour,
Sputnik 1 came within 215 kilometers of Earth at its closest point and
939 kilometers away at its farthest point. It carried equipment to
measure the atmosphere and to experiment with the transmission
of electromagnetic waves from space. Equipped with two radio
transmitters (at different frequencies) that broadcast for twenty-one
days, Sputnik 1 was in orbit for ninety-two days, until January 4,
1958, when it disintegrated in the atmosphere.
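The orbital figures given above are mutually consistent: Kepler's third law recovers the 96-minute period from the perigee and apogee altitudes. A minimal sketch of the check, assuming standard values for Earth's radius and gravitational parameter (neither appears in the text):

```python
import math

# Figures from the text: perigee 215 km and apogee 939 km above Earth
PERIGEE_ALT = 215.0       # km
APOGEE_ALT = 939.0        # km

# Standard reference values (assumed, not stated in the text)
R_EARTH = 6371.0          # km, mean radius
MU = 398600.4             # km^3/s^2, Earth's gravitational parameter

# Semi-major axis of the elliptical orbit
a = R_EARTH + (PERIGEE_ALT + APOGEE_ALT) / 2.0

# Kepler's third law: T = 2*pi*sqrt(a^3 / mu)
period_min = 2 * math.pi * math.sqrt(a**3 / MU) / 60.0
print(f"orbital period ≈ {period_min:.1f} minutes")  # close to the 96 minutes cited
```

The result comes out near 96 minutes, matching the period reported for Sputnik 1.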
Sputnik 1 was launched using a Soviet intercontinental ballistic
missile (ICBM) modified by Soviet rocket expert Sergei P. Korolev.
After the launch of Sputnik 2, less than a month later, Chester
Bowles, a former United States ambassador to India and Nepal,
wrote: “Armed with a nuclear warhead, the rocket which launched
Sputnik 1 could destroy New York, Chicago, or Detroit 18 minutes
after the button was pushed in Moscow.”
Although the launch of Sputnik 1 came as a shock to the general
public, it came as no surprise to those who followed rocketry. In
June, 1957, the United States Air Force had issued a nonclassified
memo stating that there was “every reason to believe that the Russian
satellite shot would be made on the hundredth anniversary” of
Konstantin Tsiolkovsky’s birth.
Thousands of Launches
Rockets have been used since at least the twelfth century, when
Europeans and the Chinese were using black powder devices. In
1659, the Polish engineer Kazimir Semenovich published his Roketten
für Luft und Wasser (rockets for air and water), which had a drawing
of a three-stage rocket. Rockets were used and perfected for warfare
during the nineteenth and twentieth centuries. Nazi Germany’s V-2
rocket (thousands of which were launched by Germany against England
during the closing years of World War II) was the model for
American and Soviet rocket designers between 1945 and 1957. In
the Soviet Union, Tsiolkovsky had been thinking about and writing
about space flight since the last decade of the nineteenth century,
and in the United States, Robert H. Goddard had been thinking
about and experimenting with rockets since the first decade of the
twentieth century.
Wernher von Braun had worked on rocket projects for Nazi Germany
during World War II, and, as the war was ending in May, 1945,
von Braun and several hundred other people involved in German
rocket projects surrendered to American troops in Europe. Hundreds
of other German rocket experts ended up in the Soviet Union
to continue with their research. Tom Bower pointed out in his book
The Paperclip Conspiracy: The Hunt for the Nazi Scientists (1987)—so
named because American “recruiting officers had identified [Nazi]
scientists to be offered contracts by slipping an ordinary paperclip
onto their files”—that American rocketry research was helped
tremendously by Nazi scientists who switched sides after World
War II.
The successful launch of Sputnik 1 convinced people that space
travel was no longer simply science fiction. The successful launch of
Sputnik 2 on November 3, 1957, carrying the first space traveler, a
dog named Laika (who was euthanized in orbit because there were
no plans to retrieve her), showed that the launch of Sputnik 1 was
only the beginning of greater things to come.
Consequences
After October 4, 1957, the Soviet Union and other nations launched
more experimental satellites. On January 31, 1958, the United
States sent up Explorer 1, after failing to launch a Vanguard satellite
on December 6, 1957.
Arthur C. Clarke, most famous for his many books of science fiction,
published a technical paper in 1945 entitled “Extra-Terrestrial
Relays: Can Rocket Stations Give World-Wide Radio Coverage?” In
that paper, he pointed out that a satellite placed in orbit at the correct
height and speed above the equator would be able to hover over
the same spot on Earth. The placement of three such “geostationary”
satellites would allow radio signals to be transmitted around
the world. By the 1990’s, communications satellites were numerous.
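Clarke's “correct height” follows from requiring the orbital period to equal one sidereal day. A minimal sketch of the derivation, assuming standard values for Earth's gravitational parameter and equatorial radius (neither is given in the text):

```python
import math

# Standard reference values (assumed, not from the text)
MU = 3.986004418e14       # m^3/s^2, Earth's gravitational parameter
T_SIDEREAL = 86164.1      # s, one Earth rotation relative to the stars
R_EQUATOR = 6.378e6       # m, Earth's equatorial radius

# Circular orbit: T = 2*pi*sqrt(r^3/mu)  =>  r = (mu * T^2 / (4*pi^2))^(1/3)
r = (MU * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
altitude_km = (r - R_EQUATOR) / 1000.0
print(f"geostationary altitude ≈ {altitude_km:,.0f} km")  # roughly 35,800 km
```

A satellite at this altitude above the equator circles Earth exactly once per rotation, so it hovers over the same spot, as Clarke described.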
In the first twenty-five years after Sputnik 1 was launched, from
1957 to 1982, more than two thousand objects were placed into various
Earth orbits by more than twenty-four nations. On average, something
was launched into space every 3.82 days during this twenty-five-year
period, all beginning with Sputnik 1.
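The launch-rate figure is easy to verify: dividing the twenty-five-year span by one launch every 3.82 days yields a count in line with the “more than two thousand objects” cited. A quick arithmetic check:

```python
# Days in the 1957-1982 span divided by the stated launch interval
days = 25 * 365.25
launches = days / 3.82
print(round(launches))  # roughly 2,390 launches
```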
08 January 2009
Artificial kidney
The invention:
A machine that removes metabolic wastes and poisons from the blood when the kidneys are not working properly.
The people behind the invention:
John Jacob Abel (1857-1938), a pharmacologist and biochemist known as the “father of American pharmacology”
Willem Johan Kolff (1911- ), a Dutch American clinician who pioneered the artificial kidney and the artificial heart.
Cleansing the Blood
In the human body, the kidneys are the dual organs that remove waste matter from the bloodstream and send it out of the system as urine. If the kidneys fail to work properly, this cleansing process must be done artificially—such as by a machine.
John Jacob Abel was the first professor of pharmacology at Johns Hopkins University School of Medicine. Around 1912, he began to study the by-products of metabolism that are carried in the blood.
This work was difficult, he realized, because it was nearly impossible to detect even the tiny amounts of the many substances in blood.
Moreover, no one had yet developed a method or machine for taking these substances out of the blood.
In devising a blood filtering system, Abel understood that he needed a saline solution and a membrane that would let some substances pass through but not others. Working with Leonard Rowntree and Benjamin B. Turner, he spent nearly two years figuring out how to build a machine that would perform dialysis—that is, remove metabolic by-products from blood. Finally their efforts succeeded.
The first experiments were performed on rabbits and dogs. In operating the machine, the blood leaving the patient was sent flowing through a celloidin tube that had been wound loosely around a drum. An anticlotting substance (hirudin, taken out of leeches) was added to the blood as it flowed through the tube. The drum, which was immersed in a saline and dextrose solution, rotated slowly. As blood flowed through the immersed tubing, diffusion across the membrane removed urea and other small molecules, but not the blood cells or plasma proteins, from the blood.
The celloidin membranes allowed oxygen to pass from the saline and dextrose solution into the blood, so that purified, oxygenated blood then flowed back into the arteries.
Abel studied the substances that his machine had removed from the blood, and he found that they included not only urea but also free amino acids. He quickly realized that his machine could be useful for taking care of people whose kidneys were not working properly.
Reporting on his research, he wrote, “In the hope of providing a substitute in such emergencies, which might tide over a dangerous crisis . . . a method has been devised by which the blood of a living animal may be submitted to dialysis outside the body,
and again returned to the natural circulation.” Abel’s machine removed large quantities of urea and other poisonous substances fairly quickly, so that the process, which he called “vividiffusion,” could serve as an artificial kidney during cases of kidney failure.
For his physiological research, Abel found it necessary to remove, study, and then replace large amounts of blood from living animals, all without dissolving the red blood cells, which carry oxygen to the body’s various parts. He realized that this process, which
he called “plasmaphaeresis,” would make possible blood banks, where blood could be stored for emergency use.
In 1914, Abel published these two discoveries in a series of three articles in the Journal of Pharmacology and Experimental Therapeutics, and he demonstrated his techniques in London, England, and Groningen, The Netherlands. Though he had suggested that his techniques could be used for medical purposes, he himself was interested mostly in continuing his biochemical research. He turned to other projects in pharmacology, such as the crystallization of insulin, and never returned to studying vividiffusion.
Refining the Technique
Georg Haas, a German biochemist working in Giessen, Germany, was also interested in dialysis; in 1915, he began to experiment with “blood washing.” After reading Abel’s 1914 writings, Haas tried substituting collodium for the celloidin that Abel had used as a filtering membrane and using commercially prepared heparin instead of the homemade hirudin Abel had used to prevent blood clotting. He then used this machine on a patient and found that it showed promise, but he knew that many technical problems had to be worked out before the procedure could be used on many patients.
In 1937, Willem Johan Kolff was a young physician at Groningen. Saddened to see patients die from kidney failure, he wanted to find a way to save others. Having heard his colleagues talk about the possibility of using dialysis on human patients, he decided to build a dialysis machine.
Kolff knew that cellophane was an excellent membrane for dialyzing, and that heparin was a good anticoagulant, but he also realized that his machine would need to be able to treat larger volumes of blood than Abel’s and Haas’s had. During World War II (1939-1945), with the help of the director of a nearby enamel factory, Kolff built an artificial kidney that was first tried on a patient on March 17, 1943. Between March, 1943, and July 21, 1944, Kolff used his secretly constructed dialysis machines on fifteen patients, of whom only one survived. He published the results of his research in Acta Medica Scandinavica. Even though most of his patients had not survived, he had collected information and developed the technique until he was sure dialysis would eventually work.
Kolff brought machines to Amsterdam and The Hague and encouraged other physicians to try them; meanwhile, he continued to study blood dialysis and to improve his machines.
In 1947, he brought improved machines to London and the United States. By the time he reached Boston, however, he had given away all of his machines. He did, however, explain the technique to John P.Merrill, a physician at the Harvard Medical School, who soon became the leading American developer of kidney dialysis and kidney-transplant surgery.
Kolff himself moved to the United States, where he became an expert not only in artificial kidneys but also in artificial hearts. He helped develop the Jarvik-7 artificial heart (named for its chief inventor, Robert Jarvik), which was implanted in a patient in 1982.
Impact
Abel’s work showed that the blood carried some substances that had not been previously known and led to the development of the first dialysis machine for humans. It also encouraged interest in the possibility of organ transplants.
After World War II, surgeons had tried to transplant kidneys from one animal to another, but after a few days the recipient began to reject the kidney and die. In spite of these failures, researchers in Europe and America transplanted kidneys in several patients, and they used artificial kidneys to take care of the patients who were waiting for transplants.
In 1954, Merrill—to whom Kolff had demonstrated an artificial kidney—successfully transplanted kidneys in identical twins. After immunosuppressant drugs (used to prevent the body from rejecting newly transplanted tissue) were discovered in 1962, transplantation surgery became much more practical. After kidney transplants became common, the artificial kidney became simply a way of keeping a person alive until a kidney donor could be found.
29 December 2008
Artificial insemination
The invention:
Practical techniques for the artificial insemination of farm animals that have revolutionized livestock breeding practices throughout the world.
The people behind the invention:
Lazzaro Spallanzani (1729-1799), an Italian physiologist
Ilya Ivanovich Ivanov (1870-1932), a Soviet biologist
R. W. Kunitsky, a Soviet veterinarian
Reproduction Without Sex
The tale is told of a fourteenth-century Arabian chieftain who sought to improve his mediocre breed of horses. Sneaking into the territory of a neighboring hostile tribe, he stimulated a prize stallion to ejaculate into a piece of cotton. Quickly returning home, he inserted this cotton into the vagina of his own mare, who subsequently gave birth to a high-quality horse. This may have been the first case of “artificial insemination,” the technique by which semen is introduced into the female reproductive tract without sexual contact.
The first scientific record of artificial insemination comes from Italy in the 1770’s.
Lazzaro Spallanzani was one of the foremost physiologists of his time, well known for having disproved the theory of spontaneous generation, which states that living organisms can spring “spontaneously” from lifeless matter. There was some disagreement at that time about the basic requirements for reproduction in animals. It was unclear if the sex act was necessary for an embryo to develop, or if it was sufficient that the sperm and eggs come into contact. Spallanzani began by studying animals in which union of the sperm and egg normally takes place outside the body of the female. He stimulated males and females to release their sperm and eggs, then mixed these sex cells in a glass dish. In this way, he produced young frogs, toads, salamanders, and silkworms.
Next, Spallanzani asked whether the sex act was also unnecessary for reproduction in those species in which fertilization normally takes place inside the body of the female. He collected semen that had been ejaculated by a male spaniel and, using a syringe, injected the semen into the vagina of a female spaniel in heat. Two
months later, she delivered a litter of three pups, which bore some resemblance to both the mother and the male that had provided the sperm.
It was in animal breeding that Spallanzani’s techniques were to have their most dramatic application. In the 1880’s, an English dog breeder, Sir Everett Millais, conducted several experiments on artificial insemination. He was interested mainly in obtaining offspring from dogs that would not normally mate with one another because of difference in size. He followed Spallanzani’s methods to produce
a cross between a short, low, basset hound and the much larger bloodhound.
Long-Distance Reproduction
Ilya Ivanovich Ivanov was a Soviet biologist who was commissioned by his government to investigate the use of artificial insemination on horses. Unlike previous workers, who had used artificial insemination to get around certain anatomical barriers to fertilization, Ivanov pioneered its use to reproduce
thoroughbred horses more effectively. His assistant in this work was the veterinarian R. W. Kunitsky.
In 1901, Ivanov founded the Experimental Station for the Artificial Insemination of Horses. As its director, he embarked on a series of experiments to devise the most efficient techniques for breeding these animals. Not content with the demonstration that the technique was scientifically feasible, he wished to ensure further that it could be practiced by Soviet farmers.
If sperm from a male were to be used to impregnate females in another location, potency would have to be maintained for a long time. Ivanov first showed that the secretions from the sex glands were not required for successful insemination; only the sperm itself was necessary. He demonstrated further that if a testicle were removed from a bull and kept cold, the sperm would remain alive.
More useful than preservation of testicles would be preservation
of the ejaculated sperm. By adding certain salts to the sperm-containing fluids, and by keeping these at cold temperatures, Ivanov was able to preserve sperm for long periods.
Ivanov also developed instruments to inject the sperm, to hold the vagina open during insemination, and to hold the horse in place during the procedure. In 1910, Ivanov wrote a practical textbook with technical instructions for the artificial insemination of horses.
He also trained some three hundred veterinary technicians in the use of artificial insemination, and the knowledge he developed quickly spread throughout the Soviet Union. Artificial insemination became the major means of breeding horses.
Until his death in 1932, Ivanov was active in researching many aspects of the reproductive biology of animals. He developed methods to treat reproductive diseases of farm animals and refined methods of obtaining, evaluating, diluting, preserving, and disinfecting sperm. He also began to produce hybrids between wild and domestic animals in the hope of producing new breeds that would be able to withstand extreme weather conditions better and that would be more resistant to disease.
His crosses included hybrids of ordinary cows with aurochs, bison, and yaks, as well as some more exotic crosses of zebras with horses.
Ivanov also hoped to use artificial insemination to help preserve species that were in danger of becoming extinct. In 1926, he led an expedition to West Africa to experiment with the hybridization of different species of anthropoid apes.
Impact
The greatest beneficiaries of artificial insemination have been dairy farmers. Some bulls are able to sire genetically superior cows that produce exceptionally large volumes of milk. Under natural conditions, such a bull could father at most a few hundred offspring in its lifetime. Using artificial insemination, a prize bull can inseminate ten to fifteen thousand cows each year. Since frozen sperm may be purchased through the mail, this also means that dairy farmers no longer need to keep dangerous bulls on the farm. Artificial insemination has become the main method of reproduction of dairy cows, with about 150 million cows (as of 1992) produced this way throughout the world.
In the 1980’s, artificial insemination gained added importance as a method of breeding rare animals. Animals kept in zoo cages, animals that are unable to take part in normal mating, may still produce sperm that can be used to inseminate a female artificially.
Some species require specific conditions of housing or diet for normal breeding to occur, conditions not available in all zoos. Such animals can still reproduce using artificial insemination.
17 December 2008
Artificial hormone
The invention:
Synthesized oxytocin, a small polypeptide hormone
from the pituitary gland that has shown how complex polypeptides
and proteins may be synthesized and used in medicine.
The people behind the invention:
Vincent du Vigneaud (1901-1978), an American biochemist and
winner of the 1955 Nobel Prize in Chemistry
Oliver Kamm (1888-1965), an American biochemist
Sir Edward Albert Sharpey-Schafer (1850-1935), an English
physiologist
Sir Henry Hallett Dale (1875-1968), an English physiologist and
winner of the 1936 Nobel Prize in Physiology or Medicine
John Jacob Abel (1857-1938), an American pharmacologist and
biochemist
12 December 2008
Artificial heart
The invention:
The first successful artificial heart, the Jarvik-7, has
helped to keep patients suffering from otherwise terminal heart
disease alive while they await human heart transplants.
The people behind the invention:
Robert Jarvik (1946- ), the main inventor of the Jarvik-7
William Castle DeVries (1943- ), a surgeon at the University
of Utah in Salt Lake City
Barney Clark (1921-1983), a Seattle dentist, the first recipient of
the Jarvik-7
Early Success
The Jarvik-7 artificial heart was designed and produced by researchers
at the University of Utah in Salt Lake City; it is named for
the leader of the research team, Robert Jarvik. An air-driven pump
made of plastic and titanium, it is the size of a human heart. It is made
up of two hollow chambers of polyurethane and aluminum, each
containing a flexible plastic membrane. The heart is implanted in a
human being but must remain connected to an external air pump by
means of two plastic hoses. The hoses carry compressed air to the
heart, which then pumps the oxygenated blood through the pulmonary
artery to the lungs and through the aorta to the rest of the body.
The device is expensive, and initially the large, clumsy air compressor
had to be wheeled from room to room along with the patient.
The device was new in 1982, and that same year Barney Clark, a
dentist from Seattle, was diagnosed as having only hours to live.
His doctor, cardiac specialist William Castle DeVries, proposed surgically
implanting the Jarvik-7 heart, and Clark and his wife agreed.
The Food and Drug Administration (FDA), which regulates the use
of medical devices, had already given DeVries and his coworkers
permission to implant up to seven Jarvik-7 hearts for permanent use.
The operation was performed on Clark, and at first it seemed quite
successful. Newspapers, radio, and television reported this medical
breakthrough: the first time a severely damaged heart had been replaced by a totally artificial heart. It seemed DeVries had proved that an artificial heart could be almost as good as a human heart.
Soon after Clark’s surgery, DeVries went on to implant the device in several other patients with serious heart disease. For a time, all of them survived the surgery. As a result, DeVries was offered a position
at Humana Hospital in Louisville, Kentucky. Humana offered
to pay for the first one hundred implant operations.
The Controversy Begins
In the three years after DeVries’s operation on Barney Clark,
however, doubts and criticism arose. Of the people who by then had
received the plastic and metal device as a permanent replacement
for their own diseased hearts, three had died (including Clark) and
four had suffered serious strokes. The FDA asked Humana Hospital
and Symbion (the company that manufactured the Jarvik-7) for
complete, detailed histories of the artificial-heart recipients.
It was determined that each of the patients who had died or been
disabled had suffered from infection. Life-threatening infection, or
“foreign-body response,” is a danger with the use of any artificial
organ. The Jarvik-7, with its metal valves, plastic body, and Velcro
attachments, seemed to draw bacteria like a magnet—and these
bacteria proved resistant to even the most powerful antibiotics.
By 1988, researchers had come to realize that severe infection was
almost inevitable if a patient used the Jarvik-7 for a long period of
time. As a result, experts recommended that the device be used for
no longer than thirty days.
Questions of values and morality also became part of the controversy
surrounding the artificial heart. Some people thought that it
was wrong to offer patients a device that would extend their lives
but leave them burdened with hardship and pain. At times DeVries
claimed that it was worth the price for patients to be able to live another
year; at other times, he admitted that if he thought a patient
would have to spend the rest of his or her life in a hospital, he would
think twice before performing the implant.
There were also questions about “informed consent”—the patient’s
understanding that a medical procedure has a high risk of
failure and may leave the patient in misery even if it succeeds.
Getting truly informed consent from a dying patient is tricky, because,
understandably, the patient is probably willing to try anything.
The Jarvik-7 raised several questions in this regard: Was the ordeal worth the risk? Was the patient’s suffering justifiable? Who should make the decision for or against the surgery: the patient, the researchers, or a government agency?
Also there was the issue of cost. Should money be poured into expensive,
high-technology devices such as the Jarvik heart, or should
it be reserved for programs to help prevent heart disease in the first
place? Expenses for each of DeVries’s patients had amounted to
about one million dollars.
Humana’s and DeVries’s earnings were criticized in particular.
Once the first one hundred free Jarvik-7 implantations had been
performed, Humana Hospital could expect to make large amounts
of money on the surgery. By that time, Humana would have so
much expertise in the field that, though the surgical techniques
could not be patented, it was expected to have a practical monopoly.
DeVries himself owned thousands of shares of stock in Symbion.
Many people wondered whether this was ethical.
Consequences
Given all the controversies, in December of 1985 a panel of experts
recommended that the FDA allow the experiment to continue, but only with careful monitoring. Meanwhile, cardiac transplantation was becoming easier and more common. By the end of 1985, almost twenty-six hundred patients in various countries had received human heart transplants, and 76 percent of these patients had survived
for at least four years. When the demand for donor hearts exceeded the supply, physicians turned to the Jarvik device and other artificial hearts to help see patients through the waiting period.
Experience with the Jarvik-7 made the world keenly aware of
how far medical science still is from making the implantable permanent
mechanical heart a reality. Nevertheless, the device was a
breakthrough in the relatively new field of artificial organs. Since
then, other artificial body parts have included heart valves, blood
vessels, and inner ears that help restore hearing to the deaf.
William C. DeVries
William Castle DeVries did not invent the artificial heart
himself; however, he did develop the procedure to implant it.
The first attempt took him seven and a half hours, and he
needed fourteen assistants. A success, the surgery made DeVries
one of the most talked-about doctors in the world.
DeVries was born in Brooklyn, New York, in 1943. His father,
a Navy physician, was killed in action a few months later, and
his mother, a nurse, moved with her son to Utah. As a child
DeVries showed both considerable mechanical aptitude and
athletic prowess. He won an athletic scholarship to the University
of Utah, graduating with honors in 1966. He entered the
state medical school and there met Willem Kolff, a pioneer in
designing and testing artificial organs. Under Kolff’s guidance,
DeVries began performing experimental surgeries on animals
to test prototype mechanical hearts. He finished medical school
in 1970 and from 1971 until 1979 was an intern and then a resident
in surgery at the Duke University Medical Center in North
Carolina.
DeVries returned to the University of Utah as an assistant
professor of cardiovascular and thoracic surgery. In the meantime,
Robert K. Jarvik had devised the Jarvik-7 artificial heart.
DeVries experimented, implanting it in animals and cadavers
until, following approval from the Food and Drug Administration,
Barney Clark agreed to be the first test patient. Clark died 112
days after the surgery, having never left the hospital. Although
controversy arose over the ethics and cost of the procedure,
more artificial heart implantations followed, many by DeVries.
Long administrative delays getting patients approved for
surgery at Utah frustrated DeVries, so he moved to Humana
Hospital-Audubon in Louisville, Kentucky, in 1984 and then
took a professorship at the University of Louisville. In 1988 he
left experimentation for a traditional clinical practice. The FDA
withdrew its approval for the Jarvik-7 in 1990.
In 1999 DeVries retired from practice, but not from medicine.
The next year he joined the Army Reserve and began teaching
surgery at the Walter Reed Army Medical Center.