20 February 2009
Bullet train
The invention: An ultrafast passenger railroad system capable of
moving passengers at speeds double or triple those of ordinary
trains.
The people behind the invention:
Ikeda Hayato (1899-1965), Japanese prime minister from 1960 to
1964, who pushed for the expansion of public expenditures
Shinji Sogo (1901-1971), the president of the Japanese National
Railways, the “father of the bullet train”
Building a Faster Train
By 1900, Japan had a world-class railway system, a logical result
of the country’s dense population and the needs of its modernizing
economy. After 1907, the government controlled the system
through the Japanese National Railways (JNR). In 1938, JNR engineers
first suggested the idea of a train that would travel 125 miles
per hour from Tokyo to the southern city of Shimonoseki. Construction
of a rapid train began in 1940 but was soon stopped because of
World War II.
The 311-mile railway between Tokyo and Osaka, the Tokaido
Line, has always been the major line in Japan. By 1957, a business express
along the line operated at an average speed of 57 miles per
hour, but the double-track line was rapidly reaching its transport capacity.
The JNR established two investigative committees to explore
alternative solutions. In 1958, the second committee recommended
the construction of a high-speed railroad on a separate double track,
to be completed in time for the Tokyo Olympics of 1964. The Railway
Technical Institute of the JNR concluded that it was feasible to
design a line that would operate at an average speed of about 130
miles per hour, cutting time for travel between Tokyo and Osaka
from six hours to three hours.
By 1962, about 17 miles of the proposed line were completed for
test purposes. During the next two years, prototype trains were
tested to correct flaws and make improvements in the design. The entire project was completed on schedule in July, 1964, with total construction
costs of more than $1 billion, double the original estimates.
The Speeding Bullet
Service on the Shinkansen, or New Trunk Line, began on October
1, 1964, ten days before the opening of the Olympic Games.
Commonly called the “bullet train” because of its shape and speed,
the Shinkansen was an instant success with the public, both in Japan
and abroad. As promised, the time required to travel between Tokyo
and Osaka was cut in half. Initially, the system provided daily
services of sixty trains consisting of twelve cars each, but the number
of scheduled trains was almost doubled by the end of the year.
The Shinkansen was able to operate at its unprecedented speed
because it was designed and operated as an integrated system,
making use of countless technological and scientific developments.
Tracks followed the standard gauge of 56.5 inches, rather than the
narrower gauge common in Japan. For extra strength, heavy welded rails were attached directly onto reinforced concrete slabs.
The minimum radius of a curve was 8,200 feet, except where sharper
curves were mandated by topography. In many ways similar to
modern airplanes, the railway cars were made airtight in order to
prevent ear discomfort caused by changes in pressure when trains
enter tunnels.
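As a rough physics check on why such gentle curves were specified, the short Python sketch below estimates the lateral (centripetal) acceleration of a train rounding an 8,200-foot curve at about 130 miles per hour. It ignores track cant and other refinements, so the result is only indicative.

    # Rough estimate of lateral acceleration on the minimum-radius curve.
    # Ignores superelevation (banking) of the track; illustrative only.
    FEET_PER_METER = 3.28084
    MPH_TO_MPS = 0.44704

    radius_m = 8200 / FEET_PER_METER        # about 2,499 m
    speed_mps = 130 * MPH_TO_MPS            # about 58.1 m/s

    lateral_accel = speed_mps ** 2 / radius_m
    print(f"Lateral acceleration: {lateral_accel:.2f} m/s^2 "
          f"(~{lateral_accel / 9.81:.2f} g)")

Keeping this figure low (here roughly 0.14 g) is what allowed sustained high speed; sharper curves would have forced the trains to slow down.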
The Shinkansen trains were powered by electric traction motors,
with four 185-kilowatt motors on each car—one motor attached to
each axle. This design had several advantages: It provided an even
distribution of axle load for reducing strain on the tracks; it allowed
the application of dynamic brakes (where the motor was used for
braking) on all axles; and it prevented the failure of one or two units
from interrupting operation of the entire train. The 25,000-volt electrical
current was carried by trolley wire to the cars, where it was
rectified into a pulsating current to drive the motors.
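To make the distributed-power figures concrete, here is a quick back-of-the-envelope calculation based only on the numbers quoted above (four 185-kilowatt motors per car, twelve cars per train); actual ratings varied across later Shinkansen series.

    # Illustrative arithmetic using the figures quoted above: four
    # 185-kilowatt traction motors per car, one per axle, twelve cars.
    MOTORS_PER_CAR = 4
    MOTOR_POWER_KW = 185
    CARS_PER_TRAIN = 12

    power_per_car_kw = MOTORS_PER_CAR * MOTOR_POWER_KW      # 740 kW per car
    train_power_kw = power_per_car_kw * CARS_PER_TRAIN      # 8,880 kW per trainset

    print(f"Traction power per car: {power_per_car_kw} kW")
    print(f"Traction power per 12-car train: {train_power_kw / 1000:.2f} MW")

Because the traction equipment is spread over every axle, no single heavy locomotive concentrates weight or power at one point of the train, which is the even axle-load distribution mentioned above.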
The Shinkansen system established a casualty-free record because
of its maintenance policies combined with its computerized
Centralized Traffic Control system. The control room at Tokyo Station
was designed to maintain timely information about the location
of all trains and the condition of all routes. Although train operators
had some discretion in determining speed, automatic brakes
also operated to ensure a safe distance between trains. At least once
each month, cars were thoroughly inspected; every ten days, an inspection
train examined the conditions of tracks, communication
equipment, and electrical systems.
Impact
Public usage of the Tokyo-Osaka bullet train increased steadily
because of the system’s high speed, comfort, punctuality, and superb
safety record. Businesspeople were especially happy that the
rapid service allowed them to make the round-trip without the necessity
of an overnight stay, and continuing modernization soon allowed
nonstop trains to make a one-way trip in two and one-half
hours, requiring speeds of 160 miles per hour in some stretches. By
the early 1970’s, the line was transporting a daily average of 339,000
passengers in 240 trains, meaning that a train departed from Tokyo
about every ten minutes. The popularity of the Shinkansen system quickly resulted in demands
for its extension into other densely populated regions. In
1972, a 100-mile stretch between Osaka and Okayama was opened
for service. By 1975, the line was further extended to Hakata on the
island of Kyushu, passing through the Kammon undersea tunnel.
The cost of this 244-mile stretch was almost $2.5 billion. In 1982,
lines were completed from Tokyo to Niigata and from Tokyo to
Morioka. By 1993, the system had grown to 1,134 miles of track.
Since high usage made the system extremely profitable, the sale of
the JNR to private companies in 1987 did not appear to produce adverse
consequences.
The economic success of the Shinkansen had a revolutionary effect
on thinking about the possibilities of modern rail transportation,
leading one authority to conclude that the line acted as “a
savior of the declining railroad industry.” Several other industrial
countries were stimulated to undertake large-scale railway projects;
France, especially, followed Japan’s example by constructing high-speed
electric railroads from Paris to Nice and to Lyon. By the mid-
1980’s, there were experiments with high-speed trains based on
magnetic levitation and other radical innovations, but it was not
clear whether such designs would be able to compete with the
Shinkansen model.
Bubble memory
The invention: An early nonvolatile medium for storing information
on computers.
The person behind the invention:
Andrew H. Bobeck (1926- ), a Bell Telephone Laboratories
scientist
Magnetic Technology
The fanfare over the commercial prospects of magnetic bubbles
was begun on August 8, 1969, by a report appearing in both The New
York Times and The Wall Street Journal. The early 1970’s would see the
anticipation mount (at least in the computer world) with each prediction
of the benefits of this revolution in information storage technology.
Although it was not disclosed to the public until August of 1969,
magnetic bubble technology had held the interest of a small group
of researchers around the world for many years. The organization
that probably can claim the greatest research advances with respect
to computer applications of magnetic bubbles is Bell Telephone
Laboratories (later part of American Telephone and Telegraph). Basic
research into the properties of certain ferrimagnetic materials
started at Bell Laboratories shortly after the end of World War II
(1939-1945).
Ferrimagnetic substances are typically magnetic iron oxides. Research
into the properties of these and related compounds accelerated
after the discovery of ferrimagnetic garnets in 1956 (these are a
class of ferrimagnetic oxide materials that have the crystal structure
of garnet). Ferrimagnetism is similar to ferromagnetism, the phenomenon
that accounts for the strong attraction of one magnetized
body for another. The ferromagnetic materials most suited for bubble
memories contain, in addition to iron, the element yttrium or a
metal from the rare earth series.
It was a fruitful collaboration between scientist and engineer,
between pure and applied science, that produced this promising breakthrough in data storage technology. In 1966, Bell Laboratories
scientist Andrew H. Bobeck and his coworkers were the first to realize
the data storage potential offered by the strange behavior of thin
slices of magnetic iron oxides under an applied magnetic field. The
first U.S. patent for a memory device using magnetic bubbles was
filed by Bobeck in the fall of 1966 and issued on August 5, 1969.
Bubbles Full of Memories
The three basic functional elements of a computer are the central
processing unit, the input/output unit, and memory. Most implementations
of semiconductor memory require a constant power
source to retain the stored data. If the power is turned off, all stored
data are lost. Memory with this characteristic is called “volatile.”
Disks and tapes, which are typically used for secondary memory,
are “nonvolatile.” Nonvolatile memory relies on the orientation of
magnetic domains, rather than on electrical currents, to retain stored data.
One can visualize how this works by analogy: take a group of
permanent bar magnets labeled with N for north at
one end and S for south at the other. If an arrow is painted starting
from the north end with the tip at the south end on each magnet, an
orientation can then be assigned to a magnetic domain (here one
whole bar magnet). Data are “stored” with these bar magnets by arranging
them in rows, some pointing up, some pointing down. Different
arrangements translate to different data. In the binary world
of the computer, all information is represented by two states. A
stored data item (known as a “bit,” or binary digit) is either on or off,
up or down, true or false, depending on the physical representation.
The “on” state is commonly labeled with the number 1 and the “off”
state with the number 0. This is the principle behind magnetic disk
and tape data storage.
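As a loose illustration of that binary encoding (not of any real bubble-memory interface), the Python sketch below treats a row of domains as bits, "up" for 1 and "down" for 0; the function names and representation are invented for the example.

    # Illustrative only: a row of magnetic domains modeled as bits.
    # "up" stands for 1 and "down" for 0 (in bubble terms: bubble
    # present = 1, bubble absent = 0).

    def store_byte(value):
        """Return the byte as eight domain orientations, most significant bit first."""
        return ["up" if (value >> i) & 1 else "down" for i in range(7, -1, -1)]

    def read_byte(domains):
        """Recover the stored byte from a list of domain orientations."""
        value = 0
        for orientation in domains:
            value = (value << 1) | (1 if orientation == "up" else 0)
        return value

    domains = store_byte(ord("K"))      # 75 -> 01001011
    print(domains)                      # ['down', 'up', 'down', 'down', 'up', 'down', 'up', 'up']
    print(chr(read_byte(domains)))      # K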
Now imagine a thin slice of a certain type of magnetic material in
the shape of a 3-by-5-inch index card. Under a microscope, using a
special source of light, one can see through this thin slice in many regions
of the surface. Darker, snakelike regions can also be seen, representing
domains of an opposite orientation (polarity) to the transparent
regions. If a weak external magnetic field is then applied by placing a permanent magnet of the same shape as the card on the
underside of the slice, a strange thing happens to the dark serpentine
pattern—the long domains shrink and eventually contract into
“bubbles,” tiny magnetized spots. Viewed from the side of the slice,
the bubbles are cylindrically shaped domains having a polarity opposite
to that of the material on which they rest. The presence or absence
of a bubble indicates either a 0 or a 1 bit. Data bits are stored by
moving the bubbles in the thin film. As long as the field is applied
by the permanent magnet substrate, the data will be retained. The
bubble is thus a nonvolatile medium for data storage.
Consequences
Magnetic bubble memory created quite a stir in 1969 with its
splashy public introduction. Most of the manufacturers of computer
chips immediately instituted bubble memory development projects.
Texas Instruments, Philips, Hitachi, Motorola, Fujitsu, and International
Business Machines (IBM) joined the race with Bell Laboratories
to mass-produce bubble memory chips. Texas Instruments
became the first major chip manufacturer to mass-produce bubble
memories in the mid-to-late 1970’s. By 1990, however, almost all the
research into magnetic bubble technology had shifted to Japan.
Hitachi and Fujitsu began to invest heavily in this area.
Mass production proved to be the most difficult task. Although
the materials it uses are different, the process of producing magnetic
bubble memory chips is similar to the process applied in producing
semiconductor-based chips such as those used for random access
memory (RAM). It is for this reason that major semiconductor manufacturers
and computer companies initially invested in this technology.
Lower fabrication yields and reliability issues plagued
early production runs, however, and, although these problems
have mostly been solved, gains in the performance characteristics of
competing conventional memories have limited the impact that
magnetic bubble technology has had on the marketplace. The materials
used for magnetic bubble memories are costlier and possess
more complicated structures than those used for semiconductor or
disk memory.
Speed and cost of materials are not the only bases for comparison. It is possible to perform some elementary logic with magnetic
bubbles. Conventional semiconductor-based memory offers storage
only. The capability of performing logic with magnetic bubbles
puts bubble technology far ahead of other magnetic technologies
with respect to functional versatility.
A small niche market for bubble memory developed in the 1980’s.
Magnetic bubble memory can be found in intelligent terminals, desktop
computers, embedded systems, test equipment, and similar microcomputer-
based systems.
Brownie camera
The invention: The first inexpensive and easy-to-use camera available
to the general public, the Brownie revolutionized photography
by making it possible for every person to become a photographer.
The people behind the invention:
George Eastman (1854-1932), founder of the Eastman Kodak
Company
Frank A. Brownell, a camera maker for the Kodak Company
who designed the Brownie
Henry M. Reichenbach, a chemist who worked with Eastman to
develop flexible film
William H. Walker, a Rochester camera manufacturer who
collaborated with Eastman
A New Way to Take Pictures
In early February of 1900, the first shipments of a new small box
camera called the Brownie reached Kodak dealers in the United
States and England. George Eastman, eager to put photography
within the reach of everyone, had directed Frank Brownell to design
a small camera that could be manufactured inexpensively but that
would still take good photographs.
Advertisements for the Brownie proclaimed that everyone—
even children—could take good pictures with the camera. The
Brownie was aimed directly at the children’s market, a fact indicated
by its box, which was decorated with drawings of imaginary
elves called “Brownies” created by the Canadian illustrator Palmer
Cox. Moreover, the camera cost only one dollar.
The Brownie was made of jute board and wood, with a hinged
back fastened by a sliding catch. It had an inexpensive two-piece
glass lens and a simple rotary shutter that allowed both timed and
instantaneous exposures to be made. With a lens aperture of approximately
f/14 and a shutter speed of approximately 1/50 of a second,
the Brownie was certainly capable of taking acceptable snapshots. It had no viewfinder; however, an optional clip-on reflecting
viewfinder was available. The camera came loaded with a six-exposure
roll of Kodak film that produced square negatives 2.5 inches on
a side. This film could be developed, printed, and mounted for forty
cents, and a new roll could be purchased for fifteen cents.
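For readers who want to relate those fixed settings to modern exposure terms, a rough calculation of the exposure value implied by approximately f/14 and 1/50 second is sketched below; it uses the standard formula EV = log2(N^2/t) and is only indicative, since the Brownie offered no exposure adjustment.

    import math

    # Rough exposure arithmetic for the approximate settings quoted above.
    aperture = 14          # approximate f-number
    shutter_time = 1 / 50  # approximate shutter speed in seconds

    exposure_value = math.log2(aperture ** 2 / shutter_time)
    print(f"EV = {exposure_value:.1f}")   # about 13.3

A fixed combination in this range, paired with the slow films of the period, generally favored well-lit outdoor scenes.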
George Eastman’s first career choice had been banking, but when
he failed to receive a promotion he thought he deserved, he decided
to devote himself to his hobby, photography. Having worked with a
rigorous wet-plate process, he knew why there were few amateur
photographers at the time—the whole process, from plate preparation
to printing, was too expensive and too much trouble. Even so,
he had already begun to think about the commercial possibilities of
photography; after reading of British experiments with dry-plate
technology, he set up a small chemical laboratory and came up with
a process of his own. The Eastman Dry Plate Company became one
of the most successful producers of gelatin dry plates.
Dry-plate photography had attracted more amateurs, but it was
still a complicated and expensive hobby. Eastman realized that the
number of photographers would have to increase considerably if
the market for cameras and supplies were to have any potential. In
the early 1880’s, Eastman first formulated the policies that would
make the Eastman Kodak Company so successful in years to come:
mass production, low prices, foreign and domestic distribution, and
selling through extensive advertising and by demonstration.
In his efforts to expand the amateur market, Eastman first tackled
the problem of the glass-plate negative, which was heavy, fragile,
and expensive to make. By 1884, his experiments with paper
negatives had been successful enough that he changed the name of
his company to The Eastman Dry Plate and Film Company. Since
flexible roll film needed some sort of device to hold it steady in the
camera’s focal plane, Eastman collaborated with William Walker
to develop the Eastman-Walker roll-holder. Eastman’s pioneering
manufacture and use of roll films led to the appearance on the market
in the 1880’s of a wide array of hand cameras from a number of
different companies. Such cameras were called “detective cameras”
because they were small and could be used surreptitiously. The
most famous of these, introduced by Eastman in 1888, was named
the “Kodak”—a word he coined to be terse, distinctive, and easily pronounced in any language. This camera’s simplicity of operation
was appealing to the general public and stimulated the growth of
amateur photography.
The Camera
The Kodak was a box about seven inches long and four inches
wide, with a one-speed shutter and a fixed-focus lens that produced
reasonably sharp pictures. It came loaded with enough roll film to
make one hundred exposures. The camera’s initial price of twenty-five
dollars included the cost of processing the first roll of film; the
camera also came with a leather case and strap. After the film was
exposed, the camera was mailed, unopened, to the company’s plant
in Rochester, New York, where the developing and printing were
done. For an additional ten dollars, the camera was reloaded and
sent back to the customer.
The Kodak was advertised in mass-market publications, rather
than in specialized photographic journals, with the slogan: “You
press the button, we do the rest.” With his introduction of a camera
that was easy to use and a service that eliminated the need to know
anything about processing negatives, Eastman revolutionized the
photographic market. Thousands of people no longer depended
upon professional photographers for their portraits but instead
learned to make their own. In 1892, the Eastman Dry Plate and Film
Company became the Eastman Kodak Company, and by the mid-
1890’s, one hundred thousand Kodak cameras had been manufactured
and sold, half of them in Europe by Kodak Limited.
Having popularized photography with the first Kodak, in 1900
Eastman turned his attention to the children’s market with the introduction
of the Brownie. The first five thousand cameras sent to
dealers were sold immediately; by the end of the following year, almost
a quarter of a million had been sold. The Kodak Company organized
Brownie camera clubs and held competitions specifically
for young photographers. The Brownie came with an instruction
booklet that gave children simple directions for taking successful
pictures, and “The Brownie Boy,” an appealing youngster who
loved photography, became a standard feature of Kodak’s advertisements.
Impact
Eastman followed the success of the first Brownie by introducing
several additional models between 1901 and 1917. Each was a more
elaborate version of the original. These Brownie box cameras were
on the market until the early 1930’s, and their success inspired other
companies to manufacture box cameras of their own. In 1906, the
Ansco company produced the Buster Brown camera in three sizes
that corresponded to Kodak’s Brownie camera range; in 1910 and
1914, Ansco made three more versions. The Seneca company’s
Scout box camera, in three sizes, appeared in 1913, and Sears Roebuck’s
Kewpie cameras, in five sizes, were sold beginning in 1916.
In England, the Houghtons company introduced its first Scout camera
in 1901, followed by another series of four box cameras in 1910
sold under the Ensign trademark. Other English manufacturers of
box cameras included the James Sinclair company, with its Traveller
Una of 1909, and the Thornton-Pickard company, with a Filma camera
marketed in four sizes in 1912.
After World War I ended, several series of box cameras were
manufactured in Germany by companies that had formerly concentrated
on more advanced and expensive cameras. The success of
box cameras in other countries, led by Kodak’s Brownie, undoubtedly
prompted this trend in the German photographic industry. The
Ernemann Film K series of cameras in three sizes, introduced in
1919, and the all-metal Trapp Little Wonder of 1922 are examples of
popular German box cameras.
In the early 1920’s, camera manufacturers began making box-camera
bodies from metal rather than from wood and cardboard.
Machine-formed metal was less expensive than the traditional hand-worked
materials. In 1924, Kodak’s two most popular Brownie sizes
appeared with aluminum bodies.
In 1928, Kodak Limited of England added two important new
features to the Brownie—a built-in portrait lens, which could be
brought in front of the taking lens by pressing a lever, and camera
bodies in a range of seven different fashion colors. The Beau
Brownie cameras, made in 1930, were the most popular of all the
colored box cameras. The work of Walter Dorwin Teague, a leading
American designer, these cameras had an Art Deco geometric pattern on the front panel, which was enameled in a color matching the
leatherette covering of the camera body. Several other companies,
including Ansco, again followed Kodak’s lead and introduced their
own lines of colored cameras.
In the 1930’s, several new box cameras with interesting features appeared,
many manufactured by leading film companies. In France, the
Lumiere Company advertised a series of box cameras—the Luxbox,
Scoutbox, and Lumibox—that ranged from a basic camera to one with
an adjustable lens and shutter. In 1933, the German Agfa company restyled
its entire range of box cameras, and in 1939, the Italian Ferrania
company entered the market with box cameras in two sizes. In 1932,
Kodak redesigned its Brownie series to take the new 620 roll film,
which it had just introduced. This film and the new Six-20 Brownies inspired
other companies to experiment with variations of their own;
some box cameras, such as the Certo Double-box, the Coronet Every
Distance, and the Ensign E-20 cameras, offered a choice of two picture
formats.
Another new trend was a move toward smaller-format cameras
using standard 127 roll film. In 1934, Kodak marketed the small
Baby Brownie. Designed by Teague and made from molded black
plastic, this little camera with a folding viewfinder sold for only one
dollar—the price of the original Brownie in 1900.
The Baby Brownie, the first Kodak camera made of molded plastic,
heralded the move to the use of plastic in camera manufacture.
Soon many others, such as the Altissa series of box cameras and the
Voigtlander Brilliant V/6 camera, were being made from this new material.
Later Trends
By the late 1930’s, flashbulbs had replaced flash powder for taking
pictures in low light; again, the Eastman Kodak Company led
the way in introducing this new technology as a feature on the inexpensive
box camera. The Falcon Press-Flash, marketed in 1939, was
the first mass-produced camera to have flash synchronization and
was followed the next year by the Six-20 Flash Brownie, which had a
detachable flash gun. In the early 1940’s, other companies, such as
Agfa-Ansco, introduced this feature on their own box cameras. In the years after World War II, the box camera evolved into an
eye-level camera, making it more convenient to carry and use.
Many amateur photographers, however, still had trouble handling paper-backed roll film and were taking their cameras back to dealers
to be unloaded and reloaded. Kodak therefore developed a new
system of film loading, using the Kodapak cartridge, which could
be mass-produced with a high degree of accuracy by precision plastic-
molding techniques. To load the camera, the user simply opened
the camera back and inserted the cartridge. This new film was introduced
in 1963, along with a series of Instamatic cameras designed
for its use. Both were immediately successful.
The popularity of the film cartridge ended the long history of the
simple and inexpensive roll film camera. The last English Brownie
was made in 1967, and the series of Brownies made in the United
States was discontinued in 1970. Eastman’s original marketing strategy
of simplifying photography in order to increase the demand for
cameras and film continued, however, with the public’s acceptance
of cartridge-loading cameras such as the Instamatic.
From the beginning, Eastman had recognized that there were
two kinds of photographers other than professionals. The first, he
declared, were the true amateurs who devoted time enough to acquire
skill in the complex processing procedures of the day. The second
were those who merely wanted personal pictures or memorabilia
of their everyday lives, families, and travels. The second class,
he observed, outnumbered the first by almost ten to one. Thus, it
was to this second kind of amateur photographer that Eastman had
appealed, both with his first cameras and with his advertising slogan,
“You press the button, we do the rest.” Eastman had done
much more than simply invent cameras and films; he had invented
a system and then developed the means for supporting that system.
This is essentially what the Eastman Kodak Company continued to
accomplish with the series of Instamatics and other descendants of
the original Brownie. In the decade between 1963 and 1973, for example,
approximately sixty million Instamatics were sold throughout
the world.
The research, manufacturing, and marketing activities of the
Eastman Kodak Company have been so complex and varied that no
one would suggest that the company’s prosperity rests solely on the
success of its line of inexpensive cameras and cartridge films, although
these have continued to be important to the company. Like
Kodak, however, most large companies in the photographic industry have expanded their research to satisfy the ever-growing demand
from amateurs. The amateurism that George Eastman recognized
and encouraged at the beginning of the twentieth century
thus still flourished at its end.
17 February 2009
Broadcaster guitar
The invention: The first commercially manufactured solid-body
electric guitar, the Broadcaster revolutionized the guitar industry
and changed the face of popular music.
The people behind the invention:
Leo Fender (1909-1991), designer of affordable and easily mass-produced
solid-body electric guitars
Les Paul (Lester William Polfuss, 1915- ), a legendary
guitarist and designer of solid-body electric guitars
Charlie Christian (1919-1942), an influential electric jazz
guitarist of the 1930’s
Early Electric Guitars
It has been estimated that between 1931 and 1937, approximately
twenty-seven hundred electric guitars and amplifiers were sold in
the United States. The Electro String Instrument Company, run
by Adolph Rickenbacker and his designer partners, George Beauchamp
and Paul Barth, produced two of the first commercially manufactured
electric guitars—the Rickenbacker A-22 and A-25—in
1931. The Rickenbacker models were what are known as “lap steel”
or Hawaiian guitars. A Hawaiian guitar is played with the instrument
lying flat across a guitarist’s knees. By the mid-1930’s, the Gibson
company had introduced an electric Spanish guitar, the ES-150.
Legendary jazz guitarist Charlie Christian made this model famous
while playing for Benny Goodman’s orchestra. Christian was the
first electric guitarist to be heard by a large American audience.
He became an inspiration for future electric guitarists, because he
proved that the electric guitar could have its own unique solo
sound. Along with Christian, the other electric guitar figures who
put the instrument on the musical map were blues guitarist T-Bone
Walker, guitarist and inventor Les Paul, and engineer and inventor
Leo Fender.
Early electric guitars were really no more than acoustic guitars,
with the addition of one or more pickups, which convert string vibrations to electrical signals that can be played through a speaker.
Amplification of a guitar made it a more assertive musical instrument.
The electrification of the guitar ultimately would make it
more flexible, giving it a more prominent role in popular music. Les
Paul, always a compulsive inventor, began experimenting with
ways of producing an electric solid-body guitar in the late 1930’s. In
1929, at the age of thirteen, he had amplified his first acoustic guitar.
Another influential inventor of the 1940’s was Paul Bigsby. He built
a prototype solid-body guitar for country music star Merle Travis in
1947. It was Leo Fender who revolutionized the electric guitar industry
by producing the first commercially viable solid-body electric
guitar, the Broadcaster, in 1948.
Leo Fender
Leo Fender was born in the Anaheim, California, area in 1909. As
a teenager, he began to build and repair guitars. By the 1930’s,
Fender was building and renting out public address systems for
group gatherings. In 1937, after short tenures of employment with
the Division of Highways and the U.S. Tire Company, he opened a
radio repair company in Fullerton, California. Always looking to
expand and invent new and exciting electrical gadgets, Fender and
Clayton Orr “Doc” Kauffman started the K & F Company in 1944.
Kauffman was a musician and a former employee of the Electro
String Instrument Company. The K & F Company lasted until 1946
and produced steel guitars and amplifiers. After that partnership
ended, Fender founded the Fender Electric Instruments Company.
With the help of George Fullerton, who joined the company in
1948, Fender developed the Fender Broadcaster. The body of the
Broadcaster was made of a solid plank of ash wood. The corners of
the ash body were rounded. There was a cutaway located under the
joint with the solid maple neck, making it easier for the guitarist to
access the higher frets. The maple neck was bolted to the body of the
guitar, which was unusual, since most guitar necks prior to the
Broadcaster had been glued to the body. Frets were positioned directly
into designed cuts made in the maple of the neck. The guitar
had two pickups.
The Fender Electric Instruments Company made fewer than one thousand Broadcasters. In 1950, the name of the guitar was changed
from the Broadcaster to the Telecaster, as the Gretsch company had
already registered the name Broadcaster for some of its drums and
banjos. Fender decided not to fight in court over use of the name.
Leo Fender has been called the Henry Ford of the solid-body
electric guitar, and the Telecaster became known as the Model T of
the industry. The early Telecasters sold for $189.50. Besides being inexpensive,
the Telecaster was a very durable instrument. Basically,
the Telecaster was a continuation of the Broadcaster. Fender did not
file for a patent on its unique bridge pickup until January 13, 1950,
and he did not file for a patent on the Telecaster’s unique body
shape until April 3, 1951.
In the music industry during the late 1940’s, it was important for
a company to unveil new instruments at trade shows. At this time,
there was only one important trade show, sponsored by the National
Association of Music Merchants. The Broadcaster was first
sprung on the industry at the 1948 trade show in Chicago. The industry
had seen nothing like this guitar ever before. This new guitar
existed only to be amplified; it was not merely an acoustic guitar
that had been converted.
Impact
The Telecaster, as it would be called after 1950, remained in continuous
production for more years than any other guitar of its type
and was one of the industry’s best sellers. From the beginning, it
looked and sounded unique. The electrified acoustic guitars had a
mellow woody tone, whereas the Telecaster had a clean twangy
tone. This tone made it popular with country and blues guitarists.
The Telecaster could also be played at higher volume than previous
electric guitars.
Because Leo Fender attempted something revolutionary by introducing
an electric solid-body guitar, there was no guarantee that
his business venture would succeed. Fender Electric Instruments
Company had fifteen employees in 1947. At times, during the early
years of the company, it looked as though Fender’s dreams would
not come to fruition, but the company persevered and grew. Between
1948 and 1955, as its workforce grew, the company was able to produce ten thousand Broadcaster/Telecaster guitars.
Fender had taken a big risk, but it paid off enormously. Between
1958 and the mid-1970’s, Fender produced more than 250,000 Telecasters.
Other guitar manufacturers were placed in a position of
having to catch up. Fender had succeeded in developing a process
by which electric solid-body guitars could be manufactured profitably
on a large scale.
Early Guitar Pickups
The first pickups used on a guitar can be traced back to the 1920’s
and the efforts of Lloyd Loar, but there was not strong interest on the
part of the American public for the guitar to be amplified. The public
did not become intrigued until the 1930’s. Charlie Christian’s
electric guitar performances with Benny Goodman woke up the
public to the potential of this new and exciting sound. It was not until
the 1950’s, though, that the electric guitar became firmly established.
Leo Fender was the right man in the right place. He could not
have known that his Fender guitars would help to usher in a whole
new musical landscape. Since the electric guitar was the newest
member of the family of guitars, it took some time for musical audiences
to fully appreciate what it could do. The electric solid-body
guitar has been called a dangerous, uncivilized instrument. The
youth culture of the 1950’s found in this new guitar a voice for their
rebellion. Fender unleashed a revolution not only in the construction
of a guitar but also in the way popular music would be approached
henceforth.
Because of the ever-increasing demand for the Fender product,
Fender Sales was established as a separate distribution company in
1953 by Don Randall. Fender Electric Instruments Company had fifteen
employees in 1947, but by 1955, the company employed fifty
people. By 1960, the number of employees had risen to more than
one hundred. Before Leo Fender sold the company to CBS on January
4, 1965, for $13 million, the company occupied twenty-seven
buildings and employed more than five hundred workers.
Always interested in finding new ways of designing a more nearly
perfect guitar, Leo Fender again came up with a remarkable guitar in
1954, with the Stratocaster. There was talk in the guitar industry that
Fender had gone too far with the introduction of the Stratocaster, but
it became a huge success because of its versatility. It was the first commercial
solid-body electric guitar to have three pickups and a vibrato
bar. It was also easier to play than the Telecaster because of its double
cutaway, contoured body, and scooped back. The Stratocaster sold
for $249.50. Since its introduction, the Stratocaster has undergone
some minor changes, but Fender and his staff basically got it right the
first time.
The Gibson company entered the solid-body market in 1952 with
the unveiling of the “Les Paul” model. After the Telecaster, the Les
Paul guitar was the next significant solid-body to be introduced. Les
Paul was a legendary guitarist who also had been experimenting
with electric guitar designs for many years. The Gibson designers
came up with a striking model that produced a thick rounded tone.
Over the years, the Les Paul model has won a loyal following.
The Precision Bass
In 1951, Leo Fender introduced another revolutionary guitar, the
Precision bass. At a cost of $195.50, the first electric bass would go on
to dominate the market. The Fender company has manufactured numerous
guitar models over the years, but the three that stand above
all others in the field are the Telecaster, the Precision bass, and the
Stratocaster. The Telecaster is considered to be more of a workhorse,
whereas the Stratocaster is thought of as the thoroughbred of electric
guitars. The Precision bass was in its own right a revolutionary guitar.
With a styling that had been copied from the Telecaster, the Precision
freed musicians from bulky oversized acoustic basses, which
were prone to feedback. The name Precision had meaning. Fender’s
electric bass made it possible, with its frets, for the precise playing of
notes; many acoustic basses were fretless. The original Precision bass
model was manufactured from 1951 to 1954. The next version lasted
from 1954 until June of 1957. The Precision bass that went into production
in June, 1957, with its split humbucking pickup, continued to
be the standard electric bass on the market into the 1990’s.
By 1964, the Fender Electric Instruments Company had grown
enormously. In addition to Leo Fender, a number of crucial people
worked for the organization, including George Fullerton and Don Randall. Fred Tavares joined the company’s research and development
team in 1953. In May, 1954, Forrest White became Fender’s
plant manager. All these individuals played vital roles in the success
of Fender, but the driving force behind the scene was always
Leo Fender. As Fender’s health deteriorated, Randall commenced
negotiations with CBS to sell the Fender company. In January, 1965,
CBS bought Fender for $13 million. Eventually, Leo Fender regained
his health, and he was hired as a technical adviser by CBS/Fender.
He continued in this capacity until 1970. He remained determined
to create more guitar designs of note. Although he never again produced
anything that could equal his previous success, he never
stopped trying to attain a new perfection of guitar design.
Fender died on March 21, 1991, in Fullerton, California. He had
suffered for years from Parkinson’s disease, and he died of complications
from the disease. He is remembered for his Broadcaster/
Telecaster, Precision bass, and Stratocaster, which revolutionized
popular music. Because the Fender company was able to mass produce
these and other solid-body electric guitars, new styles of music
that relied on the sound made by an electric guitar exploded onto
the scene. The electric guitar manufacturing business grew rapidly
after Fender introduced mass production. Besides American companies,
there are guitar companies that have flourished in Europe
and Japan.
The marriage between rock music and solid-body electric guitars
was initiated by the Fender guitars. The Telecaster, Precision bass,
and Stratocaster became synonymous with the explosive character
of rock and roll music. The multi-billion-dollar music business can
point to Fender as the pragmatic visionary who put the solid-body
electric guitar into the forefront of the musical scene. His innovative
guitars have been used by some of the most important guitarists of
the rock era, including Jimi Hendrix, Eric Clapton, and Jeff Beck.
More important, Fender guitars have remained bestsellers with
the public worldwide. Amateur musicians purchased them by the
thousands for their own entertainment. Owning and playing a
Fender guitar, or one of the other electric guitars that followed, allowed
these amateurs to feel closer to their musician idols. A large
market for sheet music from popular artists also developed.
In 1992, Fender was inducted into the Rock and Roll Hall of Fame. He is one of the few non-musicians ever to be inducted. The
sound of an electric guitar is the sound of exuberance, and since the
Broadcaster was first unveiled in 1948, that sound has grown to be
pervasive and enormously profitable.
Breeder reactor
The invention: A plant that generates electricity from nuclear fission
while creating new fuel.
The person behind the invention:
Walter Henry Zinn (1906-2000), the first director of the Argonne
National Laboratory
Producing Electricity with More Fuel
The discovery of nuclear fission involved both the discovery that
the nucleus of a uranium atom would split into two lighter elements
when struck by a neutron and the observation that additional neutrons,
along with a significant amount of energy, were released at
the same time. These neutrons might strike other atoms and cause
them to fission (split) also. That, in turn, would release more energy
and more neutrons, triggering a chain reaction as the process continued
to repeat itself, yielding a continuing supply of heat.
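As a toy illustration of how a chain reaction grows, the sketch below assumes that each fission triggers, on average, k further fissions; k is an assumed number chosen for the example, not a value from the text.

    # Toy chain-reaction model: each fission is assumed to trigger, on
    # average, k further fissions.  k = 2.0 is purely illustrative.
    k = 2.0
    fissions = 1
    for generation in range(1, 11):
        fissions *= k
        print(f"generation {generation}: ~{fissions:.0f} fissions")

In a power reactor the chain is held steady at roughly one new fission per fission, whereas a sustained value above one produces the runaway growth shown here.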
Besides the possibility that an explosive weapon could be constructed,
early speculation about nuclear fission included its use in
the generation of electricity. The occurrence of World War II (1939-
1945) meant that the explosive weapon would be developed first.
Both the weapons technology and the basic physics for the electrical
reactor had their beginnings in Chicago with the world’s first nuclear
chain reaction. The first self-sustaining nuclear chain reaction occurred
in a laboratory at the University of Chicago on December 2, 1942.
It also became apparent at that time that there was more than one
way to build a bomb. At this point, two paths were taken: One was
to build an atomic bomb with enough fissionable uranium in it to
explode when detonated, and another was to generate fissionable
plutonium and build a bomb. Energy was released in both methods,
but the second method also produced another fissionable substance.
The observation that plutonium and energy could be produced together
meant that it would be possible to design electric power systems
that would produce fissionable plutonium in quantities as large
as, or larger than, the amount of fissionable material consumed. This is the breeder concept, the idea that while using up fissionable uranium
235, another fissionable element can be made. The full development
of this concept for electric power was delayed until the end of
World War II.
Electricity from Atomic Energy
On August 1, 1946, the Atomic Energy Commission (AEC) was
established to control the development and explore the peaceful
uses of nuclear energy. The Argonne National Laboratory was assigned
the major responsibilities for pioneering breeder reactor
technologies. Walter Henry Zinn was the laboratory’s first director.
He led a team that planned a modest facility (Experimental Breeder
Reactor I, or EBR-I) for testing the validity of the breeding principle.
Planning for this had begun in late 1944 and grew as a natural extension
of the physics that developed the plutonium atomic bomb.
The conceptual design details for a breeder-electric reactor were
reasonably complete by late 1945. On March 1, 1949, the AEC announced
the selection of a site in Idaho for the National Reactor Station
(later to be named the Idaho National Engineering Laboratory,
or INEL). Construction at the INEL site in Arco, Idaho, began in October,
1949. Critical mass was reached in August, 1951. (“Critical
mass” is the amount and concentration of fissionable material required
to produce a self-sustaining chain reaction.)
The system was brought to full operating power, 1.1 megawatts
of thermal power, on December 19, 1951. The next day, December
20, at 11:00 a.m., steam was directed to a turbine generator. At 1:23
p.m., the generator was connected to the electrical grid at the site,
and “electricity flowed from atomic energy,” in the words of Zinn’s
console log of that day. Approximately 200 kilowatts of electric
power were generated most of the time that the reactor was run.
This was enough to satisfy the needs of the EBR-I facilities. The reactor
was shut down in 1964 after five years of use primarily as a test
facility. It had also produced the first pure plutonium.
With the first fuel loading, a conversion ratio of 1.01 was achieved,
meaning that more new fuel was generated than was consumed by
about 1 percent. When later fuel loadings were made with plutonium,
the conversion ratios were more favorable, reaching as high as 1.27. EBR-I was the first reactor to generate its own fuel and the
first power reactor to use plutonium for fuel.
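To put the quoted numbers in perspective, the sketch below works out the rough thermal-to-electric fraction of EBR-I and what the stated conversion ratios mean; it uses only the figures given in the text.

    # Back-of-the-envelope figures drawn from the numbers quoted above.
    thermal_power_mw = 1.1     # full thermal power of EBR-I
    electric_power_kw = 200    # electric power generated most of the time

    # Rough thermal-to-electric conversion fraction of the small test plant.
    efficiency = (electric_power_kw / 1000) / thermal_power_mw
    print(f"Approximate conversion efficiency: {efficiency:.0%}")   # ~18%

    # Conversion (breeding) ratio: fissile atoms created per fissile atom
    # consumed.  A ratio above 1.0 means more fuel is made than is burned.
    for ratio in (1.01, 1.27):
        print(f"Conversion ratio {ratio}: {ratio - 1.0:.0%} more fuel created than consumed")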
The use of EBR-I also included pioneering work on fuel recovery
and reprocessing. During its five-year lifetime, EBR-I operated with
four different fuel loadings, each designed to establish specific
benchmarks of breeder technology. This reactor was seen as the first
in a series of increasingly large reactors in a program designed to
develop breeder technology. The reactor was replaced by EBR-II,
which had been proposed in 1953 and was constructed from 1955 to
1964. EBR-II was capable of producing 20 megawatts of electrical
power. It was approximately fifty times more powerful than EBR-I
but still small compared to light-water commercial reactors of 600 to
1,100 megawatts in use toward the end of the twentieth century.
Consequences
The potential for peaceful uses of nuclear fission was dramatized
with the start-up of EBR-I in 1951: It was the first in the world
to produce electricity, while also being the pioneer in a breeder reactor
program. The breeder program was not the only reactor program
being developed, however, and it eventually gave way to the
light-water reactor design for use in the United States. Still, if energy
resources fall into short supply, it is likely that the technologies first
developed with EBR-I will find new importance. In France and Japan,
commercial reactors make use of breeder reactor technology;
these reactors require extensive fuel reprocessing.
Following the completion of tests with plutonium loading in 1964,
EBR-I was shut down and placed in standby status. In 1966, it was declared
a national historical landmark under the stewardship of the
U.S. Department of the Interior. The facility was opened to the public
in June, 1975.
12 February 2009
Blood transfusion
The invention: A technique that greatly enhanced surgery patients’
chances of survival by replenishing the blood they lose in
surgery with a fresh supply.
The people behind the invention:
Charles Drew (1904-1950), American pioneer in blood
transfusion techniques
George Washington Crile (1864-1943), an American surgeon,
author, and brigadier general in the U.S. Army Medical
Officers’ Reserve Corps
Alexis Carrel (1873-1944), a French surgeon
Samuel Jason Mixter (1855-1923), an American surgeon
Nourishing Blood Transfusions
It is impossible to say when and where the idea of blood transfusion
first originated, although descriptions of this procedure are
found in ancient Egyptian and Greek writings. The earliest documented
case of a blood transfusion is that of Pope Innocent VIII. In
April, 1492, the pope, who was gravely ill, was transfused with the
blood of three young boys. As a result, all three boys died without
bringing any relief to the pope.
In the centuries that followed, there were occasional descriptions
of blood transfusions, but it was not until the middle of the seventeenth
century that the technique gained popularity following the
English physician and anatomist William Harvey’s discovery of the
circulation of the blood in 1628. In the medical thought of those
times, blood transfusion was considered to have a nourishing effect
on the recipient. In many of those experiments, the human recipient
received animal blood, usually from a lamb or a calf. Blood transfusion
was tried as a cure for many different diseases, mainly those
that caused hemorrhages, as well as for other medical problems and
even for marital problems.
Blood transfusions were a dangerous procedure, causing many
deaths of both donor and recipient as a result of excessive blood loss, infection, passage of blood clots into the circulatory systems of
the recipients, passage of air into the blood vessels (air embolism),
and transfusion reaction as a result of incompatible blood types. In
the mid-nineteenth century, blood transfusions from animals to humans
stopped after it was discovered that the serum of one species
agglutinates and dissolves the blood cells of other species. A sharp
drop in the use of blood transfusion came with the introduction of
physiologic salt solution in 1875. Infusion of salt solution was simple
and was safer than blood transfusion.
Direct-Connection Blood Transfusions
In 1898, when George Washington Crile began his work on blood
transfusions, the major obstacle he faced was solving the problem of
blood clotting during transfusions. He realized that salt solutions
were not helpful in severe cases of blood loss, when there is a need to
restore the patient to consciousness, steady the heart action, and raise
the blood pressure. At that time, he was experimenting with indirect
blood transfusions by drawing the blood of the donor into a vessel,
then transferring it into the recipient’s vein by tube, funnel, and cannula,
the same technique used in the infusion of saline solution.
The solution to the problem of blood clotting came in 1902 when
Alexis Carrel developed the technique of surgically joining blood
vessels without exposing the blood to air or germs, either of which
can lead to clotting. Crile learned this technique from Carrel and
used it to join the peripheral artery in the donor to a peripheral vein
of the recipient. Since the transfused blood remained sealed in the
inner lining of the vessels, blood clotting did not occur.
The first human blood transfusion of this type was performed by
Crile in December, 1905. The patient, a thirty-five-year-old woman,
was transfused by her husband but died a few hours after the procedure.
The second, but first successful, transfusion was performed on
August 8, 1906. The patient, a twenty-three-year-old male, suffered
from severe hemorrhaging following surgery to remove kidney
stones. After all attempts to stop the bleeding were exhausted with
no results, and the patient was dangerously weak, transfusion was
considered as a last resort. One of the patient’s brothers was the donor. A few days later, another transfusion was done. This time, too, he
showed remarkable improvement, which continued until his complete
recovery.
For his first transfusions, Crile used the Carrel suture method,
which required using very fine needles and thread. It was a very
delicate and time-consuming procedure. At the suggestion of Samuel
Jason Mixter, Crile developed a new method using a short tubal
device with an attached handle to connect the blood vessels. By this
method, 3 or 4 centimeters of the vessels to be connected were surgically
exposed, clamped, and cut, just as under the previous method.
Yet, instead of suturing of the blood vessels, the recipient’s vein was
passed through the tube and then cuffed back over the tube and tied
to it. Then the donor’s artery was slipped over the cuff. The clamps
were opened, and blood was allowed to flow from the donor to the
recipient. In order to accommodate different-sized blood vessels,
tubes of four different sizes were made, ranging in diameter from
1.5 to 3 millimeters.
Impact
Crile’s method was the preferred method of blood transfusion
for a number of years. Following the publication of his book on
transfusion, a number of modifications to the original method were
published in medical journals. In 1913, Edward Lindeman developed
a method of transfusing blood simply by inserting a needle
through the patient’s skin and into a surface vein, making it for the
first time a nonsurgical method. This method allowed one to measure
the exact quantity of blood transfused. It also allowed the donor
to serve in multiple transfusions. This development opened the
field of transfusions to all physicians. Lindeman’s needle and syringe
method also eliminated another major drawback of direct
blood transfusion: the need to have both donor and recipient right
next to each other.
Birth control pill
The invention: An orally administered drug that inhibits ovulation
in women, thereby greatly reducing the chance of pregnancy.
The people behind the invention:
Gregory Pincus (1903-1967), an American biologist
Min-Chueh Chang (1908-1991), a Chinese-born reproductive
biologist
John Rock (1890-1984), an American gynecologist
Celso-Ramon Garcia (1921- ), a physician
Edris Rice-Wray (1904- ), a physician
Katherine Dexter McCormick (1875-1967), an American
millionaire
Margaret Sanger (1879-1966), an American activist
An Ardent Crusader
Margaret Sanger was an ardent crusader for birth control and
family planning. Having decided that a foolproof contraceptive was
necessary, Sanger met with her friend, the wealthy socialite Katherine
Dexter McCormick. A 1904 graduate in biology from the Massachusetts
Institute of Technology, McCormick had the knowledge
and the vision to invest in biological research. Sanger arranged a
meeting between McCormick and Gregory Pincus, head of the
Worcester Institutes of Experimental Biology. After listening to Sanger’s
pleas for an effective contraceptive and McCormick’s offer of financial
backing, Pincus agreed to focus his energies on finding a pill
that would prevent pregnancy.
Pincus organized a team to conduct research on both laboratory
animals and humans. The laboratory studies were conducted under
the direction of Min-Chueh Chang, a Chinese-born scientist who
had been studying sperm biology, artificial insemination, and in vitro
fertilization. The goal of his research was to see whether pregnancy
might be prevented by manipulation of the hormones usually
found in a woman. It was already known that there was one time when a woman
could not become pregnant—when she was already pregnant. In
1921, Ludwig Haberlandt, an Austrian physiologist, had transplanted
the ovaries from a pregnant rabbit into a nonpregnant one.
The latter failed to produce ripe eggs, showing that some substance
from the ovaries of a pregnant female prevents ovulation. This substance
was later identified as the hormone progesterone by George
W. Corner, Jr., and Willard M. Allen in 1928.
If progesterone could inhibit ovulation during pregnancy, maybe
progesterone treatment could prevent ovulation in nonpregnant females
as well. In 1937, this was shown to be the case by scientists
from the University of Pennsylvania, who prevented ovulation in
rabbits with injections of progesterone. It was not until 1951, however,
when Carl Djerassi and other chemists devised inexpensive
ways of producing progesterone in the laboratory, that serious consideration
was given to the medical use of progesterone. The synthetic
version of progesterone was called “progestin.”
Testing the Pill
In the laboratory, Chang tried more than two hundred different
progesterone and progestin compounds, searching for one that
would inhibit ovulation in rabbits and rats. Finally, two compounds
were chosen: progestins derived from the root of a wild Mexican
yam. Pincus arranged for clinical tests to be carried out by Celso-
Ramon Garcia, a physician, and John Rock, a gynecologist.
Rock had already been conducting experiments with progesterone
as a treatment for infertility. The treatment was effective in some
women but required that large doses of expensive progesterone be
injected daily. Rock was hopeful that the synthetic progestin that
Chang had found effective in animals would be helpful in infertile
women as well. With Garcia and Pincus, Rock treated another
group of fifty infertile women with the synthetic progestin. After
treatment ended, seven of these previously infertile women became
pregnant within half a year. Garcia, Pincus, and Rock also took several
physiological measurements of the women while they were
taking the progestin and were able to conclude that ovulation did
not occur while the women were taking the progestin pill. Having shown that the hormone could effectively prevent ovulation
in both animals and humans, the investigators turned their attention
back to birth control. They were faced with several problems:
whether side effects might occur in women using progestins for a
long time, and whether women would remember to take the pill day
after day, for months or even years. To solve these problems, the birth
control pill was tested on a large scale. Because of legal problems in
the United States, Pincus decided to conduct the test in Puerto Rico.
The test started in April of 1956. Edris Rice-Wray, a physician,
was responsible for the day-to-day management of the project. As
director of the Puerto Rico Family Planning Association, she had
seen firsthand the need for a cheap, reliable contraceptive. The
women she recruited for the study were married women from a
low-income population living in a housing development in Río
Piedras, a suburb of San Juan. Word spread quickly, and soon
women were volunteering to take the pill that would prevent pregnancy.
In the first study, 221 women took a pill containing 10 milligrams
of progestin and 0.15 milligrams of estrogen. (The estrogen
was added to help control breakthrough bleeding.)
Results of the test were reported in 1957. Overall, the pill proved
highly effective in preventing conception. None of the women
who took the pill according to directions became pregnant, and
most women who wanted to get pregnant after stopping the pill
had no difficulty. Nevertheless, 17 percent of the women had some
unpleasant reactions, such as nausea or dizziness. The scientists
believed that these mild side effects, as well as one death from congestive
heart failure, were unrelated to the use of the pill.
Even before the final results were announced, additional field
tests were begun. In 1960, the U.S. Food and Drug Administration
(FDA) approved the use of the pill developed by Pincus and his collaborators
as an oral contraceptive.
Consequences
Within two years of approval by the FDA, more than a million
women in the United States were using the birth control pill. New
contraceptives were developed in the 1960’s and 1970’s, but the
birth control pill remains the most widely used method of preventing
pregnancy. More than 60 million women use the pill worldwide.
The greatest impact of the pill has been in the social and political
world. Before Sanger began the push for the pill, birth control was often
regarded as socially immoral and was often illegal as well. Women in
those post-World War II years were expected to have a lifelong career
as a mother to their many children.
With the advent of the pill, a radical change occurred in society's
attitude toward women's work.
Women had increased
freedom to work and enter careers previously closed to them
because of fears that they might get pregnant. Women could control
more precisely when they would get pregnant and how many children
they would have. The women’s movement of the 1960’s—with its
change to more liberal social and sexual values—gained much of its
strength from the success of the birth control pill.
10 February 2009
BINAC computer
The invention: The first stored-program electronic digital computer
completed in the United States.
The people behind the invention:
John Presper Eckert (1919-1995), an American electrical engineer
John W. Mauchly (1907-1980), an American physicist
John von Neumann (1903-1957), a Hungarian American
mathematician
Alan Mathison Turing (1912-1954), an English mathematician
Computer Evolution
In the 1820’s, there was a need for error-free mathematical and
astronomical tables for use in navigation, unreliable versions of
which were being produced by human “computers.” The problem
moved English mathematician and inventor Charles Babbage to design
and partially construct some of the earliest prototypes of modern
computers, with substantial but inadequate funding from the
British government. In the 1880’s, the search by the U.S. Bureau of
the Census for a more efficient method of compiling the 1890 census
led American inventor Herman Hollerith to devise a punched-card
calculator, a machine that reduced by several years the time required
to process the data.
The emergence of modern electronic computers began during
World War II (1939-1945), when there was an urgent need in the
American military for reliable and quickly produced mathematical
tables that could be used to aim various types of artillery. The calculation
of very complex tables had progressed somewhat since
Babbage’s day, and the human computers were being assisted by
mechanical calculators. Still, the growing demand for increased accuracy
and efficiency was pushing the limits of these machines.
Finally, in 1946, following three years of intense work at the University
of Pennsylvania’s Moore School of Engineering, John Presper
Eckert and John W. Mauchly presented their solution to the problems
in the form of the Electronic Numerical Integrator and Calculator (ENIAC), the world's first electronic general-purpose digital
computer.
The ENIAC, built under a contract with the Army’s Ballistic Research
Laboratory, became a great success for Eckert and Mauchly,
but even before it was completed, they were setting their sights on
loftier targets. The primary drawback of the ENIAC was the great
difficulty involved in programming it. Whenever the operators
needed to instruct the machine to shift from one type of calculation
to another, they had to reset a vast array of dials and switches, unplug
and replug numerous cables, and make various other adjustments
to the multiple pieces of hardware involved. Such a mode of
operation was deemed acceptable for the ENIAC because, in computing
firing tables, it would need reprogramming only occasionally.
Yet if instructions could be stored in a machine’s memory, along
with the data, such a machine would be able to handle a wide range
of calculations with ease and efficiency.
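To make the stored-program idea concrete, the sketch below (in Python, with an invented three-instruction machine; none of this reflects the actual EDVAC or BINAC instruction set) shows a computer whose program sits in the same memory as its data, so switching to a new calculation means loading new memory contents rather than rewiring hardware.

# A toy stored-program machine: instructions and data share one memory, so a
# new computation is just new memory contents, not new wiring. The three-
# instruction design below is invented for this sketch; it is not the EDVAC
# or BINAC instruction set.
def run(memory):
    acc, pc = 0, 0                      # accumulator and program counter
    while True:
        op, addr = memory[pc]           # fetch the instruction stored at pc
        pc += 1
        if op == "LOAD":                # copy a memory cell into the accumulator
            acc = memory[addr]
        elif op == "ADD":               # add a memory cell to the accumulator
            acc += memory[addr]
        elif op == "STORE":             # write the accumulator back to memory
            memory[addr] = acc
        elif op == "HALT":
            return memory

# Cells 0-3 hold the program; cells 4-6 hold the data it operates on.
memory = {
    0: ("LOAD", 4), 1: ("ADD", 5), 2: ("STORE", 6), 3: ("HALT", 0),
    4: 2, 5: 3, 6: 0,
}
print(run(memory)[6])                   # prints 5, the sum left in cell 6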
The Turing Concept
The idea of a stored-program computer had first appeared in a
paper published by English mathematician Alan Mathison Turing
in 1937. In this paper, Turing described a hypothetical machine of
quite simple design that could be used to solve a wide range of logical
and mathematical problems. One significant aspect of this imaginary
Turing machine was that the tape that would run through it
would contain both information to be processed and instructions on
how to process it. The tape would thus be a type of memory device,
storing both the data and the program as sets of symbols that the
machine could “read” and understand. Turing never attempted to
construct this machine, and it was not until 1946 that he developed a
design for an electronic stored-program computer, a prototype of
which was built in 1950.
In the meantime, John von Neumann, a Hungarian American
mathematician acquainted with Turing’s ideas, joined Eckert and
Mauchly in 1944 and contributed to the design of ENIAC’s successor,
the Electronic Discrete Variable Automatic Computer (EDVAC), another
project financed by the Army. The EDVAC was the first computer
designed to incorporate the concept of the stored program.
In March of 1946, Eckert and Mauchly, frustrated by a controversy
over patent rights for the ENIAC, resigned from the
Moore School. Several months later, they formed the Philadelphia-based
Electronic Control Company on the strength of a contract
from the National Bureau of Standards and the Census Bureau to
build a much grander computer, the Universal Automatic Computer
(UNIVAC). They thus abandoned the EDVAC project, which
was finally completed by the Moore School in 1952, but they incorporated
the main features of the EDVAC into the design of the
UNIVAC.
Building the UNIVAC, however, proved to be much more involved
and expensive than anticipated, and the funds provided by
the original contract were inadequate. Eckert and Mauchly, therefore,
took on several other smaller projects in an effort to raise
funds. On October 9, 1947, they signed a contract with the Northrop
Corporation of Hawthorne, California, to produce a relatively small
computer to be used in the guidance system of a top-secret missile
called the Snark, which Northrop was building for the Air Force.
This computer, the Binary Automatic Computer (BINAC), turned
out to be Eckert and Mauchly’s first commercial sale and the first
stored-program computer completed in the United States.
The BINAC was designed to be at least a preliminary version of a
compact, airborne computer. It had two main processing units.
These contained a total of fourteen hundred vacuum tubes, a drastic
reduction from the eighteen thousand used in the ENIAC. There
were also two memory units, as well as two power supplies, an input
converter unit, and an input console, which used either a typewriter
keyboard or an encoded magnetic tape (the first time such
tape was used for computer input). Because of its dual processing,
memory, and power units, the BINAC was actually two computers,
each of which would continually check its results against those of
the other in an effort to identify errors.
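The dual-unit arrangement amounts to running the same computation twice in lockstep and comparing the results after every step; the fragment below is only a schematic of that idea in Python, with a made-up step function, not a description of BINAC's circuitry.

# Schematic of dual-unit checking: run the same computation twice, step by
# step, and flag any disagreement. The step function and values are made up;
# this illustrates the idea only, not BINAC's hardware.
def checked_run(step, state_a, state_b, n_steps):
    for i in range(n_steps):
        state_a = step(state_a)
        state_b = step(state_b)
        if state_a != state_b:          # the two units should always agree
            raise RuntimeError(f"mismatch detected at step {i}")
    return state_a

# Example: both "units" accumulate a running count, cross-checked at each step.
print(checked_run(lambda s: s + 1, 0, 0, 10))   # 10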
The BINAC became operational in August, 1949. Public demonstrations
of the computer were held in Philadelphia from August 18
through August 20.
Impact
The design embodied in the BINAC is the real source of its significance.
It demonstrated successfully the benefits of the dual processor
design for minimizing errors, a feature adopted in many subsequent
computers. It showed the suitability of magnetic tape as an
input-output medium. Its most important new feature was its ability
to store programs in its relatively spacious memory, the principle
that Eckert, Mauchly, and von Neumann had originally designed
into the EDVAC. In this respect, the BINAC was a direct descendant
of the EDVAC.
In addition, the stored-program principle gave electronic computers
new powers, quickness, and automatic control that, as they
have continued to grow, have contributed immensely to the aura of
intelligence often associated with their operation.
The BINAC successfully demonstrated some of these impressive
new powers in August of 1949 to eager observers from a number of
major American corporations. It helped to convince many influential
leaders of the commercial segment of society of the promise of
electronic computers. In doing so, the BINAC helped to ensure the
further evolution of computers.
See also Apple II computer; Colossus computer; ENIAC computer;
IBM Model 1401 computer; Personal computer; Supercomputer;
UNIVAC computer.
Bathysphere
The invention: The first successful chamber for manned deep-sea
diving missions.
The people behind the invention:
William Beebe (1877-1962), an American naturalist and curator
of ornithology
Otis Barton (1899- ), an American engineer
John Tee-Van (1897-1967), an American general associate with
the New York Zoological Society
Gloria Hollister Anable (1903?-1988), an American research
associate with the New York Zoological Society
Inner Space
Until the 1930’s, the vast depths of the oceans had remained
largely unexplored, although people did know something of the
ocean’s depths. Soundings and nettings of the ocean bottom had
been made many times by a number of expeditions since the 1870’s.
Diving helmets had allowed humans to descend more than 91 meters
below the surface, and the submarine allowed them to reach a
depth of nearly 120 meters. There was no firsthand knowledge,
however, of what it was like in the deepest reaches of the ocean: inner
space.
The person who gave the world the first account of life at great
depths was William Beebe. When he announced in 1926 that he was
attempting to build a craft to explore the ocean, he was already a
well-known naturalist. Although his only degrees had been honorary
doctorates, he was graduated as a special student in the Department
of Zoology of Columbia University in 1898. He began his lifelong
association with the New York Zoological Society in 1899.
It was during a trip to the Galápagos Islands off the west coast of
South America that Beebe turned his attention to oceanography. He
became the first scientist to use a diving helmet in fieldwork, swimming
in the shallow waters. He continued this shallow-water work
at the new station he established in 1928, with the permission of English authorities, on the tiny island of Nonesuch in the Bermudas.
Beebe realized, however, that he had reached the limits of the current
technology and that to study the animal life of the ocean depths
would require a new approach.
A New Approach
While he was considering various cylindrical designs for a new
deep-sea exploratory craft, Beebe was introduced to Otis Barton.
Barton, a young New Englander who had been trained as an engineer
at Harvard University, had turned to the problems of ocean
diving while doing postgraduate work at Columbia University. In
December, 1928, Barton brought his blueprints to Beebe. Beebe immediately
saw that Barton’s design was what he was looking for,
and the two went ahead with the construction of Barton’s craft.
The “bathysphere,” as Beebe named the device, weighed 2,268
kilograms and had a diameter of 1.45 meters and steel walls 3.8 centimeters
thick. The door, weighing 180 kilograms, would be fastened
over a manhole with ten bolts. Four windows, made of fused
quartz, were ordered from the General Electric Company at a cost of
$500 each. A 250-watt water spotlight lent by the Westinghouse
Company provided the exterior illumination, and a telephone lent
by the Bell Telephone Laboratory provided a means of communicating
with the surface. The breathing apparatus consisted of two oxygen
tanks that allowed 2 liters of oxygen per minute to escape into
the sphere. During the dive, the carbon dioxide and moisture were
removed, respectively, by trays containing soda lime and calcium
chloride. A winch would lower the bathysphere on a steel cable.
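A rough hydrostatic estimate helps explain the heavy steel walls and fused-quartz windows; the short calculation below uses standard seawater values (assumed here, not figures from Beebe's records) to gauge the pressure at the dive depths reported later in this article.

# Rough hydrostatic pressure at the bathysphere's dive depths, using nominal
# seawater values (assumed here, not figures from Beebe's records).
rho_seawater = 1025.0   # kg per cubic meter
g = 9.81                # meters per second squared

for depth_m in (244, 435, 923):
    pressure_pa = rho_seawater * g * depth_m
    print(f"{depth_m:4d} m : about {pressure_pa / 101325:5.1f} atmospheres")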
In early July, 1930, after several test dives, the first manned dive
commenced. Beebe and Barton descended to a depth of 244 meters.
A short circuit in one of the switches showered them with sparks
momentarily, but the descent was largely a success. Beebe and
Barton had descended farther than any human.
Two more days of diving yielded a final dive record of 435 meters
below sea level. Beebe and the other members of his staff (ichthyologist
John Tee-Van and zoologist Gloria Hollister Anable) saw many
species of fish and other marine life that previously had been seen
only after being caught in nets. These first dives proved that an undersea exploratory craft had potential value, at least for deep water.
After 1932, the bathysphere went on display at the Century of Progress
Exhibition in Chicago.
In late 1933, the National Geographic Society offered to sponsor
another series of dives. Although a new record was not a stipulation,
Beebe was determined to supply one. The bathysphere was
completely refitted before the new dives.
An unmanned test dive to 920 meters was made on August 7,
1934, once again off Nonesuch Island. Minor adjustments were
made, and on the morning of August 11, the first dive commenced,
attaining a depth of 765 meters and recording a number of new scientific
observations. Several days later, on August 15, the weather
was again right for the dive.
This dive also paid rich dividends in the number of species of
deep-sea life observed. Finally, with only a few turns of cable left on
the winch spool, the bathysphere reached a record depth of 923 meters—
almost a kilometer below the ocean's surface.
Impact
Barton continued to work on the bathysphere design for some
years. It was not until 1948, however, that his new design, the
benthoscope, was finally constructed. It was similar in basic design
to the bathysphere, though the walls were increased to withstand
greater pressures. Other improvements were made, but the essential
strengths and weaknesses remained. On August 16, 1949, Barton,
diving alone, broke the record he and Beebe had set earlier,
reaching a depth of 1,372 meters off the coast of Southern California.
The bathysphere effectively marked the end of the tethered exploration
of the deep, but it pointed the way to other possibilities.
The first advance in this area came in 1943, when undersea explorer
Jacques-Yves Cousteau and engineer Émile Gagnan developed the
Aqualung underwater breathing apparatus, which made possible
unfettered and largely unencumbered exploration down to about
60 meters. This was by no means deep diving, but it was clearly a
step along the lines that Beebe had envisioned for underwater research.
A further step came in the development of the bathyscaphe by
Auguste Piccard, the renowned Swiss physicist, who, in the 1930’s,
had conquered the stratosphere in high-altitude balloons. The bathyscaphe
was a balloon that operated in reverse. A spherical steel passenger
cabin was attached beneath a large float filled with gasoline
for buoyancy. Several tons of iron pellets held by electromagnets
acted as ballast. The bathyscaphe would sink slowly to the bottom
of the ocean, and when its passengers wished to return, the ballast
would be dumped. The craft would then slowly rise to the surface.
On September 30, 1953, Piccard touched bottom off the coast of Italy,
some 3,000 meters below sea level.
04 February 2009
Bathyscaphe
The invention: A submersible vessel capable of exploring the
deepest trenches of the world’s oceans.
The people behind the invention:
William Beebe (1877-1962), an American biologist and explorer
Auguste Piccard (1884-1962), a Swiss-born Belgian physicist
Jacques Piccard (1922- ), a Swiss ocean engineer
Early Exploration of the Deep Sea
The first human penetration of the deep ocean was made by William
Beebe in 1934, when he descended 923 meters into the Atlantic
Ocean near Bermuda. His diving chamber was a 1.5-meter steel ball
that he named Bathysphere, from the Greek word bathys (deep) and
the word sphere, for its shape. He found that a sphere resists pressure
in all directions equally and is not easily crushed if it is constructed
of thick steel. The bathysphere weighed 2.5 metric tons. It
had no buoyancy and was lowered from a surface ship on a single
2.2-centimeter cable; a broken cable would have meant certain
death for the bathysphere’s passengers.
Numerous deep dives by Beebe and his engineer colleague, Otis
Barton, were the first uses of submersibles for science. Through two
small viewing ports, they were able to observe and photograph
many deep-sea creatures in their natural habitats for the first time.
They also made valuable observations on the behavior of light as
the submersible descended, noting that the green surface water became
pale blue at 100 meters, dark blue at 200 meters, and nearly
black at 300 meters. A technique called “contour diving” was particularly
dangerous. In this practice, the bathysphere was slowly
towed close to the seafloor. On one such dive, the bathysphere narrowly
missed crashing into a coral crag, but the explorers learned a
great deal about the submarine geology of Bermuda and the biology
of a coral-reef community. Beebe wrote several popular and scientific
books about his adventures that did much to arouse interest in
the ocean.
Testing the Bathyscaphe
The next important phase in the exploration of the deep ocean
was led by the Swiss physicist Auguste Piccard. In 1948, he launched
a new type of deep-sea research craft that did not require a cable and
that could return to the surface by means of its own buoyancy. He
called the craft a bathyscaphe, which is Greek for “deep boat.”
Piccard began work on the bathyscaphe in 1937, supported by a
grant from the Belgian National Scientific Research Fund. The German
occupation of Belgium early in World War II cut the project
short, but Piccard continued his work after the war. The finished
bathyscaphe was named FNRS 2, for the initials of the Belgian fund
that had sponsored the project. The vessel was ready for testing in
the fall of 1948.
The first bathyscaphe, as well as later versions, consisted of
two basic components: first, a heavy steel cabin to accommodate
observers, which looked somewhat like an enlarged version of
Beebe’s bathysphere; and second, a light container called a float,
filled with gasoline, that provided lifting power because it was
lighter than water. Enough iron shot was stored in silos to cause
the vessel to descend. When this ballast was released, the gasoline
in the float gave the bathyscaphe sufficient buoyancy to return to
the surface.
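The float-and-ballast scheme is ordinary Archimedean buoyancy: the craft rises when the water displaced by its float weighs more than everything the float must carry. The sketch below uses invented round numbers for volume and mass (not the FNRS 2's actual specifications) purely to show why dropping the iron shot reverses the direction of travel.

# Why dropping ballast makes a bathyscaphe rise: the craft ascends when the
# buoyant force on the gasoline float exceeds the total weight it must carry.
# All figures below are invented for illustration (not the FNRS 2's real
# specifications), and the cabin's own displacement is ignored for simplicity.
RHO_SEAWATER = 1025.0   # kg per cubic meter
RHO_GASOLINE = 720.0    # kg per cubic meter; lighter than water, so the float lifts
G = 9.81                # meters per second squared

def net_force(float_volume_m3, hull_and_gear_kg, ballast_kg):
    buoyancy = float_volume_m3 * RHO_SEAWATER * G            # seawater displaced by the float
    weight = (float_volume_m3 * RHO_GASOLINE                 # gasoline inside the float
              + hull_and_gear_kg + ballast_kg) * G           # plus cabin, gear, and iron shot
    return buoyancy - weight                                 # positive means the craft rises

print(net_force(80.0, 20000.0, 6000.0))   # shot aboard: negative, so the craft sinks
print(net_force(80.0, 20000.0, 0.0))      # shot released: positive, so the craft rises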
Piccard’s bathyscaphe had a number of ingenious devices. Jacques-
Yves Cousteau, inventor of the Aqualung six years earlier, contributed
a mechanical claw that was used to take samples of rocks, sediment,
and bottom creatures. A seven-barreled harpoon gun, operated
by water pressure, was attached to the sphere to capture
specimens of giant squids or other large marine animals for study.
The harpoons had electrical-shock heads to stun the “sea monsters,”
and if that did not work, the harpoon could give a lethal injection of
strychnine poison. Inside the sphere were various instruments for
measuring the deep-sea environment, including a Geiger counter
for monitoring cosmic rays. The air-purification system could support
two people for up to twenty-four hours. The bathyscaphe had a
radar mast to broadcast its location as soon as it surfaced. This was
essential because there was no way for the crew to open the sphere
from the inside.
The FNRS 2 was first tested off the Cape Verde Islands with the
assistance of the French navy. Although Piccard descended to only
25 meters, the dive demonstrated the potential of the bathyscaphe.
On the second dive, the vessel was severely damaged by waves, and
further tests were suspended. A redesigned and rebuilt bathyscaphe,
renamed FNRS 3 and operated by the French navy, descended to a
depth of 4,049 meters off Dakar, Senegal, on the west coast of Africa
in early 1954.
In August, 1953, Auguste Piccard, with his son Jacques, launched a greatly improved bathyscaphe, the Trieste, which they named for the
Italian city in which it was built. In September of the same year, the
Trieste successfully dived to 3,150 meters in the Mediterranean Sea. The
Piccards glimpsed, for the first time, animals living on the seafloor at
that depth. In 1958, the U.S. Navy purchased the Trieste and transported
it to California, where it was equipped with a new cabin designed
to enable the vessel to reach the seabed of the great oceanic
trenches. Several successful descents were made in the Pacific by
Jacques Piccard, and on January 23, 1960, Piccard, accompanied by
Lieutenant Donald Walsh of the U.S. Navy, dived a record 10,916 meters
to the bottom of the Mariana Trench near the island of Guam.
Impact
The oceans have always raised formidable barriers to humanity’s
curiosity and understanding. In 1960, two events demonstrated the
ability of humans to travel underwater for prolonged periods and to
observe the extreme depths of the ocean. The nuclear submarine
Triton circumnavigated the world while submerged, and Jacques
Piccard and Lieutenant Donald Walsh descended nearly 11 kilometers
to the bottom of the ocean’s greatest depression aboard the
Trieste. After sinking for four hours and forty-eight minutes, the
Trieste landed in the Challenger Deep of the Mariana Trench, the
deepest known spot on the ocean floor. The explorers remained on
the bottom for only twenty minutes, but they answered one of the
biggest questions about the sea: Can animals live in the immense
cold and pressure of the deep trenches? Observations of red shrimp
and flatfishes proved that the answer was yes.
The Trieste played another important role in undersea exploration
when, in 1963, it located and photographed the wreckage of the
nuclear submarine Thresher. The Thresher had mysteriously disappeared
on a test dive off the New England coast, and the Navy had
been unable to find a trace of the lost submarine using surface vessels
equipped with sonar and remote-control cameras on cables.
Only the Trieste could actually search the bottom. On its third dive,
the bathyscaphe found a piece of the wreckage, and it eventually
photographed a 3,000-meter trail of debris that led to Thresher's hull,
at a depth of 2.5 kilometers.
These exploits showed clearly that scientific submersibles could
be used anywhere in the ocean. Piccard’s work thus opened the last
geographic frontier on Earth.
BASIC programming language
The invention: An interactive computer system and simple programming
language that made it easier for nontechnical people
to use computers.
The people behind the invention:
John G. Kemeny (1926-1992), the chairman of Dartmouth’s
mathematics department
Thomas E. Kurtz (1928- ), the director of the Kiewit
Computation Center at Dartmouth
Bill Gates (1955- ), a cofounder and later chairman of the
board and chief executive officer of the Microsoft
Corporation
The Evolution of Programming
The first digital computers were developed during World War II
(1939-1945) to speed the complex calculations required for ballistics,
cryptography, and other military applications. Computer technology
developed rapidly, and the 1950’s and 1960’s saw computer systems
installed throughout the world. These systems were very large
and expensive, requiring many highly trained people for their operation.
The calculations performed by the first computers were determined
solely by their electrical circuits. In the 1940's, the American
mathematician John von Neumann and others pioneered the idea of
computers storing their instructions in a program, so that changes
in calculations could be made without rewiring their circuits. The
programs were written in machine language, long lists of zeros and
ones corresponding to on and off conditions of circuits. During the
1950’s, “assemblers” were introduced that used short names for
common sequences of instructions and were, in turn, transformed
into the zeros and ones intelligible to the computer. The late 1950’s
saw the introduction of high-level languages, notably Formula Translation
(FORTRAN), Common Business-Oriented Language (COBOL),
and Algorithmic Language (ALGOL), which used English words to communicate instructions to the computer. Unfortunately, these
high-level languages were complicated; they required some knowledge
of the computer equipment and were designed to be used by
scientists, engineers, and other technical experts.
Developing BASIC
John G. Kemeny was chairman of the department of mathematics
at Dartmouth College in Hanover, New Hampshire. In 1962,
Thomas E. Kurtz, Dartmouth’s computing director, approached
Kemeny with the idea of implementing a computer system at Dartmouth
College. Both men were dedicated to the idea that liberal arts
students should be able to make use of computers. Although the English
commands of FORTRAN and ALGOL were a tremendous improvement
over the cryptic instructions of assembly language, they
were both too complicated for beginners. Kemeny convinced Kurtz
that they needed a completely new language, simple enough for beginners
to learn quickly, yet flexible enough for many different
kinds of applications.
The language they developed was known as the "Beginner's All-purpose
Symbolic Instruction Code,” or BASIC. The original language
consisted of fourteen different statements. Each line of a
BASIC program was preceded by a number. Line numbers were referenced
by control flow statements, such as "IF X = 9 THEN GOTO
200.” Line numbers were also used as an editing reference. If line 30
of a program contained an error, the programmer could make the
necessary correction merely by retyping line 30.
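The flavor of those numbered lines is easiest to see in action. The toy interpreter below, written in Python, handles only a few statement forms (LET, PRINT, IF ... THEN GOTO, GOTO, END) and is purely illustrative; it is not Dartmouth's implementation, but it shows how line numbers served as both jump targets and edit references.

# A toy interpreter, in Python, for a few early-BASIC statement forms
# (LET, PRINT, IF ... THEN GOTO, GOTO, END). It illustrates how numbered
# lines doubled as jump targets; it is not the Dartmouth implementation.
import re

def run_basic(program):
    lines = sorted(program)                       # statements execute in line-number order
    env, i = {}, 0
    while i < len(lines):
        stmt = program[lines[i]].strip()
        jump = None
        if stmt.startswith("LET"):
            name, expr = stmt[3:].split("=", 1)
            env[name.strip()] = eval(expr, {}, env)    # toy expression handling only
        elif stmt.startswith("PRINT"):
            print(eval(stmt[5:], {}, env))
        elif stmt.startswith("IF"):
            cond, target = re.match(r"IF (.+) THEN GOTO (\d+)", stmt).groups()
            if eval(cond.replace("=", "=="), {}, env):
                jump = int(target)
        elif stmt.startswith("GOTO"):
            jump = int(stmt[4:])
        elif stmt == "END":
            return
        i = lines.index(jump) if jump is not None else i + 1

# Count from 1 to 3, then stop; the line numbers are the only labels needed.
run_basic({
    10: "LET X = 1",
    20: "PRINT X",
    30: "LET X = X + 1",
    40: "IF X = 4 THEN GOTO 60",
    50: "GOTO 20",
    60: "END",
})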
Programming in BASIC was first taught at Dartmouth in the fall
of 1964. Students were ready to begin writing programs after two
hours of classroom lectures. By June of 1968, more than 80 percent of
the undergraduates at Dartmouth could write a BASIC program.
Most of them were not science majors and used their programs in
conjunction with other nontechnical courses.
Kemeny and Kurtz, and later others under their supervision,
wrote more powerful versions of BASIC that included support for
graphics on video terminals and structured programming. The creators
of BASIC, however, always tried to maintain their original design
goal of keeping BASIC simple enough for beginners.
Consequences
Kemeny and Kurtz encouraged the widespread adoption of BASIC
by allowing other institutions to use their computer system and
by placing BASIC in the public domain. Over time, they shaped BASIC
into a powerful language with numerous features added in response
to the needs of its users. What Kemeny and Kurtz had not
foreseen was the advent of the microprocessor chip in the early
1970’s, which revolutionized computer technology. By 1975, microcomputer
kits were being sold to hobbyists for well under a thousand
dollars. The earliest of these was the Altair.
That same year, prelaw student William H. Gates (1955- ) was
persuaded by a friend, Paul Allen, to drop out of Harvard University
and help create a version of BASIC that would run on the Altair.
Gates and Allen formed a company, Microsoft Corporation, to sell
their BASIC interpreter, which was designed to fit into the tiny
memory of the Altair. It was about as simple as the original Dartmouth
BASIC but had to depend heavily on the computer hardware.
Most computers purchased for home use still include a version
of Microsoft Corporation’s BASIC.
See also BINAC computer; COBOL computer language; FORTRAN
programming language; SAINT; Supercomputer.
Autochrome plate
The invention: The first commercially successful process in which
a single exposure in a regular camera produced a color image.
The people behind the invention:
Louis Lumière (1864-1948), a French inventor and scientist
Auguste Lumière (1862-1954), an inventor, physician, physicist,
chemist, and botanist
Alphonse Seyewetz, a skilled scientist and assistant of the
Lumière brothers
Adding Color
In 1882, Antoine Lumière, painter, pioneer photographer, and father
of Auguste and Louis, founded a factory to manufacture photographic
gelatin dry-plates. After the Lumière brothers took over the
factory’s management, they expanded production to include roll
film and printing papers in 1887 and also carried out joint research
that led to fundamental discoveries and improvements in photographic
development and other aspects of photographic chemistry.
While recording and reproducing the actual colors of a subject
was not possible at the time of photography’s inception (about
1822), the first practical photographic process, the daguerreotype,
was able to render both striking detail and good tonal quality. Thus,
the desire to produce full-color images, or some approximation to
realistic color, occupied the minds of many photographers and inventors,
including Louis and Auguste Lumière, throughout the
nineteenth century.
As researchers set out to reproduce the colors of nature, the first
process that met with any practical success was based on the additive
color theory expounded by the Scottish physicist James Clerk
Maxwell in 1861. He believed that any color can be created by
adding together red, green, and blue light in certain proportions.
Maxwell, in his experiments, had taken three negatives through
screens or filters of these additive primary colors. He then took
slides made from these negatives and projected the slides through the same filters onto a screen so that their images were superimposed.
As a result, he found that it was possible to reproduce the exact
colors as well as the form of an object.
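The additive principle is simple arithmetic: separate a color image into red, green, and blue components and add them back together. The NumPy sketch below, with made-up pixel values, is only a numerical illustration of that superposition, not a reconstruction of Maxwell's plates.

# Additive color in miniature: separating an image into red, green, and blue
# components and adding them back recovers the original. Pixel values are
# invented; this only illustrates the arithmetic behind additive mixing.
import numpy as np

image = np.array([[[200,  40,  40],      # a reddish pixel
                   [ 40, 200,  40],      # a greenish pixel
                   [230, 210,  60]]])    # a yellowish pixel (red plus green light)

red   = image * np.array([1, 0, 0])      # what passes the red filter
green = image * np.array([0, 1, 0])      # what passes the green filter
blue  = image * np.array([0, 0, 1])      # what passes the blue filter

reconstructed = red + green + blue       # superimpose the three projections
print(np.array_equal(reconstructed, image))   # True: the colors are recovered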
Unfortunately, since colors could not be printed in their tonal
relationships on paper before the end of the nineteenth century, Maxwell's experiment was unsuccessful. Although Frederick E.
Ives of Philadelphia, in 1892, optically united three transparencies
so that they could be viewed in proper alignment by looking through
a peephole, viewing the transparencies was still not as simple as
looking at a black-and-white photograph.
The Autochrome Plate
The first practical method of making a single photograph that
could be viewed without any apparatus was devised by John Joly of
Dublin in 1893. Instead of taking three separate pictures through
three colored filters, he took one negative through one filter minutely
checkered with microscopic areas colored red, green, and
blue. The filter and the plate were exactly the same size and were
placed in contact with each other in the camera. After the plate was
developed, a transparency was made, and the filter was permanently
attached to it. The black-and-white areas of the picture allowed
more or less light to shine through the filters; if viewed from a
proper distance, the colored lights blended to form the various colors
of nature.
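Joly's screen plate relies on the same arithmetic applied spatially: at a proper viewing distance the eye averages neighboring single-color dots into one perceived color. The short sketch below, again with invented transmission values, averages one tiny patch of such a mosaic to show the effect.

# A screen plate in miniature: single-color dots, each dimmed by the developed
# black-and-white image behind it, blur into one perceived color at viewing
# distance. Transmission values are invented for illustration.
import numpy as np

patch = np.array([
    [0.9, 0.0, 0.0],    # red dot, nearly clear behind it
    [0.0, 0.7, 0.0],    # green dot, partly darkened
    [0.0, 0.0, 0.1],    # blue dot, almost opaque behind it
])

perceived = patch.mean(axis=0)   # what the eye sees once the dots blend
print(perceived)                 # roughly [0.30, 0.23, 0.03], a muted orange-brown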
In sum, the potential principles of additive color and other methods
and their potential applications in photography had been discovered
and even experimentally demonstrated by 1880. Yet a practical
process of color photography utilizing these principles could
not be produced until a truly panchromatic emulsion was available,
since making a color print required being able to record the primary
colors of the light cast by the subject.
Louis and Auguste Lumière, along with their research associate
Alphonse Seyewetz, succeeded in creating a single-plate process
based on this method in 1903. It was introduced commercially as the
autochrome plate in 1907 and was soon in use throughout the
world. This process is one of many that take advantage of the limited
resolving power of the eye. Grains or dots too small to be recognized
as separate units are accepted in their entirety and, to the
sense of vision, appear as tones and continuous color.
Impact
While the autochrome plate remained one of the most popular
color processes until the 1930's, it was eventually superseded by
subtractive color processes. Leopold Mannes and Leopold Godowsky,
both musicians and amateur photographic researchers who eventually
joined forces with Eastman Kodak research scientists, did the
most to perfect the Lumière brothers’ advances in making color
photography practical. Their collaboration led to the introduction in
1935 of Kodachrome, a subtractive process in which a single sheet of
film is coated with three layers of emulsion, each sensitive to one
primary color. A single exposure produces a color image.
Color photography is now commonplace. The amateur market is
enormous, and the snapshot is almost always taken in color. Commercial
and publishing markets use color extensively. Even photography
as an art form, which was done in black and white through
most of its history, has turned increasingly to color.
Atomic-powered ship
The invention: The world’s first atomic-powered merchant ship
demonstrated a peaceful use of atomic power.
The people behind the invention:
Otto Hahn (1879-1968), a German chemist
Enrico Fermi (1901-1954), an Italian American physicist
Dwight D. Eisenhower (1890-1969), president of the United
States, 1953-1961
Splitting the Atom
In 1938, Otto Hahn, working at the Kaiser Wilhelm Institute for
Chemistry, discovered that bombarding uranium atoms with neutrons
causes them to split into two smaller, lighter atoms. A large
amount of energy is released during this process, which is called
“fission.” When one kilogram of uranium is fissioned, it releases the
same amount of energy as does the burning of 3,000 metric tons of
coal. The fission process also releases new neutrons.
Enrico Fermi suggested that these new neutrons could be used to
split more uranium atoms and produce a chain reaction. Fermi and
his assistants produced the first human-made chain reaction at the
University of Chicago on December 2, 1942. Although the first use
of this new energy source was the atomic bombs that were used to
defeat Japan in World War II, it was later realized that a carefully
controlled chain reaction could produce useful energy. The submarine
Nautilus, launched in 1954, used the energy released from fission
to make steam to drive its turbines.
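The coal comparison in the first paragraph of this section is easy to check with commonly quoted round figures: complete fission of a kilogram of uranium 235 yields on the order of 8 x 10^13 joules, while burning a kilogram of coal yields roughly 29 megajoules. Those two values are assumptions of the following sketch, not numbers taken from this article.

# Rough check of the uranium-versus-coal comparison, using commonly quoted
# round values (assumptions of this sketch, not figures from the article).
energy_per_kg_u235 = 8.2e13     # joules from complete fission of 1 kg of uranium 235
energy_per_kg_coal = 2.9e7      # joules from burning 1 kg of typical coal

equivalent_tons = energy_per_kg_u235 / energy_per_kg_coal / 1000
print(f"about {equivalent_tons:,.0f} metric tons of coal")   # on the order of 3,000 tons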
U.S. President Dwight David Eisenhower proposed his “Atoms
for Peace” program in December, 1953. On April 25, 1955, President
Eisenhower announced that the “Atoms for Peace” program would
be expanded to include the design and construction of an atomicpowered
merchant ship, and he signed the legislation authorizing
the construction of the ship in 1956.
Savannah's Design and Construction
A contract to design an atomic-powered merchant ship was
awarded to George G. Sharp, Inc., on April 4, 1957. The ship was to
carry approximately one hundred passengers (later reduced to sixty
to reduce the ship’s cost) and 10,886 metric tons of cargo while making
a speed of 21 knots, about 39 kilometers per hour. The ship was
to be 181 meters long and 23.7 meters wide. The reactor was to provide
steam for a 20,000-horsepower turbine that would drive the
ship’s propeller. Most of the ship’s machinery was similar to that of
existing ships; the major difference was that steam came from a reactor
instead of a coal- or oil-burning boiler.
New York Shipbuilding Corporation of Camden, New Jersey,
won the contract to build the ship on November 16, 1957. States Marine
Lines was selected in July, 1958, to operate the ship. It was christened
Savannah and launched on July 21, 1959. The name Savannah
was chosen to honor the first ship to use steam power while crossing
an ocean. This earlier Savannah was launched in New York City
in 1818.
Ships are normally launched long before their construction is
complete, and the new Savannah was no exception. It was finally
turned over to States Marine Lines on May 1, 1962. After extensive
testing by its operators and delays caused by labor union disputes,
it began its maiden voyage from Yorktown, Virginia, to Savannah,
Georgia, on August 20, 1962. The original budget for design and
construction was $35 million, but by this time, the actual cost was
about $80 million.
Savannah‘s nuclear reactor was fueled with about 7,000 kilograms
(15,400 pounds) of uranium. Uranium consists of two forms,
or “isotopes.” These are uranium 235, which can fission, and uranium
238, which cannot. Naturally occurring uranium is less than 1
percent uranium 235, but the uranium in Savannah‘s reactor had
been enriched to contain nearly 5 percent of this isotope. Thus, there
was less than 362 kilograms of usable uranium in the reactor. The
ship was able to travel about 800,000 kilometers on this initial fuel
load. Three and a half million kilograms of water per hour flowed
through the reactor under a pressure of 5,413 kilograms per square
centimeter. It entered the reactor at 298.8 degrees Celsius and left at
317.7 degrees Celsius. Water leaving the reactor passed through a
heat exchanger called a “steam generator.” In the steam generator,
reactor water flowed through many small tubes. Heat passed through
the walls of these tubes and boiled water outside them. About
113,000 kilograms of steam per hour were produced in this way at a
pressure of 1,434 kilograms per square centimeter and a temperature
of 240.5 degrees Celsius.
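The fissile inventory quoted at the start of this paragraph follows directly from the other figures: about 7,000 kilograms of uranium at nearly 5 percent enrichment contains roughly 350 kilograms of uranium 235, consistent with the "less than 362 kilograms" cited. A two-line check:

# Quick check of the fuel figures quoted above: enrichment times total load
# gives the fissile inventory (simple arithmetic on the article's numbers).
total_uranium_kg = 7000
enrichment = 0.05                          # "nearly 5 percent" uranium 235
print(total_uranium_kg * enrichment)       # 350.0 kg, under the 362 kilograms cited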
Labor union disputes dogged Savannah‘s early operations, and it
did not start its first trans-Atlantic crossing until June 8, 1964. Savannah
was never a money maker. Even in the 1960’s, the trend was toward
much bigger ships. It was announced that the ship would be
retired in August, 1967, but that did not happen. It was finally put
out of service in 1971. Later, Savannah was placed on permanent display
at Charleston, South Carolina.
Consequences
Following the United States’ lead, Germany and Japan built
atomic-powered merchant ships. The Soviet Union is believed to
have built several atomic-powered icebreakers. Germany’s Otto
Hahn, named for the scientist who first split the atom, began service
in 1968, and Japan's Mutsu was under construction as Savannah retired.
Numerous studies conducted in the early 1970’s claimed to prove
that large atomic-powered merchant ships were more profitable
than oil-fired ships of the same size. Several conferences devoted to
this subject were held, but no new ships were built.
Although the U.S. Navy has continued to use reactors to power
submarines, aircraft carriers, and cruisers, atomic power has not
been widely used for merchant-ship propulsion. Labor union problems
such as those that haunted Savannah, high insurance costs, and
high construction costs are probably the reasons. Public opinion, after
the reactor accidents at Three Mile Island (in 1979) and Chernobyl
(in 1986), is also a factor.