Compact disc
The invention: A plastic disk on which digitized music or computer
data is stored.
The people behind the invention:
Akio Morita (1921-1999), a Japanese physicist and engineer
who was a cofounder of Sony
Wisse Dekker (1924- ), a Dutch businessman who led the
Philips company
W. R. Bennett (1904-1983), an American engineer who was a
pioneer in digital communications and who played an
important part in the Bell Laboratories research program
Digital Recording
The digital system of sound recording, like the analog methods
that preceded it, was developed by the telephone companies to improve
the quality and speed of telephone transmissions. The system
of electrical recording introduced by Bell Laboratories in the 1920s
was part of this effort. Even Edison’s famous invention of the phonograph
in 1877 was originally conceived as an accompaniment to
the telephone. Although developed within the framework of telephone
communications, these innovations found wide applications
in the entertainment industry.
The basis of the digital recording system was a technique of sampling
the electrical waveforms of sound called PCM, or pulse code
modulation. PCM measures the characteristics of these waves and
converts them into numbers. This technique was developed at Bell
Laboratories in the 1930’s to transmit speech. At the end of World
War II, engineers of the Bell System began to adapt PCM technology
for ordinary telephone communications.
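The sampling-and-conversion step that PCM performs can be sketched in a few lines. This is an illustrative toy rather than Bell's actual circuitry; the 8,000-samples-per-second rate matches classic telephone PCM practice, but the 8-bit depth and the function names are choices made for this sketch:

```python
import math

SAMPLE_RATE = 8000   # samples per second, the classic telephone PCM rate
BITS = 8             # bits per sample (illustrative choice)

def pcm_encode(signal, bits=BITS):
    """Measure each sample in [-1.0, 1.0] and convert it to an
    integer -- the 'waveform into numbers' step of PCM."""
    levels = 2 ** (bits - 1) - 1
    return [round(x * levels) for x in signal]

# Eight samples spanning one cycle of a 1 kHz tone
tone = [math.sin(2 * math.pi * 1000 * n / SAMPLE_RATE) for n in range(8)]
codes = pcm_encode(tone)
print(codes)  # [0, 90, 127, 90, 0, -90, -127, -90]
```

Decoding reverses the mapping by dividing each integer by the level count; whatever happens between samples is simply never stored, which is why the sampling rate must be high enough for the signal being carried.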
The problem of turning sound waves into numbers was that of
finding a method that could quickly and reliably manipulate millions
of them. The answer to this problem was found in electronic computers,
which used binary code to handle millions of computations in a
few seconds. The rapid advance of computer technology and the semiconductor circuits that gave computers the power to handle
complex calculations provided the means to bring digital sound technology
into commercial use. In the 1960’s, digital transmission and
switching systems were introduced to the telephone network.
Pulse coded modulation of audio signals into digital code achieved
standards of reproduction that exceeded even the best analog system,
creating an enormous dynamic range of sounds with no distortion
or background noise. The importance of digital recording went
beyond the transmission of sound because it could be applied to all
types of magnetic recording in which the source signal is transformed
into an electric current. There were numerous commercial
applications for such a system, and several companies began to explore
the possibilities of digital recording in the 1970’s.
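The "enormous dynamic range" claim has a simple arithmetic basis: each bit of quantization contributes roughly 6 dB, because the ratio of the largest to the smallest representable amplitude is 2 raised to the bit depth. A quick check (the 16-bit figure anticipates the compact disc format discussed later):

```python
import math

def dynamic_range_db(bits):
    """Ratio of largest to smallest representable signal, in
    decibels: 20 * log10(2 ** bits)."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3 dB for 16-bit PCM
print(round(dynamic_range_db(8), 1))   # 48.2 dB for 8-bit telephone PCM
```

At about 96 dB, 16-bit coding comfortably exceeds the roughly 60-70 dB attainable from a vinyl groove or analog tape.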
Researchers at the Sony, Matsushita, and Mitsubishi electronics
companies in Japan produced experimental digital recording systems.
Each developed its own PCM processor, an integrated circuit
that changes audio signals into digital code. It does not continuously
transform sound but instead samples it by analyzing thousands
of minute slices of it per second. Sony’s PCM-F1 was the first
analog-to-digital conversion chip to be produced. This gave Sony a
lead in the research into and development of digital recording.
All three companies had strong interests in both audio and video
electronics equipment and saw digital recording as a key technology
because it could deal with both types of information simultaneously.
They devised recorders for use in their manufacturing operations.
After using PCM techniques to turn sound into digital code, they recorded
this information onto tape, using not magnetic audio tape but
the more advanced video tape, which could handle much more information.
The experiments with digital recording occurred simultaneously
with the accelerated development of video recording technology
and owed much to the enhanced capabilities of video recorders.
At this time, videocassette recorders were being developed in
several corporate laboratories in Japan and Europe. The Sony Corporation
was one of the companies developing video recorders at this
time. Its U-matic machines were successfully used to record digitally.
In 1972, the Nippon Columbia Company began to make its master recordings
digitally on an Ampex video recording machine.
Links Among New Technologies
There were powerful links between the new sound recording
systems and the emerging technologies of storing and retrieving
video images. The television had proved to be the most widely used
and profitable electronic product of the 1950’s, but with the market
for color television saturated by the end of the 1960’s, manufacturers
had to look for a replacement product. A machine to save and replay
television images was seen as the ideal companion to the family
TV set. The great consumer electronics companies—General
Electric and RCA in the United States, Philips and Telefunken in Europe,
and Sony and Matsushita in Japan—began experimental programs
to find a way to save video images.
RCA’s experimental teams took the lead in developing a videodisc
system, called Selectavision, that used an electronic stylus
to read changes in capacitance on the disc. The greatest challenge to
them came from the Philips company of Holland. Its optical videodisc
used a laser beam to read information on a revolving disc, in
which a layer of plastic contained coded information. With the aid
of the engineering department of the Deutsche Grammophon record
company, Philips had an experimental laser disc in hand by
1964.
The Philips Laservision videodisc was not a commercial success,
but it carried forward an important idea. The research and engineering
work carried out in the laboratories at Eindhoven in Holland
proved that the laser reader could do the job. More important,
Philips engineers had found that this fragile device could be mass
produced as a cheap and reliable component of a commercial product.
The laser optical decoder was applied to reading the binary
codes of digital sound. By the end of the 1970’s, Philips engineers
had produced a working system.
Ten years of experimental work on the Laservision system proved
to be a valuable investment for the Philips corporation. Around
1979, it started to work on a digital audio disc (DAD) playback system.
This involved more than the basic idea of converting the output
of the PCM conversion chip onto a disc. The lines of pits on the
compact disc carry a great amount of information: the left- and
right-hand tracks of the stereo system are identified, and a sequence of pits also controls the motor speed and corrects any error in the laser
reading of the binary codes.
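The actual compact disc format performs this error correction with cross-interleaved Reed-Solomon coding (CIRC). As a deliberately simplified sketch of the underlying idea, adding redundant check data so that a mis-read pit can be caught, here is a toy parity scheme (the function names are invented for illustration):

```python
def add_parity(block):
    """Append an XOR checksum so a read error in the block is detectable."""
    check = 0
    for byte in block:
        check ^= byte
    return block + [check]

def is_valid(block_with_parity):
    """A clean block XORs to zero, checksum included."""
    check = 0
    for byte in block_with_parity:
        check ^= byte
    return check == 0

frame = add_parity([12, 200, 7, 99])
assert is_valid(frame)
frame[1] ^= 0x10          # simulate a mis-read pit flipping one bit
print(is_valid(frame))    # False: the error is detected
```

Real CIRC goes much further: the Reed-Solomon codes can locate and repair errors, and interleaving spreads each frame across the disc so that a scratch damages only small, correctable pieces of many frames rather than one frame entirely.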
This research was carried out jointly with the Sony Corporation
of Japan, which had produced a superior method of encoding digital
sound with its PCM chips. The binary codes that carried the information
were manipulated by Sony’s sixteen-bit microprocessor.
Its PCM chip for analog-to-digital conversion was also employed.
Together, Philips and Sony produced a commercial digital playback
record that they named the compact disc. The name is significant, as
it does more than indicate the size of the disc—it indicates family
ties with the highly successful compact cassette. Philips and Sony
had already worked to establish this standard in the magnetic tape
format and aimed to make their compact disc the standard for digital
sound reproduction.
Philips and Sony began to demonstrate their compact digital disc
(CD) system to representatives of the audio industry in 1981. They
were not alone in digital recording. The Japanese Victor Company, a
subsidiary of Matsushita, had developed a version of digital recording
from its VHD video disc design. It was called audio high density
disc (AHD). Instead of the small CD disc, the AHD system used a
ten-inch vinyl disc. Each digital recording system used a different
PCM chip with a different rate of sampling the audio signal.
The recording and electronics industries’ decision to standardize
on the Philips/Sony CD system was therefore a major victory for
these companies and an important event in the digital era of sound
recording.
Sony had found out the hard way that the technical performance
of an innovation is irrelevant when compared with the politics of turning it into
an industrywide standard. Although the pioneer in videocassette
recorders, Sony had been beaten by its rival, Matsushita, in establishing
the video recording standard. This mistake was not repeated
in the digital standards negotiations, and many companies were
persuaded to license the new technology. In 1982, the technology
was announced to the public. The following year, the compact disc
was on the market.
The Apex of Sound Technology
The compact disc represented the apex of recorded sound technology.
Simply put, here at last was a system of recording in which
there was no extraneous noise—no surface noise of scratches and
pops, no tape hiss, no background hum—and no damage was done
to the recording as it was played. In principle, a digital recording
will last forever, and each play will sound as pure as the first. The
compact disc could also play much longer than the vinyl record or
long-playing cassette tape.
Despite these obvious technical advantages, the commercial success
of digital recording was not ensured. There had been several
other advanced systems that had not fared well in the marketplace,
and the conspicuous failure of quadraphonic sound in the 1970’s
had not been forgotten within the industry of recorded sound. Historically,
there were two key factors in the rapid acceptance of a new
system of sound recording and reproduction: a library of prerecorded
music to tempt the listener into adopting the system and a
continual decrease in the price of the playing units to bring them
within the budgets of more buyers.
By 1984, there were about a thousand titles available on compact
disc in the United States; that number had doubled by 1985. Although
many of these selections were classical music—it was naturally
assumed that audiophiles would be the first to buy digital
equipment—popular music was well represented. The first CD available
for purchase was an album by popular entertainer Billy Joel.
The first CD-playing units cost more than $1,000, but Akio Morita
of Sony was determined that the company should reduce the
price of players even if it meant selling them below cost. Sony’s audio engineering department improved the performance of the
players while reducing size and cost. By 1984, Sony had a small CD
unit on the market for $300. Several of Sony’s competitors, including
Matsushita, had followed its lead into digital reproduction.
There were several compact disc players available in 1985 that cost
less than $500. Sony quickly applied digital technology to the popular
personal stereo and to automobile sound systems. Sales of CD
units increased roughly tenfold from 1983 to 1985.
Impact on Vinyl Recording
When the compact disc was announced in 1982, the vinyl record
was the leading form of recorded sound, with 273 million units sold
annually compared to 125 million prerecorded cassette tapes. The
compact disc sold slowly, beginning with 800,000 units shipped in
1983 and rising to 53 million in 1986. By that time, the cassette tape
had taken the lead, with slightly fewer than 350 million units. The
vinyl record was in decline, with only about 110 million units
shipped. Compact discs first outsold vinyl records in 1988. In the ten
years from 1979 to 1988, the sales of vinyl records dropped nearly 80
percent. In 1989, CDs accounted for more than 286 million sales, but
cassettes still led the field with total sales of 446 million. The compact
disc finally passed the cassette in total sales in 1992, when more
than 300 million CDs were shipped, an increase of 22 percent over
the figure for 1991.
The introduction of digital recording had an invigorating effect
on the industry of recorded sound, which had been unable to fully
recover from the slump of the late 1970’s. Sales of recorded music
had stagnated in the early 1980’s, and an industry accustomed to
steady increases in output became eager to find a new product or
style of music to boost its sales. The compact disc was the product to
revitalize the market for both recordings and players. During the
1980’s, worldwide sales of recorded music jumped from $12 billion
to $22 billion, with about half of the sales volume accounted for by
digital recordings by the end of the decade.
The success of digital recording served in the long run to undermine
the commercial viability of the compact disc. This was a play-only
technology, like the vinyl record before it. Once users had become accustomed to the pristine digital sound, they clamored for
digital recording capability. The alliance of Sony and Philips broke
down in the search for a digital tape technology for home use. Sony
produced a digital tape system called DAT, while Philips responded
with a digital version of its compact audio tape called DCC. Sony
answered the challenge of DCC with its Mini Disc (MD) product,
which can record and replay digitally.
The versatility of digital recording has opened up a wide range of
consumer products. Compact disc technology has been incorporated
into the computer, in which CD-ROM readers convert the digital
code of the disc into sound and images. Many home computers have
the capability to record and replay sound digitally. Digital recording
is the basis for interactive audio/video computer programs in which
the user can interface with recorded sound and images. Philips has
established a strong foothold in interactive digital technology with its
CD-I (compact disc interactive) system, which was introduced in
1990. This acts as a multimedia entertainer, providing sound, moving
images, games, and interactive sound and image publications such as
encyclopedias. The future of digital recording will be broad-based
systems that can record and replay a wide variety of sounds and images
and that can be manipulated by users of home computers.
Community antenna television
The invention:
A system for connecting households in isolated areas to common antennas to improve television reception, community antenna television was a forerunner of modern cable television systems.
The people behind the invention:
Robert J. Tarlton, the founder of CATV in eastern Pennsylvania
Ed Parsons, the founder of CATV in Oregon
Ted Turner (1938- ), founder of the first cable superstation, WTBS
Communications satellite
The invention: Telstar I, the world’s first commercial communications
satellite, opened the age of live, worldwide television by
connecting the United States and Europe.
The people behind the invention:
Arthur C. Clarke (1917-2008), a British science-fiction writer
who in 1945 first proposed the idea of using satellites as
communications relays
John R. Pierce (1910-2002), an American engineer who worked
on the Echo and Telstar satellite communications projects
Science Fiction?
In 1945, Arthur C. Clarke suggested that a satellite orbiting high
above the earth could relay television signals between different stations
on the ground, making for a much wider range of transmission
than that of the usual ground-based systems. Writing in the
February, 1945, issue of Wireless World, Clarke said that satellites
“could give television and microwave coverage to the entire
planet.”
In 1956, John R. Pierce at the Bell Telephone Laboratories of the
American Telephone & Telegraph Company (AT&T) began to urge
the development of communications satellites. He saw these satellites
as a replacement for the ocean-bottom cables then being used to
carry transatlantic telephone calls. In 1950, about one-and-a-half
million transatlantic calls were made, and that number was expected
to grow to three million by 1960, straining the capacity of the
existing cables; in 1970, twenty-one million calls were made.
Communications satellites offered a good, cost-effective alternative
to building more transatlantic telephone cables. On January 19,
1961, the Federal Communications Commission (FCC) gave permission
for AT&T to begin Project Telstar, the first commercial communications
satellite bridging the Atlantic Ocean. AT&T reached an
agreement with the National Aeronautics and Space Administration
(NASA) in July, 1961, in which AT&T would pay $3 million for each Telstar launch. The Telstar project involved about four hundred
scientists, engineers, and technicians at the Bell Telephone
Laboratories, twenty more technical personnel at AT&T headquarters,
and the efforts of more than eight hundred other companies
that provided equipment or services.
Telstar 1 was shaped like a faceted sphere, was 88 centimeters in
diameter, and weighed 80 kilograms. Most of its exterior surface
(sixty of the seventy-four facets) was covered by 3,600 solar cells to
convert sunlight into 15 watts of electricity to power the satellite.
Each solar cell was covered with artificial sapphire to reduce the
damage caused by radiation. The main instrument was a two-way
radio able to handle six hundred telephone calls at a time or one
television channel.
The signal that the radio would send back to Earth was very
weak—less than one-thirtieth the energy used by a household light
bulb. Large ground antennas were needed to receive Telstar’s faint
signal. The main ground station was built by AT&T in Andover,
Maine, on a hilltop informally called “Space Hill.” A horn-shaped
antenna, weighing 380 tons, with a length of 54 meters and an open
end with an area of 1,097 square meters, was mounted so that it
could rotate to track Telstar across the sky. To protect it from wind
and weather, the antenna was built inside an inflated dome, 64 meters
in diameter and 49 meters tall. It was, at the time, the largest inflatable
structure ever built. A second, smaller horn antenna in
Holmdel, New Jersey, was also used.
International Cooperation
In February, 1961, the governments of the United States and England
agreed to let the British Post Office and NASA work together
to test experimental communications satellites. The British Post Office
built a 26-meter-diameter steerable dish antenna of its own design
at Goonhilly Downs, near Cornwall, England. Under a similar
agreement, the French National Center for Telecommunications
Studies constructed a ground station, almost identical to the Andover
station, at Pleumeur-Bodou, Brittany, France.
After testing, Telstar 1 was moved to Cape Canaveral, Florida,
and attached to the Thor-Delta launch vehicle built by the Douglas Aircraft Company. The Thor-Delta was launched at 3:35 a.m. eastern
standard time (EST) on July 10, 1962. Once in orbit, Telstar 1 took
157.8 minutes to circle the globe. The satellite came within range of
the Andover station on its sixth orbit, and a television test pattern
was transmitted to the satellite at 6:26 p.m. EST. At 6:30 p.m. EST, a
tape-recorded black-and-white image of the American flag with the
Andover station in the background, transmitted from Andover to
Holmdel, opened the first television show ever broadcast by satellite.
Live pictures of U.S. vice president Lyndon B. Johnson and
other officials gathered at the Carnegie Institution in Washington, D.C.,
followed on the AT&T program carried live on all three American
networks.
Up to the moment of launch, it was uncertain if the French station
would be completed in time to participate in the initial test. At 6:47
p.m. EST, however, Telstar’s signal was picked up by the station in
Pleumeur-Bodou, and Johnson’s image became the first television
transmission to cross the Atlantic. Pictures received at the French
station were reported to be so clear that they looked like they had
been sent from only forty kilometers away. Because of technical difficulties,
the English station was unable to receive a clear signal.
The first formal exchange of programming between the United
States and Europe occurred on July 23, 1962. This special eighteen-minute
program, produced by the European Broadcasting Union,
consisted of live scenes from major cities throughout Europe and
was transmitted from Goonhilly Downs, where the technical difficulties
had been corrected, to Andover via Telstar.
On the previous orbit, a program entitled “America, July 23,
1962,” showing scenes from fifty television cameras around the
United States, was beamed from Andover to Pleumeur-Bodou and
seen by an estimated one hundred million viewers throughout Europe.
Consequences
Telstar 1 and the communications satellites that followed it revolutionized
the television news and sports industries. Before, television
networks had to ship film across the oceans, meaning delays of
hours or days between the time an event occurred and the broadcast of pictures of that event on television on another continent. Now,
news of major significance, as well as sporting events, can be viewed
live around the world. The impact on international relations also
was significant, with world opinion becoming able to influence the
actions of governments and individuals, since those actions could
be seen around the world as the events were still in progress.
More powerful launch vehicles allowed new satellites to be placed
in geosynchronous orbits, circling the earth at the same rate as
the earth’s rotation. When viewed from the ground, these satellites
appeared to remain stationary in the sky. This allowed continuous
communications and greatly simplified the ground antenna
system. By the late 1970’s, private individuals had built small antennas
in their backyards to receive television signals directly from the
satellites.
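Both orbits mentioned in this entry follow from Kepler's third law, which ties a satellite's period to the size of its orbit. A short calculation (the constants are standard published values, and the results are approximate) recovers the scale of Telstar's orbit from its 157.8-minute period, along with the familiar geosynchronous altitude:

```python
import math

MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_EQ = 6.378e6           # Earth's equatorial radius, m

def semi_major_axis(period_s):
    """Kepler's third law: a = (mu * T^2 / (4 * pi^2)) ** (1/3)."""
    return (MU * period_s ** 2 / (4 * math.pi ** 2)) ** (1.0 / 3.0)

# Telstar 1: the 157.8-minute orbit given in the text
a_telstar_km = semi_major_axis(157.8 * 60) / 1000

# Geosynchronous: period equals one sidereal day (~86,164 s)
geo_alt_km = (semi_major_axis(86164) - R_EQ) / 1000

print(round(a_telstar_km))  # ~9,670 km semi-major axis (a low, elliptical orbit)
print(round(geo_alt_km))    # ~35,786 km altitude, the geosynchronous belt
```

The contrast explains why Telstar could only relay for minutes at a time as it swept overhead, while a geosynchronous satellite hangs over one spot and serves a fixed dish continuously.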
Colossus computer
The invention: The first all-electronic calculating device, the Colossus
computer was built to decipher German military codes
during World War II.
The people behind the invention:
Thomas H. Flowers, an electronics expert
Max H. A. Newman (1897-1984), a mathematician
Alan Mathison Turing (1912-1954), a mathematician
C. E. Wynn-Williams, a member of the Telecommunications
Research Establishment
An Undercover Operation
In 1939, during World War II (1939-1945), a team of scientists,
mathematicians, and engineers met at Bletchley Park, outside London,
to discuss the development of machines that would break the
secret code used in Nazi military communications. The Germans
were using a machine called “Enigma” to communicate in code between
headquarters and field units. Polish scientists, however, had
been able to examine a German Enigma and between 1928 and 1938
were able to break the codes by using electromechanical codebreaking
machines called “bombas.” In 1938, the Germans made the
Enigma more complicated, and the Polish were no longer able to
break the codes. In 1939, the Polish machines and codebreaking
knowledge passed to the British.
Alan Mathison Turing was one of the mathematicians gathered
at Bletchley Park to work on codebreaking machines. Turing was
one of the first people to conceive of the universality of digital computers.
He first mentioned the “Turing machine” in 1936 in an article
published in the Proceedings of the London Mathematical Society.
The Turing machine, a hypothetical device that can solve any
problem that involves mathematical computation, is not restricted
to only one task—hence the universality feature.
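The Turing machine described above is concrete enough to simulate in a few lines. The following toy machine, a sketch using an invented rule format rather than Turing's original notation, increments a binary number by following a fixed table of (state, symbol) rules over an unbounded tape:

```python
def run(tape, rules, state="start"):
    """Step a Turing machine: look up (state, symbol), write, move,
    change state, until the 'halt' state is reached."""
    tape = dict(enumerate(tape))   # sparse tape; "_" marks blank cells
    pos = 0
    while state != "halt":
        symbol = tape.get(pos, "_")
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Rules to add one: scan right past the digits, then carry 1s leftward.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(run("1011", rules))  # 1100 (binary 11 + 1 = 12)
```

Swapping in a different rule table makes the same `run` loop perform a different computation, which is the universality Turing identified: one simple mechanism, arbitrarily many tasks.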
Turing suggested an improvement to the Bletchley codebreaking
machine, the “Bombe,” which had been modeled on the Polish bomba. This improvement increased the computing power of the
machine. The new codebreaking machine replaced the tedious
method of decoding by hand, which in addition to being slow,
was ineffective in dealing with complicated encryptions that were
changed daily.
Building a Better Mousetrap
The Bombe was very useful. In 1942, when the Germans started
using a more sophisticated cipher machine known as the “Fish,”
Max H. A. Newman, who was in charge of one subunit at Bletchley
Park, believed that an automated device could be designed to break
the codes produced by the Fish. Thomas H. Flowers, who was in
charge of a switching group at the Post Office Research Station at
Dollis Hill, had been approached to build a special-purpose electromechanical
device for Bletchley Park in 1941. The device was not
useful, and Flowers was assigned to other problems.
Flowers began to work closely with Turing, Newman, and C. E.
Wynn-Williams of the Telecommunications Research Establishment
(TRE) to develop a machine that could break the Fish codes. The
Dollis Hill team worked on the tape driving and reading problems,
and Wynn-Williams’s team at TRE worked on electronic counters
and the necessary circuitry. Their efforts produced the “Heath Robinson,”
which could read two thousand characters per second. The
Heath Robinson used vacuum tubes, an uncommon component in
the early 1940’s. The vacuum tubes performed more reliably and
rapidly than the relays that had been used for counters. Heath Robinson
and the companion machines proved that high-speed electronic
devices could successfully do cryptanalytic work (solve decoding
problems).
Entirely automatic in operation once started, the Heath Robinson
was put together at Bletchley Park in the spring of 1943. The Heath
Robinson became obsolete for codebreaking shortly after it was put
into use, so work began on a bigger, faster, and more powerful machine:
the Colossus.
Flowers led the team that designed and built the Colossus in
eleven months at Dollis Hill. The first Colossus (Mark I) was a bigger,
faster version of the Heath Robinson and read about five thousand characters per second. Colossus had approximately fifteen
hundred vacuum tubes, which was the largest number that had
ever been used at that time. Although Turing and Wynn-Williams
were not directly involved with the design of the Colossus, their
previous work on the Heath Robinson was crucial to the project,
since the first Colossus was based on the Heath Robinson.
Colossus became operational at Bletchley Park in December,
1943, and Flowers made arrangements for the manufacture of its
components in case other machines were required. The request for
additional machines came in March, 1944. The second Colossus, the
Mark II, was extensively redesigned and was able to read twenty-five
thousand characters per second because it was capable of performing
parallel operations (carrying out several different operations
at once, instead of one at a time); it also had a short-term
memory. The Mark II went into operation on June 1, 1944. More
machines were made, each with further modifications, until there
were ten. The Colossus machines were special-purpose, program-controlled
electronic digital computers, the only known electronic
programmable computers in existence in 1944. The use of electronics
allowed for a tremendous increase in the internal speed of the
machine.
Impact
The Colossus machines gave Britain the best codebreaking machines
of World War II and provided information that was crucial
for the Allied victory. The information decoded by Colossus, the actual
messages, and their influence on military decisions would remain
classified for decades after the war.
The later work of several of the people involved with the Bletchley
Park projects was important in British computer development
after the war. Newman’s and Turing’s postwar careers were closely
tied to emerging computer advances. Newman, who was interested
in the impact of computers on mathematics, received a grant from
the Royal Society in 1946 to establish a calculating machine laboratory
at Manchester University. He was also involved with postwar
computer growth in Britain.
Several other members of the Bletchley Park team, including Turing, joined Newman at Manchester in 1948. Before going to Manchester
University, however, Turing joined Britain’s National Physical
Laboratory (NPL). At NPL, Turing worked on an advanced
computer known as the Pilot Automatic Computing Engine (Pilot
ACE). While at NPL, Turing proposed the concept of a stored program,
which was a controversial but extremely important idea in
computing. A“stored” program is one that remains in residence inside
the computer, making it possible for a particular program and
data to be fed through an input device simultaneously. (The Heath
Robinson and Colossus machines were limited by utilizing separate
input tapes, one for the program and one for the data to be analyzed.)
Turing was among the first to explain the stored-program
concept in print. He was also among the first to imagine how subroutines
could be included in a program. (A subroutine allows separate
tasks within a large program to be done in distinct modules; in
effect, it is a detour within a program. After the completion of the
subroutine, the main program takes control again.)
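The stored-program and subroutine ideas can be sketched together in a toy interpreter. This is an illustration of the concepts, not of any historical machine's instruction set; the opcode names and the program are invented for the sketch:

```python
def execute(memory):
    """Run a toy stored-program machine: instructions and data share
    one memory, and CALL/RET implement a subroutine 'detour'."""
    acc, pc, stack = 0, 0, []
    while True:
        op, arg = memory[pc]
        if op == "LOAD":
            acc = memory[arg]          # data fetched from the same memory
        elif op == "ADD":
            acc += memory[arg]
        elif op == "CALL":
            stack.append(pc + 1)       # remember where to resume
            pc = arg
            continue
        elif op == "RET":
            pc = stack.pop()           # hand control back to the caller
            continue
        elif op == "HALT":
            return acc
        pc += 1

program = [
    ("CALL", 3),     # 0: detour into the subroutine at address 3
    ("HALT", None),  # 1: main program resumes and stops here
    7,               # 2: a data word stored alongside the code
    ("LOAD", 2),     # 3: subroutine: load the data word...
    ("ADD", 2),      # 4: ...double it...
    ("RET", None),   # 5: ...and return to the main program
]
print(execute(program))  # 14
```

Because the program resides in the same memory as its data, nothing like the Colossus's separate program tape is needed, and a routine at one address can be reused as a module from anywhere in the code.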
Color television
The invention:
A system for broadcasting full-color images over the
airwaves.
The people behind the invention:
Peter Carl Goldmark (1906-1977), the head of the CBS research
and development laboratory
William S. Paley (1901-1990), the businessman who took over
CBS
David Sarnoff (1891-1971), the founder of RCA
Color film
The invention: A photographic medium used to take full-color pictures.
The people behind the invention:
Rudolf Fischer (1881-1957), a German chemist
H. Siegrist (1885-1959), a German chemist and Fischer’s
collaborator
Benno Homolka (1877-1949), a German chemist
The Process Begins
Around the turn of the twentieth century, Arthur-Louis Ducos du
Hauron, a French chemist and physicist, proposed a tripack (three-layer)
process of film development in which three color negatives
would be taken by means of superimposed films. This was a subtractive
process. (In the “additive method” of making color pictures,
the three colors are added in projection—that is, the colors are formed
by the mixture of colored light of the three primary hues. In the
“subtractive method,” the colors are produced by the superposition
of prints.) In Ducos du Hauron’s process, the blue-light negative
would be taken on the top film of the pack; a yellow filter below it
would transmit the yellow light, which would reach a green-sensitive
film and then fall upon the bottom of the pack, which would be sensitive
to red light. Tripacks of this type were unsatisfactory, however,
because the light became diffused in passing through the emulsion
layers, so the green and red negatives were not sharp.
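The additive/subtractive distinction described above can be put in numeric form: additive mixing sums colored light, while subtractive superposition starts from white and lets each dye layer transmit only part of the spectrum. The RGB triples and dye transmittances below are illustrative values, not measurements of any real dye:

```python
def additive(*lights):
    """Mix light sources: channels add, clipped to 255."""
    return tuple(min(255, sum(c)) for c in zip(*lights))

def subtractive(white, *dyes):
    """Superimpose dye layers: each layer transmits a fraction of
    each channel of the light passing through it."""
    color = list(white)
    for dye in dyes:
        color = [round(c * t) for c, t in zip(color, dye)]
    return tuple(color)

red, green = (255, 0, 0), (0, 255, 0)
print(additive(red, green))               # (255, 255, 0): yellow light

white = (255, 255, 255)
cyan = (0.0, 1.0, 1.0)                    # cyan dye absorbs red
magenta = (1.0, 0.0, 1.0)                 # magenta dye absorbs green
print(subtractive(white, cyan, magenta))  # (0, 0, 255): blue remains
```

This is why subtractive print processes work in cyan, magenta, and yellow: each dye removes one primary, and stacking them carves the final color out of white light.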
To obtain the real advantage of a tripack, the three layers must
be coated one over the other so that the distance between the blue-sensitive
and red-sensitive layers is a small fraction of a thousandth
of an inch. Tripacks of this type were suggested by the early pioneers
of color photography, who had the idea that the packs would
be separated into three layers for development and printing. The
manipulation of such systems proved to be very difficult in practice.
It was also suggested, however, that it might be possible to develop
such tripacks as a unit and then, by chemical treatment, convert the
silver images into dye images.
Fischer’s Theory
One of the earliest subtractive tripack methods that seemed to
hold great promise was that suggested by Rudolf Fischer in 1912. He
proposed a tripack that would be made by coating three emulsions
on top of one another; the lowest one would be red-sensitive, the
middle one would be green-sensitive, and the top one would be blue-sensitive.
Chemical substances called “couplers,” which would produce
dyes in the development process, would be incorporated into
the layers. In this method, the molecules of the developing agent, after
becoming oxidized by developing the silver image, would react
with the unoxidized form (the coupler) to produce the dye image.
The two types of developing agents described by Fischer are
p-aminophenol and p-phenylenediamine (or their derivatives).
The five types of dye that Fischer discovered are formed when silver
images are developed by these two developing agents in the presence
of suitable couplers. The five classes of dye he used (indophenols,
indoanilines, indamines, indothiophenols, and azomethines)
were already known when Fischer did his work, but it was he who
discovered that the photographic latent image could be used to promote
their formation from “coupler” and “developing agent.”
The indoaniline and azomethine types have been found to possess
the necessary properties, but the other three suffer from serious defects.
Because only p-phenylenediamine and its derivatives can be
used to form the indoaniline and azomethine dyes, it has become
the most widely used color developing agent.

Impact
In the early 1920’s, Leopold Mannes and Leopold Godowsky
made a great advance beyond the Fischer process. Working on a
new process of color photography, they adopted coupler development,
but instead of putting couplers into the emulsion as Fischer
had, they introduced them during processing. Finally, in 1935, the
film was placed on the market under the name “Kodachrome,” a
name that had been used for an early two-color process.
The first use of the new Kodachrome process in 1935 was for 16-millimeter
film. Color motion pictures could be made by the Kodachrome
process as easily as black-and-white pictures, because the
complex work involved (the color development of the film) was
done under precise technical control. The definition (quality of the
image) given by the process was soon sufficient to make it practical
for 8-millimeter pictures, and in 1936, Kodachrome film was introduced
in a 35-millimeter size for use in popular miniature cameras.
Soon thereafter, color processes were developed on a larger scale
and new color materials were rapidly introduced. In 1940, the Kodak
Research Laboratories worked out a modification of the Fischer
process in which the couplers were put into the emulsion layers.
These couplers are not dissolved in the gelatin layer itself, as the
Fischer couplers are, but are carried in small particles of an oily material
that dissolves the couplers, protects them from the gelatin,
and protects the silver bromide from any interaction with the couplers.
When development takes place, the oxidation product of the
developing agent penetrates into the organic particles and reacts
with the couplers so that the dyes are formed in small particles that
are dispersed throughout the layers. In one form of this material,
Ektachrome (originally intended for use in aerial photography), the
film is reversed to produce a color positive. It is first developed with
a black-and-white developer, then reexposed and developed with a
color developer that recombines with the couplers in each layer to
produce the appropriate dyes, all three of which are produced simultaneously
in one development.
In summary, although Fischer did not succeed in putting his theory
into practice, his work still forms the basis of most modern color
photographic systems. Not only did he demonstrate the general
principle of dye-coupling development, but the art is still mainly
confined to one of the two types of developing agent, and two of the
five types of dye, described by him.
COBOL computer language
The invention: The first user-friendly computer programming language,
COBOL was designed as a common, English-like language for business
data processing.
The people behind the invention:
Grace Murray Hopper (1906-1992), an American
mathematician
Howard Hathaway Aiken (1900-1973), an American
mathematician
Plain Speaking
Grace Murray Hopper, a mathematician, was a faculty member
at Vassar College when World War II (1939-1945) began. She enlisted
in the Navy and in 1943 was assigned to the Bureau of Ordnance
Computation Project, where she worked on ballistics problems.
In 1944, the Navy began using one of the first electronic
computers, the Automatic Sequence Controlled Calculator (ASCC),
designed by an International Business Machines (IBM) Corporation
team of engineers headed by Howard Hathaway Aiken, to solve
ballistics problems. Hopper became the third programmer of the
ASCC.
Hopper’s interest in computer programming continued after
the war ended. By the early 1950’s, Hopper’s work with programming
languages had led to her development of FLOW-MATIC, the
first English-language data processing compiler. Hopper’s work
on FLOW-MATIC paved the way for her later work with COBOL
(Common Business Oriented Language).
Until Hopper developed FLOW-MATIC, digital computer programming
was all machine-specific and was written in machine
code. A program designed for one computer could not be used on
another. Every program was both machine-specific and problem-specific
in that the programmer would be told what problem the
machine was going to be asked and then would write a completely
new program for that specific problem in the machine code.

Machine code was based on the programmer’s knowledge of the
physical characteristics of the computer as well as the requirements of
the problem to be solved; that is, the programmer had to know what
was happening within the machine as it worked through a series of calculations, which relays tripped when and in what order, and what
mathematical operations were necessary to solve the problem. Programming
was therefore a highly specialized skill requiring a unique
combination of linguistic, reasoning, engineering, and mathematical
abilities that not even all the mathematicians and electrical engineers
who designed and built the early computers possessed.
While every computer still operates in response to the programming,
or instructions, built into it, which are formatted in machine
code, modern computers can accept programs written in nonmachine
code—that is, in various automatic programming languages. They
are able to accept nonmachine code programs because specialized
programs now exist to translate those programs into the appropriate
machine code. These translating programs are known as “compilers,”
or “assemblers,” and FLOW-MATIC was the first such program.
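The idea behind such a translating program can be illustrated with a toy sketch. The English-like command syntax and the three-step machine sequence below are invented purely for illustration; they are not FLOW-MATIC's actual syntax or any real machine code.

```python
# Toy illustration of a compiler: one English-like command expands
# into several low-level steps (hypothetical opcodes, for illustration only).

def compile_command(command):
    """Translate a single high-level command into pseudo machine code."""
    parts = command.split()
    if parts[0] == "ADD":            # e.g. "ADD PRICE TO TOTAL"
        src, dest = parts[1], parts[3]
        return [
            f"LOAD  {src}",          # fetch the first operand
            f"ADDM  {dest}",         # add the second operand from memory
            f"STORE {dest}",         # write the result back
        ]
    raise ValueError(f"unknown command: {command}")

program = ["ADD PRICE TO TOTAL"]
machine_code = [step for cmd in program for step in compile_command(cmd)]
print(machine_code)
```

The point is the one-to-many expansion: the programmer writes a single word such as *add*, and the compiler emits the whole sequence of machine instructions that word stands for.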
Hopper developed FLOW-MATIC after realizing that it would
be necessary to eliminate unnecessary steps in programming to
make computers more efficient. FLOW-MATIC was based, in part,
on Hopper’s recognition that certain elements, or commands, were
common to many different programming applications. Hopper theorized
that it would not be necessary to write a lengthy series of instructions
in machine code to instruct a computer to begin a series of
operations; instead, she believed that it would be possible to develop
commands in an assembly language in such a way that a programmer
could write one command, such as the word add, that
would translate into a sequence of several commands in machine
code. Hopper’s successful development of a compiler to translate
programming languages into machine code thus meant that programming
became faster and easier. From assembly languages such
as FLOW-MATIC, it was a logical progression to the development of
high-level computer languages, such as FORTRAN (Formula Translation)
and COBOL.

The Language of Business
Between 1955 (when FLOW-MATIC was introduced) and 1959, a
number of attempts at developing a specific business-oriented language
were made. IBM and Remington Rand believed that the only
way to market computers to the business community was through the development of a language that business people would be
comfortable using. Remington Rand officials were especially committed
to providing a language that resembled English. None of
the attempts to develop a business-oriented language succeeded,
however, and by 1959 Hopper and other members of the U.S. Department
of Defense had persuaded representatives of various companies
of the need to cooperate.
On May 28 and 29, 1959, a conference sponsored by the Department
of Defense was held at the Pentagon to discuss the problem of
establishing a common language for the adaptation of electronic
computers for data processing. As a result, the first distribution of
COBOL was accomplished on December 17, 1959. Although many
people were involved in the development of COBOL, Hopper played
a particularly important role. She not only found solutions to technical
problems but also succeeded in selling the concept of a common
language from an administrative and managerial point of view. Hopper
recognized that while the companies involved in the commercial
development of computers were in competition with one another, the
use of a common, business-oriented language would contribute to
the growth of the computer industry as a whole, as well as simplify
the training of computer programmers and operators.
Consequences
COBOL was the first compiler developed for business data processing
operations. Its development simplified the training required
for computer users in business applications and demonstrated that
computers could be practical tools in government and industry as
well as in science. Prior to the development of COBOL, electronic
computers had been characterized as expensive, oversized adding
machines that were adequate for performing time-consuming mathematics
but lacked the flexibility that business people required.
In addition, the development of COBOL freed programmers not
only from the need to know machine code but also from the need to
understand the physical functioning of the computers they were using.
Programming languages could be written that were both machine-
independent and almost universally convertible from one
computer to another.

Finally, because Hopper and the other committee members worked
under the auspices of the Department of Defense, the software
was not copyrighted, and in a short period of time COBOL became
widely available to anyone who wanted to use it. It diffused rapidly
throughout the industry and contributed to the widespread adaptation
of computers for use in countless settings.
04 April 2009
Cloud seeding
The invention: Technique for inducing rainfall by distributing dry
ice or silver iodide into reluctant rain clouds.
The people behind the invention:
Vincent Joseph Schaefer (1906-1993), an American chemist and
meteorologist
Irving Langmuir (1881-1957), an American physicist and
chemist who won the 1932 Nobel Prize in Chemistry
Bernard Vonnegut (1914-1997), an American physical chemist
and meteorologist
Praying for Rain
Beginning in 1943, an intense interest in the study of clouds developed
into the practice of weather “modification.” Working for
the General Electric Research Laboratory, Nobel laureate Irving
Langmuir and his assistant researcher and technician, Vincent Joseph
Schaefer, began an intensive study of precipitation and its
causes.
Past research and study had indicated two possible ways that
clouds produce rain. The first possibility is called “coalescing,” a
process by which tiny droplets of water vapor in a cloud merge after
bumping into one another and become heavier and fatter until they
drop to earth. The second possibility is the “Bergeron process” of
droplet growth, named after the Swedish meteorologist Tor Bergeron.
Bergeron’s process relates to supercooled clouds, or clouds
that are at or below freezing temperatures and yet still contain both
ice crystals and liquid water droplets. The size of the water droplets
allows the droplets to remain liquid despite freezing temperatures;
while small droplets can remain liquid only down to -4 degrees Celsius,
larger droplets may not freeze until reaching -15 degrees
Celsius. Precipitation occurs when the ice crystals become heavy
enough to fall. If the temperature at some point below the cloud is
warm enough, it will melt the ice crystals before they reach the
earth, producing rain. If the temperature remains at or below freezing, the ice crystals retain their form and fall as snow.
Schaefer used a deep-freezing unit in order to observe water
droplets in pure cloud form. In order to observe the droplets better,
Schaefer lined the chest with black velvet and concentrated a beam
of light inside. The first agent he introduced inside the supercooled
freezer was his own breath. When that failed to form the desired ice
crystals, he proceeded to try other agents. His hope was to form ice
crystals that would then cause the moisture in the surrounding air
to condense into more ice crystals, which would produce a miniature
snowfall.
He eventually achieved success when he tossed a handful of dry
ice inside and was rewarded with the long-awaited snow. The
freezer was set below the freezing point of water, 0 degrees Celsius,
yet the water droplets inside remained liquid in supercooled form;
when the dry ice (at about -78 degrees Celsius) was introduced, the
stray water droplets froze instantly, producing ice crystals, or
snowflakes.
Planting the First Seeds
On November 13, 1946, Schaefer took to the air over Mount
Greylock with several pounds of dry ice in order to repeat the experiment
in nature. After he had finished sprinkling, or seeding, a
supercooled cloud, he instructed the pilot to fly underneath the
cloud he had just seeded. Schaefer was greeted by the sight of snow.
By the time it reached the ground, it had melted into the first-ever
human-made rainfall.
Independently of Schaefer and Langmuir, another General Electric
scientist, Bernard Vonnegut, was also seeking a way to cause
rain. He found that silver iodide crystals, which have the same size
and shape as ice crystals, could “fool” water droplets into condensing
on them. When a certain chemical mixture containing silver iodide
is heated on a special burner called a “generator,” silver iodide
crystals appear in the smoke of the mixture. Vonnegut’s discovery
allowed seeding to occur in a way very different from seeding with
dry ice, but with the same result. Using Vonnegut’s process, the
seeding is done from the ground. The generators are placed outside
and the chemicals are mixed. As the smoke wafts upward, it carries
the newly formed silver iodide crystals with it into the clouds.
The results of the scientific experiments by Langmuir, Vonnegut,
and Schaefer were alternately hailed and rejected as legitimate.
Critics argue that the process of seeding is too complex and
would require more than just the addition of dry ice or silver
iodide to produce rain. One of the major problems surrounding
the question of weather modification by cloud seeding is
the scarcity of knowledge about the earth’s atmosphere. A journey
begun about fifty years ago is still a long way from being completed.
Impact
Although the actual statistical and other proofs needed to support
cloud seeding are lacking, the discovery in 1946 by the General
Electric employees set off a wave of interest and demand for information
that far surpassed the interest generated by the discovery of
nuclear fission shortly before. The possibility of ending drought
and, in the process, hunger excited many people. The discovery also
prompted both legitimate and false “rainmakers” who used the information
gathered by Schaefer, Langmuir, and Vonnegut to set up
cloud-seeding businesses.

Weather modification, in its current stage
of development, cannot be used to end worldwide drought. It does,
however, have beneficial results in some cases on the crops of
smaller farms that have been affected by drought.
In order to understand the advances made in weather modification,
new instruments are needed to record accurately the results of
further experimentation. The storm of interest—both favorable and
unfavorable—generated by the discoveries of Schaefer, Langmuir,
and Vonnegut has had and will continue to have far-reaching effects
on many aspects of society.
25 March 2009
Cloning
The invention: Experimental technique for creating exact duplicates
of living organisms by recreating their DNA.
The people behind the invention:
Ian Wilmut, an embryologist with the Roslin Institute
Keith H. S. Campbell, an experiment supervisor with the Roslin
Institute
J. McWhir, a researcher with the Roslin Institute
W. A. Ritchie, a researcher with the Roslin Institute
Making Copies
On February 22, 1997, officials of the Roslin Institute, a biological
research institution near Edinburgh, Scotland, held a press conference
to announce startling news: They had succeeded in creating
a clone—a biologically identical copy—from cells taken from
an adult sheep. Although cloning had been performed previously
with simpler organisms, the Roslin Institute experiment marked
the first time that a large, complex mammal had been successfully
cloned.
Cloning, or the production of genetically identical individuals,
has long been a staple of science fiction and other popular literature.
Clones do exist naturally, as in the example of identical twins. Scientists
have long understood the process by which identical twins
are created, and agricultural researchers have often dreamed of a
method by which cheap identical copies of superior livestock could
be created.
The discovery of the double helix structure of deoxyribonucleic
acid (DNA), or the genetic code, by James Watson and Francis Crick
in the 1950’s led to extensive research into cloning and genetic engineering.
Using the discoveries of Watson and Crick, scientists were
soon able to develop techniques to clone laboratory mice; however,
the cloning of complex, valuable animals such as livestock proved
to be hard going.
Early versions of livestock cloning were technical attempts at duplicating the natural process of fertilized egg splitting that leads to the
birth of identical twins. Artificially inseminated eggs were removed,
split, and then reinserted into surrogate mothers. This method proved
to be overly costly for commercial purposes, a situation aggravated by
a low success rate.
Nuclear Transfer
Researchers at the Roslin Institute found these earlier attempts to
be fundamentally flawed. Even if the success rate could be improved,
the number of clones created (of sheep, in this case) would
still be limited. The Scots, led by embryologist Ian Wilmut and experiment
supervisor Keith Campbell, decided to take an entirely
different approach. The result was the first live birth of a mammal
produced through a process known as “nuclear transfer.”
Nuclear transfer involves the replacement of the nucleus of an
immature egg with a nucleus taken from another cell. In previous attempts
at nuclear transfer, cells from a single embryo were divided
up and implanted into eggs. Because a sheep embryo has only
about forty usable cells, this method also proved limiting.
The Roslin team therefore decided to grow their own cells in a
laboratory culture. They took more mature embryonic cells than
those previously used, and they experimented with the use of a nutrient
mixture. One of their breakthroughs occurred when they discovered
that these “cell lines” grew much more quickly when certain
nutrients were absent.

Using this technique, the Scots were able to produce a theoretically
unlimited number of genetically identical cell lines. The next
step was to transfer the cell lines of the sheep into the nucleus of unfertilized
sheep eggs.
First, 277 nuclei with a full set of chromosomes were transferred
to the unfertilized eggs. An electric shock was then used to cause the
eggs to begin development, the shock performing the duty of fertilization.
Of these eggs, twenty-nine developed enough to be inserted
into surrogate mothers.
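The attrition in the experiment can be tallied directly from the figures just given; the short sketch below simply computes the success rates they imply.

```python
# Success rates implied by the figures in the text: 277 reconstructed eggs,
# 29 embryos developed enough to implant, and a single live birth (Dolly).
eggs_reconstructed = 277
embryos_implanted = 29
live_births = 1

implant_rate = embryos_implanted / eggs_reconstructed   # fraction reaching implantation
birth_rate = live_births / eggs_reconstructed           # overall success per egg

print(f"implanted: {implant_rate:.1%}, born: {birth_rate:.2%}")
```

Roughly one egg in ten reached implantation, and well under one percent produced a live lamb, which is the "extremely high failure rate" the researchers themselves emphasized.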
All the embryos died before birth except one: a ewe the scientists
named “Dolly.” Her birth on July 5, 1996, was witnessed by only a
veterinarian and a few researchers. Not until the clone had survived
the critical earliest stages of life was the success of the experiment
disclosed; Dolly was more than seven months old by the time her
birth was announced to a startled world.

Impact
The news that the cloning of sophisticated organisms had left the
realm of science fiction and become a matter of accomplished scientific
fact set off an immediate uproar. Ethicists and media commentators
quickly began to debate the moral consequences of the use—
and potential misuse—of the technology. Politicians in numerous
countries responded to the news by calling for legal restrictions on
cloning research. Scientists, meanwhile, speculated about the possible
benefits and practical limitations of the process.
The issue that stirred the imagination of the broader public and
sparked the most spirited debate was the possibility that similar experiments
might soon be performed using human embryos. Although
most commentators seemed to agree that such efforts would
be profoundly immoral, many experts observed that they would be
virtually impossible to prevent. “Could someone do this tomorrow
morning on a human embryo?” Arthur L. Caplan, the director of the
University of Pennsylvania’s bioethics center, asked reporters. “Yes.
It would not even take too much science. The embryos are out
there.”
Such observations conjured visions of a future that seemed marvelous
to some, nightmarish to others. Optimists suggested that the best and brightest of humanity could be forever perpetuated, creating
an endless supply of Albert Einsteins and Wolfgang Amadeus
Mozarts. Pessimists warned of a world overrun by clones of self-serving
narcissists and petty despots, or of the creation of a secondary
class of humans to serve as organ donors for their progenitors.
The Roslin Institute’s researchers steadfastly proclaimed their
own opposition to human experimentation. Moreover, most scientists
were quick to point out that such scenarios were far from realization,
noting the extremely high failure rate involved in the creation
of even a single sheep. In addition, most experts emphasized
more practical possible uses of the technology: improving agricultural
stock by cloning productive and disease-resistant animals, for
example, or regenerating endangered or even extinct species. Even
such apparently benign schemes had their detractors, however, as
other observers remarked on the potential dangers of thus narrowing
a species’ genetic pool.
Even prior to the Roslin Institute’s announcement, most European
nations had adopted a bioethics code that flatly prohibited genetic
experiments on human subjects. Ten days after the announcement,
U.S. president Bill Clinton issued an executive order that
banned the use of federal money for human cloning research, and
he called on researchers in the private sector to refrain from such experiments
voluntarily. Nevertheless, few observers doubted that
Dolly’s birth marked only the beginning of an intriguing—and possibly
frightening—new chapter in the history of science.
20 March 2009
Cell phone
The invention: Mobile telephone system controlled by computers
to use a region’s radio frequencies, or channels, repeatedly,
thereby accommodating large numbers of users.
The people behind the invention:
William Oliver Baker (1915- ), the president of Bell
Laboratories
Richard H. Frenkiel, the head of the mobile systems
engineering department at Bell
10 March 2009
CAT scanner
The invention: A technique that collects X-ray data from solid,
opaque masses such as human bodies and uses a computer to
construct a three-dimensional image.
The people behind the invention:
Godfrey Newbold Hounsfield (1919- ), an English
electronics engineer who shared the 1979 Nobel Prize in
Physiology or Medicine
Allan M. Cormack (1924-1998), a South African-born American
physicist who shared the 1979 Nobel Prize in Physiology or
Medicine
James Ambrose, an English radiologist
Cassette recording
The invention: Self-contained system making it possible to record
and repeatedly play back sound without having to thread tape
through a machine.
The person behind the invention:
Fritz Pfleumer, a German engineer whose work on audiotapes
paved the way for audiocassette production
Smaller Is Better
The introduction of magnetic audio recording tape in 1929 was
met with great enthusiasm, particularly in the entertainment industry,
and specifically among radio broadcasters. Although somewhat
practical methods for recording and storing sound for later playback
had been around for some time, audiotape was much easier to
use, store, and edit, and much less expensive to produce.
It was Fritz Pfleumer, a German engineer, who in 1929 filed the
first audiotape patent. His detailed specifications indicated that
tape could be made by bonding a thin coating of oxide to strips of either
paper or film. Pfleumer also suggested that audiotape could be
attached to filmstrips to provide higher-quality sound than was
available with the film sound technologies in use at that time. In
1935, the German electronics firm AEG produced a reliable prototype
of a record-playback machine based on Pfleumer’s idea. By
1947, the American company 3M had refined the concept to the
point where it was able to produce a high-quality tape using a plastic-
based backing and red oxide. The tape recorded and reproduced
sound with a high degree of clarity and dynamic range and would
soon become the standard in the industry.
Still, the tape was sold and used in a somewhat inconvenient
open-reel format. The user had to thread it through a machine and
onto a take-up reel. This process was somewhat cumbersome and
complicated for the layperson. For many years, sound-recording
technology remained a tool mostly for professionals.
In 1963, the first audiocassette was introduced by the Netherlands-based Philips NV company. This device could be inserted into
a machine without threading. Rewind and fast-forward were faster,
and it made no difference where the tape was stopped prior to the
ejection of the cassette. By contrast, open-reel audiotape required
that the tape be wound fully onto one or the other of the two reels
before it could be taken off the machine.
Technical advances allowed the cassette tape to be much narrower
than the tape used in open reels and also allowed the tape
speed to be reduced without sacrificing sound quality. Thus, the
cassette was easier to carry around, and more sound could be recorded
on a cassette tape. In addition, the enclosed cassette decreased
wear and tear on the tape and protected it from contamination.
Creating a Market
One of the most popular uses for audiocassettes was to record
music from radios and other audio sources for later playback. During
the 1970’s, many radio stations developed “all music” formats
in which entire albums were often played without interruption.
That gave listeners an opportunity to record the music for later
playback. At first, the music recording industry complained about
this practice, charging that unauthorized recording of music from
the radio was a violation of copyright laws. Eventually, the issue
died down as the same companies began to recognize this new, untapped
market for recorded music on cassette.
Audiocassettes, all based on the original Philips design, were being
manufactured by more than sixty companies within only a few
years of their introduction. In addition, spin-offs of that design were
being used in many specialized applications, including dictation,
storage of computer information, and surveillance. The emergence
of videotape resulted in a number of formats for recording and
playing back video based on the same principle. Although each is
characterized by different widths of tape, each uses the same technique
for tape storage and transport.
The cassette has remained a popular means of storing and retrieving
information on magnetic tape for more than a quarter of a
century. During the early 1990’s, digital technologies such as audio
CDs (compact discs) and the more advanced CD-ROM (compact discs that reproduce sound, text, and images via computer) were beginning
to store information in revolutionary new ways. With the
development of this increasingly sophisticated technology, need for
the audiocassette, once the most versatile, reliable, portable, and
economical means of recording, storing, and playing back sound,
became more limited.
Consequences
The cassette represented a new level of convenience for the audiophile,
resulting in a significant increase in the use of recording
technology in all walks of life. Even small children could operate
cassette recorders and players, which led to their use in schools for a
variety of instructional tasks and in the home for entertainment. The
recording industry realized that audiotape cassettes would allow
consumers to listen to recorded music in places where record players
were impractical: in automobiles, at the beach, even while camping.
The industry also saw the need for widespread availability of
music and information on cassette tape. It soon began distributing
albums on audiocassette in addition to the long-play vinyl discs,
and recording sales increased substantially. This new technology
put recorded music into automobiles for the first time, again resulting
in a surge in sales for recorded music. Eventually, information,
including language instruction and books-on-tape, became popular
commuter fare.
With the invention of the microchip, audiotape players became
available in smaller and smaller sizes, making them truly portable.
Audiocassettes underwent another explosion in popularity during
the early 1980’s, when the Sony Corporation introduced the
Walkman, an extremely compact, almost weightless cassette player
that could be attached to clothing and used with lightweight earphones
virtually anywhere. At the same time, cassettes were suddenly
being used with microcomputers for backing up magnetic
data files.
Home video soon exploded onto the scene, bringing with it new
applications for cassettes. As had happened with audiotape, video
camera-recorder units, called “camcorders,” were miniaturized to
the point where 8-millimeter videocassettes capable of recording up to 90 minutes of live action and sound were widely available. These
cassettes closely resembled the audiocassette first introduced in
1963.
Carbon dating
The invention: A technique that measures the radioactive decay of
carbon 14 in organic substances to determine the ages of artifacts
as old as ten thousand years.
The people behind the invention:
Willard Frank Libby (1908-1980), an American chemist who won
the 1960 Nobel Prize in Chemistry
Charles Wesley Ferguson (1922-1986), a scientist who
demonstrated that carbon 14 dates before 1500 b.c.e. needed to
be corrected
One in a Trillion
Carbon dioxide in the earth’s atmosphere contains a mixture of
three carbon isotopes (isotopes are atoms of the same element that
contain different numbers of neutrons), which occur in the following
percentages: about 99 percent carbon 12, about 1 percent carbon
13, and approximately one atom in a trillion of radioactive carbon
14. Plants absorb carbon dioxide from the atmosphere during photosynthesis,
and then animals eat the plants, so all living plants and
animals contain a small amount of radioactive carbon.
When a plant or animal dies, its radioactivity slowly decreases as
the radioactive carbon 14 decays. The time it takes for half of any radioactive
substance to decay is known as its “half-life.” The half-life
for carbon 14 is known to be about fifty-seven hundred years. The
carbon 14 activity will drop to one-half after one half-life, one-fourth
after two half-lives, one-eighth after three half-lives, and so
forth. After ten or twenty half-lives, the activity becomes too low to
be measurable. Coal and oil, which were formed from organic matter
millions of years ago, have long since lost any carbon 14 activity.
Wood samples from an Egyptian tomb or charcoal from a prehistoric
fireplace a few thousand years ago, however, can be dated with
good reliability from the leftover radioactivity.
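The decay rule described above can be inverted to date a sample: the remaining fraction of activity equals (1/2) raised to the power (age / half-life), so the age is the half-life times log2 of the reciprocal fraction. The sketch below assumes the roughly fifty-seven-hundred-year half-life quoted in the text.

```python
import math

HALF_LIFE_YEARS = 5_700  # approximate carbon 14 half-life used in the text

def age_from_activity(fraction_remaining):
    """Age of a sample, given the fraction of original carbon 14 activity left.

    Inverts fraction = (1/2) ** (age / half_life):
        age = half_life * log2(1 / fraction)
    """
    return HALF_LIFE_YEARS * math.log2(1 / fraction_remaining)

# After one half-life, half the activity remains; after two, one-fourth.
print(age_from_activity(0.5))    # one half-life
print(age_from_activity(0.25))   # two half-lives
```

After ten or twenty half-lives the remaining fraction is so small that the count rate disappears into background noise, which is why the method fails for coal and oil.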
In the 1940’s, the properties of radioactive elements were still
being discovered and were just beginning to be used to solve problems.
Scientists still did not know the half-life of carbon 14, and archaeologists still depended mainly on historical evidence to determine
the ages of ancient objects.
In early 1947, Willard Frank Libby started a crucial experiment in
testing for radioactive carbon. He decided to test samples of methane
gas from two different sources. One group of samples came
from the sewage disposal plant at Baltimore, Maryland, which was
rich in fresh organic matter. The other sample of methane came from
an oil refinery, which should have contained only ancient carbon
from fossils whose radioactivity should have completely decayed.
The experimental results confirmed Libby’s suspicions: The methane
from fresh sewage was radioactive, but the methane from oil
was not. Evidently, radioactive carbon was present in fresh organic
material, but it decays away eventually.
Tree-Ring Dating
In order to establish the validity of radiocarbon dating, Libby analyzed
known samples of varying ages. These included tree-ring
samples from the years 575 and 1075 and one redwood from 979
b.c.e., as well as artifacts from Egyptian tombs going back to about
3000 b.c.e. In 1949, he published an article in the journal Science that
contained a graph comparing the historical ages and the measured
radiocarbon ages of eleven objects. The results were accurate within
10 percent, which meant that the general method was sound.
The first archaeological object analyzed by carbon dating, obtained
from the Metropolitan Museum of Art in New York, was a
piece of cypress wood from the tomb of King Djoser of Egypt. Based
on historical evidence, the age of this piece of wood was about forty-six
hundred years. A small sample of carbon obtained from this
wood was deposited on the inside of Libby’s radiation counter, giving
a count rate that was about 40 percent lower than that of modern
organic carbon. The resulting age of the wood calculated from its residual
radioactivity was about thirty-eight hundred years, a difference
of eight hundred years. Considering that this was the first object
to be analyzed, even such a rough agreement with the historic
age was considered to be encouraging.
The validity of radiocarbon dating depends on an important assumption—
namely, that the abundance of carbon 14 in nature has been constant for many thousands of years. If carbon 14 was less
abundant at some point in history, organic samples from that era
would have started with less radioactivity. When analyzed today,
their reduced activity would make them appear to be older than
they really are.
Charles Wesley Ferguson from the Tree-Ring Research Laboratory
at the University of Arizona tackled this problem. He measured
the age of bristlecone pine trees both by counting the rings and by
using carbon 14 methods. He found that carbon 14 dates before
1500 b.c.e. needed to be corrected. The results show that radiocarbon
dates are older than tree-ring counting dates by as much as several
hundred years for the oldest samples. He knew that the number
of tree rings had given him the correct age of the pines, because trees
accumulate one ring of growth for every year of life. Apparently, the
carbon 14 content in the atmosphere has not been constant. Fortunately,
tree-ring counting gives reliable dates that can be used to
correct radiocarbon measurements back to about 6000 b.c.e.
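The correction that tree-ring counting makes possible can be pictured as a lookup of offsets between radiocarbon ages and dendrochronological ages. The offsets below are invented purely to illustrate the idea; they are not Ferguson's published calibration data.

```python
# Illustrative only: made-up offsets showing how radiocarbon dates can be
# calibrated against tree-ring (dendrochronology) dates.
CALIBRATION_OFFSETS = [
    # (radiocarbon age in years, correction in years to add)
    (0, 0),
    (2000, 100),
    (4000, 300),
    (6000, 600),
]


def calibrate(radiocarbon_age: float) -> float:
    """Linearly interpolate a correction and return the calibrated age."""
    pts = CALIBRATION_OFFSETS
    if radiocarbon_age <= pts[0][0]:
        return radiocarbon_age + pts[0][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if radiocarbon_age <= x1:
            frac = (radiocarbon_age - x0) / (x1 - x0)
            return radiocarbon_age + y0 + frac * (y1 - y0)
    return radiocarbon_age + pts[-1][1]


print(calibrate(3000))  # halfway between the 100- and 300-year offsets -> 3200.0
```

Real calibration curves are built the same way in spirit: measured radiocarbon ages of tree rings of known calendar age supply the table, and unknown samples are corrected against it.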
Impact
Some interesting samples were dated by Libby’s group. The
Dead Sea Scrolls had been found in a cave by an Arab shepherd in
1947, but some Bible scholars at first questioned whether they were
genuine. The linen wrapping from the Book of Isaiah was tested for
carbon 14, giving a date of 100 b.c.e., which helped to establish its
authenticity. Human hair from an Egyptian tomb was determined
to be nearly five thousand years old. Well-preserved sandals from a
cave in eastern Oregon were determined to be ninety-three hundred
years old. A charcoal sample from a prehistoric site in western
South Dakota was found to be about seven thousand years old.
The Shroud of Turin, located in Turin, Italy, has been a controversial
object for many years. It is a linen cloth, more than four meters
long, which shows the image of a man’s body, both front and back.
Some people think it may have been the burial shroud of Jesus
Christ after his crucifixion. A team of scientists in 1978 was permitted
to study the shroud, using infrared photography, analysis of
possible blood stains, microscopic examination of the linen fibers,
and other methods. The results were ambiguous. A carbon 14 test
was not permitted because it would have required cutting a piece
about the size of a handkerchief from the shroud.
A new method of measuring carbon 14 was developed in the late
1980’s. It is called “accelerator mass spectrometry,” or AMS. Unlike
Libby’s method, it does not count the radioactivity of carbon. Instead, a mass spectrometer directly measures the ratio of carbon 14
to ordinary carbon. The main advantage of this method is that the
sample size needed for analysis is about a thousand times smaller
than before. The archbishop of Turin permitted three laboratories
with the appropriate AMS apparatus to test the shroud material.
The results agreed that the material was from the fourteenth century,
not from the time of Christ. The figure on the shroud may be a
watercolor painting on linen.
Since Libby’s pioneering experiments in the late 1940’s, carbon
14 dating has established itself as a reliable dating technique for archaeologists
and cultural historians. Further improvements are expected
to increase precision, to make it possible to use smaller samples,
and to extend the effective time range of the method back to
fifty thousand years or earlier.
05 March 2009
CAD/CAM
The invention: Computer-Aided Design (CAD) and Computer-
Aided Manufacturing (CAM) enhanced flexibility in engineering
design, leading to higher quality and reduced time for manufacturing.
The people behind the invention:
Patrick Hanratty, a General Motors Research Laboratory
worker who developed graphics programs
Jack St. Clair Kilby (1923- ), a Texas Instruments employee
who first conceived of the idea of the integrated circuit
Robert Noyce (1927-1990), an Intel Corporation employee who
developed an improved process of manufacturing
integrated circuits on microchips
Don Halliday, an early user of CAD/CAM who created the
Made-in-America car in only four months by using CAD
and project management software
Fred Borsini, an early user of CAD/CAM who demonstrated
its power
Summary of Event
Computer-Aided Design (CAD) is a technique whereby geometrical
descriptions of two-dimensional (2-D) or three-dimensional (3-
D) objects can be created and stored, in the form of mathematical
models, in a computer system. Points, lines, and curves are represented
as graphical coordinates. When a drawing is requested from
the computer, transformations are performed on the stored data,
and the geometry of a part or a full view from either a two- or a
three-dimensional perspective is shown. CAD systems replace the
tedious process of manual drafting, and computer-aided drawing
and redrawing that can be retrieved when needed has improved
drafting efficiency. A CAD system is a combination of computer
hardware and software that facilitates the construction of geometric
models and, in many cases, their analysis. It allows a wide variety of
visual representations of those models to be displayed.
Computer-Aided Manufacturing (CAM) refers to the use of computers
to control, wholly or partly, manufacturing processes. In
practice, the term is most often applied to computer-based developments
of numerical control technology; robots and flexible manufacturing
systems (FMS) are included in the broader use of CAM
systems. A CAD/CAM interface is envisioned as a computerized
database that can be accessed and enriched by either design or manufacturing
professionals during various stages of the product development
and production cycle.
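The core CAD idea described above, geometry stored as coordinates and drawings produced by transforming the stored data, can be sketched in a few lines. The names and the 2-D-only scope here are illustrative, not any vendor's API.

```python
import math

# A CAD model reduced to its essence: a list of stored 2-D points.
Point = tuple[float, float]


def rotate(points: list[Point], angle_deg: float) -> list[Point]:
    """Rotate stored 2-D points about the origin (one simple 'view' transform)."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(x * c - y * s, x * s + y * c) for x, y in points]


# A unit square stored as graphical coordinates, then redrawn rotated.
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
rotated = rotate(square, 90.0)
# The corner (1, 0), rotated 90 degrees counterclockwise, lands at (0, 1).
print(round(rotated[1][0], 6), round(rotated[1][1], 6))
```

Production systems extend the same principle to 3-D, to curves and solids, and to the sectioning and assembly operations the text describes, but the underlying representation remains coordinates plus transformations.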
In CAD systems of the early 1990’s, the ability to model solid objects
became widely available. The use of graphic elements such as
lines and arcs and the ability to create a model by adding and subtracting
solids such as cubes and cylinders are the basic principles of
CAD and of simulating objects within a computer. CAD systems enable
computers to simulate both taking things apart (sectioning)
and putting things together for assembly. In addition to being able
to construct prototypes and store images of different models, CAD
systems can be used for simulating the behavior of machines, parts,
and components. These abilities enable CAD to construct models
that can be subjected to nondestructive testing; that is, even before
engineers build a physical prototype, the CAD model can be subjected
to testing and the results can be analyzed. As another example,
designers of printed circuit boards have the ability to test their
circuits on a CAD system by simulating the electrical properties of
components.
During the 1950’s, the U.S. Air Force recognized the need for reducing
the development time for special aircraft equipment. As a
result, the Air Force commissioned the Massachusetts Institute of
Technology to develop numerically controlled (NC) machines that
were programmable. A workable demonstration of NC machines
was made in 1952; this began a new era for manufacturing. As the
speed of an aircraft increased, the cost of manufacturing also increased
because of stricter technical requirements. This higher cost
provided a stimulus for the further development of NC technology,
which promised to reduce errors in design before the prototype
stage.
The early 1960’s saw the development of mainframe computers.
Many industries valued computing technology for its speed and for its accuracy in lengthy and tedious numerical operations in design,
manufacturing, and other business functional areas. Patrick
Hanratty, working for General Motors Research Laboratory, saw
other potential applications and developed graphics programs for
use on mainframe computers. The use of graphics in software aided
the development of CAD/CAM, allowing visual representations of
models to be presented on computer screens and printers.
The 1970’s saw an important development in computer hardware,
namely the development and growth of personal computers
(PCs). Personal computers became smaller as a result of the development
of integrated circuits. Jack St. Clair Kilby, working for Texas
Instruments, first conceived of the integrated circuit; later, Robert
Noyce, working for Intel Corporation, developed an improved process
of manufacturing integrated circuits on microchips. Personal
computers using these microchips offered both speed and accuracy
at costs much lower than those of mainframe computers.
Five companies offered integrated commercial computer-aided
design and computer-aided manufacturing systems by the first half
of 1973. Integration meant that both design and manufacturing
were contained in one system. Of these five companies—Applicon,
Computervision, Gerber Scientific, Manufacturing and Consulting
Services (MCS), and United Computing—four offered turnkey systems
exclusively. Turnkey systems provide design, development,
training, and implementation for each customer (company) based
on the contractual agreement; they are meant to be used as delivered,
with no need for the purchaser to make significant adjustments
or perform programming.
The 1980’s saw a proliferation of mini- and microcomputers with
a variety of platforms (processors) with increased speed and better
graphical resolution. This made the widespread development of
computer-aided design and computer-aided manufacturing possible
and practical. Major corporations spent large research and development
budgets developing CAD/CAM systems that would
automate manual drafting and machine tool movements. Don Halliday,
working for Truesports Inc., provided an early example of the
benefits of CAD/CAM. He created the Made-in-America car in only
four months by using CAD and project management software. In
the late 1980’s, Fred Borsini, the president of Leap Technologies in Michigan, brought various products to market in record time through
the use of CAD/CAM.
In the early 1980’s, much of the CAD/CAM industry consisted of
software companies. The cost for a relatively slow interactive system
in 1980 was close to $100,000. The late 1980’s saw the demise of
minicomputer-based systems in favor of Unix work stations and
PCs based on 386 and 486 microchips produced by Intel. By the time
of the International Manufacturing Technology show in September,
1992, the industry could show numerous CAD/CAM innovations
including tools, CAD/CAM models to evaluate manufacturability
in early design phases, and systems that allowed use of the same
data for a full range of manufacturing functions.
Impact
In 1990, CAD/CAM hardware sales by U.S. vendors reached
$2.68 billion. In software alone, $1.42 billion worth of CAD/CAM
products and systems were sold worldwide by U.S. vendors, according
to International Data Corporation figures for 1990. CAD/
CAM systems were in widespread use throughout the industrial
world. Development lagged in advanced software applications,
particularly in image processing, and in the communications software
and hardware that ties processes together.
A reevaluation of CAD/CAM systems was being driven by the
industry trend toward increased functionality of computer-driven
numerically controlled machines. Numerical control (NC) software
enables users to graphically define the geometry of the parts in a
product, develop paths that machine tools will follow, and exchange
data among machines on the shop floor. In 1991, NC configuration
software represented 86 percent of total CAM sales. In 1992,
the market shares of the five largest companies in the CAD/CAM
market were 29 percent for International Business Machines, 17 percent
for Intergraph, 11 percent for Computervision, 9 percent for
Hewlett-Packard, and 6 percent for Mentor Graphics.
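The NC workflow described above, defining part geometry and developing the paths that machine tools will follow, can be sketched as a tiny G-code emitter. The coordinates, feed rate, and rectangular part are invented for illustration.

```python
# Illustrative NC sketch: emit G-code linear moves (G01) tracing the outline
# of a rectangular part. Feed rate and dimensions are made-up examples.
def rectangle_path(width: float, height: float, feed: float) -> list[str]:
    """Return G-code commands tracing a closed rectangle from the origin."""
    corners = [(0, 0), (width, 0), (width, height), (0, height), (0, 0)]
    return [f"G01 X{x:.3f} Y{y:.3f} F{feed:.0f}" for x, y in corners]


for line in rectangle_path(40.0, 25.0, 200.0):
    print(line)
# First move: G01 X0.000 Y0.000 F200
```

Exchanging such command files among machines on the shop floor is exactly the data-sharing role the text assigns to NC software.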
General Motors formed a joint venture with Ford and Chrysler to
develop a common computer language in order to make the next
generation of CAD/CAM systems easier to use. The venture was
aimed particularly at problems that posed barriers to speeding up the design of new automobiles. The three car companies all had sophisticated
computer systems that allowed engineers to design
parts on computers and then electronically transmit specifications
to tools that make parts or dies.
CAD/CAM technology was expected to advance on many fronts.
As of the early 1990’s, different CAD/CAM vendors had developed
systems that were often incompatible with one another, making it
difficult to transfer data from one system to another. Large corporations,
such as the major automakers, developed their own interfaces
and network capabilities to allow different systems to communicate.
Major users of CAD/CAM saw consolidation in the industry
through the establishment of standards as being in their interests.
Resellers of CAD/CAM products also attempted to redefine
their markets. These vendors provide technical support and service
to users. The sale of CAD/CAM products and systems offered substantial
opportunities, since demand remained strong. Resellers
worked most effectively with small and medium-sized companies,
which often were neglected by the primary sellers of CAD/CAM
equipment because they did not generate a large volume of business.
Some projections held that by 1995 half of all CAD/CAM systems
would be sold through resellers, at a cost of $10,000 or less for
each system. The CAD/CAM market thus was in the process of dividing
into two markets: large customers (such as aerospace firms
and automobile manufacturers) that would be served by primary
vendors, and small and medium-sized customers that would be serviced
by resellers.
CAD will find future applications in marketing, the construction
industry, production planning, and large-scale projects such as shipbuilding
and aerospace. Other likely CAD markets include hospitals,
the apparel industry, colleges and universities, food product
manufacturers, and equipment manufacturers. As the linkage between
CAD and CAM is enhanced, systems will become more productive.
The geometrical data from CAD will be put to greater use
by CAM systems.
CAD/CAM already had proved that it could make a big difference
in productivity and quality. Customer orders could be changed
much faster and more accurately than in the past, when a change
could require a manual redrafting of a design. Computers could do automatically in minutes what once took hours manually. CAD/
CAM saved time by reducing, and in some cases eliminating, human
error. Many flexible manufacturing systems (FMS) had machining
centers equipped with sensing probes to check the accuracy
of the machining process. These self-checks can be made part of numerical
control (NC) programs. With the technology of the early
1990’s, some experts estimated that CAD/CAM systems were in
many cases twice as productive as the systems they replaced; in the
long run, productivity is likely to improve even more, perhaps up to
three times that of older systems or even higher. As costs for CAD/
CAM systems concurrently fall, the investment in a system will be
recovered more quickly. Some analysts estimated that by the mid-
1990’s, the recovery time for an average system would be about
three years.
Another frontier in the development of CAD/CAM systems is
expert (or knowledge-based) systems, which combine data with a
human expert’s knowledge, expressed in the form of rules that the
computer follows. Such a system will analyze data in a manner
mimicking intelligence. For example, a 3-D model might be created
from standard 2-D drawings. Expert systems will likely play a
pivotal role in CAM applications. For example, an expert system
could determine the best sequence of machining operations to produce
a component.
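The rule-following behavior described above can be sketched as a toy expert system that picks a machining sequence from the features a part needs. The features, operations, and rules are invented examples, not any real knowledge base.

```python
# A toy knowledge base: each rule maps a part feature to required operations.
RULES = [
    ("hole", ["drill"]),
    ("flat_face", ["mill"]),
    ("outer_diameter", ["turn"]),
]


def machining_sequence(features: list[str]) -> list[str]:
    """Follow the rules in order, collecting each required operation once."""
    ops: list[str] = []
    for feature, required in RULES:
        if feature in features:
            for op in required:
                if op not in ops:
                    ops.append(op)
    return ops


print(machining_sequence(["outer_diameter", "hole"]))  # -> ['drill', 'turn']
```

A real system would encode far richer rules (tool availability, tolerances, setup costs), but the principle is the same: data analyzed by computer-followed rules that express a human expert's knowledge.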
Continuing improvements in hardware, especially increased
speed, will benefit CAD/CAM systems. Software developments,
however, may produce greater benefits. Wider use of CAD/CAM
systems will depend on the cost savings from improvements in
hardware and software as well as on the productivity of the systems
and the quality of their product. The construction, apparel,
automobile, and aerospace industries have already experienced
increases in productivity, quality, and profitability through the use
of CAD/CAM. A case in point is Boeing, which used CAD from
start to finish in the design of the 757.
Buna rubber
The invention: The first practical synthetic rubber product developed,
Buna inspired the creation of other synthetic substances
that eventually replaced natural rubber in industrial applications.
The people behind the invention:
Charles de la Condamine (1701-1774), a French naturalist
Charles Goodyear (1800-1860), an American inventor
Joseph Priestley (1733-1804), an English chemist
Charles Greville Williams (1829-1910), an English chemist
A New Synthetic Rubber
The discovery of natural rubber is often credited to the French
scientist Charles de la Condamine, who, in 1736, sent the French
Academy of Science samples of an elastic material used by Peruvian
Indians to make balls that bounced. The material was primarily a
curiosity until 1770, when Joseph Priestley, an English chemist, discovered
that it rubbed out pencil marks, after which he called it
“rubber.” Natural rubber, made from the sap of the rubber tree
(Hevea brasiliensis), became important after Charles Goodyear discovered
in 1839 that heating rubber with sulfur (a process called
“vulcanization”) made it more elastic and easier to use. Vulcanized
natural rubber came to be used to make raincoats, rubber bands,
and motor vehicle tires.
Natural rubber is difficult to obtain (making one tire requires
the amount of rubber produced by one tree in two years), and wars
have often cut off supplies of this material to various countries.
Therefore, efforts to manufacture synthetic rubber began in the
late eighteenth century. Those efforts followed the discovery by
English chemist Charles Greville Williams and others in the 1860’s
that natural rubber was composed of thousands of molecules of a
chemical called isoprene that had been joined to form giant, necklace-
like molecules. The first successful synthetic rubber, Buna,
was patented by Germany’s I. G. Farben Industrie in 1926. The success of this rubber led to the development of many other synthetic
rubbers, which are now used in place of natural rubber in many
applications.
From Erasers to Gas Pumps
Natural rubber belongs to the group of chemicals called “polymers.”
A polymer is a giant molecule that is made up of many simpler
chemical units (“monomers”) that are attached chemically to
form long strings. In natural rubber, the monomer is isoprene
(2-methyl-1,3-butadiene). The first efforts to make a synthetic rubber
used the discovery that isoprene could be made and converted
into an elastic polymer. The synthetic rubber that was created from
isoprene was, however, inferior to natural rubber. The first Buna
rubber, which was patented by I. G. Farben in 1926, was better, but it
was still less than ideal. Buna rubber was made by polymerizing the
monomer butadiene in the presence of sodium. The name Buna
comes from the first two letters of the words “butadiene” and “natrium”
(German for sodium). Natural and Buna rubbers are called
homopolymers because they contain only one kind of monomer.
The ability of chemists to make Buna rubber, along with its successful
use, led to experimentation with the addition of other monomers
to isoprene-like chemicals used to make synthetic rubber.
Among the first great successes were materials that contained two
alternating monomers; such materials are called “copolymers.” If
the two monomers are designated A and B, part of a polymer molecule
can be represented as (ABABABABABABABABAB). Numerous
synthetic copolymers, which are often called “elastomers,” now
replace natural rubber in applications where they have superior
properties. All elastomers are rubbers, since objects made from
them both stretch greatly when pulled and return quickly to their
original shape when the tension is released.
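The homopolymer/copolymer distinction described above can be pictured by reducing each monomer to a single letter, exactly as the text's (ABAB...) notation does. The single-letter representation is purely illustrative.

```python
# Monomers reduced to single letters, as in the text's (ABAB...) notation.
def homopolymer(monomer: str, n: int) -> str:
    """A chain built from one kind of monomer (e.g. natural or Buna rubber)."""
    return monomer * n


def alternating_copolymer(a: str, b: str, n_pairs: int) -> str:
    """A chain of two strictly alternating monomers (e.g. Buna-S, Buna-N)."""
    return (a + b) * n_pairs


print(homopolymer("A", 6))                 # -> AAAAAA
print(alternating_copolymer("A", "B", 9))  # -> ABABABABABABABABAB
```

Real copolymers are not always strictly alternating (random and block arrangements also exist), but the alternating pattern is the one the text describes.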
Two other well-known rubbers developed by I. G. Farben are the
copolymers called Buna-N and Buna-S. These materials combine butadiene
and the monomers acrylonitrile and styrene, respectively.
Many modern motor vehicle tires are made of synthetic rubber that
differs little from Buna-S rubber. This rubber was developed after
the United States was cut off in the 1940’s, during World War II,
from its Asian source of natural rubber. The solution to this problem
was the development of a synthetic rubber industry based on GR-S
rubber (government rubber plus styrene), which was essentially
Buna-S rubber. This rubber is still widely used.
Buna-S rubber is often made by mixing butadiene and styrene in
huge tanks of soapy water, stirring vigorously, and heating the mixture.
The polymer contains equal amounts of butadiene and styrene
(BSBSBSBSBSBSBSBS). When the molecules of the Buna-S polymer
reach the desired size, the polymerization is stopped and the rubber
is coagulated (solidified) chemically. Then, water and all the unused
starting materials are removed, after which the rubber is dried and
shipped to various plants for use in tires and other products. The
major difference between Buna-S and GR-S rubber is that the method
of making GR-S rubber involves the use of low temperatures.
Buna-N rubber is made in a fashion similar to that used for
Buna-S, using butadiene and acrylonitrile. Both Buna-N and the related
neoprene rubber, invented by Du Pont, are very resistant to gasoline
and other liquid vehicle fuels. For this reason, they can be used in
gas-pump hoses. All synthetic rubbers are vulcanized before they
are used in industry.
Impact
Buna rubber became the basis for the development of the other
modern synthetic rubbers. These rubbers have special properties
that make them suitable for specific applications. One developmental
approach involved the use of chemically modified butadiene in
homopolymers such as neoprene. Made of chloroprene (chlorobutadiene),
neoprene is extremely resistant to sun, air, and chemicals.
It is so widely used in machine parts, shoe soles, and hoses that
more than 400 million pounds are produced annually.
Another developmental approach involved copolymers that alternated
butadiene with other monomers. For example, the successful
Buna-N rubber (butadiene and acrylonitrile) has properties
similar to those of neoprene. It differs sufficiently from neoprene,
however, to be used to make items such as printing press rollers.
About 200 million pounds of Buna-N are produced annually. Some
4 billion pounds of the even more widely used polymer Buna-S/
GR-S are produced annually, most of which is used to make tires.
Several other synthetic rubbers have significant industrial applications,
and efforts to make copolymers for still other purposes continue.
20 February 2009
Bullet train
The invention: An ultrafast passenger railroad system capable of
moving passengers at speeds double or triple those of ordinary
trains.
The people behind the invention:
Ikeda Hayato (1899-1965), Japanese prime minister from 1960 to
1964, who pushed for the expansion of public expenditures
Shinji Sogo (1884-1981), the president of the Japanese National
Railways, the “father of the bullet train”
Building a Faster Train
By 1900, Japan had a world-class railway system, a logical result
of the country’s dense population and the needs of its modernizing
economy. After 1907, the government controlled the system
through the Japanese National Railways (JNR). In 1938, JNR engineers
first suggested the idea of a train that would travel 125 miles
per hour from Tokyo to the southern city of Shimonoseki. Construction
of a rapid train began in 1940 but was soon stopped because of
World War II.
The 311-mile railway between Tokyo and Osaka, the Tokaido
Line, has always been the major line in Japan. By 1957, a business express
along the line operated at an average speed of 57 miles per
hour, but the double-track line was rapidly reaching its transport capacity.
The JNR established two investigative committees to explore
alternative solutions. In 1958, the second committee recommended
the construction of a high-speed railroad on a separate double track,
to be completed in time for the Tokyo Olympics of 1964. The Railway
Technical Institute of the JNR concluded that it was feasible to
design a line that would operate at an average speed of about 130
miles per hour, cutting time for travel between Tokyo and Osaka
from six hours to three hours.
By 1962, about 17 miles of the proposed line were completed for
test purposes. During the next two years, prototype trains were
tested to correct flaws and make improvements in the design. The entire project was completed on schedule in July, 1964, with total construction
costs of more than $1 billion, double the original estimates.
The Speeding Bullet
Service on the Shinkansen, or New Trunk Line, began on October
1, 1964, ten days before the opening of the Olympic Games.
Commonly called the “bullet train” because of its shape and speed,
the Shinkansen was an instant success with the public, both in Japan
and abroad. As promised, the time required to travel between Tokyo
and Osaka was cut in half. Initially, the system provided daily
services of sixty trains consisting of twelve cars each, but the number
of scheduled trains was almost doubled by the end of the year.
The Shinkansen was able to operate at its unprecedented speed
because it was designed and operated as an integrated system,
making use of countless technological and scientific developments.
Tracks followed the standard gauge of 56.5 inches, rather than the
more narrow gauge common in Japan. For extra strength, heavy welded rails were attached directly onto reinforced concrete slabs.
The minimum radius of a curve was 8,200 feet, except where sharper
curves were mandated by topography. In many ways similar to
modern airplanes, the railway cars were made airtight in order to
prevent ear discomfort caused by changes in pressure when trains
enter tunnels.
The Shinkansen trains were powered by electric traction motors,
with four 185-kilowatt motors on each car—one motor attached to
each axle. This design had several advantages: It provided an even
distribution of axle load for reducing strain on the tracks; it allowed
the application of dynamic brakes (where the motor was used for
braking) on all axles; and it prevented the failure of one or two units
from interrupting operation of the entire train. The 25,000-volt electrical
current was carried by trolley wire to the cars, where it was
rectified into a pulsating current to drive the motors.
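The distributed-power arithmetic above is easy to check: four 185-kilowatt motors per car, one per axle, across the twelve-car train sets of the original service.

```python
# Traction power of an original twelve-car Shinkansen set, from the figures
# given in the text (four 185 kW motors per car, one per axle).
MOTORS_PER_CAR = 4
MOTOR_KW = 185
CARS = 12

per_car_kw = MOTORS_PER_CAR * MOTOR_KW
train_kw = per_car_kw * CARS
print(per_car_kw)  # -> 740
print(train_kw)    # -> 8880
```

Spreading nearly nine megawatts over forty-eight axles is what gives the even axle loading, all-axle dynamic braking, and failure tolerance the text describes.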
The Shinkansen system established a casualty-free record because
of its maintenance policies combined with its computerized
Centralized Traffic Control system. The control room at Tokyo Station
was designed to maintain timely information about the location
of all trains and the condition of all routes. Although train operators
had some discretion in determining speed, automatic brakes
also operated to ensure a safe distance between trains. At least once
each month, cars were thoroughly inspected; every ten days, an inspection
train examined the conditions of tracks, communication
equipment, and electrical systems.
Impact
Public usage of the Tokyo-Osaka bullet train increased steadily
because of the system’s high speed, comfort, punctuality, and superb
safety record. Businesspeople were especially happy that the
rapid service allowed them to make the round-trip without the necessity
of an overnight stay, and continuing modernization soon allowed
nonstop trains to make a one-way trip in two and one-half
hours, requiring speeds of 160 miles per hour in some stretches. By
the early 1970’s, the line was transporting a daily average of 339,000
passengers in 240 trains, meaning that a train departed from Tokyo
about every ten minutes.
The popularity of the Shinkansen system quickly resulted in demands
for its extension into other densely populated regions. In
1972, a 100-mile stretch between Osaka and Okayama was opened
for service. By 1975, the line was further extended to Hakata on the
island of Kyushu, passing through the Kammon undersea tunnel.
The cost of this 244-mile stretch was almost $2.5 billion. In 1982,
lines were completed from Tokyo to Niigata and from Tokyo to
Morioka. By 1993, the system had grown to 1,134 miles of track.
Since high usage made the system extremely profitable, the sale of
the JNR to private companies in 1987 did not appear to produce adverse
consequences.
The economic success of the Shinkansen had a revolutionary effect
on thinking about the possibilities of modern rail transportation,
leading one authority to conclude that the line acted as “a
savior of the declining railroad industry.” Several other industrial
countries were stimulated to undertake large-scale railway projects;
France, especially, followed Japan’s example by constructing highspeed
electric railroads from Paris to Nice and to Lyon. By the mid-
1980’s, there were experiments with high-speed trains based on
magnetic levitation and other radical innovations, but it was not
clear whether such designs would be able to compete with the
Shinkansen model.
Bubble memory
The invention: An early nonvolatile medium for storing information
on computers.
The person behind the invention:
Andrew H. Bobeck (1926- ), a Bell Telephone Laboratories
scientist
Magnetic Technology
The fanfare over the commercial prospects of magnetic bubbles
was begun on August 8, 1969, by a report appearing in both The New
York Times and The Wall Street Journal. The early 1970’s would see the
anticipation mount (at least in the computer world) with each prediction
of the benefits of this revolution in information storage technology.
Although it was not disclosed to the public until August of 1969,
magnetic bubble technology had held the interest of a small group
of researchers around the world for many years. The organization
that probably can claim the greatest research advances with respect
to computer applications of magnetic bubbles is Bell Telephone
Laboratories (later part of American Telephone and Telegraph). Basic
research into the properties of certain ferrimagnetic materials
started at Bell Laboratories shortly after the end of World War II
(1939-1945).
Ferrimagnetic substances are typically magnetic iron oxides. Research
into the properties of these and related compounds accelerated
after the discovery of ferrimagnetic garnets in 1956 (these are a
class of ferrimagnetic oxide materials that have the crystal structure
of garnet). Ferrimagnetism is similar to ferromagnetism, the phenomenon
that accounts for the strong attraction of one magnetized
body for another. The ferrimagnetic materials most suited for bubble
memories contain, in addition to iron, the element yttrium or a
metal from the rare earth series.
It was a fruitful collaboration between scientist and engineer,
between pure and applied science, that produced this promising breakthrough in data storage technology. In 1966, Bell Laboratories
scientist Andrew H. Bobeck and his coworkers were the first to realize
the data storage potential offered by the strange behavior of thin
slices of magnetic iron oxides under an applied magnetic field. The
first U.S. patent for a memory device using magnetic bubbles was
filed by Bobeck in the fall of 1966 and issued on August 5, 1969.
Bubbles Full of Memories
The three basic functional elements of a computer are the central
processing unit, the input/output unit, and memory. Most implementations
of semiconductor memory require a constant power
source to retain the stored data. If the power is turned off, all stored
data are lost. Memory with this characteristic is called “volatile.”
Disks and tapes, which are typically used for secondary memory,
are “nonvolatile.” Nonvolatile memory relies on the orientation of
magnetic domains, rather than on electrical currents, to sustain its
existence.
One can visualize how this works by analogy. Take a
group of permanent bar magnets, each labeled with N for north at
one end and S for south at the other. If an arrow is painted starting
from the north end with the tip at the south end on each magnet, an
orientation can then be assigned to a magnetic domain (here one
whole bar magnet). Data are “stored” with these bar magnets by arranging
them in rows, some pointing up, some pointing down. Different
arrangements translate to different data. In the binary world
of the computer, all information is represented by two states. A
stored data item (known as a “bit,” or binary digit) is either on or off,
up or down, true or false, depending on the physical representation.
The “on” state is commonly labeled with the number 1 and the “off”
state with the number 0. This is the principle behind magnetic disk
and tape data storage.
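By way of illustration (a minimal sketch, not part of the original article), the bar-magnet analogy can be expressed in a few lines of Python, mapping each magnet’s orientation to a binary digit:

```python
# Hypothetical sketch of the bar-magnet analogy: each magnet's
# orientation ("up" or "down") stands for one bit.
def magnets_to_bits(orientations):
    """Map 'up' -> 1 and 'down' -> 0 for a row of bar magnets."""
    return [1 if o == "up" else 0 for o in orientations]

# A row of eight magnets encodes one byte of data.
row = ["up", "down", "down", "up", "up", "up", "down", "up"]
bits = magnets_to_bits(row)
print(bits)  # the stored pattern as 1's and 0's
print(int("".join(str(b) for b in bits), 2))  # the same byte as a number
```

Rearranging the magnets changes the bit pattern, and hence the stored data, exactly as the paragraph above describes.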
Now imagine a thin slice of a certain type of magnetic material in
the shape of a 3-by-5-inch index card. Under a microscope, using a
special source of light, one can see through this thin slice in many regions
of the surface. Darker, snakelike regions can also be seen, representing
domains of an opposite orientation (polarity) to the transparent
regions. If a weak external magnetic field is then applied by placing a permanent magnet of the same shape as the card on the
underside of the slice, a strange thing happens to the dark serpentine
pattern—the long domains shrink and eventually contract into
“bubbles,” tiny magnetized spots. Viewed from the side of the slice,
the bubbles are cylindrically shaped domains having a polarity opposite
to that of the material on which they rest. The presence or absence
of a bubble indicates either a 0 or a 1 bit. Data bits are stored by
moving the bubbles in the thin film. As long as the field is applied
by the permanent magnet substrate, the data will be retained. The
bubble is thus a nonvolatile medium for data storage.Consequences
Magnetic bubble memory created quite a stir in 1969 with its
splashy public introduction. Most of the manufacturers of computer
chips immediately instituted bubble memory development projects.
Texas Instruments, Philips, Hitachi, Motorola, Fujitsu, and International
Business Machines (IBM) joined the race with Bell Laboratories
to mass-produce bubble memory chips. Texas Instruments
became the first major chip manufacturer to mass-produce bubble
memories in the mid-to-late 1970’s. By 1990, however, almost all the
research into magnetic bubble technology had shifted to Japan.
Hitachi and Fujitsu began to invest heavily in this area.
Mass production proved to be the most difficult task. Although
the materials it uses are different, the process of producing magnetic
bubble memory chips is similar to the process applied in producing
semiconductor-based chips such as those used for random access
memory (RAM). It is for this reason that major semiconductor manufacturers
and computer companies initially invested in this technology.
Lower fabrication yields and reliability issues plagued
early production runs, however, and, although these problems
have mostly been solved, gains in the performance characteristics of
competing conventional memories have limited the impact that
magnetic bubble technology has had on the marketplace. The materials
used for magnetic bubble memories are costlier and possess
more complicated structures than those used for semiconductor or
disk memory.
Speed and cost of materials are not the only bases for comparison. It is possible to perform some elementary logic with magnetic
bubbles. Conventional semiconductor-based memory offers storage
only. The capability of performing logic with magnetic bubbles
puts bubble technology far ahead of other magnetic technologies
with respect to functional versatility.
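As an illustration only (an idealized model, not a circuit described in the article), the elementary logic mentioned above can be sketched as follows. When two bubbles arrive together at a junction, their mutual magnetostatic repulsion deflects one onto a side track, so the two output tracks carry the AND and the OR of the inputs:

```python
# Idealized sketch of a bubble-interaction logic junction
# (hypothetical model). True = bubble present, False = absent.
def bubble_gate(a, b):
    """Two input tracks merge at a junction.

    If both bubbles arrive, repulsion pushes one onto the side
    track; a lone bubble continues along the main track.
    """
    side_track = a and b  # carries a bubble only if both inputs do (AND)
    main_track = a or b   # carries a bubble if either input does (OR)
    return main_track, side_track

# Truth table for the junction:
for a in (False, True):
    for b in (False, True):
        print(a, b, bubble_gate(a, b))
```

Because no bubble is created or destroyed, both results are produced at once, which is the functional versatility the paragraph above contrasts with storage-only semiconductor memory.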
A small niche market for bubble memory developed in the 1980’s.
Magnetic bubble memory can be found in intelligent terminals, desktop
computers, embedded systems, test equipment, and similar microcomputer-
based systems.