04 May 2009
Colossus computer
The invention: The Colossus computer, the first all-electronic
calculating device, was built to decipher German military codes
during World War II.
The people behind the invention:
Thomas H. Flowers, an electronics expert
Max H. A. Newman (1897-1984), a mathematician
Alan Mathison Turing (1912-1954), a mathematician
C. E. Wynn-Williams, a member of the Telecommunications
Research Establishment
An Undercover Operation
In 1939, during World War II (1939-1945), a team of scientists,
mathematicians, and engineers met at Bletchley Park, outside London,
to discuss the development of machines that would break the
secret code used in Nazi military communications. The Germans
were using a machine called “Enigma” to communicate in code between
headquarters and field units. Polish scientists, however, had
been able to examine a German Enigma and between 1928 and 1938
were able to break the codes by using electromechanical codebreaking
machines called “bombas.” In 1938, the Germans made the
Enigma more complicated, and the Poles were no longer able to
break the codes. In 1939, the Polish machines and codebreaking
knowledge passed to the British.
Alan Mathison Turing was one of the mathematicians gathered
at Bletchley Park to work on codebreaking machines. Turing was
one of the first people to conceive of the universality of digital computers.
He first mentioned the “Turing machine” in 1936 in an article
published in the Proceedings of the London Mathematical Society.
The Turing machine, a hypothetical device that can solve any
problem that involves mathematical computation, is not restricted
to only one task—hence the universality feature.
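Turing's universality can be made concrete with a minimal sketch (the rule table and notation here are invented for illustration, not Turing's 1936 formalism): a machine is nothing more than a table of (state, symbol) rules driving a read/write head along a tape, and changing the table changes the task without changing the machine.

```python
# Minimal Turing machine sketch: a rule table drives a read/write head
# over a tape. This example table flips every bit and halts at a blank;
# swapping in a different table makes the same machine do a different job.
def run_turing_machine(rules, tape, state="start"):
    tape = dict(enumerate(tape))  # sparse tape; missing cells read as blank
    pos = 0
    while state != "halt":
        symbol = tape.get(pos, " ")
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip()

# (state, symbol read) -> (symbol to write, head move, next state)
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "R", "halt"),
}

print(run_turing_machine(flip_rules, "10110"))  # -> 01001
```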
Turing suggested an improvement to the Bletchley codebreaking
machine, the “Bombe,” which had been modeled on the Polish bomba. This improvement increased the computing power of the
machine. The new codebreaking machine replaced the tedious
method of decoding by hand, which in addition to being slow,
was ineffective in dealing with complicated encryptions that were
changed daily.
Building a Better Mousetrap
The Bombe was very useful. In 1942, when the Germans started
using a more sophisticated cipher machine known as the “Fish,”
Max H. A. Newman, who was in charge of one subunit at Bletchley
Park, believed that an automated device could be designed to break
the codes produced by the Fish. Thomas H. Flowers, who was in
charge of a switching group at the Post Office Research Station at
Dollis Hill, had been approached to build a special-purpose electromechanical
device for Bletchley Park in 1941. The device was not
useful, and Flowers was assigned to other problems.
Flowers began to work closely with Turing, Newman, and C. E.
Wynn-Williams of the Telecommunications Research Establishment
(TRE) to develop a machine that could break the Fish codes. The
Dollis Hill team worked on the tape driving and reading problems,
and Wynn-Williams’s team at TRE worked on electronic counters
and the necessary circuitry. Their efforts produced the “Heath Robinson,”
which could read two thousand characters per second. The
Heath Robinson used vacuum tubes, an uncommon component in
the early 1940’s. The vacuum tubes performed more reliably and
rapidly than the relays that had been used for counters. Heath Robinson
and the companion machines proved that high-speed electronic
devices could successfully do cryptanalytic work (solve decoding
problems).
Entirely automatic in operation once started, the Heath Robinson
was put together at Bletchley Park in the spring of 1943. The Heath
Robinson became obsolete for codebreaking shortly after it was put
into use, so work began on a bigger, faster, and more powerful machine:
the Colossus.
Flowers led the team that designed and built the Colossus in
eleven months at Dollis Hill. The first Colossus (Mark I) was a bigger,
faster version of the Heath Robinson and read about five thousand characters per second. Colossus had approximately fifteen
hundred vacuum tubes, which was the largest number that had
ever been used at that time. Although Turing and Wynn-Williams
were not directly involved with the design of the Colossus, their
previous work on the Heath Robinson was crucial to the project,
since the first Colossus was based on the Heath Robinson.
Colossus became operational at Bletchley Park in December,
1943, and Flowers made arrangements for the manufacture of its
components in case other machines were required. The request for
additional machines came in March, 1944. The second Colossus, the
Mark II, was extensively redesigned and was able to read twenty-five
thousand characters per second because it was capable of performing
parallel operations (carrying out several different operations
at once, instead of one at a time); it also had a short-term
memory. The Mark II went into operation on June 1, 1944. More
machines were made, each with further modifications, until there
were ten. The Colossus machines were special-purpose, program-controlled
electronic digital computers, the only known electronic
programmable computers in existence in 1944. The use of electronics
allowed for a tremendous increase in the internal speed of the
machine.
Impact
The Colossus machines gave Britain the best codebreaking machines
of World War II and provided information that was crucial
for the Allied victory. The information decoded by Colossus, the actual
messages, and their influence on military decisions would remain
classified for decades after the war.
The later work of several of the people involved with the Bletchley
Park projects was important in British computer development
after the war. Newman’s and Turing’s postwar careers were closely
tied to emerging computer advances. Newman, who was interested
in the impact of computers on mathematics, received a grant from
the Royal Society in 1946 to establish a calculating machine laboratory
at Manchester University. He was also involved with postwar
computer growth in Britain.
Several other members of the Bletchley Park team, including Turing, joined Newman at Manchester in 1948. Before going to Manchester
University, however, Turing joined Britain’s National Physical
Laboratory (NPL). At NPL, Turing worked on an advanced
computer known as the Pilot Automatic Computing Engine (Pilot
ACE). While at NPL, Turing proposed the concept of a stored program,
which was a controversial but extremely important idea in
computing. A “stored” program is one that remains in residence inside
the computer, making it possible for a particular program and
data to be fed through an input device simultaneously. (The Heath
Robinson and Colossus machines were limited by utilizing separate
input tapes, one for the program and one for the data to be analyzed.)
Turing was among the first to explain the stored-program
concept in print. He was also among the first to imagine how subroutines
could be included in a program. (A subroutine allows separate
tasks within a large program to be done in distinct modules; in
effect, it is a detour within a program. After the completion of the
subroutine, the main program takes control again.)
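The two ideas described above, a program stored in the same memory as its data and subroutines that return control to the main program, can be sketched in miniature (the instruction set and memory layout below are invented for illustration, not any historical machine):

```python
# Sketch of the stored-program idea: instructions and data share one memory,
# and a subroutine is just a jump that remembers where to come back to.
# The instruction set and memory layout are invented for illustration.
def run(memory):
    pc, acc, stack = 0, 0, []   # program counter, accumulator, return stack
    while True:
        op, arg = memory[pc]
        if op == "LOAD":    acc = memory[arg]       # read a data cell
        elif op == "ADD":   acc += memory[arg]
        elif op == "STORE": memory[arg] = acc
        elif op == "CALL":  stack.append(pc + 1); pc = arg; continue
        elif op == "RET":   pc = stack.pop(); continue
        elif op == "HALT":  return acc
        pc += 1

program = [
    ("CALL", 3),     # 0: invoke the "add two data cells" subroutine
    ("HALT", None),  # 1: main program resumes here afterwards
    None,            # 2: unused
    ("LOAD", 7),     # 3: subroutine body starts here
    ("ADD", 8),      # 4
    ("RET", None),   # 5: detour ends; control returns to the caller
    None,            # 6: unused
    20,              # 7: data cell
    22,              # 8: data cell
]
print(run(program))  # -> 42
```

Because instructions and data live in one memory, a single input step can load both, unlike the separate program and data tapes of the Heath Robinson and Colossus.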
22 April 2009
Color television
The invention:
System for broadcasting full-color images over the
airwaves.
The people behind the invention:
Peter Carl Goldmark (1906-1977), the head of the CBS research
and development laboratory
William S. Paley (1901-1990), the businessman who took over
CBS
David Sarnoff (1891-1971), the founder of RCA
11 April 2009
Color film
The invention: A photographic medium used to take full-color pictures.
The people behind the invention:
Rudolf Fischer (1881-1957), a German chemist
H. Siegrist (1885-1959), a German chemist and Fischer’s
collaborator
Benno Homolka (1877-1949), a German chemist
The Process Begins
Around the turn of the twentieth century, Arthur-Louis Ducos du
Hauron, a French chemist and physicist, proposed a tripack (three-layer)
process of film development in which three color negatives
would be taken by means of superimposed films. This was a subtractive
process. (In the “additive method” of making color pictures,
the three colors are added in projection—that is, the colors are formed
by the mixture of colored light of the three primary hues. In the
“subtractive method,” the colors are produced by the superposition
of prints.) In Ducos du Hauron’s process, the blue-light negative
would be taken on the top film of the pack; a yellow filter below it
would transmit the yellow light, which would reach a green-sensitive
film and then fall upon the bottom of the pack, which would be sensitive
to red light. Tripacks of this type were unsatisfactory, however,
because the light became diffused in passing through the emulsion
layers, so the green and red negatives were not sharp.
To obtain the real advantage of a tripack, the three layers must
be coated one over the other so that the distance between the blue-sensitive
and red-sensitive layers is a small fraction of a thousandth
of an inch. Tripacks of this type were suggested by the early pioneers
of color photography, who had the idea that the packs would
be separated into three layers for development and printing. The
manipulation of such systems proved to be very difficult in practice.
It was also suggested, however, that it might be possible to develop
such tripacks as a unit and then, by chemical treatment, convert the
silver images into dye images.
Fischer’s Theory
One of the earliest subtractive tripack methods that seemed to
hold great promise was that suggested by Rudolf Fischer in 1912. He
proposed a tripack that would be made by coating three emulsions
on top of one another; the lowest one would be red-sensitive, the
middle one would be green-sensitive, and the top one would be blue-sensitive.
Chemical substances called “couplers,” which would produce
dyes in the development process, would be incorporated into
the layers. In this method, the molecules of the developing agent, after
becoming oxidized by developing the silver image, would react
with the unoxidized form (the coupler) to produce the dye image.
The two types of developing agents described by Fischer are
p-aminophenol and p-phenylenediamine (or their derivatives).
The five types of dye that Fischer discovered are formed when silver
images are developed by these two developing agents in the presence
of suitable couplers. The five classes of dye he used (indophenols,
indoanilines, indamines, indothiophenols, and azomethines)
were already known when Fischer did his work, but it was he who
discovered that the photographic latent image could be used to promote
their formation from “coupler” and “developing agent.”
The indoaniline and azomethine types have been found to possess
the necessary properties, but the other three suffer from serious defects.
Because only p-phenylenediamine and its derivatives can be
used to form the indoaniline and azomethine dyes, it has become
the most widely used color developing agent.
Impact
In the early 1920’s, Leopold Mannes and Leopold Godowsky
made a great advance beyond the Fischer process. Working on a
new process of color photography, they adopted coupler development,
but instead of putting couplers into the emulsion as Fischer
had, they introduced them during processing. Finally, in 1935, the
film was placed on the market under the name “Kodachrome,” a
name that had been used for an early two-color process.
The first use of the new Kodachrome process in 1935 was for 16-
millimeter film. Color motion pictures could be made by the Kodachrome process as easily as black-and-white pictures, because the
complex work involved (the color development of the film) was
done under precise technical control. The definition (quality of the
image) given by the process was soon sufficient to make it practical
for 8-millimeter pictures, and in 1936, Kodachrome film was introduced
in a 35-millimeter size for use in popular miniature cameras.
Soon thereafter, color processes were developed on a larger scale
and new color materials were rapidly introduced. In 1940, the Kodak
Research Laboratories worked out a modification of the Fischer
process in which the couplers were put into the emulsion layers.
These couplers are not dissolved in the gelatin layer itself, as the
Fischer couplers are, but are carried in small particles of an oily material
that dissolves the couplers, protects them from the gelatin,
and protects the silver bromide from any interaction with the couplers.
When development takes place, the oxidation product of the
developing agent penetrates into the organic particles and reacts
with the couplers so that the dyes are formed in small particles that
are dispersed throughout the layers. In one form of this material,
Ektachrome (originally intended for use in aerial photography), the
film is reversed to produce a color positive. It is first developed with
a black-and-white developer, then reexposed and developed with a
color developer that recombines with the couplers in each layer to
produce the appropriate dyes, all three of which are produced simultaneously
in one development.
In summary, although Fischer did not succeed in putting his theory
into practice, his work still forms the basis of most modern color
photographic systems. Not only did he demonstrate the general
principle of dye-coupling development, but the art is still mainly
confined to one of the two types of developing agent, and two of the
five types of dye, described by him.
COBOL computer language
The invention: The first user-friendly computer programming language,
COBOL was designed for business data processing.
The people behind the invention:
Grace Murray Hopper (1906-1992), an American
mathematician
Howard Hathaway Aiken (1900-1973), an American
mathematician
Plain Speaking
Grace Murray Hopper, a mathematician, was a faculty member
at Vassar College when World War II (1939-1945) began. She enlisted
in the Navy and in 1943 was assigned to the Bureau of Ordnance
Computation Project, where she worked on ballistics problems.
In 1944, the Navy began using one of the first electronic
computers, the Automatic Sequence Controlled Calculator (ASCC),
designed by an International Business Machines (IBM) Corporation
team of engineers headed by Howard Hathaway Aiken, to solve
ballistics problems. Hopper became the third programmer of the
ASCC.
Hopper’s interest in computer programming continued after
the war ended. By the early 1950’s, Hopper’s work with programming
languages had led to her development of FLOW-MATIC, the
first English-language data processing compiler. Hopper’s work
on FLOW-MATIC paved the way for her later work with COBOL
(Common Business Oriented Language).
Until Hopper developed FLOW-MATIC, digital computer programming
was all machine-specific and was written in machine
code. A program designed for one computer could not be used on
another. Every program was both machine-specific and problemspecific
in that the programmer would be told what problem the
machine was going to be asked and then would write a completely
new program for that specific problem in the machine code.
Machine code was based on the programmer’s knowledge of the
physical characteristics of the computer as well as the requirements of
the problem to be solved; that is, the programmer had to know what
was happening within the machine as it worked through a series of calculations, which relays tripped when and in what order, and what
mathematical operations were necessary to solve the problem. Programming
was therefore a highly specialized skill requiring a unique
combination of linguistic, reasoning, engineering, and mathematical
abilities that not even all the mathematicians and electrical engineers
who designed and built the early computers possessed.
While every computer still operates in response to the programming,
or instructions, built into it, which are formatted in machine
code, modern computers can accept programs written in nonmachine
code—that is, in various automatic programming languages. They
are able to accept nonmachine code programs because specialized
programs now exist to translate those programs into the appropriate
machine code. These translating programs are known as “compilers,”
or “assemblers,” and FLOW-MATIC was the first such program.
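The role of such a translating program can be sketched as follows (a hypothetical illustration: the command names and "machine code" below are invented, not FLOW-MATIC's or any real machine's instruction set). Each source command expands into a fixed sequence of machine-level operations, which is exactly the one-to-many expansion that made symbolic programming faster than hand-written machine code.

```python
# Toy illustration of compiler-style translation: each source-language
# command maps to a sequence of machine-level operations. The target
# "machine code" mnemonics here are invented for illustration.
EXPANSIONS = {
    "ADD": ["FETCH_A", "FETCH_B", "ALU_ADD", "STORE_RESULT"],
    "SUB": ["FETCH_A", "FETCH_B", "ALU_SUB", "STORE_RESULT"],
}

def compile_program(source_lines):
    machine_code = []
    for line in source_lines:
        # one readable command becomes several machine operations
        machine_code.extend(EXPANSIONS[line.strip().upper()])
    return machine_code

print(compile_program(["add", "sub"]))
```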
Hopper developed FLOW-MATIC after realizing that it would
be necessary to eliminate unnecessary steps in programming to
make computers more efficient. FLOW-MATIC was based, in part,
on Hopper’s recognition that certain elements, or commands, were
common to many different programming applications. Hopper theorized
that it would not be necessary to write a lengthy series of instructions
in machine code to instruct a computer to begin a series of
operations; instead, she believed that it would be possible to develop
commands in an assembly language in such a way that a programmer
could write one command, such as the word add, that
would translate into a sequence of several commands in machine
code. Hopper’s successful development of a compiler to translate
programming languages into machine code thus meant that programming
became faster and easier. From assembly languages such
as FLOW-MATIC, it was a logical progression to the development of
high-level computer languages, such as FORTRAN (Formula Translation)
and COBOL.
The Language of Business
Between 1955 (when FLOW-MATIC was introduced) and 1959, a
number of attempts at developing a specific business-oriented language
were made. IBM and Remington Rand believed that the only
way to market computers to the business community was through the development of a language that business people would be
comfortable using. Remington Rand officials were especially committed
to providing a language that resembled English. None of
the attempts to develop a business-oriented language succeeded,
however, and by 1959 Hopper and other members of the U.S. Department
of Defense had persuaded representatives of various companies
of the need to cooperate.
On May 28 and 29, 1959, a conference sponsored by the Department
of Defense was held at the Pentagon to discuss the problem of
establishing a common language for the adaptation of electronic
computers for data processing. As a result, the first distribution of
COBOL was accomplished on December 17, 1959. Although many
people were involved in the development of COBOL, Hopper played
a particularly important role. She not only found solutions to technical
problems but also succeeded in selling the concept of a common
language from an administrative and managerial point of view. Hopper
recognized that while the companies involved in the commercial
development of computers were in competition with one another, the
use of a common, business-oriented language would contribute to
the growth of the computer industry as a whole, as well as simplify
the training of computer programmers and operators.
Consequences
COBOL was the first compiler developed for business data processing
operations. Its development simplified the training required
for computer users in business applications and demonstrated that
computers could be practical tools in government and industry as
well as in science. Prior to the development of COBOL, electronic
computers had been characterized as expensive, oversized adding
machines that were adequate for performing time-consuming mathematics
but lacked the flexibility that business people required.
In addition, the development of COBOL freed programmers not
only from the need to know machine code but also from the need to
understand the physical functioning of the computers they were using.
Programming languages could be written that were both machine-
independent and almost universally convertible from one
computer to another.
Finally, because Hopper and the other committee members worked
under the auspices of the Department of Defense, the software
was not copyrighted, and in a short period of time COBOL became
widely available to anyone who wanted to use it. It diffused rapidly
throughout the industry and contributed to the widespread adaptation
of computers for use in countless settings.
04 April 2009
Cloud seeding
The invention: Technique for inducing rainfall by distributing dry
ice or silver iodide into reluctant rainclouds.
The people behind the invention:
Vincent Joseph Schaefer (1906-1993), an American chemist and
meteorologist
Irving Langmuir (1881-1957), an American physicist and
chemist who won the 1932 Nobel Prize in Chemistry
Bernard Vonnegut (1914-1997), an American physical chemist
and meteorologist
Praying for Rain
Beginning in 1943, an intense interest in the study of clouds developed
into the practice of weather “modification.” Working for
the General Electric Research Laboratory, Nobel laureate Irving
Langmuir and his assistant researcher and technician, Vincent Joseph
Schaefer, began an intensive study of precipitation and its
causes.
Past research and study had indicated two possible ways that
clouds produce rain. The first possibility is called “coalescing,” a
process by which tiny droplets of water vapor in a cloud merge after
bumping into one another and become heavier and fatter until they
drop to earth. The second possibility is the “Bergeron process” of
droplet growth, named after the Swedish meteorologist Tor Bergeron.
Bergeron’s process relates to supercooled clouds, or clouds
that are at or below freezing temperatures and yet still contain both
ice crystals and liquid water droplets. The size of the water droplets
allows the droplets to remain liquid despite freezing temperatures;
while small droplets can remain liquid only down to -4 degrees Celsius,
larger droplets may not freeze until reaching -15 degrees
Celsius. Precipitation occurs when the ice crystals become heavy
enough to fall. If the temperature at some point below the cloud is
warm enough, it will melt the ice crystals before they reach the
earth, producing rain. If the temperature remains at the freezing point, the ice crystals retain their form and fall as snow.
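The precipitation logic described above can be summarized in a short sketch (a simplification for illustration, not a meteorological model): heavy ice crystals fall, and the temperature below the cloud decides whether they arrive as rain or snow.

```python
# Sketch of the Bergeron-process outcome described above: ice crystals
# that grow heavy enough fall; whether they reach the ground as rain or
# snow depends on the temperature below the cloud. Simplified thresholds.
def precipitation(crystal_heavy_enough, temp_below_cloud_c):
    if not crystal_heavy_enough:
        return "none"   # crystals stay aloft in the cloud
    if temp_below_cloud_c > 0:
        return "rain"   # crystals melt on the way down
    return "snow"       # crystals keep their form to the ground

print(precipitation(True, 10))   # -> rain
print(precipitation(True, -5))   # -> snow
```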
Schaefer used a deep-freezing unit in order to observe water
droplets in pure cloud form. In order to observe the droplets better,
Schaefer lined the chest with black velvet and concentrated a beam
of light inside. The first agent he introduced inside the supercooled
freezer was his own breath. When that failed to form the desired ice
crystals, he proceeded to try other agents. His hope was to form ice
crystals that would then cause the moisture in the surrounding air
to condense into more ice crystals, which would produce a miniature
snowfall.
He eventually achieved success when he tossed a handful of dry
ice inside and was rewarded with the long-awaited snow. The
freezer was set at the freezing point of water, 0 degrees Celsius, but
not all the particles were ice crystals, so when the dry ice was introduced
all the stray water droplets froze instantly, producing ice
crystals, or snowflakes.
Planting the First Seeds
On November 13, 1946, Schaefer took to the air over Mount
Greylock with several pounds of dry ice in order to repeat the experiment
in nature. After he had finished sprinkling, or seeding, a
supercooled cloud, he instructed the pilot to fly underneath the
cloud he had just seeded. Schaefer was greeted by the sight of snow.
By the time it reached the ground, it had melted into the first-ever
human-made rainfall.
Independently of Schaefer and Langmuir, another General Electric
scientist, Bernard Vonnegut, was also seeking a way to cause
rain. He found that silver iodide crystals, which have the same size
and shape as ice crystals, could “fool” water droplets into condensing
on them. When a certain chemical mixture containing silver iodide
is heated on a special burner called a “generator,” silver iodide
crystals appear in the smoke of the mixture. Vonnegut’s discovery
allowed seeding to occur in a way very different from seeding with
dry ice, but with the same result. Using Vonnegut’s process, the
seeding is done from the ground. The generators are placed outside
and the chemicals are mixed. As the smoke wafts upward, it carries
the newly formed silver iodide crystals with it into the clouds.
The results of the scientific experiments by Langmuir, Vonnegut,
and Schaefer were alternately hailed and rejected as legitimate.
Critics argued that the process of seeding is too complex and
would require more than just the addition of dry ice or silver
iodide to produce rain. One of the major problems surrounding
the question of weather modification by cloud seeding is
the scarcity of knowledge about the earth’s atmosphere. A journey
begun about fifty years ago is still a long way from being completed.
Impact
Although the actual statistical and other proofs needed to support
cloud seeding are lacking, the discovery in 1946 by the General
Electric employees set off a wave of interest and demand for information
that far surpassed the interest generated by the discovery of
nuclear fission shortly before. The possibility of ending drought
and, in the process, hunger excited many people. The discovery also
prompted both legitimate and false “rainmakers” who used the information
gathered by Schaefer, Langmuir, and Vonnegut to set up
cloud-seeding businesses. Weather modification, in its current stage
of development, cannot be used to end worldwide drought. It does,
however, have beneficial results in some cases on the crops of
smaller farms that have been affected by drought.
In order to understand the advances made in weather modification,
new instruments are needed to record accurately the results of
further experimentation. The storm of interest—both favorable and
unfavorable—generated by the discoveries of Schaefer, Langmuir,
and Vonnegut has had and will continue to have far-reaching effects
on many aspects of society.
25 March 2009
Cloning
The invention: Experimental technique for creating exact duplicates
of living organisms by recreating their DNA.
The people behind the invention:
Ian Wilmut, an embryologist with the Roslin Institute
Keith H. S. Campbell, an experiment supervisor with the Roslin
Institute
J. McWhir, a researcher with the Roslin Institute
W. A. Ritchie, a researcher with the Roslin Institute
Making Copies
On February 22, 1997, officials of the Roslin Institute, a biological
research institution near Edinburgh, Scotland, held a press conference
to announce startling news: They had succeeded in creating
a clone—a biologically identical copy—from cells taken from
an adult sheep. Although cloning had been performed previously
with simpler organisms, the Roslin Institute experiment marked
the first time that a large, complex mammal had been successfully
cloned.
Cloning, or the production of genetically identical individuals,
has long been a staple of science fiction and other popular literature.
Clones do exist naturally, as in the example of identical twins. Scientists
have long understood the process by which identical twins
are created, and agricultural researchers have often dreamed of a
method by which cheap identical copies of superior livestock could
be created.
The discovery of the double helix structure of deoxyribonucleic
acid (DNA), or the genetic code, by James Watson and Francis Crick
in the 1950’s led to extensive research into cloning and genetic engineering.
Using the discoveries of Watson and Crick, scientists were
soon able to develop techniques to clone laboratory mice; however,
the cloning of complex, valuable animals such as livestock proved
to be hard going.
Early versions of livestock cloning were technical attempts at duplicating the natural process of fertilized egg splitting that leads to the
birth of identical twins. Artificially inseminated eggs were removed,
split, and then reinserted into surrogate mothers. This method proved
to be overly costly for commercial purposes, a situation aggravated by
a low success rate.
Nuclear Transfer
Researchers at the Roslin Institute found these earlier attempts to
be fundamentally flawed. Even if the success rate could be improved,
the number of clones created (of sheep, in this case) would
still be limited. The Scots, led by embryologist Ian Wilmut and experiment
supervisor Keith Campbell, decided to take an entirely
different approach. The result was the first live birth of a mammal
produced through a process known as “nuclear transfer.”
Nuclear transfer involves the replacement of the nucleus of an
immature egg with a nucleus taken from another cell. Previous attempts
at nuclear transfer had cells from a single embryo divided
up and implanted into an egg. Because a sheep embryo has only
about forty usable cells, this method also proved limiting.
The Roslin team therefore decided to grow their own cells in a
laboratory culture. They took more mature embryonic cells than
those previously used, and they experimented with the use of a nutrient
mixture. One of their breakthroughs occurred when they discovered
that these “cell lines” grew much more quickly when certain
nutrients were absent.
Using this technique, the Scots were able to produce a theoretically
unlimited number of genetically identical cell lines. The next
step was to transfer nuclei from these cell lines into unfertilized
sheep eggs.
First, 277 nuclei with a full set of chromosomes were transferred
to the unfertilized eggs. An electric shock was then used to cause the
eggs to begin development, the shock performing the duty of fertilization.
Of these eggs, twenty-nine developed enough to be inserted
into surrogate mothers.
All the embryos died before birth except one: a ewe the scientists
named “Dolly.” Her birth on July 5, 1996, was witnessed by only a
veterinarian and a few researchers. Not until the clone had survived
the critical earliest stages of life was the success of the experiment
disclosed; Dolly was more than seven months old by the time her
birth was announced to a startled world.
Impact
The news that the cloning of sophisticated organisms had left the
realm of science fiction and become a matter of accomplished scientific
fact set off an immediate uproar. Ethicists and media commentators
quickly began to debate the moral consequences of the use—
and potential misuse—of the technology. Politicians in numerous
countries responded to the news by calling for legal restrictions on
cloning research. Scientists, meanwhile, speculated about the possible
benefits and practical limitations of the process.
The issue that stirred the imagination of the broader public and
sparked the most spirited debate was the possibility that similar experiments
might soon be performed using human embryos. Although
most commentators seemed to agree that such efforts would
be profoundly immoral, many experts observed that they would be
virtually impossible to prevent. “Could someone do this tomorrow
morning on a human embryo?” Arthur L. Caplan, the director of the
University of Pennsylvania’s bioethics center, asked reporters. “Yes.
It would not even take too much science. The embryos are out
there.”
Such observations conjured visions of a future that seemed marvelous
to some, nightmarish to others. Optimists suggested that the best and brightest of humanity could be forever perpetuated, creating
an endless supply of Albert Einsteins and Wolfgang Amadeus
Mozarts. Pessimists warned of a world overrun by clones of self-serving
narcissists and petty despots, or of the creation of a secondary
class of humans to serve as organ donors for their progenitors.
The Roslin Institute’s researchers steadfastly proclaimed their
own opposition to human experimentation. Moreover, most scientists
were quick to point out that such scenarios were far from realization,
noting the extremely high failure rate involved in the creation
of even a single sheep. In addition, most experts emphasized
more practical possible uses of the technology: improving agricultural
stock by cloning productive and disease-resistant animals, for
example, or regenerating endangered or even extinct species. Even
such apparently benign schemes had their detractors, however, as
other observers remarked on the potential dangers of thus narrowing
a species’ genetic pool.
Even prior to the Roslin Institute’s announcement, most European
nations had adopted a bioethics code that flatly prohibited genetic
experiments on human subjects. Ten days after the announcement,
U.S. president Bill Clinton issued an executive order that
banned the use of federal money for human cloning research, and
he called on researchers in the private sector to refrain from such experiments
voluntarily. Nevertheless, few observers doubted that
Dolly’s birth marked only the beginning of an intriguing—and possibly
frightening—new chapter in the history of science.
20 March 2009
Cell phone
The invention:
Mobile telephone system controlled by computers
to use a region’s radio frequencies, or channels, repeatedly,
thereby accommodating large numbers of users.
The people behind the invention:
William Oliver Baker (1915- ), the president of Bell
Laboratories
Richard H. Frenkiel, the head of the mobile systems
engineering department at Bell
10 March 2009
CAT scanner
The invention:
A technique that collects X-ray data from solid,
opaque masses such as human bodies and uses a computer to
construct a three-dimensional image.
The people behind the invention:
Godfrey Newbold Hounsfield (1919- ), an English
electronics engineer who shared the 1979 Nobel Prize in
Physiology or Medicine
Allan M. Cormack (1924-1998), a South African-born American
physicist who shared the 1979 Nobel Prize in Physiology or
Medicine
James Ambrose, an English radiologist
Cassette recording
The invention: Self-contained system making it possible to record
and repeatedly play back sound without having to thread tape
through a machine.
The person behind the invention:
Fritz Pfleumer, a German engineer whose work on audiotapes
paved the way for audiocassette production
Smaller Is Better
The introduction of magnetic audio recording tape in 1929 was
met with great enthusiasm, particularly in the entertainment industry,
and specifically among radio broadcasters. Although somewhat
practical methods for recording and storing sound for later playback
had been around for some time, audiotape was much easier to
use, store, and edit, and much less expensive to produce.
It was Fritz Pfleumer, a German engineer, who in 1929 filed the
first audiotape patent. His detailed specifications indicated that
tape could be made by bonding a thin coating of oxide to strips of either
paper or film. Pfleumer also suggested that audiotape could be
attached to filmstrips to provide higher-quality sound than was
available with the film sound technologies in use at that time. In
1935, the German electronics firm AEG produced a reliable prototype
of a record-playback machine based on Pfleumer’s idea. By
1947, the American company 3M had refined the concept to the
point where it was able to produce a high-quality tape using a plastic-
based backing and red oxide. The tape recorded and reproduced
sound with a high degree of clarity and dynamic range and would
soon become the standard in the industry.
Still, the tape was sold and used in a somewhat inconvenient
open-reel format. The user had to thread it through a machine and
onto a take-up reel. This process was somewhat cumbersome and
complicated for the layperson. For many years, sound-recording
technology remained a tool mostly for professionals.
In 1963, the first audiocassette was introduced by the Netherlands-based Philips NV. This device could be inserted into
a machine without threading. Rewind and fast-forward were faster,
and it made no difference where the tape was stopped prior to the
ejection of the cassette. By contrast, open-reel audiotape required
that the tape be wound fully onto one or the other of the two reels
before it could be taken off the machine.
Technical advances allowed the cassette tape to be much narrower
than the tape used in open reels and also allowed the tape
speed to be reduced without sacrificing sound quality. Thus, the
cassette was easier to carry around, and more sound could be recorded
on a cassette tape. In addition, the enclosed cassette decreased
wear and tear on the tape and protected it from contamination.
Creating a Market
One of the most popular uses for audiocassettes was to record
music from radios and other audio sources for later playback. During
the 1970’s, many radio stations developed “all music” formats
in which entire albums were often played without interruption.
That gave listeners an opportunity to record the music for later
playback. At first, the music recording industry complained about
this practice, charging that unauthorized recording of music from
the radio was a violation of copyright laws. Eventually, the issue
died down as the same companies began to recognize this new, untapped
market for recorded music on cassette.
Audiocassettes, all based on the original Philips design, were being
manufactured by more than sixty companies within only a few
years of their introduction. In addition, spin-offs of that design were
being used in many specialized applications, including dictation,
storage of computer information, and surveillance. The emergence
of videotape resulted in a number of formats for recording and
playing back video based on the same principle. Although each is
characterized by different widths of tape, each uses the same technique
for tape storage and transport.
The cassette has remained a popular means of storing and retrieving
information on magnetic tape for more than a quarter of a
century. During the early 1990’s, digital technologies such as audio
CDs (compact discs) and the more advanced CD-ROM (compact discs that reproduce sound, text, and images via computer) were beginning
to store information in revolutionary new ways. With the
development of this increasingly sophisticated technology, need for
the audiocassette, once the most versatile, reliable, portable, and
economical means of recording, storing, and playing back sound,
became more limited.
Consequences
The cassette represented a new level of convenience for the audiophile,
resulting in a significant increase in the use of recording
technology in all walks of life. Even small children could operate
cassette recorders and players, which led to their use in schools for a
variety of instructional tasks and in the home for entertainment. The
recording industry realized that audiotape cassettes would allow
consumers to listen to recorded music in places where record players
were impractical: in automobiles, at the beach, even while camping.
The industry also saw the need for widespread availability of
music and information on cassette tape. It soon began distributing
albums on audiocassette in addition to the long-play vinyl discs,
and recording sales increased substantially. This new technology
put recorded music into automobiles for the first time, again resulting
in a surge in sales for recorded music. Eventually, information,
including language instruction and books-on-tape, became popular
commuter fare.
With the invention of the microchip, audiotape players became
available in smaller and smaller sizes, making them truly portable.
Audiocassettes underwent another explosion in popularity during
the early 1980’s, when the Sony Corporation introduced the
Walkman, an extremely compact, almost weightless cassette player
that could be attached to clothing and used with lightweight earphones
virtually anywhere. At the same time, cassettes were suddenly
being used with microcomputers for backing up magnetic
data files.
Home video soon exploded onto the scene, bringing with it new
applications for cassettes. As had happened with audiotape, video
camera-recorder units, called “camcorders,” were miniaturized to
the point where 8-millimeter videocassettes capable of recording up to 90 minutes of live action and sound were widely available. These
cassettes closely resembled the audiocassette first introduced in
1963.
Carbon dating
The invention: A technique that measures the radioactive decay of
carbon 14 in organic substances to determine the ages of artifacts
tens of thousands of years old.
The people behind the invention:
Willard Frank Libby (1908-1980), an American chemist who won
the 1960 Nobel Prize in Chemistry
Charles Wesley Ferguson (1922-1986), a scientist who
demonstrated that carbon 14 dates before 1500 b.c.e. needed to
be corrected
One in a Trillion
Carbon dioxide in the earth’s atmosphere contains a mixture of
three carbon isotopes (isotopes are atoms of the same element that
contain different numbers of neutrons), which occur in the following
percentages: about 99 percent carbon 12, about 1 percent carbon
13, and approximately one atom in a trillion of radioactive carbon
14. Plants absorb carbon dioxide from the atmosphere during photosynthesis,
and then animals eat the plants, so all living plants and
animals contain a small amount of radioactive carbon.
When a plant or animal dies, its radioactivity slowly decreases as
the radioactive carbon 14 decays. The time it takes for half of any radioactive
substance to decay is known as its “half-life.” The half-life
for carbon 14 is known to be about fifty-seven hundred years. The
carbon 14 activity will drop to one-half after one half-life, one-fourth
after two half-lives, one-eighth after three half-lives, and so
forth. After ten or twenty half-lives, the activity becomes too low to
be measurable. Coal and oil, which were formed from organic matter
millions of years ago, have long since lost any carbon 14 activity.
Wood samples from an Egyptian tomb or charcoal from a prehistoric
fireplace a few thousand years ago, however, can be dated with
good reliability from the leftover radioactivity.
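The half-life arithmetic described above can be sketched in a few lines of Python. This is a minimal illustration, assuming a half-life of 5,730 years (the "about fifty-seven hundred years" in the text); the function name is invented for the example:

```python
import math

HALF_LIFE_C14 = 5730.0  # years; an assumed round value for illustration

def radiocarbon_age(remaining_fraction: float) -> float:
    """Age in years from the fraction of carbon 14 still present.

    After n half-lives the activity is (1/2)**n, so
    n = log2(1 / remaining_fraction) and age = n * half_life.
    """
    return HALF_LIFE_C14 * math.log2(1.0 / remaining_fraction)

# One half-life leaves 1/2, two leave 1/4, three leave 1/8:
print(round(radiocarbon_age(0.5)))    # 5730
print(round(radiocarbon_age(0.25)))   # 11460
print(round(radiocarbon_age(0.125)))  # 17190
```

After ten or twenty half-lives the remaining fraction is so small (about one part in a thousand or one in a million of the original) that, as the text notes, the activity becomes too low to measure.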
In the 1940’s, the properties of radioactive elements were still
being discovered and were just beginning to be used to solve problems.
Scientists still did not know the half-life of carbon 14, and archaeologists still depended mainly on historical evidence to determine
the ages of ancient objects.
In early 1947, Willard Frank Libby started a crucial experiment in
testing for radioactive carbon. He decided to test samples of methane
gas from two different sources. One group of samples came
from the sewage disposal plant at Baltimore, Maryland, which was
rich in fresh organic matter. The other sample of methane came from
an oil refinery, which should have contained only ancient carbon
from fossils whose radioactivity should have completely decayed.
The experimental results confirmed Libby’s suspicions: The methane
from fresh sewage was radioactive, but the methane from oil
was not. Evidently, radioactive carbon was present in fresh organic
material, but it eventually decays away.
Tree-Ring Dating
In order to establish the validity of radiocarbon dating, Libby analyzed
known samples of varying ages. These included tree-ring
samples from the years 575 and 1075 and one redwood from 979
b.c.e., as well as artifacts from Egyptian tombs going back to about
3000 b.c.e. In 1949, he published an article in the journal Science that
contained a graph comparing the historical ages and the measured
radiocarbon ages of eleven objects. The results were accurate within
10 percent, which meant that the general method was sound.
The first archaeological object analyzed by carbon dating, obtained
from the Metropolitan Museum of Art in New York, was a
piece of cypress wood from the tomb of King Djoser of Egypt. Based
on historical evidence, the age of this piece of wood was about forty-six
hundred years. A small sample of carbon obtained from this
wood was deposited on the inside of Libby’s radiation counter, giving
a count rate that was about 40 percent lower than that of modern
organic carbon. The resulting age of the wood calculated from its residual
radioactivity was about thirty-eight hundred years, a difference
of eight hundred years. Considering that this was the first object
to be analyzed, even such a rough agreement with the historic
age was considered to be encouraging.
The validity of radiocarbon dating depends on an important assumption—
namely, that the abundance of carbon 14 in nature has been constant for many thousands of years. If carbon 14 was less
abundant at some point in history, organic samples from that era
would have started with less radioactivity. When analyzed today,
their reduced activity would make them appear to be older than
they really are.
Charles Wesley Ferguson from the Tree-Ring Research Laboratory
at the University of Arizona tackled this problem. He measured
the age of bristlecone pine trees both by counting the rings and by
using carbon 14 methods. He found that carbon 14 dates before
1500 b.c.e. needed to be corrected. The results show that raw radiocarbon
dates are younger than tree-ring counting dates by as much as several
hundred years for the oldest samples. He knew that the number
of tree rings had given him the correct age of the pines, because trees
accumulate one ring of growth for every year of life. Apparently, the
carbon 14 content in the atmosphere has not been constant. Fortunately,
tree-ring counting gives reliable dates that can be used to
correct radiocarbon measurements back to about 6000 b.c.e.
Impact
Some interesting samples were dated by Libby’s group. The
Dead Sea Scrolls had been found in a cave by an Arab shepherd in
1947, but some Bible scholars at first questioned whether they were
genuine. The linen wrapping from the Book of Isaiah was tested for
carbon 14, giving a date of 100 b.c.e., which helped to establish its
authenticity. Human hair from an Egyptian tomb was determined
to be nearly five thousand years old. Well-preserved sandals from a
cave in eastern Oregon were determined to be ninety-three hundred
years old. A charcoal sample from a prehistoric site in western
South Dakota was found to be about seven thousand years old.
The Shroud of Turin, located in Turin, Italy, has been a controversial
object for many years. It is a linen cloth, more than four meters
long, which shows the image of a man’s body, both front and back.
Some people think it may have been the burial shroud of Jesus
Christ after his crucifixion. A team of scientists in 1978 was permitted
to study the shroud, using infrared photography, analysis of
possible blood stains, microscopic examination of the linen fibers,
and other methods. The results were ambiguous. A carbon 14 test
was not permitted because it would have required cutting a piece
about the size of a handkerchief from the shroud.
A new method of measuring carbon 14 was developed in the late
1980’s. It is called “accelerator mass spectrometry,” or AMS. Unlike
Libby’s method, it does not count the radioactivity of carbon. Instead, a mass spectrometer directly measures the ratio of carbon 14
to ordinary carbon. The main advantage of this method is that the
sample size needed for analysis is about a thousand times smaller
than before. The archbishop of Turin permitted three laboratories
with the appropriate AMS apparatus to test the shroud material.
The results agreed that the material was from the fourteenth century,
not from the time of Christ. The figure on the shroud may be a
watercolor painting on linen.
Since Libby’s pioneering experiments in the late 1940’s, carbon
14 dating has established itself as a reliable dating technique for archaeologists
and cultural historians. Further improvements are expected
to increase precision, to make it possible to use smaller samples,
and to extend the effective time range of the method back to
fifty thousand years or earlier.
05 March 2009
CAD/CAM
The invention: Computer-Aided Design (CAD) and Computer-
Aided Manufacturing (CAM) enhanced flexibility in engineering
design, leading to higher quality and reduced time for manufacturing.
The people behind the invention:
Patrick Hanratty, a General Motors Research Laboratory
worker who developed graphics programs
Jack St. Clair Kilby (1923- ), a Texas Instruments employee
who first conceived of the idea of the integrated circuit
Robert Noyce (1927-1990), an Intel Corporation employee who
developed an improved process of manufacturing
integrated circuits on microchips
Don Halliday, an early user of CAD/CAM who created the
Made-in-America car in only four months by using CAD
and project management software
Fred Borsini, an early user of CAD/CAM who demonstrated
its power
Summary of Event
Computer-Aided Design (CAD) is a technique whereby geometrical
descriptions of two-dimensional (2-D) or three-dimensional (3-
D) objects can be created and stored, in the form of mathematical
models, in a computer system. Points, lines, and curves are represented
as graphical coordinates. When a drawing is requested from
the computer, transformations are performed on the stored data,
and the geometry of a part or a full view from either a two- or a
three-dimensional perspective is shown. CAD systems replace the
tedious process of manual drafting, and computer-aided drawing
and redrawing that can be retrieved when needed has improved
drafting efficiency. A CAD system is a combination of computer
hardware and software that facilitates the construction of geometric
models and, in many cases, their analysis. It allows a wide variety of
visual representations of those models to be displayed.
Computer-Aided Manufacturing (CAM) refers to the use of computers
to control, wholly or partly, manufacturing processes. In
practice, the term is most often applied to computer-based developments
of numerical control technology; robots and flexible manufacturing
systems (FMS) are included in the broader use of CAM
systems. A CAD/CAM interface is envisioned as a computerized
database that can be accessed and enriched by either design or manufacturing
professionals during various stages of the product development
and production cycle.
In CAD systems of the early 1990’s, the ability to model solid objects
became widely available. The use of graphic elements such as
lines and arcs and the ability to create a model by adding and subtracting
solids such as cubes and cylinders are the basic principles of
CAD and of simulating objects within a computer. CAD systems enable
computers to simulate both taking things apart (sectioning)
and putting things together for assembly. In addition to being able
to construct prototypes and store images of different models, CAD
systems can be used for simulating the behavior of machines, parts,
and components. These abilities enable CAD to construct models
that can be subjected to nondestructive testing; that is, even before
engineers build a physical prototype, the CAD model can be subjected
to testing and the results can be analyzed. As another example,
designers of printed circuit boards have the ability to test their
circuits on a CAD system by simulating the electrical properties of
components.
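The core idea of storing geometry as coordinates and producing views by transformation can be pictured with a toy 2-D example. The outline data and function name here are invented for illustration, not taken from any actual CAD system:

```python
import math

# A part outline stored as 2-D graphical coordinates (a unit square).
outline = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

def rotate(points, angle_deg):
    """Return the outline rotated about the origin: one of the
    transformations performed on stored data when a drawing is requested."""
    a = math.radians(angle_deg)
    return [(x * math.cos(a) - y * math.sin(a),
             x * math.sin(a) + y * math.cos(a)) for x, y in points]

rotated = rotate(outline, 90.0)
# Under a 90-degree rotation, the corner (1, 0) maps to (0, 1).
print(round(rotated[1][0], 6), round(rotated[1][1], 6))  # 0.0 1.0
```

A real CAD system applies the same principle to far richer models (arcs, solids, 3-D projections), but the stored data remain coordinates and the views remain computed transformations of them.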
During the 1950’s, the U.S. Air Force recognized the need for reducing
the development time for special aircraft equipment. As a
result, the Air Force commissioned the Massachusetts Institute of
Technology to develop numerically controlled (NC) machines that
were programmable. A workable demonstration of NC machines
was made in 1952; this began a new era for manufacturing. As the
speed of an aircraft increased, the cost of manufacturing also increased
because of stricter technical requirements. This higher cost
provided a stimulus for the further development of NC technology,
which promised to reduce errors in design before the prototype
stage.
The early 1960’s saw the development of mainframe computers.
Many industries valued computing technology for its speed and for its accuracy in lengthy and tedious numerical operations in design,
manufacturing, and other business functions. Patrick
Hanratty, working for General Motors Research Laboratory, saw
other potential applications and developed graphics programs for
use on mainframe computers. The use of graphics in software aided
the development of CAD/CAM, allowing visual representations of
models to be presented on computer screens and printers.
The 1970’s saw an important development in computer hardware,
namely the development and growth of personal computers
(PCs). Personal computers became smaller as a result of the development
of integrated circuits. Jack St. Clair Kilby, working for Texas
Instruments, first conceived of the integrated circuit; later, Robert
Noyce, working for Intel Corporation, developed an improved process
of manufacturing integrated circuits on microchips. Personal
computers using these microchips offered both speed and accuracy
at costs much lower than those of mainframe computers.
Five companies offered integrated commercial computer-aided
design and computer-aided manufacturing systems by the first half
of 1973. Integration meant that both design and manufacturing
were contained in one system. Of these five companies—Applicon,
Computervision, Gerber Scientific, Manufacturing and Consulting
Services (MCS), and United Computing—four offered turnkey systems
exclusively. Turnkey systems provide design, development,
training, and implementation for each customer (company) based
on the contractual agreement; they are meant to be used as delivered,
with no need for the purchaser to make significant adjustments
or perform programming.
The 1980’s saw a proliferation of mini- and microcomputers with
a variety of platforms (processors) with increased speed and better
graphical resolution. This made the widespread development of
computer-aided design and computer-aided manufacturing possible
and practical. Major corporations spent large research and development
budgets developing CAD/CAM systems that would
automate manual drafting and machine tool movements. Don Halliday,
working for Truesports Inc., provided an early example of the
benefits of CAD/CAM. He created the Made-in-America car in only
four months by using CAD and project management software. In
the late 1980’s, Fred Borsini, the president of Leap Technologies in Michigan, brought various products to market in record time through
the use of CAD/CAM.
In the early 1980’s, much of the CAD/CAM industry consisted of
software companies. The cost for a relatively slow interactive system
in 1980 was close to $100,000. The late 1980’s saw the demise of
minicomputer-based systems in favor of Unix work stations and
PCs based on 386 and 486 microchips produced by Intel. By the time
of the International Manufacturing Technology show in September,
1992, the industry could show numerous CAD/CAM innovations
including tools, CAD/CAM models to evaluate manufacturability
in early design phases, and systems that allowed use of the same
data for a full range of manufacturing functions.
Impact
In 1990, CAD/CAM hardware sales by U.S. vendors reached
$2.68 billion. In software alone, $1.42 billion worth of CAD/CAM
products and systems were sold worldwide by U.S. vendors, according
to International Data Corporation figures for 1990. CAD/
CAM systems were in widespread use throughout the industrial
world. Development lagged in advanced software applications,
particularly in image processing, and in the communications software
and hardware that ties processes together.
A reevaluation of CAD/CAM systems was being driven by the
industry trend toward increased functionality of computer-driven
numerically controlled machines. Numerical control (NC) software
enables users to graphically define the geometry of the parts in a
product, develop paths that machine tools will follow, and exchange
data among machines on the shop floor. In 1991, NC configuration
software represented 86 percent of total CAM sales. In 1992,
the market shares of the five largest companies in the CAD/CAM
market were 29 percent for International Business Machines, 17 percent
for Intergraph, 11 percent for Computervision, 9 percent for
Hewlett-Packard, and 6 percent for Mentor Graphics.
General Motors formed a joint venture with Ford and Chrysler to
develop a common computer language in order to make the next
generation of CAD/CAM systems easier to use. The venture was
aimed particularly at problems that posed barriers to speeding up the design of new automobiles. The three car companies all had sophisticated
computer systems that allowed engineers to design
parts on computers and then electronically transmit specifications
to tools that make parts or dies.
CAD/CAM technology was expected to advance on many fronts.
As of the early 1990’s, different CAD/CAM vendors had developed
systems that were often incompatible with one another, making it
difficult to transfer data from one system to another. Large corporations,
such as the major automakers, developed their own interfaces
and network capabilities to allow different systems to communicate.
Major users of CAD/CAM saw consolidation in the industry
through the establishment of standards as being in their interests.
Resellers of CAD/CAM products also attempted to redefine
their markets. These vendors provide technical support and service
to users. The sale of CAD/CAM products and systems offered substantial
opportunities, since demand remained strong. Resellers
worked most effectively with small and medium-sized companies,
which often were neglected by the primary sellers of CAD/CAM
equipment because they did not generate a large volume of business.
Some projections held that by 1995 half of all CAD/CAM systems
would be sold through resellers, at a cost of $10,000 or less for
each system. The CAD/CAM market thus was in the process of dividing
into two markets: large customers (such as aerospace firms
and automobile manufacturers) that would be served by primary
vendors, and small and medium-sized customers that would be serviced
by resellers.
CAD will find future applications in marketing, the construction
industry, production planning, and large-scale projects such as shipbuilding
and aerospace. Other likely CAD markets include hospitals,
the apparel industry, colleges and universities, food product
manufacturers, and equipment manufacturers. As the linkage between
CAD and CAM is enhanced, systems will become more productive.
The geometrical data from CAD will be put to greater use
by CAM systems.
CAD/CAM already had proved that it could make a big difference
in productivity and quality. Customer orders could be changed
much faster and more accurately than in the past, when a change
could require a manual redrafting of a design. Computers could do automatically in minutes what once took hours manually. CAD/
CAM saved time by reducing, and in some cases eliminating, human
error. Many flexible manufacturing systems (FMS) had machining
centers equipped with sensing probes to check the accuracy
of the machining process. These self-checks can be made part of numerical
control (NC) programs. With the technology of the early
1990’s, some experts estimated that CAD/CAM systems were in
many cases twice as productive as the systems they replaced; in the
long run, productivity is likely to improve even more, perhaps up to
three times that of older systems or even higher. As costs for CAD/
CAM systems concurrently fall, the investment in a system will be
recovered more quickly. Some analysts estimated that by the mid-
1990’s, the recovery time for an average system would be about
three years.
Another frontier in the development of CAD/CAM systems is
expert (or knowledge-based) systems, which combine data with a
human expert’s knowledge, expressed in the form of rules that the
computer follows. Such a system will analyze data in a manner
mimicking intelligence. For example, a 3-D model might be created
from standard 2-D drawings. Expert systems will likely play a
pivotal role in CAM applications. For example, an expert system
could determine the best sequence of machining operations to produce
a component.
Continuing improvements in hardware, especially increased
speed, will benefit CAD/CAM systems. Software developments,
however, may produce greater benefits. Wider use of CAD/CAM
systems will depend on the cost savings from improvements in
hardware and software as well as on the productivity of the systems
and the quality of their product. The construction, apparel,
automobile, and aerospace industries have already experienced
increases in productivity, quality, and profitability through the use
of CAD/CAM. A case in point is Boeing, which used CAD from
start to finish in the design of the 757.
Buna rubber
The invention: The first practical synthetic rubber product developed,
Buna inspired the creation of other synthetic substances
that eventually replaced natural rubber in industrial applications.
The people behind the invention:
Charles de la Condamine (1701-1774), a French naturalist
Charles Goodyear (1800-1860), an American inventor
Joseph Priestley (1733-1804), an English chemist
Charles Greville Williams (1829-1910), an English chemist
A New Synthetic Rubber
The discovery of natural rubber is often credited to the French
scientist Charles de la Condamine, who, in 1736, sent the French
Academy of Science samples of an elastic material used by Peruvian
Indians to make balls that bounced. The material was primarily a
curiosity until 1770, when Joseph Priestley, an English chemist, discovered
that it rubbed out pencil marks, after which he called it
“rubber.” Natural rubber, made from the sap of the rubber tree
(Hevea brasiliensis), became important after Charles Goodyear discovered
in 1839 that heating rubber with sulfur (a process called
“vulcanization”) made it more elastic and easier to use. Vulcanized
natural rubber came to be used to make raincoats, rubber bands,
and motor vehicle tires.
Natural rubber is difficult to obtain (making one tire requires
the amount of rubber produced by one tree in two years), and wars
have often cut off supplies of this material to various countries.
Therefore, efforts to manufacture synthetic rubber began in the
late nineteenth century. Those efforts followed the discovery by
English chemist Charles Greville Williams and others in the 1860’s
that natural rubber was composed of thousands of molecules of a
chemical called isoprene that had been joined to form giant, necklace-
like molecules. The first successful synthetic rubber, Buna,
was patented by Germany’s I. G. Farben Industrie in 1926. The success of this rubber led to the development of many other synthetic
rubbers, which are now used in place of natural rubber in many
applications.
From Erasers to Gas Pumps
Natural rubber belongs to the group of chemicals called “polymers.”
A polymer is a giant molecule that is made up of many simpler
chemical units (“monomers”) that are attached chemically to
form long strings. In natural rubber, the monomer is isoprene
(2-methyl-1,3-butadiene). The first efforts to make a synthetic rubber
used the discovery that isoprene could be made and converted
into an elastic polymer. The synthetic rubber that was created from
isoprene was, however, inferior to natural rubber. The first Buna
rubber, which was patented by I. G. Farben in 1926, was better, but it
was still less than ideal. Buna rubber was made by polymerizing the
monomer butadiene in the presence of sodium. The name Buna
comes from the first two letters of the words “butadiene” and “natrium”
(German for sodium). Natural and Buna rubbers are called
homopolymers because they contain only one kind of monomer.
The ability of chemists to make Buna rubber, along with its successful
use, led to experimentation with the addition of other monomers
to isoprene-like chemicals used to make synthetic rubber.
Among the first great successes were materials that contained two
alternating monomers; such materials are called “copolymers.” If
the two monomers are designated A and B, part of a polymer molecule
can be represented as (ABABABABABABABABAB). Numerous
synthetic copolymers, which are often called “elastomers,” now
replace natural rubber in applications where they have superior
properties. All elastomers are rubbers, since objects made from
them both stretch greatly when pulled and return quickly to their
original shape when the tension is released.
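The alternation scheme described above can be sketched as strings. This is purely a toy illustration of the homopolymer-versus-copolymer distinction, not chemistry; the function names and one-letter monomer labels are invented for the example.

```python
# Toy model of the two chain types from the passage: a homopolymer
# repeats one monomer; an alternating copolymer repeats a pair.
# Real polymer chains are three-dimensional molecules, not strings.

def homopolymer(monomer: str, n: int) -> str:
    """A chain of one repeating monomer, e.g. natural rubber (isoprene)."""
    return monomer * n

def copolymer(a: str, b: str, n: int) -> str:
    """An alternating chain of two monomers, e.g. Buna-S (ABAB...)."""
    return (a + b) * n

print(homopolymer("I", 9))     # IIIIIIIII
print(copolymer("A", "B", 9))  # ABABABABABABABABAB
```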
Two other well-known rubbers developed by I. G. Farben are the
copolymers called Buna-N and Buna-S. These materials combine butadiene
and the monomers acrylonitrile and styrene, respectively.
Many modern motor vehicle tires are made of synthetic rubber that
differs little from Buna-S rubber. This rubber was developed after
the United States was cut off in the 1940’s, during World War II,
from its Asian source of natural rubber. The solution to this problem
was the development of a synthetic rubber industry based on GR-S
rubber (government rubber plus styrene), which was essentially
Buna-S rubber. This rubber is still widely used.
Buna-S rubber is often made by mixing butadiene and styrene in
huge tanks of soapy water, stirring vigorously, and heating the mixture.
The polymer contains equal amounts of butadiene and styrene
(BSBSBSBSBSBSBSBS). When the molecules of the Buna-S polymer
reach the desired size, the polymerization is stopped and the rubber
is coagulated (solidified) chemically. Then, water and all the unused
starting materials are removed, after which the rubber is dried and
shipped to various plants for use in tires and other products. The
major difference between Buna-S and GR-S rubber is that the method
of making GR-S rubber involves the use of low temperatures.
Buna-N rubber is made in a fashion similar to that used for Buna-
S, using butadiene and acrylonitrile. Both Buna-N and the related
neoprene rubber, invented by Du Pont, are very resistant to gasoline
and other liquid vehicle fuels. For this reason, they can be used in
gas-pump hoses. All synthetic rubbers are vulcanized before they
are used in industry.
Impact
Buna rubber became the basis for the development of the other
modern synthetic rubbers. These rubbers have special properties
that make them suitable for specific applications. One developmental
approach involved the use of chemically modified butadiene in
homopolymers such as neoprene. Made of chloroprene (chlorobutadiene),
neoprene is extremely resistant to sun, air, and chemicals.
It is so widely used in machine parts, shoe soles, and hoses that
more than 400 million pounds are produced annually.
Another developmental approach involved copolymers that alternated
butadiene with other monomers. For example, the successful
Buna-N rubber (butadiene and acrylonitrile) has properties
similar to those of neoprene. It differs sufficiently from neoprene,
however, to be used to make items such as printing press rollers.
About 200 million pounds of Buna-N are produced annually. Some
4 billion pounds of the even more widely used polymer Buna-S/GR-S
are produced annually, most of which is used to make tires.
Several other synthetic rubbers have significant industrial applications,
and efforts to make copolymers for still other purposes continue.
Bullet train
The invention: An ultrafast passenger railroad system capable of
moving passengers at speeds double or triple those of ordinary
trains.
The people behind the invention:
Ikeda Hayato (1899-1965), Japanese prime minister from 1960 to
1964, who pushed for the expansion of public expenditures
Shinji Sogo (1901-1971), the president of the Japanese National
Railways, the “father of the bullet train”
Building a Faster Train
By 1900, Japan had a world-class railway system, a logical result
of the country’s dense population and the needs of its modernizing
economy. After 1907, the government controlled the system
through the Japanese National Railways (JNR). In 1938, JNR engineers
first suggested the idea of a train that would travel 125 miles
per hour from Tokyo to the southern city of Shimonoseki. Construction
of a rapid train began in 1940 but was soon stopped because of
World War II.
The 311-mile railway between Tokyo and Osaka, the Tokaido
Line, has always been the major line in Japan. By 1957, a business express
along the line operated at an average speed of 57 miles per
hour, but the double-track line was rapidly reaching its transport capacity.
The JNR established two investigative committees to explore
alternative solutions. In 1958, the second committee recommended
the construction of a high-speed railroad on a separate double track,
to be completed in time for the Tokyo Olympics of 1964. The Railway
Technical Institute of the JNR concluded that it was feasible to
design a line that would operate at an average speed of about 130
miles per hour, cutting time for travel between Tokyo and Osaka
from six hours to three hours.
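The arithmetic behind these quoted times can be checked directly. A quick sketch (the distance and speeds are those given in the text; note that a scheduled three-hour trip implies a somewhat lower door-to-door average than the 130-mile-per-hour design figure, the gap being station stops and speed-restricted stretches):

```python
# Travel-time check for the 311-mile Tokyo-Osaka Tokaido Line.
distance_miles = 311

def hours(avg_mph: float) -> float:
    """Travel time in hours at a given average speed."""
    return distance_miles / avg_mph

print(round(hours(57), 1))    # 1957 business express: about 5.5 hours
print(round(hours(130), 1))   # at a sustained 130 mph: about 2.4 hours
# A three-hour schedule implies roughly 311 / 3, i.e. about 104 mph,
# averaged over the whole run including stops.
```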
By 1962, about 17 miles of the proposed line were completed for
test purposes. During the next two years, prototype trains were
tested to correct flaws and make improvements in the design. The entire project was completed on schedule in July, 1964, with total construction
costs of more than $1 billion, double the original estimates.
The Speeding Bullet
Service on the Shinkansen, or New Trunk Line, began on October
1, 1964, ten days before the opening of the Olympic Games.
Commonly called the “bullet train” because of its shape and speed,
the Shinkansen was an instant success with the public, both in Japan
and abroad. As promised, the time required to travel between Tokyo
and Osaka was cut in half. Initially, the system provided daily
services of sixty trains consisting of twelve cars each, but the number
of scheduled trains was almost doubled by the end of the year.
The Shinkansen was able to operate at its unprecedented speed
because it was designed and operated as an integrated system,
making use of countless technological and scientific developments.
Tracks followed the standard gauge of 56.5 inches, rather than the
more narrow gauge common in Japan. For extra strength, heavy welded rails were attached directly onto reinforced concrete slabs.
The minimum radius of a curve was 8,200 feet, except where sharper
curves were mandated by topography. In many ways similar to
modern airplanes, the railway cars were made airtight in order to
prevent ear discomfort caused by changes in pressure when trains
enter tunnels.
The Shinkansen trains were powered by electric traction motors,
with four 185-kilowatt motors on each car—one motor attached to
each axle. This design had several advantages: It provided an even
distribution of axle load for reducing strain on the tracks; it allowed
the application of dynamic brakes (where the motor was used for
braking) on all axles; and it prevented the failure of one or two units
from interrupting operation of the entire train. The 25,000-volt electrical
current was carried by trolley wire to the cars, where it was
rectified into a pulsating current to drive the motors.
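The per-train power implied by these figures can be tallied up. A small sketch, assuming the original twelve-car formation mentioned earlier and the horsepower conversion factor of roughly 0.746 kilowatts per horsepower:

```python
# Total traction power of an original twelve-car Shinkansen set,
# from the figures above: four 185-kilowatt motors per car, one per axle.
motors_per_car = 4
kw_per_motor = 185
cars = 12

total_kw = motors_per_car * kw_per_motor * cars
print(total_kw)                   # 8880 kW for the whole train
print(round(total_kw / 0.7457))   # roughly 11,900 horsepower equivalent
```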
The Shinkansen system established a casualty-free record because
of its maintenance policies combined with its computerized
Centralized Traffic Control system. The control room at Tokyo Station
was designed to maintain timely information about the location
of all trains and the condition of all routes. Although train operators
had some discretion in determining speed, automatic brakes
also operated to ensure a safe distance between trains. At least once
each month, cars were thoroughly inspected; every ten days, an inspection
train examined the conditions of tracks, communication
equipment, and electrical systems.
Impact
Public usage of the Tokyo-Osaka bullet train increased steadily
because of the system’s high speed, comfort, punctuality, and superb
safety record. Businesspeople were especially happy that the
rapid service allowed them to make the round-trip without the necessity
of an overnight stay, and continuing modernization soon allowed
nonstop trains to make a one-way trip in two and one-half
hours, requiring speeds of 160 miles per hour in some stretches. By
the early 1970’s, the line was transporting a daily average of 339,000
passengers in 240 trains, meaning that a train departed from Tokyo
about every ten minutes.
The popularity of the Shinkansen system quickly resulted in demands
for its extension into other densely populated regions. In
1972, a 100-mile stretch between Osaka and Okayama was opened
for service. By 1975, the line was further extended to Hakata on the
island of Kyushu, passing through the Kammon undersea tunnel.
The cost of this 244-mile stretch was almost $2.5 billion. In 1982,
lines were completed from Tokyo to Niigata and from Tokyo to
Morioka. By 1993, the system had grown to 1,134 miles of track.
Since high usage made the system extremely profitable, the sale of
the JNR to private companies in 1987 did not appear to produce adverse
consequences.
The economic success of the Shinkansen had a revolutionary effect
on thinking about the possibilities of modern rail transportation,
leading one authority to conclude that the line acted as “a
savior of the declining railroad industry.” Several other industrial
countries were stimulated to undertake large-scale railway projects;
France, especially, followed Japan’s example by constructing high-speed
electric railroads from Paris to Nice and to Lyon. By the mid-
1980’s, there were experiments with high-speed trains based on
magnetic levitation and other radical innovations, but it was not
clear whether such designs would be able to compete with the
Shinkansen model.
Bubble memory
The invention: An early nonvolatile medium for storing information
on computers.
The person behind the invention:
Andrew H. Bobeck (1926- ), a Bell Telephone Laboratories
scientist
Magnetic Technology
The fanfare over the commercial prospects of magnetic bubbles
was begun on August 8, 1969, by a report appearing in both The New
York Times and The Wall Street Journal. The early 1970’s would see the
anticipation mount (at least in the computer world) with each prediction
of the benefits of this revolution in information storage technology.
Although it was not disclosed to the public until August of 1969,
magnetic bubble technology had held the interest of a small group
of researchers around the world for many years. The organization
that probably can claim the greatest research advances with respect
to computer applications of magnetic bubbles is Bell Telephone
Laboratories (later part of American Telephone and Telegraph). Basic
research into the properties of certain ferrimagnetic materials
started at Bell Laboratories shortly after the end of World War II
(1939-1945).
Ferrimagnetic substances are typically magnetic iron oxides. Research
into the properties of these and related compounds accelerated
after the discovery of ferrimagnetic garnets in 1956 (these are a
class of ferrimagnetic oxide materials that have the crystal structure
of garnet). Ferrimagnetism is similar to ferromagnetism, the phenomenon
that accounts for the strong attraction of one magnetized
body for another. The ferrimagnetic materials most suited for bubble
memories contain, in addition to iron, the element yttrium or a
metal from the rare earth series.
It was a fruitful collaboration between scientist and engineer,
between pure and applied science, that produced this promising breakthrough in data storage technology. In 1966, Bell Laboratories
scientist Andrew H. Bobeck and his coworkers were the first to realize
the data storage potential offered by the strange behavior of thin
slices of magnetic iron oxides under an applied magnetic field. The
first U.S. patent for a memory device using magnetic bubbles was
filed by Bobeck in the fall of 1966 and issued on August 5, 1969.
Bubbles Full of Memories
The three basic functional elements of a computer are the central
processing unit, the input/output unit, and memory. Most implementations
of semiconductor memory require a constant power
source to retain the stored data. If the power is turned off, all stored
data are lost. Memory with this characteristic is called “volatile.”
Disks and tapes, which are typically used for secondary memory,
are “nonvolatile.” Nonvolatile memory relies on the orientation of
magnetic domains, rather than on electrical currents, to sustain its
existence.
One can visualize how this works by analogy: take a
group of permanent bar magnets labeled with N for north at
one end and S for south at the other. If an arrow is painted starting
from the north end with the tip at the south end on each magnet, an
orientation can then be assigned to a magnetic domain (here one
whole bar magnet). Data are “stored” with these bar magnets by arranging
them in rows, some pointing up, some pointing down. Different
arrangements translate to different data. In the binary world
of the computer, all information is represented by two states. A
stored data item (known as a “bit,” or binary digit) is either on or off,
up or down, true or false, depending on the physical representation.
The “on” state is commonly labeled with the number 1 and the “off”
state with the number 0. This is the principle behind magnetic disk
and tape data storage.
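The bar-magnet analogy above can be sketched in a few lines. This is purely illustrative of the orientation-to-bit mapping, with invented names; it is not a model of any real storage device.

```python
# Sketch of the bar-magnet analogy: a row of magnet orientations
# ("up" / "down") is read out as a sequence of binary digits.

def read_bits(magnets):
    """Map each magnet's orientation to a bit: up -> 1, down -> 0."""
    return [1 if m == "up" else 0 for m in magnets]

row = ["up", "down", "down", "up", "up", "down", "up", "down"]
print(read_bits(row))   # [1, 0, 0, 1, 1, 0, 1, 0]
```

Different arrangements of the row translate to different stored data, exactly as the passage describes; the orientations persist without power, which is what makes such storage nonvolatile.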
Now imagine a thin slice of a certain type of magnetic material in
the shape of a 3-by-5-inch index card. Under a microscope, using a
special source of light, one can see through this thin slice in many regions
of the surface. Darker, snakelike regions can also be seen, representing
domains of an opposite orientation (polarity) to the transparent
regions. If a weak external magnetic field is then applied by placing a permanent magnet of the same shape as the card on the
underside of the slice, a strange thing happens to the dark serpentine
pattern—the long domains shrink and eventually contract into
“bubbles,” tiny magnetized spots. Viewed from the side of the slice,
the bubbles are cylindrically shaped domains having a polarity opposite
to that of the material on which they rest. The presence or absence
of a bubble indicates either a 0 or a 1 bit. Data bits are stored by
moving the bubbles in the thin film. As long as the field is applied
by the permanent magnet substrate, the data will be retained. The
bubble is thus a nonvolatile medium for data storage.
Consequences
Magnetic bubble memory created quite a stir in 1969 with its
splashy public introduction. Most of the manufacturers of computer
chips immediately instituted bubble memory development projects.
Texas Instruments, Philips, Hitachi, Motorola, Fujitsu, and International
Business Machines (IBM) joined the race with Bell Laboratories
to mass-produce bubble memory chips. Texas Instruments
became the first major chip manufacturer to mass-produce bubble
memories in the mid-to-late 1970’s. By 1990, however, almost all the
research into magnetic bubble technology had shifted to Japan.
Hitachi and Fujitsu began to invest heavily in this area.
Mass production proved to be the most difficult task. Although
the materials it uses are different, the process of producing magnetic
bubble memory chips is similar to the process applied in producing
semiconductor-based chips such as those used for random access
memory (RAM). It is for this reason that major semiconductor manufacturers
and computer companies initially invested in this technology.
Lower fabrication yields and reliability issues plagued
early production runs, however, and, although these problems
have mostly been solved, gains in the performance characteristics of
competing conventional memories have limited the impact that
magnetic bubble technology has had on the marketplace. The materials
used for magnetic bubble memories are costlier and possess
more complicated structures than those used for semiconductor or
disk memory.
Speed and cost of materials are not the only bases for comparison. It is possible to perform some elementary logic with magnetic
bubbles. Conventional semiconductor-based memory offers storage
only. The capability of performing logic with magnetic bubbles
puts bubble technology far ahead of other magnetic technologies
with respect to functional versatility.
A small niche market for bubble memory developed in the 1980’s.
Magnetic bubble memory can be found in intelligent terminals, desktop
computers, embedded systems, test equipment, and similar microcomputer-
based systems.
Brownie camera
The invention: The first inexpensive and easy-to-use camera available
to the general public, the Brownie revolutionized photography
by making it possible for every person to become a photographer.
The people behind the invention:
George Eastman (1854-1932), founder of the Eastman Kodak
Company
Frank A. Brownell, a camera maker for the Kodak Company
who designed the Brownie
Henry M. Reichenbach, a chemist who worked with Eastman to
develop flexible film
William H. Walker, a Rochester camera manufacturer who
collaborated with Eastman
A New Way to Take Pictures
In early February of 1900, the first shipments of a new small box
camera called the Brownie reached Kodak dealers in the United
States and England. George Eastman, eager to put photography
within the reach of everyone, had directed Frank Brownell to design
a small camera that could be manufactured inexpensively but that
would still take good photographs.
Advertisements for the Brownie proclaimed that everyone—
even children—could take good pictures with the camera. The
Brownie was aimed directly at the children’s market, a fact indicated
by its box, which was decorated with drawings of imaginary
elves called “Brownies” created by the Canadian illustrator Palmer
Cox. Moreover, the camera cost only one dollar.
The Brownie was made of jute board and wood, with a hinged
back fastened by a sliding catch. It had an inexpensive two-piece
glass lens and a simple rotary shutter that allowed both timed and
instantaneous exposures to be made. With a lens aperture of approximately
f/14 and a shutter speed of approximately 1/50 of a second,
the Brownie was certainly capable of taking acceptable snapshots. It had no viewfinder; however, an optional clip-on reflecting
viewfinder was available. The camera came loaded with a six-exposure
roll of Kodak film that produced square negatives 2.5 inches on
a side. This film could be developed, printed, and mounted for forty
cents, and a new roll could be purchased for fifteen cents.
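The fixed settings quoted above can be expressed as a single exposure value using the standard photographic formula EV = log2(N²/t), where N is the f-number and t the shutter time in seconds. A quick sketch (both settings are approximate, as the text notes):

```python
import math

# Exposure value of the Brownie's fixed settings, using the standard
# formula EV = log2(N^2 / t), with aperture N of about f/14 and a
# shutter time t of about 1/50 second.
N = 14
t = 1 / 50

ev = math.log2(N**2 / t)
print(round(ev, 1))   # about 13.3
```

An exposure value around 13 corresponds to bright daylight, which is consistent with the camera being suited to outdoor snapshots on the slow films of the era.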
George Eastman’s first career choice had been banking, but when
he failed to receive a promotion he thought he deserved, he decided
to devote himself to his hobby, photography. Having worked with a
rigorous wet-plate process, he knew why there were few amateur
photographers at the time—the whole process, from plate preparation
to printing, was too expensive and too much trouble. Even so,
he had already begun to think about the commercial possibilities of
photography; after reading of British experiments with dry-plate
technology, he set up a small chemical laboratory and came up with
a process of his own. The Eastman Dry Plate Company became one
of the most successful producers of gelatin dry plates.
Dry-plate photography had attracted more amateurs, but it was
still a complicated and expensive hobby. Eastman realized that the
number of photographers would have to increase considerably if
the market for cameras and supplies were to have any potential. In
the early 1880’s, Eastman first formulated the policies that would
make the Eastman Kodak Company so successful in years to come:
mass production, low prices, foreign and domestic distribution, and
selling through extensive advertising and by demonstration.
In his efforts to expand the amateur market, Eastman first tackled
the problem of the glass-plate negative, which was heavy, fragile,
and expensive to make. By 1884, his experiments with paper
negatives had been successful enough that he changed the name of
his company to The Eastman Dry Plate and Film Company. Since
flexible roll film needed some sort of device to hold it steady in the
camera’s focal plane, Eastman collaborated with William Walker
to develop the Eastman-Walker roll-holder. Eastman’s pioneering
manufacture and use of roll films led to the appearance on the market
in the 1880’s of a wide array of hand cameras from a number of
different companies. Such cameras were called “detective cameras”
because they were small and could be used surreptitiously. The
most famous of these, introduced by Eastman in 1888, was named
the “Kodak”—a word he coined to be terse, distinctive, and easily pronounced in any language. This camera’s simplicity of operation
was appealing to the general public and stimulated the growth of
amateur photography.
The Camera
The Kodak was a box about seven inches long and four inches
wide, with a one-speed shutter and a fixed-focus lens that produced
reasonably sharp pictures. It came loaded with enough roll film to
make one hundred exposures. The camera’s initial price of twenty-five
dollars included the cost of processing the first roll of film; the
camera also came with a leather case and strap. After the film was
exposed, the camera was mailed, unopened, to the company’s plant
in Rochester, New York, where the developing and printing were
done. For an additional ten dollars, the camera was reloaded and
sent back to the customer.
The Kodak was advertised in mass-market publications, rather
than in specialized photographic journals, with the slogan: “You
press the button, we do the rest.” With his introduction of a camera
that was easy to use and a service that eliminated the need to know
anything about processing negatives, Eastman revolutionized the
photographic market. Thousands of people no longer depended
upon professional photographers for their portraits but instead
learned to make their own. In 1892, the Eastman Dry Plate and Film
Company became the Eastman Kodak Company, and by the mid-
1890’s, one hundred thousand Kodak cameras had been manufactured
and sold, half of them in Europe by Kodak Limited.
Having popularized photography with the first Kodak, in 1900
Eastman turned his attention to the children’s market with the introduction
of the Brownie. The first five thousand cameras sent to
dealers were sold immediately; by the end of the following year, almost
a quarter of a million had been sold. The Kodak Company organized
Brownie camera clubs and held competitions specifically
for young photographers. The Brownie came with an instruction
booklet that gave children simple directions for taking successful
pictures, and “The Brownie Boy,” an appealing youngster who
loved photography, became a standard feature of Kodak’s advertisements.
Impact
Eastman followed the success of the first Brownie by introducing
several additional models between 1901 and 1917. Each was a more
elaborate version of the original. These Brownie box cameras were
on the market until the early 1930’s, and their success inspired other
companies to manufacture box cameras of their own. In 1906, the
Ansco company produced the Buster Brown camera in three sizes
that corresponded to Kodak’s Brownie camera range; in 1910 and
1914, Ansco made three more versions. The Seneca company’s
Scout box camera, in three sizes, appeared in 1913, and Sears Roebuck’s
Kewpie cameras, in five sizes, were sold beginning in 1916.
In England, the Houghtons company introduced its first Scout camera
in 1901, followed by another series of four box cameras in 1910
sold under the Ensign trademark. Other English manufacturers of
box cameras included the James Sinclair company, with its Traveller
Una of 1909, and the Thornton-Pickard company, with a Filma camera
marketed in four sizes in 1912.
After World War I ended, several series of box cameras were
manufactured in Germany by companies that had formerly concentrated
on more advanced and expensive cameras. The success of
box cameras in other countries, led by Kodak’s Brownie, undoubtedly
prompted this trend in the German photographic industry. The
Ernemann Film K series of cameras in three sizes, introduced in
1919, and the all-metal Trapp Little Wonder of 1922 are examples of
popular German box cameras.
In the early 1920’s, camera manufacturers began making box-camera
bodies from metal rather than from wood and cardboard.
Machine-formed metal was less expensive than the traditional hand-worked
materials. In 1924, Kodak’s two most popular Brownie sizes
appeared with aluminum bodies.
In 1928, Kodak Limited of England added two important new
features to the Brownie—a built-in portrait lens, which could be
brought in front of the taking lens by pressing a lever, and camera
bodies in a range of seven different fashion colors. The Beau
Brownie cameras, made in 1930, were the most popular of all the
colored box cameras. The work of Walter Dorwin Teague, a leading
American designer, these cameras had an Art Deco geometric pattern on the front panel, which was enameled in a color matching the
leatherette covering of the camera body. Several other companies,
including Ansco, again followed Kodak’s lead and introduced their
own lines of colored cameras.
In the 1930’s, several new box cameras with interesting features appeared,
many manufactured by leading film companies. In France, the
Lumiere Company advertised a series of box cameras—the Luxbox,
Scoutbox, and Lumibox—that ranged from a basic camera to one with
an adjustable lens and shutter. In 1933, the German Agfa company restyled
its entire range of box cameras, and in 1939, the Italian Ferrania
company entered the market with box cameras in two sizes. In 1932,
Kodak redesigned its Brownie series to take the new 620 roll film,
which it had just introduced. This film and the new Six-20 Brownies inspired
other companies to experiment with variations of their own;
some box cameras, such as the Certo Double-box, the Coronet Every
Distance, and the Ensign E-20 cameras, offered a choice of two picture
formats.
Another new trend was a move toward smaller-format cameras
using standard 127 roll film. In 1934, Kodak marketed the small
Baby Brownie. Designed by Teague and made from molded black
plastic, this little camera with a folding viewfinder sold for only one
dollar—the price of the original Brownie in 1900.
The Baby Brownie, the first Kodak camera made of molded plastic,
heralded the move to the use of plastic in camera manufacture.
Soon many others, such as the Altissa series of box cameras and the
Voigtlander Brilliant V/6 camera, were being made from this new material.
Later Trends
By the late 1930’s, flashbulbs had replaced flash powder for taking
pictures in low light; again, the Eastman Kodak Company led
the way in introducing this new technology as a feature on the inexpensive
box camera. The Falcon Press-Flash, marketed in 1939, was
the first mass-produced camera to have flash synchronization and
was followed the next year by the Six-20 Flash Brownie, which had a
detachable flash gun. In the early 1940’s, other companies, such as
Agfa-Ansco, introduced this feature on their own box cameras.
In the years after World War II, the box camera evolved into an
eye-level camera, making it more convenient to carry and use.
Many amateur photographers, however, still had trouble handling paper-backed roll film and were taking their cameras back to dealers
to be unloaded and reloaded. Kodak therefore developed a new
system of film loading, using the Kodapak cartridge, which could
be mass-produced with a high degree of accuracy by precision plastic-
molding techniques. To load the camera, the user simply opened
the camera back and inserted the cartridge. This new film was introduced
in 1963, along with a series of Instamatic cameras designed
for its use. Both were immediately successful.
The popularity of the film cartridge ended the long history of the
simple and inexpensive roll film camera. The last English Brownie
was made in 1967, and the series of Brownies made in the United
States was discontinued in 1970. Eastman’s original marketing strategy
of simplifying photography in order to increase the demand for
cameras and film continued, however, with the public’s acceptance
of cartridge-loading cameras such as the Instamatic.
From the beginning, Eastman had recognized that there were
two kinds of photographers other than professionals. The first, he
declared, were the true amateurs who devoted time enough to acquire
skill in the complex processing procedures of the day. The second
were those who merely wanted personal pictures or memorabilia
of their everyday lives, families, and travels. The second class,
he observed, outnumbered the first by almost ten to one. Thus, it
was to this second kind of amateur photographer that Eastman had
appealed, both with his first cameras and with his advertising slogan,
“You press the button, we do the rest.” Eastman had done
much more than simply invent cameras and films; he had invented
a system and then developed the means for supporting that system.
This is essentially what the Eastman Kodak Company continued to
accomplish with the series of Instamatics and other descendants of
the original Brownie. In the decade between 1963 and 1973, for example,
approximately sixty million Instamatics were sold throughout
the world.
The research, manufacturing, and marketing activities of the
Eastman Kodak Company have been so complex and varied that no
one would suggest that the company’s prosperity rests solely on the
success of its line of inexpensive cameras and cartridge films, although
these have continued to be important to the company. Like
Kodak, however, most large companies in the photographic industry have expanded their research to satisfy the ever-growing demand
from amateurs. The amateurism that George Eastman recognized
and encouraged at the beginning of the twentieth century
thus still flourished at its end.