10 March 2009
Cassette recording
The invention: Self-contained system making it possible to record
and repeatedly play back sound without having to thread tape
through a machine.
The person behind the invention:
Fritz Pfleumer, a German engineer whose work on audiotapes
paved the way for audiocassette production
Smaller Is Better
The introduction of magnetic audio recording tape in 1929 was
met with great enthusiasm, particularly in the entertainment industry,
and specifically among radio broadcasters. Although somewhat
practical methods for recording and storing sound for later playback
had been around for some time, audiotape was much easier to
use, store, and edit, and much less expensive to produce.
It was Fritz Pfleumer, a German engineer, who in 1929 filed the
first audiotape patent. His detailed specifications indicated that
tape could be made by bonding a thin coating of oxide to strips of either
paper or film. Pfleumer also suggested that audiotape could be
attached to filmstrips to provide higher-quality sound than was
available with the film sound technologies in use at that time. In
1935, the German electronics firm AEG produced a reliable prototype
of a record-playback machine based on Pfleumer’s idea. By
1947, the American company 3M had refined the concept to the
point where it was able to produce a high-quality tape using a plastic-
based backing and red oxide. The tape recorded and reproduced
sound with a high degree of clarity and dynamic range and would
soon become the standard in the industry.
Still, the tape was sold and used in a somewhat inconvenient
open-reel format. The user had to thread it through a machine and
onto a take-up reel. This process was somewhat cumbersome and
complicated for the layperson. For many years, sound-recording
technology remained a tool mostly for professionals.
In 1963, the first audiocassette was introduced by the Netherlands-based Philips NV company. This device could be inserted into
a machine without threading. Rewind and fast-forward were faster,
and it made no difference where the tape was stopped prior to the
ejection of the cassette. By contrast, open-reel audiotape required
that the tape be wound fully onto one or the other of the two reels
before it could be taken off the machine.
Technical advances allowed the cassette tape to be much narrower
than the tape used in open reels and also allowed the tape
speed to be reduced without sacrificing sound quality. Thus, the
cassette was easier to carry around, and more sound could be recorded
on a cassette tape. In addition, the enclosed cassette decreased
wear and tear on the tape and protected it from contamination.
Creating a Market
One of the most popular uses for audiocassettes was to record
music from radios and other audio sources for later playback. During
the 1970’s, many radio stations developed “all music” formats
in which entire albums were often played without interruption.
That gave listeners an opportunity to record the music for later
playback. At first, the music recording industry complained about
this practice, charging that unauthorized recording of music from
the radio was a violation of copyright laws. Eventually, the issue
died down as the same companies began to recognize this new, untapped
market for recorded music on cassette.
Audiocassettes, all based on the original Philips design, were being
manufactured by more than sixty companies within only a few
years of their introduction. In addition, spin-offs of that design were
being used in many specialized applications, including dictation,
storage of computer information, and surveillance. The emergence
of videotape resulted in a number of formats for recording and
playing back video based on the same principle. Although each is
characterized by different widths of tape, each uses the same technique
for tape storage and transport.
The cassette has remained a popular means of storing and retrieving
information on magnetic tape for more than a quarter of a
century. During the early 1990’s, digital technologies such as audio
CDs (compact discs) and the more advanced CD-ROM (compact discs that reproduce sound, text, and images via computer) were beginning
to store information in revolutionary new ways. With the
development of this increasingly sophisticated technology, the need for
the audiocassette, once the most versatile, reliable, portable, and
economical means of recording, storing, and playing back sound,
became more limited.
Consequences
The cassette represented a new level of convenience for the audiophile,
resulting in a significant increase in the use of recording
technology in all walks of life. Even small children could operate
cassette recorders and players, which led to their use in schools for a
variety of instructional tasks and in the home for entertainment. The
recording industry realized that audiotape cassettes would allow
consumers to listen to recorded music in places where record players
were impractical: in automobiles, at the beach, even while camping.
The industry also saw the need for widespread availability of
music and information on cassette tape. It soon began distributing
albums on audiocassette in addition to the long-play vinyl discs,
and recording sales increased substantially. This new technology
put recorded music into automobiles for the first time, again resulting
in a surge in sales for recorded music. Eventually, information,
including language instruction and books-on-tape, became popular
commuter fare.
With the invention of the microchip, audiotape players became
available in smaller and smaller sizes, making them truly portable.
Audiocassettes underwent another explosion in popularity during
the early 1980’s, when the Sony Corporation introduced the
Walkman, an extremely compact, almost weightless cassette player
that could be attached to clothing and used with lightweight earphones
virtually anywhere. At the same time, cassettes were suddenly
being used with microcomputers for backing up magnetic
data files.
Home video soon exploded onto the scene, bringing with it new
applications for cassettes. As had happened with audiotape, video
camera-recorder units, called “camcorders,” were miniaturized to
the point where 8-millimeter videocassettes capable of recording up to 90 minutes of live action and sound were widely available. These
cassettes closely resembled the audiocassette first introduced in
1963.
Carbon dating
The invention: A technique that measures the radioactive decay of
carbon 14 in organic substances to determine the ages of artifacts
as old as ten thousand years.
The people behind the invention:
Willard Frank Libby (1908-1980), an American chemist who won
the 1960 Nobel Prize in Chemistry
Charles Wesley Ferguson (1922-1986), a scientist who
demonstrated that carbon 14 dates before 1500 b.c.e. needed to
be corrected
One in a Trillion
Carbon dioxide in the earth’s atmosphere contains a mixture of
three carbon isotopes (isotopes are atoms of the same element that
contain different numbers of neutrons), in the following
proportions: about 99 percent carbon 12, about 1 percent carbon
13, and roughly one atom in a trillion of radioactive carbon
14. Plants absorb carbon dioxide from the atmosphere during photosynthesis,
and then animals eat the plants, so all living plants and
animals contain a small amount of radioactive carbon.
When a plant or animal dies, its radioactivity slowly decreases as
the radioactive carbon 14 decays. The time it takes for half of any radioactive
substance to decay is known as its “half-life.” The half-life
for carbon 14 is known to be about fifty-seven hundred years. The
carbon 14 activity will drop to one-half after one half-life, one-fourth
after two half-lives, one-eighth after three half-lives, and so
forth. After ten or twenty half-lives, the activity becomes too low to
be measurable. Coal and oil, which were formed from organic matter
millions of years ago, have long since lost any carbon 14 activity.
Wood samples from an Egyptian tomb or charcoal from a prehistoric
fireplace a few thousand years ago, however, can be dated with
good reliability from the leftover radioactivity.
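The decay arithmetic described above can be sketched in a few lines of Python. This is only an illustration: the function names are invented for this sketch, and the rounded fifty-seven-hundred-year half-life is the figure used in the text.

```python
import math

HALF_LIFE_YEARS = 5700  # rounded carbon-14 half-life used in the text


def remaining_fraction(age_years: float) -> float:
    """Fraction of the original carbon-14 activity left after age_years."""
    return 0.5 ** (age_years / HALF_LIFE_YEARS)


def age_from_fraction(fraction: float) -> float:
    """Invert the decay law: the age of a sample whose measured activity
    is the given fraction of that of modern organic carbon."""
    return -HALF_LIFE_YEARS * math.log2(fraction)
```

Calling `remaining_fraction(3 * 5700)` gives one-eighth, matching the three-half-lives figure above, and `age_from_fraction(0.25)` recovers two half-lives, about eleven thousand four hundred years.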
In the 1940’s, the properties of radioactive elements were still
being discovered and were just beginning to be used to solve problems.
Scientists still did not know the half-life of carbon 14, and archaeologists still depended mainly on historical evidence to determine
the ages of ancient objects.
In early 1947, Willard Frank Libby started a crucial experiment in
testing for radioactive carbon. He decided to test samples of methane
gas from two different sources. One group of samples came
from the sewage disposal plant at Baltimore, Maryland, which was
rich in fresh organic matter. The other sample of methane came from
an oil refinery, which should have contained only ancient carbon
from fossils whose radioactivity should have completely decayed.
The experimental results confirmed Libby’s suspicions: The methane
from fresh sewage was radioactive, but the methane from oil
was not. Evidently, radioactive carbon was present in fresh organic
material, but it decays away over time.
Tree-Ring Dating
In order to establish the validity of radiocarbon dating, Libby analyzed
known samples of varying ages. These included tree-ring
samples from the years 575 and 1075 and one redwood from 979
b.c.e., as well as artifacts from Egyptian tombs going back to about
3000 b.c.e. In 1949, he published an article in the journal Science that
contained a graph comparing the historical ages and the measured
radiocarbon ages of eleven objects. The results were accurate within
10 percent, which meant that the general method was sound.
The first archaeological object analyzed by carbon dating, obtained
from the Metropolitan Museum of Art in New York, was a
piece of cypress wood from the tomb of King Djoser of Egypt. Based
on historical evidence, the age of this piece of wood was about forty-six
hundred years. A small sample of carbon obtained from this
wood was deposited on the inside of Libby’s radiation counter, giving
a count rate that was about 40 percent lower than that of modern
organic carbon. The resulting age of the wood calculated from its residual
radioactivity was about thirty-eight hundred years, a difference
of eight hundred years. Considering that this was the first object
to be analyzed, even such a rough agreement with the historic
age was considered to be encouraging.
The validity of radiocarbon dating depends on an important assumption—
namely, that the abundance of carbon 14 in nature has been constant for many thousands of years. If carbon 14 was less
abundant at some point in history, organic samples from that era
would have started with less radioactivity. When analyzed today,
their reduced activity would make them appear to be older than
they really are.
Charles Wesley Ferguson from the Tree-Ring Research Laboratory
at the University of Arizona tackled this problem. He measured
the age of bristlecone pine trees both by counting the rings and by
using carbon 14 methods. He found that carbon 14 dates before
1500 b.c.e. needed to be corrected. The results show that radiocarbon
dates are younger than tree-ring counting dates by as much as several
hundred years for the oldest samples. He knew that the number
of tree rings had given him the correct age of the pines, because trees
accumulate one ring of growth for every year of life. Apparently, the
carbon 14 content in the atmosphere has not been constant. Fortunately,
tree-ring counting gives reliable dates that can be used to
correct radiocarbon measurements back to about 6000 b.c.e.
Impact
Some interesting samples were dated by Libby’s group. The
Dead Sea Scrolls had been found in a cave by an Arab shepherd in
1947, but some Bible scholars at first questioned whether they were
genuine. The linen wrapping from the Book of Isaiah was tested for
carbon 14, giving a date of 100 b.c.e., which helped to establish its
authenticity. Human hair from an Egyptian tomb was determined
to be nearly five thousand years old. Well-preserved sandals from a
cave in eastern Oregon were determined to be ninety-three hundred
years old. A charcoal sample from a prehistoric site in western
South Dakota was found to be about seven thousand years old.
The Shroud of Turin, located in Turin, Italy, has been a controversial
object for many years. It is a linen cloth, more than four meters
long, which shows the image of a man’s body, both front and back.
Some people think it may have been the burial shroud of Jesus
Christ after his crucifixion. A team of scientists in 1978 was permitted
to study the shroud, using infrared photography, analysis of
possible blood stains, microscopic examination of the linen fibers,
and other methods. The results were ambiguous. A carbon 14 test
was not permitted because it would have required cutting a piece
about the size of a handkerchief from the shroud.
A new method of measuring carbon 14 was developed in the late
1980’s. It is called “accelerator mass spectrometry,” or AMS. Unlike
Libby’s method, it does not count the radioactivity of carbon. Instead, a mass spectrometer directly measures the ratio of carbon 14
to ordinary carbon. The main advantage of this method is that the
sample size needed for analysis is about a thousand times smaller
than before. The archbishop of Turin permitted three laboratories
with the appropriate AMS apparatus to test the shroud material.
The results agreed that the material was from the fourteenth century,
not from the time of Christ. The figure on the shroud may be a
watercolor painting on linen.
Since Libby’s pioneering experiments in the late 1940’s, carbon
14 dating has established itself as a reliable dating technique for archaeologists
and cultural historians. Further improvements are expected
to increase precision, to make it possible to use smaller samples,
and to extend the effective time range of the method back to
fifty thousand years or earlier.
05 March 2009
CAD/CAM
The invention: Computer-Aided Design (CAD) and Computer-
Aided Manufacturing (CAM) enhanced flexibility in engineering
design, leading to higher quality and reduced time for manufacturing.
The people behind the invention:
Patrick Hanratty, a General Motors Research Laboratory
worker who developed graphics programs
Jack St. Clair Kilby (1923- ), a Texas Instruments employee
who first conceived of the idea of the integrated circuit
Robert Noyce (1927-1990), an Intel Corporation employee who
developed an improved process of manufacturing
integrated circuits on microchips
Don Halliday, an early user of CAD/CAM who created the
Made-in-America car in only four months by using CAD
and project management software
Fred Borsini, an early user of CAD/CAM who demonstrated
its power
Summary of Event
Computer-Aided Design (CAD) is a technique whereby geometrical
descriptions of two-dimensional (2-D) or three-dimensional (3-
D) objects can be created and stored, in the form of mathematical
models, in a computer system. Points, lines, and curves are represented
as graphical coordinates. When a drawing is requested from
the computer, transformations are performed on the stored data,
and the geometry of a part or a full view from either a two- or a
three-dimensional perspective is shown. CAD systems replace the
tedious process of manual drafting, and computer-aided drawing
and redrawing that can be retrieved when needed has improved
drafting efficiency. A CAD system is a combination of computer
hardware and software that facilitates the construction of geometric
models and, in many cases, their analysis. It allows a wide variety of
visual representations of those models to be displayed.
Computer-Aided Manufacturing (CAM) refers to the use of computers
to control, wholly or partly, manufacturing processes. In
practice, the term is most often applied to computer-based developments
of numerical control technology; robots and flexible manufacturing
systems (FMS) are included in the broader use of CAM
systems. A CAD/CAM interface is envisioned as a computerized
database that can be accessed and enriched by either design or manufacturing
professionals during various stages of the product development
and production cycle.
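The idea described earlier, that a CAD model is stored coordinate data from which drawings are produced by applying transformations, can be sketched as a toy Python example. The names and structure here are purely illustrative and are not taken from any real CAD package.

```python
import math

# A "model" is just a list of (x, y) coordinate pairs; a drawing view is
# produced by transforming the stored data, as the text describes.
Point = tuple[float, float]


def rotate(points: list[Point], angle_deg: float) -> list[Point]:
    """Rotate every point about the origin by angle_deg degrees."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    return [(x * cos_a - y * sin_a, x * sin_a + y * cos_a) for x, y in points]


def translate(points: list[Point], dx: float, dy: float) -> list[Point]:
    """Shift every point by (dx, dy)."""
    return [(x + dx, y + dy) for x, y in points]


# A unit square stored as coordinates, viewed rotated 90 degrees
# and shifted five units to the right.
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
view = translate(rotate(square, 90.0), 5.0, 0.0)
```

The stored `square` is never altered; each requested view is computed from the same underlying data, which is the efficiency gain over manual redrafting that the text describes.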
In CAD systems of the early 1990’s, the ability to model solid objects
became widely available. The use of graphic elements such as
lines and arcs and the ability to create a model by adding and subtracting
solids such as cubes and cylinders are the basic principles of
CAD and of simulating objects within a computer. CAD systems enable
computers to simulate both taking things apart (sectioning)
and putting things together for assembly. In addition to being able
to construct prototypes and store images of different models, CAD
systems can be used for simulating the behavior of machines, parts,
and components. These abilities enable CAD to construct models
that can be subjected to nondestructive testing; that is, even before
engineers build a physical prototype, the CAD model can be subjected
to testing and the results can be analyzed. As another example,
designers of printed circuit boards have the ability to test their
circuits on a CAD system by simulating the electrical properties of
components.
During the 1950’s, the U.S. Air Force recognized the need for reducing
the development time for special aircraft equipment. As a
result, the Air Force commissioned the Massachusetts Institute of
Technology to develop numerically controlled (NC) machines that
were programmable. A workable demonstration of NC machines
was made in 1952; this began a new era for manufacturing. As the
speed of an aircraft increased, the cost of manufacturing also increased
because of stricter technical requirements. This higher cost
provided a stimulus for the further development of NC technology,
which promised to reduce errors in design before the prototype
stage.
The early 1960’s saw the development of mainframe computers.
Many industries valued computing technology for its speed and for its accuracy in lengthy and tedious numerical operations in design,
manufacturing, and other business functional areas. Patrick
Hanratty, working for General Motors Research Laboratory, saw
other potential applications and developed graphics programs for
use on mainframe computers. The use of graphics in software aided
the development of CAD/CAM, allowing visual representations of
models to be presented on computer screens and printers.
The 1970’s saw an important development in computer hardware,
namely the development and growth of personal computers
(PCs). Personal computers became smaller as a result of the development
of integrated circuits. Jack St. Clair Kilby, working for Texas
Instruments, first conceived of the integrated circuit; later, Robert
Noyce, working for Intel Corporation, developed an improved process
of manufacturing integrated circuits on microchips. Personal
computers using these microchips offered both speed and accuracy
at costs much lower than those of mainframe computers.
Five companies offered integrated commercial computer-aided
design and computer-aided manufacturing systems by the first half
of 1973. Integration meant that both design and manufacturing
were contained in one system. Of these five companies—Applicon,
Computervision, Gerber Scientific, Manufacturing and Consulting
Services (MCS), and United Computing—four offered turnkey systems
exclusively. Turnkey systems provide design, development,
training, and implementation for each customer (company) based
on the contractual agreement; they are meant to be used as delivered,
with no need for the purchaser to make significant adjustments
or perform programming.
The 1980’s saw a proliferation of mini- and microcomputers with
a variety of platforms (processors) with increased speed and better
graphical resolution. This made the widespread development of
computer-aided design and computer-aided manufacturing possible
and practical. Major corporations spent large research and development
budgets developing CAD/CAM systems that would
automate manual drafting and machine tool movements. Don Halliday,
working for Truesports Inc., provided an early example of the
benefits of CAD/CAM. He created the Made-in-America car in only
four months by using CAD and project management software. In
the late 1980’s, Fred Borsini, the president of Leap Technologies in Michigan, brought various products to market in record time through
the use of CAD/CAM.
In the early 1980’s, much of the CAD/CAM industry consisted of
software companies. The cost for a relatively slow interactive system
in 1980 was close to $100,000. The late 1980’s saw the demise of
minicomputer-based systems in favor of Unix work stations and
PCs based on 386 and 486 microchips produced by Intel. By the time
of the International Manufacturing Technology show in September,
1992, the industry could show numerous CAD/CAM innovations
including tools, CAD/CAM models to evaluate manufacturability
in early design phases, and systems that allowed use of the same
data for a full range of manufacturing functions.
Impact
In 1990, CAD/CAM hardware sales by U.S. vendors reached
$2.68 billion. In software alone, $1.42 billion worth of CAD/CAM
products and systems were sold worldwide by U.S. vendors, according
to International Data Corporation figures for 1990. CAD/
CAM systems were in widespread use throughout the industrial
world. Development lagged in advanced software applications,
particularly in image processing, and in the communications software
and hardware that ties processes together.
A reevaluation of CAD/CAM systems was being driven by the
industry trend toward increased functionality of computer-driven
numerically controlled machines. Numerical control (NC) software
enables users to graphically define the geometry of the parts in a
product, develop paths that machine tools will follow, and exchange
data among machines on the shop floor. In 1991, NC configuration
software represented 86 percent of total CAM sales. In 1992,
the market shares of the five largest companies in the CAD/CAM
market were 29 percent for International Business Machines, 17 percent
for Intergraph, 11 percent for Computervision, 9 percent for
Hewlett-Packard, and 6 percent for Mentor Graphics.
General Motors formed a joint venture with Ford and Chrysler to
develop a common computer language in order to make the next
generation of CAD/CAM systems easier to use. The venture was
aimed particularly at problems that posed barriers to speeding up the design of new automobiles. The three car companies all had sophisticated
computer systems that allowed engineers to design
parts on computers and then electronically transmit specifications
to tools that make parts or dies.
CAD/CAM technology was expected to advance on many fronts.
As of the early 1990’s, different CAD/CAM vendors had developed
systems that were often incompatible with one another, making it
difficult to transfer data from one system to another. Large corporations,
such as the major automakers, developed their own interfaces
and network capabilities to allow different systems to communicate.
Major users of CAD/CAM saw consolidation in the industry
through the establishment of standards as being in their interests.
Resellers of CAD/CAM products also attempted to redefine
their markets. These vendors provide technical support and service
to users. The sale of CAD/CAM products and systems offered substantial
opportunities, since demand remained strong. Resellers
worked most effectively with small and medium-sized companies,
which often were neglected by the primary sellers of CAD/CAM
equipment because they did not generate a large volume of business.
Some projections held that by 1995 half of all CAD/CAM systems
would be sold through resellers, at a cost of $10,000 or less for
each system. The CAD/CAM market thus was in the process of dividing
into two markets: large customers (such as aerospace firms
and automobile manufacturers) that would be served by primary
vendors, and small and medium-sized customers that would be serviced
by resellers.
CAD will find future applications in marketing, the construction
industry, production planning, and large-scale projects such as shipbuilding
and aerospace. Other likely CAD markets include hospitals,
the apparel industry, colleges and universities, food product
manufacturers, and equipment manufacturers. As the linkage between
CAD and CAM is enhanced, systems will become more productive.
The geometrical data from CAD will be put to greater use
by CAM systems.
CAD/CAM already had proved that it could make a big difference
in productivity and quality. Customer orders could be changed
much faster and more accurately than in the past, when a change
could require a manual redrafting of a design. Computers could do automatically in minutes what once took hours manually. CAD/
CAM saved time by reducing, and in some cases eliminating, human
error. Many flexible manufacturing systems (FMS) had machining
centers equipped with sensing probes to check the accuracy
of the machining process. These self-checks can be made part of numerical
control (NC) programs. With the technology of the early
1990’s, some experts estimated that CAD/CAM systems were in
many cases twice as productive as the systems they replaced; in the
long run, productivity is likely to improve even more, perhaps up to
three times that of older systems or even higher. As costs for CAD/
CAM systems concurrently fall, the investment in a system will be
recovered more quickly. Some analysts estimated that by the mid-
1990’s, the recovery time for an average system would be about
three years.
Another frontier in the development of CAD/CAM systems is
expert (or knowledge-based) systems, which combine data with a
human expert’s knowledge, expressed in the form of rules that the
computer follows. Such a system will analyze data in a manner
mimicking intelligence. For example, a 3-D model might be created
from standard 2-D drawings. Expert systems will likely play a
pivotal role in CAM applications. For example, an expert system
could determine the best sequence of machining operations to produce
a component.
Continuing improvements in hardware, especially increased
speed, will benefit CAD/CAM systems. Software developments,
however, may produce greater benefits. Wider use of CAD/CAM
systems will depend on the cost savings from improvements in
hardware and software as well as on the productivity of the systems
and the quality of their product. The construction, apparel,
automobile, and aerospace industries have already experienced
increases in productivity, quality, and profitability through the use
of CAD/CAM. A case in point is Boeing, which used CAD from
start to finish in the design of the 757.
Buna rubber
The invention: The first practical synthetic rubber product developed,
Buna inspired the creation of other synthetic substances
that eventually replaced natural rubber in industrial applications.
The people behind the invention:
Charles de la Condamine (1701-1774), a French naturalist
Charles Goodyear (1800-1860), an American inventor
Joseph Priestley (1733-1804), an English chemist
Charles Greville Williams (1829-1910), an English chemist
A New Synthetic Rubber
The discovery of natural rubber is often credited to the French
scientist Charles de la Condamine, who, in 1736, sent the French
Academy of Science samples of an elastic material used by Peruvian
Indians to make balls that bounced. The material was primarily a
curiosity until 1770, when Joseph Priestley, an English chemist, discovered
that it rubbed out pencil marks, after which he called it
“rubber.” Natural rubber, made from the sap of the rubber tree
(Hevea brasiliensis), became important after Charles Goodyear discovered
in 1839 that heating rubber with sulfur (a process called
“vulcanization”) made it more elastic and easier to use. Vulcanized
natural rubber came to be used to make raincoats, rubber bands,
and motor vehicle tires.
Natural rubber is difficult to obtain (making one tire requires
the amount of rubber produced by one tree in two years), and wars
have often cut off supplies of this material to various countries.
Therefore, efforts to manufacture synthetic rubber began in the
late eighteenth century. Those efforts followed the discovery by
English chemist Charles Greville Williams and others in the 1860’s
that natural rubber was composed of thousands of molecules of a
chemical called isoprene that had been joined to form giant, necklace-
like molecules. The first successful synthetic rubber, Buna,
was patented by Germany’s I. G. Farben Industrie in 1926. The success of this rubber led to the development of many other synthetic
rubbers, which are now used in place of natural rubber in many
applications.
From Erasers to Gas Pumps
Natural rubber belongs to the group of chemicals called “polymers.”
A polymer is a giant molecule that is made up of many simpler
chemical units (“monomers”) that are attached chemically to
form long strings. In natural rubber, the monomer is isoprene
(2-methyl-1,3-butadiene). The first efforts to make a synthetic rubber
used the discovery that isoprene could be made and converted
into an elastic polymer. The synthetic rubber that was created from
isoprene was, however, inferior to natural rubber. The first Buna
rubber, which was patented by I. G. Farben in 1926, was better, but it
was still less than ideal. Buna rubber was made by polymerizing the
monomer butadiene in the presence of sodium. The name Buna
comes from the first two letters of the words “butadiene” and “natrium”
(German for sodium). Natural and Buna rubbers are called
homopolymers because they contain only one kind of monomer.
The ability of chemists to make Buna rubber, along with its successful
use, led to experimentation with the addition of other monomers
to isoprene-like chemicals used to make synthetic rubber.
Among the first great successes were materials that contained two
alternating monomers; such materials are called “copolymers.” If
the two monomers are designated A and B, part of a polymer molecule
can be represented as (ABABABABABABABABAB). Numerous
synthetic copolymers, which are often called “elastomers,” now
replace natural rubber in applications where they have superior
properties. All elastomers are rubbers, since objects made from
them both stretch greatly when pulled and return quickly to their
original shape when the tension is released.
Two other well-known rubbers developed by I. G. Farben are the
copolymers called Buna-N and Buna-S. These materials combine butadiene
and the monomers acrylonitrile and styrene, respectively.
Many modern motor vehicle tires are made of synthetic rubber that
differs little from Buna-S rubber. This rubber was developed after
the United States was cut off in the 1940’s, during World War II,
from its Asian source of natural rubber. The solution to this problem
was the development of a synthetic rubber industry based on GR-S
rubber (government rubber plus styrene), which was essentially
Buna-S rubber. This rubber is still widely used.
Buna-S rubber is often made by mixing butadiene and styrene in
huge tanks of soapy water, stirring vigorously, and heating the mixture.
The polymer contains equal amounts of butadiene and styrene
(BSBSBSBSBSBSBSBS). When the molecules of the Buna-S polymer
reach the desired size, the polymerization is stopped and the rubber
is coagulated (solidified) chemically. Then, water and all the unused
starting materials are removed, after which the rubber is dried and
shipped to various plants for use in tires and other products. The
major difference between Buna-S and GR-S rubber is that the method
of making GR-S rubber involves the use of low temperatures.
Buna-N rubber is made in a fashion similar to that used for Buna-
S, using butadiene and acrylonitrile. Both Buna-N and the related
neoprene rubber, invented by Du Pont, are very resistant to gasoline
and other liquid vehicle fuels. For this reason, they can be used in
gas-pump hoses. All synthetic rubbers are vulcanized before they
are used in industry.
Impact
Buna rubber became the basis for the development of the other
modern synthetic rubbers. These rubbers have special properties
that make them suitable for specific applications. One developmental
approach involved the use of chemically modified butadiene in
homopolymers such as neoprene. Made of chloroprene (chlorobutadiene),
neoprene is extremely resistant to sun, air, and chemicals.
It is so widely used in machine parts, shoe soles, and hoses that
more than 400 million pounds are produced annually.
Another developmental approach involved copolymers that alternated
butadiene with other monomers. For example, the successful
Buna-N rubber (butadiene and acrylonitrile) has properties
similar to those of neoprene. It differs sufficiently from neoprene,
however, to be used to make items such as printing press rollers.
About 200 million pounds of Buna-N are produced annually. Some
4 billion pounds of the even more widely used polymer Buna-S/
GR-S are produced annually, most of which is used to make tires.
Several other synthetic rubbers have significant industrial applications,
and efforts to make copolymers for still other purposes continue.
Bullet train
The invention: An ultrafast passenger railroad system capable of
moving passengers at speeds double or triple those of ordinary
trains.
The people behind the invention:
Ikeda Hayato (1899-1965), Japanese prime minister from 1960 to
1964, who pushed for the expansion of public expenditures
Shinji Sogo (1901-1971), the president of the Japanese National
Railways, the “father of the bullet train”
Building a Faster Train
By 1900, Japan had a world-class railway system, a logical result
of the country’s dense population and the needs of its modernizing
economy. After 1907, the government controlled the system
through the Japanese National Railways (JNR). In 1938, JNR engineers
first suggested the idea of a train that would travel 125 miles
per hour from Tokyo to the southern city of Shimonoseki. Construction
of a rapid train began in 1940 but was soon stopped because of
World War II.
The 311-mile railway between Tokyo and Osaka, the Tokaido
Line, has always been the major line in Japan. By 1957, a business express
along the line operated at an average speed of 57 miles per
hour, but the double-track line was rapidly reaching its transport capacity.
The JNR established two investigative committees to explore
alternative solutions. In 1958, the second committee recommended
the construction of a high-speed railroad on a separate double track,
to be completed in time for the Tokyo Olympics of 1964. The Railway
Technical Institute of the JNR concluded that it was feasible to
design a line that would operate at an average speed of about 130
miles per hour, cutting time for travel between Tokyo and Osaka
from six hours to three hours.
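The projected time saving follows directly from the figures above. A quick check of the arithmetic (ignoring station stops, which is why the schedule allowed three hours rather than the raw running time):

```python
# Rough running-time check for the proposed Tokaido Shinkansen,
# using the distance and average speeds quoted in the text.
distance_miles = 311       # Tokyo-Osaka along the Tokaido Line
new_avg_speed_mph = 130    # projected average speed of the new line
old_avg_speed_mph = 57     # 1957 business express average

new_hours = distance_miles / new_avg_speed_mph
old_hours = distance_miles / old_avg_speed_mph
print(f"old: {old_hours:.1f} h, new: {new_hours:.1f} h")
# The raw running time (about 2.4 h) plus intermediate stops is
# consistent with the three-hour schedule, roughly half the old trip.
```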
By 1962, about 17 miles of the proposed line were completed for
test purposes. During the next two years, prototype trains were
tested to correct flaws and make improvements in the design. The entire project was completed on schedule in July, 1964, with total construction
costs of more than $1 billion, double the original estimates.
The Speeding Bullet
Service on the Shinkansen, or New Trunk Line, began on October
1, 1964, ten days before the opening of the Olympic Games.
Commonly called the “bullet train” because of its shape and speed,
the Shinkansen was an instant success with the public, both in Japan
and abroad. As promised, the time required to travel between Tokyo
and Osaka was cut in half. Initially, the system provided daily
services of sixty trains consisting of twelve cars each, but the number
of scheduled trains was almost doubled by the end of the year.
The Shinkansen was able to operate at its unprecedented speed
because it was designed and operated as an integrated system,
making use of countless technological and scientific developments.
Tracks followed the standard gauge of 56.5 inches, rather than the
narrower gauge common in Japan. For extra strength, heavy welded rails were attached directly onto reinforced concrete slabs.
The minimum radius of a curve was 8,200 feet, except where sharper
curves were mandated by topography. In many ways similar to
modern airplanes, the railway cars were made airtight in order to
prevent ear discomfort caused by changes in pressure when trains
enter tunnels.
The Shinkansen trains were powered by electric traction motors,
with four 185-kilowatt motors on each car—one motor attached to
each axle. This design had several advantages: It provided an even
distribution of axle load for reducing strain on the tracks; it allowed
the application of dynamic brakes (where the motor was used for
braking) on all axles; and it prevented the failure of one or two units
from interrupting operation of the entire train. The 25,000-volt electrical
current was carried by trolley wire to the cars, where it was
rectified into a pulsating current to drive the motors.
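Because every axle is motored, the installed power of a full train follows directly from the per-motor figure. A quick sketch using the numbers given above (four 185-kilowatt motors per car, twelve cars per train):

```python
# Installed traction power of an original twelve-car Shinkansen set,
# computed from the per-axle figures given in the text.
motors_per_car = 4   # one motor attached to each axle
motor_kw = 185       # rating of each traction motor, in kilowatts
cars = 12            # the initial trains ran twelve cars

total_kw = motors_per_car * motor_kw * cars
print(total_kw)  # 8880 kW of installed traction power per train
```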
The Shinkansen system established a casualty-free record because
of its maintenance policies combined with its computerized
Centralized Traffic Control system. The control room at Tokyo Station
was designed to maintain timely information about the location
of all trains and the condition of all routes. Although train operators
had some discretion in determining speed, automatic brakes
also operated to ensure a safe distance between trains. At least once
each month, cars were thoroughly inspected; every ten days, an inspection
train examined the conditions of tracks, communication
equipment, and electrical systems.
Impact
Public usage of the Tokyo-Osaka bullet train increased steadily
because of the system’s high speed, comfort, punctuality, and superb
safety record. Businesspeople were especially happy that the
rapid service allowed them to make the round-trip without the necessity
of an overnight stay, and continuing modernization soon allowed
nonstop trains to make a one-way trip in two and one-half
hours, requiring speeds of 160 miles per hour in some stretches. By
the early 1970’s, the line was transporting a daily average of 339,000
passengers in 240 trains, meaning that a train departed from Tokyo
about every ten minutes.
The popularity of the Shinkansen system quickly resulted in demands
for its extension into other densely populated regions. In
1972, a 100-mile stretch between Osaka and Okayama was opened
for service. By 1975, the line was further extended to Hakata on the
island of Kyushu, passing through the Kammon undersea tunnel.
The cost of this 244-mile stretch was almost $2.5 billion. In 1982,
lines were completed from Tokyo to Niigata and from Tokyo to
Morioka. By 1993, the system had grown to 1,134 miles of track.
Since high usage made the system extremely profitable, the sale of
the JNR to private companies in 1987 did not appear to produce adverse
consequences.
The economic success of the Shinkansen had a revolutionary effect
on thinking about the possibilities of modern rail transportation,
leading one authority to conclude that the line acted as “a
savior of the declining railroad industry.” Several other industrial
countries were stimulated to undertake large-scale railway projects;
France, especially, followed Japan’s example by constructing high-speed
electric railroads from Paris to Nice and to Lyon. By the mid-
1980’s, there were experiments with high-speed trains based on
magnetic levitation and other radical innovations, but it was not
clear whether such designs would be able to compete with the
Shinkansen model.
Bubble memory
The invention: An early nonvolatile medium for storing information
on computers.
The person behind the invention:
Andrew H. Bobeck (1926- ), a Bell Telephone Laboratories
scientist
Magnetic Technology
The fanfare over the commercial prospects of magnetic bubbles
was begun on August 8, 1969, by a report appearing in both The New
York Times and The Wall Street Journal. The early 1970’s would see the
anticipation mount (at least in the computer world) with each prediction
of the benefits of this revolution in information storage technology.
Although it was not disclosed to the public until August of 1969,
magnetic bubble technology had held the interest of a small group
of researchers around the world for many years. The organization
that probably can claim the greatest research advances with respect
to computer applications of magnetic bubbles is Bell Telephone
Laboratories (later part of American Telephone and Telegraph). Basic
research into the properties of certain ferrimagnetic materials
started at Bell Laboratories shortly after the end of World War II
(1939-1945).
Ferrimagnetic substances are typically magnetic iron oxides. Research
into the properties of these and related compounds accelerated
after the discovery of ferrimagnetic garnets in 1956 (these are a
class of ferrimagnetic oxide materials that have the crystal structure
of garnet). Ferrimagnetism is similar to ferromagnetism, the phenomenon
that accounts for the strong attraction of one magnetized
body for another. The ferrimagnetic materials most suited for bubble
memories contain, in addition to iron, the element yttrium or a
metal from the rare earth series.
It was a fruitful collaboration between scientist and engineer,
between pure and applied science, that produced this promising breakthrough in data storage technology. In 1966, Bell Laboratories
scientist Andrew H. Bobeck and his coworkers were the first to realize
the data storage potential offered by the strange behavior of thin
slices of magnetic iron oxides under an applied magnetic field. The
first U.S. patent for a memory device using magnetic bubbles was
filed by Bobeck in the fall of 1966 and issued on August 5, 1969.
Bubbles Full of Memories
The three basic functional elements of a computer are the central
processing unit, the input/output unit, and memory. Most implementations
of semiconductor memory require a constant power
source to retain the stored data. If the power is turned off, all stored
data are lost. Memory with this characteristic is called “volatile.”
Disks and tapes, which are typically used for secondary memory,
are “nonvolatile.” Nonvolatile memory relies on the orientation of
magnetic domains, rather than on electrical currents, to retain
stored data.
One can visualize how this works by analogy, using a
group of permanent bar magnets that are labeled with N for north at
one end and S for south at the other. If an arrow is painted starting
from the north end with the tip at the south end on each magnet, an
orientation can then be assigned to a magnetic domain (here one
whole bar magnet). Data are “stored” with these bar magnets by arranging
them in rows, some pointing up, some pointing down. Different
arrangements translate to different data. In the binary world
of the computer, all information is represented by two states. A
stored data item (known as a “bit,” or binary digit) is either on or off,
up or down, true or false, depending on the physical representation.
The “on” state is commonly labeled with the number 1 and the “off”
state with the number 0. This is the principle behind magnetic disk
and tape data storage.
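The bar-magnet analogy above maps directly onto binary data; a minimal sketch follows (the names are illustrative, not taken from any actual bubble-memory interface):

```python
# Represent a row of magnetic domains as bits: an "up" orientation
# stores a 1, a "down" orientation stores a 0, as in the analogy above.
UP, DOWN = "up", "down"

def domains_to_bits(domains):
    """Read a row of domain orientations back as a binary string."""
    return "".join("1" if d == UP else "0" for d in domains)

row = [UP, DOWN, DOWN, UP, UP, DOWN, UP, DOWN]
print(domains_to_bits(row))  # 10011010
```

Different arrangements of the row translate to different stored data, exactly as with the rows of bar magnets described in the text.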
Now imagine a thin slice of a certain type of magnetic material in
the shape of a 3-by-5-inch index card. Under a microscope, using a
special source of light, one can see through this thin slice in many regions
of the surface. Darker, snakelike regions can also be seen, representing
domains of an opposite orientation (polarity) to the transparent
regions. If a weak external magnetic field is then applied by placing a permanent magnet of the same shape as the card on the
underside of the slice, a strange thing happens to the dark serpentine
pattern—the long domains shrink and eventually contract into
“bubbles,” tiny magnetized spots. Viewed from the side of the slice,
the bubbles are cylindrically shaped domains having a polarity opposite
to that of the material on which they rest. The presence or absence
of a bubble indicates either a 0 or a 1 bit. Data bits are stored by
moving the bubbles in the thin film. As long as the field is applied
by the permanent magnet substrate, the data will be retained. The
bubble is thus a nonvolatile medium for data storage.
Consequences
Magnetic bubble memory created quite a stir in 1969 with its
splashy public introduction. Most of the manufacturers of computer
chips immediately instituted bubble memory development projects.
Texas Instruments, Philips, Hitachi, Motorola, Fujitsu, and International
Business Machines (IBM) joined the race with Bell Laboratories
to mass-produce bubble memory chips. Texas Instruments
became the first major chip manufacturer to mass-produce bubble
memories in the mid-to-late 1970’s. By 1990, however, almost all the
research into magnetic bubble technology had shifted to Japan.
Hitachi and Fujitsu began to invest heavily in this area.
Mass production proved to be the most difficult task. Although
the materials it uses are different, the process of producing magnetic
bubble memory chips is similar to the process applied in producing
semiconductor-based chips such as those used for random access
memory (RAM). It is for this reason that major semiconductor manufacturers
and computer companies initially invested in this technology.
Lower fabrication yields and reliability issues plagued
early production runs, however, and, although these problems
have mostly been solved, gains in the performance characteristics of
competing conventional memories have limited the impact that
magnetic bubble technology has had on the marketplace. The materials
used for magnetic bubble memories are costlier and possess
more complicated structures than those used for semiconductor or
disk memory.
Speed and cost of materials are not the only bases for comparison. It is possible to perform some elementary logic with magnetic
bubbles. Conventional semiconductor-based memory offers storage
only. The capability of performing logic with magnetic bubbles
puts bubble technology far ahead of other magnetic technologies
with respect to functional versatility.
A small niche market for bubble memory developed in the 1980’s.
Magnetic bubble memory can be found in intelligent terminals, desktop
computers, embedded systems, test equipment, and similar microcomputer-
based systems.
Brownie camera
The invention: The first inexpensive and easy-to-use camera available
to the general public, the Brownie revolutionized photography
by making it possible for every person to become a photographer.
The people behind the invention:
George Eastman (1854-1932), founder of the Eastman Kodak
Company
Frank A. Brownell, a camera maker for the Kodak Company
who designed the Brownie
Henry M. Reichenbach, a chemist who worked with Eastman to
develop flexible film
William H. Walker, a Rochester camera manufacturer who
collaborated with Eastman
A New Way to Take Pictures
In early February of 1900, the first shipments of a new small box
camera called the Brownie reached Kodak dealers in the United
States and England. George Eastman, eager to put photography
within the reach of everyone, had directed Frank Brownell to design
a small camera that could be manufactured inexpensively but that
would still take good photographs.
Advertisements for the Brownie proclaimed that everyone—
even children—could take good pictures with the camera. The
Brownie was aimed directly at the children’s market, a fact indicated
by its box, which was decorated with drawings of imaginary
elves called “Brownies” created by the Canadian illustrator Palmer
Cox. Moreover, the camera cost only one dollar.
The Brownie was made of jute board and wood, with a hinged
back fastened by a sliding catch. It had an inexpensive two-piece
glass lens and a simple rotary shutter that allowed both timed and
instantaneous exposures to be made. With a lens aperture of approximately
f/14 and a shutter speed of approximately 1/50 of a second,
the Brownie was certainly capable of taking acceptable snapshots. It had no viewfinder; however, an optional clip-on reflecting
viewfinder was available. The camera came loaded with a six-exposure
roll of Kodak film that produced square negatives 2.5 inches on
a side. This film could be developed, printed, and mounted for forty
cents, and a new roll could be purchased for fifteen cents.
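The fixed aperture and shutter speed quoted above pin the Brownie to a single exposure setting. A quick calculation using the standard exposure-value formula (the formula is common photographic practice, not from the source) shows why the camera suited bright outdoor snapshots:

```python
import math

# Exposure value implied by the Brownie's fixed settings:
# EV = log2(N^2 / t), with aperture N = f/14 and shutter time t = 1/50 s.
aperture_n = 14
shutter_s = 1 / 50

ev = math.log2(aperture_n**2 / shutter_s)
print(round(ev, 1))  # about 13.3 -- an exposure suited to bright daylight
```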
George Eastman’s first career choice had been banking, but when
he failed to receive a promotion he thought he deserved, he decided
to devote himself to his hobby, photography. Having worked with a
rigorous wet-plate process, he knew why there were few amateur
photographers at the time—the whole process, from plate preparation
to printing, was too expensive and too much trouble. Even so,
he had already begun to think about the commercial possibilities of
photography; after reading of British experiments with dry-plate
technology, he set up a small chemical laboratory and came up with
a process of his own. The Eastman Dry Plate Company became one
of the most successful producers of gelatin dry plates.
Dry-plate photography had attracted more amateurs, but it was
still a complicated and expensive hobby. Eastman realized that the
number of photographers would have to increase considerably if
the market for cameras and supplies were to have any potential. In
the early 1880’s, Eastman first formulated the policies that would
make the Eastman Kodak Company so successful in years to come:
mass production, low prices, foreign and domestic distribution, and
selling through extensive advertising and by demonstration.
In his efforts to expand the amateur market, Eastman first tackled
the problem of the glass-plate negative, which was heavy, fragile,
and expensive to make. By 1884, his experiments with paper
negatives had been successful enough that he changed the name of
his company to The Eastman Dry Plate and Film Company. Since
flexible roll film needed some sort of device to hold it steady in the
camera’s focal plane, Eastman collaborated with William Walker
to develop the Eastman-Walker roll-holder. Eastman’s pioneering
manufacture and use of roll films led to the appearance on the market
in the 1880’s of a wide array of hand cameras from a number of
different companies. Such cameras were called “detective cameras”
because they were small and could be used surreptitiously. The
most famous of these, introduced by Eastman in 1888, was named
the “Kodak”—a word he coined to be terse, distinctive, and easily pronounced in any language. This camera’s simplicity of operation
was appealing to the general public and stimulated the growth of
amateur photography.
The Camera
The Kodak was a box about seven inches long and four inches
wide, with a one-speed shutter and a fixed-focus lens that produced
reasonably sharp pictures. It came loaded with enough roll film to
make one hundred exposures. The camera’s initial price of twenty-five
dollars included the cost of processing the first roll of film; the
camera also came with a leather case and strap. After the film was
exposed, the camera was mailed, unopened, to the company’s plant
in Rochester, New York, where the developing and printing were
done. For an additional ten dollars, the camera was reloaded and
sent back to the customer.
The Kodak was advertised in mass-market publications, rather
than in specialized photographic journals, with the slogan: “You
press the button, we do the rest.” With his introduction of a camera
that was easy to use and a service that eliminated the need to know
anything about processing negatives, Eastman revolutionized the
photographic market. Thousands of people no longer depended
upon professional photographers for their portraits but instead
learned to make their own. In 1892, the Eastman Dry Plate and Film
Company became the Eastman Kodak Company, and by the mid-
1890’s, one hundred thousand Kodak cameras had been manufactured
and sold, half of them in Europe by Kodak Limited.
Having popularized photography with the first Kodak, in 1900
Eastman turned his attention to the children’s market with the introduction
of the Brownie. The first five thousand cameras sent to
dealers were sold immediately; by the end of the following year, almost
a quarter of a million had been sold. The Kodak Company organized
Brownie camera clubs and held competitions specifically
for young photographers. The Brownie came with an instruction
booklet that gave children simple directions for taking successful
pictures, and “The Brownie Boy,” an appealing youngster who
loved photography, became a standard feature of Kodak’s advertisements.
Impact
Eastman followed the success of the first Brownie by introducing
several additional models between 1901 and 1917. Each was a more
elaborate version of the original. These Brownie box cameras were
on the market until the early 1930’s, and their success inspired other
companies to manufacture box cameras of their own. In 1906, the
Ansco company produced the Buster Brown camera in three sizes
that corresponded to Kodak’s Brownie camera range; in 1910 and
1914, Ansco made three more versions. The Seneca company’s
Scout box camera, in three sizes, appeared in 1913, and Sears Roebuck’s
Kewpie cameras, in five sizes, were sold beginning in 1916.
In England, the Houghtons company introduced its first Scout camera
in 1901, followed by another series of four box cameras in 1910
sold under the Ensign trademark. Other English manufacturers of
box cameras included the James Sinclair company, with its Traveller
Una of 1909, and the Thornton-Pickard company, with a Filma camera
marketed in four sizes in 1912.
After World War I ended, several series of box cameras were
manufactured in Germany by companies that had formerly concentrated
on more advanced and expensive cameras. The success of
box cameras in other countries, led by Kodak’s Brownie, undoubtedly
prompted this trend in the German photographic industry. The
Ernemann Film K series of cameras in three sizes, introduced in
1919, and the all-metal Trapp Little Wonder of 1922 are examples of
popular German box cameras.
In the early 1920’s, camera manufacturers began making box-camera
bodies from metal rather than from wood and cardboard.
Machine-formed metal was less expensive than the traditional hand-worked
materials. In 1924, Kodak’s two most popular Brownie sizes
appeared with aluminum bodies.
In 1928, Kodak Limited of England added two important new
features to the Brownie—a built-in portrait lens, which could be
brought in front of the taking lens by pressing a lever, and camera
bodies in a range of seven different fashion colors. The Beau
Brownie cameras, made in 1930, were the most popular of all the
colored box cameras. The work of Walter Dorwin Teague, a leading
American designer, these cameras had an Art Deco geometric pattern on the front panel, which was enameled in a color matching the
leatherette covering of the camera body. Several other companies,
including Ansco, again followed Kodak’s lead and introduced their
own lines of colored cameras.
In the 1930’s, several new box cameras with interesting features appeared,
many manufactured by leading film companies. In France, the
Lumiere Company advertised a series of box cameras—the Luxbox,
Scoutbox, and Lumibox—that ranged from a basic camera to one with
an adjustable lens and shutter. In 1933, the German Agfa company restyled
its entire range of box cameras, and in 1939, the Italian Ferrania
company entered the market with box cameras in two sizes. In 1932,
Kodak redesigned its Brownie series to take the new 620 roll film,
which it had just introduced. This film and the new Six-20 Brownies inspired
other companies to experiment with variations of their own;
some box cameras, such as the Certo Double-box, the Coronet Every
Distance, and the Ensign E-20 cameras, offered a choice of two picture
formats.
Another new trend was a move toward smaller-format cameras
using standard 127 roll film. In 1934, Kodak marketed the small
Baby Brownie. Designed by Teague and made from molded black
plastic, this little camera with a folding viewfinder sold for only one
dollar—the price of the original Brownie in 1900.
The Baby Brownie, the first Kodak camera made of molded plastic,
heralded the move to the use of plastic in camera manufacture.
Soon many others, such as the Altissa series of box cameras and the
Voigtlander Brilliant V/6 camera, were being made from this new material.
Later Trends
By the late 1930’s, flashbulbs had replaced flash powder for taking
pictures in low light; again, the Eastman Kodak Company led
the way in introducing this new technology as a feature on the inexpensive
box camera. The Falcon Press-Flash, marketed in 1939, was
the first mass-produced camera to have flash synchronization and
was followed the next year by the Six-20 Flash Brownie, which had a
detachable flash gun. In the early 1940’s, other companies, such as
Agfa-Ansco, introduced this feature on their own box cameras.
In the years after World War II, the box camera evolved into an
eye-level camera, making it more convenient to carry and use.
Many amateur photographers, however, still had trouble handling paper-backed roll film and were taking their cameras back to dealers
to be unloaded and reloaded. Kodak therefore developed a new
system of film loading, using the Kodapak cartridge, which could
be mass-produced with a high degree of accuracy by precision plastic-
molding techniques. To load the camera, the user simply opened
the camera back and inserted the cartridge. This new film was introduced
in 1963, along with a series of Instamatic cameras designed
for its use. Both were immediately successful.
The popularity of the film cartridge ended the long history of the
simple and inexpensive roll film camera. The last English Brownie
was made in 1967, and the series of Brownies made in the United
States was discontinued in 1970. Eastman’s original marketing strategy
of simplifying photography in order to increase the demand for
cameras and film continued, however, with the public’s acceptance
of cartridge-loading cameras such as the Instamatic.
From the beginning, Eastman had recognized that there were
two kinds of photographers other than professionals. The first, he
declared, were the true amateurs who devoted time enough to acquire
skill in the complex processing procedures of the day. The second
were those who merely wanted personal pictures or memorabilia
of their everyday lives, families, and travels. The second class,
he observed, outnumbered the first by almost ten to one. Thus, it
was to this second kind of amateur photographer that Eastman had
appealed, both with his first cameras and with his advertising slogan,
“You press the button, we do the rest.” Eastman had done
much more than simply invent cameras and films; he had invented
a system and then developed the means for supporting that system.
This is essentially what the Eastman Kodak Company continued to
accomplish with the series of Instamatics and other descendants of
the original Brownie. In the decade between 1963 and 1973, for example,
approximately sixty million Instamatics were sold throughout
the world.
The research, manufacturing, and marketing activities of the
Eastman Kodak Company have been so complex and varied that no
one would suggest that the company’s prosperity rests solely on the
success of its line of inexpensive cameras and cartridge films, although
these have continued to be important to the company. Like
Kodak, however, most large companies in the photographic industry have expanded their research to satisfy the ever-growing demand
from amateurs. The amateurism that George Eastman recognized
and encouraged at the beginning of the twentieth century
thus still flourished at its end.
Broadcaster guitar
The invention: The first commercially manufactured solid-body
electric guitar, the Broadcaster revolutionized the guitar industry
and changed the face of popular music.
The people behind the invention:
Leo Fender (1909-1991), designer of affordable and easily massproduced
solid-body electric guitars
Les Paul (Lester William Polfuss, 1915- ), a legendary
guitarist and designer of solid-body electric guitars
Charlie Christian (1919-1942), an influential electric jazz
guitarist of the 1930’s
Early Electric Guitars
It has been estimated that between 1931 and 1937, approximately
twenty-seven hundred electric guitars and amplifiers were sold in
the United States. The Electro String Instrument Company, run
by Adolph Rickenbacker and his designer partners, George Beauchamp
and Paul Barth, produced two of the first commercially manufactured
electric guitars—the Rickenbacker A-22 and A-25—in
1931. The Rickenbacker models were what are known as “lap steel”
or Hawaiian guitars. A Hawaiian guitar is played with the instrument
lying flat across a guitarist’s knees. By the mid-1930’s, the Gibson
company had introduced an electric Spanish guitar, the ES-150.
Legendary jazz guitarist Charlie Christian made this model famous
while playing for Benny Goodman’s orchestra. Christian was the
first electric guitarist to be heard by a large American audience.
He became an inspiration for future electric guitarists, because he
proved that the electric guitar could have its own unique solo
sound. Along with Christian, the other electric guitar figures who
put the instrument on the musical map were blues guitarist T-Bone
Walker, guitarist and inventor Les Paul, and engineer and inventor
Leo Fender.
Early electric guitars were really no more than acoustic guitars,
with the addition of one or more pickups, which convert string vibrations to electrical signals that can be played through a speaker.
Amplification of a guitar made it a more assertive musical instrument.
The electrification of the guitar ultimately would make it
more flexible, giving it a more prominent role in popular music. Les
Paul, always a compulsive inventor, began experimenting with
ways of producing an electric solid-body guitar in the late 1930’s. In
1929, at the age of thirteen, he had amplified his first acoustic guitar.
Another influential inventor of the 1940’s was Paul Bigsby. He built
a prototype solid-body guitar for country music star Merle Travis in
1947. It was Leo Fender who revolutionized the electric guitar industry
by producing the first commercially viable solid-body electric
guitar, the Broadcaster, in 1948.
Leo Fender
Leo Fender was born in the Anaheim, California, area in 1909. As
a teenager, he began to build and repair guitars. By the 1930’s,
Fender was building and renting out public address systems for
group gatherings. In 1937, after short tenures of employment with
the Division of Highways and the U.S. Tire Company, he opened a
radio repair company in Fullerton, California. Always looking to
expand and invent new and exciting electrical gadgets, Fender and
Clayton Orr “Doc” Kauffman started the K & F Company in 1944.
Kauffman was a musician and a former employee of the Electro
String Instrument Company. The K & F Company lasted until 1946
and produced steel guitars and amplifiers. After that partnership
ended, Fender founded the Fender Electric Instruments Company.
With the help of George Fullerton, who joined the company in
1948, Fender developed the Fender Broadcaster. The body of the
Broadcaster was made of a solid plank of ash wood. The corners of
the ash body were rounded. There was a cutaway located under the
joint with the solid maple neck, making it easier for the guitarist to
access the higher frets. The maple neck was bolted to the body of the
guitar, which was unusual, since most guitar necks prior to the
Broadcaster had been glued to the body. Frets were positioned directly
into designed cuts made in the maple of the neck. The guitar
had two pickups.
The Fender Electric Instruments Company made fewer than one thousand Broadcasters. In 1950, the name of the guitar was changed
from the Broadcaster to the Telecaster, as the Gretsch company had
already registered the name Broadcaster for some of its drums and
banjos. Fender decided not to fight in court over use of the name.
Leo Fender has been called the Henry Ford of the solid-body
electric guitar, and the Telecaster became known as the Model T of
the industry. The early Telecasters sold for $189.50. Besides being inexpensive,
the Telecaster was a very durable instrument. Basically,
the Telecaster was a continuation of the Broadcaster. Fender did not
file for a patent on its unique bridge pickup until January 13, 1950,
and he did not file for a patent on the Telecaster’s unique body
shape until April 3, 1951.
In the music industry during the late 1940’s, it was important for
a company to unveil new instruments at trade shows. At this time,
there was only one important trade show, sponsored by the National
Association of Music Merchants. The Broadcaster was first unveiled
at the 1948 trade show in Chicago. The industry had never seen
anything like it: this new guitar existed only to be amplified; it was
not merely an acoustic guitar that had been converted.
Impact
The Telecaster, as it would be called after 1950, remained in continuous
production for more years than any other guitar of its type
and was one of the industry’s best sellers. From the beginning, it
looked and sounded unique. The electrified acoustic guitars had a
mellow woody tone, whereas the Telecaster had a clean twangy
tone. This tone made it popular with country and blues guitarists.
The Telecaster could also be played at higher volume than previous
electric guitars.
Because Leo Fender attempted something revolutionary by introducing
an electric solid-body guitar, there was no guarantee that
his business venture would succeed. Fender Electric Instruments
Company had fifteen employees in 1947. At times, during the early
years of the company, it looked as though Fender’s dreams would
not come to fruition, but the company persevered and grew. Between
1948 and 1955, as its workforce grew, the company produced ten thousand Broadcaster/Telecaster guitars.
Fender had taken a big risk, but it paid off enormously. Between
1958 and the mid-1970’s, Fender produced more than 250,000 Telecasters.
Other guitar manufacturers were placed in a position of
having to catch up. Fender had succeeded in developing a process
by which electric solid-body guitars could be manufactured profitably
on a large scale.
Early Guitar Pickups
The first pickups used on a guitar can be traced back to the 1920’s
and the efforts of Lloyd Loar, but the American public at first showed
little interest in an amplified guitar. The public
did not become intrigued until the 1930’s. Charlie Christian’s
electric guitar performances with Benny Goodman woke up the
public to the potential of this new and exciting sound. It was not until
the 1950’s, though, that the electric guitar became firmly established.
Leo Fender was the right man in the right place. He could not
have known that his Fender guitars would help to usher in a whole
new musical landscape. Since the electric guitar was the newest
member of the family of guitars, it took some time for musical audiences
to fully appreciate what it could do. The electric solid-body
guitar has been called a dangerous, uncivilized instrument. The
youth culture of the 1950’s found in this new guitar a voice for their
rebellion. Fender unleashed a revolution not only in the construction
of a guitar but also in the way popular music would be approached
henceforth.
Because of the ever-increasing demand for the Fender product,
Fender Sales was established as a separate distribution company in
1953 by Don Randall. Fender Electric Instruments Company had fifteen
employees in 1947, but by 1955, the company employed fifty
people. By 1960, the number of employees had risen to more than
one hundred. Before Leo Fender sold the company to CBS on January
4, 1965, for $13 million, the company occupied twenty-seven
buildings and employed more than five hundred workers.
Always interested in finding new ways to design a more nearly
perfect guitar, Leo Fender produced another remarkable instrument in
1954: the Stratocaster. There was talk in the guitar industry that
Fender had gone too far with the introduction of the Stratocaster, but
it became a huge success because of its versatility. It was the first commercial
solid-body electric guitar to have three pickups and a vibrato
bar. It was also easier to play than the Telecaster because of its double
cutaway, contoured body, and scooped back. The Stratocaster sold
for $249.50. Since its introduction, the Stratocaster has undergone
some minor changes, but Fender and his staff basically got it right the
first time.
The Gibson company entered the solid-body market in 1952 with
the unveiling of the “Les Paul” model. After the Telecaster, the Les
Paul guitar was the next significant solid-body to be introduced. Les
Paul was a legendary guitarist who also had been experimenting
with electric guitar designs for many years. The Gibson designers
came up with a striking model that produced a thick rounded tone.
Over the years, the Les Paul model has won a loyal following.
The Precision Bass
In 1951, Leo Fender introduced another revolutionary guitar, the
Precision bass. At a cost of $195.50, the first electric bass would go on
to dominate the market. The Fender company has manufactured numerous
guitar models over the years, but the three that stand above
all others in the field are the Telecaster, the Precision bass, and the
Stratocaster. The Telecaster is considered to be more of a workhorse,
whereas the Stratocaster is thought of as the thoroughbred of electric
guitars.
With a styling that had been copied from the Telecaster, the Precision
freed musicians from bulky oversized acoustic basses, which
were prone to feedback. The name Precision had meaning. Fender’s
electric bass made it possible, with its frets, for the precise playing of
notes; many acoustic basses were fretless. The original Precision bass
model was manufactured from 1951 to 1954. The next version lasted
from 1954 until June of 1957. The Precision bass that went into production
in June, 1957, with its split humbucking pickup, continued to
be the standard electric bass on the market into the 1990’s.
By 1964, the Fender Electric Instruments Company had grown
enormously. In addition to Leo Fender, a number of crucial people
worked for the organization, including George Fullerton and Don Randall. Fred Tavares joined the company’s research and development
team in 1953. In May, 1954, Forrest White became Fender’s
plant manager. All these individuals played vital roles in the success
of Fender, but the driving force behind the scene was always
Leo Fender. As Fender’s health deteriorated, Randall commenced
negotiations with CBS to sell the Fender company. In January, 1965,
CBS bought Fender for $13 million. Eventually, Leo Fender regained
his health, and he was hired as a technical adviser by CBS/Fender.
He continued in this capacity until 1970. He remained determined
to create more guitar designs of note. Although he never again produced
anything that could equal his previous success, he never
stopped trying to attain a new perfection of guitar design.
Fender died on March 21, 1991, in Fullerton, California. He had
suffered for years from Parkinson’s disease, and he died of complications
from the disease. He is remembered for his Broadcaster/
Telecaster, Precision bass, and Stratocaster, which revolutionized
popular music. Because the Fender company was able to mass produce
these and other solid-body electric guitars, new styles of music
that relied on the sound made by an electric guitar exploded onto
the scene. The electric guitar manufacturing business grew rapidly
after Fender introduced mass production. Besides American companies,
there are guitar companies that have flourished in Europe
and Japan.
The marriage between rock music and solid-body electric guitars
was initiated by the Fender guitars. The Telecaster, Precision bass,
and Stratocaster became synonymous with the explosive character
of rock and roll music. The multi-billion-dollar music business can
point to Fender as the pragmatic visionary who put the solid-body
electric guitar into the forefront of the musical scene. His innovative
guitars have been used by some of the most important guitarists of
the rock era, including Jimi Hendrix, Eric Clapton, and Jeff Beck.
More important, Fender guitars have remained bestsellers with
the public worldwide. Amateur musicians purchased them by the
thousands for their own entertainment. Owning and playing a
Fender guitar, or one of the other electric guitars that followed, allowed
these amateurs to feel closer to their musician idols. A large
market for sheet music from popular artists also developed.
In 1992, Fender was inducted into the Rock and Roll Hall of Fame. He is one of the few non-musicians ever to be inducted. The
sound of an electric guitar is the sound of exuberance, and since the
Broadcaster was first unveiled in 1948, that sound has grown to be
pervasive and enormously profitable.
Breeder reactor
The invention: A plant that generates electricity from nuclear fission
while creating new fuel.
The person behind the invention:
Walter Henry Zinn (1906-2000), the first director of the Argonne
National Laboratory
Producing Electricity with More Fuel
The discovery of nuclear fission involved both the discovery that
the nucleus of a uranium atom would split into two lighter elements
when struck by a neutron and the observation that additional neutrons,
along with a significant amount of energy, were released at
the same time. These neutrons might strike other atoms and cause
them to fission (split) also. That, in turn, would release more energy
and more neutrons, triggering a chain reaction as the process continued
to repeat itself, yielding a continuing supply of heat.
Besides the possibility that an explosive weapon could be constructed,
early speculation about nuclear fission included its use in
the generation of electricity. World War II (1939-1945) meant that
the explosive weapon would be developed first.
Both the weapons technology and the basic physics for the electrical
reactor had their beginnings in Chicago with the world’s first nuclear
chain reaction. The first self-sustaining nuclear chain reaction occurred
in a laboratory at the University of Chicago on December 2, 1942.
It also became apparent at that time that there was more than one
way to build a bomb. At this point, two paths were taken: One was
to build an atomic bomb with enough fissionable uranium in it to
explode when detonated, and another was to generate fissionable
plutonium and build a bomb. Energy was released in both methods,
but the second method also produced another fissionable substance.
The observation that plutonium and energy could be produced together
meant that it would be possible to design electric power systems
that would produce fissionable plutonium in quantities as large
as, or larger than, the amount of fissionable material consumed. This is the breeder concept, the idea that while using up fissionable uranium
235, another fissionable element can be made. The full development
of this concept for electric power was delayed until the end of
World War II.
Electricity from Atomic Energy
On August 1, 1946, the Atomic Energy Commission (AEC) was
established to control the development and explore the peaceful
uses of nuclear energy. The Argonne National Laboratory was assigned
the major responsibilities for pioneering breeder reactor
technologies. Walter Henry Zinn was the laboratory’s first director.
He led a team that planned a modest facility (Experimental Breeder
Reactor I, or EBR-I) for testing the validity of the breeding principle.
Planning for this had begun in late 1944 and grew as a natural extension
of the physics that developed the plutonium atomic bomb.
The conceptual design details for a breeder-electric reactor were
reasonably complete by late 1945. On March 1, 1949, the AEC announced
the selection of a site in Idaho for the National Reactor Station
(later to be named the Idaho National Engineering Laboratory,
or INEL). Construction at the INEL site in Arco, Idaho, began in October,
1949. Critical mass was reached in August, 1951. (“Critical
mass” is the amount and concentration of fissionable material required
to produce a self-sustaining chain reaction.)
The system was brought to full operating power, 1.1 megawatts
of thermal power, on December 19, 1951. The next day, December
20, at 11:00 a.m., steam was directed to a turbine generator. At 1:23
p.m., the generator was connected to the electrical grid at the site,
and “electricity flowed from atomic energy,” in the words of Zinn’s
console log of that day. Approximately 200 kilowatts of electric
power were generated most of the time that the reactor was run.
This was enough to satisfy the needs of the EBR-I facilities. The reactor
was shut down in 1964 after five years of use primarily as a test
facility. It had also produced the first pure plutonium.
With the first fuel loading, a conversion ratio of 1.01 was achieved,
meaning that more new fuel was generated than was consumed by
about 1 percent. When later fuel loadings were made with plutonium,
the conversion ratios were more favorable, reaching as high as 1.27. EBR-I was the first reactor to generate its own fuel and the
first power reactor to use plutonium for fuel.
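The conversion ratio the text describes is simply new fissile atoms produced per fissile atom consumed. A minimal sketch of the arithmetic (the atom counts below are hypothetical; only the resulting ratios, 1.01 and 1.27, come from the EBR-I results described above):

```python
def conversion_ratio(fissile_produced: float, fissile_consumed: float) -> float:
    """Breeding (conversion) ratio: new fissile atoms produced per
    fissile atom consumed. A ratio above 1.0 means the reactor
    breeds more fuel than it burns."""
    return fissile_produced / fissile_consumed

# Illustrative atom counts, chosen to reproduce the reported ratios.
print(conversion_ratio(101, 100))  # first uranium loading: 1.01
print(conversion_ratio(127, 100))  # best plutonium loading: 1.27
```

Any ratio above 1.0, however slight, means the reactor ends a fuel cycle with more fissionable material than it started with.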
The use of EBR-I also included pioneering work on fuel recovery
and reprocessing. During its five-year lifetime, EBR-I operated with
four different fuel loadings, each designed to establish specific
benchmarks of breeder technology. This reactor was seen as the first
in a series of increasingly large reactors in a program designed to
develop breeder technology. The reactor was replaced by EBR-II,
which had been proposed in 1953 and was constructed from 1955 to
1964. EBR-II was capable of producing 20 megawatts of electrical
power. It was approximately fifty times more powerful than EBR-I
but still small compared to light-water commercial reactors of 600 to
1,100 megawatts in use toward the end of the twentieth century.
Consequences
The potential for peaceful uses of nuclear fission was dramatized
with the start-up of EBR-I in 1951: It was the first in the world
to produce electricity, while also being the pioneer in a breeder reactor
program. The breeder program was not the only reactor program
being developed, however, and it eventually gave way to the
light-water reactor design for use in the United States. Still, if energy
resources fall into short supply, it is likely that the technologies first
developed with EBR-I will find new importance. In France and Japan,
commercial reactors make use of breeder reactor technology;
these reactors require extensive fuel reprocessing.
Following the completion of tests with plutonium loading in 1964,
EBR-I was shut down and placed in standby status. In 1966, it was declared
a national historical landmark under the stewardship of the
U.S. Department of the Interior. The facility was opened to the public
in June, 1975.
Blood transfusion
The invention: A technique that greatly enhanced surgery patients’
chances of survival by replenishing the blood they lose in
surgery with a fresh supply.
The people behind the invention:
Charles Drew (1904-1950), American pioneer in blood
transfusion techniques
George Washington Crile (1864-1943), an American surgeon,
author, and brigadier general in the U.S. Army Medical
Officers’ Reserve Corps
Alexis Carrel (1873-1944), a French surgeon
Samuel Jason Mixter (1855-1923), an American surgeon
Nourishing Blood Transfusions
It is impossible to say when and where the idea of blood transfusion
first originated, although descriptions of this procedure are
found in ancient Egyptian and Greek writings. The earliest documented
case of a blood transfusion is that of Pope Innocent VIII. In
April, 1492, the pope, who was gravely ill, was transfused with the
blood of three young boys. All three boys died, and the pope gained
no relief.
In the centuries that followed, there were occasional descriptions
of blood transfusions, but it was not until the middle of the seventeenth
century that the technique gained popularity following the
English physician and anatomist William Harvey’s discovery of the
circulation of the blood in 1628. In the medical thought of those
times, blood transfusion was considered to have a nourishing effect
on the recipient. In many of those experiments, the human recipient
received animal blood, usually from a lamb or a calf. Blood transfusion
was tried as a cure for many different diseases, mainly those
that caused hemorrhages, as well as for other medical problems and
even for marital problems.
Blood transfusions were a dangerous procedure, causing many
deaths of both donor and recipient as a result of excessive blood loss, infection, passage of blood clots into the circulatory systems of
the recipients, passage of air into the blood vessels (air embolism),
and transfusion reaction as a result of incompatible blood types. In
the mid-nineteenth century, blood transfusions from animals to humans
stopped after it was discovered that the serum of one species
agglutinates and dissolves the blood cells of other species. A sharp
drop in the use of blood transfusion came with the introduction of
physiologic salt solution in 1875. Infusion of salt solution was simple
and was safer than blood transfusion.
Direct-Connection Blood Transfusions
In 1898, when George Washington Crile began his work on blood
transfusions, the major obstacle he faced was solving the problem of
blood clotting during transfusions. He realized that salt solutions
were not helpful in severe cases of blood loss, when there is a need to
restore the patient to consciousness, steady the heart action, and raise
the blood pressure. At that time, he was experimenting with indirect
blood transfusions by drawing the blood of the donor into a vessel,
then transferring it into the recipient’s vein by tube, funnel, and cannula,
the same technique used in the infusion of saline solution.
The solution to the problem of blood clotting came in 1902 when
Alexis Carrel developed the technique of surgically joining blood
vessels without exposing the blood to air or germs, either of which
can lead to clotting. Crile learned this technique from Carrel and
used it to join the peripheral artery in the donor to a peripheral vein
of the recipient. Since the transfused blood remained sealed in the
inner lining of the vessels, blood clotting did not occur.
The first human blood transfusion of this type was performed by
Crile in December, 1905. The patient, a thirty-five-year-old woman,
was transfused by her husband but died a few hours after the procedure.
The second transfusion, and the first successful one, was performed on
August 8, 1906. The patient, a twenty-three-year-old male, suffered
from severe hemorrhaging following surgery to remove kidney
stones. After all attempts to stop the bleeding were exhausted with
no results, and the patient was dangerously weak, transfusion was
considered as a last resort. One of the patient’s brothers was the donor,
and the patient improved markedly. A few days later, another transfusion was done. This time, too, he
showed remarkable improvement, which continued until his complete
recovery.
For his first transfusions, Crile used the Carrel suture method,
which required using very fine needles and thread. It was a very
delicate and time-consuming procedure. At the suggestion of Samuel
Jason Mixter, Crile developed a new method using a short tubal
device with an attached handle to connect the blood vessels. By this
method, 3 or 4 centimeters of the vessels to be connected were surgically
exposed, clamped, and cut, just as under the previous method.
Yet, instead of suturing of the blood vessels, the recipient’s vein was
passed through the tube and then cuffed back over the tube and tied
to it. Then the donor’s artery was slipped over the cuff. The clamps
were opened, and blood was allowed to flow from the donor to the
recipient. In order to accommodate different-sized blood vessels,
tubes of four different sizes were made, ranging in diameter from
1.5 to 3 millimeters.
Impact
Crile’s method was the preferred method of blood transfusion
for a number of years. Following the publication of his book on
transfusion, a number of modifications to the original method were
published in medical journals. In 1913, Edward Lindeman developed
a method of transfusing blood simply by inserting a needle
through the patient’s skin and into a surface vein, making it for the
first time a nonsurgical method. This method allowed one to measure
the exact quantity of blood transfused. It also allowed the donor
to serve in multiple transfusions. This development opened the
field of transfusions to all physicians. Lindeman’s needle and syringe
method also eliminated another major drawback of direct
blood transfusion: the need to have both donor and recipient right
next to each other.
Birth control pill
The invention: An orally administered drug that inhibits ovulation
in women, thereby greatly reducing the chance of pregnancy.
The people behind the invention:
Gregory Pincus (1903-1967), an American biologist
Min-Chueh Chang (1908-1991), a Chinese-born reproductive
biologist
John Rock (1890-1984), an American gynecologist
Celso-Ramon Garcia (1921- ), a physician
Edris Rice-Wray (1904- ), a physician
Katherine Dexter McCormick (1875-1967), an American
millionaire
Margaret Sanger (1879-1966), an American activist
An Ardent Crusader
Margaret Sanger was an ardent crusader for birth control and
family planning. Having decided that a foolproof contraceptive was
necessary, Sanger met with her friend, the wealthy socialite Katherine
Dexter McCormick. A 1904 graduate in biology from the Massachusetts
Institute of Technology, McCormick had the knowledge
and the vision to invest in biological research. Sanger arranged a
meeting between McCormick and Gregory Pincus, head of the
Worcester Foundation for Experimental Biology. After listening to Sanger’s
pleas for an effective contraceptive and McCormick’s offer of financial
backing, Pincus agreed to focus his energies on finding a pill
that would prevent pregnancy.
Pincus organized a team to conduct research on both laboratory
animals and humans. The laboratory studies were conducted under
the direction of Min-Chueh Chang, a Chinese-born scientist who
had been studying sperm biology, artificial insemination, and in vitro
fertilization. The goal of his research was to see whether pregnancy
might be prevented by manipulation of the hormones usually
found in a woman.
It was already known that there was one time when a woman
could not become pregnant—when she was already pregnant. In
1921, Ludwig Haberlandt, an Austrian physiologist, had transplanted
the ovaries from a pregnant rabbit into a nonpregnant one.
The latter failed to produce ripe eggs, showing that some substance
from the ovaries of a pregnant female prevents ovulation. This substance
was later identified as the hormone progesterone by George
W. Corner, Jr., and Willard M. Allen in 1928.
If progesterone could inhibit ovulation during pregnancy, maybe
progesterone treatment could prevent ovulation in nonpregnant females
as well. In 1937, this was shown to be the case by scientists
from the University of Pennsylvania, who prevented ovulation in
rabbits with injections of progesterone. It was not until 1951, however,
when Carl Djerassi and other chemists devised inexpensive
ways of producing progesterone in the laboratory, that serious consideration
was given to the medical use of progesterone. The synthetic
version of progesterone was called “progestin.”
Testing the Pill
In the laboratory, Chang tried more than two hundred different
progesterone and progestin compounds, searching for one that
would inhibit ovulation in rabbits and rats. Finally, two compounds
were chosen: progestins derived from the root of a wild Mexican
yam. Pincus arranged for clinical tests to be carried out by Celso-
Ramon Garcia, a physician, and John Rock, a gynecologist.
Rock had already been conducting experiments with progesterone
as a treatment for infertility. The treatment was effective in some
women but required that large doses of expensive progesterone be
injected daily. Rock was hopeful that the synthetic progestin that
Chang had found effective in animals would be helpful in infertile
women as well. With Garcia and Pincus, Rock treated another
group of fifty infertile women with the synthetic progestin. After
treatment ended, seven of these previously infertile women became
pregnant within half a year. Garcia, Pincus, and Rock also took several
physiological measurements of the women while they were
taking the progestin and were able to conclude that ovulation did
not occur while the women were taking the progestin pill.
Having shown that the hormone could effectively prevent ovulation
in both animals and humans, the investigators turned their attention
back to birth control. They were faced with several problems:
whether side effects might occur in women using progestins for a
long time, and whether women would remember to take the pill day
after day, for months or even years. To solve these problems, the birth
control pill was tested on a large scale. Because of legal problems in
the United States, Pincus decided to conduct the test in Puerto Rico.
The test started in April of 1956. Edris Rice-Wray, a physician,
was responsible for the day-to-day management of the project. As
director of the Puerto Rico Family Planning Association, she had
seen firsthand the need for a cheap, reliable contraceptive. The
women she recruited for the study were married women from a
low-income population living in a housing development in Río
Piedras, a suburb of San Juan. Word spread quickly, and soon
women were volunteering to take the pill that would prevent pregnancy.
In the first study, 221 women took a pill containing 10 milligrams
of progestin and 0.15 milligrams of estrogen. (The estrogen
was added to help control breakthrough bleeding.)
Results of the test were reported in 1957. Overall, the pill proved
highly effective in preventing conception. None of the women
who took the pill according to directions became pregnant, and
most women who wanted to get pregnant after stopping the pill
had no difficulty. Nevertheless, 17 percent of the women had some
unpleasant reactions, such as nausea or dizziness. The scientists
believed that these mild side effects, as well as one death from congestive
heart failure, were unrelated to the use of the pill.
Even before the final results were announced, additional field
tests were begun. In 1960, the U.S. Food and Drug Administration
(FDA) approved the use of the pill developed by Pincus and his collaborators
as an oral contraceptive.
Consequences
Within two years of approval by the FDA, more than a million
women in the United States were using the birth control pill. New
contraceptives were developed in the 1960’s and 1970’s, but the
birth control pill remains the most widely used method of preventing
pregnancy. More than 60 million women use the pill worldwide.
The greatest impact of the pill has been in the social and political
world. Before Sanger began the push for the pill, birth control was
often regarded as socially immoral and was often illegal as well.
Women in those post-World War II years were expected to have a
lifelong career as mothers to their many children.
With the advent of the pill, a radical change occurred in society’s
attitude toward women’s work. Women had increased freedom to
work and to enter careers previously closed to them because of fears
that they might get pregnant. Women could control more precisely
when they would get pregnant and how many children they would
have. The women’s movement of the 1960’s—with its change to more
liberal social and sexual values—gained much of its strength from
the success of the birth control pill.