14 June 2009
Electrocardiogram
The invention: Device for analyzing the electrical currents of the
human heart.
The people behind the invention:
Willem Einthoven (1860-1927), a Dutch physiologist and
winner of the 1924 Nobel Prize in Physiology or Medicine
Augustus D. Waller (1856-1922), a British physician and
researcher
Sir Thomas Lewis (1881-1945), an English physiologist
Horse Vibrations
In the late 1800’s, there was substantial research interest in the
electrical activity that took place in the human body. Researchers
studied many organs and systems in the body, including the nerves,
eyes, lungs, muscles, and heart. Because of a lack of available technology,
this research was tedious and frequently inaccurate. Therefore,
the development of the appropriate instrumentation was as
important as the research itself.
The initial work on the electrical activity of the heart (detected
from the surface of the body) was conducted by Augustus D. Waller
and published in 1887. Many credit him with the development of
the first electrocardiogram. Waller used a Lippmann’s capillary
electrometer (named for its inventor, the French physicist Gabriel-
Jonas Lippmann) to determine the electrical charges in the heart and
called his recording a “cardiograph.” The recording was made by
placing a series of small tubes on the surface of the body. The tubes
contained mercury and sulfuric acid. As an electrical current passed
through the tubes, the mercury would expand and contract. The resulting
images were projected onto photographic paper to produce
the first cardiograph. Yet Waller had only limited success with the
device and eventually abandoned it.
In the early 1890’s, Willem Einthoven, who became a good friend
of Waller, began using the same type of capillary tube to study the
electrical currents of the heart. Einthoven also had a difficult time working with the instrument. His laboratory was located in an old
wooden building near a cobblestone street. Teams of horses pulling
heavy wagons would pass by and cause his laboratory to vibrate.
This vibration affected the capillary tube, causing the cardiograph
to be unclear. In his frustration, Einthoven began to modify his laboratory.
He removed the floorboards and dug a hole some ten to fifteen
feet deep. He lined the walls with large rocks to stabilize his instrument.
When this failed to solve the problem, Einthoven, too,
abandoned the Lippmann’s capillary tube. Yet Einthoven did not
abandon the idea, and he began to experiment with other instruments.
Electrocardiographs over the Phone
In order to continue his research on the electrical currents of the
heart, Einthoven began to work with a new device, the d’Arsonval
galvanometer (named for its inventor, the French biophysicist
Arsène d’Arsonval). This instrument had a heavy coil of wire suspended
between the poles of a horseshoe magnet. Changes in electrical
activity would cause the coil to move; however, Einthoven
found that the coil was too heavy to record the small electrical
changes found in the heart. Therefore, he modified the instrument
by replacing the coil with a silver-coated quartz thread (string).
The movements could be recorded by transmitting the deflections
through a microscope and projecting them on photographic film.
Einthoven called the new instrument the “string galvanometer.”
In developing his string galvanometer, Einthoven was influenced
by the work of one of his teachers, Johannes Bosscha. In the 1850’s,
Bosscha had published a study describing the technical complexities
of measuring very small amounts of electricity. He proposed the
idea that a galvanometer modified with a needle hanging from a
silk thread would be more sensitive in measuring the tiny electric
currents of the heart.
By 1905, Einthoven had improved the string galvanometer to
the point that he could begin using it for clinical studies. In 1906,
he had his laboratory connected to the hospital in Leiden by a telephone
wire. With this arrangement, Einthoven was able to study in
his laboratory electrocardiograms derived from patients in the hospital, which was located a mile away. With this source of subjects,
Einthoven was able to use his galvanometer to study many
heart problems. As a result of these studies, Einthoven identified
the following heart problems: blocks in the electrical conduction
system of the heart; premature beats of the heart, including two
premature beats in a row; and enlargements of the various chambers
of the heart. He was also able to study how the heart behaved
during the administration of cardiac drugs.
A major researcher who communicated with Einthoven about
the electrocardiogram was Sir Thomas Lewis, who is credited with
developing the electrocardiogram into a useful clinical tool. One of
Lewis’s important accomplishments was his identification of atrial
fibrillation, the overactive state of the upper chambers of the heart.
During World War I, Lewis was involved with studying soldiers’
hearts. He designed a series of graded exercises, which he used to
test the soldiers’ ability to perform work. From this study, Lewis
was able to use similar tests to diagnose heart disease and to screen
recruits who had heart problems.
Impact
As Einthoven published additional studies on the string galvanometer
in 1903, 1906, and 1908, greater interest in his instrument
was generated around the world. In 1910, the instrument, now
called the “electrocardiograph,” was installed in the United States.
It was the foundation of a new laboratory for the study of heart disease
at Johns Hopkins University.
As time passed, the use of the electrocardiogram—or “EKG,” as
it is familiarly known—increased substantially. The major advantage
of the EKG is that it can be used to diagnose problems in the
heart without incisions or the use of needles. It is relatively painless
for the patient; in comparison with other diagnostic techniques,
moreover, it is relatively inexpensive.
Recent developments in the use of the EKG have been in the area
of stress testing. Since many heart problems are more evident during
exercise, when the heart is working harder, EKGs are often
given to patients as they exercise, generally on a treadmill. The clinician
gradually increases the intensity of work the patient is doing
while monitoring the patient’s heart. The use of stress testing has
helped to make the EKG an even more valuable diagnostic tool.
12 June 2009
Electric refrigerator
The invention: An electrically powered and hermetically sealed
food-storage appliance that replaced iceboxes, improved production,
and lowered food-storage costs.
The people behind the invention:
Marcel Audiffren, a French monk
Christian Steenstrup (1873-1955), an American engineer
Fred Wolf, an American engineer
Electric clock
The invention: Electrically powered time-keeping device with a
quartz resonator that has led to the development of extremely accurate,
relatively inexpensive electric clocks that are used in computers
and microprocessors.
The person behind the invention:
Warren Alvin Marrison (1896-1980), an American scientist
From Complex Mechanisms to Quartz Crystals
Warren Alvin Marrison’s fabrication of the electric clock began a
new era in time-keeping. Electric clocks are more accurate and more
reliable than mechanical clocks, since they have fewer moving parts
and are less likely to malfunction.
An electric clock is a device that generates a string of electric
pulses. The most frequently used electric clocks are called “free running”
and “periodic,” which means that they generate a continuous
sequence of electric pulses that are equally spaced. There are various
kinds of electronic “oscillators” (materials that vibrate) that can
be used to manufacture electric clocks.
The material most commonly used as an oscillator in electric
clocks is crystalline quartz. Because quartz (silicon dioxide) is a
completely oxidized compound (which means that it does not deteriorate
readily) and is virtually insoluble in water, it is chemically
stable and resists chemical processes that would break down other
materials. Quartz is a “piezoelectric” material, which means that it
is capable of generating electricity when it is subjected to pressure
or stress of some kind. In addition, quartz has the advantage of generating
electricity at a very stable frequency, with little variation. For
these reasons, quartz is an ideal material to use as an oscillator.
The Quartz Clock
A quartz clock is an electric clock that makes use of the piezoelectric
properties of a quartz crystal. When a quartz crystal vibrates, a difference of electric potential is produced between two of its faces.
The crystal has a natural frequency (rate) of vibration that is determined
by its size and shape. If the crystal is placed in an oscillating
electric circuit that has a frequency that is nearly the same as that of
the crystal, it will vibrate at its natural frequency and will cause the
frequency of the entire circuit to match its own frequency.
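Turning a crystal's natural frequency into usable clock ticks is a matter of repeated division. As an illustration (the 32,768 Hz figure is a common watch-crystal frequency, not one given in the text): because 32,768 is exactly 2 to the 15th power, a chain of fifteen divide-by-two stages reduces the crystal's oscillation to one pulse per second.

```python
# Illustrative sketch: a typical watch crystal oscillates at 32,768 Hz
# (an assumed, though common, value), chosen because it is 2**15.

CRYSTAL_HZ = 32_768  # natural frequency of a typical watch crystal

def divide_to_one_hertz(frequency_hz: int) -> int:
    """Count the divide-by-two stages needed to reach a 1 Hz tick."""
    stages = 0
    while frequency_hz > 1:
        frequency_hz //= 2  # each flip-flop stage halves the frequency
        stages += 1
    return stages

print(divide_to_one_hertz(CRYSTAL_HZ))  # prints 15
```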
Piezoelectricity is electricity, or “electric polarity,” that is caused
by the application of mechanical pressure on a “dielectric” material
(one that does not conduct electricity), such as a quartz crystal. The
process also works in reverse; if an electric charge is applied to the
dielectric material, the material will experience a mechanical distortion.
This reciprocal relationship is called “the piezoelectric effect.”
The phenomenon of electricity being generated by the application
of mechanical pressure is called the direct piezoelectric effect, and
the phenomenon of mechanical stress being produced as a result of
the application of electricity is called the converse piezoelectric
effect.
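The direct effect is linear in the applied force: the generated charge is q = d × F, where d is the material's piezoelectric coefficient. A minimal sketch, assuming the commonly cited value of roughly 2.3 picocoulombs per newton for quartz (an assumption, not a figure from the text):

```python
QUARTZ_D = 2.3e-12  # C/N; approximate piezoelectric coefficient of quartz (assumed)

def piezo_charge(force_newtons: float) -> float:
    """Charge in coulombs generated by the direct piezoelectric effect."""
    return QUARTZ_D * force_newtons

print(piezo_charge(10.0))  # a 10 N squeeze yields on the order of 2.3e-11 C
```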
When a quartz crystal is used to create an oscillator, the natural
frequency of the crystal can be used to produce other frequencies
that can power clocks. The natural frequency of a quartz crystal is
nearly constant if precautions are taken when it is cut and polished
and if it is maintained at a nearly constant temperature and pressure.
After a quartz crystal has been used for some time, its frequency usually varies slowly as a result of physical changes. If allowances
are made for such changes, quartz-crystal clocks such as
those used in laboratories can be manufactured that will accumulate
errors of only a few thousandths of a second per month. The
quartz crystals that are typically used in watches, however, may accumulate
errors of tens of seconds per year.
There are other materials that can be used to manufacture accurate
electric clocks. For example, clocks that use the element rubidium
typically would accumulate errors no larger than a few ten-thousandths
of a second per year, and those that use the element cesium
would experience errors of only a few millionths of a second
per year. Quartz is much less expensive than rarer materials such as rubidium and cesium, and it is easy to use in such common applications
as computers. Thus, despite their relative inaccuracy, electric
quartz clocks are extremely useful and popular, particularly for applications
that require accurate timekeeping over a relatively short
period of time. In such applications, quartz clocks may be adjusted
periodically to correct for accumulated errors.
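The accuracy figures above reduce to simple arithmetic: a clock's accumulated error is its fractional frequency error multiplied by the elapsed time. A sketch, with fractional errors that are illustrative assumptions chosen only to match the magnitudes quoted in the text:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000 seconds

def accumulated_error(fractional_error: float, seconds: float) -> float:
    """Seconds of drift accumulated over the given elapsed time."""
    return fractional_error * seconds

# Assumed fractional frequency errors, one per clock type:
clocks = {
    "wristwatch quartz": 1e-6,   # ~1 ppm -> tens of seconds per year
    "rubidium":          1e-11,  # -> a few ten-thousandths of a second per year
    "cesium":            1e-13,  # -> a few millionths of a second per year
}

for name, err in clocks.items():
    print(f"{name}: {accumulated_error(err, SECONDS_PER_YEAR):.6f} s/year")
```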
Impact
The electric quartz clock has contributed significantly to the development
of computers and microprocessors. The computer’s control
unit controls and synchronizes all data transfers and transformations
in the computer system and is the key subsystem in the
computer itself. Every action that the computer performs is implemented
by the control unit.
The computer’s control unit uses inputs from a quartz clock to
derive timing and control signals that regulate the actions in the system
that are associated with each computer instruction. The control
unit also accepts, as input, control signals generated by other devices
in the computer system.
The other primary impact of the quartz clock is in making the
construction of multiphase clocks a simple task. A multiphase
clock is a clock that has several outputs that oscillate at the same
frequency. These outputs may generate electric waveforms of different
shapes or of the same shape, which makes them useful for
various applications. It is common for a computer to incorporate a
single-phase quartz clock that is used to generate a two-phase
clock.
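One common scheme for this (an illustrative assumption, not a design described in the text) routes alternate pulses of a master clock, running at twice the output rate, to alternate outputs; both outputs then oscillate at the same frequency but never pulse at the same time.

```python
def two_phase(master_ticks):
    """Derive two same-frequency, non-overlapping phases from a
    single-phase master clock running at twice the output rate."""
    phase1, phase2 = [], []
    for i, tick in enumerate(master_ticks):
        # Route even-numbered master pulses to phase 1, odd to phase 2.
        phase1.append(tick if i % 2 == 0 else 0)
        phase2.append(tick if i % 2 == 1 else 0)
    return phase1, phase2

master = [1, 1, 1, 1, 1, 1]  # single-phase master clock pulses
p1, p2 = two_phase(master)
print(p1)  # [1, 0, 1, 0, 1, 0]
print(p2)  # [0, 1, 0, 1, 0, 1]
```

Because the two phases never overlap, circuitry driven by one phase is guaranteed to be idle while the other is active.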
09 June 2009
Dolby noise reduction
The invention: Electronic device that reduces the signal-to-noise
ratio of sound recordings and greatly improves the sound quality
of recorded music.
The people behind the invention:
Emil Berliner (1851-1929), a German inventor
Ray Milton Dolby (1933- ), an American inventor
Thomas Alva Edison (1847-1931), an American inventor
Phonographs, Tapes, and Noise Reduction
The main use of record, tape, and compact disc players is to listen
to music, although they are also used to listen to recorded speeches,
messages, and various forms of instruction. Thomas Alva Edison
invented the first sound-reproducing machine, which he called the
“phonograph,” and patented it in 1877. Ten years later, a practical
phonograph (the “gramophone”) was marketed by a German, Emil
Berliner. Phonographs recorded sound by using diaphragms that
vibrated in response to sound waves and controlled needles that cut
grooves representing those vibrations into the first phonograph records,
which in Edison’s machine were metal cylinders and in Berliner’s
were flat discs. The recordings were then played by reversing
the recording process: Placing a needle in the groove in the recorded
cylinder or disk caused the diaphragm to vibrate, re-creating the
original sound that had been recorded.
In the 1920’s, electrical recording methods developed that produced
higher-quality recordings, and then, in the 1930’s, stereophonic
recording was developed by various companies, including
the British company Electrical and Musical Industries (EMI). Almost
simultaneously, the technology of tape recording was developed.
By the 1940’s, long-playing stereo records and tapes were
widely available. As recording techniques improved further, tapes
became very popular, and by the 1960’s, they had evolved into both
studio master recording tapes and the audio cassettes used by consumers.
Hisses and other noises associated with sound recording and its
environment greatly diminished the quality of recorded music. In
1967, Ray Dolby invented a noise reducer, later named “Dolby A,”
that could be used by recording studios to reduce tape signal-to-noise
ratios. Several years later, his “Dolby B” system, designed
for home use, became standard equipment in all types of playback
machines. Later, Dolby and others designed improved noise-suppression
systems.
Recording and Tape Noise
Sound is made up of vibrations of varying frequencies—sound
waves—that sound recorders can convert into grooves on plastic records,
varying magnetic arrangements on plastic tapes covered
with iron particles, or tiny pits on compact discs. The following discussion
will focus on tape recordings, for which the original Dolby
noise reducers were designed.
Tape recordings are made by a process that converts sound
waves into electrical impulses that cause the iron particles in a tape
to reorganize themselves into particular magnetic arrangements.
The process is reversed when the tape is played back. In this process,
the particle arrangements are translated first into electrical impulses
and then into sound that is produced by loudspeakers.
Erasing a tape causes the iron particles to move back into their original
spatial arrangement.
Whenever a recording is made, undesired sounds such as hisses,
hums, pops, and clicks can mask the nuances of recorded sound, annoying
and fatiguing listeners. The first attempts to do away with
undesired sounds (noise) involved making tapes, recording devices,
and recording studios quieter. Such efforts did not, however,
remove all undesired sounds.
Furthermore, advances in recording technology increased the
problem of noise by producing better instruments that “heard” and
transmitted to recordings increased levels of noise. Such noise is often
caused by the components of the recording system; tape hiss is
an example of such noise. This type of noise is most discernible in
quiet passages of recordings, because loud recorded sounds often
mask it.
Because of the problem of noise in quiet passages of recorded
sound, one early attempt at noise suppression involved the reduction
of noise levels by using “dynaural” noise suppressors. These
devices did not alter the loud portions of a recording; instead, they
reduced the very high and very low frequencies in the quiet passages
in which noise became most audible. The problem with such
devices was, however, that removing the high and low frequencies
could also affect the desirable portions of the recorded sound.
These suppressors could not distinguish desirable from undesirable
sounds. As recording techniques improved, dynaural noise suppressors caused more and more problems, and their use was finally
discontinued.
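The behavior described above can be sketched as a level-gated filter (a toy model, not any commercial circuit): when the signal level falls below a threshold, neighboring samples are averaged, which attenuates high-frequency hiss but also dulls quiet musical detail, precisely the flaw that doomed these devices.

```python
def dynaural_suppress(samples, threshold=0.1):
    """Crude dynaural-style suppressor: smooth quiet samples to cut
    hiss; pass loud samples through unchanged. Threshold and filter
    are illustrative choices only."""
    out = []
    for i, s in enumerate(samples):
        if abs(s) < threshold:
            # Quiet passage: average with neighbours (a simple low-pass).
            prev = samples[i - 1] if i > 0 else s
            nxt = samples[i + 1] if i + 1 < len(samples) else s
            out.append((prev + s + nxt) / 3)
        else:
            out.append(s)  # Loud passage: leave untouched.
    return out
```

Note that the filter cannot tell hiss from soft music; anything quiet gets smoothed, desirable or not.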
Another approach to noise suppression is sound compression
during the recording process. This compression is based on the fact
that most noise remains at a constant level throughout a recording,
regardless of the sound level of a desired signal (such as music). To
carry out sound compression, the lowest-level signals in a recording
are electronically elevated above the sound level of all noise. Musical
nuances can be lost when the process is carried too far, because
the maximum sound level is not increased by devices that use
sound compression. To return the music or other recorded sound to
its normal sound range for listening, devices that “expand” the recorded
music on playback are used. Two potential problems associated
with the use of sound compression and expansion are the difficulty
of matching the two processes and the introduction into the
recording of noise created by the compression devices themselves.
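Compression and expansion must be exact inverses of each other, which is the matching problem just mentioned. A minimal sketch using a power-law curve (an illustrative choice; Dolby's actual systems worked on separate frequency bands with different circuitry):

```python
# Toy compander: boost quiet samples above the noise floor on
# "recording", then restore the original range on "playback".

def compress(x, ratio=0.5):
    """Raise low-level signals: |y| = |x|**ratio, with 0 < ratio < 1."""
    return [(abs(s) ** ratio) * (1 if s >= 0 else -1) for s in x]

def expand(y, ratio=0.5):
    """Exact inverse of compress: |x| = |y|**(1/ratio)."""
    return [(abs(s) ** (1 / ratio)) * (1 if s >= 0 else -1) for s in y]

signal = [0.01, 0.5, 0.9]            # quiet to loud samples
restored = expand(compress(signal))
print(restored)                       # original values, within rounding
```

Any constant-level noise picked up between the two stages is pushed downward by the expander, since expansion attenuates low-level signals.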
In 1967, Ray Dolby developed Dolby A to solve these problems as
they related to tape noise (but not to microphone signals) in the recording
and playing back of studio master tapes. The system operated
by carrying out ten-decibel compression during recording and
then restoring (noiselessly) the range of the music on playback. This
was accomplished by expanding the sound exactly to its original
range. Dolby A was very expensive and was thus limited to use in recording
studios. In the early 1970’s, however, Dolby invented the less
expensive Dolby B system, which was intended for consumers.
Consequences
The development of Dolby A and Dolby B noise-reduction systems
is one of the most important contributions to the high-quality
recording and reproduction of sound. For this reason, Dolby A
quickly became standard in the recording industry. In similar fashion,
Dolby B was soon incorporated into virtually every high-fidelity
stereo cassette deck to be manufactured.
Dolby’s discoveries spurred advances in the field of noise reduction.
For example, the German company Telefunken and the Japanese
companies Sanyo and Toshiba, among others, developed their
own noise-reduction systems. Dolby Laboratories countered by producing an improved system: Dolby C. The competition in the
area of noise reduction continues, and it will continue as long as
changes in recording technology produce new, more sensitive recording
equipment.
Disposable razor
The invention: An inexpensive shaving blade that replaced the traditional
straight-edged razor and transformed shaving razors
into a frequent household purchase item.
The people behind the invention:
King Camp Gillette (1855-1932), inventor of the disposable razor
Steven Porter, the machinist who created the first three
disposable razors for King Camp Gillette
William Emery Nickerson (1853-1930), an expert machine
inventor who created the machines necessary for mass
production
Jacob Heilborn, an industrial promoter who helped Gillette start
his company and became a partner
Edward J. Stewart, a friend and financial backer of Gillette
Henry Sachs, an investor in the Gillette Safety Razor Company
John Joyce, an investor in the Gillette Safety Razor Company
William Painter (1838-1906), an inventor who inspired Gillette
George Gillette, an inventor, King Camp Gillette’s father
A Neater Way to Shave
In 1895, King Camp Gillette thought of the idea of a disposable razor
blade. Gillette spent years drawing different models, and finally
Steven Porter, a machinist and Gillette’s associate, created from those
drawings the first three disposable razors that worked. Gillette soon
founded the Gillette Safety Razor Company, which became the leading
seller of disposable razor blades in the United States.
George Gillette, King Camp Gillette’s father, had been a newspaper
editor, a patent agent, and an inventor. He never invented a very
successful product, but he loved to experiment. He encouraged all
of his sons to figure out how things work and how to improve on
them. King was always inventing something new and had many
patents, but he was unsuccessful in turning them into profitable
businesses.
Gillette worked as a traveling salesperson for Crown Cork and Seal Company. William Painter, one of Gillette’s friends and the inventor
of the crown cork, presented Gillette with a formula for making
a fortune: Invent something that would constantly need to be replaced.
Painter’s crown cork was used to cap beer and soda bottles.
It was a tin cap covered with cork, used to form a tight seal over a
bottle. Soda and beer companies could use a crown cork only once
and needed a steady supply.
King took Painter’s advice and began thinking of everyday items
that needed to be replaced often. After owning a Star safety razor
for some time, King realized that the razor blade had not been improved
for a long time. He studied all the razors on the market and
found that both the common straight razor and the safety razor featured
a heavy V-shaped piece of steel, sharpened on one side. King
reasoned that a thin piece of steel sharpened on both sides would
create a better shave and could be thrown away once it became dull.
The idea of the disposable razor had been born.
Gillette made several drawings of disposable razors. He then
made a wooden model of the razor to better explain his idea.
Gillette’s first attempt to construct a working model was unsuccessful,
as the steel was too flimsy. Steven Porter, a Boston machinist, decided
to try to make Gillette’s razor from his drawings. He produced
three razors, and in the summer of 1899 King was the first
man to shave with a disposable razor.
Changing Consumer Opinion
In the early 1900’s, most people considered a razor to be a once-in-a-lifetime purchase. Many fathers handed down their razors to
their sons. Straight razors needed constant and careful attention to
keep them sharp. The thought of throwing a razor in the garbage after
several uses was contrary to the general public’s idea of a razor.
If Gillette’s razor had not provided a much less painful and faster
shave, it is unlikely that the disposable would have been a success.
Even with its advantages, public opinion against the product was
still difficult to overcome.
Financing a company to produce the razor proved to be a major
obstacle. King did not have the money himself, and potential investors
were skeptical. Skepticism arose both because of public perceptions of the product and because of its manufacturing process. Mass
production appeared to be impossible, but the disposable razor
would never be profitable if produced using the methods used to
manufacture its predecessor.
William Emery Nickerson, an expert machine inventor, had looked
at Gillette’s razor and said it was impossible to create a machine to
produce it. He was convinced to reexamine the idea and finally created
a machine that would create a workable blade. In the process,
Nickerson changed Gillette’s original model. He improved the handle
and frame so that it would better support the thin steel blade.
In the meantime, Gillette was busy getting his patent assigned to
the newly formed American Safety Razor Company, owned by
Gillette, Jacob Heilborn, Edward J. Stewart, and Nickerson. Gillette
owned considerably more shares than anyone else. Henry Sachs
provided additional capital, buying shares from Gillette.
The stockholders decided to rename the company the Gillette
Safety Razor Company. It soon spent most of its money on machinery
and lacked the capital it needed to produce and advertise its
product. The only offer the company had received was from a group
of New York investors who were willing to give $125,000 in exchange
for 51 percent of the company. None of the directors wanted
to lose control of the company, so they rejected the offer.
John Joyce, a friend of Gillette, rescued the financially insecure
new company. He agreed to buy $100,000 worth of bonds from the
company for sixty cents on the dollar, purchasing the bonds gradually
as the company needed money. He also received an equivalent
amount of company stock. After an investment of $30,000, Joyce
had the option of backing out. This deal enabled the company to
start manufacturing and advertising.
Impact
The company used $18,000 to perfect the machinery to produce
the disposable razor blades and razors. Originally the directors
wanted to sell each razor with twenty blades for three dollars. Joyce
insisted on a price of five dollars. In 1903, five dollars was about
one-third of the average American’s weekly salary, and a high-quality
straight razor could be purchased for about half that price.
The other directors were skeptical, but Joyce threatened to buy up
all the razors for three dollars and sell them himself for five dollars.
Joyce had the financial backing to make this promise good, so the directors
agreed to the higher price.
The Gillette Safety Razor Company contracted with Townsend &
Hunt for exclusive sales. The contract stated that Townsend & Hunt
would buy 50,000 razors with twenty blades each during a period of
slightly more than a year and would purchase 100,000 sets per year
for the following four years. The first advertisement for the product
appeared in System Magazine in early fall of 1903, offering the razors
by mail order. By the end of 1903, only fifty-one razors had been
sold.
Since Gillette and most of the directors of the company were not
salaried, Gillette had needed to keep his job as salesman with
Crown Cork and Seal. At the end of 1903, he received a promotion
that meant relocation from Boston to London. Gillette did not want
to go and pleaded with the other directors, but they insisted that the
company could not afford to put him on salary. The company decided
to reduce the number of blades in a set from twenty to twelve
in an effort to increase profits without noticeably raising the cost of a
set. Gillette resigned the title of company president and left for England.
Shortly thereafter, Townsend & Hunt changed its name to the
Gillette Sales Company, and three years later the sales company
sold out to the parent company for $300,000. Sales of the new type
of razor were increasing rapidly in the United States, and Joyce
wanted to sell patent rights to European companies for a small percentage
of sales. Gillette thought that that would be a horrible mistake
and quickly traveled back to Boston. He had two goals: to stop
the sale of patent rights, based on his conviction that the foreign
market would eventually be very lucrative, and to become salaried
by the company. Gillette accomplished both these goals and soon
moved back to Boston.
Despite the fact that Joyce and Gillette had been good friends for
a long time, their business views often differed. Gillette set up a
holding company in an effort to gain back controlling interest in the
Gillette Safety Razor Company. He borrowed money and convinced
his allies in the company to invest in the holding company, eventually regaining control. He was reinstated as president of the company.
One clear disagreement was that Gillette wanted to relocate the company
to Newark, New Jersey, and Joyce thought that that would be a
waste of money. Gillette authorized company funds to be invested in
a Newark site. The idea was later dropped, costing the company a
large amount of capital. Gillette was not a very wise businessman and made many costly mistakes. Joyce even accused him of deliberately
trying to keep the stock price low so that Gillette could purchase
more stock. Joyce eventually bought out Gillette, who retained
his title as president but had little say about company
business.
With Gillette out of a management position, the company became
more stable and more profitable. The biggest problem the
company faced was that it would soon lose its patent rights. After
the patent expired, the company would have competition. The company
decided that it could either cut prices (and therefore profits) to
compete with the lower-priced disposables that would inevitably
enter the market, or it could create a new line of even better razors.
The company opted for the latter strategy. Weeks before the patent
expired, the Gillette Safety Razor Company introduced a new line
of razors.
Both World War I and World War II were big boosts to the company,
which contracted with the government to supply razors to almost
all the troops. This transaction created a huge increase in sales
and introduced thousands of young men to the Gillette razor. Many
of them continued to use Gillettes after returning from the war.
Aside from the shaky start of the company, its worst financial difficulties
were during the Great Depression. Most Americans simply
could not afford Gillette blades, and many used a blade for an extended
time and then resharpened it rather than throwing it away. If
it had not been for the company’s foreign markets, the company
would not have shown a profit during the Great Depression.
Gillette’s obstinacy about not selling patent rights to foreign investors
proved to be an excellent decision.
The company advertised through sponsoring sporting events,
including the World Series. Gillette had many celebrity endorsements
from well-known baseball players. Before it became too expensive
for one company to sponsor an entire event, Gillette had
exclusive advertising during the World Series, various boxing
matches, the Kentucky Derby, and football bowl games. Sponsoring
these events was costly, but sports spectators were the typical
Gillette customers.
The Gillette Company created many products that complemented
razors and blades, including shaving cream, women’s razors, women’s cosmetics, writing utensils, deodorant, and
wigs. One of the main reasons for obtaining a more diverse product
line was that a one-product company is less stable, especially in a
volatile market. The Gillette Company had learned that lesson in
the Great Depression. Gillette continued to thrive by following the
principles the company had used from the start. The majority of
Gillette’s profits came from foreign markets, and its employees
looked to improve products and find opportunities in other departments
as well as their own.
Dirigible
The invention: A rigid lighter-than-air aircraft that played a major
role in World War I and in international air traffic until a disastrous
accident destroyed the industry.
The people behind the invention:
Ferdinand von Zeppelin (1838-1917), a retired German general
Theodor Kober (1865-1930), Zeppelin’s private engineer
Early Competition
When the Montgolfier brothers launched the first hot-air balloon
in 1783, engineers—especially those in France—began working on
ways to use machines to control the speed and direction of balloons.
They thought of everything: rowing through the air with silk-covered
oars; building movable wings; using a rotating fan, an airscrew, or a
propeller powered by a steam engine (1852) or an electric motor
(1882). At the end of the nineteenth century, the internal combustion
engine was invented. It promised higher speeds and more power.
Up to this point, however, the balloons were not rigid.
A rigid airship could be much larger than a balloon and could fly
farther. In 1890, a rigid airship designed by David Schwarz of
Dalmatia was tested in St. Petersburg, Russia. The test failed because
there were problems with inflating the dirigible. A second
test, in Berlin in 1897, was only slightly more successful, since the
hull leaked and the flight ended in a crash.
Schwarz’s airship was made of an entirely rigid aluminum cylinder.
Ferdinand von Zeppelin had a different idea: His design was
based on a rigid frame. Zeppelin knew about balloons from having
fought in two wars in which they were used: the American Civil
War of 1861-1865 and the Franco-Prussian War of 1870-1871. He
wrote down his first “thoughts about an airship” in his diary on
March 25, 1874, inspired by an article about flying and international
mail. Zeppelin soon lost interest in this idea of civilian uses for an
airship and concentrated instead on the idea that dirigible balloons
might become an important part of modern warfare. He asked the German government to fund his research, pointing out that France
had a better military air force than Germany did. Zeppelin’s patriotism
was what kept him trying, in spite of money problems and
technical difficulties.
In 1893, in order to get more money, Zeppelin tried to persuade
the German military and engineering experts that his invention was
practical. Even though a government committee decided that his
work was worth a small amount of funding, the army was not sure
that Zeppelin’s dirigible was worth the cost. Finally, the committee
chose Schwarz’s design. In 1896, however, Zeppelin won the support
of the powerful Union of German Engineers, which in May,
1898, gave him 800,000 marks to form a stock company called the
Association for the Promotion of Airship Flights. In 1899, Zeppelin
began building his dirigible in Manzell at Lake Constance. In July,
1900, the airship was finished and ready for its first test flight.
Several Attempts
Zeppelin, together with his engineer, Theodor Kober, had worked
on the design since May, 1892, shortly after Zeppelin’s retirement
from the army. They had finished the rough draft by 1894, and
though they made some changes later, this was the basic design of
the Zeppelin. An improved version was patented in December,
1897.
In the final prototype, called the LZ 1, the engineers tried to make
the airship as light as possible. They used a light internal combustion
engine and designed a frame made of the light metal aluminum.
The airship was 128 meters long and had a diameter of 11.7
meters when inflated. Twenty-four zinc-aluminum girders ran the
length of the ship, being drawn together at each end. Sixteen rings
held the body together. The engineers stretched an envelope of
smooth cotton over the framework to reduce wind resistance and to
protect the gas bags from the sun’s rays. Seventeen gas bags made of
rubberized cloth were placed inside the framework. Together they
held about 11,300 cubic meters of hydrogen gas, which would
lift 11,090 kilograms. Two motor gondolas were attached to the
sides, each with a 16-horsepower gasoline engine, spinning four
propellers.
The test flight did not go well. The two main questions—whether
the craft was strong enough and fast enough—could not be answered
because little things kept going wrong; for example, a crankshaft
broke and a rudder jammed. The first flight lasted no more
than eighteen minutes, with a maximum speed of 13.7 kilometers
per hour. During all three test flights, the airship was in the air for a
total of only two hours, going no faster than 28.2 kilometers per
hour.
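The lift that hydrogen provided the airship can be sanity-checked with Archimedes’ principle: each cubic meter of hydrogen lifts roughly the difference between the density of the air it displaces and its own density. A minimal sketch, assuming standard sea-level densities and a gas volume of about 11,300 cubic meters for the LZ 1 (an assumption, not a figure taken from this text):

```python
# Buoyant lift of hydrogen: lift per cubic meter equals the density
# of displaced air minus the density of hydrogen (sea level, ~15 C).
RHO_AIR = 1.225       # kg/m^3, standard sea-level air
RHO_HYDROGEN = 0.090  # kg/m^3

lift_per_m3 = RHO_AIR - RHO_HYDROGEN  # ~1.14 kg of lift per cubic meter
volume_m3 = 11_300                    # assumed gas volume for the LZ 1

# Gross lift, before subtracting the weight of frame, engines, and crew.
print(f"gross lift: {lift_per_m3 * volume_m3:,.0f} kg")
```

The useful (payload) lift was far smaller, since the aluminum frame, gondolas, and engines consumed most of this gross figure.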
Zeppelin had to drop the project for some years because he ran
out of money, and his company was dissolved. The LZ 1 was wrecked in the spring of 1901. A second airship was tested in November,
1905, and January, 1906. Both tests were unsuccessful, and
in the end the ship was destroyed during a storm.
By 1906, however, the German government was convinced of the
military usefulness of the airship, though it would not give money
to Zeppelin unless he agreed to design one that could stay in the air
for at least twenty-four hours. The third Zeppelin failed this test in
the autumn of 1907. Finally, in the summer of 1908, the LZ 4 not only
proved itself to the military but also attracted great publicity. It flew
for more than twenty-four hours and reached a speed of more than
60 kilometers per hour. Caught in a storm at the end of this flight,
the airship was forced to land and exploded, but money came from
all over Germany to build another.
Impact
Most rigid airships were designed and flown in Germany. Of the
161 that were built between 1900 and 1938, 139 were made in Germany,
and 119 were based on the Zeppelin design.
More than 80 percent of the airships were built for the military.
The Germans used more than one hundred for gathering information
and for bombing during World War I (1914-1918). Starting in
May, 1915, airships bombed Warsaw, Poland; Bucharest, Romania;
Salonika, Greece; and London, England. This was mostly a fear tactic,
since the attacks did not cause great damage, and the English antiaircraft
defense improved quickly. By 1916, the German army had
lost so many airships that it stopped using them, though the navy
continued.
Airships were first used for passenger flights in 1910. By 1914,
the Delag (German Aeronautic Stock Company) used seven passenger
airships for sightseeing trips around German cities. There were
still problems with engine power and weather forecasting, and it
was difficult to move the airships on the ground. After World War I,
the Zeppelins that were left were given to the Allies as payment,
and the Germans were not allowed to build airships for their own
use until 1925.
In the 1920’s and 1930’s, it became cheaper to use airplanes for short flights, so airships were useful mostly for long-distance flight.
A British airship made the first transatlantic flight in 1919. The British
hoped to connect their empire by means of airships starting in
1924, but the 1930 crash of the R-101, in which most of the leading
English aeronauts were killed, brought that hope to an end.
The United States Navy built the Akron (1931) and the Macon
(1933) for long-range naval reconnaissance, but both airships crashed.
Only the Germans continued to use airships on a regular basis. In
1929, the world tour of the Graf Zeppelin was a success. Regular
flights between Germany and South America started in 1932, and in
1936, German airships bearing Nazi swastikas flew to Lakehurst,
New Jersey. The tragic explosion of the hydrogen-filled Hindenburg
in 1937, however, brought the era of the rigid airship to a close. The
U.S. secretary of the interior vetoed the sale of nonflammable helium,
fearing that the Nazis would use it for military purposes, and
the German government had to stop transatlantic flights for safety
reasons. In 1940, the last two remaining rigid airships were destroyed.
Differential analyzer
The invention: An electromechanical device capable of solving differential
equations.
The people behind the invention:
Vannevar Bush (1890-1974), an American electrical engineer
Harold L. Hazen (1901-1980), an American electrical engineer
Electrical Engineering Problems Become More Complex
After World War I, electrical engineers encountered increasingly
difficult differential equations as they worked on vacuum-tube circuitry,
telephone lines, and, particularly, long-distance power transmission
lines. These calculations were lengthy and tedious. Two of
the many steps required to solve them were to draw a graph manually
and then to determine the area under the curve (essentially, accomplishing
the mathematical procedure called integration).
In 1925, Vannevar Bush, a faculty member in the Electrical Engineering
Department at the Massachusetts Institute of Technology
(MIT), suggested that one of his graduate students devise a machine
to determine the area under the curve. They first considered a mechanical
device but later decided to seek an electrical solution. Realizing
that a watt-hour meter such as that used to measure electricity
in most homes was very similar to the device they needed, Bush and
his student refined the meter and linked it to a pen that automatically
recorded the curve.
They called this machine the Product Integraph, and MIT students
began using it immediately. In 1927, Harold L. Hazen, another
MIT faculty member, modified it in order to solve the more complex
second-order differential equations (it originally solved only first-order
equations).
The Differential Analyzer
The original Product Integraph had solved problems electrically,
and Hazen’s modification had added a mechanical integrator. Although the revised Product Integraph was useful in solving the
types of problems mentioned above, Bush thought the machine
could be improved by making it an entirely mechanical integrator,
rather than a hybrid electrical and mechanical device.
In late 1928, Bush received funding from MIT to develop an entirely
mechanical integrator, and he completed the resulting Differential
Analyzer in 1930. This machine consisted of numerous interconnected
shafts on a long, tablelike framework, with drawing
boards flanking one side and six wheel-and-disk integrators on the
other. Some of the drawing boards were configured to allow an operator
to trace a curve with a pen that was linked to the Analyzer,
thus providing input to the machine. The other drawing boards
were configured to receive output from the Analyzer via a pen that
drew a curve on paper fastened to the drawing board.
The wheel-and-disk integrator, which Hazen had first used in
the revised Product Integraph, was the key to the operation of the
Differential Analyzer. The rotational speed of the horizontal disk
was the input to the integrator, and it represented one of the variables
in the equation. The smaller wheel rolled on the top surface of
the disk, and its speed, which was different from that of the disk,
represented the integrator’s output. The distance from the wheel to
the center of the disk could be changed to accommodate the equation
being solved, and the resulting geometry caused the two shafts
to turn so that the output was the integral of the input. The integrators
were linked mechanically to other devices that could add, subtract,
multiply, and divide. Thus, the Differential Analyzer could
solve complex equations involving many different mathematical
operations. Because all the linkages and calculating devices were
mechanical, the Differential Analyzer actually acted out each calculation.
Computers of this type, which create an analogy to the physical
world, are called analog computers.
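The wheel-and-disk mechanism described above computes a running integral: the disk’s rotation supplies one variable, and the wheel’s radial offset from the disk’s center scales the output shaft’s speed. A minimal numerical sketch of that behavior (the function names and the test integrand are illustrative, not taken from the original machine):

```python
# Numerical sketch of a wheel-and-disk integrator: the wheel turns at
# a rate proportional to (disk speed) x (wheel's radial offset), so
# accumulating its rotation over time yields an integral.
import math

def wheel_and_disk_integral(integrand, t_end, steps=100_000):
    """Integrate integrand(t) dt from 0 to t_end the way the mechanism
    does: at each instant the wheel's speed is set by the disk speed
    (held constant here, playing the role of dt) times the radial
    offset, which tracks the integrand's current value."""
    dt = t_end / steps
    wheel_angle = 0.0  # accumulated rotation of the output shaft
    for i in range(steps):
        t = (i + 0.5) * dt
        radial_offset = integrand(t)  # wheel's distance from disk center
        disk_speed = 1.0              # disk turns at a unit rate
        wheel_angle += disk_speed * radial_offset * dt
    return wheel_angle

# Example: the integral of cos(t) from 0 to pi/2 is 1.
print(wheel_and_disk_integral(math.cos, math.pi / 2))  # ~1.0
```

Chaining several such integrators, together with adders and multipliers, is what let the Analyzer act out an entire differential equation mechanically.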
The Differential Analyzer fulfilled Bush’s expectations, and students
and researchers found it very useful. Although each different
problem required Bush’s team to set up a new series of mechanical
linkages, the researchers using the calculations viewed this as a minor
inconvenience. Students at MIT used the Differential Analyzer
in research for doctoral dissertations, master’s theses, and bachelor’s
theses. Other researchers worked on a wide range of problems with the Differential Analyzer, mostly in electrical engineering, but
also in atomic physics, astrophysics, and seismology. An English researcher,
Douglas Hartree, visited Bush’s laboratory in 1933 to learn
about the Differential Analyzer and to use it in his own work on the
atomic field of mercury. When he returned to England, he built several
analyzers based on his knowledge of MIT’s machine. The U.S.
Army also built a copy in order to carry out the complex calculations
required to create artillery firing tables (which specified the
proper barrel angle to achieve the desired range). Other analyzers
were built by industry and universities around the world.
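Stripped of air drag and the other corrections the Analyzer’s integrations handled, the core of a firing-table entry reduces to inverting the vacuum range formula R = v² sin(2θ)/g for the barrel elevation angle. A deliberately simplified sketch (real tables required integrating the full drag differential equations, which is exactly why the Army wanted the machine):

```python
# Solve the drag-free range equation R = v^2 * sin(2*theta) / g for
# the barrel elevation angle theta -- the simplest version of what an
# artillery firing table specifies.
import math

def barrel_angle_deg(target_range_m, muzzle_velocity_ms, g=9.81):
    s = target_range_m * g / muzzle_velocity_ms ** 2
    if s > 1:
        raise ValueError("target beyond maximum range")
    return math.degrees(0.5 * math.asin(s))  # low-angle solution

# Example: a 10 km target with a 500 m/s muzzle velocity (illustrative values).
print(f"{barrel_angle_deg(10_000, 500):.1f} degrees")
```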
Impact
As successful as the Differential Analyzer had been, Bush wanted
to make another, better analyzer that would be more precise, more
convenient to use, and more mathematically flexible. In 1932, Bush
began seeking money for his new machine, but because of the Depression
it was not until 1936 that he received adequate funding for
the Rockefeller Analyzer, as it came to be known. Bush left MIT in
1938, but work on the Rockefeller Analyzer continued. It was first
demonstrated in 1941, and by 1942, it was being used in the war effort
to calculate firing tables and design radar antenna profiles. At
the end of the war, it was the most important computer in existence.
All the analyzers, which were mechanical computers, faced serious
limitations in speed because of the momentum of the machinery,
and in precision because of slippage and wear. The digital computers
that were being developed after World War II (even at MIT)
were faster, more precise, and capable of executing more powerful
operations because they were electrical computers. As a result, during
the 1950’s, they eclipsed differential analyzers such as those
built by Bush. Descendants of the Differential Analyzer remained in
use as late as the 1990’s, but they played only a minor role.
Diesel locomotive
The invention: An internal combustion engine in which ignition is
achieved by the use of high-temperature compressed air, rather
than a spark plug.
The people behind the invention:
Rudolf Diesel (1858-1913), a German engineer and inventor
Sir Dugald Clerk (1854-1932), a British engineer
Gottlieb Daimler (1834-1900), a German engineer
Henry Ford (1863-1947), an American automobile magnate
Nikolaus Otto (1832-1891), a German engineer and Daimler’s
teacher
A Beginning in Winterthur
By the beginning of the twentieth century, new means of providing
society with power were needed. The steam engines that were
used to run factories and railways were no longer sufficient, since
they were too heavy and inefficient. At that time, Rudolf Diesel, a
German mechanical engineer, invented a new engine. His diesel engine
was much more efficient than previous power sources. It also
appeared that it would be able to run on a wide variety of fuels,
ranging from oil to coal dust. Diesel first showed that his engine was
practical by building a diesel-driven locomotive that was tested in
1912.
In the 1912 test runs, the first diesel-powered locomotive was operated
on the track of the Winterthur-Romanshorn rail line in Switzerland.
The locomotive was built by a German company, Gesellschaft
für Thermo-Lokomotiven, which was owned by Diesel and
his colleagues. Immediately after the test runs at Winterthur proved
its efficiency, the locomotive—which had been designed to pull express
trains on Germany’s Berlin-Magdeburg rail line—was moved
to Berlin and put into service. It worked so well that many additional
diesel locomotives were built. In time, diesel engines were
also widely used to power many other machines, including those
that ran factories, motor vehicles, and ships.
Diesels, Diesels Everywhere
In the 1890’s, the best engines available were steam engines that
were able to convert only 5 to 10 percent of input heat energy to useful
work. The burgeoning industrial society and a widespread network
of railroads needed better, more efficient engines to help businesses
make profits and to speed up the rate of transportation
available for moving both goods and people, since the maximum
speed was only about 48 kilometers per hour. In 1894, Rudolf Diesel,
then thirty-five years old, appeared in Augsburg, Germany, with a
new engine that he believed would demonstrate great efficiency.
The diesel engine demonstrated at Augsburg ran for only a
short time. It was, however, more efficient than other existing engines.
In addition, Diesel predicted that his engines would move
trains faster than could be done by existing engines and that they
would run on a wide variety of fuels. Experimentation proved the
truth of his claims; even the first working motive diesel engine (the
one used in the Winterthur test) was capable of pulling heavy
freight and passenger trains at maximum speeds of up to 160 kilometers
per hour.
By 1912, Diesel, a millionaire, saw the wide use of diesel locomotives
in Europe and the United States and the conversion of hundreds
of ships to diesel power. Rudolf Diesel’s role in the story ends
here, a result of his mysterious death in 1913—believed to be a suicide
by the authorities—while crossing the English Channel on the
steamer Dresden. Others involved in the continuing saga of diesel
engines were the Britisher Sir Dugald Clerk, who improved diesel
design, and the American Adolphus Busch (of beer-brewing fame),
who bought the North American rights to the diesel engine.
The diesel engine is related to automobile engines invented by
Nikolaus Otto and Gottlieb Daimler. The standard Otto-Daimler (or
Otto) engine was first widely commercialized by American auto
magnate Henry Ford. The diesel and Otto engines are internal-combustion
engines. This means that they do work when a fuel is
burned and causes a piston to move in a tight-fitting cylinder. In diesel
engines, unlike Otto engines, the fuel is not ignited by a spark
from a spark plug. Instead, ignition is accomplished by the use of
high-temperature compressed air.
In common “two-stroke” diesel engines, pioneered by Sir Dugald
Clerk, a starter causes the engine to make its first stroke. This
draws in air and compresses the air sufficiently to raise its temperature
to 900 to 1,000 degrees Fahrenheit. At this point, fuel (usually
oil) is sprayed into the cylinder, ignites, and causes the piston to
make its second, power-producing stroke. At the end of that stroke,
more air enters as waste gases leave the cylinder; air compression
occurs again; and the power-producing stroke repeats itself. This
process then occurs continuously, without restarting.
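The claim that compression alone heats the intake air to 900 to 1,000 degrees Fahrenheit can be checked with the adiabatic relation T₂ = T₁ · r^(γ−1), where r is the compression ratio and γ ≈ 1.4 for air. The compression ratio below is an illustrative assumption, not a figure from this text:

```python
# Estimate the air temperature after adiabatic compression in a
# diesel cylinder: T2 = T1 * r**(gamma - 1), treating air as ideal.
GAMMA = 1.4        # heat capacity ratio of air
T1_KELVIN = 293.0  # intake air at about 20 C (assumed)
RATIO = 12.5       # compression ratio (illustrative; diesels run roughly 12-24)

t2_kelvin = T1_KELVIN * RATIO ** (GAMMA - 1)
t2_fahrenheit = t2_kelvin * 9 / 5 - 459.67
print(f"{t2_fahrenheit:.0f} F")  # lands in the 900-1,000 F range quoted above
```

Higher compression ratios push the temperature well past 1,000 degrees Fahrenheit, which is why sprayed fuel ignites without any spark plug.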
Impact
Proof of the functionality of the first diesel locomotive set the
stage for the use of diesel engines to power many machines. Although
Rudolf Diesel did not live to see it, diesel engines were
widely used within fifteen years after his death. At first, their main
applications were in locomotives and ships. Then, because diesel
engines are more efficient and more powerful than Otto engines,
they were modified for use in cars, trucks, and buses.
At present, motor vehicle diesel engines are most often used in
buses and long-haul trucks. In contrast, diesel engines are not as
popular in automobiles as Otto engines, although European automakers make much wider use of diesel engines than American
automakers do. Many enthusiasts, however, view diesel automobiles
as the wave of the future. This optimism is based on the durability
of the engine, its great power, and the wide range and economical
nature of the fuels that can be used to run it. The drawbacks
of diesels include the unpleasant odor and high pollutant content of
their emissions.
Modern diesel engines are widely used in farm and earth-moving
equipment, including balers, threshers, harvesters, bulldozers, rock
crushers, and road graders. Construction of the Alaskan oil pipeline
relied heavily on equipment driven by diesel engines. Diesel engines
are also commonly used in sawmills, breweries, coal mines,
and electric power plants.
Diesel’s brainchild has become a widely used power source, just
as he predicted. It is likely that the use of diesel engines will continue
and will expand, as the demands of energy conservation require
more efficient engines and as moves toward fuel diversification
require engines that can be used with various fuels.
06 June 2009
Cyclotron
The invention: The first successful magnetic resonance accelerator
for protons, the cyclotron gave rise to the modern era of particle
accelerators, which are used by physicists to study the structure
of atoms.
The people behind the invention:
Ernest Orlando Lawrence (1901-1958), an American nuclear
physicist who was awarded the 1939 Nobel Prize in Physics
M. Stanley Livingston (1905-1986), an American nuclear
physicist
Niels Edlefsen (1893-1971), an American physicist
David Sloan (1905- ), an American physicist and electrical
engineer
The Beginning of an Era
The invention of the cyclotron by Ernest Orlando Lawrence
marks the beginning of the modern era of high-energy physics. Although
the energies of newer accelerators have increased steadily,
the principles incorporated in the cyclotron have been fundamental
to succeeding generations of accelerators, many of which were also
developed in Lawrence’s laboratory. The care and support for such
machines have also given rise to “big science”: the massing of scientists,
money, and machines in support of experiments to discover
the nature of the atom and its constituents.
At the University of California, Lawrence took an interest in the
new physics of the atomic nucleus, which had been developed by
the British physicist Ernest Rutherford and his followers in England,
and which was attracting more attention as the development
of quantum mechanics seemed to offer solutions to problems that
had long preoccupied physicists. In order to explore the nucleus of
the atom, however, suitable probes were required. An artificial
means of accelerating ions to high energies was also needed.
During the late 1920’s, various means of accelerating alpha particles,
protons (hydrogen ions), and electrons had been tried, but none had been successful in causing a nuclear transformation when
Lawrence entered the field. The high voltages required exceeded
the resources available to physicists. It was believed that more than
a million volts would be required to accelerate an ion to sufficient
energies to penetrate even the lightest atomic nuclei. At such voltages,
insulators broke down, releasing sparks across great distances.
European researchers even attempted to harness lightning to accomplish
the task, with fatal results.
Early in April, 1929, Lawrence discovered an article by a German
electrical engineer that described a linear accelerator of ions that
worked by passing an ion through two sets of electrodes, each of
which carried the same voltage and increased the energy of the ions
correspondingly. By spacing the electrodes appropriately and using
an alternating electrical field, this “resonance acceleration” of ions
could speed subatomic particles to many times the energy applied
in each step, overcoming the problems presented when one tried to
apply a single charge to an ion all at once. Unfortunately, the spacing
of the electrodes would have to be increased as the ions were accelerated,
since they would travel farther between each alternation
of the phase of the accelerating charge, making an accelerator impractically
long in those days of small-scale physics.
Fast-Moving Streams of Ions
Lawrence knew that a magnetic field would cause the ions to be
deflected and form a curved path. If the electrodes were placed
across the diameter of the circle formed by the ions’ path, they
should spiral out as they were accelerated, staying in phase with the
accelerating charge until they reached the periphery of the magnetic
field. This, it seemed to him, afforded a means of reaching indefinitely
high energies without using high voltages, by recycling the accelerated
ions through the same electrodes. Many scientists doubted
that such a method would be effective. No mechanism was known
that would keep the circulating ions in sufficiently tight orbits to
avoid collisions with the walls of the accelerating chamber. Others
tried unsuccessfully to use resonance acceleration.
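Lawrence’s scheme works because, at nonrelativistic speeds, an ion’s orbital frequency in a magnetic field, f = qB/(2πm), does not depend on its speed or orbit radius, so one fixed radio frequency stays in step with the ion as it spirals outward. A sketch for the hydrogen molecular ion H₂⁺ that Livingston later used; the field strength and radius below are illustrative assumptions, not values from the text:

```python
# Cyclotron resonance frequency f = q*B / (2*pi*m): independent of the
# ion's speed, which is why a fixed-frequency oscillator stays in phase.
import math

Q = 1.602e-19         # elementary charge, coulombs
M_H2 = 2 * 1.673e-27  # mass of the H2+ molecular ion, kg (two protons)
B = 1.0               # magnetic field in tesla (illustrative value)

f = Q * B / (2 * math.pi * M_H2)
print(f"resonance frequency: {f / 1e6:.1f} MHz")

# Kinetic energy once the orbit reaches radius r: E = (q*B*r)**2 / (2*m)
r = 0.05  # meters; roughly the outer radius of a 10-cm chamber
energy_ev = (Q * B * r) ** 2 / (2 * M_H2) / Q
print(f"energy at r = {r} m: {energy_ev / 1e3:.0f} keV")
```

With fields and radii of this order, energies of tens of kiloelectronvolts follow from a modest oscillator voltage applied many times over, which is the whole point of resonance acceleration.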
A graduate student, M. Stanley Livingston, continued Lawrence’s
work. For his dissertation project, he used a brass cylinder 10 centimeters in diameter sealed with wax to hold a vacuum, a half-pillbox
of copper mounted on an insulated stem to serve as the electrode,
and a Hartley radio frequency oscillator producing 10 watts. The
hydrogen molecular ions were produced by a thermionic cathode (mounted near the center of the apparatus) from hydrogen gas admitted
through an aperture in the side of the cylinder after a vacuum
had been produced by a pump. Once the ions had formed, the oscillating
electrical field drew them out and accelerated them as they
passed through the cylinder. The accelerated ions spiraled out in a
magnetic field produced by a 10-centimeter electromagnet to a collector.
By November, 1930, Livingston had observed peaks in the
collector current as he tuned the magnetic field through the value
calculated to produce acceleration.
Borrowing a stronger magnet and tuning his radio frequency oscillator
appropriately, Livingston produced 80,000-electronvolt ions
at his collector on January 2, 1931, thus demonstrating the principle
of magnetic resonance acceleration.
Impact
Demonstration of the principle led to the construction of a succession
of large cyclotrons, beginning with a 25-centimeter cyclotron
developed in the spring and summer of 1931 that produced
one-million-electronvolt protons. With the support of the Research
Corporation, Lawrence secured a large electromagnet that had been
developed for radio transmission and an unused laboratory to
house it: the Radiation Laboratory.
The 69-centimeter cyclotron built with the magnet was used to
explore nuclear physics. It accelerated deuterons, ions of heavy
hydrogen (deuterium) that contain, in addition to the proton, the neutron,
which was discovered by Sir James Chadwick in 1932. The accelerated
deuteron, which injected neutrons into target atoms, was
used to produce a wide variety of artificial radioisotopes. Many of
these, such as technetium and carbon 14, were discovered with the
cyclotron and were later used in medicine.
By 1939, Lawrence had built a 152-centimeter cyclotron for medical
applications, including therapy with neutron beams. In that
year, he won the Nobel Prize in Physics for the invention of the cyclotron
and the production of radioisotopes. During World War II,
Lawrence and the members of his Radiation Laboratory developed
electromagnetic separation of uranium ions to produce the uranium
235 required for the atomic bomb. After the war, the 467-centimeter cyclotron was completed as a synchrocyclotron, which modulated
the frequency of the accelerating fields to compensate for the increasing
mass of ions as they approached the speed of light. The
principle of synchronous acceleration, invented by Lawrence’s associate,
the American physicist Edwin Mattison McMillan, became
fundamental to proton and electron synchrotrons.
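The modulation a synchrocyclotron performs can be sketched numerically: as a proton’s kinetic energy grows, the relativistic factor γ = 1 + E/(mc²) increases its effective mass, so the resonance frequency f = qB/(2πγm) must be swept downward to stay in step. The field strength and energies below are illustrative assumptions:

```python
# As ions approach the speed of light, gamma = 1 + E_k / (m*c^2) rises
# and the resonance frequency f = q*B / (2*pi*gamma*m) falls; a
# synchrocyclotron sweeps its oscillator downward to compensate.
import math

Q = 1.602e-19    # proton charge, C
M_P = 1.673e-27  # proton rest mass, kg
C = 2.998e8      # speed of light, m/s
B = 1.5          # magnetic field in tesla (illustrative)

def resonance_frequency(kinetic_energy_mev):
    gamma = 1 + kinetic_energy_mev * 1e6 * Q / (M_P * C ** 2)
    return Q * B / (2 * math.pi * gamma * M_P)

for e_mev in (0, 100, 350):
    print(f"{e_mev:4d} MeV -> {resonance_frequency(e_mev) / 1e6:.2f} MHz")
```

A fixed-frequency cyclotron falls out of phase once this drop becomes appreciable, which is why the highest-energy machines had to modulate their accelerating fields.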
The cyclotron and the Radiation Laboratory were the center of
accelerator physics throughout the 1930’s and well into the postwar
era. The invention of the cyclotron not only provided a new tool for
probing the nucleus but also gave rise to new forms of organizing
scientific work and to applications in nuclear medicine and nuclear
chemistry. Cyclotrons were built in many laboratories in the United
States, Europe, and Japan, and they became a standard tool of nuclear
physics.
02 June 2009
Cyclamate
The invention: An artificial sweetener introduced to the American
market in 1950 under the tradename Sucaryl.
The person behind the invention:
Michael Sveda (1912-1999), an American chemist
A Foolhardy Experiment
The first synthetic sugar substitute, saccharin, was developed in
1879. It became commercially available in 1907 but was banned for
safety reasons in 1912. Sugar shortages during World War I (1914-
1918) resulted in its reintroduction. Two other artificial sweeteners,
Dulcin and P-4000, were introduced later but were banned in 1950
for causing cancer in laboratory animals.
In 1937, Michael Sveda was a young chemist working on his
Ph.D. at the University of Illinois. A flood in the Ohio valley had ruined
the local pipe-tobacco crop, and Sveda, a smoker, had been
forced to purchase cigarettes. One day while in the laboratory,
Sveda happened to brush some loose tobacco from his lips and noticed
that his fingers tasted sweet. Having a curious, if rather foolhardy,
nature, Sveda tasted the chemicals on his bench to find which
one was responsible for the taste. The culprit was the forerunner of
cyclohexylsulfamate, the material that came to be known as “cyclamate.”
Later, on reviewing his career, Sveda explained the serendipitous
discovery with the comment: “God looks after . . . fools, children,
and chemists.”
Sveda joined E. I. Du Pont de Nemours and Company in 1939
and assigned the patent for cyclamate to his employer. In June of
1950, after a decade of testing on animals and humans, Abbott Laboratories
announced that it was launching Sveda’s artificial sweetener
under the trade name Sucaryl. Du Pont followed with its
sweetener product, Cyclan. A Time magazine article in 1950 announced
the new product and noted that Abbott had warned that
because the product was a sodium salt, individuals with kidney
problems should consult their doctors before adding it to their food.
Cyclamate had no calories, but it was thirty to forty times sweeter
than sugar. Unlike saccharin, cyclamate left no unpleasant aftertaste.
The additive was also found to improve the flavor of some
foods, such as meat, and was used extensively to preserve various
foods. By 1969, about 250 food products contained cyclamates, including
cakes, puddings, canned fruit, ice cream, salad dressings,
and its most important use, carbonated beverages.
It was originally thought that cyclamates were harmless to the
human body. In 1959, the chemical was added to the GRAS (generally
recognized as safe) list. Materials on this list, such as sugar, salt,
pepper, and vinegar, did not have to be rigorously tested before being
added to food. In 1964, however, a report cited evidence that cyclamates
and saccharin, taken together, were a health hazard. Its
publication alarmed the scientific community. Numerous investigations
followed.
Shooting Themselves in the Foot
Initially, the claims against cyclamate had been that it caused diarrhea
or prevented drugs from doing their work in the body.
By 1969, these claims had begun to include the threat of cancer.
Ironically, the evidence that sealed the fate of the artificial sweetener
was provided by Abbott itself.
A private Long Island company had been hired by Abbott to conduct
an extensive toxicity study to determine the effects of long-term
exposure to the cyclamate-saccharin mixtures often found in
commercial products. The team of scientists fed rats daily doses of
the mixture to study the effect on reproduction, unborn fetuses, and
fertility. In each case, the rats were declared to be normal. When the
rats were killed at the end of the study, however, those that had been
exposed to the higher doses showed evidence of bladder tumors.
Abbott shared the report with investigators from the National Cancer
Institute and then with the U.S. Food and Drug Administration
(FDA).
The doses required to produce the tumors were equivalent to an
individual drinking 350 bottles of diet cola a day, more than one
hundred times the intake of even the heaviest cyclamate consumers.
A six-person panel of scientists met to review the data and urged the ban of all cyclamates
from foodstuffs. In October, 1969, amid enormous media
coverage, the federal government announced that cyclamates were
to be withdrawn from the market by the beginning of 1970.
In the years following the ban, the controversy continued. Doubt
was cast on the results of the independent study linking sweetener
use to tumors in rats, because the study was designed not to evaluate
cancer risks but to explain the effects of cyclamate use over
many years. Bladder parasites, known as “nematodes,” found in the
rats may have affected the outcome of the tests. In addition, an impurity
found in some of the saccharin used in the study may have
led to the problems observed. Extensive investigations such as the
three-year project conducted at the National Cancer Research Center
in Heidelberg, Germany, found no basis for the widespread ban.
In 1972, however, rats fed high doses of saccharin alone were
found to have developed bladder tumors. At that time, the sweetener
was removed from the GRAS list. An outright ban was averted
by the mandatory use of labels alerting consumers that certain
products contained saccharin.
Impact
The introduction of cyclamate heralded the start of a new industry.
For individuals who had to restrict their sugar intake for health
reasons, or for those who wished to lose weight, there was now an
alternative to giving up sweet food.
The Pepsi-Cola company put a new diet drink formulation on
the market almost as soon as the ban was instituted. In fact, it ran
advertisements the day after the ban was announced showing the
Diet Pepsi product boldly proclaiming “Sugar added—No Cyclamates.”
Sveda, the discoverer of cyclamates, was not impressed with the
FDA’s decision on the sweetener and its handling of subsequent investigations.
He accused the FDA of “a massive cover-up of elemental
blunders” and claimed that the original ban was based on sugar
politics and bad science.
For the manufacturers of cyclamate, meanwhile, the problem lay
with the wording of the Delaney amendment, the legislation that regulates new food additives. The amendment states that the manufacturer
must prove that its product is safe, rather than the FDA having
to prove that it is unsafe. The onus was on Abbott Laboratories
to deflect concerns about the safety of the product, and it remained
unable to do so.
Cruise missile
The invention: Aircraft weapons system that makes it possible to
attack both land and sea targets with extreme accuracy without
endangering the lives of the pilots.
The person behind the invention:
Rear Admiral Walter M. Locke (1930- ), U.S. Navy project
manager
From the Buzz Bombs of World War II
During World War II, Germany developed and used two different
types of missiles: ballistic missiles and cruise missiles. A ballistic
missile is one that does not use aerodynamic lift in order to fly. It is
fired into the air by powerful rocket engines and reaches a high altitude;
when its engines are out of fuel, it descends on its flight path toward
its target. The German V-2 was the first ballistic missile. The United
States and other countries subsequently developed a variety of
highly sophisticated and accurate ballistic missiles.
The other missile used by Germany was a cruise missile called
the V-1, which was also called the flying bomb or the buzz bomb.
The V-1 used aerodynamic lift in order to fly, just as airplanes do. It
flew relatively low and was slow; by the end of the war, the British,
against whom it was used, had developed techniques for countering
it, primarily by shooting it down.
After World War II, both the United States and the Soviet Union
carried on the Germans’ development of both ballistic and cruise
missiles. The United States discontinued serious work on cruise
missile technology during the 1950’s: The development of ballistic
missiles of great destructive capability had been very successful.
Ballistic missiles armed with nuclear warheads had become the basis
for the U.S. strategy of attempting to deter enemy attacks with
the threat of a massive missile counterattack. In addition, aircraft
carriers provided an air-attack capability similar to that of cruise
missiles. Finally, cruise missiles were believed to be too vulnerable
to being shot down by enemy aircraft or surface-to-air missiles.
While ballistic missiles are excellent for attacking large, fixed targets,
they are not suitable for attacking moving targets. They can be
very accurately aimed, but since they are not very maneuverable
during their final descent, they are limited in their ability to change
course to hit a moving target, such as a ship.
In October, 1967, the Egyptians used a Soviet-built cruise
missile to sink the Israeli destroyer Elath. The U.S. military, primarily the
Navy and the Air Force, took note of the Egyptian success and
within a few years initiated cruise missile development programs.
The Development of Cruise Missiles
The United States probably could have developed cruise missiles
similar to 1990’s models as early as the 1960’s, but it would have required
a huge effort. The goal was to develop missiles that could be
launched from ships and planes using existing launching equipment,
could fly long distances at low altitudes at fairly high speeds,
and could reach their targets with a very high degree of accuracy. If
the missiles flew too slowly, they would be fairly easy to shoot
down, like the German V-1’s. If they flew at too high an altitude,
they would be vulnerable to the same type of surface-based missiles
that shot down Gary Powers, the pilot of the U.S. U-2 spy plane, in
1960. If they were inaccurate, they would be of little use.
The early Soviet cruise missiles were designed to meet their performance
goals without too much concern about how they would
be launched. They were fairly large, and the ships that launched
them required major modifications. The U.S. goal of being able to
launch using existing equipment, without making major modifications
to the ships and planes that would launch them, played a major
part in their torpedo-like shape: Sea-Launched Cruise Missiles
(SLCMs) had to fit in the submarine’s torpedo tubes, and Air-
Launched Cruise Missiles (ALCMs) were constrained to fit in rotary
launchers. The size limitation also meant that small, efficient jet engines
would be required that could fly the long distances required
without needing too great a fuel load. Small, smart computers were
needed to provide the required accuracy. The engine and computer
technologies began to be available in the 1970’s, and they blossomed
in the 1980’s.
The U.S. Navy initiated cruise missile development efforts in
1972; the Air Force followed in 1973. In 1977, the Joint Cruise Missile
Project was established, with the Navy taking the lead. Rear
Admiral Walter M. Locke was named project manager. The goal
was to develop air-, sea-, and ground-launched cruise missiles.
By coordinating activities, encouraging competition, and
requiring the use of common components wherever possible, the
cruise missile development program became a model for future
weapon-system development efforts. The primary contractors
included Boeing Aerospace Company, General Dynamics, and
McDonnell Douglas.
In 1978, SLCMs were first launched from submarines. Over the
next few years, increasingly demanding tests were passed by several
versions of cruise missiles. By the mid-1980’s, both antiship and
antiland missiles were available. An antiland version could be guided
to its target with extreme accuracy by comparing a map programmed
into its computer to the picture taken by an on-board video camera.
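The map-matching guidance described above can be thought of as a template-correlation search: the missile slides the camera's current view across a stored reference map and picks the offset where the two agree best. The actual systems used specialized techniques (such as terrain contour matching and digital scene matching); the sketch below is only a toy illustration of the idea, with made-up array sizes and a simple sum-of-squared-differences score.

```python
import numpy as np

def best_match_offset(reference_map, camera_frame):
    """Slide the camera frame over the stored reference map and return
    the (row, col) offset where the two images agree best, scored by
    sum of squared differences (smaller means better agreement)."""
    mh, mw = reference_map.shape
    fh, fw = camera_frame.shape
    best_score, best_offset = float("inf"), (0, 0)
    for r in range(mh - fh + 1):
        for c in range(mw - fw + 1):
            window = reference_map[r:r + fh, c:c + fw]
            score = np.sum((window - camera_frame) ** 2)
            if score < best_score:
                best_score, best_offset = score, (r, c)
    return best_offset

# Toy example: an 8x8 "map" and a 3x3 camera view cut from it at (4, 2).
rng = np.random.default_rng(0)
terrain = rng.random((8, 8))
frame = terrain[4:7, 2:5].copy()  # what the camera "sees"
print(best_match_offset(terrain, frame))  # -> (4, 2)
```

Once the offset of best agreement is known, the guidance computer knows where the missile is relative to the programmed map and can correct its course accordingly.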
The typical cruise missile is between 18 and 21 feet long, about 21
inches in diameter, and has a wingspan of between 8 and 12 feet.
Cruise missiles travel slightly below the speed of sound and have a
range of around 1,350 miles (antiland) or 250 miles (antiship). Both
conventionally armed and nuclear versions have been fielded.
Consequences
Cruise missiles have become an important part of the U.S. arsenal.
They provide a means of attacking targets on land and water
without having to put an aircraft pilot’s life in danger. Their value
was demonstrated in 1991 during the Persian Gulf War. One of their
uses was to “soften up” defenses prior to sending in aircraft, thus reducing
the risk to pilots. Overall estimates are that about 85 percent
of cruise missiles used in the Persian Gulf War arrived on target,
which is an outstanding record. It is believed that their extreme accuracy
also helped to minimize noncombatant casualties.
31 May 2009
Coronary artery bypass surgery
The invention: The most widely used procedure of its type, coronary
bypass surgery uses veins from the legs to improve circulation
to the heart.
The people behind the invention:
Rene Favaloro (1923-2000), a heart surgeon
Donald B. Effler (1915- ), a member of the surgical team
that performed the first coronary artery bypass operation
F. Mason Sones (1918- ), a physician who developed an
improved technique of X-raying the heart’s arteries
Fighting Heart Disease
In the mid-1960’s, the leading cause of death in the United States
was coronary artery disease, claiming nearly 250 deaths per 100,000
people. Because this number was so alarming, much research was
being conducted on the heart. Most of the public’s attention was focused
on heart transplants performed separately by the famous surgeons
Christiaan Barnard and Michael DeBakey. Yet other, less dramatic
procedures were being developed and studied.
A major problem with coronary artery disease, besides the threat
of death, is chest pain, or angina. Individuals whose arteries are
clogged with fat and cholesterol are frequently unable to deliver
enough oxygen to their heart muscles. This may result in angina,
which causes enough pain to limit their physical activities. Some of
the heart research in the mid-1960’s was an attempt to find a surgical
procedure that would eliminate angina in heart patients. The
various surgical procedures had varying success rates.
In the late 1950’s and early 1960’s, a team of physicians in Cleveland
was studying surgical procedures that would eliminate angina.
The team was composed of Rene Favaloro, Donald B. Effler, F.
Mason Sones, and Laurence Groves. They were working on the concept,
proposed by Dr. Arthur M. Vineberg from McGill University
in Montreal, of implanting a healthy artery from the chest into the
heart. This bypass procedure would provide the heart with another
source of blood, resulting in enough oxygen to overcome the angina.
Yet Vineberg’s surgery was often ineffective because it was hard to
determine exactly where to implant the new artery.
New Techniques
In order to make Vineberg’s proposed operation successful, better
diagnostic tools were needed. This was accomplished by the work
of Sones. He developed a diagnostic procedure, called “arteriography,”
whereby a catheter was inserted into an artery in the arm,
which he ran all the way into the heart. He then injected a dye into the
coronary arteries and photographed them with a high-speed motion-picture
camera. This provided an image of the heart, which made it
easy to determine where the blockages were in the coronary arteries.
Using this tool, the team tried several new techniques. First, the
surgeons tried to ream out the deposits found in the narrow portion
of the artery. They found, however, that this actually reduced
blood flow. Second, they tried slitting the length of the blocked
area of the artery and suturing a strip of tissue that would increase
the diameter of the opening. This was also ineffective because it often
resulted in turbulent blood flow. Finally, the team attempted to
reroute the flow of blood around the blockage by suturing in other
tissue, such as a portion of a vein from the upper leg. This bypass
procedure removed that part of the artery that was clogged and replaced
it with a clear vessel, thereby restoring blood flow through
the artery. This new method was introduced by Favaloro in 1967.
In order for Favaloro and other heart surgeons to perform coronary
artery surgery successfully, several other medical techniques
had to be developed. These included extracorporeal circulation and
microsurgical techniques.
Extracorporeal circulation is the process of diverting the patient’s
blood flow from the heart and into a heart-lung machine.
This procedure was developed in 1953 by U.S. surgeon John H.
Gibbon, Jr. Since the blood does not flow through the heart, the
heart can be temporarily stopped so that the surgeons can isolate
the artery and perform the surgery on motionless tissue.
Microsurgery is necessary because some of the coronary arteries
are less than 1.5 millimeters in diameter. Since these arteries
had to be sutured, optical magnification and very delicate and sophisticated
surgical tools were required. After the surgery had been performed
on numerous patients, follow-up studies were able to determine its
effectiveness. Only then was coronary artery bypass surgery recognized
as an effective procedure for reducing angina in heart patients.
Consequences
According to the American Heart Association, approximately
332,000 bypass surgeries were performed in the United States in
1987, an increase of 48,000 from 1986. These figures show that the
work by Favaloro and others has had a major impact on the
health of United States citizens. The future outlook is also positive.
It has been estimated that five million people had coronary
artery disease in 1987. Of this group, an estimated 1.5 million had
heart attacks and 500,000 died. Of those living, many experienced
angina. Research has developed new surgical procedures and
new drugs to help fight coronary artery disease. Yet coronary artery
bypass surgery is still a major form of treatment.
28 May 2009
Contact lenses
The invention: Small plastic devices that fit under the eyelids; contact
lenses, or “contacts,” frequently replace the more familiar
eyeglasses that many people wear to correct vision problems.
The people behind the invention:
Leonardo da Vinci (1452-1519), an Italian artist and scientist
Adolf Eugen Fick (1829-1901), a German glassblower
Kevin Tuohy, an American optician
Otto Wichterle (1913- ), a Czech chemist
William Feinbloom (1904-1985), an American optometrist
An Old Idea
There are two main types of contact lenses: hard and soft. Both
types are made of synthetic polymers (plastics). The basic concept of
the contact lens was conceived by Leonardo da Vinci in 1508. He
proposed that vision could be improved if small glass ampules
filled with water were placed in front of each eye. Nothing came of
the idea until glass scleral lenses were invented by the German
glassblower Adolf Fick. Fick’s large, heavy lenses covered the pupil
of the eye, its colored iris, and part of the sclera (the white of the
eye). Fick’s lenses were not useful, since they were painful to wear.
In the mid-1930’s, however, plastic scleral lenses were developed
by various organizations and people, including the German company
I. G. Farben and the American optometrist William Feinbloom.
These lenses were light and relatively comfortable; they
could be worn for several hours at a time.
In 1945, the American optician Kevin Tuohy developed corneal
lenses, which covered only the cornea of the eye. Reportedly,
Tuohy’s invention was inspired by the fact that his nearsighted wife
could not bear scleral lenses but hated to wear eyeglasses. Tuohy’s
lenses were hard contact lenses made of rigid plastic, but they were
much more comfortable than scleral lenses and could be worn for
longer periods of time. Soon after, other people developed soft contact
lenses, which cover both the cornea and the iris. At present, many
kinds of contact lenses are available. Both hard and soft contact
lenses have advantages for particular uses.
Eyes, Tears, and Contact Lenses
The camera-like human eye automatically focuses itself and adjusts
to the prevailing light intensity. In addition, it never runs out of
“film” and makes a continuous series of visual images. In the process
of seeing, light enters the eye and passes through the clear,
dome-shaped cornea, through the hole (the pupil) in the colored
iris, and through the clear eye lens, which can change shape by
means of muscle contraction. The lens focuses the light, which next
passes across the jellylike “vitreous humor” and hits the retina.
There, light-sensitive retinal cells send visual images to the optic
nerve, which transmits them to the brain for interpretation.
Many people have 20/20 (normal) vision, which means that they
can clearly see letters on a designated line of a standard eye chart
placed 20 feet away. Nearsighted (myopic) people have vision of
20/40 or worse. This means that, 20 feet from the eye chart, they see
clearly what people with 20/20 vision can see clearly at a greater
distance.
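The Snellen notation described above is simply a ratio of the test distance to the distance at which a normal eye resolves the same letters. A tiny helper (the function name is illustrative) makes the relationship concrete:

```python
def snellen_ratio(test_distance_ft, normal_distance_ft):
    """Decimal visual acuity from a Snellen fraction.
    20/20 -> 1.0 (normal vision); 20/40 -> 0.5 (the viewer sees at
    20 feet what a normal eye sees clearly at 40 feet)."""
    return test_distance_ft / normal_distance_ft

print(snellen_ratio(20, 20))  # 1.0
print(snellen_ratio(20, 40))  # 0.5
```

So a person with 20/40 vision has half the resolving power of a person with 20/20 vision at the same distance.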
Myopia (nearsightedness) is one of the four most common visual
defects. The others are hyperopia, astigmatism, and presbyopia. All
are called “refractive errors” and are corrected with appropriate
eyeglasses or contact lenses. Myopia, which occurs in 30 percent of
humans, occurs when the eyeball is too long for the lens’s focusing
ability and images of distant objects focus before they reach the retina,
causing blurry vision. Hyperopia, or farsightedness, occurs
when the eyeballs are too short. In hyperopia, the eye’s lenses cannot
focus images of nearby objects by the time those images reach
the retina, resulting in blurry vision. A more common condition is
astigmatism, in which incorrectly shaped corneas make all objects
appear blurred. Finally, presbyopia, part of the aging process,
causes the lens of the eye to lose its elasticity. It causes progressive
difficulty in seeing nearby objects. In myopic, hyperopic, or astigmatic
people, bifocal (two-lens) systems are used to correct presbyopia,
whereas monofocal systems are used to correct presbyopia in
people whose vision is otherwise normal.
Modern contact lenses, which many people prefer to eyeglasses,
are used to correct all common eye defects as well as many others
not mentioned here. The lenses float on the layer of tears that is
made continuously to nourish the eye and keep it moist. They fit under
the eyelids and either over the cornea or over both the cornea
and the iris, and they correct visual errors by altering the eye’s focal
length enough to produce 20/20 vision. In addition to being more attractive
than eyeglasses, contact lenses correct visual defects more effectively
than eyeglasses can. Some soft contact lenses (all are made
of flexible plastics) can be worn almost continuously. Hard lenses are made of more rigid plastic and last longer, though they can usually be
worn only for six to nine hours at a time. The choice of hard or soft
lenses must be made on an individual basis.
The disadvantages of contact lenses include the fact that they must
be cleaned frequently to prevent eye irritation. Furthermore, people
who do not produce adequate amounts of tears (a condition called
“dry eyes”) cannot wear them. Also, arthritis, many allergies, and
poor manual dexterity caused by old age or physical problems make
many people poor candidates for contact lenses.
Impact
The invention of Plexiglas hard scleral contact lenses set the stage
for the development of the widely used corneal hard lenses by Tuohy.
The development of soft contact lenses available to the general public
began in Czechoslovakia in the 1960’s. It led to the sale, starting in the
1970’s, of the popular soft contact lenses pioneered by Otto
Wichterle. The Wichterle lenses, which cover both the cornea and
the iris, are made of a plastic called HEMA (hydroxyethyl methacrylate).
These very thin lenses have disadvantages that include the
requirement of disinfection between uses, incomplete astigmatism
correction, low durability, and the possibility of chemical combination
with some medications, which can damage the eyes. Therefore,
much research is being carried out to improve them. For this reason,
and because of the continued popularity of hard lenses, new kinds
of soft and hard lenses are continually coming on the market.
24 May 2009
Computer chips
The invention: Also known as a microprocessor, a computer chip
combines the basic logic circuits of a computer on a single silicon
chip.
The people behind the invention:
Robert Norton Noyce (1927-1990), an American physicist
William Shockley (1910-1989), an American coinventor of the
transistor who was a cowinner of the 1956 Nobel Prize in
Physics
Marcian Edward Hoff, Jr. (1937- ), an American engineer
Jack St. Clair Kilby (1923- ), an American researcher and
assistant vice president of Texas Instruments
The Shockley Eight
The microelectronics industry began shortly after World War II
with the invention of the transistor. While radar was being developed
during the war, it was discovered that certain crystalline substances,
such as germanium and silicon, possess unique electrical
properties that make them excellent signal detectors. This class of
materials became known as “semiconductors,” because their ability
to conduct electricity lies between that of conductors and insulators.
Immediately after the war, scientists at Bell Telephone Laboratories
began to conduct research on semiconductors in the hope that
they might yield some benefits for communications. The Bell physicists
learned to control the electrical properties of semiconductor
crystals by “doping” (treating) them with minute impurities. When
two thin wires for current were attached to this material, a crude device
was obtained that could amplify the voice. The transistor, as
this device was called, was developed late in 1947. The transistor
duplicated many functions of vacuum tubes; it was also smaller, required
less power, and generated less heat. The three Bell Laboratories
scientists who guided its development—William Shockley,
Walter H. Brattain, and John Bardeen—won the 1956 Nobel Prize in
Physics for their work.
Shockley left Bell Laboratories and went to Palo Alto, California,
where he formed his own company, Shockley Semiconductor Laboratories,
which was a subsidiary of Beckman Instruments. Palo Alto
is the home of Stanford University, which, in 1954, set aside 655
acres of land for a high-technology industrial area known as Stanford
Research Park. One of the first small companies to lease a site
there was Hewlett-Packard. Many others followed, and the surrounding
area of Santa Clara County gave rise in the 1960’s and
1970’s to a booming community of electronics firms that became
known as “Silicon Valley.” On the strength of his prestige, Shockley
recruited eight young scientists from the eastern United States to
work for him. One was Robert Norton Noyce, an Iowa-bred physicist
with a doctorate from the Massachusetts Institute of Technology.
Noyce came to Shockley’s company in 1956.
The “Shockley Eight,” as they became known in the industry,
soon found themselves at odds with their boss over issues of research
and development. Seven of the dissenting scientists negotiated
with industrialist Sherman Fairchild, and they convinced the
remaining holdout, Noyce, to join them as their leader. The Shockley Eight defected in 1957 to form a new company, Fairchild Semiconductor,
in nearby Mountain View, California. Shockley’s company,
which never recovered from the loss of these scientists, soon
went out of business.
Integrating Circuits
Research efforts at Fairchild Semiconductor and Texas Instruments,
in Dallas, Texas, focused on putting several transistors on
one piece, or “chip,” of silicon. The first step involved making miniaturized
electrical circuits. Jack St. Clair Kilby, a researcher at Texas
Instruments, succeeded in making a circuit on a chip that consisted
of tiny resistors, transistors, and capacitors, all of which were connected
with gold wires. He and his company filed for a patent on
this “integrated circuit” in February, 1959. Noyce and his associates
at Fairchild Semiconductor followed in July of that year with an integrated
circuit manufactured by means of a “planar process,”
which involved laying down several layers of semiconductor that
were isolated by layers of insulating material. Although Kilby and
Noyce are generally recognized as coinventors of the integrated circuit,
Kilby alone received a membership in the National Inventors
Hall of Fame for his efforts.
Consequences
By 1968, Fairchild Semiconductor had grown to a point at which
many of its key Silicon Valley managers had major philosophical
differences with the East Coast management of their parent company.
This led to a major exodus of top-level management and engineers.
Many started their own companies. Noyce, Gordon E. Moore,
and Andrew Grove left Fairchild to form a new company in Santa
Clara called Intel with $2 million that had been provided by venture
capitalist Arthur Rock. Intel’s main business was the manufacture
of computer memory integrated circuit chips. By 1970, Intel was
able to develop and bring to market a random-access memory
(RAM) chip that was subsequently purchased in large quantities by
several major computer manufacturers, providing large profits for
Intel.
In 1969, Marcian Edward Hoff, Jr., an Intel research and development
engineer, met with engineers from Busicom, a Japanese firm.
These engineers wanted Intel to design a set of integrated circuits for
Busicom’s desktop calculators, but Hoff told them their specifications
were too complex. Nevertheless, Hoff began to think about the possibility of incorporating all the logic circuits of a computer central processing
unit (CPU) into one chip. He began to design a chip called a
“microprocessor,” which, when combined with a chip that would
hold a program and one that would hold data, would become a small,
general-purpose computer. Noyce encouraged Hoff and his associates
to continue his work on the microprocessor, and Busicom contracted
with Intel to produce the chip. Federico Faggin, who was hired from
Fairchild, did the chip layout and circuit drawings.
In January, 1971, the Intel team finished its first working microprocessor,
the 4004. The following year, Intel made a higher-capacity
microprocessor, the 8008, for Computer Terminals Corporation.
That company contracted with Texas Instruments to produce a chip
with the same specifications as the 8008, which was produced in
June, 1972. Other manufacturers soon produced their own microprocessors.
The Intel microprocessor became the most widely used computer
chip in the budding personal computer industry and deserves much
of the credit for the PC “revolution” that soon followed.
Microprocessors have become so common that people use them every
day without realizing it. In addition to being used in computers, the microprocessor has found its way into automobiles, microwave
ovens, wristwatches, telephones, and many other ordinary items.