23 June 2009
FORTRAN programming language
The invention: The first major computer programming language,
FORTRAN supported programming in a mathematical language
that was natural to scientists and engineers and achieved unsurpassed
success in scientific computation.
The people behind the invention:
John Backus (1924- ), an American software engineer and
manager
John W. Mauchly (1907-1980), an American physicist and
engineer
Herman Heine Goldstine (1913- ), a mathematician and
computer scientist
John von Neumann (1903-1957), a Hungarian American
mathematician and physicist
Talking to Machines
Formula Translation, or FORTRAN—the first widely accepted
high-level computer language—was completed by John Backus
and his coworkers at the International Business Machines (IBM)
Corporation in April, 1957. Designed to support programming
in a mathematical language that was natural to scientists and engineers,
FORTRAN achieved unsurpassed success in scientific
computation.
Computer languages are means of specifying the instructions
that a computer should execute and the order of those instructions.
Computer languages can be divided into categories of progressively
higher degrees of abstraction. At the lowest level is binary
code, or machine code: Binary digits, or “bits,” specify in
complete detail every instruction that the machine will execute.
This was the only language available in the early days of computers,
when such machines as the ENIAC (Electronic Numerical Integrator
and Calculator) required hand-operated switches and
plugboard connections. All higher levels of language are implemented by having a program translate instructions written in the
higher language into binary machine language (also called “object
code”). High-level languages (also called “programming languages”)
are largely or entirely independent of the underlying
machine structure. FORTRAN was the first language of this type
to win widespread acceptance.
The emergence of machine-independent programming languages
was a gradual process that spanned the first decade of electronic
computation. One of the earliest developments was the invention of
“flowcharts,” or “flow diagrams,” by Herman Heine Goldstine and
John von Neumann in 1947. Flowcharting became the most influential
software methodology during the first twenty years of
computing.
Short Code was the first language to be implemented that contained
some high-level features, such as the ability to use mathematical
equations. The idea came from John W. Mauchly, and it was
implemented on the BINAC (Binary Automatic Computer) in 1949
with an “interpreter”; later, it was carried over to the UNIVAC (Universal
Automatic Computer) I. Interpreters are programs that do
not translate commands into a series of object-code instructions; instead,
they directly execute (interpret) those commands. Every time
the interpreter encounters a command, that command must be interpreted
again. “Compilers,” however, convert the entire command
into object code before it is executed.
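The distinction can be made concrete with a minimal sketch in Python (a modern stand-in; the toy expression language, opcodes, and function names below are invented for illustration and are not Short Code or FORTRAN itself): an interpreter re-examines every command each time it is executed, while a compiler translates the whole program into lower-level instructions once, and only those instructions are executed afterward.

```python
# Toy illustration of "interpret" versus "compile" for a tiny expression
# language. The language, opcodes, and function names are invented for
# this sketch; they do not correspond to Short Code or FORTRAN itself.

def interpret(expr, env):
    """Walk the expression tree and evaluate it directly; every node is
    re-examined on each call, the overhead an interpreter pays repeatedly."""
    op = expr[0]
    if op == "num":
        return expr[1]
    if op == "var":
        return env[expr[1]]
    left, right = interpret(expr[1], env), interpret(expr[2], env)
    return left + right if op == "add" else left * right

def compile_expr(expr, code=None):
    """Translate the expression once into a flat list of stack-machine
    instructions, a stand-in for object code."""
    if code is None:
        code = []
    op = expr[0]
    if op == "num":
        code.append(("PUSH", expr[1]))
    elif op == "var":
        code.append(("LOAD", expr[1]))
    else:
        compile_expr(expr[1], code)
        compile_expr(expr[2], code)
        code.append(("ADD",) if op == "add" else ("MUL",))
    return code

def run(code, env):
    """Execute the 'object code' on a simple stack machine."""
    stack = []
    for instr in code:
        if instr[0] == "PUSH":
            stack.append(instr[1])
        elif instr[0] == "LOAD":
            stack.append(env[instr[1]])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if instr[0] == "ADD" else a * b)
    return stack.pop()

# y = a * x + b, evaluated both ways
expr = ("add", ("mul", ("var", "a"), ("var", "x")), ("var", "b"))
env = {"a": 2.0, "x": 10.0, "b": 5.0}
print(interpret(expr, env))   # 25.0; the tree is re-walked on every evaluation
obj = compile_expr(expr)      # translated once...
print(run(obj, env))          # 25.0; only the object code is executed this time
```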
Much early effort went into creating ways to handle commonly
encountered problems—particularly scientific mathematical
calculations. A number of interpretive languages arose to
support these features. As long as such complex operations had
to be performed by software (computer programs), however, scientific
computation would be relatively slow. Therefore, Backus
lobbied successfully for a direct hardware implementation of these
operations on IBM’s new scientific computer, the 704. Backus then
started the Programming Research Group at IBM in order to develop
a compiler that would allow programs to be written in a
mathematically oriented language rather than a machine-oriented
language. In November of 1954, the group defined an initial version
of FORTRAN.
A More Accessible Language
Before FORTRAN was developed, a computer had to perform a
whole series of tasks to make certain types of mathematical calculations.
FORTRAN made it possible for the same calculations to be
performed much more easily. In general, FORTRAN supported constructs
with which scientists were already acquainted, such as functions
and multidimensional arrays. In defining a powerful notation
that was accessible to scientists and engineers, FORTRAN opened
up programming to a much wider community.
Backus’s success in getting the IBM 704’s hardware to support
scientific computation directly, however, posed a major challenge:
Because such computation would be much faster, the object code
produced by FORTRAN would also have to be much faster. The
lower-level compilers preceding FORTRAN produced programs
that were usually five to ten times slower than their hand-coded
counterparts; therefore, efficiency became the primary design objective
for Backus. The highly publicized claims for FORTRAN met
with widespread skepticism among programmers. Much of the
team’s efforts, therefore, went into discovering ways to produce the
most efficient object code.
The efficiency of the compiler produced by Backus, combined
with its clarity and ease of use, guaranteed the system’s success. By
1959, many IBM 704 users programmed exclusively in FORTRAN.
By 1963, virtually every computer manufacturer either had delivered
or had promised a version of FORTRAN.
Incompatibilities among manufacturers were minimized by the
popularity of IBM’s version of FORTRAN; every company wanted
to be able to support IBM programs on its own equipment. Nevertheless,
there was sufficient interest in obtaining a standard for
FORTRAN that the American National Standards Institute adopted
a formal standard for it in 1966. A revised standard was adopted in
1978, yielding FORTRAN 77.
Consequences
In demonstrating the feasibility of efficient high-level languages,
FORTRAN inaugurated a period of great proliferation of programming languages. Most of these languages attempted to provide similar
or better high-level programming constructs oriented toward a
different, nonscientific programming environment. COBOL, for example,
stands for “Common Business Oriented Language.”
FORTRAN, while remaining the dominant language for scientific
programming, has not found general acceptance among nonscientists.
An IBM project established in 1963 to extend FORTRAN
found the task too unwieldy and instead ended up producing an entirely
different language, PL/I, which was delivered in 1966. In the
beginning, Backus and his coworkers believed that their revolutionary
language would virtually eliminate the burdens of coding and
debugging. Instead, FORTRAN launched software as a field of
study and an industry in its own right.
In addition to stimulating the introduction of new languages,
FORTRAN encouraged the development of operating systems. Programming
languages had already grown into simple operating systems
called “monitors.” Operating systems since then have been
greatly improved so that they support, for example, simultaneously
active programs (multiprogramming) and the networking (combining)
of multiple computers.
21 June 2009
Food freezing
The invention: It was long known that low temperatures helped to
protect food against spoiling; the invention that made frozen
food practical was a method of freezing items quickly. Clarence
Birdseye’s quick-freezing technique made possible a revolution
in food preparation, storage, and distribution.
The people behind the invention:
Clarence Birdseye (1886-1956), a scientist and inventor
Donald K. Tressler (1894-1981), a researcher at Cornell
University
Amanda Theodosia Jones (1835-1914), a food-preservation
pioneer
Feeding the Family
In 1917, Clarence Birdseye developed a means of quick-freezing
meat, fish, vegetables, and fruit without substantially changing
their original taste. His system of freezing was called by Fortune
magazine “one of the most exciting and revolutionary ideas in the
history of food.” Birdseye went on to refine and perfect his method
and to promote the frozen foods industry until it became a commercial
success nationwide.
It was during a trip to Labrador, where he worked as a fur trader,
that Birdseye was inspired by this idea. Birdseye’s new wife and
five-week-old baby had accompanied him there. In order to keep
his family well fed, he placed barrels of fresh cabbages in salt water
and then exposed the vegetables to freezing winds. Successful at
preserving vegetables, he went on to freeze a winter’s supply of
ducks, caribou, and rabbit meat.
In the following years, Birdseye experimented with many freezing
techniques. His equipment was crude: an electric fan, ice, and salt
water. His earliest experiments were on fish and rabbits, which he
froze and packed in old candy boxes. By 1924, he had borrowed
money against his life insurance and was lucky enough to find three
partners willing to invest in his new General Seafoods Company (later renamed General Foods), located in Gloucester, Massachusetts.
Although it was Birdseye’s genius that put the principles of
quick-freezing to work, he did not actually invent quick-freezing.
The scientific principles involved had been known for some time.
As early as 1842, a patent for freezing fish had been issued in England.
Nevertheless, the commercial exploitation of the freezing
process could not have happened until the end of the 1800’s, when
mechanical refrigeration was invented. Even then, Birdseye had to
overcome major obstacles.
Finding a Niche
By the 1920’s, there still were few mechanical refrigerators in
American homes. It would take years before adequate facilities for
food freezing and retail distribution would be established across the
United States. By the late 1930’s, frozen foods had, indeed, found their
role in commerce but still could not compete with canned or fresh
foods. Birdseye had to work tirelessly to promote the industry, writing
and delivering numerous lectures and articles to advance its
popularity. His efforts were helped by scientific research conducted
at Cornell University by Donald K. Tressler and by C. R. Fellers of
what was then Massachusetts State College. Also, during World
War II (1939-1945), more Americans began to accept the idea: Rationing,
combined with a shortage of canned foods, contributed to
the demand for frozen foods. The armed forces made large purchases
of these items as well.
General Foods was the first to use a system of extremely rapid
freezing of perishable foods in packages. Under the Birdseye system,
fresh foods, such as berries or lobster, were packaged snugly in convenient
square containers. Then, the packages were pressed between
refrigerated metal plates under pressure at 50 degrees below zero.
Two types of freezing machines were used. The “double belt” freezer
consisted of two metal belts that moved through a 15-meter freezing
tunnel, while a special salt solution was sprayed on the surfaces of
the belts. This double-belt freezer was used only in permanent installations
and was soon replaced by the “multiplate” freezer, which was
portable and required only 11.5 square meters of floor space compared
to the double belt’s 152 square meters.
The multiplate freezer also made it possible to apply the technique
of quick-freezing to seasonal crops. People were able to transport
these freezers easily from one harvesting field to another,
where they were used to freeze crops such as peas fresh off the vine.
The handy multiplate freezer consisted of an insulated cabinet
equipped with refrigerated metal plates. Stacked one above the
other, these plates were capable of being opened and closed to receive
food products and to compress them with evenly distributed
pressure. Each aluminum plate had internal passages through which
ammonia flowed and expanded at a temperature of -3.8 degrees
Celsius, thus causing the foods to freeze.
A major benefit of the new frozen foods was that their taste and vitamin content were not lost. Ordinarily, when food is frozen
slowly, ice crystals form, which slowly rupture food cells, thus altering
the taste of the food. With quick-freezing, however, the food
looks, tastes, and smells like fresh food. Quick-freezing also cuts
down on bacteria.
Impact
During the months between one food harvest and the next, humankind
requires trillions of pounds of food to survive. In many
parts of the world, an adequate supply of food is available; elsewhere,
much food goes to waste and many go hungry. Methods of
food preservation such as those developed by Birdseye have done
much to help those who cannot obtain proper fresh foods. Preserving
perishable foods also means that they will be available in
greater quantity and variety all year-round. In all parts of the world,
both tropical and arctic delicacies can be eaten in any season of the
year.
With the rise in popularity of frozen “fast” foods, nutritionists
began to study their effect on the human body. Research has shown
that fresh is the most beneficial. In an industrial nation with many
people, the distribution of fresh commodities is, however, difficult.
It may be many decades before scientists know the long-term effects
on generations raised primarily on frozen foods.
FM radio
The invention: A method of broadcasting radio signals by modulating
the frequency, rather than the amplitude, of radio waves,
FM radio greatly improved the quality of sound transmission.
The people behind the invention:
Edwin H. Armstrong (1890-1954), the inventor of FM radio
broadcasting
David Sarnoff (1891-1971), the founder of RCA
An Entirely New System
Because early radio broadcasts used amplitude modulation (AM)
to transmit their sounds, they were subject to a sizable amount of interference
and static. Since good AM reception relies on the amount
of energy transmitted, energy sources in the atmosphere between
the station and the receiver can distort or weaken the original signal.
This is particularly irritating for the transmission of music.
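The contrast can be sketched in a few lines of Python with NumPy (the carrier frequency, frequency swing, and noise level below are arbitrary illustration values, not parameters of Armstrong’s system): in AM the message rides on the carrier’s amplitude, which is exactly what atmospheric noise disturbs, whereas in FM it rides on the carrier’s instantaneous frequency, so a receiver can strip away amplitude disturbances.

```python
# Sketch of amplitude versus frequency modulation using NumPy. The carrier
# frequency, frequency swing, and noise level are illustrative values only.
import numpy as np

fs = 48_000                              # samples per second
t = np.arange(0, 0.01, 1 / fs)           # 10 ms of signal
message = np.sin(2 * np.pi * 440 * t)    # the audio to be transmitted
fc = 10_000                              # carrier frequency in Hz

# AM: the message varies the carrier's amplitude (its envelope).
am = (1 + 0.5 * message) * np.cos(2 * np.pi * fc * t)

# FM: the message varies the carrier's instantaneous frequency; the phase
# is the running (cumulative) sum of that frequency.
deviation = 2_000                        # frequency swing in Hz
phase = 2 * np.pi * np.cumsum(fc + deviation * message) / fs
fm = np.cos(phase)

# Additive static perturbs amplitude. For AM that is exactly where the
# information lives; an FM receiver can hard-limit the amplitude back to
# a constant level and read the message from the frequency instead.
noise = 0.3 * np.random.randn(t.size)
noisy_am = am + noise                    # envelope, and thus the message, corrupted
noisy_fm = np.sign(fm + noise)           # limiter stage; zero-crossing timing largely kept
```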
Edwin H. Armstrong provided a solution to this technological
constraint. A graduate of Columbia University, Armstrong made a
significant contribution to the development of radio with his basic
inventions for circuits for AM receivers. (Indeed, the monies Armstrong
received from his earlier inventions financed the development
of the frequency modulation, or FM, system.) Armstrong was
one among many contributors to AM radio. For FM broadcasting,
however, Armstrong must be ranked as the most important inventor.
During the 1920’s, Armstrong established his own research laboratory
in Alpine, New Jersey, across the Hudson River from New
York City. With a small staff of dedicated assistants, he carried out
research on radio circuitry and systems for nearly three decades. At
that time, Armstrong also began to teach electrical engineering at
Columbia University.
From 1928 to 1933, Armstrong worked diligently at his private
laboratory at Columbia University to construct a working model of
an FM radio broadcasting system. With the primitive limitations
then imposed on the state of vacuum tube technology, a number of Armstrong’s experimental circuits required as many as one hundred
tubes. Between July, 1930, and January, 1933, Armstrong filed
four basic FM patent applications. All were granted simultaneously
on December 26, 1933.
Armstrong sought to perfect FM radio broadcasting, not to offer
radio listeners better musical reception but to create an entirely
new radio broadcasting system. On November 5, 1935, Armstrong
made his first public demonstration of FM broadcasting in New
York City to an audience of radio engineers. An amateur station
based in suburban Yonkers, New York, transmitted these first signals.
The scientific world began to consider the advantages and
disadvantages of Armstrong’s system; other laboratories began to
craft their own FM systems.
Corporate Conniving
Because Armstrong had no desire to become a manufacturer or
broadcaster, he approached David Sarnoff, head of the Radio Corporation
of America (RCA). As the owner of the top manufacturer
of radio sets and the top radio broadcasting network, Sarnoff was
interested in all advances of radio technology. Armstrong first demonstrated
FM radio broadcasting for Sarnoff in December, 1933.
This was followed by visits from RCA engineers, who were sufficiently
impressed to recommend to Sarnoff that the company conduct
field tests of the Armstrong system.
In 1934, Armstrong, with the cooperation of RCA, set up a test
transmitter at the top of the Empire State Building, sharing facilities
with the experimental RCA television transmitter. From 1934 through
1935, tests were conducted using the Empire State facility, to mixed
reactions of RCA’s best engineers. AM radio broadcasting already
had a performance record of nearly two decades. The engineers
wondered if this new technology could replace something that had
worked so well.
This less-than-enthusiastic evaluation fueled the skepticism of
RCA lawyers and salespeople. RCA had too much invested in the
AM system, both as a leading manufacturer and as the dominant
owner of the major radio network of the time, the National Broadcasting
Company (NBC). Sarnoff was in no rush to adopt FM. To change systems would risk the millions of dollars RCA was making
as America emerged from the Great Depression.
In 1935, Sarnoff advised Armstrong that RCA would cease any
further research and development activity in FM radio broadcasting.
(Still, engineers at RCA laboratories continued to work on FM
to protect the corporate patent position.) Sarnoff declared to the
press that his company would push the frontiers of broadcasting by
concentrating on research and development of radio with pictures,
that is, television. As a tangible sign, Sarnoff ordered that Armstrong’s
FM radio broadcasting tower be removed from the top of
the Empire State Building.
Armstrong was outraged. By the mid-1930’s, the development of
FM radio broadcasting had become a mission for Armstrong. For
the remainder of his life, Armstrong devoted his considerable talents
to the promotion of FM radio broadcasting.
Impact
After the break with Sarnoff, Armstrong proceeded with plans to
develop his own FM operation. Allied with two of RCA’s biggest
manufacturing competitors, Zenith and General Electric, Armstrong
pressed ahead. In June of 1936, at a Federal Communications Commission
(FCC) hearing, Armstrong proclaimed that FM broadcasting
was the only static-free, noise-free, and uniform system—both
day and night—available. He argued, correctly, that AM radio broadcasting
had none of these qualities.
During World War II (1939-1945), Armstrong gave the military
permission to use FM with no compensation. That patriotic gesture
cost Armstrong millions of dollars when the military soon became
all FM. It did, however, expand interest in FM radio broadcasting.
World War II had provided a field test of equipment and use.
By the 1970’s, FM radio broadcasting had grown tremendously.
By 1972, one in three radio listeners tuned into an FM station some
time during the day. Advertisers began to use FM radio stations to
reach the young and affluent audiences that were turning to FM stations
in greater numbers.
By the late 1970’s, FM radio stations were outnumbering AM stations.
By 1980, nearly half of radio listeners tuned into FM stations on a regular basis. A decade later, FM radio listening accounted for
more than two-thirds of audience time. Armstrong’s predictions
that listeners would prefer the clear, static-free sounds offered by
FM radio broadcasting had come to pass by the mid-1980’s, nearly
fifty years after Armstrong had commenced his struggle to make
FM radio broadcasting a part of commercial radio.
Fluorescent lighting
The invention: A form of electrical lighting that uses a glass tube
coated with phosphor that gives off a cool bluish light and emits
ultraviolet radiation.
The people behind the invention:
Vincenzo Cascariolo (1571-1624), an Italian alchemist and
shoemaker
Heinrich Geissler (1814-1879), a German glassblower
Peter Cooper Hewitt (1861-1921), an American electrical
engineer
Celebrating the “Twelve Greatest Inventors”
On the night of November 23, 1936, more than one thousand industrialists,
patent attorneys, and scientists assembled in the main
ballroom of the Mayflower Hotel in Washington, D.C., to celebrate
the one hundredth anniversary of the U.S. Patent Office. A transport
liner over the city radioed the names chosen by the Patent Office as
America’s “Twelve Greatest Inventors,” and, as the distinguished
group strained to hear those names, “the room was flooded for a
moment by the most brilliant light yet used to illuminate a space
that size.”
Thus did The New York Times summarize the commercial introduction
of the fluorescent lamp. The twelve inventors present were
Thomas Alva Edison, Robert Fulton, Charles Goodyear, Charles
Hall, Elias Howe, Cyrus Hall McCormick, Ottmar Mergenthaler,
Samuel F. B. Morse, George Westinghouse, Wilbur Wright, and Eli
Whitney. There was, however, no name to bear the honor for inventing
fluorescent lighting. That honor is shared by many who participated
in a very long series of discoveries.
The fluorescent lamp operates as a low-pressure, electric discharge
inside a glass tube that contains a droplet of mercury and a
gas, commonly argon. The inside of the glass tube is coated with
fine particles of phosphor. When electricity is applied to the gas, the
mercury gives off a bluish light and emits ultraviolet radiation. When bathed in the strong ultraviolet radiation emitted by the mercury,
the phosphor fluoresces (emits light).
The setting for the introduction of the fluorescent lamp began at
the beginning of the 1600’s, when Vincenzo Cascariolo, an Italian
shoemaker and alchemist, discovered a substance that gave off a
bluish glow in the dark after exposure to strong sunlight. The fluorescent
substance was apparently barium sulfide and was so unusual
for that time and so valuable that its formulation was kept secret
for a long time. Gradually, however, scholars became aware of
the preparation secrets of the substance and studied it and other luminescent
materials.
Further studies in fluorescent lighting were made by the German
physicist Johann Wilhelm Ritter. He observed the luminescence of
phosphors that were exposed to various “exciting” lights. In 1801,
he noted that some phosphors shone brightly when illuminated by
light that the eye could not see (ultraviolet light). Ritter thus discovered
the ultraviolet region of the light spectrum. The use of phosphors
to transform ultraviolet light into visible light was an important
step in the continuing development of the fluorescent lamp.
The British mathematician and physicist Sir George Gabriel Stokes
studied the phenomenon as well. It was he who, in 1852, termed the
afterglow “fluorescence.”
Geissler Tubes
While these advances were being made, other workers were trying
to produce a practical form of electric light. In 1706, the English
physicist Francis Hauksbee devised an electrostatic generator, which
is used to accelerate charged particles to very high levels of electrical
energy. He then connected the device to a glass “jar,” used a vacuum pump to evacuate the jar to a low pressure, and tested his
generator. In so doing, Hauksbee obtained the first human-made
electrical glow discharge by “capturing lightning” in a jar.
In 1854, Heinrich Geissler, a glassblower and apparatus maker,
opened his shop in Bonn, Germany, to make scientific instruments;
in 1855, he produced a vacuum pump that used liquid mercury as
an evacuation fluid. That same year, Geissler made the first gaseous
conduction lamps while working in collaboration with the German
scientist Julius Plücker. Plücker referred to these lamps as “Geissler
tubes.” Geissler was able to create red light with neon gas filling a
lamp and light of nearly all colors by using certain types of gas
within each of the lamps. Thus, both the neon sign business and the
science of spectroscopy were born.
Geissler tubes were studied extensively by a variety of workers.
At the beginning of the twentieth century, the practical American
engineer Peter Cooper Hewitt put these studies to use by marketing
the first low-pressure mercury vapor lamps. The lamps were quite
successful, although they required high voltage for operation, emitted
an eerie blue-green light, and shone dimly by comparison with their
eventual successor, the fluorescent lamp. At about the same time,
systematic studies of phosphors had finally begun.
By the 1920’s, a number of investigators had discovered that the
low-pressure mercury vapor discharge marketed by Hewitt was an
extremely efficient method for producing ultraviolet light, if the
mercury and rare gas pressures were properly adjusted. With a
phosphor to convert the ultraviolet light back to visible light, the
Hewitt lamp made an excellent light source.
Impact
The introduction of fluorescent lighting in 1936 presented the
public with a completely new form of lighting that had enormous
advantages of high efficiency, long life, and relatively low cost.
By 1938, production of fluorescent lamps was well under way. By
April, 1938, four sizes of fluorescent lamps in various colors had
been offered to the public and more than two hundred thousand
lamps had been sold.
During 1939 and 1940, two great expositions—the New York World’s Fair and the San Francisco International Exposition—
helped popularize fluorescent lighting. Thousands of tubular fluorescent
lamps formed a great spiral in the “motor display salon,”
the car showroom of the General Motors exhibit at the New York
World’s Fair. Fluorescent lamps lit the Polish Restaurant and hung
in vertical clusters on the flagpoles along the Avenue of the Flags at
the fair, while two-meter-long, upright fluorescent tubes illuminated
buildings at the San Francisco International Exposition.
When the United States entered World War II (1939-1945), the
demand for efficient factory lighting soared. In 1941, more than
twenty-one million fluorescent lamps were sold. Technical advances
continued to improve the fluorescent lamp. By the 1990’s,
this type of lamp supplied most of the world’s artificial lighting.
20 June 2009
Floppy disk
The invention: Inexpensive magnetic medium for storing and
moving computer data.
The people behind the invention:
Andrew D. Booth (1918- ), an English inventor who
developed paper disks as a storage medium
Reynold B. Johnson (1906-1998), a design engineer at IBM’s
research facility who oversaw development of magnetic disk
storage devices
Alan Shugart (1930- ), an engineer at IBM’s research
laboratory who first developed the floppy disk as a means of
mass storage for mainframe computers
First Tries
When the International Business Machines (IBM) Corporation
decided to concentrate on the development of computers for business
use in the 1950’s, it faced a problem that had troubled the earliest
computer designers: how to store data reliably and inexpensively.
In the early days of computers (the early 1940’s), a number of
ideas were tried. The English inventor Andrew D. Booth produced
spinning paper disks on which he stored data by means of punched
holes, only to abandon the idea because of the insurmountable engineering
problems he foresaw.
The next step was “punched” cards, an idea first used when the
French inventor Joseph-Marie Jacquard invented an automatic weaving
loom for which patterns were stored in pasteboard cards. The
idea was refined by the English mathematician and inventor Charles
Babbage for use in his “analytical engine,” an attempt to build a kind
of computing machine. Although it was simple and reliable, it was
not fast enough, nor did it store enough data, to be truly practical.
The Ampex Corporation demonstrated its first magnetic audiotape
recorder after World War II (1939-1945). Shortly after that, the
Binary Automatic Computer (BINAC) was introduced with a storage
device that appeared to be a large tape recorder. A more advanced machine, the Universal Automatic Computer (UNIVAC),
used metal tape instead of plastic (plastic was easily stretched or
even broken). Unfortunately, metal tape was considerably heavier,
and its edges were razor-sharp and thus dangerous. Improvements
in plastic tape eventually produced sturdy media, and magnetic
tape became (and remains) a practical medium for storage of computer
data.
Still later designs combined Booth’s spinning paper disks with
magnetic technology to produce rapidly rotating “drums.” Whereas
a tape might have to be fast-forwarded nearly to its end to locate a
specific piece of data, a drum rotating at speeds up to 12,500 revolutions
per minute (rpm) could retrieve data very quickly and
could store more than 1 million bits (or approximately 125 kilobytes)
of data.
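The figures quoted above are easy to check with back-of-the-envelope arithmetic; the sketch below uses only the numbers given in the text plus the standard assumptions of eight bits per byte and an average rotational wait of half a revolution.

```python
# Back-of-the-envelope check of the drum figures quoted above. The bit
# count and rotation speed come from the text; eight bits per byte and a
# half-revolution average wait are standard assumptions.
bits = 1_000_000
kilobytes = bits / 8 / 1000                # 125.0, matching "approximately 125 kilobytes"

rpm = 12_500
seconds_per_revolution = 60 / rpm          # 0.0048 s, i.e. 4.8 ms per turn
average_wait = seconds_per_revolution / 2  # about 2.4 ms to reach any given spot

print(kilobytes, seconds_per_revolution * 1000, average_wait * 1000)
```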
In May, 1955, these drums evolved, under the direction of Reynold
B. Johnson, into IBM’s hard disk unit. The hard disk unit consisted
of fifty platters, each 2 feet in diameter, rotating at 1,200 rpm. Both
sides of the disk could be used to store information. When the operator
wished to access the disk, at his or her command a read/write
head was moved to the right disk and to the side of the disk that
held the desired data. The operator could then read data from or record
data onto the disk. To speed things even more, the next version
of the device, similar in design, employed one hundred read/write
heads—one for each of its fifty double-sided disks. The only remaining
disadvantage was its size, which earned IBM’s first commercial
unit the nickname “jukebox.”
The First Floppy
The floppy disk drive developed directly from hard disk technology.
It did not take shape until the late 1960’s under the direction of
Alan Shugart (it was announced by IBM as a ready product in 1970).
First created to help restart the operating systems of mainframe
computers that had gone dead, the floppy seemed in some ways to
be a step back, for it operated more slowly than a hard disk drive
and did not store as much data. Initially, it consisted of a single thin
plastic disk eight inches in diameter and was developed without the
protective envelope in which it is now universally encased. The addition of that jacket gave the floppy its single greatest advantage
over the hard disk: portability with reliability.
Another advantage soon became apparent: The floppy is resilient
to damage. In a hard disk drive, the read/write heads must
hover thousandths of a centimeter over the disk surface in order to
attain maximum performance. Should even a small particle of dust
get in the way, or should the drive unit be bumped too hard, the
head may “crash” into the surface of the disk and ruin its magnetic
coating; the result is a permanent loss of data. Because the floppy
operates with the read-write head in contact with the flexible plastic
disk surface, individual particles of dust or other contaminants are
not nearly as likely to cause disaster.
As a result of its advantages, the floppy disk was the logical
choice for mass storage in personal computers (PCs), which were
developed a few years after the floppy disk’s introduction. The
floppy is still an important storage device even though hard disk
drives for PCs have become less expensive. Moreover, manufacturers
continually are developing new floppy formats and new floppy
disks that can hold more data.
Consequences
Personal computing would have developed very differently were
it not for the availability of inexpensive floppy disk drives. When
IBM introduced its PC in 1981, the machine provided as standard
equipment a connection for a cassette tape recorder as a storage device;
a floppy disk was only an option (though an option few did not
take). The awkwardness of tape drives—their slow speed and sequential
nature of storing data—presented clear obstacles to the acceptance
of the personal computer as a basic information tool. By
contrast, the floppy drive gives computer users relatively fast storage
at low cost.
Floppy disks provided more than merely economical data storage.
Since they are built to be removable (unlike hard drives), they
represented a basic means of transferring data between machines.
Indeed, prior to the popularization of local area networks (LANs),
the floppy was known as a “sneaker” network: One merely carried
the disk by foot to another computer.
Floppy disks were long the primary means of distributing new
software to users. Even the very flexible floppy showed itself to be
quite resilient to the wear and tear of postal delivery. Later, the 3.5-
inch disk improved upon the design of the original 8-inch and 5.25-
inch floppies by protecting the disk medium within a hard plastic
shell and by using a sliding metal door to protect the area where the
read/write heads contact the disk.
By the late 1990’s, floppy disks were giving way to new data-storage
media, particularly CD-ROMs—durable laser-encoded disks
that hold more than 700 megabytes of data. As the price of blank
CDs dropped dramatically, floppy disks tended to be used mainly
for short-term storage of small amounts of data. Floppy disks were
also being used less and less for data distribution and transfer, as
computer users turned increasingly to sending files via e-mail on
the Internet, and software providers made their products available
for downloading on Web sites.
19 June 2009
Field ion microscope
The invention: A microscope that uses ions formed in high-voltage
electric fields to view atoms on metal surfaces.
The people behind the invention:
Erwin Wilhelm Müller (1911-1977), a physicist, engineer, and
research professor
J. Robert Oppenheimer (1904-1967), an American physicist
To See Beneath the Surface
In the early twentieth century, developments in physics, especially
quantum mechanics, paved the way for the application of
new theoretical and experimental knowledge to the problem of
viewing the atomic structure of metal surfaces. Of primary importance
were American physicist George Gamow’s 1928 theoretical
explanation of the field emission of electrons by quantum mechanical
means and J. Robert Oppenheimer’s 1928 prediction of the
quantum mechanical ionization of hydrogen in a strong electric
field.
In 1936, Erwin Wilhelm Müller developed his field emission microscope,
the first in a series of instruments that would exploit
these developments. It was to be the first instrument to view
atomic structures—although not the individual atoms themselves—
directly. Müller’s subsequent field ion microscope utilized the
same basic concepts used in the field emission microscope yet
proved to be a much more powerful and versatile instrument. By
1956, Müller’s invention allowed him to view the crystal lattice
structure of metals in atomic detail; it actually showed the constituent
atoms.
The field emission and field ion microscopes make it possible to
view the atomic surface structures of metals on fluorescent screens.
The field ion microscope is the direct descendant of the field emission
microscope. In the case of the field emission microscope, the
images are projected by electrons emitted directly from the tip of a
metal needle, which constitutes the specimen under investigation. These electrons produce an image of the atomic lattice structure of
the needle’s surface. The needle serves as the electron-donating
electrode in a vacuum tube, also known as the “cathode.” A fluorescent
screen that serves as the electron-receiving electrode, or “anode,”
is placed opposite the needle. When sufficient electrical voltage
is applied across the cathode and anode, the needle tip emits
electrons, which strike the screen. The image produced on the
screen is a projection of the electron source—the needle surface’s
atomic lattice structure.
Müller studied the effect of needle shape on the performance of
the microscope throughout much of 1937. When the needles had
been properly shaped, Müller was able to realize magnifications of
up to 1 million times. This magnification allowed Müller to view
what he called “maps” of the atomic crystal structure of metals,
since the needles were so small that they were often composed of
only one simple crystal of the material. While the magnification
may have been great, however, the resolution of the instrument was
severely limited by the physics of emitted electrons, which caused
the images Müller obtained to be blurred.
Improving the View
In 1943, while working in Berlin, Müller realized that the resolution
of the field emission microscope was limited by two factors.
The electron velocity, a particle property, was extremely high and
uncontrollably random, causing the micrographic images to be
blurred. In addition, the electrons had an unsatisfactorily high wavelength.
When Müller combined these two factors, he was able to determine
that the field emission microscope could never depict single
atoms; it was a physical impossibility for it to distinguish one
atom from another.
By 1951, this limitation led him to develop the technology behind
the field ion microscope. In 1952, Müller moved to the United States
and founded the Pennsylvania State University Field Emission Laboratory.
He perfected the field ion microscope between 1952 and
1956.
The field ion microscope utilized positive ions instead of electrons
to create the atomic surface images on the fluorescent screen. When an easily ionized gas—at first hydrogen, but usually helium,
neon, or argon—was introduced into the evacuated tube, the emitted
electrons ionized the gas atoms, creating a stream of positively
charged particles, much as Oppenheimer had predicted in 1928.
Müller’s use of positive ions circumvented one of the resolution
problems inherent in the use of imaging electrons. Like the electrons,
however, the positive ions traversed the tube with unpredictably random velocities. Müller eliminated this problem by cryogenically
cooling the needle tip with a supercooled liquefied gas such as
nitrogen or hydrogen.
By 1956, Müller had perfected the means of supplying imaging
positive ions by filling the vacuum tube with an extremely small
quantity of an inert gas such as helium, neon, or argon. By using
such a gas, Müller was assured that no chemical reaction would occur
between the needle tip and the gas; any such reaction would alter
the surface atomic structure of the needle and thus alter the resulting
microscopic image. The imaging ions allowed the field ion
microscope to image the emitter surface to a resolution of between
two and three angstroms, making it ten times more accurate than its
close relative, the field emission microscope.
Consequences
The immediate impact of the field ion microscope was its influence
on the study of metallic surfaces. It is a well-known fact of materials
science that the physical properties of metals are influenced
by the imperfections in their constituent lattice structures. It was not
possible to view the atomic structure of the lattice, and thus the finest
detail of any imperfection, until the field ion microscope was developed.
The field ion microscope is the only instrument powerful
enough to view the structural flaws of metal specimens in atomic
detail.
Although the instrument may be extremely powerful, the extremely
large electrical fields required in the imaging process preclude
the instrument’s application to all but the hardiest of metallic
specimens. The field strength of 500 million volts per centimeter
exerts an average stress on metal specimens in the range of almost
1 ton per square millimeter. Metals such as iron and platinum can
withstand this strain because of the shape of the needles into which
they are formed. Yet this limitation of the instrument makes it extremely
difficult to examine biological materials, which cannot withstand
the amount of stress that metals can. A practical by-product in
the study of field ionization—field evaporation—eventually permitted
scientists to view large biological molecules.
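The quoted stress can be checked against the standard electrostatic (Maxwell) stress on a conductor surface, one-half of epsilon-zero times the square of the field; the sketch below uses only the field strength given in the text and textbook constants, and it lands in the same range as the “almost 1 ton per square millimeter” figure.

```python
# Rough check of the stress figure quoted above, using the electrostatic
# (Maxwell) stress on a conductor surface: sigma = epsilon_0 * E**2 / 2.
# The field strength comes from the text; the constants are textbook values.
epsilon_0 = 8.854e-12                    # farads per meter
E = 500e6 * 100                          # 500 million V/cm expressed in V/m
stress_pa = 0.5 * epsilon_0 * E ** 2     # about 1.1e10 N/m^2
stress_per_mm2 = stress_pa / 1e6         # about 1.1e4 N per mm^2
tons_per_mm2 = stress_per_mm2 / 9806.65  # metric tons-force per mm^2
print(round(tons_per_mm2, 2))            # about 1.1, the order quoted in the text
```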
Field evaporation also allowed surface scientists to view the atomic structures of biological molecules. By embedding molecules
such as phthalocyanine within the metal needle, scientists have
been able to view the atomic structures of large biological molecules
by field evaporating much of the surrounding metal until the biological
material remains at the needle’s surface.
18 June 2009
Fiber-optics
The invention: The application of glass fibers to electronic communications
and other fields to carry large volumes of information
quickly, smoothly, and cheaply over great distances.
The people behind the invention:
Samuel F. B. Morse (1791-1872), the American artist and
inventor who developed the electromagnetic telegraph
system
Alexander Graham Bell (1847-1922), the Scottish American
inventor and educator who invented the telephone and the
photophone
Theodore H. Maiman (1927- ), the American physicist and
engineer who invented the solid-state laser
Charles K. Kao (1933- ), a Chinese-born electrical engineer
Zhores I. Alferov (1930- ), a Russian physicist and
mathematician
The Singing Sun
In 1844, Samuel F. B. Morse, inventor of the telegraph, sent his famous
message, “What hath God wrought?” by electrical impulses
traveling at the speed of light over a 66-kilometer telegraph wire
strung between Washington, D.C., and Baltimore. Ever since that
day, scientists have worked to find faster, less expensive, and more
efficient ways to convey information over great distances.
At first, the telegraph was used to report stock-market prices and
the results of political elections. The telegraph was quite important
in the American Civil War (1861-1865). The first transcontinental
telegraph message was sent by Stephen J. Field, chief justice of the
California Supreme Court, to U.S. president Abraham Lincoln on
October 24, 1861. The message declared that California would remain
loyal to the Union. By 1866, telegraph lines had reached all
across the North American continent and a telegraph cable had
been laid beneath the Atlantic Ocean to link the Old World with the
New World.
Another American inventor made the leap from the telegraph to
the telephone. Alexander Graham Bell, a teacher of the deaf, was interested
in the physical way speech works. In 1875, he started experimenting
with ways to transmit sound vibrations electrically. He realized
that an electrical current could be adjusted to resemble the vibrations of speech. Bell patented his invention on March 7, 1876.
On July 9, 1877, he founded the Bell Telephone Company.
In 1880, Bell invented a device called the “photophone.” He used
it to demonstrate that speech could be transmitted on a beam of
light. Light is a form of electromagnetic energy. It travels in a vibrating
wave. When the amplitude (height) of the wave is adjusted, a
light beam can be made to carry messages. Bell’s invention included
a thin mirrored disk that converted sound waves directly into a
beam of light. At the receiving end, a selenium resistor connected to
a headphone converted the light back into sound. “I have heard a
ray of sun laugh and cough and sing,” Bell wrote of his invention.
Although Bell proved that he could transmit speech over distances
of several hundred meters with the photophone, the device
was awkward and unreliable, and it never became popular as the
telephone did. Not until one hundred years later did researchers find
important practical uses for Bell’s idea of talking on a beam of light.
Two other major discoveries needed to be made first: development
of the laser and of high-purity glass. Theodore H. Maiman, an
American physicist and electrical engineer at Hughes Research Laboratories
in Malibu, California, built the first laser. The laser produces
an intense, narrowly focused beam of light that can be adjusted to
carry huge amounts of information. The word itself is an acronym for
light amplification by the stimulated emission of radiation.
It soon became clear, though, that even bright laser light can be
broken up and absorbed by smog, fog, rain, and snow. So in 1966,
Charles K. Kao, an electrical engineer at the Standard Telecommunications
Laboratories in England, suggested that glass fibers could
be used to transmit message-carrying beams of laser light without
disruption from weather.
Fiber Optics Are Tested
Optical glass fiber is made from common materials, mostly silica,
soda, and lime. The inside of a delicate silica glass tube is coated
with a hundred or more layers of extremely thin glass. The tube is
then heated to 2,000 degrees Celsius and collapsed into a thin glass
rod, or preform. The preform is then pulled into thin strands of fiber.
The fibers are coated with plastic to protect them from being nicked
or scratched, and then they are covered in flexible cable.
The earliest glass fibers contained many impurities and defects, so
they did not carry light well. Signal repeaters were needed every few
meters to energize (amplify) the fading pulses of light. In 1970, however,
researchers at the Corning Glass Works in New York developed a fiber
pure enough to carry light at least one kilometer without amplification.
The telephone industry
quickly became involved in the new fiber-optics technology. Researchers
believed that a bundle of optical fibers as thin as a pencil
could carry several hundred telephone calls at the same time. Optical
fibers were first tested by telephone companies in big cities,
where the great volume of calls often overloaded standard underground
phone lines.
On May 11, 1977, American Telephone & Telegraph Company
(AT&T), along with Illinois Bell Telephone, Western Electric, and
Bell Telephone Laboratories, began the first commercial test of fiberoptics
telecommunications in downtown Chicago. The system consisted
of a 2.4-kilometer cable laid beneath city streets. The cable,
only 1.3 centimeters in diameter, linked an office building in the
downtown business district with two telephone exchange centers.
Voice and video signals were coded into pulses of laser light and
transmitted through the hair-thin glass fibers. The tests showed that
a single pair of fibers could carry nearly six hundred telephone conversations
at once very reliably and at a reasonable cost.
Six years later, in October, 1983, Bell Laboratories succeeded in
transmitting the equivalent of six thousand telephone signals through
an optical fiber cable that was 161 kilometers long. Since that time,
countries all over the world, from England to Indonesia, have developed
optical communications systems.
Consequences
Fiber optics has had a great impact on telecommunications. A single
fiber can now carry thousands of conversations with no electrical
interference. These fibers are less expensive, weigh less, and take up
much less space than copper wire. As a result, people can carry on
conversations over long distances without static and at a low cost.
One of the first uses of fiber optics and perhaps its best-known
application is the fiberscope, a medical instrument that permits internal
examination of the human body without surgery or X-ray
techniques. The fiberscope, or endoscope, consists of two fiber
bundles. One of the fiber bundles transmits bright light into the patient,
while the other conveys a color image back to the eye of the
physician. The fiberscope has been used to look for ulcers, cancer,
and polyps in the stomach, intestine, and esophagus of humans.
Medical instruments, such as forceps, can be attached to the fiberscope,
allowing the physician to perform a range of medical procedures,
such as clearing a blocked windpipe or cutting precancerous
polyps from the colon.
Fax machine
The invention: Originally known as the “facsimile machine,” a
machine that converts written and printed images into electrical
signals that can be sent via telephone, computer, or radio.
The person behind the invention:
Alexander Bain (1818-1903), a Scottish inventor
Sending Images
The invention of the telegraph and telephone during the latter
half of the nineteenth century gave people the ability to send information
quickly over long distances. With the invention of radio and
television technologies, voices and moving pictures could be seen
around the world as well. Oddly, however, the facsimile process—
which involves the transmission of pictures, documents, or other
physical data over distance—predates all these modern devices,
since a simple facsimile apparatus (usually called a fax machine)
was patented in 1843 by Alexander Bain. This early device used a
pendulum to synchronize the transmitting and receiving units; it
did not convert the image into an electrical format, however, and it
was quite crude and impractical. Nevertheless, it reflected the desire
to send images over long distances, which remained a technological
goal for more than a century.
Facsimile machines developed in the period around 1930 enabled
news services to provide newspapers around the world with
pictures for publication. It was not until the 1970’s, however, that
technological advances made small fax machines available for everyday
office use.
Scanning Images
Both the fax machines of the 1930’s and those of today operate on
the basis of the same principle: scanning. In early machines, an image
(a document or a picture) was attached to a roller, placed in the
fax machine, and rotated at a slow and fixed speed (which must be the same at each end of the link) in a bright light. Light from the image
was reflected from the document in varying degrees, since dark
areas reflect less light than lighter areas do. A lens moved across the
page one line at a time, concentrating and directing the reflected
light to a photoelectric tube. This tube would respond to the change
in light level by varying its electric output, thus converting the image
into an output signal whose intensity varied with the changing
light and dark spots of the image. Much like the signal from a microphone
or television camera, this modulated (varying) wave could
then be broadcast by radio or sent over telephone lines to a receiver
that performed a reverse function. At the receiving end, a light bulb
was made to vary its intensity to match the varying intensity of the
incoming signal. The output of the light bulb was concentrated
through a lens onto photographically sensitive paper, thus re-creating
the original image as the paper was rotated.
Early fax machines were bulky and often difficult to operate.
Advances in semiconductor and computer technology in the 1970’s,
however, made the goal of creating an easy-to-use and inexpensive
fax machine realistic. Instead of a photoelectric tube that consumes
a relatively large amount of electrical power, a row of small photodiode
semiconductors is used to measure light intensity. Instead of a
power-consuming light source, low-power light-emitting diodes
(LEDs) are used. Some 1,728 light-sensitive diodes are placed in a
row, and the image to be scanned is passed over them one line at a
time. Each diode registers either a dark or a light portion of the image.
As each diode is checked in sequence, it produces a signal for
one picture element, also known as a “pixel” or “pel.” Because
many diodes are used, there is no need for a focusing lens; the diode
bar is as wide as the page being scanned, and each pixel represents a
portion of a line on that page.
Since most fax transmissions take place over public telephone
system lines, the signal from the photodiodes is transmitted by
means of a built-in computer modem in much the same format that
computers use to transmit data over telephone lines. The receiving
fax uses its modem to convert the audible signal into a sequence that
varies in intensity in proportion to the original signal. This varying
signal is then sent in proper sequence to a row of 1,728 small wires
over which a chemically treated paper is passed. As each wire receives a signal that represents a black portion of the scanned image,
the wire heats and, in contact with the paper, produces a black dot
that corresponds to the transmitted pixel. As the page is passed over
these wires one line at a time, the original image is re-created.
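The scan-and-reproduce cycle described above can be sketched in a few lines of Python; the 1,728 pixels per line matches the figure given in the text, while the function names, the threshold, and the synthetic test line are invented for illustration (a real fax also compresses the bit stream before transmitting it, which this sketch omits).

```python
# Sketch of one scan line being reduced to 1,728 black-or-white pixels and
# reproduced at the receiver. Function names, the threshold, and the
# synthetic test line are invented; compression of the bit stream is omitted.
PIXELS_PER_LINE = 1728

def scan_line(brightness, threshold=0.5):
    """Convert one line of reflected-light readings into bits:
    1 means dark (print a dot), 0 means light (leave blank)."""
    return [1 if value < threshold else 0 for value in brightness]

def print_line(bits):
    """Receiver side: 'heat a wire' (print '#') for every dark pixel."""
    return "".join("#" if bit else " " for bit in bits)

# A synthetic scan line: a mostly white page with a dark band in the middle.
line = [0.1 if 700 < i < 1000 else 0.9 for i in range(PIXELS_PER_LINE)]
bits = scan_line(line)                   # what the photodiode row produces
assert len(bits) == PIXELS_PER_LINE
print(print_line(bits)[690:710])         # the left edge of the dark band
```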
Consequences
The fax machine has long been in use in many commercial and
scientific fields. Weather data in the form of pictures are transmitted
from orbiting satellites to ground stations; newspapers receive photographs
from international news sources via fax; and, using a very
expensive but very high-quality fax device, newspapers and magazines
are able to transmit full-size proof copies of each edition to
printers thousands of miles away so that a publication edited in one
country can reach newsstands around the world quickly.
With the technological advances that have been made in recent
years, however, fax transmission has become a part of everyday life,
particularly in business and research environments. The ability to
send quickly a copy of a letter, document, or report over thousands
of miles means that information can be shared in a matter of minutes
rather than in a matter of days. In fields such as advertising and
architecture, it is often necessary to send pictures or drawings to remote
sites. Indeed, the fax machine has played an important role in
providing information to distant observers of political unrest when
other sources of information (such as radio, television, and newspapers)
are shut down.
In fact, there has been a natural coupling of computers, modems,
and fax devices. Since modern faxes are sent as computer data over
phone lines, specialized and inexpensive modems (which allow
two computers to share data) have been developed that allow any
computer user to send and receive faxes without bulky machines.
For example, a document—including drawings, pictures, or graphics
of some kind—is created in a computer and transmitted directly
to another fax machine. That computer can also receive a fax transmission
and either display it on the computer’s screen or print it on
the local printer. Since fax technology is now within the reach of almost
anyone who is interested in using it, there is little doubt that it
will continue to grow in popularity.
ENIAC computer
The invention:
The first general-purpose electronic digital computer.
The people behind the invention:
John Presper Eckert (1919-1995), an electrical engineer
John William Mauchly (1907-1980), a physicist, engineer, and
professor
John von Neumann (1903-1957), a Hungarian American
mathematician, physicist, and logician
Herman Heine Goldstine (1913- ), an army mathematician
Arthur Walter Burks (1915- ), a philosopher, engineer, and
professor
John Vincent Atanasoff (1903-1995), a mathematician and
physicist
Electronic synthesizer
The invention: Portable electronic device that both simulates the
sounds of acoustic instruments and creates entirely new sounds.
The person behind the invention:
Robert A. Moog (1934- ), an American physicist, engineer,
and inventor
From Harmonium to Synthesizer
The harmonium, or acoustic reed organ, is commonly viewed as
having evolved into the modern electronic synthesizer that can be
used to create many kinds of musical sounds, from the sounds of
single or combined acoustic musical instruments to entirely original
sounds. The first instrument to be called a synthesizer was patented
by the Frenchman J. A. Dereux in 1949. Dereux’s synthesizer, which
amplified the acoustic properties of harmoniums, led to the development
of the recording organ.
Next, several European and American inventors altered and
augmented the properties of such synthesizers. This stage of the
process was followed by the invention of electronic synthesizers,
which initially used electronically generated sounds to imitate
acoustic instruments. It was not long, however, before such synthesizers
were used to create sounds that could not be produced by any
other instrument. Among the early electronic synthesizers were
those made in Germany by Herbert Eimert and Robert Beyer in
1953, and the American Olson-Belar synthesizers, which were developed
in 1954. Continual research produced better and better versions
of these large, complex electronic devices.
Portable synthesizers, which are often called “keyboards,” were
then developed for concert and home use. These instruments became
extremely popular, especially in rock music. In 1964, Robert A.
Moog, an electronics professor, created what are thought by many
to be the first portable synthesizers to be made available to the public.
Several other well-known portable synthesizers, such as ARP
and Buchla synthesizers, were also introduced at about the same time. Currently, many companies manufacture studio-quality synthesizers
of various types.
Synthesizer Components and Operation
Modern synthesizers make music electronically by building up
musical phrases via numerous electronic circuits and combining
those phrases to create musical compositions. In addition to duplicating
the sounds of many instruments, such synthesizers also enable
their users to create virtually any imaginable sound. Many
sounds have been created on synthesizers that could not have been
created in any other way.
Synthesizers use sound-processing and sound-control equipment
that controls “white noise” audio generators and oscillator circuits.
This equipment can be manipulated to produce a huge variety of
sound frequencies and frequency mixtures in the same way that a
beam of white light can be manipulated to produce a particular
color or mixture of colors.
Once the desired products of a synthesizer’s noise generator and
oscillators are produced, percussive sounds that contain all or many
audio frequencies are mixed with many chosen individual sounds
and altered by using various electronic processing components. The
better the quality of the synthesizer, the more processing components
it will possess. Among these components are sound amplifiers,
sound mixers, sound filters, reverberators, and sound combination
devices.
Sound amplifiers are voltage-controlled devices that change the
dynamic characteristics of any given sound made by a synthesizer.
Sound mixers make it possible to combine and blend two or more
manufactured sounds while controlling their relative volumes.
Sound filters affect the frequency content of sound mixtures by increasing
or decreasing the amplitude of the sound frequencies
within particular frequency ranges, which are called “bands.”
Sound filters can be either band-pass filters or band-reject filters.
They operate by increasing or decreasing the amplitudes of sound
frequencies within given ranges (such as treble or bass). Reverberators
(or “reverb” units) produce artificial echoes that can have significant
musical effects. There are also many other varieties of sound-processing elements, among them sound-envelope generators,
spatial locators, and frequency shifters. Ultimately, the sound-combination
devices put together the results of the various groups
of audio generating and processing elements, shaping the sound
that has been created into its final form.
A variety of control elements are used to integrate the operation
of synthesizers. Most common is the keyboard, which provides the
name most often used for portable electronic synthesizers. Portable
synthesizer keyboards are most often pressure-sensitive devices
(meaning that the harder one presses the key, the louder the resulting
sound will be) that resemble the black-and-white keyboards of
more conventional musical instruments such as the piano and the
organ. These synthesizer keyboards produce two simultaneous outputs:
control voltages that govern the pitches of oscillators, and timing
pulses that sustain synthesizer responses for as long as a particular
key is depressed.
Unseen but present are the integrated voltage controls that control
overall signal generation and processing. In addition to voltage
controls and keyboards, synthesizers contain buttons and other
switches that can transpose their sound ranges and other qualities.
Using the appropriate buttons or switches makes it possible for a
single synthesizer to imitate different instruments—or groups of instruments—
at different times. Other synthesizer control elements
include sample-and-hold devices and random voltage sources that
make it possible to sustain particular musical effects and to add various
effects to the music that is being played, respectively.
Electronic synthesizers are complex and flexible instruments.
The various types and models of synthesizers make it possible to
produce many different kinds of music, and many musicians use a
variety of keyboards to give them great flexibility in performing
and recording.
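The signal chain described above can be pictured in a short program. What follows is only a minimal sketch, not the circuitry of any actual synthesizer: it assumes a hypothetical sine-wave oscillator, a simple attack-and-release envelope standing in for a voltage-controlled amplifier, and a mixer that averages two voices; the sample rate, the 440 and 660 hertz pitches, the envelope times, and the output file name are all arbitrary illustrative choices (Python).

    import math
    import struct
    import wave

    RATE = 44100          # samples per second (a conventional, arbitrary choice)

    def oscillator(freq, seconds):
        """Sine-wave oscillator: the synthesizer's basic tone source."""
        return [math.sin(2 * math.pi * freq * i / RATE)
                for i in range(int(RATE * seconds))]

    def envelope(samples, attack=0.05, release=0.3):
        """Stand-in for a voltage-controlled amplifier: shape loudness over time."""
        n, a, r = len(samples), int(attack * RATE), int(release * RATE)
        shaped = []
        for i, s in enumerate(samples):
            gain = min(1.0, i / a) if i < n - r else (n - i) / r
            shaped.append(s * gain)
        return shaped

    def mix(*voices):
        """Sound mixer: sum several voices and scale them back into range."""
        return [sum(v) / len(voices) for v in zip(*voices)]

    # Two oscillators a fifth apart, each shaped by an envelope, then mixed
    # and written out as a one-second WAV file.
    tone = mix(envelope(oscillator(440.0, 1.0)), envelope(oscillator(660.0, 1.0)))
    with wave.open("tone.wav", "wb") as out:
        out.setnchannels(1)
        out.setsampwidth(2)
        out.setframerate(RATE)
        out.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in tone))

Adding more oscillators, filters, or reverberation would follow the same pattern: each processing stage is simply another function applied to the stream of samples before it reaches the mixer.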
Impact
The development and wide dissemination of studio and portable
synthesizers have led to their frequent use to combine the sound
properties of various musical instruments; a single musician can
thus produce, inexpensively and with a single instrument, sound
combinations that previously could have been produced only by a
large number of musicians playing various instruments. (Understandably,
many players of acoustic instruments have been upset by
this development, since it means that they are hired to play less often
than they were before synthesizers were developed.) Another consequence of synthesizer use has been the development of entirely
original varieties of sound, although this area has been less
thoroughly explored, for commercial reasons. The development of
synthesizers has also led to the design of other new electronic music-
making techniques and to the development of new electronic
musical instruments.
Opinions about synthesizers vary from person to person—and,
in the case of certain illustrious musicians, from time to time. One
well-known musician initially proposed that electronic synthesizers
would replace many or all conventional instruments, particularly
pianos. Two decades later, though, this same musician noted
that not even the best modern synthesizers could match the quality
of sound produced by pianos made by manufacturers such as
Steinway and Baldwin.
Electron microscope
The invention:
A device for viewing extremely small objects that
uses electron beams and “electron lenses” instead of the light
rays and optical lenses used by ordinary microscopes.
The people behind the invention:
Ernst Ruska (1906-1988), a German engineer, researcher, and
inventor who shared the 1986 Nobel Prize in Physics
Hans Busch (1884-1973), a German physicist
Max Knoll (1897-1969), a German engineer and professor
Louis de Broglie (1892-1987), a French physicist who won the
1929 Nobel Prize in Physics
14 June 2009
Electroencephalogram
The invention: A system of electrodes that measures brain wave
patterns in humans, making possible a new era of neurophysiology.
The people behind the invention:
Hans Berger (1873-1941), a German psychiatrist and research
scientist
Richard Caton (1842-1926), an English physiologist and surgeon
The Electrical Activity of the Brain
Hans Berger’s search for the human electroencephalograph (English
physiologist Richard Caton had described the electroencephalogram,
or “brain wave,” in rabbits and monkeys in 1875) was motivated
by his desire to find a physiological method that might be
applied successfully to the study of the long-standing problem of
the relationship between the mind and the brain. His scientific career,
therefore, was directed toward revealing the psychophysical
relationship in terms of principles that would be rooted firmly in the
natural sciences and would not have to rely upon vague philosophical
or mystical ideas.
During his early career, Berger attempted to study psychophysical
relationships by making plethysmographic measurements of
changes in the brain circulation of patients with skull defects. In
plethysmography, an instrument is used to indicate and record by
tracings the variations in size of an organ or part of the body. Later,
Berger investigated temperature changes occurring in the human
brain during mental activity and the action of psychoactive drugs.
He became disillusioned, however, by the lack of psychophysical
understanding generated by these investigations.
Next, Berger turned to the study of the electrical activity of the
brain, and in the 1920’s he set out to search for the human electroencephalogram.
He believed that the electroencephalogram would finally
provide him with a physiological method capable of furnishing
insight into mental functions and their disturbances.
Berger made his first unsuccessful attempt at recording the electrical
activity of the brain in 1920, using the scalp of a bald medical
student. He then attempted to stimulate the cortex of patients with
skull defects by using a set of electrodes to apply an electrical current
to the skin covering the defect. The main purpose of these
stimulation experiments was to elicit subjective sensations. Berger
hoped that eliciting these sensations might give him some clue
about the nature of the relationship between the physiochemical
events produced by the electrical stimulus and the mental processes
revealed by the patients’ subjective experience. The availability
of many patients with skull defects—in whom the pulsating
surface of the brain was separated from the stimulating electrodes
by only a few millimeters of tissue—reactivated Berger’s interest
in recording the brain’s electrical activity.
Small, Tremulous Movements
Berger used several different instruments in trying to detect
brain waves, but all of them used a similar method of recording.
Electrical oscillations deflected a mirror upon which a light beam
was projected. The deflections of the light beam were proportional
to the magnitude of the electrical signals. The movement of the spot
of the light beam was recorded on photographic paper moving at a
speed no greater than 3 centimeters per second.
In July, 1924, Berger observed small, tremulous movements of
the instrument while recording from the skin overlying a bone defect
in a seventeen-year-old patient. In his first paper on the electroencephalogram,
Berger described this case briefly as his first successful
recording of an electroencephalogram. At the time of these
early studies, Berger already had used the term “electroencephalogram”
in his diary. Yet for several years he had doubts about the origin
of the electrical signals he recorded. As late as 1928, he almost
abandoned his electrical recording studies.
The publication of Berger’s first paper on the human electroencephalogram
in 1929 had little impact on the scientific world. It was either
ignored or regarded with open disbelief. At this time, even
when Berger himself was not completely free of doubts about the
validity of his findings, he managed to continue his work. He published
additional contributions to the study of the electroencephalogram
in a series of fourteen papers. As his research progressed,
Berger became increasingly confident and convinced of the significance
of his discovery.
Impact
The long-range impact of Berger’s work is incontestable. When
Berger published his last paper on the human electroencephalogram in
1938, the new approach to the study of brain function that he inaugurated
in 1929 had gathered momentum in many centers, both in
Europe and in the United States. As a result of his pioneering work,
a new diagnostic method had been introduced into medicine. Physiology
had acquired a new investigative tool. Clinical neurophysiology
had been liberated from its dependence upon the functional anatomical approach, and electrophysiological exploration of complex
functions of the central nervous system had begun in earnest.
Berger’s work had finally received its well-deserved recognition.
Many of those who undertook the study of the electroencephalogram
were able to bring a far greater technical knowledge of
neurophysiology to bear upon the problems of the electrical activity
of the brain. Yet the community of neurological scientists has not
ceased to look with respect upon the founder of electroencephalography,
who, despite overwhelming odds and isolation, opened a new
area of neurophysiology.
Electrocardiogram
The invention: Device for analyzing the electrical currents of the
human heart.
The people behind the invention:
Willem Einthoven (1860-1927), a Dutch physiologist and
winner of the 1924 Nobel Prize in Physiology or Medicine
Augustus D. Waller (1856-1922), a British physician and
researcher
Sir Thomas Lewis (1881-1945), an English physiologist
Horse Vibrations
In the late 1800’s, there was substantial research interest in the
electrical activity that took place in the human body. Researchers
studied many organs and systems in the body, including the nerves,
eyes, lungs, muscles, and heart. Because of a lack of available technology,
this research was tedious and frequently inaccurate. Therefore,
the development of the appropriate instrumentation was as
important as the research itself.
The initial work on the electrical activity of the heart (detected
from the surface of the body) was conducted by Augustus D. Waller
and published in 1887. Many credit him with the development of
the first electrocardiogram. Waller used a Lippmann’s capillary
electrometer (named for its inventor, the French physicist Gabriel-
Jonas Lippmann) to determine the electrical charges in the heart and
called his recording a “cardiograph.” The recording was made by
placing a series of small tubes on the surface of the body. The tubes
contained mercury and sulfuric acid. As an electrical current passed
through the tubes, the mercury would expand and contract. The resulting
images were projected onto photographic paper to produce
the first cardiograph. Yet Waller had only limited success with the
device and eventually abandoned it.
In the early 1890’s, Willem Einthoven, who became a good friend
of Waller, began using the same type of capillary tube to study the
electrical currents of the heart. Einthoven also had a difficult time working with the instrument. His laboratory was located in an old
wooden building near a cobblestone street. Teams of horses pulling
heavy wagons would pass by and cause his laboratory to vibrate.
This vibration affected the capillary tube, causing the cardiograph
to be unclear. In his frustration, Einthoven began to modify his laboratory.
He removed the floorboards and dug a hole some ten to fifteen
feet deep. He lined the walls with large rocks to stabilize his instrument.
When this failed to solve the problem, Einthoven, too,
abandoned the Lippmann’s capillary tube. Yet Einthoven did not
abandon the idea, and he began to experiment with other instruments.
Electrocardiographs over the Phone
In order to continue his research on the electrical currents of the
heart, Einthoven began to work with a new device, the d’Arsonval
galvanometer (named for its inventor, the French biophysicist
Arsène d’Arsonval). This instrument had a heavy coil of wire suspended
between the poles of a horseshoe magnet. Changes in electrical
activity would cause the coil to move; however, Einthoven
found that the coil was too heavy to record the small electrical
changes found in the heart. Therefore, he modified the instrument
by replacing the coil with a silver-coated quartz thread (string).
The movements could be recorded by transmitting the deflections
through a microscope and projecting them on photographic film.
Einthoven called the new instrument the “string galvanometer.”
In developing his string galvanometer, Einthoven was influenced
by the work of one of his teachers, Johannes Bosscha. In the 1850’s,
Bosscha had published a study describing the technical complexities
of measuring very small amounts of electricity. He proposed the
idea that a galvanometer modified with a needle hanging from a
silk thread would be more sensitive in measuring the tiny electric
currents of the heart.
By 1905, Einthoven had improved the string galvanometer to
the point that he could begin using it for clinical studies. In 1906,
he had his laboratory connected to the hospital in Leiden by a telephone
wire. With this arrangement, Einthoven was able to study in
his laboratory electrocardiograms derived from patients in the hospital, which was located a mile away. With this source of subjects,
Einthoven was able to use his galvanometer to study many
heart problems. As a result of these studies, Einthoven identified
the following heart problems: blocks in the electrical conduction
system of the heart; premature beats of the heart, including two
premature beats in a row; and enlargements of the various chambers
of the heart. He was also able to study how the heart behaved
during the administration of cardiac drugs.
A major researcher who communicated with Einthoven about
the electrocardiogram was Sir Thomas Lewis, who is credited with
developing the electrocardiogram into a useful clinical tool. One of
Lewis’s important accomplishments was his identification of atrial
fibrillation, the overactive state of the upper chambers of the heart.
During World War I, Lewis was involved with studying soldiers’
hearts. He designed a series of graded exercises, which he used to
test the soldiers’ ability to perform work. From this study, Lewis
was able to use similar tests to diagnose heart disease and to screen
recruits who had heart problems.
Impact
As Einthoven published additional studies on the string galvanometer
in 1903, 1906, and 1908, greater interest in his instrument
was generated around the world. In 1910, the instrument, now
called the “electrocardiograph,” was installed in the United States.
It was the foundation of a new laboratory for the study of heart disease
at Johns Hopkins University.
As time passed, the use of the electrocardiogram—or “EKG,” as
it is familiarly known—increased substantially. The major advantage
of the EKG is that it can be used to diagnose problems in the
heart without incisions or the use of needles. It is relatively painless
for the patient; in comparison with other diagnostic techniques,
moreover, it is relatively inexpensive.
Recent developments in the use of the EKG have been in the area
of stress testing. Since many heart problems are more evident during
exercise, when the heart is working harder, EKGs are often
given to patients as they exercise, generally on a treadmill. The clinician
gradually increases the intensity of work the patient is doing
while monitoring the patient’s heart. The use of stress testing has
helped to make the EKG an even more valuable diagnostic tool.
12 June 2009
Electric refrigerator
The invention:
An electrically powered and hermetically sealed
food-storage appliance that replaced iceboxes, improved production,
and lowered food-storage costs.
The people behind the invention:
Marcel Audiffren, a French monk
Christian Steenstrup (1873-1955), an American engineer
Fred Wolf, an American engineer
Electric clock
The invention: Electrically powered time-keeping device with a
quartz resonator that has led to the development of extremely accurate,
relatively inexpensive electric clocks that are used in computers
and microprocessors.
The person behind the invention:
Warren Alvin Marrison (1896-1980), an American scientist
From Complex Mechanisms to Quartz Crystals
Warren Alvin Marrison’s fabrication of the electric clock began a
new era in time-keeping. Electric clocks are more accurate and more
reliable than mechanical clocks, since they have fewer moving parts
and are less likely to malfunction.
An electric clock is a device that generates a string of electric
pulses. The most frequently used electric clocks are called “free running”
and “periodic,” which means that they generate a continuous
sequence of electric pulses that are equally spaced. There are various
kinds of electronic “oscillators” (materials that vibrate) that can
be used to manufacture electric clocks.
The material most commonly used as an oscillator in electric
clocks is crystalline quartz. Because quartz (silicon dioxide) is a
completely oxidized compound (which means that it does not deteriorate
readily) and is virtually insoluble in water, it is chemically
stable and resists chemical processes that would break down other
materials. Quartz is a “piezoelectric” material, which means that it
is capable of generating electricity when it is subjected to pressure
or stress of some kind. In addition, quartz has the advantage of generating
electricity at a very stable frequency, with little variation. For
these reasons, quartz is an ideal material to use as an oscillator.
The Quartz Clock
A quartz clock is an electric clock that makes use of the piezoelectric
properties of a quartz crystal. When a quartz crystal vibrates, a difference of electric potential is produced between two of its faces.
The crystal has a natural frequency (rate) of vibration that is determined
by its size and shape. If the crystal is placed in an oscillating
electric circuit that has a frequency that is nearly the same as that of
the crystal, it will vibrate at its natural frequency and will cause the
frequency of the entire circuit to match its own frequency.
Piezoelectricity is electricity, or “electric polarity,” that is caused
by the application of mechanical pressure on a “dielectric” material
(one that does not conduct electricity), such as a quartz crystal. The
process also works in reverse; if an electric charge is applied to the
dielectric material, the material will experience a mechanical distortion.
This reciprocal relationship is called “the piezoelectric effect.”
The phenomenon of electricity being generated by the application
of mechanical pressure is called the direct piezoelectric effect, and
the phenomenon of mechanical stress being produced as a result of
the application of electricity is called the converse piezoelectric
effect.
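In the usual linear “strain-charge” textbook formulation, the two effects just described are often written as a pair of coupled equations. The symbols below follow that common convention rather than anything stated in this article, so the formulas are offered only as an illustrative sketch:

    \[
      D = d\,T + \varepsilon^{T} E \qquad \text{(direct effect: applied stress } T \text{ produces electric displacement } D\text{)}
    \]
    \[
      S = s^{E} T + d\,E \qquad \text{(converse effect: applied field } E \text{ produces strain } S\text{)}
    \]

Here \(d\) is the piezoelectric coefficient, \(\varepsilon^{T}\) the permittivity at constant stress, and \(s^{E}\) the elastic compliance at constant electric field.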
When a quartz crystal is used to create an oscillator, the natural
frequency of the crystal can be used to produce other frequencies
that can power clocks. The natural frequency of a quartz crystal is
nearly constant if precautions are taken when it is cut and polished
and if it is maintained at a nearly constant temperature and pressure.
After a quartz crystal has been used for some time, its frequency usually varies slowly as a result of physical changes. If allowances
are made for such changes, quartz-crystal clocks such as
those used in laboratories can be manufactured that will accumulate
errors of only a few thousandths of a second per month. The
quartz crystals that are typically used in watches, however, may accumulate
errors of tens of seconds per year.
There are other materials that can be used to manufacture accurate
electric clocks. For example, clocks that use the element rubidium
typically would accumulate errors no larger than a few ten-thousandths
of a second per year, and those that use the element cesium
would experience errors of only a few millionths of a second
per year. Quartz is much less expensive than rarer materials such as rubidium and cesium, and it is easy to use in such common applications
as computers. Thus, despite their relative inaccuracy, electric
quartz clocks are extremely useful and popular, particularly for applications
that require accurate timekeeping over a relatively short
period of time. In such applications, quartz clocks may be adjusted
periodically to correct for accumulated errors.
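The accuracy figures quoted above can be restated as fractional frequency errors with a little arithmetic, which the short sketch below carries out. The specific numbers (0.003 second per month, 20 seconds per year, and a few microseconds per year) are only representative of the rough ranges mentioned in the text, not measured specifications (Python).

    SECONDS_PER_MONTH = 30 * 24 * 3600    # about 2.6 million seconds
    SECONDS_PER_YEAR = 365 * 24 * 3600    # about 31.5 million seconds

    def fractional_error(accumulated_error_s, interval_s):
        """Accumulated time error divided by the interval over which it builds up."""
        return accumulated_error_s / interval_s

    # Laboratory quartz clock: a few thousandths of a second per month.
    print(fractional_error(0.003, SECONDS_PER_MONTH))   # roughly 1e-9
    # Typical wristwatch crystal: tens of seconds per year.
    print(fractional_error(20.0, SECONDS_PER_YEAR))     # roughly 6e-7, under one part per million
    # Cesium standard: a few millionths of a second per year.
    print(fractional_error(3e-6, SECONDS_PER_YEAR))     # roughly 1e-13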
Impact
The electric quartz clock has contributed significantly to the development
of computers and microprocessors. The computer’s control
unit controls and synchronizes all data transfers and transformations
in the computer system and is the key subsystem in the
computer itself. Every action that the computer performs is implemented
by the control unit.
The computer’s control unit uses inputs from a quartz clock to
derive timing and control signals that regulate the actions in the system
that are associated with each computer instruction. The control
unit also accepts, as input, control signals generated by other devices
in the computer system.
The other primary impact of the quartz clock is in making the
construction of multiphase clocks a simple task. A multiphase
clock is a clock that has several outputs that oscillate at the same
frequency. These outputs may generate electric waveforms of different
shapes or of the same shape, which makes them useful for
various applications. It is common for a computer to incorporate a
single-phase quartz clock that is used to generate a two-phase
clock.
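As a rough picture of how a two-phase clock can be derived from a single-phase source, the toy generator below routes alternate pulses of one input train to two outputs; both outputs then oscillate at the same (halved) frequency but never overlap. This is only an illustrative model, not a description of any real control-unit circuit (Python).

    def two_phase(ticks):
        """Derive two non-overlapping phases from a single pulse train.

        Even-numbered input pulses drive phase 1 and odd-numbered pulses drive
        phase 2, so the two outputs share one frequency but are offset in time.
        """
        phase1 = [1 if i % 2 == 0 else 0 for i in range(ticks)]
        phase2 = [1 if i % 2 == 1 else 0 for i in range(ticks)]
        return phase1, phase2

    p1, p2 = two_phase(8)
    print(p1)   # [1, 0, 1, 0, 1, 0, 1, 0]
    print(p2)   # [0, 1, 0, 1, 0, 1, 0, 1]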
09 June 2009
Dolby noise reduction
The invention: Electronic device that reduces noise in sound
recordings, improving both the signal-to-noise ratio and the quality
of recorded music.
The people behind the invention:
Emil Berliner (1851-1929), a German inventor
Ray Milton Dolby (1933- ), an American inventor
Thomas Alva Edison (1847-1931), an American inventor
Phonographs, Tapes, and Noise Reduction
The main use of record, tape, and compact disc players is to listen
to music, although they are also used to listen to recorded speeches,
messages, and various forms of instruction. Thomas Alva Edison
invented the first sound-reproducing machine, which he called the
“phonograph,” and patented it in 1877. Ten years later, a practical
phonograph (the “gramophone”) was marketed by a German, Emil
Berliner. Phonographs recorded sound by using diaphragms that
vibrated in response to sound waves and controlled needles that cut
grooves representing those vibrations into the first phonograph records,
which in Edison’s machine were metal cylinders and in Berliner’s
were flat discs. The recordings were then played by reversing
the recording process: Placing a needle in the groove in the recorded
cylinder or disk caused the diaphragm to vibrate, re-creating the
original sound that had been recorded.
In the 1920’s, electrical recording methods developed that produced
higher-quality recordings, and then, in the 1930’s, stereophonic
recording was developed by various companies, including
the British company Electrical and Musical Industries (EMI). Almost
simultaneously, the technology of tape recording was developed.
By the 1940’s, long-playing stereo records and tapes were
widely available. As recording techniques improved further, tapes
became very popular, and by the 1960’s, they had evolved into both
studio master recording tapes and the audio cassettes used by consumers.
Hisses and other noises associated with sound recording and its
environment greatly diminished the quality of recorded music. In
1967, Ray Dolby invented a noise reducer, later named “Dolby A,”
that could be used by recording studios to improve tape signal-to-noise
ratios. Several years later, his “Dolby B” system, designed
for home use, became standard equipment in all types of playback
machines. Later, Dolby and others designed improved noise-suppression
systems.
Recording and Tape Noise
Sound is made up of vibrations of varying frequencies—sound
waves—that sound recorders can convert into grooves on plastic records,
varying magnetic arrangements on plastic tapes covered
with iron particles, or tiny pits on compact discs. The following discussion
will focus on tape recordings, for which the original Dolby
noise reducers were designed.
Tape recordings are made by a process that converts sound
waves into electrical impulses that cause the iron particles in a tape
to reorganize themselves into particular magnetic arrangements.
The process is reversed when the tape is played back. In this process,
the particle arrangements are translated first into electrical impulses
and then into sound that is produced by loudspeakers.
Erasing a tape causes the iron particles to move back into their original
spatial arrangement.
Whenever a recording is made, undesired sounds such as hisses,
hums, pops, and clicks can mask the nuances of recorded sound, annoying
and fatiguing listeners. The first attempts to do away with
undesired sounds (noise) involved making tapes, recording devices,
and recording studios quieter. Such efforts did not, however,
remove all undesired sounds.
Furthermore, advances in recording technology increased the
problem of noise: more sensitive instruments “heard” more noise and
transmitted it to recordings. Such noise is often
caused by the components of the recording system; tape hiss is
an example of such noise. This type of noise is most discernible in
quiet passages of recordings, because loud recorded sounds often
mask it.
Because of the problem of noise in quiet passages of recorded
sound, one early attempt at noise suppression involved the reduction
of noise levels by using “dynaural” noise suppressors. These
devices did not alter the loud portions of a recording; instead, they
reduced the very high and very low frequencies in the quiet passages
in which noise became most audible. The problem with such
devices was, however, that removing the high and low frequencies
could also affect the desirable portions of the recorded sound.
These suppressors could not distinguish desirable from undesirable
sounds. As recording techniques improved, dynaural noise suppressors caused more and more problems, and their use was finally
discontinued.
Another approach to noise suppression is sound compression
during the recording process. This compression is based on the fact
that most noise remains at a constant level throughout a recording,
regardless of the sound level of a desired signal (such as music). To
carry out sound compression, the lowest-level signals in a recording
are electronically elevated above the sound level of all noise. Musical
nuances can be lost when the process is carried too far, because
the maximum sound level is not increased by devices that use
sound compression. To return the music or other recorded sound to
its normal sound range for listening, devices that “expand” the recorded
music on playback are used. Two potential problems associated
with the use of sound compression and expansion are the difficulty
of matching the two processes and the introduction into the
recording of noise created by the compression devices themselves.
In 1967, Ray Dolby developed Dolby A to solve these problems as
they related to tape noise (but not to microphone signals) in the recording
and playing back of studio master tapes. The system operated
by carrying out ten-decibel compression during recording and
then restoring (noiselessly) the range of the music on playback. This
was accomplished by expanding the sound exactly to its original
range. Dolby A was very expensive and was thus limited to use in recording
studios. In the early 1970’s, however, Dolby invented the less
expensive Dolby B system, which was intended for consumers.
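The compression-and-expansion idea behind systems of this kind can be sketched numerically. In the toy model below, quiet samples are boosted by the ten decibels mentioned above before “recording,” a constant hiss is added to stand in for tape noise, and the boost is then undone on playback, which pushes the hiss down along with it. The threshold and noise level are arbitrary illustrative values, and real Dolby processing splits the signal into frequency bands and is far more sophisticated (Python).

    import random

    BOOST_DB = 10.0                      # low-level boost figure cited above
    BOOST = 10 ** (BOOST_DB / 20)        # 10 dB is roughly a factor of 3.16 in amplitude
    THRESHOLD = 0.1                      # arbitrary level below which samples count as "quiet"

    def compress(signal):
        """Raise quiet samples above the tape's noise floor before recording."""
        return [s * BOOST if abs(s) < THRESHOLD else s for s in signal]

    def add_tape_hiss(signal, level=0.01):
        """Model tape hiss as small random values added to every sample."""
        return [s + random.uniform(-level, level) for s in signal]

    def expand(signal):
        """Undo the boost on playback, attenuating the added hiss as well."""
        return [s / BOOST if abs(s) < THRESHOLD * BOOST else s for s in signal]

    quiet_passage = [0.02 * i for i in range(5)]          # a very quiet musical phrase
    played_back = expand(add_tape_hiss(compress(quiet_passage)))
    print(played_back)   # close to the original, with the hiss reduced by about 10 dB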
Consequences
The development of Dolby A and Dolby B noise-reduction systems
is one of the most important contributions to the high-quality
recording and reproduction of sound. For this reason, Dolby A
quickly became standard in the recording industry. In similar fashion,
Dolby B was soon incorporated into virtually every high-fidelity
stereo cassette deck to be manufactured.
Dolby’s discoveries spurred advances in the field of noise reduction.
For example, the German company Telefunken and the Japanese
companies Sanyo and Toshiba, among others, developed their
own noise-reduction systems. Dolby Laboratories countered by producing an improved system: Dolby C. The competition in the
area of noise reduction continues, and it will continue as long as
changes in recording technology produce new, more sensitive recording
equipment.