09 July 2009
Hearing aid
The invention: Miniaturized electronic amplifier worn inside the
ears of hearing-impaired persons.
The organization behind the invention:
Bell Labs, the research and development arm of the American
Telephone and Telegraph Company
Trapped in Silence
Until the middle of the twentieth century, people who experienced
hearing loss had little hope of being able to hear sounds without the
use of large, awkward, heavy appliances. For many years, the only
hearing aids available were devices known as ear trumpets. The ear
trumpet tried to compensate for hearing loss by funneling more
sound energy into the ear canal: a wide, bell-like mouth, similar
to the bell of a musical trumpet, narrowed to a tube that
the user placed in his or her ear. Ear trumpets helped a little, but as
purely passive collectors they could not truly amplify sound.
Beginning in the nineteenth century, inventors tried to develop
electrical devices that would serve as hearing aids. The telephone
was actually a by-product of Alexander Graham Bell’s efforts to
make a hearing aid. Following the invention of the telephone, electrical
engineers designed hearing aids that employed telephone
technology, but those hearing aids were only a slight improvement
over the old ear trumpets. They required large, heavy battery packs
and used a carbon microphone similar to the transmitter in a telephone.
More sensitive than purely physical devices such as the ear trumpet,
they could transmit a wider range of sounds but could not amplify
them as effectively as electronic hearing aids now do.
Transistors Make Miniaturization Possible
Two types of hearing aids exist: body-worn and head-worn.
Body-worn hearing aids permit the widest range of sounds to be
heard, but because of the devices’ larger size, many hearing-impaired persons do not like to wear them. Head-worn hearing
aids, especially those worn completely in the ear, are much less conspicuous.
In addition to in-ear aids, the category of head-worn hearing
aids includes both hearing aids mounted in eyeglass frames and
those worn behind the ear.
All hearing aids, whether head-worn or body-worn, consist of
four parts: a microphone to pick up sounds, an amplifier, a receiver,
and a power source. The microphone gathers sound waves and converts
them to electrical signals; the amplifier boosts, or increases,
those signals; and the receiver then converts the signals back into
sound waves. In effect, the hearing aid is a miniature radio. After
the receiver converts the signals back to sound waves, those waves
are directed into the ear canal through an earpiece or ear mold. The
ear mold generally is made of plastic and is custom fitted from an
impression taken from the prospective user’s ear.
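The four-part chain just described can be modeled in a few lines of code. This is a minimal sketch of the microphone-amplifier-receiver path; the gain figure and sample values are illustrative assumptions, not specifications of any real device:

```python
# Illustrative model of the hearing-aid signal chain described above:
# microphone -> amplifier -> receiver. The 30 dB gain is an assumption.

def microphone(sound_wave):
    """Convert sound-pressure samples into electrical-signal samples."""
    return [pressure * 1.0 for pressure in sound_wave]  # idealized 1:1 transduction

def amplifier(signal, gain_db=30.0):
    """Boost the electrical signal; 30 dB is an illustrative figure."""
    gain = 10 ** (gain_db / 20.0)  # decibels -> linear amplitude factor
    return [sample * gain for sample in signal]

def receiver(signal):
    """Convert the amplified signal back into sound-pressure samples."""
    return list(signal)  # idealized lossless conversion

incoming = [0.001, -0.002, 0.0015]  # a faint sound arriving at the microphone
heard = receiver(amplifier(microphone(incoming)))
print(heard)  # the same waveform, about 31.6 times larger in amplitude
```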
Effective head-worn hearing aids could not be built until compact
transistorized circuits became practical in the early 1950’s. The same invention—
the transistor—that led to small portable radios and tape
players allowed engineers to create miniaturized, inconspicuous
hearing aids. Depending on the degree of amplification required,
the amplifier in a hearing aid contains three or more transistors.
Transistors first replaced vacuum tubes in devices such as radios
and phonographs, and then engineers realized that they could be
used in devices for the hearing-impaired.
The research at Bell Labs that led to the invention of the transistor
arose out of military research during World War II. The vacuum tubes
then used to amplify electronic signals—in radar installations, for
example—were bulky, fragile because they were made of
blown glass, and gave off high levels of heat in use.
Transistors, however, made it possible to build solid-state, integrated
circuits. These are made from crystals of semiconductors such as germanium
or gallium arsenide and therefore are much less fragile than glass. They
are also extremely small (in fact, some integrated circuits are barely
visible to the naked eye) and give off very little heat during use.
As noted, the number of transistors in a hearing aid depends upon
the amount of amplification required. The first transistor is the most
important for the listener in terms of the quality of sound heard. If the
frequency response is set too high—that is, if the device is too sensitive—the listener will be bothered by distracting background noise.
Theoretically, there is no limit on the amount of amplification that a
hearing aid can be designed to provide, but there are practical limits.
The higher the amplification, the more power is required to operate
the hearing aid. This is why body-worn hearing aids can convey a
wider range of sounds than head-worn devices can. It is the power
source—not the electronic components—that is the limiting factor. A
body-worn hearing aid includes a larger battery pack than can be
used with a head-worn device. Indeed, despite advances in battery
technology, the power requirements of a head-worn hearing aid are
such that a 1.4-volt battery that could power a wristwatch for several
years will last only a few days in a hearing aid.
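That contrast follows directly from the arithmetic of battery capacity and current draw. In the rough estimate below, every figure is an assumed, order-of-magnitude value:

```python
# Order-of-magnitude battery-life comparison; every figure is an assumption.
capacity_mah = 90.0      # small 1.4-volt zinc-air button cell, assumed capacity

watch_draw_ma = 0.002    # a quartz wristwatch draws a few microamps
aid_draw_ma = 1.0        # a hearing-aid amplifier draws on the order of a milliamp

watch_life_years = capacity_mah / watch_draw_ma / 24 / 365
aid_life_days = capacity_mah / aid_draw_ma / 24

print(f"wristwatch: about {watch_life_years:.1f} years")   # ~5 years
print(f"hearing aid: about {aid_life_days:.1f} days")      # ~4 days
```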
Consequences
The invention of the electronic hearing aid made it possible for
many hearing-impaired persons to participate in a hearing world.
Prior to the invention of the hearing aid, hearing-impaired children
often were unable to participate in routine school activities or function
effectively in mainstream society. Instead of being able to live at
home with their families and enjoy the same experiences that were
available to other children their age, often they were forced to attend
special schools operated by the state or by charities.
Hearing-impaired people were singled out as being different and
were limited in their choice of occupations. Although not every
hearing-impaired person can be helped to hear with a hearing aid—
particularly in cases of total hearing loss—the electronic hearing aid
has ended restrictions for many hearing-impaired people. Hearing-impaired
children are now included in public school classes, and
hearing-impaired adults can now pursue occupations from which
they were once excluded.
Today, many deaf and hearing-impaired persons have chosen to
live without the help of a hearing aid. They believe that they are not
disabled but simply different, and they point out that their “disability”
often allows them to appreciate and participate in life in unique
and positive ways. For them, the use of hearing aids is a choice, not a
necessity. For those who choose, hearing aids make it possible to
participate in the hearing world.
Hard disk
The invention: A large-capacity, permanent magnetic storage device
built into most personal computers.
The people behind the invention:
Alan Shugart (1930- ), an engineer who first developed the
floppy disk
Philip D. Estridge (1938?-1985), the director of IBM’s product
development facility
Thomas J. Watson, Jr. (1914-1993), the chief executive officer of
IBM
The Personal Oddity
When the International Business Machines (IBM) Corporation
introduced its first microcomputer, called simply the IBM PC (for
“personal computer”), the occasion was less a dramatic invention
than the confirmation of a trend begun some years before. A number
of companies had introduced microcomputers before IBM; one
of the best known at that time was Apple Computer’s Apple II, for
which software for business and scientific use was quickly developed.
Nevertheless, the microcomputer was quite expensive and
was often looked upon as an oddity, not as a useful tool.
Under the leadership of Thomas J. Watson, Jr., IBM, which had
previously focused on giant mainframe computers, decided to develop
the PC. A design team headed by Philip D. Estridge was assembled
in Boca Raton, Florida, and it quickly developed its first,
pacesetting product. It is an irony of history that IBM anticipated
selling only one hundred thousand or so of these machines, mostly
to scientists and technically inclined hobbyists. Instead, IBM’s product
sold exceedingly well, and its design parameters, as well as its
operating system, became standards.
The earliest microcomputers used a cassette recorder as a means
of mass storage; a floppy disk drive capable of storing approximately
160 kilobytes of data was initially offered only as an option.
While home hobbyists were accustomed to using a cassette recorder for storage purposes, such a system was far too slow and awkward
for use in business and science. As a result, virtually every IBM PC
sold was equipped with at least one 5.25-inch floppy disk drive.
Memory Requirements
All computers require memory of two sorts in order to carry out
their tasks. One type of memory is main memory, or random access
memory (RAM), which is used by the computer’s central processor
to store data it is using while operating. The type of memory used
for this function is typically built of silicon-based integrated circuits,
which have the advantage of speed (allowing the processor to fetch or
store data quickly) but the disadvantage of losing, or
“forgetting,” their data when the electric current is turned off. Further,
such memory generally is relatively expensive.
To reduce costs, another type of memory—long-term storage
memory, known also as “mass storage”—was developed. Mass
storage devices include magnetic media (tape or disk drives) and
optical media (such as the compact disc read-only memory, or CD-ROM).
While the speed with which data may be retrieved from or
stored in such devices is rather slow compared to the central processor’s
speed, a disk drive—the most common form of mass storage
used in PCs—can store relatively large amounts of data quite inexpensively.
Early floppy disk drives (so called because the magnetically
treated material on which data are recorded is made of a very flexible
plastic) held 160 kilobytes of data using only one side of the
magnetically coated disk (about eighty pages of normal, double-spaced,
typewritten information). Later developments increased
storage capacities to 360 kilobytes by using both sides of the disk
and later, with increasing technological ability, 1.44 megabytes (millions
of bytes). In contrast, mainframe computers, which are typically
connected to large and expensive tape drive storage systems,
could store gigabytes (thousands of megabytes) of information.
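The “eighty pages” figure follows from simple arithmetic: a double-spaced typewritten page holds roughly 2,000 characters, and each character takes one byte. A quick sanity check (the bytes-per-page figure is the only assumption):

```python
# Sanity check of the floppy capacities quoted above.
BYTES_PER_PAGE = 2_000   # assumed: ~2,000 characters per double-spaced page

for name, kilobytes in [("single-sided floppy", 160),
                        ("double-sided floppy", 360),
                        ("high-density floppy", 1_440)]:
    pages = kilobytes * 1_024 // BYTES_PER_PAGE
    print(f"{name}: {kilobytes} KB ~ {pages} pages of text")
# single-sided floppy: 160 KB ~ 81 pages  (the "eighty pages" above)
```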
While such capacities seem large, the needs of business and scientific
users soon outstripped available space. Since even the mailing
list of a small business or a scientist’s mathematical model of a
chemical reaction easily could require greater storage potential than early PCs allowed, the need arose for a mass storage device that
could accommodate very large files of data.
The answer was the hard disk drive, also known as a “fixed disk
drive,” reflecting the fact that the disk itself is not only rigid but also
permanently installed inside the machine. As early as 1955, IBM had
conceived of the fixed hard magnetic disk as a means of storing
computer data, and, under the direction of Alan Shugart in the
1960’s, the floppy disk was developed as well.
As the engineers of IBM’s facility in Boca Raton refined the idea
of the original PC to design the new IBM PC XT, it became clear that
chief among the needs of users was the availability of large-capacity
storage devices. The decision was made to add a 10-megabyte
hard disk drive to the PC. On March 8, 1983, less than two years after
the introduction of its first PC, IBM introduced the PC XT. Like
the original, it was an evolutionary design, not a revolutionary one.
The inclusion of a hard disk drive, however, signaled that mass storage
devices in personal computers had arrived.
Consequences
Above all else, any computer provides a means for storing, ordering,
analyzing, and presenting information. If the personal computer
is to become the information appliance some have suggested
it will be, the ability to manipulate very large amounts of data will
be of paramount concern. Hard disk technology was greeted enthusiastically
in the marketplace, and demand for hard drives has grown steadily
as their quality has increased and their prices have dropped.
It is easy to understand one reason for such eager acceptance:
convenience. Floppy-bound computer users find themselves frequently
changing (or “swapping”) their disks in order to allow programs
to find the data they need. Moreover, there is a limit to how
much data a single floppy disk can hold. The advantage of a hard
drive is that it allows users to keep seemingly unlimited amounts of
data and programs stored in their machines and readily available.
Also, hard disk drives are capable of finding files and transferring
their contents to the processor much more quickly than a
floppy drive. A user may thus create exceedingly large files, keep them on hand at all times, and manipulate data more quickly than
with a floppy. Finally, while a hard drive is a slow substitute for
main memory, it allows users to enjoy the benefits of larger memories
at significantly lower cost.
The introduction of the PC XT with its 10-megabyte hard drive
was a milestone in the development of the PC. Over the next two decades,
the size of computer hard drives increased dramatically. By
2001, few personal computers were sold with hard drives with less
than three gigabytes of storage capacity, and hard drives with more
than thirty gigabytes were becoming the standard. Indeed, for less
money than a PC XT cost in the mid-1980’s, one could buy a fully
equipped computer with a hard drive holding sixty gigabytes—a
storage capacity equivalent to six thousand 10-megabyte hard drives.
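That closing equivalence is easy to verify (using decimal prefixes and ignoring formatting overhead):

```python
# How many 10-megabyte PC XT drives equal one 60-gigabyte drive?
xt_drive_mb = 10
modern_drive_gb = 60

equivalent = modern_drive_gb * 1_000 // xt_drive_mb
print(equivalent)  # 6000 -- the "six thousand" drives cited above
```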
Gyrocompass
The invention: The first practical navigational device that enabled
ships and submarines to stay on course without relying on the
earth’s unreliable magnetic poles.
The people behind the invention:
Hermann Anschütz-Kaempfe (1872-1931), a German inventor
and manufacturer
Jean-Bernard-Léon Foucault (1819-1868), a French experimental
physicist and inventor
Elmer Ambrose Sperry (1860-1930), an American engineer and
inventor
From Toys to Tools
A gyroscope consists of a rapidly spinning wheel mounted in a
frame that enables the wheel to tilt freely in any direction. The
wheel’s angular momentum allows it to maintain its “attitude”
even when the whole device is turned or rotated.
These devices have been used to solve problems arising in such
areas as sailing and navigation. For example, a gyroscope aboard a
ship maintains its orientation even while the ship is rolling. Among
other things, this allows the extent of the roll to be measured accurately.
Moreover, the spin axis of a free gyroscope can be adjusted to
point toward true north. It will (with some exceptions) stay that
way despite changes in the direction of a vehicle in which it is
mounted. Gyroscopic effects were employed in the design of various
objects long before the theory behind them was formally
known. A classic example is a child’s top, which balances, seemingly
in defiance of gravity, as long as it continues to spin. Boomerangs
and flying disks derive stability and accuracy from the spin
imparted by the thrower. Likewise, the accuracy of rifles improved
when barrels were manufactured with internal spiral grooves that
caused the emerging bullet to spin.
In 1852, the French inventor Jean-Bernard-Léon Foucault built
the first gyroscope, a measuring device consisting of a rapidly spinning
wheel mounted within concentric rings that allowed the wheel to move freely about two axes. This device, like the Foucault pendulum,
was used to demonstrate the rotation of the earth around its
axis, since the spinning wheel, which is not fixed, retains its orientation
in space while the earth turns under it. The gyroscope had a related
and interesting property: if its spin axis was constrained to stay
roughly horizontal, the earth’s rotation caused the axis to swing
gradually until it pointed north-south, parallel to the earth’s axis. It
is this property that enables the gyroscope to be used as a compass.
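This north-seeking behavior can be made quantitative. In a standard textbook simplification (not derived in the original article), a gyrocompass whose spin axis is held horizontal at latitude φ oscillates about the meridian like a pendulum:

```latex
% Small-oscillation model of a gyrocompass (a textbook simplification).
% H = I\omega is the rotor's spin angular momentum, \Omega_E the earth's
% rotation rate, \phi the latitude, A the moment of inertia about the
% vertical axis, and \alpha the azimuth error measured from true north.
A\,\ddot{\alpha} + H\,\Omega_E\cos\phi\,\alpha = 0
\qquad\Longrightarrow\qquad
T = 2\pi\sqrt{\frac{A}{H\,\Omega_E\cos\phi}}
```

The restoring effect is proportional to cos φ, so it weakens toward the poles; this is one of the “exceptions” alluded to above and a genuine difficulty for navigation under the polar ice cap.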
When Magnets Fail
In 1904, Hermann Anschütz-Kaempfe, a German manufacturer
working in the Kiel shipyards, became interested in the navigation
problems of submarines used in exploration under the polar ice cap.
By 1905, efficient working submarines were a reality, and it was evident
to all major naval powers that submarines would play an increasingly
important role in naval strategy.
Submarine navigation posed problems, however, that could not
be solved by instruments designed for surface vessels. A submarine
needs to orient itself under water in three dimensions; it has no automatic
horizon with respect to which it can level itself. Navigation
by means of stars or landmarks is impossible when the submarine is
submerged. Furthermore, in an enclosed metal hull containing machinery
run by electricity, a magnetic compass is worthless. To a
lesser extent, increasing use of metal, massive moving parts, and
electrical equipment had also rendered the magnetic compass unreliable
in conventional surface battleships.
It made sense for Anschütz-Kaempfe to use the gyroscopic effect
to design an instrument that would enable a ship to maintain its
course while under water. Yet producing such a device would not be
easy. First, it needed to be suspended in such a way that it was free to
turn in any direction with as little mechanical resistance as possible.
At the same time, it had to be able to resist the inevitable pitching and
rolling of a vessel at sea. Finally, a continuous power supply was required
to keep the gyroscopic wheels spinning at high speed.
The original Anschütz-Kaempfe gyrocompass consisted of a pair
of spinning wheels driven by an electric motor. The device was connected
to a compass card visible to the ship’s navigator. Motor, gyroscope, and suspension system were mounted in a frame that allowed
the apparatus to remain stable despite the pitch and roll of the ship.
In 1906, the German navy installed a prototype of the Anschütz-
Kaempfe gyrocompass on the battleship Undine and subjected it to
exhaustive tests under simulated battle conditions, sailing the ship
under forced draft and suddenly reversing the engines, changing the
position of heavy turrets and other mechanisms, and firing heavy
guns. In conditions under which a magnetic compass would have
been worthless, the gyrocompass proved a satisfactory navigational
tool, and the results were impressive enough to convince the German
navy to undertake installation of gyrocompasses in submarines and
heavy battleships, including the battleship Deutschland.
Elmer Ambrose Sperry, a New York inventor intimately associated
with pioneer electrical development, was independently working on a design for a gyroscopic compass at about the same time.
In 1907, he patented a gyrocompass consisting of a single rotor
mounted within two concentric shells, suspended by fine piano
wire from a frame mounted on gimbals. The rotor of the Sperry
compass operated in a vacuum, which enabled it to rotate more
rapidly. The Sperry gyrocompass was in use on larger American
battleships and submarines on the eve of World War I (1914-1918).
Impact
The ability to navigate submerged submarines was of critical
strategic importance in World War I. Initially, the German navy
had an advantage both in the number of submarines at its disposal
and in their design and maneuverability. The German U-boat fleet
declared all-out war on Allied shipping, and, although their efforts
to blockade England and France were ultimately unsuccessful, the
tremendous toll they inflicted helped maintain the German position
and prolong the war. To a submarine fleet operating throughout
the Atlantic and in the Caribbean, as well as in near-shore European
waters, effective long-distance navigation was critical.
Gyrocompasses were standard equipment on submarines and
battleships and, increasingly, on larger commercial vessels during
World War I, World War II (1939-1945), and the period between the
wars. The devices also found their way into aircraft, rockets, and
guided missiles. Although the compasses were made more accurate
and easier to use, the fundamental design differed little from that invented
by Anschütz-Kaempfe.
05 July 2009
Geothermal power
The invention: Energy generated from the earth’s natural hot
springs.
The people behind the invention:
Prince Piero Ginori Conti (1865-1939), an Italian nobleman and
industrialist
Sir Charles Parsons (1854-1931), an English engineer
B. C. McCabe, an American businessman
Developing a Practical System
The first successful use of geothermal energy was at Larderello in
northern Italy. The Larderello geothermal field, located near the city
of Pisa about 240 kilometers northwest of Rome, contains many hot
springs and fumaroles (steam vents). In 1777, these springs were
found to be rich in boron, and in 1818, Francesco de Larderel began
extracting the useful mineral borax from them. Shortly after 1900,
Prince Piero Ginori Conti, director of the Larderello borax works,
conceived the idea of using the steam for power production. An experimental
electrical power plant was constructed at Larderello in
1904 to provide electric power to the borax plant. After this initial
experiment proved successful, a 250-kilowatt generating station
was installed in 1913 and commercial power production began.
As the Larderello field grew, additional geothermal sites throughout
the region were prospected and tapped for power. Power production
grew steadily until the 1940’s, when production reached
130 megawatts; however, the Larderello power plants were destroyed
late in World War II (1939-1945). After the war, the generating
plants were rebuilt, and they were producing more than 400
megawatts by 1980.
The Larderello power plants encountered many of the technical
problems that were later to concern other geothermal facilities. For
example, hydrogen sulfide in the steam was highly corrosive to copper,
so the Larderello power plant used aluminum for electrical connections
much more than did conventional power plants of the time. Also, the low pressure of the steam in early wells at Larderello
presented problems. The first installations simply used the steam to drive
a generator and vented the spent steam into the atmosphere. A system
of this sort, called a “noncondensing system,” is useful for small
generators but not efficient enough to produce large amounts of power.
Most steam engines derive power not only from the pressure of
the steam but also from the vacuum created when the steam is condensed
back to water. Geothermal systems that generate power
from condensation, as well as direct steam pressure, are called “condensing
systems.” Most large geothermal generators are of this
type. Condensation of geothermal steam presents special problems
not present in ordinary steam engines: the steam carries other gases
that do not condense. Instead of producing a vacuum, condensing such
contaminated steam results in only a limited drop
in pressure and, consequently, very low efficiency.
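The value of condensing can be illustrated with an idealized calculation. The sketch below uses the Carnot limit with assumed round-number temperatures, not Larderello data:

```python
# Idealized (Carnot-limit) comparison of the two system types.
# All temperatures are assumed round numbers, not measurements.

def carnot_efficiency(t_hot_k, t_cold_k):
    """Upper bound on the efficiency of any heat engine (kelvin)."""
    return 1.0 - t_cold_k / t_hot_k

steam_k = 460.0        # ~187 deg C geothermal steam (assumed)
atmosphere_k = 373.0   # noncondensing: steam vented at its boiling point
condenser_k = 320.0    # condensing: vacuum condenser at ~47 deg C (assumed)

print(f"noncondensing limit: {carnot_efficiency(steam_k, atmosphere_k):.0%}")  # ~19%
print(f"condensing limit:    {carnot_efficiency(steam_k, condenser_k):.0%}")   # ~30%
```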
Initially, the operators of Larderello tried to use the steam to heat
boilers that would, in turn, generate pure steam. Eventually, a device
was developed that removed most of the contaminating gases from
the steam. Although later wells at Larderello and other geothermal
fields produced steam at greater pressure, these engineering innovations
improved the efficiency of any geothermal power plant.
Expanding the Idea
In 1913, the English engineer Sir Charles Parsons proposed drilling
an extremely deep (12-kilometer) hole to tap the earth’s deep
heat. Power from such a deep hole would not come from natural
steam as at Larderello but would be generated by pumping fluid
into the hole and generating steam (as hot as 500 degrees Celsius) at
the bottom. In modern terms, Parsons proposed tapping “hot dry-rock”
geothermal energy. (No such plant has been commercially operated
yet, but research is being actively pursued in several countries.)
The first use of geothermal energy in the United States was for direct
heating. In 1890, the municipal water company of Boise, Idaho,
began supplying hot water from a geothermal well. Water was
piped from the well to homes and businesses along appropriately
named Warm Springs Avenue. At its peak, the system served more than four hundred customers, but as cheap natural gas became
available, the number declined.
Although Larderello was the first successful geothermal electric
power plant, the modern era of geothermal electric power began
with the opening of the Geysers Geothermal Field in California.
Early attempts began in the 1920’s, but it was not until 1955 that B.
C. McCabe, a Los Angeles businessman, leased 14.6 square kilometers
in the Geysers area and founded the Magma Power Company.
The first 12.5-megawatt generator was installed at the Geysers in
1960, and production increased steadily from then on. The Geysers
surpassed Larderello as the largest producing geothermal field in
the 1970’s, and more than 1,000 megawatts were being generated by
1980. By the end of 1980, geothermal plants had been installed in
thirteen countries, with a total capacity of almost 2,600 megawatts,
and projects with a total capacity of more than 15,000 megawatts
were being planned in more than twenty countries.
Impact
Geothermal power has many attractive features. Because the
steam is naturally heated and under pressure, generating equipment
can be simple, inexpensive, and quickly installed. Equipment
and installation costs are offset by savings in fuel. It is economically
practical to install small generators, a fact that makes geothermal
plants attractive in remote or underdeveloped areas. Most important
to a world faced with a variety of technical and environmental
problems connected with fossil fuels, geothermal power does not
deplete fossil fuel reserves, produces little pollution, and contributes
little to the greenhouse effect.
Despite its attractive features, geothermal power has some limitations.
Geologic settings suitable for easy geothermal power production
are rare; there must be a hot rock or magma body close to
the surface. Although it is technically possible to pump water from
an external source into a geothermal well to generate steam, most
geothermal sites require a plentiful supply of natural underground
water that can be tapped as a source of steam. In contrast, fossil-fuel
generating plants can be at any convenient location.
Genetically engineered insulin
The invention: Artificially manufactured human insulin (Humulin)
as a medication for people suffering from diabetes.
The people behind the invention:
Irving S. Johnson (1925- ), an American zoologist who was
vice president of research at Eli Lilly Research Laboratories
Ronald E. Chance (1934- ), an American biochemist at Eli
Lilly Research Laboratories
What Is Diabetes?
Carbohydrates (sugars and related chemicals) are the main food
and energy source for humans. In wealthy countries such as the
United States, more than 50 percent of the food people eat is made
up of carbohydrates, while in poorer countries the carbohydrate
content of diets is higher, from 70 to 90 percent.
Normally, most carbohydrates that a person eats are used (or metabolized)
quickly to produce energy. Carbohydrates not needed for
energy are either converted to fat or stored as a glucose polymer
called “glycogen.” Most adult humans carry about a pound of body
glycogen; this substance is broken down to produce energy when it
is needed.
Certain diseases prevent the proper metabolism and storage of
carbohydrates. The most common of these diseases is diabetes mellitus,
usually called simply “diabetes.” It is found in more than seventy
million people worldwide. Diabetic people cannot produce or
use enough insulin, a hormone secreted by the pancreas. When their
condition is not treated, the eyes may deteriorate to the point of
blindness. The kidneys may stop working properly, blood vessels
may be damaged, and the person may fall into a coma and die. In
fact, diabetes is the third most common killer in the United States.
Most of the problems surrounding diabetes are caused by high levels
of glucose in the blood. Cataracts often form in diabetics, as excess
glucose is deposited in the lens of the eye.
Important symptoms of diabetes include constant thirst, excessive urination, and large amounts of sugar in the blood and in the
urine. The glucose tolerance test (GTT) is the best way to find out
whether a person is suffering from diabetes. People given a GTT are
first told to fast overnight. In the morning their blood glucose level
is measured; then they are asked to drink about a fourth of a pound
of glucose dissolved in water. During the next four to six hours, the
blood glucose level is measured repeatedly. In nondiabetics, glucose
levels do not rise above a certain amount during a GTT, and the
level drops quickly as the glucose is assimilated by the body. In diabetics,
the blood glucose levels rise much higher and do not drop as
quickly. The extra glucose then shows up in the urine.
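The GTT’s decision logic can be sketched in a few lines. The numeric cutoffs below are illustrative assumptions drawn from common modern practice, not values given in this article:

```python
# Sketch of glucose-tolerance-test interpretation. The mg/dL cutoffs are
# illustrative assumptions, not values given in the article.

def interpret_gtt(fasting_mg_dl, two_hour_mg_dl):
    if fasting_mg_dl >= 126 or two_hour_mg_dl >= 200:
        return "glucose stays high: consistent with diabetes"
    if two_hour_mg_dl >= 140:
        return "impaired glucose tolerance"
    return "normal: glucose assimilated quickly"

print(interpret_gtt(fasting_mg_dl=90, two_hour_mg_dl=110))   # normal
print(interpret_gtt(fasting_mg_dl=140, two_hour_mg_dl=250))  # diabetes
```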
Treating Diabetes
Until the 1920’s, diabetes could be controlled only through a diet
very low in carbohydrates, and this treatment was not always successful.
Then Sir Frederick G. Banting and Charles H. Best found a
way to prepare purified insulin from animal pancreases and gave it
to patients. This gave diabetics their first chance to live a fairly normal
life. Banting and his coworkers won the 1923 Nobel Prize in
Physiology or Medicine for their work.
The usual treatment for diabetics became regular shots of insulin.
Drug companies took the insulin from the pancreases of cattle and
pigs slaughtered by the meat-packing industry. Unfortunately, animal
insulin has two disadvantages. First, about 5 percent of diabetics
are allergic to it and can have severe reactions. Second, the world
supply of animal pancreases goes up and down depending on how
much meat is being bought. Between 1970 and 1975, the supply of
insulin fell sharply as people began to eat less red meat, yet the
numbers of diabetics continued to increase. So researchers began to
look for a better way to supply insulin.
Studying pancreases of people who had donated their bodies to
science, researchers found that human insulin did not cause allergic
reactions. Scientists realized that it would be best to find a chemical
or biological way to prepare human insulin, and pharmaceutical
companies worked hard toward this goal. Eli Lilly and Company
was the first to succeed, and on May 14, 1982, it filed a new drug application
with the Food and Drug Administration (FDA) for the human insulin preparation it named “Humulin.”
Humulin is made by genetic engineering. Irving S. Johnson, who
worked on the development of Humulin, described Eli Lilly’s method
for producing Humulin. The common bacterium Escherichia coli
is used. Two strains of the bacterium are produced by genetic engineering:
The first strain is used to make a protein called an “A
chain,” and the second strain is used to make a “B chain.” After the
bacteria are harvested, the A and B chains are removed and purified
separately. Then the two chains are combined chemically. When
they are purified once more, the result is Humulin, which has been
proved by Ronald E. Chance and his Eli Lilly coworkers to be chemically,
biologically, and physically identical to human insulin.
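The two-strain scheme can be summarized schematically. Insulin’s chain lengths (A chain: 21 amino acids; B chain: 30) are established facts; everything else in this sketch is a schematic illustration, not Eli Lilly’s actual process:

```python
# Schematic of the two-chain Humulin process described above.
# Chain lengths are facts about human insulin; the function names and
# "purify" steps are symbolic stand-ins, not a real protocol.

def ferment(strain):
    """Each engineered E. coli strain expresses one insulin chain."""
    product = {"strain_A": "A chain (21 amino acids)",
               "strain_B": "B chain (30 amino acids)"}
    return product[strain]

def purify(material):
    """Stand-in for the separate purification steps."""
    return f"purified {material}"

a_chain = purify(ferment("strain_A"))  # harvested and purified separately
b_chain = purify(ferment("strain_B"))
# The chains are then combined chemically (disulfide bonds) and repurified.
humulin = purify(f"{a_chain} + {b_chain}, chemically combined")
print(humulin)
```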
Consequences
The FDA and other regulatory agencies around the world approved
genetically engineered human insulin in 1982. Humulin
does not trigger allergic reactions, and its supply does not fluctuate.
It has brought an end to the fear that there would be a worldwide
shortage of insulin.
Humulin is important as well in being the first genetically engineered
industrial chemical. It began an era in which such advanced
technology could be a source for medical drugs, chemicals used in
farming, and other important industrial products. Researchers hope
that genetic engineering will help in the understanding of cancer
and other diseases, and that it will lead to ways to grow enough
food for a world whose population continues to rise.