
04 February 2009

Autochrome plate

The invention: The first commercially successful process in which a single exposure in a regular camera produced a color image.

The people behind the invention:

Louis Lumière (1864-1948), a French inventor and scientist
Auguste Lumière (1862-1954), an inventor, physician, physicist, chemist, and botanist
Alphonse Seyewetz, a skilled scientist and assistant of the Lumière brothers

Adding Color

In 1882, Antoine Lumière, painter, pioneer photographer, and father of Auguste and Louis, founded a factory to manufacture photographic gelatin dry plates. After the Lumière brothers took over the factory's management, they expanded production to include roll film and printing papers in 1887 and also carried out joint research that led to fundamental discoveries and improvements in photographic development and other aspects of photographic chemistry.

While recording and reproducing the actual colors of a subject was not possible at the time of photography's inception (about 1822), the first practical photographic process, the daguerreotype, was able to render both striking detail and good tonal quality. The desire to produce full-color images, or at least some approximation of realistic color, therefore occupied the minds of many photographers and inventors, including Louis and Auguste Lumière, throughout the nineteenth century.

As researchers set out to reproduce the colors of nature, the first process that met with any practical success was based on the additive color theory expounded by the Scottish physicist James Clerk Maxwell in 1861. He held that any color can be created by adding together red, green, and blue light in certain proportions. In his experiments, Maxwell took three negatives through screens or filters of these additive primary colors. He then made slides from these negatives and projected the slides through the same filters onto a screen so that their images were superimposed. He found that it was possible in this way to reproduce the exact colors as well as the form of an object. Unfortunately, since colors could not be printed in their tonal relationships on paper before the end of the nineteenth century, Maxwell's experiment had no immediate practical result. Although Frederick E. Ives of Philadelphia, in 1892, optically united three transparencies so that they could be viewed in proper alignment by looking through a peephole, viewing the transparencies was still not as simple as looking at a black-and-white photograph.

The Autochrome Plate

The first practical method of making a single photograph that could be viewed without any apparatus was devised by John Joly of Dublin in 1893. Instead of taking three separate pictures through three colored filters, he took one negative through one filter minutely checkered with microscopic areas colored red, green, and blue. The filter and the plate were exactly the same size and were placed in contact with each other in the camera. After the plate was developed, a transparency was made, and the filter was permanently attached to it. The black-and-white areas of the picture allowed more or less light to shine through the filters; if viewed from a proper distance, the colored lights blended to form the various colors of nature. In sum, the principles of additive color and their potential applications in photography had been discovered and experimentally demonstrated by the end of the nineteenth century.
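Maxwell's additive principle is easy to restate numerically. The short Python sketch below is only an illustration of that idea, not part of the original account; the weights fed to the function are invented values standing in for the brightness recorded through red, green, and blue filters.

# A minimal sketch of additive color mixing as Maxwell demonstrated it:
# three primary-colored projections superimposed on one screen. The weights
# are hypothetical exposures through red, green, and blue filters.

def additive_mix(r_weight, g_weight, b_weight):
    """Superimpose red, green, and blue primaries scaled by their weights."""
    red, green, blue = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
    return tuple(r_weight * red[i] + g_weight * green[i] + b_weight * blue[i]
                 for i in range(3))

print(additive_mix(1.0, 1.0, 0.0))  # red + green light -> yellow (1.0, 1.0, 0.0)
print(additive_mix(1.0, 1.0, 1.0))  # all three primaries -> white (1.0, 1.0, 1.0)
print(additive_mix(0.8, 0.5, 0.1))  # an invented mixture -> a warm orange tone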
Yet a practical process of color photography utilizing these principles could not be produced until a truly panchromatic emulsion was available, since making a color print required being able to record the primary colors of the light cast by the subject. Louis and Auguste Lumière, along with their research associate Alphonse Seyewetz, succeeded in creating a single-plate process based on this method in 1903. It was introduced commercially as the autochrome plate in 1907 and was soon in use throughout the world. This process is one of many that take advantage of the limited resolving power of the eye: grains or dots too small to be recognized as separate units are accepted in their entirety and, to the sense of vision, appear as continuous tones and colors.

Impact

While the autochrome plate remained one of the most popular color processes until the 1930's, it was eventually superseded by subtractive color processes. Leopold Mannes and Leopold Godowsky, both musicians and amateur photographic researchers who eventually joined forces with Eastman Kodak research scientists, did the most to perfect the Lumière brothers' advances in making color photography practical. Their collaboration led to the introduction in 1935 of Kodachrome, a subtractive process in which a single sheet of film is coated with three layers of emulsion, each sensitive to one primary color. A single exposure produces a color image.

Color photography is now commonplace. The amateur market is enormous, and the snapshot is almost always taken in color. Commercial and publishing markets use color extensively. Even photography as an art form, which was practiced in black and white through most of its history, has turned increasingly to color.

Atomic-powered ship

The invention: The world's first atomic-powered merchant ship demonstrated a peaceful use of atomic power.

The people behind the invention:

Otto Hahn (1879-1968), a German chemist
Enrico Fermi (1901-1954), an Italian American physicist
Dwight D. Eisenhower (1890-1969), president of the United States, 1953-1961

Splitting the Atom

In 1938, Otto Hahn, working at the Kaiser Wilhelm Institute for Chemistry, discovered that bombarding uranium atoms with neutrons causes them to split into two smaller, lighter atoms. A large amount of energy is released during this process, which is called "fission." When one kilogram of uranium is fissioned, it releases the same amount of energy as the burning of 3,000 metric tons of coal. The fission process also releases new neutrons. Enrico Fermi suggested that these new neutrons could be used to split more uranium atoms and produce a chain reaction. Fermi and his assistants produced the first human-made chain reaction at the University of Chicago on December 2, 1942. Although the first use of this new energy source was the atomic bombs that were used to defeat Japan in World War II, it was later realized that a carefully controlled chain reaction could produce useful energy. The submarine Nautilus, launched in 1954, used the energy released from fission to make steam to drive its turbines.

U.S. President Dwight David Eisenhower proposed his "Atoms for Peace" program in December, 1953. On April 25, 1955, President Eisenhower announced that the "Atoms for Peace" program would be expanded to include the design and construction of an atomic-powered merchant ship, and he signed the legislation authorizing the construction of the ship in 1956.

Savannah's Design and Construction

A contract to design an atomic-powered merchant ship was awarded to George G. Sharp, Inc., on April 4, 1957. The ship was to carry approximately one hundred passengers (later reduced to sixty to lower the ship's cost) and 10,886 metric tons of cargo while making a speed of 21 knots, about 39 kilometers per hour. The ship was to be 181 meters long and 23.7 meters wide. The reactor was to provide steam for a 20,000-horsepower turbine that would drive the ship's propeller. Most of the ship's machinery was similar to that of existing ships; the major difference was that the steam came from a reactor instead of a coal- or oil-burning boiler.

New York Shipbuilding Corporation of Camden, New Jersey, won the contract to build the ship on November 16, 1957. States Marine Lines was selected in July, 1958, to operate the ship. It was christened Savannah and launched on July 21, 1959. The name Savannah was chosen to honor the first ship to use steam power while crossing an ocean; this earlier Savannah was launched in New York City in 1818. Ships are normally launched long before their construction is complete, and the new Savannah was no exception. It was finally turned over to States Marine Lines on May 1, 1962. After extensive testing by its operators and delays caused by labor union disputes, it began its maiden voyage from Yorktown, Virginia, to Savannah, Georgia, on August 20, 1962. The original budget for design and construction was $35 million, but by this time the actual cost was about $80 million.

Savannah's nuclear reactor was fueled with about 7,000 kilograms (15,400 pounds) of uranium. Uranium consists of two forms, or "isotopes": uranium 235, which can fission, and uranium 238, which cannot.
Naturally occurring uranium is less than 1 percent uranium 235, but the uranium in Savannah's reactor had been enriched to contain nearly 5 percent of this isotope. Thus, there was less than 362 kilograms of usable uranium in the reactor. The ship was able to travel about 800,000 kilometers on this initial fuel load.

Three and a half million kilograms of water per hour flowed through the reactor under a pressure of 5,413 kilograms per square centimeter. It entered the reactor at 298.8 degrees Celsius and left at 317.7 degrees Celsius. Water leaving the reactor passed through a heat exchanger called a "steam generator," in which the reactor water flowed through many small tubes. Heat passed through the walls of these tubes and boiled water outside them. About 113,000 kilograms of steam per hour were produced in this way at a pressure of 1,434 kilograms per square centimeter and a temperature of 240.5 degrees Celsius.

Labor union disputes dogged Savannah's early operations, and it did not start its first trans-Atlantic crossing until June 8, 1964. Savannah was never a moneymaker. Even in the 1960's, the trend was toward much bigger ships. It was announced that the ship would be retired in August, 1967, but that did not happen. It was finally put out of service in 1971. Later, Savannah was placed on permanent display at Charleston, South Carolina.

Consequences

Following the United States' lead, Germany and Japan built atomic-powered merchant ships, and the Soviet Union is believed to have built several atomic-powered icebreakers. Germany's Otto Hahn, named for the scientist who first split the atom, began service in 1968, and Japan's Mutsu was under construction as Savannah retired. Numerous studies conducted in the early 1970's claimed to prove that large atomic-powered merchant ships were more profitable than oil-fired ships of the same size. Several conferences devoted to this subject were held, but no new ships were built. Although the U.S. Navy has continued to use reactors to power submarines, aircraft carriers, and cruisers, atomic power has not been widely used for merchant-ship propulsion. Labor union problems such as those that haunted Savannah, high insurance costs, and high construction costs are probably the reasons. Public opinion after the reactor accidents at Three Mile Island (in 1979) and Chernobyl (in 1986) is also a factor.
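As a rough check on the fuel figures quoted above, the Python sketch below works through the enrichment and coal-equivalence arithmetic. It is only a back-of-the-envelope illustration: the 5 percent enrichment and the one-kilogram-to-3,000-tons-of-coal equivalence come from the text, and the rest is simple multiplication.

# Back-of-the-envelope check of the Savannah fuel figures quoted in the text.

fuel_load_kg = 7_000                 # total uranium loaded into the reactor
enrichment = 0.05                    # "nearly 5 percent" uranium 235
coal_tons_per_kg_fissioned = 3_000   # equivalence stated earlier in the article

usable_u235_kg = fuel_load_kg * enrichment
print(f"Fissionable U-235 in the core: about {usable_u235_kg:.0f} kg")
# Roughly 350 kg, consistent with the "less than 362 kilograms" figure above.

# If all of that U-235 were fissioned (an upper bound; in practice only part of
# it is consumed before refueling), the coal-equivalent energy would be roughly:
coal_equivalent_tons = usable_u235_kg * coal_tons_per_kg_fissioned
print(f"Coal equivalent (upper bound): about {coal_equivalent_tons:,.0f} metric tons")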

Atomic clock

The invention: A clock using the ammonia molecule as its oscillator that surpasses mechanical clocks in long-term stability, precision, and accuracy.

The person behind the invention:

Harold Lyons (1913-1984), an American physicist

Time Measurement

The accurate measurement of basic quantities, such as length, electrical charge, and temperature, is the foundation of science. The results of such measurements dictate whether a scientific theory is valid or must be modified or even rejected. Many experimental quantities change over time, but time cannot be measured directly. It must be measured by the occurrence of an oscillation or rotation, such as the twenty-four-hour rotation of the earth. For centuries, the rising of the Sun was sufficient as a timekeeper, but the need for more precision and accuracy increased as human knowledge grew. Progress in science can be measured by how accurately time has been measured at any given point.

In 1714, the British government, after the disastrous sinking of a British fleet in 1707 because of a miscalculation of longitude, offered a reward of 20,000 pounds for the invention of a ship's chronometer (a very accurate clock). Latitude is determined by the altitude of the Sun above the southern horizon at noon local time, but the determination of longitude requires an accurate clock set at Greenwich, England, time. The difference between the ship's clock and the local sun time gives the ship's longitude. This permits the accurate charting of new lands, such as those that were being explored in the eighteenth century. John Harrison, an English instrument maker, eventually built a chronometer that was accurate to within one minute after five months at sea. He received his reward from Parliament in 1765.

Atomic Clocks Provide Greater Stability

A clock contains four parts: energy to keep the clock operating, an oscillator, an oscillation counter, and a display. A grandfather clock has weights that fall slowly, providing energy that powers the clock's gears. The pendulum, a weight on the end of a rod, swings back and forth (oscillates) with a regular beat. The length of the rod determines the pendulum's period of oscillation. The pendulum is attached to gears that count the oscillations and drive the display hands.

There are limits to a mechanical clock's accuracy and stability. The length of the rod changes as the temperature changes, so the period of oscillation changes. Friction in the gears changes as they wear out. Making the oscillator smaller and faster increases a clock's accuracy, precision, and stability. Accuracy is how close the clock is to telling the actual time. Stability indicates how the accuracy changes over time, while precision is the number of accurate decimal places in the display. A grandfather clock, for example, might be accurate to ten seconds per day and precise to a second, while having a stability of minutes per week.

Applying an electrical signal to a quartz crystal will make the crystal oscillate at its natural vibration frequency, which depends on its size, its shape, and the way in which it was cut from the larger crystal. Since the faster a clock's oscillator vibrates, the more precise the clock, a crystal-based clock is more precise than a large pendulum clock. Keeping the crystal at a constant temperature keeps the clock accurate, but the crystal eventually loses its stability as it slowly wears out.
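One way to see why faster oscillators give more precise clocks is to note that a counting clock cannot resolve time more finely than one oscillator period. The Python sketch below compares the oscillators mentioned in this article; the pendulum and quartz-watch frequencies are typical values assumed for illustration, while the ammonia and cesium figures are the ones quoted in the text.

# Timing resolution of a counting clock is roughly one oscillator period.
# Ammonia and cesium frequencies are quoted in the article; the pendulum and
# quartz figures are common illustrative values (assumptions).

oscillators = {
    "one-second pendulum": 1.0,        # beats per second
    "quartz watch crystal": 32_768.0,  # a common crystal frequency (assumption)
    "ammonia molecule": 23_870e6,      # 23,870 megacycles per second
    "cesium 133": 9_192_631_770.0,     # cycles per second
}

for name, hz in oscillators.items():
    period = 1.0 / hz                  # smallest countable time step, in seconds
    print(f"{name:22s} {hz:16,.0f} Hz -> resolution ~ {period:.3e} s")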
In 1948, Harold Lyons and his colleagues at the National Bureau of Standards (NBS) constructed the first atomic clock, which used the ammonia molecule as its oscillator. Such a clock is called an atomic clock because, when it operates, a nitrogen atom vibrates. The pyramid-shaped ammonia molecule is composed of a triangular base with a hydrogen atom at each corner and a nitrogen atom at the top of the pyramid. The nitrogen atom does not remain at the top; if it absorbs radio waves of the right energy and frequency, it passes through the base to produce an upside-down pyramid and then moves back to the top. This oscillation occurs at a frequency of 23,870 megacycles (1 megacycle equals 1 million cycles) per second.

Lyons's clock was actually a quartz-ammonia clock, since the signal from a quartz crystal produced radio waves of the crystal's frequency that were fed into an ammonia-filled tube. If the radio waves were at 23,870 megacycles, the ammonia molecules absorbed the waves; a detector sensed this and sent no correction signal to the crystal. If the radio waves deviated from 23,870 megacycles, the ammonia did not absorb them, the detector sensed the unabsorbed radio waves, and a correction signal was sent to the crystal. The atomic clock's accuracy and precision were comparable to those of a quartz-based clock—one part in a hundred million—but the atomic clock was more stable because molecules do not wear out.

The atomic clock's accuracy was later improved by using cesium 133 atoms as the source of oscillation. These atoms oscillate at 9,192,631,770 plus or minus 20 cycles per second. They are accurate to a billionth of a second per day and precise to nine decimal places, and a cesium clock is stable for years. Future developments in atomic clocks may see accuracies of one part in a million billion.

Impact

The development of stable, very accurate atomic clocks has far-reaching implications for many areas of science. Global positioning satellites send signals to receivers on ships and airplanes; by timing the signals, the receiver's position is calculated to within several meters of its true location. Chemists are interested in finding the speed of chemical reactions, and atomic clocks are used for this purpose. The atomic clock led to the development of the maser (an acronym for microwave amplification by stimulated emission of radiation), which is used to amplify weak radio signals, and the maser led to the development of the laser, a light-frequency maser that has more uses than can be listed here.

Atomic clocks have been used to test Einstein's theories of relativity, which state that time on a moving clock, as observed by a stationary observer, slows down, and that a clock slows down near a large mass (because of the effects of gravity). Under normal conditions of low velocities and low mass, the changes in time are very small, but atomic clocks are accurate and stable enough to detect even these small changes. In such experiments, three sets of clocks were used: one group remained on Earth, one was flown west around the earth on a jet, and the last set was flown east. By comparing the times of the in-flight sets with the stationary set, the predicted changes in time were observed and the theories were verified.
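The quartz-ammonia correction scheme described above is essentially a feedback loop: the ammonia absorption line serves as the reference, and the detector steers the quartz oscillator back onto it. The Python sketch below is purely schematic; the tolerance, gain, and starting offset are invented numbers used only to show the idea.

# Schematic sketch of the quartz-ammonia feedback loop described above.
# The ammonia absorption line at 23,870 megacycles is the reference; whenever
# the detector sees unabsorbed radio waves, a correction nudges the quartz
# frequency back toward the line. Tolerance, gain, and drift are invented.

REFERENCE_MHZ = 23_870.0   # ammonia absorption frequency (from the article)
TOLERANCE_MHZ = 0.5        # assumed width within which absorption occurs
GAIN = 0.5                 # fraction of the error corrected each step (assumption)

quartz_mhz = 23_872.3      # quartz oscillator starts slightly off frequency

for step in range(8):
    error = quartz_mhz - REFERENCE_MHZ
    if abs(error) <= TOLERANCE_MHZ:
        correction = 0.0               # ammonia absorbs the waves: no correction
    else:
        correction = -GAIN * error     # unabsorbed waves detected: steer back
    quartz_mhz += correction
    print(f"step {step}: quartz = {quartz_mhz:.3f} MHz, correction = {correction:+.3f}")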

03 February 2009

Atomic bomb

The invention: A weapon of mass destruction created during World War II that used nuclear fission to create explosions equivalent to thousands of tons of trinitrotoluene (TNT).

The people behind the invention:

J. Robert Oppenheimer (1904-1967), an American physicist
Leslie Richard Groves (1896-1970), an American engineer and Army general
Enrico Fermi (1901-1954), an Italian American nuclear physicist
Niels Bohr (1885-1962), a Danish physicist

Energy on a Large Scale

The first evidence of uranium fission (the splitting of uranium atoms) was observed by the German chemists Otto Hahn and Fritz Strassmann in Berlin at the end of 1938. When these scientists discovered radioactive barium impurities in neutron-irradiated uranium, they wrote to their colleague Lise Meitner in Sweden. She and her nephew, the physicist Otto Robert Frisch, calculated the large release of energy that would be generated during the nuclear fission of certain elements. This result was reported to Niels Bohr in Copenhagen.

Meanwhile, similar fission energies were measured by Frédéric Joliot and his associates in Paris, who demonstrated the release of up to three additional neutrons during nuclear fission. It was recognized immediately that if neutron-induced fission released enough additional neutrons to cause at least one more such fission, a self-sustaining chain reaction would result, yielding energy on a large scale.

While visiting the United States from January to May of 1939, Bohr derived a theory of fission with John Wheeler of Princeton University. This theory led Bohr to predict that the common isotope uranium 238 (which constitutes 99.3 percent of naturally occurring uranium) would require fast neutrons for fission, but that the rarer uranium 235 would fission with neutrons of any energy. This meant that uranium 235 would be far more suitable for use in any sort of bomb. Uranium bombardment in a cyclotron led to the discovery of plutonium in 1940 and the discovery that plutonium 239 was fissionable—and thus potentially good bomb material. Uranium 238 was then used to "breed" (create) plutonium 239, which was then separated from the uranium by chemical methods.

During 1942, the Manhattan District of the Army Corps of Engineers was formed under General Leslie Richard Groves, who contracted with E. I. Du Pont de Nemours and Company to construct three secret atomic cities at a total cost of $2 billion. At Oak Ridge, Tennessee, twenty-five thousand workers built a 1,000-kilowatt reactor as a pilot plant. A second city of sixty thousand inhabitants was built at Hanford, Washington, where three huge reactors and remotely controlled plutonium-extraction plants were completed in early 1945.

A Sustained and Awesome Roar

Studies of fast-neutron reactions for an atomic bomb were brought together in Chicago in June of 1942 under the leadership of J. Robert Oppenheimer. He soon became a personal adviser to Groves, who built for Oppenheimer a laboratory for the design and construction of the bomb at Los Alamos, New Mexico. In 1943, Oppenheimer gathered two hundred of the best scientists in what was by now being called the Manhattan Project to live and work in this third secret city.

Two bomb designs were developed. A gun-type bomb called "Little Boy" used 15 kilograms of uranium 235 in a 4,500-kilogram cylinder about 2 meters long and 0.5 meter in diameter, in which a uranium bullet could be fired into three uranium target rings to form a critical mass.
An implosion-type bomb called "Fat Man" had a 5-kilogram spherical core of plutonium about the size of an orange, which could be squeezed inside a 2,300-kilogram sphere about 1.5 meters in diameter by properly shaped explosives to make the mass critical in the shorter time required for the faster plutonium fission process.

A flat scrub region 200 kilometers southeast of Alamogordo, called Trinity, was chosen for the test site, and observer bunkers were built about 10 kilometers from a 30-meter steel tower. On July 13, 1945, one of the plutonium bombs was assembled at the site; the next morning, it was raised to the top of the tower. Two days later, on July 16, after a short thunderstorm delay, the bomb was detonated at 5:30 a.m. The resulting implosion initiated a chain reaction of nearly 60 fission generations in about a microsecond. It produced an intense flash of light and a fireball that expanded to a diameter of about 600 meters in two seconds, rose to a height of more than 12 kilometers, and formed an ominous mushroom shape. Forty seconds later, an air blast hit the observer bunkers, followed by a sustained and awesome roar. Measurements confirmed that the explosion had the power of 18.6 kilotons of trinitrotoluene (TNT), nearly four times the predicted value.

Impact

On March 9, 1945, 325 American B-29 bombers dropped 2,000 tons of incendiary bombs on Tokyo, resulting in 100,000 deaths from the fire storms that swept the city. Nevertheless, the Japanese military refused to surrender, and American military plans called for an invasion of Japan, with estimates of up to a half million American casualties, plus as many as 2 million Japanese casualties. On August 6, 1945, after authorization by President Harry S. Truman, the B-29 Enola Gay dropped the uranium Little Boy bomb on Hiroshima at 8:15 a.m. On August 9, the remaining plutonium Fat Man bomb was dropped on Nagasaki. Approximately 100,000 people died at Hiroshima (out of a population of 400,000), and about 50,000 more died at Nagasaki. Japan offered to surrender on August 10, and after a brief attempt by some army officers to rebel, an official announcement by Emperor Hirohito was broadcast on August 15.

The development of the thermonuclear fusion bomb, in which hydrogen isotopes could be fused together by the force of a fission explosion to produce helium nuclei and almost unlimited energy, had been proposed early in the Manhattan Project by the physicist Edward Teller. Little effort was invested in the hydrogen bomb until after the surprise explosion of a Soviet atomic bomb in September, 1949, which had been built with information stolen from the Manhattan Project. After three years of development under Teller's guidance, the first successful H-bomb was exploded on November 1, 1952, obliterating the island of Elugelab in the Marshall Islands of the Pacific. The arms race then accelerated until each side had stockpiles of thousands of H-bombs.

The Manhattan Project opened a Pandora's box of nuclear weapons that would plague succeeding generations, but it contributed more than merely weapons. About 19 percent of the electrical energy in the United States is generated by about 110 nuclear reactors producing more than 100,000 megawatts of power. More than 400 reactors in thirty countries provide 300,000 megawatts of the world's power. Reactors have made possible the widespread use of radioisotopes in medical diagnosis and therapy.
Many of the techniques for producing and using these isotopes were developed by the hundreds of nuclear physicists who switched to the field of radiation biophysics after the war, ensuring that the benefits of their wartime efforts would reach the public.
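As a rough illustration of the chain-reaction arithmetic behind the Trinity figures quoted above, the Python sketch below grows the number of fissions generation by generation and converts the total into TNT equivalent. The multiplication factor of about 2.5 and the 200 MeV released per fission are standard round numbers assumed here for illustration; neither appears in the article, and the result is an order-of-magnitude estimate only.

# Order-of-magnitude sketch of a fission chain reaction, using round numbers:
# each fission releases about 200 MeV and triggers about 2.5 further fissions
# (both standard textbook values, assumed here).

MEV_TO_JOULES = 1.602e-13
ENERGY_PER_FISSION_J = 200 * MEV_TO_JOULES   # ~3.2e-11 J per fission
TNT_KILOTON_J = 4.184e12                     # energy of one kiloton of TNT

k = 2.5            # assumed effective number of fissions caused by each fission
generations = 60   # "nearly 60 fission generations" (from the article)

total_fissions = sum(k ** g for g in range(generations + 1))
energy_j = total_fissions * ENERGY_PER_FISSION_J
print(f"total fissions ~ {total_fissions:.2e}")
print(f"yield ~ {energy_j / TNT_KILOTON_J:.1f} kilotons of TNT")
# Same order of magnitude as the 18.6 kilotons measured at Trinity; the exact
# figure is extremely sensitive to the assumed value of k.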

27 January 2009

Assembly line

The invention: A manufacturing technique pioneered in the automobile industry by Henry Ford that lowered production costs and helped bring automobile ownership within the reach of millions of Americans in the early twentieth century.

The people behind the invention:

Henry Ford (1863-1947), an American carmaker
Eli Whitney (1765-1825), an American inventor
Elisha King Root (1808-1865), the developer of division of labor
Oliver Evans (1755-1819), the inventor of power conveyors
Frederick Winslow Taylor (1856-1915), an efficiency engineer

A Practical Man

Henry Ford built his first "horseless carriage" by hand in his home workshop in 1896. In 1903, the Ford Motor Company was born. Ford's first product, the Model A, sold for less than one thousand dollars, while other cars at that time were priced at five to ten thousand dollars each. When Ford and his partners tried, in 1905, to sell a more expensive car, sales dropped. Then, in 1907, Ford decided that the Ford Motor Company would build "a motor car for the great multitude." It would be called the Model T.

The Model T came out in 1908 and was everything that Henry Ford said it would be. It was a low-priced (about $850), practical car that came in one color only: black. In the twenty years during which the Model T was built, the basic design never changed. Yet the price of the Model T, or "Tin Lizzie," as it was affectionately called, dropped over the years to less than half that of the original. As the price dropped, sales increased, and the Ford Motor Company quickly became the world's largest automobile manufacturer. The last of more than 15 million Model T's was made in 1927. Although it looked and drove almost exactly like the first Model T, the two automobiles were built in entirely different ways: the first was custom-built, while the last came off an assembly line.

At first, Ford had built his cars the way everyone else did: one at a time. Skilled mechanics would work on a car from start to finish, while helpers and runners brought parts to these highly paid craftsmen as they were needed. After finishing one car, the mechanics and their helpers would begin the next.

The Quest for Efficiency

Custom-built products are acceptable when demand is small and buyers are willing to pay high labor costs. This was not the case with the automobile. Ford realized that in order to make a large number of quality cars at a low price, he had to find a more efficient way to build cars. To do this, he looked to the past and to the work of others. He found four ideas: interchangeable parts, continuous flow, division of labor, and elimination of wasted motion.

Eli Whitney, the inventor of the cotton gin, was the first person to use interchangeable parts successfully in mass production. In 1798, the United States government asked Whitney to make several thousand muskets in two years. Instead of finding and hiring gunsmiths to make the muskets by hand, Whitney used most of his time and money to design and build special machines that could make large numbers of identical parts—one machine for each part needed to build a musket. These tools, and others Whitney made for holding, measuring, and positioning the parts, made it easy for semiskilled, and even unskilled, workers to build a large number of muskets.
Production can be made more efficient by carefully arranging the different stages of production to create a "continuous flow." Ford borrowed this idea from at least two places: the meat-packing houses of Chicago and an automatic grain mill run by Oliver Evans. Ford's idea for a moving assembly line came from Chicago's great meat-packing houses of the late 1860's. There, the carcasses of animals were moved along an overhead rail past a number of workers, each of whom made a certain cut or handled one part of the packing job. This meant that many animals could be butchered and packaged in a single day. Ford looked to Oliver Evans for an automatic conveyor system. In 1783, Evans had designed and operated an automatic grain mill that could be run by only two workers. As one worker poured grain into a funnel-shaped container, called a "hopper," at one end of the mill, a second worker filled sacks with flour at the other end. Everything in between was done automatically, as Evans's conveyors passed the grain through the different steps of the milling process without any help.

The idea of "division of labor" is simple: when one complicated job is divided into several easier jobs, some things can be made faster, with fewer mistakes, by workers who need fewer skills than before. Elisha King Root had used this principle to make the famous Colt "Six-Shooter." In 1849, Root went to work for Samuel Colt at his Connecticut factory and proved to be a manufacturing genius. By dividing the work into very simple steps, with each step performed by one worker, Root was able to make many more guns in much less time. Before Ford applied Root's idea to the making of engines, it took one worker one day to make one engine. By breaking down the complicated job of making an automobile engine into eighty-four simpler jobs, Ford was able to make the process much more efficient. By assigning one person to each job, Ford's company was able to make 352 engines per day—an increase of more than 400 percent.

Frederick Winslow Taylor has been called the "original efficiency expert." His idea was that inefficiency was caused by wasted time and wasted motion, so he studied ways to eliminate wasted motion. He proved that, in the long run, doing a job too quickly was as bad as doing it too slowly. "Correct speed is the speed at which men can work hour after hour, day after day, year in and year out, and remain continuously in good health," he said. Taylor also studied ways to streamline workers' movements and in this way was able to keep wasted motion to a minimum.

Impact

The changeover from custom production to mass production was an evolution rather than a revolution. Henry Ford applied the four basic ideas of mass production slowly and with care, testing each new idea before it was used. In 1913, the first moving assembly line for automobiles was being used to make Model T's. Ford was able to make his Tin Lizzies faster than ever, and his competitors soon followed his lead. He had succeeded in making it possible for millions of people to buy automobiles.

Ford's work gave a new push to the Industrial Revolution. It showed Americans that mass production could be used to improve quality, cut the cost of making an automobile, and improve profits. In fact, the Model T was so profitable that in 1914 Ford was able to double the minimum daily wage of his workers, so that they too could afford to buy Tin Lizzies.
Although Americans account for only about 6 percent of the world's population, they now own about 50 percent of its wealth. There are more than twice as many radios in the United States as there are people. The roads are crowded with more than 180 million automobiles. Homes are filled with the sounds and sights emitted by more than 150 million television sets. Never have the people of one nation owned so much. Where did all these products—radios, cars, television sets—come from? The answer is industry, which still depends on the methods developed by Henry Ford.
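The engine-assembly figures quoted earlier give a feel for what division of labor bought Ford. The Python sketch below simply restates that arithmetic; the assumption that one worker staffed each of the eighty-four jobs follows the article's description and is used here only for illustration.

# Throughput arithmetic for Ford's engine assembly, using the article's figures.
# Assumption: one worker per job, so 84 workers staff the 84 simplified jobs.

workers = 84
craft_output_per_worker_per_day = 1   # one worker built one engine per day
line_output_per_day = 352             # output after division of labor

craft_output_per_day = workers * craft_output_per_worker_per_day
print(f"Craft method, {workers} workers: {craft_output_per_day} engines/day")
print(f"Division of labor, same workers: {line_output_per_day} engines/day")
print(f"Output per worker rose by a factor of "
      f"{line_output_per_day / craft_output_per_day:.1f}")
# Roughly a fourfold rise, the gain the article describes as "more than 400 percent."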

25 January 2009

Aspartame

The invention: An artificial sweetener with a comparatively natural taste, widely used in carbonated beverages.

The people behind the invention:

Arthur H. Hayes, Jr. (1933- ), a physician and commissioner of the U.S. Food and Drug Administration (FDA)
James M. Schlatter (1942- ), an American chemist
Michael Sveda (1912- ), an American chemist and inventor
Ludwig Frederick Audrieth (1901- ), an American chemist and educator
Ira Remsen (1846-1927), an American chemist and educator
Constantin Fahlberg (1850-1910), a German chemist

Sweetness Without Calories

People have sweetened food and beverages since before recorded history. The most widely used sweetener is sugar, or sucrose. The only real drawback to the use of sucrose is that it is a nutritive sweetener: in addition to adding a sweet taste, it adds calories. Because sucrose is readily absorbed by the body, an excessive amount can be life-threatening to diabetics. This fact alone made the development of nonsucrose sweeteners attractive.

There are three common nonsucrose sweeteners in use around the world: saccharin, cyclamates, and aspartame. Saccharin was the first of this group to be discovered, in 1879, when Constantin Fahlberg synthesized it, building on the previous experimental work of Ira Remsen with toluene (derived from petroleum). Saccharin was found to be three hundred to five hundred times as sweet as sugar, although some people could detect a bitter aftertaste. In 1944, the chemical family of cyclamates was discovered by Ludwig Frederick Audrieth and Michael Sveda. Although these compounds are only thirty to eighty times as sweet as sugar, they have no detectable aftertaste. By the mid-1960's, cyclamates had replaced saccharin as the leading nonnutritive sweetener in the United States. Although cyclamates are still in use throughout the world, in October, 1969, the FDA removed them from the list of approved food additives because of tests that indicated possible health hazards.

A Political Additive

Aspartame is the latest artificial sweetener derived from natural ingredients—in this case, two amino acids, one from milk and one from bananas. Discovered by accident in 1965 by the American chemist James M. Schlatter when he licked his fingers during an experiment, aspartame is 180 times as sweet as sugar. In 1974, the FDA approved its use as a sugar replacement in dry foods such as gum and cereal. Shortly after its approval for this limited application, the FDA held public hearings on safety concerns raised by John W. Olney, a professor of neuropathology at Washington University in St. Louis. There was some indication that aspartame, when combined with the common food additive monosodium glutamate, caused brain damage in children. These fears were confirmed, but the risk of brain damage was limited to a small percentage of individuals with a rare genetic disorder. At this point, the public debate took a political turn: Senator William Proxmire charged FDA Commissioner Alexander M. Schmidt with public misconduct. This controversy resulted in aspartame being taken off the market in 1975.

In 1981, the new FDA commissioner, Arthur H. Hayes, Jr., reapproved aspartame for use in the same applications: as a tabletop sweetener, as a cold-cereal additive, in chewing gum, and for other miscellaneous uses. In 1983, the FDA approved aspartame for use in carbonated beverages, its largest application to date.
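The relative-sweetness factors quoted above translate directly into how little of each sweetener replaces a given amount of sugar. The Python sketch below works through that conversion; the 10-gram sugar serving and the midpoints chosen for the quoted ranges are illustrative assumptions, not figures from the article.

# How much of each sweetener matches the sweetness of a 10 g serving of sugar,
# using the relative-sweetness factors quoted in the article. The serving size
# and the midpoints of the quoted ranges are assumptions for illustration.

sugar_grams = 10.0
sweetness_vs_sugar = {
    "saccharin": 400,   # quoted as 300 to 500 times as sweet; midpoint used
    "cyclamates": 55,   # quoted as 30 to 80 times as sweet; midpoint used
    "aspartame": 180,   # quoted as 180 times as sweet
}

for name, factor in sweetness_vs_sugar.items():
    equivalent_mg = sugar_grams / factor * 1000
    print(f"{name:10s}: about {equivalent_mg:.0f} mg replaces {sugar_grams:.0f} g of sugar")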
Later safety studies revealed that children with a rare metabolic disease, phenylketonuria, could not ingest this sweetener without severe health risks because of the presence of phenylalanine in aspartame. This condition results in a rapid buildup of phenylalanine in the blood. Laboratories simulated this condition in rats and found that high doses of aspartame inhibited the synthesis of dopamine, a neurotransmitter. Once this happens, an increase in the frequency of seizures can occur. There was no direct evidence, however, that aspartame actually caused seizures in these experiments. Many other compounds are being tested for use as sugar replacements, the sweetest being a relative of aspartame that is seventeen thousand to fifty-two thousand times sweeter than sugar.

Impact

The business fallout from the approval of a new low-calorie sweetener occurred over a short span of time. In 1981, sales of this artificial sweetener by G. D. Searle and Company were $74 million. In 1983, sales rose to $336 million, and they exceeded half a billion dollars the following year. These figures represent sales of more than 2,500 tons of the product; in 1985, 3,500 tons of aspartame were consumed. Clearly, the product's introduction was a commercial success for Searle. During this same period, the percentage of reduced-calorie carbonated beverages containing saccharin declined from 100 percent to 20 percent in an industry that had $4 billion in sales. Consumers overwhelmingly preferred products containing aspartame; the bitter aftertaste of saccharin was rejected in favor of the new, less powerful sweetener.

There is a trade-off in using these products. The FDA found evidence linking both saccharin and cyclamates to an elevated incidence of cancer, and cyclamates were banned in the United States for this reason. Public resistance to this measure caused the agency to back away from its position. The rationale was that, compared with other health risks associated with the consumption of sugar (especially for diabetics and overweight persons), the chance of getting cancer was slight and therefore a risk that many people would choose to ignore. The near-total domination of the sweetener market by aspartame seems to support this assumption.

16 January 2009

Artificial satellite

The invention: Sputnik 1, the first object put into orbit around the earth, which began the exploration of space.

The people behind the invention:

Sergei P. Korolev (1907-1966), a Soviet rocket scientist
Konstantin Tsiolkovsky (1857-1935), a Soviet schoolteacher and the founder of rocketry in the Soviet Union
Robert H. Goddard (1882-1945), an American scientist and the founder of rocketry in the United States
Wernher von Braun (1912-1977), a German who worked on rocket projects
Arthur C. Clarke (1917- ), the author of more than fifty books and the visionary behind telecommunications satellites

A Shocking Launch

In Russian, sputnik means "satellite" or "fellow traveler." On October 4, 1957, Sputnik 1, the first artificial satellite to orbit Earth, was placed into successful orbit by the Soviet Union. The launch of this small aluminum sphere, 0.58 meter in diameter and weighing 83.6 kilograms, opened the doors to the frontiers of space. Orbiting Earth every 96 minutes at 28,962 kilometers per hour, Sputnik 1 came within 215 kilometers of Earth at its closest point and 939 kilometers away at its farthest point. It carried equipment to measure the atmosphere and to experiment with the transmission of electromagnetic waves from space. Equipped with two radio transmitters (at different frequencies) that broadcast for twenty-one days, Sputnik 1 remained in orbit for ninety-two days, until January 4, 1958, when it disintegrated in the atmosphere.

Sputnik 1 was launched using a Soviet intercontinental ballistic missile (ICBM) modified by the Soviet rocket expert Sergei P. Korolev. After the launch of Sputnik 2, less than a month later, Chester Bowles, a former United States ambassador to India and Nepal, wrote: "Armed with a nuclear warhead, the rocket which launched Sputnik 1 could destroy New York, Chicago, or Detroit 18 minutes after the button was pushed in Moscow." Although the launch of Sputnik 1 came as a shock to the general public, it came as no surprise to those who followed rocketry. In June, 1957, the United States Air Force had issued a nonclassified memo stating that there was "every reason to believe that the Russian satellite shot would be made on the hundredth anniversary" of Konstantin Tsiolkovsky's birth.

Thousands of Launches

Rockets have been used since at least the twelfth century, when Europeans and the Chinese were using black powder devices. In 1659, the Polish engineer Kazimir Semenovich published his Roketten für Luft und Wasser (rockets for air and water), which contained a drawing of a three-stage rocket. Rockets were used and perfected for warfare during the nineteenth and twentieth centuries. Nazi Germany's V-2 rocket (thousands of which were launched against England during the closing years of World War II) was the model for American and Soviet rocket designers between 1945 and 1957.

In the Soviet Union, Tsiolkovsky had been thinking and writing about space flight since the last decade of the nineteenth century, and in the United States, Robert H. Goddard had been thinking about and experimenting with rockets since the first decade of the twentieth century. Wernher von Braun had worked on rocket projects for Nazi Germany during World War II, and, as the war was ending in May, 1945, von Braun and several hundred other people involved in German rocket projects surrendered to American troops in Europe. Hundreds of other German rocket experts ended up in the Soviet Union to continue their research.
Tom Bower pointed out in his book The Paperclip Conspiracy: The Hunt for the Nazi Scientists (1987)—so named because American "recruiting officers had identified [Nazi] scientists to be offered contracts by slipping an ordinary paperclip onto their files"—that American rocketry research was helped tremendously by Nazi scientists who switched sides after World War II.

The successful launch of Sputnik 1 convinced people that space travel was no longer simply science fiction. The successful launch of Sputnik 2 on November 3, 1957, carrying the first space traveler, a dog named Laika (who was euthanized in orbit because there were no plans to retrieve her), showed that the launch of Sputnik 1 was only the beginning of greater things to come.

Consequences

After October 4, 1957, the Soviet Union and other nations launched more experimental satellites. On January 31, 1958, the United States sent up Explorer 1, after failing to launch a Vanguard satellite on December 6, 1957. Arthur C. Clarke, most famous for his many books of science fiction, published a technical paper in 1945 entitled "Extra-Terrestrial Relays: Can Rocket Stations Give World-Wide Radio Coverage?" In that paper, he pointed out that a satellite placed in orbit at the correct height and speed above the equator would be able to hover over the same spot on Earth. The placement of three such "geostationary" satellites would allow radio signals to be transmitted around the world. By the 1990's, communications satellites were numerous.

In the first twenty-five years after Sputnik 1 was launched, from 1957 to 1982, more than two thousand objects were placed into various Earth orbits by more than twenty-four nations. On average, something was launched into space every 3.82 days during this twenty-five-year period, all beginning with Sputnik 1.
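Both the Sputnik 1 orbit described at the start of this article and Clarke's geostationary idea follow from Kepler's third law. The short Python sketch below is a rough check rather than part of the original text; it uses standard values for Earth's radius, gravitational parameter, and sidereal day, which do not appear in the article.

# Rough orbital checks using Kepler's third law: T = 2*pi*sqrt(a**3 / mu).
import math

MU_EARTH = 398_600.0      # km^3/s^2, Earth's gravitational parameter (standard value)
R_EARTH = 6_371.0         # km, mean Earth radius (standard value)
SIDEREAL_DAY = 86_164.0   # s, Earth's rotation period (standard value)

# 1) Sputnik 1: perigee 215 km and apogee 939 km altitude, as quoted above.
a_sputnik = (R_EARTH + 215 + R_EARTH + 939) / 2           # semi-major axis, km
period_s = 2 * math.pi * math.sqrt(a_sputnik ** 3 / MU_EARTH)
print(f"Sputnik 1 period ~ {period_s / 60:.1f} minutes")   # ~96 minutes, as quoted

# 2) Geostationary orbit: choose a so the period equals one sidereal day.
a_geo = (MU_EARTH * SIDEREAL_DAY ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
print(f"Geostationary altitude ~ {a_geo - R_EARTH:,.0f} km")  # ~35,800 km above Earth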

08 January 2009

Artificial kidney

The invention: A machine that removes waste products and poisons from the blood when the human kidneys are not working properly.

The people behind the invention:

John Jacob Abel (1857-1938), a pharmacologist and biochemist known as the "father of American pharmacology"
Willem Johan Kolff (1911- ), a Dutch American clinician who pioneered the artificial kidney and the artificial heart

Cleansing the Blood

In the human body, the kidneys are the paired organs that remove waste matter from the bloodstream and send it out of the system as urine. If the kidneys fail to work properly, this cleansing process must be done artificially—for example, by a machine.

John Jacob Abel was the first professor of pharmacology at the Johns Hopkins University School of Medicine. Around 1912, he began to study the by-products of metabolism that are carried in the blood. This work was difficult, he realized, because it was nearly impossible to detect even the tiny amounts of the many substances in blood, and no one had yet developed a method or machine for taking these substances out of the blood. In devising a blood-filtering system, Abel understood that he needed a saline solution and a membrane that would let some substances pass through but not others. Working with Leonard Rowntree and Benjamin B. Turner, he spent nearly two years figuring out how to build a machine that would perform dialysis—that is, remove metabolic by-products from blood. Finally their efforts succeeded, and the first experiments were performed on rabbits and dogs.

In operating the machine, the blood leaving the patient was sent flowing through a celloidin tube that had been wound loosely around a drum. An anticlotting substance (hirudin, taken from leeches) was added to the blood as it flowed through the tube. The drum, which was immersed in a saline and dextrose solution, rotated slowly. As blood flowed through the immersed tubing, urea and other small molecules, but not the plasma proteins or cells, diffused out of the blood across the membrane. The celloidin membranes also allowed oxygen to pass from the saline and dextrose solution into the blood, so that purified, oxygenated blood then flowed back into the arteries.

Abel studied the substances that his machine had removed from the blood and found that they included not only urea but also free amino acids. He quickly realized that his machine could be useful for taking care of people whose kidneys were not working properly. Reporting on his research, he wrote, "In the hope of providing a substitute in such emergencies, which might tide over a dangerous crisis . . . a method has been devised by which the blood of a living animal may be submitted to dialysis outside the body, and again returned to the natural circulation." Abel's machine removed large quantities of urea and other poisonous substances fairly quickly, so that the process, which he called "vividiffusion," could serve as an artificial kidney during cases of kidney failure.

For his physiological research, Abel found it necessary to remove, study, and then replace large amounts of blood from living animals, all without dissolving the red blood cells, which carry oxygen to the body's various parts. He realized that this process, which he called "plasmaphaeresis," would make possible blood banks, where blood could be stored for emergency use.
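Dialysis of the kind Abel demonstrated is often described today with a simple single-compartment model: the solute leaves the blood at a rate proportional to its concentration, so the concentration decays exponentially. The Python sketch below illustrates that model; it is a modern textbook simplification with invented numbers, not an analysis Abel himself performed.

# Single-compartment model of solute removal during dialysis:
#   dC/dt = -(K / V) * C   =>   C(t) = C0 * exp(-K * t / V)
# K is the dialyzer clearance and V the volume the solute is distributed in.
# All numbers below are invented for illustration only.
import math

C0 = 100.0   # starting urea concentration, arbitrary units (assumed)
K = 0.18     # clearance, liters per minute (assumed)
V = 40.0     # distribution volume, liters (assumed)

for minutes in (0, 60, 120, 180, 240):
    concentration = C0 * math.exp(-K * minutes / V)
    print(f"after {minutes:3d} min: urea ~ {concentration:5.1f} (started at {C0:.0f})")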
In 1914, Abel published these two discoveries, vividiffusion and plasmaphaeresis, in a series of three articles in the Journal of Pharmacology and Experimental Therapeutics, and he demonstrated his techniques in London, England, and Groningen, the Netherlands. Though he had suggested that his techniques could be used for medical purposes, he himself was interested mostly in continuing his biochemical research. So he turned to other projects in pharmacology, such as the crystallization of insulin, and never returned to studying vividiffusion.

Refining the Technique

Georg Haas, a German biochemist working in Giessen, Germany, was also interested in dialysis; in 1915, he began to experiment with "blood washing." After reading Abel's 1914 writings, Haas tried substituting collodium for the celloidin that Abel had used as a filtering membrane and using commercially prepared heparin instead of the homemade hirudin Abel had used to prevent blood clotting. He then used this machine on a patient and found that it showed promise, but he knew that many technical problems had to be worked out before the procedure could be used on many patients.

In 1937, Willem Johan Kolff was a young physician at Groningen. Saddened to see patients die from kidney failure, he wanted to find a way to help them. Having heard his colleagues talk about the possibility of using dialysis on human patients, he decided to build a dialysis machine. Kolff knew that cellophane was an excellent membrane for dialyzing and that heparin was a good anticoagulant, but he also realized that his machine would need to be able to treat larger volumes of blood than Abel's and Haas's had. During World War II (1939-1945), with the help of the director of a nearby enamel factory, Kolff built an artificial kidney that was first tried on a patient on March 17, 1943. Between March, 1943, and July 21, 1944, Kolff used his secretly constructed dialysis machines on fifteen patients, of whom only one survived. He published the results of his research in Acta Medica Scandinavica. Even though most of his patients had not survived, he had collected information and developed the technique until he was sure dialysis would eventually work.

Kolff brought machines to Amsterdam and The Hague and encouraged other physicians to try them; meanwhile, he continued to study blood dialysis and to improve his machines. In 1947, he brought improved machines to London and the United States. By the time he reached Boston, however, he had given away all of his machines. He did, however, explain the technique to John P. Merrill, a physician at the Harvard Medical School, who soon became the leading American developer of kidney dialysis and kidney-transplant surgery. Kolff himself moved to the United States, where he became an expert not only in artificial kidneys but also in artificial hearts. He helped develop the Jarvik-7 artificial heart (named for its chief inventor, Robert Jarvik), which was implanted in a patient in 1982.

Impact

Abel's work showed that the blood carried some substances that had not previously been known, and it led to the development of the first dialysis machine for humans. It also encouraged interest in the possibility of organ transplants. After World War II, surgeons had tried to transplant kidneys from one animal to another, but after a few days the recipient began to reject the kidney and die.
In spite of these failures, researchers in Europe and America transplanted kidneys in several patients, and they used artificial kidneys to take care of the patients who were waiting for transplants. In 1954, Merrill—to whom Kolff had demonstrated an artificial kidney—successfully transplanted kidneys in identical twins. After immunosuppressant drugs (used to prevent the body from rejecting newly transplanted tissue) were discovered in 1962, transplantation surgery became much more practical. After kidney transplants became common, the artificial kidney became simply a way of keeping a person alive until a kidney donor could be found.

29 December 2008

Artificial insemination

The invention: Practical techniques for the artificial insemination of farm animals that have revolutionized livestock breeding practices throughout the world.

The people behind the invention:

Lazzaro Spallanzani (1729-1799), an Italian physiologist
Ilya Ivanovich Ivanov (1870-1932), a Soviet biologist
R. W. Kunitsky, a Soviet veterinarian

Reproduction Without Sex

The tale is told of a fourteenth-century Arabian chieftain who sought to improve his mediocre breed of horses. Sneaking into the territory of a neighboring hostile tribe, he stimulated a prize stallion to ejaculate into a piece of cotton. Quickly returning home, he inserted this cotton into the vagina of his own mare, who subsequently gave birth to a high-quality horse. This may have been the first case of "artificial insemination," the technique by which semen is introduced into the female reproductive tract without sexual contact.

The first scientific record of artificial insemination comes from Italy in the 1770's. Lazzaro Spallanzani was one of the foremost physiologists of his time, well known for having disproved the theory of spontaneous generation, which held that living organisms can spring "spontaneously" from lifeless matter. There was some disagreement at that time about the basic requirements for reproduction in animals. It was unclear whether the sex act was necessary for an embryo to develop, or whether it was sufficient that the sperm and eggs come into contact. Spallanzani began by studying animals in which the union of sperm and egg normally takes place outside the body of the female. He stimulated males and females to release their sperm and eggs, then mixed these sex cells in a glass dish. In this way, he produced young frogs, toads, salamanders, and silkworms.

Next, Spallanzani asked whether the sex act was also unnecessary for reproduction in those species in which fertilization normally takes place inside the body of the female. He collected semen that had been ejaculated by a male spaniel and, using a syringe, injected the semen into the vagina of a female spaniel in heat. Two months later, she delivered a litter of three pups, which bore some resemblance to both the mother and the male that had provided the sperm.

It was in animal breeding that Spallanzani's techniques were to have their most dramatic application. In the 1880's, an English dog breeder, Sir Everett Millais, conducted several experiments on artificial insemination. He was interested mainly in obtaining offspring from dogs that would not normally mate with one another because of differences in size. He followed Spallanzani's methods to produce a cross between a short, low basset hound and the much larger bloodhound.

Long-Distance Reproduction

Ilya Ivanovich Ivanov was a Soviet biologist who was commissioned by his government to investigate the use of artificial insemination on horses. Unlike previous workers, who had used artificial insemination to get around certain anatomical barriers to fertilization, Ivanov began using artificial insemination to reproduce thoroughbred horses more effectively. His assistant in this work was the veterinarian R. W. Kunitsky. In 1901, Ivanov founded the Experimental Station for the Artificial Insemination of Horses. As its director, he embarked on a series of experiments to devise the most efficient techniques for breeding these animals. Not content with the demonstration that the technique was scientifically feasible, he wished to ensure further that it could be practiced by Soviet farmers.
If sperm from a male were to be used to impregnate females in another location, potency would have to be maintained for a long time. Ivanov first showed that the secretions from the sex glands were not required for successful insemination; only the sperm itself was necessary. He demonstrated further that if a testicle were removed from a bull and kept cold, the sperm would remain alive. More useful than the preservation of testicles, however, would be preservation of the ejaculated sperm. By adding certain salts to the sperm-containing fluids, and by keeping these at cold temperatures, Ivanov was able to preserve sperm for long periods.

Ivanov also developed instruments to inject the sperm, to hold the vagina open during insemination, and to hold the horse in place during the procedure. In 1910, he wrote a practical textbook with technical instructions for the artificial insemination of horses. He also trained some three hundred veterinary technicians in the use of artificial insemination, and the knowledge he developed quickly spread throughout the Soviet Union. Artificial insemination became the major means of breeding horses.

Until his death in 1932, Ivanov was active in researching many aspects of the reproductive biology of animals. He developed methods to treat reproductive diseases of farm animals and refined methods of obtaining, evaluating, diluting, preserving, and disinfecting sperm. He also began to produce hybrids between wild and domestic animals in the hope of creating new breeds that would be better able to withstand extreme weather conditions and be more resistant to disease. His crosses included hybrids of ordinary cows with aurochs, bison, and yaks, as well as some more exotic crosses of zebras with horses. Ivanov also hoped to use artificial insemination to help preserve species that were in danger of becoming extinct. In 1926, he led an expedition to West Africa to experiment with the hybridization of different species of anthropoid apes.

Impact

The greatest beneficiaries of artificial insemination have been dairy farmers. Some bulls are able to sire genetically superior cows that produce exceptionally large volumes of milk. Under natural conditions, such a bull could father at most a few hundred offspring in its lifetime. Using artificial insemination, a prize bull can inseminate ten to fifteen thousand cows each year. Since frozen sperm may be purchased through the mail, dairy farmers also no longer need to keep dangerous bulls on the farm. Artificial insemination has become the main method of reproduction of dairy cows, with about 150 million cows (as of 1992) produced this way throughout the world.

In the 1980's, artificial insemination gained added importance as a method of breeding rare animals. Animals kept in zoo cages and animals that are unable to take part in normal mating may still produce sperm that can be used to inseminate a female artificially. Some species require specific conditions of housing or diet for normal breeding to occur, conditions not available in all zoos. Such animals can still reproduce using artificial insemination.

17 December 2008

Artificial hormone




The invention: 

Synthesized oxytocin, a small polypeptide hormone
from the pituitary gland whose laboratory synthesis showed how complex
polypeptides and proteins may be synthesized and used in medicine.

The people behind the invention:

Vincent du Vigneaud (1901-1978), an American biochemist and
winner of the 1955 Nobel Prize in Chemistry
Oliver Kamm (1888-1965), an American biochemist
Sir Edward Albert Sharpey-Schafer (1850-1935), an English
physiologist
Sir Henry Hallett Dale (1875-1968), an English physiologist and
winner of the 1936 Nobel Prize in Physiology or Medicine
John Jacob Abel (1857-1938), an American pharmacologist and
biochemist


12 December 2008

Artificial heart

The invention: The first successful artificial heart, the Jarvik-7, has helped to keep patients suffering from otherwise terminal heart disease alive while they await human heart transplants. The people behind the invention: Robert Jarvik (1946- ), the main inventor of the Jarvik-7 William Castle DeVries (1943- ), a surgeon at the University of Utah in Salt Lake City Barney Clark (1921-1983), a Seattle dentist, the first recipient of the Jarvik-7 Early Success The Jarvik-7 artificial heart was designed and produced by researchers at the University of Utah in Salt Lake City; it is named for the leader of the research team, Robert Jarvik. An air-driven pump made of plastic and titanium, it is the size of a human heart. It is made up of two hollow chambers of polyurethane and aluminum, each containing a flexible plastic membrane. The heart is implanted in a human being but must remain connected to an external air pump by means of two plastic hoses. The hoses carry compressed air to the heart, which then pumps the oxygenated blood through the pulmonary artery to the lungs and through the aorta to the rest of the body. The device is expensive, and initially the large, clumsy air compressor had to be wheeled from room to room along with the patient. The device was new in 1982, and that same year Barney Clark, a dentist from Seattle, was diagnosed as having only hours to live. His doctor, cardiac specialist William Castle DeVries, proposed surgically implanting the Jarvik-7 heart, and Clark and his wife agreed. The Food and Drug Administration (FDA), which regulates the use of medical devices, had already given DeVries and his coworkers permission to implant up to seven Jarvik-7 hearts for permanent use. The operation was performed on Clark, and at first it seemed quite successful. Newspapers, radio, and television reported this medical breakthrough: the first time a severely damaged heart had been replaced by a totally artificial heart. It seemed DeVries had proved that an artificial heart could be almost as good as a human heart. Soon after Clark’s surgery, DeVries went on to implant the device in several other patients with serious heart disease. For a time, all of them survived the surgery. As a result, DeVries was offered a position at Humana Hospital in Louisville, Kentucky. Humana offered to pay for the first one hundred implant operations. The Controversy Begins In the three years after DeVries’s operation on Barney Clark, however, doubts and criticism arose. Of the people who by then had received the plastic and metal device as a permanent replacement for their own diseased hearts, three had died (including Clark) and four had suffered serious strokes. The FDA asked Humana Hospital and Symbion (the company that manufactured the Jarvik-7) for complete, detailed histories of the artificial-heart recipients. It was determined that each of the patients who had died or been disabled had suffered from infection. Life-threatening infection, or “foreign-body response,” is a danger with the use of any artificial organ. The Jarvik-7, with its metal valves, plastic body, and Velcro attachments, seemed to draw bacteria like a magnet—and these bacteria proved resistant to even the most powerful antibiotics. By 1988, researchers had come to realize that severe infection was almost inevitable if a patient used the Jarvik-7 for a long period of time. As a result, experts recommended that the device be used for no longer than thirty days.
Questions of values and morality also became part of the controversy surrounding the artificial heart. Some people thought that it was wrong to offer patients a device that would extend their lives but leave them burdened with hardship and pain. At times DeVries claimed that it was worth the price for patients to be able to live another year; at other times, he admitted that if he thought a patient would have to spend the rest of his or her life in a hospital, he would think twice before performing the implant. There were also questions about “informed consent”—the patient’s understanding that a medical procedure has a high risk of failure and may leave the patient in misery even if it succeeds. Getting truly informed consent from a dying patient is tricky, because, understandably, the patient is probably willing to try anything. The Jarvik-7 raised several questions in this regard: Was the ordeal worth the risk? Was the patient’s suffering justifiable? Who should make the decision for or against the surgery: the patient, the researchers, or a government agency? There was also the issue of cost. Should money be poured into expensive, high-technology devices such as the Jarvik heart, or should it be reserved for programs to help prevent heart disease in the first place? Expenses for each of DeVries’s patients had amounted to about one million dollars. Humana’s and DeVries’s earnings were criticized in particular. Once the first one hundred free Jarvik-7 implantations had been performed, Humana Hospital could expect to make large amounts of money on the surgery. By that time, Humana would have so much expertise in the field that, though the surgical techniques could not be patented, it was expected to have a practical monopoly. DeVries himself owned thousands of shares of stock in Symbion. Many people wondered whether this was ethical. Consequences Given all the controversies, in December of 1985 a panel of experts recommended that the FDA allow the experiment to continue, but only with careful monitoring. Meanwhile, cardiac transplantation was becoming easier and more common. By the end of 1985, almost twenty-six hundred patients in various countries had received human heart transplants, and 76 percent of these patients had survived for at least four years. When the demand for donor hearts exceeded the supply, physicians turned to the Jarvik device and other artificial hearts to help see patients through the waiting period. Experience with the Jarvik-7 made the world keenly aware of how far medical science still is from making the implantable permanent mechanical heart a reality. Nevertheless, the device was a breakthrough in the relatively new field of artificial organs. Since then, other artificial body parts have included heart valves, blood vessels, and inner ears that help restore hearing to the deaf. William C. DeVries William Castle DeVries did not invent the artificial heart himself; however, he did develop the procedure to implant it. The first attempt took him seven and a half hours, and he needed fourteen assistants. A success, the surgery made DeVries one of the most talked-about doctors in the world. DeVries was born in Brooklyn, New York, in 1943. His father, a Navy physician, was killed in action a few months later, and his mother, a nurse, moved with her son to Utah. As a child DeVries showed both considerable mechanical aptitude and athletic prowess. He won an athletic scholarship to the University of Utah, graduating with honors in 1966.
He entered the state medical school and there met Willem Kolff, a pioneer in designing and testing artificial organs. Under Kolff’s guidance, DeVries began performing experimental surgeries on animals to test prototype mechanical hearts. He finished medical school in 1970 and from 1971 until 1979 was an intern and then a resident in surgery at the Duke University Medical Center in North Carolina. DeVries returned to the University of Utah as an assistant professor of cardiovascular and thoracic surgery. In the meantime, Robert K. Jarvik had devised the Jarvik-7 artificial heart. DeVries experimented, implanting it in animals and cadavers until, following approval from the Food and Drug Administration, Barney Clark agreed to be the first test patient. He died 115 days after the surgery, having never left the hospital. Although controversy arose over the ethics and cost of the procedure, more artificial heart implantations followed, many by DeVries. Long administrative delays getting patients approved for surgery at Utah frustrated DeVries, so he moved to Humana Hospital-Audubon in Louisville, Kentucky, in 1984 and then took a professorship at the University of Louisville. In 1988 he left experimentation for a traditional clinical practice. The FDA withdrew its approval for the Jarvik-7 in 1990. In 1999 DeVries retired from practice, but not from medicine. The next year he joined the Army Reserve and began teaching surgery at the Walter Reed Army Medical Center.

06 December 2008

Artificial blood

The invention: A perfluorocarbon emulsion that serves as a blood plasma substitute in the treatment of human patients. The person behind the invention: Ryoichi Naito (1906-1982), a Japanese physician. Blood Substitutes The use of blood and blood products in humans is a very complicated issue. Some substances present in blood serve no specific purpose and can be dangerous or deadly, especially when blood or blood products are taken from one person and given to another. This fact, combined with the necessity for long-term blood storage, a shortage of donors, and some patients’ refusal to use blood for religious reasons, brought about an intense search for a universal bloodlike substance. The life-sustaining properties of blood (for example, oxygen transport) can be entirely replaced by a synthetic mixture of known chemicals. Fluorocarbons are compounds that consist of molecules containing only fluorine and carbon atoms. These compounds are interesting to physiologists because they are chemically and pharmacologically inert and because they dissolve oxygen and other gases. Studies of fluorocarbons as blood substitutes began in 1966, when it was shown that a mouse breathing a fluorocarbon liquid treated with oxygen could survive. Subsequent research involved the use of fluorocarbons to play the role of red blood cells in transporting oxygen. Encouraging results led to the total replacement of blood in a rat, and the success of this experiment led in turn to trials in other mammals, culminating in 1979 with the use of fluorocarbons in humans. Clinical Studies The chemical selected for the clinical studies was Fluosol-DA, produced by the Japanese Green Cross Corporation. Fluosol-DA consists of a 20 percent emulsion of two perfluorocarbons (perfluorodecalin and perfluorotripropylamine), emulsifiers, and salts that are included to give the chemical some of the properties of blood plasma. Fluosol-DA had been tested in monkeys, and it had shown a rapid reversible uptake and release of oxygen, a reasonably rapid excretion, no carcinogenicity or irreversible changes in the animals’ systems, and the recovery of blood components to normal ranges within three weeks of administration. The clinical studies were divided into three phases. The first phase consisted of the administration of Fluosol-DA to normal human volunteers. Twelve healthy volunteers were administered the chemical, and the emulsion’s effects on blood pressure and composition and on heart, liver, and kidney functions were monitored. No adverse effects were found in any case. The first phase ended in March, 1979, and based on its positive results, the second and third phases were begun in April, 1979. Twenty-four Japanese medical institutions were involved in the next two phases. The reasons for the use of Fluosol-DA instead of blood in the patients involved were various, and they included refusal of transfusion for religious reasons, lack of compatible blood, “bloodless” surgery for protection from risk of hepatitis, and treatment of carbon monoxide intoxication. Among the effects observed in the patients were the following: a small increase in blood pressure, with no corresponding effects on respiration and body temperature; an increase in blood oxygen content; bodily elimination of half the chemical within six to nineteen hours, depending on the initial dose administered; no change in red-cell count or hemoglobin content of blood; no change in whole-blood coagulation time; and no significant blood-chemistry changes.
These results made the clinical trials a success and opened the door for other, more extensive ones. Impact Perfluorocarbon emulsions were initially proposed as oxygen-carrying resuscitation fluids, or blood substitutes, and the results of the pioneering studies show their success as such. Their success in this area, however, led to advanced studies and expanded use of these compounds in many areas of clinical medicine and biomedical research. Perfluorocarbon emulsions are useful in cancer therapy, because they increase the oxygenation of tumor cells and therefore sensitize them to the effects of radiation or chemotherapy. Perfluorocarbons can also be used as “contrasting agents” to facilitate magnetic resonance imaging studies of various tissues; for example, the uptake of particles of the emulsion by the cells of malignant tissues makes it possible to locate tumors. Perfluorocarbons also have a high nitrogen solubility and therefore can be used to alleviate the potentially fatal effects of decompression sickness by “mopping up” nitrogen gas bubbles from the circulatory system. They can also be used to preserve isolated organs and amputated extremities until they can be reimplanted or reattached. In addition, the emulsions are used in cell cultures to regulate gas supply and to improve cell growth and productivity. The biomedical applications of perfluorocarbon emulsions are multidisciplinary, involving areas as diverse as tissue imaging, organ preservation, cancer therapy, and cell culture. The successful clinical trials opened the door for new applications of these compounds, which rank among the most versatile compounds exploited by humankind.
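The elimination figures reported above (half of the administered dose cleared within six to nineteen hours, depending on the dose) can be turned into a rough timetable if one assumes simple first-order elimination; the exponential model in this short Python sketch is an assumption made for illustration, not a claim about Fluosol-DA’s actual pharmacokinetics.

# Fraction of a Fluosol-DA dose remaining over time, assuming first-order
# elimination with the half-life range reported in the clinical trials.

def fraction_remaining(hours_elapsed: float, half_life_hours: float) -> float:
    """Fraction of the administered dose still circulating after the given time."""
    return 0.5 ** (hours_elapsed / half_life_hours)

for half_life in (6.0, 19.0):        # reported range of elimination half-lives, in hours
    for hours in (6, 24, 48):        # hours after administration
        pct = 100.0 * fraction_remaining(hours, half_life)
        print(f"half-life {half_life:4.0f} h, after {hours:2d} h: {pct:5.1f}% remaining")

Under that assumption, a dose at the fast end of the range is almost entirely gone within two days, while one at the slow end still leaves a measurable fraction in circulation.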

03 December 2008

Aqualung

The invention: A device that allows divers to descend hundreds of meters below the surface of the ocean by enabling them to carry the oxygen they breathe with them. The people behind the invention: Jacques-Yves Cousteau (1910-1997), a French navy officer, undersea explorer, inventor, and author. Émile Gagnan, a French engineer who invented an automatic air-regulating device. The Limitations of Early Diving Undersea dives have been made since ancient times for the purposes of spying, recovering lost treasures from wrecks, and obtaining natural treasures (such as pearls). Many attempts have been made since then to prolong the amount of time divers could remain underwater. The first device, described by the Greek philosopher Aristotle in 335 b.c.e., was probably the ancestor of the modern snorkel. It was a bent reed placed in the mouth, with one end above the water. In addition to depth limitations set by the length of the reed, pressure considerations also presented a problem. The pressure on a diver’s body increases by about one-tenth of a kilogram per square centimeter (roughly 1.4 pounds per square inch) for every meter ventured below the surface. After descending about 0.9 meter, inhaling surface air through a snorkel becomes difficult because the human chest muscles are no longer strong enough to inflate the chest. In order to breathe at or below this depth, a diver must breathe air that has been pressurized; moreover, that pressure must be able to vary as the diver descends or ascends. Few changes were possible in the technology of diving until air compressors were invented during the early nineteenth century. Fresh, pressurized air could then be supplied to divers. At first, the divers who used this method had to wear diving suits, complete with fishbowl-like helmets. This “tethered” diving made divers relatively immobile but allowed them to search for sunken treasure or do other complex jobs at great depths. The Development of Scuba Diving The invention of scuba gear gave divers more freedom to move about and made them less dependent on heavy equipment. (“Scuba” stands for self-contained underwater breathing apparatus.) Its development occurred in several stages. In 1880, Henry Fleuss of England developed an outfit that used a belt containing pure oxygen. Belt and diver were connected, and the diver breathed the oxygen over and over. A version of this system was used by the U.S. Navy in World War II spying efforts. Nevertheless, it had serious drawbacks: Pure oxygen was toxic to divers at depths greater than 9 meters, and divers could carry only enough oxygen for relatively short dives. It did have an advantage for spies, namely, that the oxygen—breathed over and over in a closed system—did not reach the surface in the form of telltale bubbles. The next stage of scuba development occurred with the design of metal tanks that were able to hold highly compressed air. This enabled divers to use air rather than the potentially toxic pure oxygen. More important, being hooked up to a greater supply of air meant that divers could stay under water longer. Initially, the main problem with the system was that the air flowed continuously through a mask that covered the diver’s entire face. This process wasted air, and the scuba divers expelled a continual stream of air bubbles that made spying difficult.
The solution, according to Axel Madsen’s Cousteau (1986), was “a valve that would allow inhaling and exhaling through the same mouthpiece.” Jacques-Yves Cousteau’s father was an executive for Air Liquide—France’s main producer of industrial gases. He was able to direct Cousteau to Émile Gagnan, an engineer at the company’s Paris laboratory who had been developing an automatic gas shutoff valve for Air Liquide. This valve became the Cousteau-Gagnan regulator, a breathing device that fed air to the diver at just the right pressure whenever he or she inhaled. With this valve—and funding from Air Liquide—Cousteau and Gagnan set out to design what would become the Aqualung. The first Aqualungs could be used at depths of up to 68.5 meters. During testing, however, the dangers of Aqualung diving became apparent. For example, unless divers ascended and descended in slow stages, it was likely that they would get “the bends” (decompression sickness), the feared disease of earlier, tethered deep-sea divers. Another problem was that, below 42.6 meters, divers encountered nitrogen narcosis. (This can lead to impaired judgment that may cause fatal actions, including removing a mouthpiece or developing an overpowering desire to continue diving downward, to dangerous depths.) Cousteau believed that the Aqualung had tremendous military potential. During World War II, he traveled to London soon after the Normandy invasion, hoping to persuade the Allied Powers of its usefulness. He was not successful. So Cousteau returned to Paris and convinced France’s new government to use Aqualungs to locate and neutralize underwater mines laid along the French coast by the German navy. Cousteau was commissioned to combine minesweeping with the study of the physiology of scuba diving. Further research revealed that the use of helium-oxygen mixtures increased to 76 meters the depth to which a scuba diver could go without suffering nitrogen narcosis. Impact One way to describe the effects of the development of the Aqualung is to summarize Cousteau’s continued efforts to the present. In 1946, he and Philippe Tailliez established the Undersea Research Group of Toulon to study diving techniques and various aspects of life in the oceans. They studied marine life in the Red Sea from 1951 to 1952. From 1952 to 1956, they engaged in an expedition supported by the National Geographic Society. By that time, the Research Group had developed many techniques that enabled them to identify life-forms and conditions at great depths. Throughout their undersea studies, Cousteau and his coworkers continued to develop better techniques for scuba diving, for recording observations by means of still and television photography, and for collecting plant and animal specimens. In addition, Cousteau participated (with Swiss physicist Auguste Piccard) in the construction of the deep-submergence research vehicle, or bathyscaphe. In the 1960’s, he directed a program called Conshelf, which tested a human’s ability to live in a specially built underwater habitat. He also wrote and produced films on underwater exploration that attracted, entertained, and educated millions of people. Cousteau has won numerous medals and scientific distinctions.
These include the Gold Medal of the National Geographic Society (1963), the United Nations International Environment Prize (1977), membership in the American and Indian academies of science (1968 and 1978, respectively), and honorary doctor of science degrees from the University of California, Berkeley (1970), Harvard University (1979), and Rensselaer Polytechnic Institute (1979).
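The depth figures that recur in this entry follow from the way ambient pressure grows with depth, which is what the Cousteau-Gagnan regulator had to match on every breath. The short Python sketch below applies the standard hydrostatic relation; the seawater density and gravitational constant are textbook values, not figures taken from this entry.

# Ambient pressure a demand regulator must supply air against, from the
# hydrostatic relation P = P_atm + rho * g * depth.

RHO_SEAWATER = 1025.0      # kg per cubic meter, approximate seawater density
G = 9.81                   # meters per second squared
P_ATMOSPHERE = 101_325.0   # pascals at the surface

def ambient_pressure_pa(depth_m: float) -> float:
    """Absolute pressure, in pascals, at the given depth of seawater."""
    return P_ATMOSPHERE + RHO_SEAWATER * G * depth_m

# Depths mentioned in the entry: snorkel limit, pure-oxygen limit,
# nitrogen-narcosis onset, and the first Aqualung's rated depth.
for depth in (0.9, 9.0, 42.6, 68.5):
    pressure = ambient_pressure_pa(depth)
    print(f"{depth:5.1f} m: {pressure / 1000:7.1f} kPa "
          f"({pressure / P_ATMOSPHERE:4.2f} times surface pressure)")

At the Aqualung’s 68.5-meter rated depth, for example, the valve must feed air at nearly eight times surface pressure, which is why the pressure delivered to the diver has to vary continuously with depth.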

30 November 2008

Apple II computer


The invention:
The first commercially available, preassembled
personal computer, the Apple II helped move computers out of
the workplace and into the home.

The people behind the invention:

Stephen Wozniak (1950- ), cofounder of Apple and designer
of the Apple II computer
Steven Jobs (1955-2011), cofounder of Apple
Regis McKenna (1939- ), owner of the Silicon Valley public
relations and advertising company that handled the Apple
account
Chris Espinosa (1961- ), the high school student who wrote
the BASIC program shipped with the Apple II
Randy Wigginton (1960- ), a high school student and Apple
software programmer

27 November 2008

Antibacterial drugs

Mechanisms of genetic resistance to antimicrobial agents: Bacteria have developed, or will develop, genetic resistance to all known antimicrobial agents now on the market. The five main mechanisms that bacteria use to resist antibacterial drugs are: (a) altering the site of action (an enzyme, the ribosome, or a cell-wall precursor), for example by acquiring a plasmid or transposon that codes for a resistant dihydrofolate reductase, which confers trimethoprim resistance; (b) bypassing the inhibited step; (c) reducing the intracellular concentration of the antimicrobial agent, either by reducing membrane permeability, as Pseudomonas aeruginosa does, or by actively pumping the agent out of the cell; (d) inactivating the drug, for example by producing beta-lactamase, which destroys the penicillin beta-lactam ring; and (e) overproducing the target enzyme. The invention: Sulfonamides and other drugs that have proved effective in combating many previously untreatable bacterial diseases. The people behind the invention: Gerhard Domagk (1895-1964), a German physician who was awarded the 1939 Nobel Prize in Physiology or Medicine Paul Ehrlich (1854-1915), a German chemist and bacteriologist who was the cowinner of the 1908 Nobel Prize in Physiology or Medicine. The Search for Magic Bullets Although quinine had been used to treat malaria long before the twentieth century, Paul Ehrlich, who discovered a large number of useful drugs, is usually considered the father of modern chemotherapy. Ehrlich was familiar with the technique of using dyes to stain microorganisms in order to make them visible under a microscope, and he suspected that some of these dyes might be used to poison the microorganisms responsible for certain diseases without hurting the patient. Ehrlich thus began to search for dyes that could act as “magic bullets” that would destroy microorganisms and cure diseases. From 1906 to 1910, Ehrlich tested numerous compounds that had been developed by the German dye industry. He eventually found that a number of complex trypan dyes would inhibit the protozoans that caused African sleeping sickness. Ehrlich and his coworkers also synthesized hundreds of organic compounds that contained arsenic. In 1910, he found that one of these compounds, salvarsan, was useful in curing syphilis, a sexually transmitted disease caused by the bacterium Treponema. This was an important discovery, because syphilis killed thousands of people each year. Salvarsan, however, was often toxic to patients, because it had to be taken in large doses for as long as two years to effect a cure. Ehrlich thus searched for and found a less toxic arsenic compound, neosalvarsan, which replaced salvarsan in 1912. In 1915, tartar emetic (a compound containing the metal antimony) was found to be useful in treating kala-azar, which was caused by a protozoan. Kala-azar affected millions of people in Africa, India, and Asia, causing much suffering and many deaths each year. Two years later, it was discovered that injection of tartar emetic into the blood of persons suffering from bilharziasis killed the flatworms infecting the bladder, liver, and spleen. In 1920, suramin, a colorless compound developed from trypan red, was introduced to treat African sleeping sickness. It was much less toxic to the patient than any of the drugs Ehrlich had developed, and a single dose would give protection for more than a month.
From the dye methylene blue, chemists made mepacrine, a drug that was effective against the protozoans that cause malaria. This chemical was introduced in 1933 and used during World War II; its principal drawback was that it could cause a patient’s skin to become yellow. Well Worth the Effort Gerhard Domagk had been trained in medicine, but he turned to research in an attempt to discover chemicals that would inhibit or kill microorganisms. In 1927, he became director of experimental pathology and bacteriology at the Elberfeld laboratories of the German chemical firm I. G. Farbenindustrie. Ehrlich’s discovery that trypan dyes selectively poisoned microorganisms suggested to Domagk that he look for antimicrobials in a new group of chemicals known as azo dyes. A number of these dyes were synthesized from sulfonamides and purified by Fritz Mietzsch and Josef Klarer. Domagk found that many of these dyes protected mice infected with the bacterium Streptococcus pyogenes. In 1932, he discovered that one of these dyes was much more effective than any tested previously. This red azo dye containing a sulfonamide was named prontosil rubrum. From 1932 to 1935, Domagk began a rigorous testing program to determine the effectiveness and dangers of prontosil use at different doses in animals. Since all chemicals injected into animals or humans are potentially dangerous, Domagk determined the doses that harmed or killed. In addition, he worked out the lowest doses that would eliminate the pathogen. The firm supplied samples of the drug to physicians to carry out clinical trials on humans. (Animal experimentation can give only an indication of which chemicals might be useful in humans and which doses are required.) Domagk thus learned which doses were effective and safe. This knowledge saved his daughter’s life. One day while knitting, Domagk’s daughter punctured her finger with a needle and was infected with virulent bacteria, which quickly multiplied and spread from the wound into neighboring tissues. In an attempt to alleviate the swelling, the infected area was lanced and allowed to drain, but this did not stop the infection from spreading. The child became critically ill with developing septicemia, or blood poisoning. In those days, more than 75 percent of those who acquired blood infections died. Domagk realized that the chances for his daughter’s survival were poor. In desperation, he obtained some of the powdered prontosil that had worked so well on infected animals. He extrapolated from his animal experiments how much to give his daughter so that the bacteria would be killed but his daughter would not be poisoned. Within hours of the first treatment, her fever dropped, and she recovered completely after repeated doses of prontosil. Impact Directly and indirectly, Ehrlich’s and Domagk’s work served to usher in a new medical age. Prior to the discovery that prontosil could be used to treat bacterial infection and the subsequent development of a series of sulfonamides, or “sulfa drugs,” there was no chemical defense against this type of disease; as a result, illnesses such as streptococcal infection, gonorrhea, and pneumonia held terrors of which they have largely been shorn. A small injury could easily lead to death. By following the clues presented by the synthetic sulfa drugs and how they worked to destroy bacteria, other scientists were able to develop an even more powerful type of drug, the antibiotic.
When the American bacteriologist René Dubos discovered that natural organisms could also be used to fight bacteria, interest was renewed in an earlier discovery by the Scottish bacteriologist Sir Alexander Fleming: the development of penicillin. Antibiotics such as penicillin and streptomycin have become some of the most important tools in fighting disease. Antibiotics have replaced sulfa drugs for most uses, in part because they cause fewer side effects, but sulfa drugs are still used for a handful of purposes. Together, sulfonamides and antibiotics have offered the possibility of a cure to millions of people who previously would have had little chance of survival.

23 November 2008

Amniocentesis


The invention:

A technique for removing amniotic fluid from
pregnant women, amniocentesis became a life-saving tool for diagnosing
fetal maturity, health, and genetic defects.

The people behind the invention:

Douglas Bevis, an English physician
Aubrey Milunsky (1936- ), an American pediatrician