
09 June 2009

Differential analyzer

The invention: An electromechanical device capable of solving differential equations.

The people behind the invention:
Vannevar Bush (1890-1974), an American electrical engineer
Harold L. Hazen (1901-1980), an American electrical engineer

Electrical Engineering Problems Become More Complex

After World War I, electrical engineers encountered increasingly difficult differential equations as they worked on vacuum-tube circuitry, telephone lines, and, particularly, long-distance power transmission lines. These calculations were lengthy and tedious. Two of the many steps required to solve them were to draw a graph manually and then to determine the area under the curve (essentially, accomplishing the mathematical procedure called integration). In 1925, Vannevar Bush, a faculty member in the Electrical Engineering Department at the Massachusetts Institute of Technology (MIT), suggested that one of his graduate students devise a machine to determine the area under the curve. They first considered a mechanical device but later decided to seek an electrical solution. Realizing that a watt-hour meter such as that used to measure electricity in most homes was very similar to the device they needed, Bush and his student refined the meter and linked it to a pen that automatically recorded the curve. They called this machine the Product Integraph, and MIT students began using it immediately. In 1927, Harold L. Hazen, another MIT faculty member, modified it in order to solve the more complex second-order differential equations (it originally solved only first-order equations).

The Differential Analyzer

The original Product Integraph had solved problems electrically, and Hazen’s modification had added a mechanical integrator. Although the revised Product Integraph was useful in solving the types of problems mentioned above, Bush thought the machine could be improved by making it an entirely mechanical integrator, rather than a hybrid electrical and mechanical device. In late 1928, Bush received funding from MIT to develop an entirely mechanical integrator, and he completed the resulting Differential Analyzer in 1930. This machine consisted of numerous interconnected shafts on a long, tablelike framework, with drawing boards flanking one side and six wheel-and-disk integrators on the other. Some of the drawing boards were configured to allow an operator to trace a curve with a pen that was linked to the Analyzer, thus providing input to the machine. The other drawing boards were configured to receive output from the Analyzer via a pen that drew a curve on paper fastened to the drawing board.

The wheel-and-disk integrator, which Hazen had first used in the revised Product Integraph, was the key to the operation of the Differential Analyzer. The rotational speed of the horizontal disk was the input to the integrator, and it represented one of the variables in the equation. The smaller wheel rolled on the top surface of the disk, and its speed, which was different from that of the disk, represented the integrator’s output. The distance from the wheel to the center of the disk could be changed to accommodate the equation being solved, and the resulting geometry caused the two shafts to turn so that the output was the integral of the input. The integrators were linked mechanically to other devices that could add, subtract, multiply, and divide. Thus, the Differential Analyzer could solve complex equations involving many different mathematical operations.
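The wheel-and-disk arrangement described above is easy to mimic numerically. The sketch below is an idealized analogy rather than a description of the actual MIT hardware; the wheel radius, step count, and test function are arbitrary illustrative choices. The disk's rotation stands for the independent variable, the carriage position stands for the integrand, and the accumulated wheel rotation tracks the integral.

```python
# Idealized numerical analogy of a wheel-and-disk integrator (not the actual MIT design).
# Disk rotation stands for the independent variable x; the carriage position r(x) stands
# for the integrand y(x); the accumulated wheel rotation is proportional to the integral.

def wheel_and_disk_integral(y, x_start, x_end, steps=10000, wheel_radius=1.0):
    """Accumulate wheel rotation while the disk turns through x_start..x_end."""
    dx = (x_end - x_start) / steps
    wheel_angle = 0.0
    x = x_start
    for _ in range(steps):
        r = y(x)                                    # carriage sets the wheel's distance from the disk center
        wheel_angle += (r / wheel_radius) * dx      # the wheel turns faster when it sits farther out
        x += dx
    return wheel_angle * wheel_radius               # scale back to the value of the integral

if __name__ == "__main__":
    # Integral of y = x from 0 to 2 should come out close to 2.0
    print(wheel_and_disk_integral(lambda x: x, 0.0, 2.0))
```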
Because all the linkages and calculating devices were mechanical, the Differential Analyzer actually acted out each calculation. Computers of this type, which create an analogy to the physical world, are called analog computers. The Differential Analyzer fulfilled Bush’s expectations, and students and researchers found it very useful. Although each different problem required Bush’s team to set up a new series of mechanical linkages, the researchers using the calculations viewed this as a minor inconvenience. Students at MIT used the Differential Analyzer in research for doctoral dissertations, master’s theses, and bachelor’s theses. Other researchers worked on a wide range of problems with the Differential Analyzer, mostly in electrical engineering, but also in atomic physics, astrophysics, and seismology. An English researcher, Douglas Hartree, visited Bush’s laboratory in 1933 to learn about the Differential Analyzer and to use it in his own work on the atomic field of mercury. When he returned to England, he built several analyzers based on his knowledge of MIT’s machine. The U.S. Army also built a copy in order to carry out the complex calculations required to create artillery firing tables (which specified the proper barrel angle to achieve the desired range). Other analyzers were built by industry and universities around the world.

Impact

As successful as the Differential Analyzer had been, Bush wanted to make another, better analyzer that would be more precise, more convenient to use, and more mathematically flexible. In 1932, Bush began seeking money for his new machine, but because of the Depression it was not until 1936 that he received adequate funding for the Rockefeller Analyzer, as it came to be known. Bush left MIT in 1938, but work on the Rockefeller Analyzer continued. It was first demonstrated in 1941, and by 1942, it was being used in the war effort to calculate firing tables and design radar antenna profiles. At the end of the war, it was the most important computer in existence. All the analyzers, which were mechanical computers, faced serious limitations in speed because of the momentum of the machinery, and in precision because of slippage and wear. The digital computers that were being developed after World War II (even at MIT) were faster, more precise, and capable of executing more powerful operations because they were electrical computers. As a result, during the 1950’s, they eclipsed differential analyzers such as those built by Bush. Descendants of the Differential Analyzer remained in use as late as the 1990’s, but they played only a minor role.

Diesel locomotive

The invention: An internal combustion engine in which ignition is achieved by the use of high-temperature compressed air, rather than a spark plug.

The people behind the invention:
Rudolf Diesel (1858-1913), a German engineer and inventor
Sir Dugald Clerk (1854-1932), a British engineer
Gottlieb Daimler (1834-1900), a German engineer
Henry Ford (1863-1947), an American automobile magnate
Nikolaus Otto (1832-1891), a German engineer and Daimler’s teacher

A Beginning in Winterthur

By the beginning of the twentieth century, new means of providing society with power were needed. The steam engines that were used to run factories and railways were no longer sufficient, since they were too heavy and inefficient. At that time, Rudolf Diesel, a German mechanical engineer, invented a new engine. His diesel engine was much more efficient than previous power sources. It also appeared that it would be able to run on a wide variety of fuels, ranging from oil to coal dust. Diesel first showed that his engine was practical by building a diesel-driven locomotive that was tested in 1912. In the 1912 test runs, the first diesel-powered locomotive was operated on the track of the Winterthur-Romanshorn rail line in Switzerland. The locomotive was built by a German company, Gesellschaft für Thermo-Lokomotiven, which was owned by Diesel and his colleagues. Immediately after the test runs at Winterthur proved its efficiency, the locomotive—which had been designed to pull express trains on Germany’s Berlin-Magdeburg rail line—was moved to Berlin and put into service. It worked so well that many additional diesel locomotives were built. In time, diesel engines were also widely used to power many other machines, including those that ran factories, motor vehicles, and ships.

Diesels, Diesels Everywhere

In the 1890’s, the best engines available were steam engines that were able to convert only 5 to 10 percent of input heat energy to useful work. The burgeoning industrial society and a widespread network of railroads needed better, more efficient engines to help businesses make profits and to speed up the rate of transportation available for moving both goods and people, since the maximum speed was only about 48 kilometers per hour. In 1894, Rudolf Diesel, then thirty-five years old, appeared in Augsburg, Germany, with a new engine that he believed would demonstrate great efficiency. The diesel engine demonstrated at Augsburg ran for only a short time. It was, however, more efficient than other existing engines. In addition, Diesel predicted that his engines would move trains faster than could be done by existing engines and that they would run on a wide variety of fuels. Experimentation proved the truth of his claims; even the first working motive diesel engine (the one used in the Winterthur test) was capable of pulling heavy freight and passenger trains at maximum speeds of up to 160 kilometers per hour. By 1912, Diesel, a millionaire, saw the wide use of diesel locomotives in Europe and the United States and the conversion of hundreds of ships to diesel power. Rudolf Diesel’s role in the story ends here, a result of his mysterious death in 1913—believed to be a suicide by the authorities—while crossing the English Channel on the steamer Dresden. Others involved in the continuing saga of diesel engines were the Britisher Sir Dugald Clerk, who improved diesel design, and the American Adolphus Busch (of beer-brewing fame), who bought the North American rights to the diesel engine.
The diesel engine is related to automobile engines invented by Nikolaus Otto and Gottlieb Daimler. The standard Otto-Daimler (or Otto) engine was first widely commercialized by American auto magnate Henry Ford. The diesel and Otto engines are internal-combustion engines. This means that they do work when a fuel is burned, causing a piston to move in a tight-fitting cylinder. In diesel engines, unlike Otto engines, the fuel is not ignited by a spark from a spark plug. Instead, ignition is accomplished by the use of high-temperature compressed air.

In common “two-stroke” diesel engines, pioneered by Sir Dugald Clerk, a starter causes the engine to make its first stroke. This draws in air and compresses the air sufficiently to raise its temperature to 900 to 1,000 degrees Fahrenheit. At this point, fuel (usually oil) is sprayed into the cylinder, ignites, and causes the piston to make its second, power-producing stroke. At the end of that stroke, more air enters as waste gases leave the cylinder; air compression occurs again; and the power-producing stroke repeats itself. This process then occurs continuously, without restarting.

Impact

Proof of the functionality of the first diesel locomotive set the stage for the use of diesel engines to power many machines. Although Rudolf Diesel did not live to see it, diesel engines were widely used within fifteen years after his death. At first, their main applications were in locomotives and ships. Then, because diesel engines are more efficient and more powerful than Otto engines, they were modified for use in cars, trucks, and buses. At present, motor vehicle diesel engines are most often used in buses and long-haul trucks. In contrast, diesel engines are not as popular in automobiles as Otto engines, although European auto makers make much wider use of diesel engines than American automakers do. Many enthusiasts, however, view diesel automobiles as the wave of the future. This optimism is based on the durability of the engine, its great power, and the wide range and economical nature of the fuels that can be used to run it. The drawbacks of diesels include the unpleasant odor and high pollutant content of their emissions. Modern diesel engines are widely used in farm and earth-moving equipment, including balers, threshers, harvesters, bulldozers, rock crushers, and road graders. Construction of the Alaskan oil pipeline relied heavily on equipment driven by diesel engines. Diesel engines are also commonly used in sawmills, breweries, coal mines, and electric power plants. Diesel’s brainchild has become a widely used power source, just as he predicted. It is likely that the use of diesel engines will continue and will expand, as the demands of energy conservation require more efficient engines and as moves toward fuel diversification require engines that can be used with various fuels.
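The compression-ignition step described above can be checked against the ideal adiabatic relation for air. This is a simplified, idealized sketch; the compression ratio of 12 is an assumed illustrative value, not a figure from the text.

```latex
T_2 = T_1\left(\frac{V_1}{V_2}\right)^{\gamma-1},\qquad \gamma\approx 1.4\ \text{for air};
\qquad
T_2 \approx 300\,\mathrm{K}\times 12^{0.4}\approx 810\,\mathrm{K}\approx 1000^{\circ}\mathrm{F}.
```

That is consistent with the 900 to 1,000 degree Fahrenheit range cited above; practical diesels use higher compression ratios but also lose heat to the cylinder walls, so the working result is of the same order.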

06 June 2009

Cyclotron

The invention: The first successful magnetic resonance accelerator for protons, the cyclotron gave rise to the modern era of particle accelerators, which are used by physicists to study the structure of atoms.

The people behind the invention:
Ernest Orlando Lawrence (1901-1958), an American nuclear physicist who was awarded the 1939 Nobel Prize in Physics
M. Stanley Livingston (1905-1986), an American nuclear physicist
Niels Edlefsen (1893-1971), an American physicist
David Sloan (1905- ), an American physicist and electrical engineer

The Beginning of an Era

The invention of the cyclotron by Ernest Orlando Lawrence marks the beginning of the modern era of high-energy physics. Although the energies of newer accelerators have increased steadily, the principles incorporated in the cyclotron have been fundamental to succeeding generations of accelerators, many of which were also developed in Lawrence’s laboratory. The care and support for such machines have also given rise to “big science”: the massing of scientists, money, and machines in support of experiments to discover the nature of the atom and its constituents. At the University of California, Lawrence took an interest in the new physics of the atomic nucleus, which had been developed by the British physicist Ernest Rutherford and his followers in England, and which was attracting more attention as the development of quantum mechanics seemed to offer solutions to problems that had long preoccupied physicists.

In order to explore the nucleus of the atom, however, suitable probes were required. An artificial means of accelerating ions to high energies was also needed. During the late 1920’s, various means of accelerating alpha particles, protons (hydrogen ions), and electrons had been tried, but none had been successful in causing a nuclear transformation when Lawrence entered the field. The high voltages required exceeded the resources available to physicists. It was believed that more than a million volts would be required to accelerate an ion to sufficient energies to penetrate even the lightest atomic nuclei. At such voltages, insulators broke down, releasing sparks across great distances. European researchers even attempted to harness lightning to accomplish the task, with fatal results.

Early in April, 1929, Lawrence discovered an article by a German electrical engineer that described a linear accelerator of ions that worked by passing an ion through two sets of electrodes, each of which carried the same voltage and increased the energy of the ions correspondingly. By spacing the electrodes appropriately and using an alternating electrical field, this “resonance acceleration” of ions could speed subatomic particles to many times the energy applied in each step, overcoming the problems presented when one tried to apply a single charge to an ion all at once. Unfortunately, the spacing of the electrodes would have to be increased as the ions were accelerated, since they would travel farther between each alternation of the phase of the accelerating charge, making an accelerator impractically long in those days of small-scale physics.

Fast-Moving Streams of Ions

Lawrence knew that a magnetic field would cause the ions to be deflected and form a curved path. If the electrodes were placed across the diameter of the circle formed by the ions’ path, they should spiral out as they were accelerated, staying in phase with the accelerating charge until they reached the periphery of the magnetic field.
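Why the spiraling ions stay in phase can be stated compactly. The relations below are the standard nonrelativistic cyclotron formulas for an ion of charge q and mass m moving at speed v in a magnetic field B; they are not equations quoted from the article.

```latex
r=\frac{mv}{qB},\qquad
T=\frac{2\pi r}{v}=\frac{2\pi m}{qB},\qquad
f=\frac{qB}{2\pi m},\qquad
E_{\mathrm{kin}}=\frac{(qBR)^{2}}{2m}.
```

Because the revolution period T does not depend on the ion's speed, a fixed oscillator frequency f stays synchronized with the ions as they gain energy and spiral outward, and the kinetic energy they reach at the outer radius R follows from setting r = R.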
This, it seemed to him, afforded a means of producing indefinitely high voltages without using high voltages by recycling the accelerated ions through the same electrodes. Many scientists doubted that such a method would be effective. No mechanism was known that would keep the circulating ions in sufficiently tight orbits to avoid collisions with the walls of the accelerating chamber. Others tried unsuccessfully to use resonance acceleration.

A graduate student, M. Stanley Livingston, continued Lawrence’s work. For his dissertation project, he used a brass cylinder 10 centimeters in diameter sealed with wax to hold a vacuum, a half-pillbox of copper mounted on an insulated stem to serve as the electrode, and a Hartley radio frequency oscillator producing 10 watts. The hydrogen molecular ions were produced by a thermionic cathode (mounted near the center of the apparatus) from hydrogen gas admitted through an aperture in the side of the cylinder after a vacuum had been produced by a pump. Once the ions formed, the oscillating electrical field drew them out and accelerated them as they passed through the cylinder. The accelerated ions spiraled out in a magnetic field produced by a 10-centimeter electromagnet to a collector. By November, 1930, Livingston had observed peaks in the collector current as he tuned the magnetic field through the value calculated to produce acceleration. Borrowing a stronger magnet and tuning his radio frequency oscillator appropriately, Livingston produced 80,000-electronvolt ions at his collector on January 2, 1931, thus demonstrating the principle of magnetic resonance acceleration.

Impact

Demonstration of the principle led to the construction of a succession of large cyclotrons, beginning with a 25-centimeter cyclotron developed in the spring and summer of 1931 that produced one-million-electronvolt protons. With the support of the Research Corporation, Lawrence secured a large electromagnet that had been developed for radio transmission and an unused laboratory to house it: the Radiation Laboratory. The 69-centimeter cyclotron built with the magnet was used to explore nuclear physics. It accelerated deuterons, ions of heavy hydrogen (deuterium) that contain, in addition to the proton, the neutron, which was discovered by Sir James Chadwick in 1932. The accelerated deuteron, which injected neutrons into target atoms, was used to produce a wide variety of artificial radioisotopes. Many of these, such as technetium and carbon 14, were discovered with the cyclotron and were later used in medicine. By 1939, Lawrence had built a 152-centimeter cyclotron for medical applications, including therapy with neutron beams. In that year, he won the Nobel Prize in Physics for the invention of the cyclotron and the production of radioisotopes.

During World War II, Lawrence and the members of his Radiation Laboratory developed electromagnetic separation of uranium ions to produce the uranium 235 required for the atomic bomb. After the war, the 467-centimeter cyclotron was completed as a synchrocyclotron, which modulated the frequency of the accelerating fields to compensate for the increasing mass of ions as they approached the speed of light. The principle of synchronous acceleration, invented by Lawrence’s associate, the American physicist Edwin Mattison McMillan, became fundamental to proton and electron synchrotrons. The cyclotron and the Radiation Laboratory were the center of accelerator physics throughout the 1930’s and well into the postwar era.
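As a rough numerical check of Livingston's 1931 demonstration described above, the short script below applies the cyclotron relations to hydrogen molecular ions. The magnetic field and collector radius are assumed illustrative values (the article quotes neither); they are chosen only to show that energies near 80,000 electronvolts plausibly emerge from a chamber about 10 centimeters across.

```python
import math

# Rough check of Livingston's 1931 result using the nonrelativistic cyclotron relations.
# The magnetic field B and collector radius R below are assumed, illustrative values;
# the article quotes neither. Ion: hydrogen molecular ion H2+.

q = 1.602e-19             # elementary charge, coulombs
m = 2 * 1.673e-27         # approximate mass of H2+ (two protons), kilograms
B = 1.3                   # assumed magnetic field, teslas
R = 0.045                 # assumed collector radius, meters (inside a 10 cm chamber)

f = q * B / (2 * math.pi * m)           # resonance frequency of the oscillator, hertz
E_joules = (q * B * R) ** 2 / (2 * m)   # kinetic energy at radius R
E_eV = E_joules / q

print(f"resonance frequency ~ {f / 1e6:.1f} MHz")
print(f"energy at collector ~ {E_eV / 1000:.0f} keV")   # comes out near 80 keV
```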
The invention of the cyclotron not only provided a new tool for probing the nucleus but also gave rise to new forms of organizing scientific work and to applications in nuclear medicine and nuclear chemistry. Cyclotrons were built in many laboratories in the United States, Europe, and Japan, and they became a standard tool of nuclear physics.

02 June 2009

Cyclamate

The invention: An artificial sweetener introduced to the American market in 1950 under the trade name Sucaryl.

The person behind the invention:
Michael Sveda (1912-1999), an American chemist

A Foolhardy Experiment

The first synthetic sugar substitute, saccharin, was developed in 1879. It became commercially available in 1907 but was banned for safety reasons in 1912. Sugar shortages during World War I (1914-1918) resulted in its reintroduction. Two other artificial sweeteners, Dulcin and P-4000, were introduced later but were banned in 1950 for causing cancer in laboratory animals.

In 1937, Michael Sveda was a young chemist working on his Ph.D. at the University of Illinois. A flood in the Ohio valley had ruined the local pipe-tobacco crop, and Sveda, a smoker, had been forced to purchase cigarettes. One day while in the laboratory, Sveda happened to brush some loose tobacco from his lips and noticed that his fingers tasted sweet. Having a curious, if rather foolhardy, nature, Sveda tasted the chemicals on his bench to find which one was responsible for the taste. The culprit was the forerunner of cyclohexylsulfamate, the material that came to be known as “cyclamate.” Later, on reviewing his career, Sveda explained the serendipitous discovery with the comment: “God looks after . . . fools, children, and chemists.”

Sveda joined E. I. Du Pont de Nemours and Company in 1939 and assigned the patent for cyclamate to his employer. In June of 1950, after a decade of testing on animals and humans, Abbott Laboratories announced that it was launching Sveda’s artificial sweetener under the trade name Sucaryl. Du Pont followed with its sweetener product, Cyclan. A Time magazine article in 1950 announced the new product and noted that Abbott had warned that because the product was a sodium salt, individuals with kidney problems should consult their doctors before adding it to their food.

Cyclamate had no calories, but it was thirty to forty times sweeter than sugar. Unlike saccharin, cyclamate left no unpleasant aftertaste. The additive was also found to improve the flavor of some foods, such as meat, and was used extensively to preserve various foods. By 1969, about 250 food products contained cyclamates, including cakes, puddings, canned fruit, ice cream, salad dressings, and its most important use, carbonated beverages. It was originally thought that cyclamates were harmless to the human body. In 1959, the chemical was added to the GRAS (generally recognized as safe) list. Materials on this list, such as sugar, salt, pepper, and vinegar, did not have to be rigorously tested before being added to food. In 1964, however, a report cited evidence that cyclamates and saccharin, taken together, were a health hazard. Its publication alarmed the scientific community. Numerous investigations followed.

Shooting Themselves in the Foot

Initially, the claims against cyclamate had been that it caused diarrhea or prevented drugs from doing their work in the body. By 1969, these claims had begun to include the threat of cancer. Ironically, the evidence that sealed the fate of the artificial sweetener was provided by Abbott itself. A private Long Island company had been hired by Abbott to conduct an extensive toxicity study to determine the effects of long-term exposure to the cyclamate-saccharin mixtures often found in commercial products. The team of scientists fed rats daily doses of the mixture to study the effect on reproduction, unborn fetuses, and fertility. In each case, the rats were declared to be normal.
When the rats were killed at the end of the study, however, those that had been exposed to the higher doses showed evidence of bladder tumors. Abbott shared the report with investigators from the National Cancer Institute and then with the U.S. Food and Drug Administration (FDA). The doses required to produce the tumors were equivalent to an individual drinking 350 bottles of diet cola a day, more than one hundred times the intake of even the heaviest consumers of cyclamate. A six-person panel of scientists met to review the data and urged the ban of all cyclamates from foodstuffs. In October, 1969, amid enormous media coverage, the federal government announced that cyclamates were to be withdrawn from the market by the beginning of 1970.

In the years following the ban, the controversy continued. Doubt was cast on the results of the independent study linking sweetener use to tumors in rats, because the study was designed not to evaluate cancer risks but to explain the effects of cyclamate use over many years. Bladder parasites, known as “nematodes,” found in the rats may have affected the outcome of the tests. In addition, an impurity found in some of the saccharin used in the study may have led to the problems observed. Extensive investigations such as the three-year project conducted at the National Cancer Research Center in Heidelberg, Germany, found no basis for the widespread ban. In 1972, however, rats fed high doses of saccharin alone were found to have developed bladder tumors. At that time, the sweetener was removed from the GRAS list. An outright ban was averted by the mandatory use of labels alerting consumers that certain products contained saccharin.

Impact

The introduction of cyclamate heralded the start of a new industry. For individuals who had to restrict their sugar intake for health reasons, or for those who wished to lose weight, there was now an alternative to giving up sweet food. The Pepsi-Cola company put a new diet drink formulation on the market almost as soon as the ban was instituted. In fact, it ran advertisements the day after the ban was announced showing the Diet Pepsi product boldly proclaiming “Sugar added—No Cyclamates.”

Sveda, the discoverer of cyclamates, was not impressed with the FDA’s decision on the sweetener and its handling of subsequent investigations. He accused the FDA of “a massive cover-up of elemental blunders” and claimed that the original ban was based on sugar politics and bad science. For the manufacturers of cyclamate, meanwhile, the problem lay with the wording of the Delaney amendment, the legislation that regulates new food additives. The amendment states that the manufacturer must prove that its product is safe, rather than the FDA having to prove that it is unsafe. The onus was on Abbott Laboratories to deflect concerns about the safety of the product, and it remained unable to do so.

Cruise missile

The invention: Aircraft weapons system that makes it possible to attack both land and sea targets with extreme accuracy without endangering the lives of the pilots.

The person behind the invention:
Rear Admiral Walter M. Locke (1930- ), U.S. Navy project manager

From the Buzz Bombs of World War II

During World War II, Germany developed and used two different types of missiles: ballistic missiles and cruise missiles. A ballistic missile is one that does not use aerodynamic lift in order to fly. It is fired into the air by powerful jet engines and reaches a high altitude; when its engines are out of fuel, it descends on its flight path toward its target. The German V-2 was the first ballistic missile. The United States and other countries subsequently developed a variety of highly sophisticated and accurate ballistic missiles.

The other missile used by Germany was a cruise missile called the V-1, which was also called the flying bomb or the buzz bomb. The V-1 used aerodynamic lift in order to fly, just as airplanes do. It flew relatively low and was slow; by the end of the war, the British, against whom it was used, had developed techniques for countering it, primarily by shooting it down. After World War II, both the United States and the Soviet Union carried on the Germans’ development of both ballistic and cruise missiles. The United States discontinued serious work on cruise missile technology during the 1950’s: The development of ballistic missiles of great destructive capability had been very successful. Ballistic missiles armed with nuclear warheads had become the basis for the U.S. strategy of attempting to deter enemy attacks with the threat of a massive missile counterattack. In addition, aircraft carriers provided an air-attack capability similar to that of cruise missiles. Finally, cruise missiles were believed to be too vulnerable to being shot down by enemy aircraft or surface-to-air missiles.

While ballistic missiles are excellent for attacking large, fixed targets, they are not suitable for attacking moving targets. They can be very accurately aimed, but since they are not very maneuverable during their final descent, they are limited in their ability to change course to hit a moving target, such as a ship. During the 1967 war, the Egyptians used a Soviet-built cruise missile to sink the Israeli ship Elath. The U.S. military, primarily the Navy and the Air Force, took note of the Egyptian success and within a few years initiated cruise missile development programs.

The Development of Cruise Missiles

The United States probably could have developed cruise missiles similar to 1990’s models as early as the 1960’s, but it would have required a huge effort. The goal was to develop missiles that could be launched from ships and planes using existing launching equipment, could fly long distances at low altitudes at fairly high speeds, and could reach their targets with a very high degree of accuracy. If the missiles flew too slowly, they would be fairly easy to shoot down, like the German V-1’s. If they flew at too high an altitude, they would be vulnerable to the same type of surface-based missiles that shot down Gary Powers, the pilot of the U.S. U-2 spy plane, in 1960. If they were inaccurate, they would be of little use. The early Soviet cruise missiles were designed to meet their performance goals without too much concern about how they would be launched. They were fairly large, and the ships that launched them required major modifications.
The U.S. goal of being able to launch using existing equipment, without making major modifications to the ships and planes that would launch them, played a major part in their torpedo-like shape: Sea-Launched Cruise Missiles (SLCMs) had to fit in the submarine’s torpedo tubes, and Air-Launched Cruise Missiles (ALCMs) were constrained to fit in rotary launchers. The size limitation also meant that small, efficient jet engines would be required that could fly the long distances required without needing too great a fuel load. Small, smart computers were needed to provide the required accuracy. The engine and computer technologies began to be available in the 1970’s, and they blossomed in the 1980’s.

The U.S. Navy initiated cruise missile development efforts in 1972; the Air Force followed in 1973. In 1977, the Joint Cruise Missile Project was established, with the Navy taking the lead. Rear Admiral Walter M. Locke was named project manager. The goal was to develop air-, sea-, and ground-launched cruise missiles. By coordinating activities, encouraging competition, and requiring the use of common components wherever possible, the cruise missile development program became a model for future weapon-system development efforts. The primary contractors included Boeing Aerospace Company, General Dynamics, and McDonnell Douglas. In 1978, SLCMs were first launched from submarines. Over the next few years, increasingly demanding tests were passed by several versions of cruise missiles. By the mid-1980’s, both antiship and antiland missiles were available. An antiland version could be guided to its target with extreme accuracy by comparing a map programmed into its computer to the picture taken by an on-board video camera; a simplified sketch of this map-matching idea appears at the end of this article. The typical cruise missile is between 18 and 21 feet long, about 21 inches in diameter, and has a wingspan of between 8 and 12 feet. Cruise missiles travel slightly below the speed of sound and have a range of around 1,350 miles (antiland) or 250 miles (antiship). Both conventionally armed and nuclear versions have been fielded.

Consequences

Cruise missiles have become an important part of the U.S. arsenal. They provide a means of attacking targets on land and water without having to put an aircraft pilot’s life in danger. Their value was demonstrated in 1991 during the Persian Gulf War. One of their uses was to “soften up” defenses prior to sending in aircraft, thus reducing the risk to pilots. Overall estimates are that about 85 percent of cruise missiles used in the Persian Gulf War arrived on target, which is an outstanding record. It is believed that their extreme accuracy also helped to minimize noncombatant casualties.
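The map-matching guidance mentioned above can be illustrated with a toy scene-correlation routine: slide a camera frame over a stored reference map and keep the position where the normalized correlation is highest. This is only a conceptual sketch, not the guidance algorithm of any actual missile, and every array size, name, and value in it is an arbitrary illustrative choice.

```python
import numpy as np

def match_offset(reference: np.ndarray, frame: np.ndarray):
    """Brute-force normalized correlation: find where `frame` best matches `reference`."""
    fh, fw = frame.shape
    f = (frame - frame.mean()) / (frame.std() + 1e-12)
    best_offset, best_score = (0, 0), -np.inf
    for row in range(reference.shape[0] - fh + 1):
        for col in range(reference.shape[1] - fw + 1):
            patch = reference[row:row + fh, col:col + fw]
            p = (patch - patch.mean()) / (patch.std() + 1e-12)
            score = float((f * p).mean())        # higher means a closer match
            if score > best_score:
                best_offset, best_score = (row, col), score
    return best_offset, best_score

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stored_map = rng.random((60, 60))            # stands in for the programmed terrain map
    true_offset = (22, 37)                       # where the camera actually is over the map
    camera_frame = stored_map[22:22 + 16, 37:37 + 16] + 0.05 * rng.random((16, 16))
    print(match_offset(stored_map, camera_frame))    # expected: close to (22, 37)
```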

31 May 2009

Coronary artery bypass surgery

The invention: The most widely used procedure of its type, coronary bypass surgery uses veins from the legs to improve circulation to the heart.

The people behind the invention:
Rene Favaloro (1923-2000), a heart surgeon
Donald B. Effler (1915- ), a member of the surgical team that performed the first coronary artery bypass operation
F. Mason Sones (1918- ), a physician who developed an improved technique of X-raying the heart’s arteries

Fighting Heart Disease

In the mid-1960’s, the leading cause of death in the United States was coronary artery disease, claiming nearly 250 deaths per 100,000 people. Because this number was so alarming, much research was being conducted on the heart. Most of the public’s attention was focused on heart transplants performed separately by the famous surgeons Christiaan Barnard and Michael DeBakey. Yet other, less dramatic procedures were being developed and studied.

A major problem with coronary artery disease, besides the threat of death, is chest pain, or angina. Individuals whose arteries are clogged with fat and cholesterol are frequently unable to deliver enough oxygen to their heart muscles. This may result in angina, which causes enough pain to limit their physical activities. Some of the heart research in the mid-1960’s was an attempt to find a surgical procedure that would eliminate angina in heart patients. The various surgical procedures had varying success rates. In the late 1950’s and early 1960’s, a team of physicians in Cleveland was studying surgical procedures that would eliminate angina. The team was composed of Rene Favaloro, Donald B. Effler, F. Mason Sones, and Laurence Groves. They were working on the concept, proposed by Dr. Arthur M. Vineberg from McGill University in Montreal, of implanting a healthy artery from the chest into the heart. This bypass procedure would provide the heart with another source of blood, resulting in enough oxygen to overcome the angina. Yet Vineberg’s surgery was often ineffective because it was hard to determine exactly where to implant the new artery.

New Techniques

In order to make Vineberg’s proposed operation successful, better diagnostic tools were needed. This was accomplished by the work of Sones. He developed a diagnostic procedure, called “arteriography,” whereby a catheter was inserted into an artery in the arm, which he ran all the way into the heart. He then injected a dye into the coronary arteries and photographed them with a high-speed motion-picture camera. This provided an image of the heart, which made it easy to determine where the blockages were in the coronary arteries.

Using this tool, the team tried several new techniques. First, the surgeons tried to ream out the deposits found in the narrow portion of the artery. They found, however, that this actually reduced blood flow. Second, they tried slitting the length of the blocked area of the artery and suturing a strip of tissue that would increase the diameter of the opening. This was also ineffective because it often resulted in turbulent blood flow. Finally, the team attempted to reroute the flow of blood around the blockage by suturing in other tissue, such as a portion of a vein from the upper leg. This bypass procedure removed that part of the artery that was clogged and replaced it with a clear vessel, thereby restoring blood flow through the artery. This new method was introduced by Favaloro in 1967.
In order for Favaloro and other heart surgeons to perform coronary artery surgery successfully, several other medical techniques had to be developed. These included extracorporeal circulation and microsurgical techniques.

Extracorporeal circulation is the process of diverting the patient’s blood flow from the heart and into a heart-lung machine. This procedure was developed in 1953 by U.S. surgeon John H. Gibbon, Jr. Since the blood does not flow through the heart, the heart can be temporarily stopped so that the surgeons can isolate the artery and perform the surgery on motionless tissue. Microsurgery is necessary because some of the coronary arteries are less than 1.5 millimeters in diameter. Since these arteries had to be sutured, optical magnification and very delicate and sophisticated surgical tools were required. After performing this surgery on numerous patients, follow-up studies were able to determine the surgery’s effectiveness. Only then was the value of coronary artery bypass surgery recognized as an effective procedure for reducing angina in heart patients.

Consequences

According to the American Heart Association, approximately 332,000 bypass surgeries were performed in the United States in 1987, an increase of 48,000 from 1986. These figures show that the work by Favaloro and others has had a major impact on the health of United States citizens. The future outlook is also positive. It has been estimated that five million people had coronary artery disease in 1987. Of this group, an estimated 1.5 million had heart attacks and 500,000 died. Of those living, many experienced angina. Research has developed new surgical procedures and new drugs to help fight coronary artery disease. Yet coronary artery bypass surgery is still a major form of treatment.

28 May 2009

Contact lenses

The invention: Small plastic devices that fit under the eyelids, contact lenses, or “contacts,” frequently replace the more familiar eyeglasses that many people wear to correct vision problems.

The people behind the invention:
Leonardo da Vinci (1452-1519), an Italian artist and scientist
Adolf Eugen Fick (1829-1901), a German glassblower
Kevin Tuohy, an American optician
Otto Wichterle (1913- ), a Czech chemist
William Feinbloom (1904-1985), an American optometrist

An Old Idea

There are two main types of contact lenses: hard and soft. Both types are made of synthetic polymers (plastics). The basic concept of the contact lens was conceived by Leonardo da Vinci in 1508. He proposed that vision could be improved if small glass ampules filled with water were placed in front of each eye. Nothing came of the idea until glass scleral lenses were invented by the German glassblower Adolf Fick. Fick’s large, heavy lenses covered the pupil of the eye, its colored iris, and part of the sclera (the white of the eye). Fick’s lenses were not useful, since they were painful to wear.

In the mid-1930’s, however, plastic scleral lenses were developed by various organizations and people, including the German company I. G. Farben and the American optometrist William Feinbloom. These lenses were light and relatively comfortable; they could be worn for several hours at a time. In 1945, the American optician Kevin Tuohy developed corneal lenses, which covered only the cornea of the eye. Reportedly, Tuohy’s invention was inspired by the fact that his nearsighted wife could not bear scleral lenses but hated to wear eyeglasses. Tuohy’s lenses were hard contact lenses made of rigid plastic, but they were much more comfortable than scleral lenses and could be worn for longer periods of time. Soon after, other people developed soft contact lenses, which cover both the cornea and the iris. At present, many kinds of contact lenses are available. Both hard and soft contact lenses have advantages for particular uses.

Eyes, Tears, and Contact Lenses

The camera-like human eye automatically focuses itself and adjusts to the prevailing light intensity. In addition, it never runs out of “film” and makes a continuous series of visual images. In the process of seeing, light enters the eye and passes through the clear, dome-shaped cornea, through the hole (the pupil) in the colored iris, and through the clear eye lens, which can change shape by means of muscle contraction. The lens focuses the light, which next passes across the jellylike “vitreous humor” and hits the retina. There, light-sensitive retinal cells send visual images to the optic nerve, which transmits them to the brain for interpretation.

Many people have 20/20 (normal) vision, which means that they can clearly see letters on a designated line of a standard eye chart placed 20 feet away. Nearsighted (myopic) people have vision of 20/40 or worse. This means that, 20 feet from the eye chart, they see clearly what people with 20/20 vision can see clearly at a greater distance. Myopia (nearsightedness) is one of the four most common visual defects. The others are hyperopia, astigmatism, and presbyopia. All are called “refractive errors” and are corrected with appropriate eyeglasses or contact lenses. Myopia, which occurs in 30 percent of humans, occurs when the eyeball is too long for the lens’s focusing ability and images of distant objects focus before they reach the retina, causing blurry vision. Hyperopia, or farsightedness, occurs when the eyeballs are too short.
In hyperopia, the eye’s lenses cannot focus images of nearby objects by the time those images reach the retina, resulting in blurry vision. A more common condition is astigmatism, in which incorrectly shaped corneas make all objects appear blurred. Finally, presbyopia, part of the aging process, causes the lens of the eye to lose its elasticity. It causes progressive difficulty in seeing nearby objects. In myopic, hyperopic, or astigmatic people, bifocal (two-lens) systems are used to correct presbyopia, whereas monofocal systems are used to correct presbyopia in people whose vision is otherwise normal.

Modern contact lenses, which many people prefer to eyeglasses, are used to correct all common eye defects as well as many others not mentioned here. The lenses float on the layer of tears that is made continuously to nourish the eye and keep it moist. They fit under the eyelids and either over the cornea or over both the cornea and the iris, and they correct visual errors by altering the eye’s focal length enough to produce 20/20 vision. In addition to being more attractive than eyeglasses, contact lenses correct visual defects more effectively than eyeglasses can. Some soft contact lenses (all are made of flexible plastics) can be worn almost continuously. Hard lenses are made of more rigid plastic and last longer, though they can usually be worn only for six to nine hours at a time. The choice of hard or soft lenses must be made on an individual basis. The disadvantages of contact lenses include the fact that they must be cleaned frequently to prevent eye irritation. Furthermore, people who do not produce adequate amounts of tears (a condition called “dry eyes”) cannot wear them. Also, arthritis, many allergies, and poor manual dexterity caused by old age or physical problems make many people poor candidates for contact lenses.

Impact

The invention of Plexiglas hard scleral contact lenses set the stage for the development of the widely used corneal hard lenses by Tuohy. The development of soft contact lenses available to the general public began in Czechoslovakia in the 1960’s. It led to the sale, starting in the 1970’s, of the popular, soft contact lenses pioneered by Otto Wichterle. The Wichterle lenses, which cover both the cornea and the iris, are made of a plastic called HEMA (short for hydroxyethyl methacrylate). These very thin lenses have disadvantages that include the requirement of disinfection between uses, incomplete astigmatism correction, low durability, and the possibility of chemical combination with some medications, which can damage the eyes. Therefore, much research is being carried out to improve them. For this reason, and because of the continued popularity of hard lenses, new kinds of soft and hard lenses are continually coming on the market.
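How a corrective lens “alters the eye’s focal length” can be made concrete with the standard thin-lens prescription relations; the numbers below are illustrative, not figures from the text. For a myopic eye whose far point (the most distant object it can see sharply) lies a distance d_far in front of it, the required powers are

```latex
P_{\mathrm{contact}} = -\frac{1}{d_{\mathrm{far}}}\ \text{(diopters, with } d_{\mathrm{far}}\text{ in meters)},\qquad
P_{\mathrm{spectacle}} = \frac{P_{\mathrm{contact}}}{1 + v\,P_{\mathrm{contact}}},
```

where v is the vertex distance between a spectacle lens and the cornea. A far point of 0.5 meter calls for roughly a -2.00 diopter contact lens, or about -2.05 diopters in a spectacle lens worn 12 millimeters from the eye; for strong prescriptions the gap grows, which is one reason a lens sitting directly on the tear film can correct vision more effectively than eyeglasses.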

24 May 2009

Computer chips

The invention: Also known as a microprocessor, a computer chip combines the basic logic circuits of a computer on a single silicon chip.

The people behind the invention:
Robert Norton Noyce (1927-1990), an American physicist
William Shockley (1910-1989), an American coinventor of the transistor who was a cowinner of the 1956 Nobel Prize in Physics
Marcian Edward Hoff, Jr. (1937- ), an American engineer
Jack St. Clair Kilby (1923- ), an American researcher and assistant vice president of Texas Instruments

The Shockley Eight

The microelectronics industry began shortly after World War II with the invention of the transistor. While radar was being developed during the war, it was discovered that certain crystalline substances, such as germanium and silicon, possess unique electrical properties that make them excellent signal detectors. This class of materials became known as “semiconductors,” because they are neither conductors nor insulators of electricity. Immediately after the war, scientists at Bell Telephone Laboratories began to conduct research on semiconductors in the hope that they might yield some benefits for communications. The Bell physicists learned to control the electrical properties of semiconductor crystals by “doping” (treating) them with minute impurities. When two thin wires for current were attached to this material, a crude device was obtained that could amplify the voice. The transistor, as this device was called, was developed late in 1947. The transistor duplicated many functions of vacuum tubes; it was also smaller, required less power, and generated less heat. The three Bell Laboratories scientists who guided its development—William Shockley, Walter H. Brattain, and John Bardeen—won the 1956 Nobel Prize in Physics for their work.

Shockley left Bell Laboratories and went to Palo Alto, California, where he formed his own company, Shockley Semiconductor Laboratories, which was a subsidiary of Beckman Instruments. Palo Alto is the home of Stanford University, which, in 1954, set aside 655 acres of land for a high-technology industrial area known as Stanford Research Park. One of the first small companies to lease a site there was Hewlett-Packard. Many others followed, and the surrounding area of Santa Clara County gave rise in the 1960’s and 1970’s to a booming community of electronics firms that became known as “Silicon Valley.” On the strength of his prestige, Shockley recruited eight young scientists from the eastern United States to work for him. One was Robert Norton Noyce, an Iowa-bred physicist with a doctorate from the Massachusetts Institute of Technology. Noyce came to Shockley’s company in 1956. The “Shockley Eight,” as they became known in the industry, soon found themselves at odds with their boss over issues of research and development. Seven of the dissenting scientists negotiated with industrialist Sherman Fairchild, and they convinced the remaining holdout, Noyce, to join them as their leader. The Shockley Eight defected in 1957 to form a new company, Fairchild Semiconductor, in nearby Mountain View, California. Shockley’s company, which never recovered from the loss of these scientists, soon went out of business.

Integrating Circuits

Research efforts at Fairchild Semiconductor and Texas Instruments, in Dallas, Texas, focused on putting several transistors on one piece, or “chip,” of silicon. The first step involved making miniaturized electrical circuits.
Jack St. Clair Kilby, a researcher at Texas Instruments, succeeded in making a circuit on a chip that consisted of tiny resistors, transistors, and capacitors, all of which were connected with gold wires. He and his company filed for a patent on this “integrated circuit” in February, 1959. Noyce and his associates at Fairchild Semiconductor followed in July of that year with an integrated circuit manufactured by means of a “planar process,” which involved laying down several layers of semiconductor that were isolated by layers of insulating material. Although Kilby and Noyce are generally recognized as coinventors of the integrated circuit, Kilby alone received a membership in the National Inventors Hall of Fame for his efforts.

Consequences

By 1968, Fairchild Semiconductor had grown to a point at which many of its key Silicon Valley managers had major philosophical differences with the East Coast management of their parent company. This led to a major exodus of top-level management and engineers. Many started their own companies. Noyce, Gordon E. Moore, and Andrew Grove left Fairchild to form a new company in Santa Clara called Intel with $2 million that had been provided by venture capitalist Arthur Rock. Intel’s main business was the manufacture of computer memory integrated circuit chips. By 1970, Intel was able to develop and bring to market a random-access memory (RAM) chip that was subsequently purchased in large quantities by several major computer manufacturers, providing large profits for Intel.

In 1969, Marcian Edward Hoff, Jr., an Intel research and development engineer, met with engineers from Busicom, a Japanese firm. These engineers wanted Intel to design a set of integrated circuits for Busicom’s desktop calculators, but Hoff told them their specifications were too complex. Nevertheless, Hoff began to think about the possibility of incorporating all the logic circuits of a computer central processing unit (CPU) into one chip. He began to design a chip called a “microprocessor,” which, when combined with a chip that would hold a program and one that would hold data, would become a small, general-purpose computer. Noyce encouraged Hoff and his associates to continue his work on the microprocessor, and Busicom contracted with Intel to produce the chip. Federico Faggin, who was hired from Fairchild, did the chip layout and circuit drawings. In January, 1971, the Intel team finished its first working microprocessor, the 4004. The following year, Intel made a higher-capacity microprocessor, the 8008, for Computer Terminals Corporation. That company contracted with Texas Instruments to produce a chip with the same specifications as the 8008, which was produced in June, 1972. Other manufacturers soon produced their own microprocessors. The Intel microprocessor became the most widely used computer chip in the budding personal computer industry and may take significant credit for the PC “revolution” that soon followed. Microprocessors have become so common that people use them every day without realizing it. In addition to being used in computers, the microprocessor has found its way into automobiles, microwave ovens, wristwatches, telephones, and many other ordinary items.

21 May 2009

Compressed-air-accumulating power plant



The invention:

Plants that can be used to store energy in the form of compressed air when electric power demand is low and use it to produce energy when power demand is high.

The organization behind the invention:

Nordwestdeutsche Kraftwerke, a German company


16 May 2009

Compact disc

The invention: A plastic disk on which digitized music or computer data is stored.

The people behind the invention:
Akio Morita (1921- ), a Japanese physicist and engineer who was a cofounder of Sony
Wisse Dekker (1924- ), a Dutch businessman who led the Philips company
W. R. Bennett (1904-1983), an American engineer who was a pioneer in digital communications and who played an important part in the Bell Laboratories research program

Digital Recording

The digital system of sound recording, like the analog methods that preceded it, was developed by the telephone companies to improve the quality and speed of telephone transmissions. The system of electrical recording introduced by Bell Laboratories in the 1920’s was part of this effort. Even Edison’s famous invention of the phonograph in 1877 was originally conceived as an accompaniment to the telephone. Although developed within the framework of telephone communications, these innovations found wide applications in the entertainment industry.

The basis of the digital recording system was a technique of sampling the electrical waveforms of sound called PCM, or pulse code modulation. PCM measures the characteristics of these waves and converts them into numbers. This technique was developed at Bell Laboratories in the 1930’s to transmit speech. At the end of World War II, engineers of the Bell System began to adapt PCM technology for ordinary telephone communications. The problem of turning sound waves into numbers was that of finding a method that could quickly and reliably manipulate millions of them. The answer to this problem was found in electronic computers, which used binary code to handle millions of computations in a few seconds. The rapid advance of computer technology and the semiconductor circuits that gave computers the power to handle complex calculations provided the means to bring digital sound technology into commercial use. In the 1960’s, digital transmission and switching systems were introduced to the telephone network. Pulse coded modulation of audio signals into digital code achieved standards of reproduction that exceeded even the best analog system, creating an enormous dynamic range of sounds with no distortion or background noise.

The importance of digital recording went beyond the transmission of sound because it could be applied to all types of magnetic recording in which the source signal is transformed into an electric current. There were numerous commercial applications for such a system, and several companies began to explore the possibilities of digital recording in the 1970’s. Researchers at the Sony, Matsushita, and Mitsubishi electronics companies in Japan produced experimental digital recording systems. Each developed its own PCM processor, an integrated circuit that changes audio signals into digital code. It does not continuously transform sound but instead samples it by analyzing thousands of minute slices of it per second. Sony’s PCM-F1 was the first analog-to-digital conversion chip to be produced. This gave Sony a lead in the research into and development of digital recording. All three companies had strong interests in both audio and video electronics equipment and saw digital recording as a key technology because it could deal with both types of information simultaneously. They devised recorders for use in their manufacturing operations.
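The sampling-and-quantizing idea behind PCM can be shown in a few lines. The sketch below is a generic illustration of uniform PCM (sample a waveform at a fixed rate, then round each sample to a 16-bit integer); the 8 kHz rate and the test tone are arbitrary illustrative choices, not details from the text.

```python
import math

def pcm_encode(signal, sample_rate=8000, duration=0.01, bits=16):
    """Sample a continuous signal and quantize each sample to a signed integer code."""
    levels = 2 ** (bits - 1) - 1              # e.g. 32767 for 16-bit PCM
    samples = []
    n = int(sample_rate * duration)
    for i in range(n):
        t = i / sample_rate                   # time of this "slice" of the waveform
        value = max(-1.0, min(1.0, signal(t)))  # clip to the representable range
        samples.append(round(value * levels))
    return samples

if __name__ == "__main__":
    tone = lambda t: math.sin(2 * math.pi * 440 * t)   # a 440 Hz test tone
    codes = pcm_encode(tone)
    print(codes[:8])    # the first few 16-bit sample codes
```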
After using PCM techniques to turn sound into digital code, they recorded this information onto tape, using not magnetic audio tape but the more advanced video tape, which could handle much more information. The experiments with digital recording occurred simultaneously with the accelerated development of video recording technology and owed much to the enhanced capabilities of video recorders. At this time, videocassette recorders were being developed in several corporate laboratories in Japan and Europe. The Sony Corporation was one of the companies developing video recorders at this time. Its U-matic machines were successfully used to record digitally. In 1972, the Nippon Columbia Company began to make its master recordings digitally on an Ampex video recording machine.

Links Among New Technologies

There were powerful links between the new sound recording systems and the emerging technologies of storing and retrieving video images. The television had proved to be the most widely used and profitable electronic product of the 1950’s, but with the market for color television saturated by the end of the 1960’s, manufacturers had to look for a replacement product. A machine to save and replay television images was seen as the ideal companion to the family TV set. The great consumer electronics companies—General Electric and RCA in the United States, Philips and Telefunken in Europe, and Sony and Matsushita in Japan—began experimental programs to find a way to save video images. RCA’s experimental teams took the lead in developing a videodisc system, called SelectaVision, that used an electronic stylus to read changes in capacitance on the disc. The greatest challenge to them came from the Philips company of Holland. Its optical videodisc used a laser beam to read information on a revolving disc, in which a layer of plastic contained coded information. With the aid of the engineering department of the Deutsche Grammophon record company, Philips had an experimental laser disc in hand by 1964.

The Philips Laservision videodisc was not a commercial success, but it carried forward an important idea. The research and engineering work carried out in the laboratories at Eindhoven in Holland proved that the laser reader could do the job. More important, Philips engineers had found that this fragile device could be mass produced as a cheap and reliable component of a commercial product. The laser optical decoder was applied to reading the binary codes of digital sound. By the end of the 1970’s, Philips engineers had produced a working system. Ten years of experimental work on the Laservision system proved to be a valuable investment for the Philips corporation. Around 1979, it started to work on a digital audio disc (DAD) playback system. This involved more than the basic idea of converting the output of the PCM conversion chip onto a disc. The lines of pits on the compact disc carry a great amount of information: the left- and right-hand tracks of the stereo system are identified, and a sequence of pits also controls the motor speed and corrects any error in the laser reading of the binary codes. This research was carried out jointly with the Sony Corporation of Japan, which had produced a superior method of encoding digital sound with its PCM chips. The binary codes that carried the information were manipulated by Sony’s sixteen-bit microprocessor. Its PCM chip for analog-to-digital conversion was also employed.
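To give a sense of how much information those pits carry, consider the audio format that Philips and Sony eventually standardized: 16-bit samples taken 44,100 times per second on each of two stereo channels (these figures come from the published CD audio standard rather than from the text above). The raw audio data rate is then

```latex
44\,100\ \tfrac{\text{samples}}{\text{s}}\times 16\ \tfrac{\text{bits}}{\text{sample}}\times 2\ \text{channels}
= 1\,411\,200\ \tfrac{\text{bits}}{\text{s}}\approx 1.4\ \text{Mbit/s},
```

so a 74-minute disc holds roughly 6.3 billion bits of audio, on the order of 780 megabytes, before the additional bits used for error correction and control are counted.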
Together, Philips and Sony produced a commercial digital playback record that they named the compact disc. The name is significant, as it does more than indicate the size of the disc—it indicates family ties with the highly successful compact cassette. Philips and Sony had already worked to establish this standard in the magnetic tape format and aimed to make their compact disc the standard for digital sound reproduction. Philips and Sony began to demonstrate their compact digital disc (CD) system to representatives of the audio industry in 1981. They were not alone in digital recording. The Japanese Victor Company, a subsidiary of Matsushita, had developed a version of digital recording from its VHD video disc design. It was called the audio high density disc (AHD). Instead of the small CD disc, the AHD system used a ten-inch vinyl disc. Each digital recording system used a different PCM chip with a different rate of sampling the audio signal. The recording and electronics industries' decision to standardize on the Philips/Sony CD system was therefore a major victory for these companies and an important event in the digital era of sound recording. Sony had found out the hard way that the technical performance of an innovation is irrelevant when compared with the politics of turning it into an industrywide standard. Although the pioneer in videocassette recorders, Sony had been beaten by its rival, Matsushita, in establishing the video recording standard. This mistake was not repeated in the digital standards negotiations, and many companies were persuaded to license the new technology. In 1982, the technology was announced to the public. The following year, the compact disc was on the market.

The Apex of Sound Technology

The compact disc represented the apex of recorded sound technology. Simply put, here at last was a system of recording in which there was no extraneous noise—no surface noise of scratches and pops, no tape hiss, no background hum—and no damage was done to the recording as it was played. In principle, a digital recording will last forever, and each play will sound as pure as the first. The compact disc could also play much longer than the vinyl record or long-playing cassette tape. Despite these obvious technical advantages, the commercial success of digital recording was not ensured. There had been several other advanced systems that had not fared well in the marketplace, and the conspicuous failure of quadraphonic sound in the 1970's had not been forgotten within the industry of recorded sound. Historically, there were two key factors in the rapid acceptance of a new system of sound recording and reproduction: a library of prerecorded music to tempt the listener into adopting the system and a continual decrease in the price of the playing units to bring them within the budgets of more buyers. By 1984, there were about a thousand titles available on compact disc in the United States; that number had doubled by 1985. Although many of these selections were classical music—it was naturally assumed that audiophiles would be the first to buy digital equipment—popular music was well represented. The first CD available for purchase was an album by popular entertainer Billy Joel. The first CD-playing units cost more than $1,000, but Akio Morita of Sony was determined that the company should reduce the price of players even if it meant selling them below cost. Sony's audio engineering department improved the performance of the players while reducing size and cost.
By 1984, Sony had a small CD unit on the market for $300. Several of Sony's competitors, including Matsushita, had followed its lead into digital reproduction. There were several compact disc players available in 1985 that cost less than $500. Sony quickly applied digital technology to the popular personal stereo and to automobile sound systems. Sales of CD units increased roughly tenfold from 1983 to 1985.

Impact on Vinyl Recording

When the compact disc was announced in 1982, the vinyl record was the leading form of recorded sound, with 273 million units sold annually compared to 125 million prerecorded cassette tapes. The compact disc sold slowly, beginning with 800,000 units shipped in 1983 and rising to 53 million in 1986. By that time, the cassette tape had taken the lead, with slightly fewer than 350 million units. The vinyl record was in decline, with only about 110 million units shipped. Compact discs first outsold vinyl records in 1988. In the ten years from 1979 to 1988, the sales of vinyl records dropped nearly 80 percent. In 1989, CDs accounted for more than 286 million sales, but cassettes still led the field with total sales of 446 million. The compact disc finally passed the cassette in total sales in 1992, when more than 300 million CDs were shipped, an increase of 22 percent over the figure for 1991. The introduction of digital recording had an invigorating effect on the industry of recorded sound, which had been unable to fully recover from the slump of the late 1970's. Sales of recorded music had stagnated in the early 1980's, and an industry accustomed to steady increases in output became eager to find a new product or style of music to boost its sales. The compact disc was the product to revitalize the market for both recordings and players. During the 1980's, worldwide sales of recorded music jumped from $12 billion to $22 billion, with about half of the sales volume accounted for by digital recordings by the end of the decade. The success of digital recording served in the long run to undermine the commercial viability of the compact disc. This was a play-only technology, like the vinyl record before it. Once users had become accustomed to the pristine digital sound, they clamored for digital recording capability. The alliance of Sony and Philips broke down in the search for a digital tape technology for home use. Sony produced a digital tape system called DAT, while Philips responded with a digital version of its compact audio tape called DCC. Sony answered the challenge of DCC with its Mini Disc (MD) product, which can record and replay digitally. The versatility of digital recording has opened up a wide range of consumer products. Compact disc technology has been incorporated into the computer, in which CD-ROM readers convert the digital code of the disc into sound and images. Many home computers have the capability to record and replay sound digitally. Digital recording is the basis for interactive audio/video computer programs in which the user can interface with recorded sound and images. Philips has established a strong foothold in interactive digital technology with its CD-I (compact disc interactive) system, which was introduced in 1990. This acts as a multimedia entertainer, providing sound, moving images, games, and interactive sound and image publications such as encyclopedias. The future of digital recording will be broad-based systems that can record and replay a wide variety of sounds and images and that can be manipulated by users of home computers.

13 May 2009

Community antenna television


The invention: 

A system for connecting households in isolated areas to common antennas to improve television reception, community antenna television was a forerunner of modern cable television systems.

The people behind the invention: 

Robert J. Tarlton, the founder of CATV in eastern Pennsylvania
Ed Parsons, the founder of CATV in Oregon
Ted Turner (1938- ), founder of the first cable superstation, WTBS


08 May 2009

Communications satellite

The invention: Telstar I, the world's first commercial communications satellite, opened the age of live, worldwide television by connecting the United States and Europe.

The people behind the invention:
Arthur C. Clarke (1917- ), a British science-fiction writer who in 1945 first proposed the idea of using satellites as communications relays
John R. Pierce (1910- ), an American engineer who worked on the Echo and Telstar satellite communications projects

Science Fiction?

In 1945, Arthur C. Clarke suggested that a satellite orbiting high above the earth could relay television signals between different stations on the ground, making for a much wider range of transmission than that of the usual ground-based systems. Writing in the February, 1945, issue of Wireless World, Clarke said that satellites "could give television and microwave coverage to the entire planet." In 1956, John R. Pierce at the Bell Telephone Laboratories of the American Telephone & Telegraph Company (AT&T) began to urge the development of communications satellites. He saw these satellites as a replacement for the ocean-bottom cables then being used to carry transatlantic telephone calls. In 1950, about one-and-a-half million transatlantic calls were made, and that number was expected to grow to three million by 1960, straining the capacity of the existing cables; in 1970, twenty-one million calls were made. Communications satellites offered a good, cost-effective alternative to building more transatlantic telephone cables. On January 19, 1961, the Federal Communications Commission (FCC) gave permission for AT&T to begin Project Telstar, the first commercial communications satellite bridging the Atlantic Ocean. AT&T reached an agreement with the National Aeronautics and Space Administration (NASA) in July, 1961, in which AT&T would pay $3 million for each Telstar launch. The Telstar project involved about four hundred scientists, engineers, and technicians at the Bell Telephone Laboratories, twenty more technical personnel at AT&T headquarters, and the efforts of more than eight hundred other companies that provided equipment or services. Telstar 1 was shaped like a faceted sphere, was 88 centimeters in diameter, and weighed 80 kilograms. Most of its exterior surface (sixty of the seventy-four facets) was covered by 3,600 solar cells to convert sunlight into 15 watts of electricity to power the satellite. Each solar cell was covered with artificial sapphire to reduce the damage caused by radiation. The main instrument was a two-way radio able to handle six hundred telephone calls at a time or one television channel. The signal that the radio would send back to Earth was very weak—less than one-thirtieth the energy used by a household light bulb. Large ground antennas were needed to receive Telstar's faint signal. The main ground station was built by AT&T in Andover, Maine, on a hilltop informally called "Space Hill." A horn-shaped antenna, weighing 380 tons, with a length of 54 meters and an open end with an area of 1,097 square meters, was mounted so that it could rotate to track Telstar across the sky. To protect it from wind and weather, the antenna was built inside an inflated dome, 64 meters in diameter and 49 meters tall. It was, at the time, the largest inflatable structure ever built.
A second, smaller horn antenna in Holmdel, New Jersey, was also used.

International Cooperation

In February, 1961, the governments of the United States and England agreed to let the British Post Office and NASA work together to test experimental communications satellites. The British Post Office built a 26-meter-diameter steerable dish antenna of its own design at Goonhilly Downs, near Cornwall, England. Under a similar agreement, the French National Center for Telecommunications Studies constructed a ground station, almost identical to the Andover station, at Pleumeur-Bodou, Brittany, France. After testing, Telstar 1 was moved to Cape Canaveral, Florida, and attached to the Thor-Delta launch vehicle built by the Douglas Aircraft Company. The Thor-Delta was launched at 3:35 a.m. eastern standard time (EST) on July 10, 1962. Once in orbit, Telstar 1 took 157.8 minutes to circle the globe. The satellite came within range of the Andover station on its sixth orbit, and a television test pattern was transmitted to the satellite at 6:26 p.m. EST. At 6:30 p.m. EST, a tape-recorded black-and-white image of the American flag with the Andover station in the background, transmitted from Andover to Holmdel, opened the first television show ever broadcast by satellite. Live pictures of U.S. vice president Lyndon B. Johnson and other officials gathered at the Carnegie Institution in Washington, D.C., followed on the AT&T program carried live on all three American networks. Up to the moment of launch, it was uncertain whether the French station would be completed in time to participate in the initial test. At 6:47 p.m. EST, however, Telstar's signal was picked up by the station in Pleumeur-Bodou, and Johnson's image became the first television transmission to cross the Atlantic. Pictures received at the French station were reported to be so clear that they looked as though they had been sent from only forty kilometers away. Because of technical difficulties, the English station was unable to receive a clear signal. The first formal exchange of programming between the United States and Europe occurred on July 23, 1962. This special eighteen-minute program, produced by the European Broadcasting Union, consisted of live scenes from major cities throughout Europe and was transmitted from Goonhilly Downs, where the technical difficulties had been corrected, to Andover via Telstar. On the previous orbit, a program entitled "America, July 23, 1962," showing scenes from fifty television cameras around the United States, was beamed from Andover to Pleumeur-Bodou and seen by an estimated one hundred million viewers throughout Europe.

Consequences

Telstar 1 and the communications satellites that followed it revolutionized the television news and sports industries. Before, television networks had to ship film across the oceans, meaning delays of hours or days between the time an event occurred and the broadcast of pictures of that event on television on another continent. Now, news of major significance, as well as sporting events, can be viewed live around the world. The impact on international relations also was significant, with world opinion becoming able to influence the actions of governments and individuals, since those actions could be seen around the world as the events were still in progress. More powerful launch vehicles allowed new satellites to be placed in geosynchronous orbits, circling the earth at the same rate as the earth's rotation.
When viewed from the ground, these satellites appeared to remain stationary in the sky. This allowed continuous communications and greatly simplified the ground antenna system. By the late 1970’s, private individuals had built small antennas in their backyards to receive television signals directly from the satellites.
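The orbital figures quoted above can be roughly cross-checked with Kepler's third law, which relates a satellite's period T to the semi-major axis a of its orbit: a = (mu T^2 / 4 pi^2)^(1/3), where mu is Earth's gravitational parameter. The short sketch below uses standard textbook values and is not part of the original article.

    import math

    MU = 3.986e14            # m^3/s^2, Earth's gravitational parameter
    EARTH_RADIUS = 6.371e6   # m, mean radius of the earth

    def semi_major_axis(period_s):
        """Kepler's third law: orbit size from orbital period."""
        return (MU * period_s ** 2 / (4 * math.pi ** 2)) ** (1 / 3)

    # Telstar 1: the 157.8-minute period quoted in the text gives a mean
    # height of roughly 3,300 kilometers (its actual orbit was elliptical).
    print((semi_major_axis(157.8 * 60) - EARTH_RADIUS) / 1000)

    # A geosynchronous satellite matches one sidereal day (86,164 seconds),
    # which works out to an altitude of about 35,800 kilometers.
    print((semi_major_axis(86_164) - EARTH_RADIUS) / 1000)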

04 May 2009

Colossus computer

The invention: The first all-electronic calculating device, the Colossus computer was built to decipher German military codes during World War II.

The people behind the invention:
Thomas H. Flowers, an electronics expert
Max H. A. Newman (1897-1984), a mathematician
Alan Mathison Turing (1912-1954), a mathematician
C. E. Wynn-Williams, a member of the Telecommunications Research Establishment

An Undercover Operation

In 1939, during World War II (1939-1945), a team of scientists, mathematicians, and engineers met at Bletchley Park, outside London, to discuss the development of machines that would break the secret code used in Nazi military communications. The Germans were using a machine called "Enigma" to communicate in code between headquarters and field units. Polish scientists, however, had been able to examine a German Enigma, and between 1928 and 1938 they were able to break the codes by using electromechanical codebreaking machines called "bombas." In 1938, the Germans made the Enigma more complicated, and the Poles were no longer able to break the codes. In 1939, the Polish machines and codebreaking knowledge passed to the British. Alan Mathison Turing was one of the mathematicians gathered at Bletchley Park to work on codebreaking machines. Turing was one of the first people to conceive of the universality of digital computers. He first mentioned the "Turing machine" in 1936 in an article published in the Proceedings of the London Mathematical Society. The Turing machine, a hypothetical device that can solve any problem that involves mathematical computation, is not restricted to only one task—hence the universality feature. Turing suggested an improvement to the Bletchley codebreaking machine, the "Bombe," which had been modeled on the Polish bomba. This improvement increased the computing power of the machine. The new codebreaking machine replaced the tedious method of decoding by hand, which, in addition to being slow, was ineffective in dealing with complicated encryptions that were changed daily.

Building a Better Mousetrap

The Bombe was very useful. In 1942, when the Germans started using a more sophisticated cipher machine known as the "Fish," Max H. A. Newman, who was in charge of one subunit at Bletchley Park, believed that an automated device could be designed to break the codes produced by the Fish. Thomas H. Flowers, who was in charge of a switching group at the Post Office Research Station at Dollis Hill, had been approached to build a special-purpose electromechanical device for Bletchley Park in 1941. The device was not useful, and Flowers was assigned to other problems. Flowers began to work closely with Turing, Newman, and C. E. Wynn-Williams of the Telecommunications Research Establishment (TRE) to develop a machine that could break the Fish codes. The Dollis Hill team worked on the tape driving and reading problems, and Wynn-Williams's team at TRE worked on electronic counters and the necessary circuitry. Their efforts produced the "Heath Robinson," which could read two thousand characters per second. The Heath Robinson used vacuum tubes, an uncommon component in the early 1940's. The vacuum tubes performed more reliably and rapidly than the relays that had been used for counters. Heath Robinson and the companion machines proved that high-speed electronic devices could successfully do cryptanalytic work (solve decoding problems). Entirely automatic in operation once started, the Heath Robinson was put together at Bletchley Park in the spring of 1943.
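The universal Turing machine mentioned above can be made concrete with a few lines of present-day code. The sketch below is only an illustration of the idea, not anything drawn from Turing's paper or from the Bletchley Park machines; the example program (which adds one to a number written in unary) and every name in it are invented.

    def run(tape, rules, state="start", blank="_", halt="halt"):
        """Run a table of rules over a tape, one cell at a time.

        `rules` maps (state, symbol) to (new_symbol, move, new_state),
        where move is -1 for left and +1 for right.
        """
        pos = 0
        while state != halt:
            if pos == len(tape):               # extend the tape as needed
                tape.append(blank)
            new_symbol, move, state = rules[(state, tape[pos])]
            tape[pos] = new_symbol
            pos = max(0, pos + move)
        return tape

    # One "program" for the same machine: scan right past the 1s, then
    # write one more 1 on the first blank cell (unary increment).
    increment = {
        ("start", "1"): ("1", +1, "start"),
        ("start", "_"): ("1", +1, "halt"),
    }

    print(run(list("111_"), increment))        # ['1', '1', '1', '1']

Changing only the rule table makes the same loop compute something else, which is the sense in which such a machine is not restricted to one task.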
The Heath Robinson became obsolete for codebreaking shortly after it was put into use, so work began on a bigger, faster, and more powerful machine: the Colossus. Flowers led the team that designed and built the Colossus in eleven months at Dollis Hill. The first Colossus (Mark I) was a bigger, faster version of the Heath Robinson and read about five thousand characters per second. Colossus had approximately fifteen hundred vacuum tubes, which was the largest number that had ever been used at that time. Although Turing and Wynn-Williams were not directly involved with the design of the Colossus, their previous work on the Heath Robinson was crucial to the project, since the first Colossus was based on the Heath Robinson. Colossus became operational at Bletchley Park in December, 1943, and Flowers made arrangements for the manufacture of its components in case other machines were required. The request for additional machines came in March, 1944. The second Colossus, the Mark II, was extensively redesigned and was able to read twenty-five thousand characters per second because it was capable of performing parallel operations (carrying out several different operations at once, instead of one at a time); it also had a short-term memory. The Mark II went into operation on June 1, 1944. More machines were made, each with further modifications, until there were ten. The Colossus machines were special-purpose, program-controlled electronic digital computers, the only known electronic programmable computers in existence in 1944. The use of electronics allowed for a tremendous increase in the internal speed of the machine.

Impact

The Colossus machines gave Britain the best codebreaking machines of World War II and provided information that was crucial for the Allied victory. The information decoded by Colossus, the actual messages, and their influence on military decisions would remain classified for decades after the war. The later work of several of the people involved with the Bletchley Park projects was important in British computer development after the war. Newman's and Turing's postwar careers were closely tied to emerging computer advances. Newman, who was interested in the impact of computers on mathematics, received a grant from the Royal Society in 1946 to establish a calculating machine laboratory at Manchester University. He was also involved with postwar computer growth in Britain. Several other members of the Bletchley Park team, including Turing, joined Newman at Manchester in 1948. Before going to Manchester University, however, Turing joined Britain's National Physical Laboratory (NPL). At NPL, Turing worked on an advanced computer known as the Pilot Automatic Computing Engine (Pilot ACE). While at NPL, Turing proposed the concept of a stored program, which was a controversial but extremely important idea in computing. A "stored" program is one that remains in residence inside the computer, making it possible for a particular program and data to be fed through an input device simultaneously. (The Heath Robinson and Colossus machines were limited by utilizing separate input tapes, one for the program and one for the data to be analyzed.) Turing was among the first to explain the stored-program concept in print. He was also among the first to imagine how subroutines could be included in a program. (A subroutine allows separate tasks within a large program to be done in distinct modules; in effect, it is a detour within a program.
After the completion of the subroutine, the main program takes control again.)
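In modern terms the "detour" works as sketched below; the function names are invented for the illustration and have nothing to do with Colossus itself.

    def checksum(message):
        """A small, self-contained task packaged as a subroutine."""
        return sum(ord(ch) for ch in message) % 256

    def main():
        for msg in ("ATTACK AT DAWN", "HOLD POSITION"):
            value = checksum(msg)     # control detours into the subroutine...
            print(msg, value)         # ...and then the main program resumes

    main()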

22 April 2009

Color television


The invention: 

System for broadcasting full-color images over the airwaves.

The people behind the invention:

Peter Carl Goldmark (1906-1977), the head of the CBS research and development laboratory
William S. Paley (1901-1990), the businessman who took over CBS
David Sarnoff (1891-1971), the founder of RCA


11 April 2009

Color film

The invention: A photographic medium used to take full-color pictures.

The people behind the invention:
Rudolf Fischer (1881-1957), a German chemist
H. Siegrist (1885-1959), a German chemist and Fischer's collaborator
Benno Homolka (1877-1949), a German chemist

The Process Begins

Around the turn of the twentieth century, Arthur-Louis Ducos du Hauron, a French chemist and physicist, proposed a tripack (three-layer) process of film development in which three color negatives would be taken by means of superimposed films. This was a subtractive process. (In the "additive method" of making color pictures, the three colors are added in projection—that is, the colors are formed by the mixture of colored light of the three primary hues. In the "subtractive method," the colors are produced by the superposition of prints.) In Ducos du Hauron's process, the blue-light negative would be taken on the top film of the pack; a yellow filter below it would transmit the yellow light, which would reach a green-sensitive film and then fall upon the bottom of the pack, which would be sensitive to red light. Tripacks of this type were unsatisfactory, however, because the light became diffused in passing through the emulsion layers, so the green and red negatives were not sharp. To obtain the real advantage of a tripack, the three layers must be coated one over the other so that the distance between the blue-sensitive and red-sensitive layers is a small fraction of a thousandth of an inch. Tripacks of this type were suggested by the early pioneers of color photography, who had the idea that the packs would be separated into three layers for development and printing. The manipulation of such systems proved to be very difficult in practice. It was also suggested, however, that it might be possible to develop such tripacks as a unit and then, by chemical treatment, convert the silver images into dye images.

Fischer's Theory

One of the earliest subtractive tripack methods that seemed to hold great promise was that suggested by Rudolf Fischer in 1912. He proposed a tripack that would be made by coating three emulsions on top of one another; the lowest one would be red-sensitive, the middle one would be green-sensitive, and the top one would be blue-sensitive. Chemical substances called "couplers," which would produce dyes in the development process, would be incorporated into the layers. In this method, the molecules of the developing agent, after becoming oxidized by developing the silver image, would react with the coupler to produce the dye image. The two types of developing agents described by Fischer are para-aminophenol and paraphenylenediamine (or their derivatives). The five types of dye that Fischer discovered are formed when silver images are developed by these two developing agents in the presence of suitable couplers. The five classes of dye he used (indophenols, indoanilines, indamines, indothiophenols, and azomethines) were already known when Fischer did his work, but it was he who discovered that the photographic latent image could be used to promote their formation from "coupler" and "developing agent." The indoaniline and azomethine types have been found to possess the necessary properties, but the other three suffer from serious defects.
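The difference between the additive and subtractive methods described above can be sketched numerically. In the little example below, color values run from 0 to 1; the pairing of cyan, magenta, and yellow dyes with the light they absorb is standard color theory, but the code itself is only an illustration and is not drawn from the article.

    def additive(red, green, blue):
        """Mixing colored light: the three primaries simply add (clipped at white)."""
        return tuple(min(1.0, c) for c in (red, green, blue))

    def subtractive(cyan, magenta, yellow):
        """Dye layers over white light: cyan absorbs red, magenta absorbs
        green, and yellow absorbs blue, so each dye subtracts one primary."""
        return (1.0 - cyan, 1.0 - magenta, 1.0 - yellow)

    print(additive(1.0, 1.0, 0.0))      # red light + green light -> yellow
    print(subtractive(0.0, 0.0, 1.0))   # a yellow dye alone -> yellow
    print(subtractive(1.0, 1.0, 1.0))   # all three dyes at full strength -> black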
Because only p-phenylenediamine and its derivatives can be used to form the indoaniline and azomethine dyes, it has become the most widely used color developing agent.

Impact

In the early 1920's, Leopold Mannes and Leopold Godowsky made a great advance beyond the Fischer process. Working on a new process of color photography, they adopted coupler development, but instead of putting couplers into the emulsion as Fischer had, they introduced them during processing. Finally, in 1935, the film was placed on the market under the name "Kodachrome," a name that had been used for an early two-color process. The first use of the new Kodachrome process in 1935 was for 16-millimeter film. Color motion pictures could be made by the Kodachrome process as easily as black-and-white pictures, because the complex work involved (the color development of the film) was done under precise technical control. The definition (quality of the image) given by the process was soon sufficient to make it practical for 8-millimeter pictures, and in 1936, Kodachrome film was introduced in a 35-millimeter size for use in popular miniature cameras. Soon thereafter, color processes were developed on a larger scale and new color materials were rapidly introduced. In 1940, the Kodak Research Laboratories worked out a modification of the Fischer process in which the couplers were put into the emulsion layers. These couplers are not dissolved in the gelatin layer itself, as the Fischer couplers are, but are carried in small particles of an oily material that dissolves the couplers, protects them from the gelatin, and protects the silver bromide from any interaction with the couplers. When development takes place, the oxidation product of the developing agent penetrates into the organic particles and reacts with the couplers so that the dyes are formed in small particles that are dispersed throughout the layers. In one form of this material, Ektachrome (originally intended for use in aerial photography), the film is reversed to produce a color positive. It is first developed with a black-and-white developer, then reexposed and developed with a color developer that combines with the couplers in each layer to produce the appropriate dyes, all three of which are produced simultaneously in one development. In summary, although Fischer did not succeed in putting his theory into practice, his work still forms the basis of most modern color photographic systems. Not only did he demonstrate the general principle of dye-coupling development, but the art is still mainly confined to one of the two types of developing agent, and two of the five types of dye, described by him.

COBOL computer language

The invention: The first user-friendly computer programming language, COBOL grew out of programming work that began with wartime ballistics calculations.

The people behind the invention:
Grace Murray Hopper (1906-1992), an American mathematician
Howard Hathaway Aiken (1900-1973), an American mathematician

Plain Speaking

Grace Murray Hopper, a mathematician, was a faculty member at Vassar College when World War II (1939-1945) began. She enlisted in the Navy and in 1943 was assigned to the Bureau of Ordnance Computation Project, where she worked on ballistics problems. In 1944, the Navy began using one of the first electromechanical computers, the Automatic Sequence Controlled Calculator (ASCC), designed by an International Business Machines (IBM) Corporation team of engineers headed by Howard Hathaway Aiken, to solve ballistics problems. Hopper became the third programmer of the ASCC. Hopper's interest in computer programming continued after the war ended. By the early 1950's, Hopper's work with programming languages had led to her development of FLOW-MATIC, the first English-language data processing compiler. Hopper's work on FLOW-MATIC paved the way for her later work with COBOL (Common Business Oriented Language). Until Hopper developed FLOW-MATIC, digital computer programming was all machine-specific and was written in machine code. A program designed for one computer could not be used on another. Every program was both machine-specific and problem-specific in that the programmer would be told what problem the machine was going to be asked to solve and would then write a completely new program for that specific problem in machine code. Machine code was based on the programmer's knowledge of the physical characteristics of the computer as well as the requirements of the problem to be solved; that is, the programmer had to know what was happening within the machine as it worked through a series of calculations, which relays tripped when and in what order, and what mathematical operations were necessary to solve the problem. Programming was therefore a highly specialized skill requiring a unique combination of linguistic, reasoning, engineering, and mathematical abilities that not even all the mathematicians and electrical engineers who designed and built the early computers possessed. While every computer still operates in response to the programming, or instructions, built into it, which are formatted in machine code, modern computers can accept programs written in nonmachine code—that is, in various automatic programming languages. They are able to accept nonmachine code programs because specialized programs now exist to translate those programs into the appropriate machine code. These translating programs are known as "compilers," or "assemblers," and FLOW-MATIC was the first such program. Hopper developed FLOW-MATIC after realizing that it would be necessary to eliminate unnecessary steps in programming to make computers more efficient. FLOW-MATIC was based, in part, on Hopper's recognition that certain elements, or commands, were common to many different programming applications. Hopper theorized that it would not be necessary to write a lengthy series of instructions in machine code to instruct a computer to begin a series of operations; instead, she believed that it would be possible to develop commands in an assembly language in such a way that a programmer could write one command, such as the word "add," that would translate into a sequence of several commands in machine code.
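The idea of one English-like word standing for several machine-level steps can be sketched as follows. The command table and the pseudo machine instructions are invented for the example; they are not FLOW-MATIC or COBOL syntax.

    # Toy translator: expand one high-level command into a sequence of
    # lower-level steps, the way a compiler expands a word such as "add."
    EXPANSIONS = {
        "ADD": ["LOAD {a}", "ADDTO {b}", "STORE {dest}"],
        "MOVE": ["LOAD {a}", "STORE {dest}"],
    }

    def translate(command, **operands):
        """Turn one English-like command into its low-level steps."""
        return [step.format(**operands) for step in EXPANSIONS[command]]

    for line in translate("ADD", a="PRICE", b="TAX", dest="TOTAL"):
        print(line)          # LOAD PRICE / ADDTO TAX / STORE TOTAL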
Hopper’s successful development of a compiler to translate programming languages into machine code thus meant that programming became faster and easier. From assembly languages such asFLOW-MATIC, it was a logical progression to the development of high-level computer languages, such as FORTRAN (Formula Translation) and COBOL.The Language of Business Between 1955 (when FLOW-MATIC was introduced) and 1959, a number of attempts at developing a specific business-oriented language were made. IBM and Remington Rand believed that the only way to market computers to the business community was through the development of a language that business people would be comfortable using. Remington Rand officials were especially committed to providing a language that resembled English. None of the attempts to develop a business-oriented language succeeded, however, and by 1959 Hopper and other members of the U.S. Department of Defense had persuaded representatives of various companies of the need to cooperate. On May 28 and 29, 1959, a conference sponsored by the Department of Defense was held at the Pentagon to discuss the problem of establishing a common language for the adaptation of electronic computers for data processing. As a result, the first distribution of COBOL was accomplished on December 17, 1959. Although many people were involved in the development of COBOL, Hopper played a particularly important role. She not only found solutions to technical problems but also succeeded in selling the concept of a common language from an administrative and managerial point of view. Hopper recognized that while the companies involved in the commercial development of computers were in competition with one another, the use of a common, business-oriented language would contribute to the growth of the computer industry as a whole, as well as simplify the training of computer programmers and operators. Consequences COBOL was the first compiler developed for business data processing operations. Its development simplified the training required for computer users in business applications and demonstrated that computers could be practical tools in government and industry as well as in science. Prior to the development of COBOL, electronic computers had been characterized as expensive, oversized adding machines that were adequate for performing time-consuming mathematics but lacked the flexibility that business people required. In addition, the development of COBOL freed programmers not only from the need to know machine code but also from the need to understand the physical functioning of the computers they were using. Programming languages could be written that were both machine- independent and almost universally convertible from one computer to another.Finally, because Hopper and the other committee members worked under the auspices of the Department of Defense, the software was not copyrighted, and in a short period of time COBOL became widely available to anyone who wanted to use it. It diffused rapidly throughout the industry and contributed to the widespread adaptation of computers for use in countless settings.