
29 June 2009

Genetic “fingerprinting”

The invention: 

A technique for using the unique characteristics of each human being's DNA to identify individuals, establish connections among relatives, and identify criminals.

The people behind the invention: 

Alec Jeffreys (1950- ), an English geneticist
Victoria Wilson (1950- ), an English geneticist
Swee Lay Thein (1951- ), a biochemical geneticist

Microscopic Fingerprints

In 1985, Alec Jeffreys, a geneticist at the University of Leicester in England, developed a method of deoxyribonucleic acid (DNA) analysis that provides a visual representation of the human genetic structure. Jeffreys's discovery had an immediate, revolutionary impact on problems of human identification, especially the identification of criminals. Whereas earlier techniques, such as conventional blood typing, provide evidence that is merely exclusionary (indicating only whether a suspect could or could not be the perpetrator of a crime), DNA fingerprinting provides positive identification. For example, under favorable conditions, the technique can establish with virtual certainty whether a given individual is a murderer or rapist. The applications are not limited to forensic science; DNA fingerprinting can also establish definitive proof of parenthood (paternity or maternity), and it is invaluable in providing markers for mapping disease-causing genes on chromosomes. In addition, the technique is used by animal geneticists to establish paternity and to detect genetic relatedness between social groups.

DNA fingerprinting (also referred to as "genetic fingerprinting") is a sophisticated technique that must be executed carefully to produce valid results. The technical difficulties arise partly from the complex nature of DNA. DNA, the genetic material responsible for heredity in all higher forms of life, is an enormously long, double-stranded molecule composed of four different units called "bases." The bases on one strand of DNA pair with complementary bases on the other strand. A human being contains twenty-three pairs of chromosomes; one member of each chromosome pair is inherited from the mother, the other from the father. The order, or sequence, of bases forms the genetic message, which is called the "genome." Scientists did not know the sequence of bases in any sizable stretch of DNA prior to the 1970's because they lacked the molecular tools to split DNA into fragments that could be analyzed.

This situation changed with the advent of biotechnology in the mid-1970's. The door to DNA analysis was opened with the discovery of bacterial enzymes called "DNA restriction enzymes." A restriction enzyme binds to DNA whenever it finds a specific short sequence of base pairs (analogous to a code word), and it splits the DNA at a defined site within that sequence. A single enzyme finds millions of cutting sites in human DNA, and the resulting fragments range in size from tens of base pairs to hundreds or thousands. The fragments are separated by size and exposed to a radioactive DNA probe, which can bind to specific complementary DNA sequences in the fragments. X-ray film detects the radioactive pattern. The developed film, called an "autoradiograph," shows a pattern of DNA fragments, which is similar to a bar code and can be compared with patterns from known subjects.

The Presence of Minisatellites

The uniqueness of a DNA fingerprint depends on the fact that, with the exception of identical twins, no two human beings have identical DNA sequences. Of the three billion base pairs in human DNA, many will differ from one person to another.
In 1985, Jeffreys and his coworkers, Victoria Wilson at the University of Leicester and Swee Lay Thein at the John Radcliffe Hospital in Oxford, discovered a way to produce a DNA fingerprint. Jeffreys had found previously that human DNA contains many repeated minisequences called "minisatellites." Minisatellites consist of sequences of base pairs repeated in tandem, and the number of repeated units varies widely from one individual to another. Every person, with the exception of identical twins, has a different number of tandem repeats and, hence, different lengths of minisatellite DNA. By using two labeled DNA probes to detect two different minisatellite sequences, Jeffreys obtained a unique fragment band pattern that was completely specific to an individual.

The power of the technique derives from the law of chance, which states that the probability of two or more independent events occurring together is the product of their separate probabilities. As Jeffreys discovered, the likelihood of two unrelated people having completely identical DNA fingerprints is extremely small: less than one in ten trillion. Given the population of the world, it is clear that the technique can distinguish any one person from everyone else. Jeffreys called his band patterns "DNA fingerprints" because of their ability to individualize. As he stated in his landmark research paper, published in the English scientific journal Nature in 1985, probes to minisatellite regions of human DNA produce "DNA 'fingerprints' which are completely specific to an individual (or to his or her identical twin) and can be applied directly to problems of human identification, including parenthood testing."

Consequences

In addition to being used in human identification, DNA fingerprinting has found applications in medical genetics. In the search for a cause of, a diagnostic test for, and ultimately a treatment of an inherited disease, it is necessary to locate the defective gene on a human chromosome. Gene location is accomplished by a technique called "linkage analysis," in which geneticists use marker sections of DNA as reference points to pinpoint the position of a defective gene on a chromosome. The minisatellite DNA probes developed by Jeffreys provide a potent set of markers for locating disease-causing genes. Soon after its discovery, DNA fingerprinting was used to locate the defective genes responsible for several diseases, including fetal hemoglobin abnormality and Huntington's disease.

Genetic fingerprinting has also had a major impact on genetic studies of higher animals. Because DNA sequences are conserved in evolution, humans and other vertebrates have many sequences in common. This commonality enabled Jeffreys to use his probes to human minisatellites to bind to the DNA of many different vertebrates, ranging from mammals to birds, reptiles, amphibians, and fish, and thus to produce DNA fingerprints of these vertebrates. In addition, the technique has been used to discern the mating behavior of birds, to determine paternity in zoo primates, and to detect inbreeding in imperiled wildlife. DNA fingerprinting can also be applied to animal breeding problems, such as the identification of stolen animals, the verification of semen samples for artificial insemination, and the determination of pedigree. The technique is not foolproof, however, and results may be far from ideal.
Especially in the area of forensic science, there was a rush to use the tremendous power of DNA fingerprinting to identify a purported murderer or rapist, and the need for scientific standards was often neglected. Some problems arose because forensic DNA fingerprinting in the United States is generally conducted in private, unregulated laboratories. In the absence of rigorous scientific controls, the DNA fingerprint bands of two completely unknown samples cannot be matched precisely, and the results may be unreliable.
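
To make the multiplication rule described above concrete, here is a minimal sketch in Python; the per-probe match probabilities are invented round numbers for illustration, not values measured by Jeffreys:

```python
# Illustrative only: combining independent per-probe match probabilities.
# The individual probabilities below are made-up round numbers, not data
# from Jeffreys's 1985 paper.
per_probe_match_probability = [1e-7, 1e-7]   # chance two unrelated people match on each probe

combined = 1.0
for p in per_probe_match_probability:
    combined *= p          # independent events: multiply the separate probabilities

print(f"chance of a full match between unrelated people: {combined:.0e}")
# 1e-07 * 1e-07 = 1e-14, far rarer than one in ten trillion (1e13)
```

This is the sense in which a band pattern built from several independent minisatellite probes can single out one individual from the entire world population.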

Geiger counter

The invention: 

The first electronic device able to detect and measure radioactivity in atomic particles.

The people behind the invention: 

Hans Geiger (1882-1945), a German physicist
Ernest Rutherford (1871-1937), a British physicist
Sir John Sealy Edward Townsend (1868-1957), an Irish physicist
Sir William Crookes (1832-1919), an English physicist
Wilhelm Conrad Röntgen (1845-1923), a German physicist
Antoine-Henri Becquerel (1852-1908), a French physicist

Discovering Natural Radiation

When radioactivity was discovered and first studied, the work was done with rather simple devices. In the 1870's, Sir William Crookes learned how to create a very good vacuum in a glass tube. He placed electrodes in each end of the tube and studied the passage of electricity through the tube. This simple device became known as the "Crookes tube."

In 1895, Wilhelm Conrad Röntgen was experimenting with a Crookes tube. It was known that when electricity went through a Crookes tube, one end of the glass tube might glow. Certain mineral salts placed near the tube would also glow. In order to observe the glowing salts carefully, Röntgen had darkened the room and covered most of the Crookes tube with dark paper. Suddenly, a flash of light caught his eye. It came from a mineral sample placed some distance from the tube and shielded by the dark paper; yet when the tube was switched off, the mineral sample went dark. Experimenting further, Röntgen became convinced that some ray from the Crookes tube had penetrated the mineral and caused it to glow. Since light rays were blocked by the black paper, he called the mystery ray an "X ray," with "X" standing for unknown.

Antoine-Henri Becquerel heard of the discovery of X rays and, in February, 1896, set out to discover whether glowing minerals themselves emitted X rays. Some minerals, called "phosphorescent," begin to glow when activated by sunlight. Becquerel's experiment involved wrapping photographic film in black paper, setting various phosphorescent minerals on top, and leaving them in the sun. He soon learned that phosphorescent minerals containing uranium would expose the film. A series of cloudy days, however, brought a great surprise. Anxious to continue his experiments, Becquerel decided to develop film that had not been exposed to sunlight. He was astonished to discover that the film was deeply exposed. Some emanations must be coming from the uranium, he realized, and they had nothing to do with sunlight. Thus, natural radioactivity was discovered by accident with a simple piece of photographic film.

Rutherford and Geiger

Ernest Rutherford joined the world of international physics at about the same time that radioactivity was discovered. Studying the "Becquerel rays" emitted by uranium, Rutherford eventually distinguished three different types of radiation, which he named "alpha," "beta," and "gamma" after the first three letters of the Greek alphabet. He showed that alpha particles, the least penetrating of the three, are the nuclei of helium atoms (two protons and two neutrons tightly bound together). It was later shown that beta particles are electrons. Gamma rays, which are far more penetrating than either alpha or beta particles, were shown to be similar to X rays, but with higher energies.

Rutherford became director of the physics research laboratory at Manchester University in 1907, and Hans Geiger became his assistant. At this time, Rutherford was trying to prove that alpha particles carry a double positive charge.
The best way to do this was to measure the electric charge that a stream of alpha particles would bring to a target. By dividing that charge by the total number of alpha particles that fell on the target, one could calculate the charge of a single alpha particle. The problem lay in counting the particles and in proving that every particle had been counted.

Basing their design upon work done by Sir John Sealy Edward Townsend, a former colleague of Rutherford, Geiger and Rutherford constructed an electronic counter. It consisted of a long brass tube, sealed at both ends, from which most of the air had been pumped. A thin wire, insulated from the brass, was suspended down the middle of the tube. This wire was connected to batteries producing about thirteen hundred volts and to an electrometer, a device that could measure the voltage of the wire. This voltage could be increased until a spark jumped between the wire and the tube. If the voltage was then turned down a little, the tube was ready to operate. An alpha particle entering the tube would ionize (knock some electrons away from) at least a few atoms. These electrons would be accelerated by the high voltage and, in turn, would ionize more atoms, freeing more electrons. This process would continue until an avalanche of electrons struck the central wire and the electrometer registered the voltage change. Since the tube was nearly ready to arc because of the high voltage, every alpha particle, even if it had very little energy, would initiate a discharge. The most complex of the early radiation detection devices, the forerunner of the Geiger counter, had just been developed. The two physicists reported their findings in February, 1908.

Impact

Their first measurements showed that one gram of radium emitted 34 thousand million alpha particles per second. Soon, the number was refined to 32.8 thousand million per second. Next, Geiger and Rutherford measured the amount of charge emitted by radium each second. Dividing this number by the previous number gave them the charge on a single alpha particle. Just as Rutherford had anticipated, the charge was double that of a hydrogen ion (a proton). This proved to be the most accurate determination of the fundamental charge until the American physicist Robert Andrews Millikan conducted his classic oil-drop experiment in 1911.

Another fundamental result came from a careful measurement of the volume of helium emitted by radium each second. Using that value, other properties of gases, and the number of helium nuclei emitted each second, they were able to calculate Avogadro's number more directly and accurately than had previously been possible. (Avogadro's number enables one to calculate the number of atoms in a given amount of material.)

The true Geiger counter evolved when Geiger replaced the central wire of the tube with a needle whose point lay just inside a thin entrance window. This counter was much more sensitive to alpha and beta particles and also to gamma rays. By 1928, with the assistance of Walther Müller, Geiger made his counter much more efficient, responsive, durable, and portable. There are probably few radiation facilities in the world that do not have at least one Geiger counter or one of its compact modern relatives.
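
The arithmetic behind the charge measurement described above is simple enough to sketch. The numbers below combine the particle count quoted in the text with the modern value of the elementary charge, so the resulting current is an illustrative estimate rather than a figure reported by Geiger and Rutherford:

```python
# Illustrative arithmetic for the charge measurement described above.
# Assumes the refined count of 32.8 thousand million alpha particles per
# second and the modern elementary charge; the resulting current is an
# estimate for illustration, not a number from the 1908 paper.
alphas_per_second = 32.8e9                 # particles emitted by 1 g of radium per second
elementary_charge = 1.602e-19              # coulombs (charge of a proton)
charge_per_alpha = 2 * elementary_charge   # alpha particles carry a double positive charge

current = alphas_per_second * charge_per_alpha
print(f"charge per alpha particle: {charge_per_alpha:.3e} C")
print(f"total current carried by the alpha stream: {current:.1e} A")   # roughly 1e-8 A
```

Measuring that tiny total current and dividing it by the counted particle rate is exactly the division described in the paragraph above.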

26 June 2009

Gas-electric car


The invention: 

A hybrid automobile with both an internal combustion engine and an electric motor.

The people behind the invention: 

Victor Wouk - an American engineer
Tom Elliott - executive vice president of American Honda Motor Company
Hiroyuki Yoshino - president and chief executive officer of Honda Motor Company
Fujio Cho - president of Toyota Motor Corporation

23 June 2009

Fuel cell

The invention: 

An electrochemical cell that directly converts energy from reactions between oxidants and fuels, such as liquid hydrogen, into electrical energy.

The people behind the invention: 

Francis Thomas Bacon (1904-1992), an English engineer
Sir William Robert Grove (1811-1896), an English inventor
Georges Leclanché (1839-1882), a French engineer
Alessandro Volta (1745-1827), an Italian physicist

The Earth's Resources

Because of the earth's rapidly increasing population and the dwindling of fossil fuels (natural gas, coal, and petroleum), there is a need to design and develop new ways to obtain energy and to encourage its intelligent use. The burning of fossil fuels to create energy causes a slow buildup of carbon dioxide in the atmosphere, creating pollution that poses many problems for all forms of life on this planet. Chemical and electrical studies can be combined to create electrochemical processes that yield clean energy. Because of their very high efficiency and their nonpolluting nature, fuel cells may provide the solution to the problem of finding sufficient energy sources for humans. The simple reaction of hydrogen and oxygen to form water in such a cell can provide an enormous amount of clean (nonpolluting) energy. Moreover, hydrogen and oxygen are readily available.

Studies by Alessandro Volta, Georges Leclanché, and William Grove preceded the work of Bacon in the development of the fuel cell. Bacon became interested in the idea of a hydrogen-oxygen fuel cell in about 1932. His original intent was to develop a fuel cell that could be used in commercial applications.

The Fuel Cell Emerges

In 1800, the Italian physicist Alessandro Volta experimented with solutions of chemicals and metals that were able to conduct electricity. He found that two pieces of metal and such a solution could be arranged in such a way as to produce an electric current. His creation was the first electrochemical battery, a device that produced energy from a chemical reaction. Studies in this area were continued by various researchers, and in the late nineteenth century, Georges Leclanché invented the dry cell battery, which is now commonly used.

William Grove's work came decades earlier. His first significant contribution was the Grove cell, an improved form of the cells described above, which became very popular. Grove experimented with various forms of batteries and eventually invented the "gas battery," which was actually the earliest fuel cell. It is worth noting that his design incorporated separate test tubes of hydrogen and oxygen, which he placed over strips of platinum.

After studying the design of Grove's fuel cell, Bacon decided that, for practical purposes, the use of platinum and other precious metals should be avoided. By 1939, he had constructed a cell in which nickel replaced the platinum.

The theory behind the fuel cell can be described in the following way. If a mixture of hydrogen and oxygen is ignited, energy is released in the form of a violent explosion. In a fuel cell, however, the reaction takes place in a controlled manner. Electrons lost by the hydrogen gas flow out of the fuel cell and return to be taken up by the oxygen in the cell. The electron flow provides electricity to any device that is connected to the fuel cell, and the water that the fuel cell produces can be purified and used for drinking.

Bacon's studies were interrupted by World War II. After the war was over, however, Bacon continued his work.
Sir Eric Keightley Rideal of Cambridge University in England supported Bacon's studies; later, others followed suit. In January, 1954, Bacon wrote an article entitled "Research into the Properties of the Hydrogen/Oxygen Fuel Cell" for a British journal. He was surprised at the speed with which news of the article spread throughout the scientific world, particularly in the United States. After a series of setbacks, Bacon demonstrated a forty-cell unit that had increased power. This advance showed that the fuel cell was not merely an interesting toy; it had the capacity to do useful work. At this point, the General Electric Company (GE), an American corporation, sent a representative to England to offer employment in the United States to senior members of Bacon's staff. Three scientists accepted the offer.

A high point in Bacon's career was the announcement that the American Pratt and Whitney Aircraft company had obtained an order to build fuel cells for the Apollo project, which ultimately put two men on the Moon in 1969. Toward the end of his career, in 1978, Bacon hoped that commercial applications for his fuel cells would soon be found.

Impact

Because they are lighter and more efficient than batteries, fuel cells have proved to be useful in the space program. Beginning with the Gemini 5 spacecraft, alkaline fuel cells (which use a water solution of potassium hydroxide, a basic, or alkaline, chemical, as the electrolyte) have been used for more than ten thousand hours in space. The fuel cells used aboard the space shuttle deliver the same amount of power as batteries weighing ten times as much. On a typical seven-day mission, the shuttle's fuel cells consume 680 kilograms (1,500 pounds) of hydrogen and generate 719 liters (190 gallons) of water that can be used for drinking.

Major technical and economic problems must be overcome in order to design fuel cells for practical applications, but some important advances have been made. A few test vehicles that use fuel cells as a source of power have been constructed. Fuel cells using hydrogen as a fuel and oxygen to burn the fuel have been used in a van built by General Motors Corporation: thirty-two fuel cells are installed below the floorboards, and tanks of liquid oxygen are carried in the back of the van. A power plant built in New York City contains stacks of hydrogen-oxygen fuel cells, which can be put on line quickly in response to power needs. The Sanyo Electric Company has developed an electric car that is partially powered by a fuel cell.

These tremendous technical advances are the result of the single-minded dedication of Francis Thomas Bacon, who struggled all of his life with an experiment he was convinced would be successful.
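
The controlled hydrogen-oxygen reaction described above can be summarized by its half-reactions. The equations below give the standard chemistry of an alkaline (potassium hydroxide) cell of the kind mentioned in connection with the space program; they are a general aside, not equations taken from Bacon's papers:

```latex
% Half-reactions in an alkaline hydrogen-oxygen fuel cell
\begin{align*}
\text{Fuel electrode (anode):}     \quad & 2\,\mathrm{H_2} + 4\,\mathrm{OH^-} \longrightarrow 4\,\mathrm{H_2O} + 4\,e^- \\
\text{Oxygen electrode (cathode):} \quad & \mathrm{O_2} + 2\,\mathrm{H_2O} + 4\,e^- \longrightarrow 4\,\mathrm{OH^-} \\
\text{Overall:}                    \quad & 2\,\mathrm{H_2} + \mathrm{O_2} \longrightarrow 2\,\mathrm{H_2O}
\end{align*}
```

The electrons released at the fuel electrode are the current that flows through the external circuit before being taken up at the oxygen electrode, and the only chemical product is water.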

Freeze-drying

The invention: 

Method for preserving foods and other organic matter by freezing them and using a vacuum to remove their water content without damaging their solid matter.

The people behind the invention:

Earl W. Flosdorf (1904- ), an American physician
Ronald I. N. Greaves (1908- ), an English pathologist
Jacques Arsène d’Arsonval (1851-1940), a French physicist

FORTRAN programming language

The invention: 

The first major computer programming language, FORTRAN supported programming in a mathematical language that was natural to scientists and engineers and achieved unsurpassed success in scientific computation.

The people behind the invention: 

John Backus (1924- ), an American software engineer and manager
John W. Mauchly (1907-1980), an American physicist and engineer
Herman Heine Goldstine (1913- ), a mathematician and computer scientist
John von Neumann (1903-1957), a Hungarian American mathematician and physicist

Talking to Machines

Formula Translation, or FORTRAN, the first widely accepted high-level computer language, was completed by John Backus and his coworkers at the International Business Machines (IBM) Corporation in April, 1957. Designed to support programming in a mathematical language that was natural to scientists and engineers, FORTRAN achieved unsurpassed success in scientific computation.

Computer languages are means of specifying the instructions that a computer should execute and the order of those instructions. Computer languages can be divided into categories of progressively higher degrees of abstraction. At the lowest level is binary code, or machine code: binary digits, or "bits," specify in complete detail every instruction that the machine will execute. This was the only language available in the early days of computers, when such machines as the ENIAC (Electronic Numerical Integrator and Calculator) required hand-operated switches and plugboard connections. All higher levels of language are implemented by having a program translate instructions written in the higher-level language into binary machine language (also called "object code"). High-level languages (also called "programming languages") are largely or entirely independent of the underlying machine structure. FORTRAN was the first language of this type to win widespread acceptance.

The emergence of machine-independent programming languages was a gradual process that spanned the first decade of electronic computation. One of the earliest developments was the invention of "flowcharts," or "flow diagrams," by Herman Heine Goldstine and John von Neumann in 1947. Flowcharting became the most influential software methodology during the first twenty years of computing.

Short Code was the first language to be implemented that contained some high-level features, such as the ability to use mathematical equations. The idea came from John W. Mauchly, and it was implemented on the BINAC (Binary Automatic Computer) in 1949 with an "interpreter"; later, it was carried over to the UNIVAC (Universal Automatic Computer) I. Interpreters are programs that do not translate commands into a series of object-code instructions; instead, they directly execute (interpret) those commands. Every time the interpreter encounters a command, that command must be interpreted again. "Compilers," however, convert the entire command into object code before it is executed.

Much early effort went into creating ways to handle commonly encountered problems, particularly scientific mathematical calculations. A number of interpretive languages arose to support these features. As long as such complex operations had to be performed by software (computer programs), however, scientific computation would be relatively slow. Therefore, Backus lobbied successfully for a direct hardware implementation of these operations on IBM's new scientific computer, the 704.
Backus then started the Programming Research Group at IBM in order to develop a compiler that would allow programs to be written in a mathematically oriented language rather than a machine-oriented language. In November of 1954, the group defined an initial version of FORTRAN.

A More Accessible Language

Before FORTRAN was developed, a computer had to perform a whole series of tasks to make certain types of mathematical calculations. FORTRAN made it possible for the same calculations to be performed much more easily. In general, FORTRAN supported constructs with which scientists were already acquainted, such as functions and multidimensional arrays. In defining a powerful notation that was accessible to scientists and engineers, FORTRAN opened up programming to a much wider community.

Backus's success in getting the IBM 704's hardware to support scientific computation directly, however, posed a major challenge: because such computation would be much faster, the object code produced by FORTRAN would also have to be much faster. The compilers that preceded FORTRAN produced programs that were usually five to ten times slower than their hand-coded counterparts; therefore, efficiency became the primary design objective for Backus. The highly publicized claims for FORTRAN met with widespread skepticism among programmers, so much of the team's effort went into discovering ways to produce the most efficient object code.

The efficiency of the compiler produced by Backus, combined with its clarity and ease of use, guaranteed the system's success. By 1959, many IBM 704 users programmed exclusively in FORTRAN. By 1963, virtually every computer manufacturer either had delivered or had promised a version of FORTRAN. Incompatibilities among manufacturers were minimized by the popularity of IBM's version of FORTRAN; every company wanted to be able to support IBM programs on its own equipment. Nevertheless, there was sufficient interest in obtaining a standard for FORTRAN that the American National Standards Institute adopted a formal standard for it in 1966. A revised standard was adopted in 1978, yielding FORTRAN 77.

Consequences

In demonstrating the feasibility of efficient high-level languages, FORTRAN inaugurated a period of great proliferation of programming languages. Most of these languages attempted to provide similar or better high-level programming constructs oriented toward a different, nonscientific programming environment. COBOL, for example, stands for "Common Business Oriented Language."

FORTRAN, while remaining the dominant language for scientific programming, has not found general acceptance among nonscientists. An IBM project established in 1963 to extend FORTRAN found the task too unwieldy and instead ended up producing an entirely different language, PL/I, which was delivered in 1966.

In the beginning, Backus and his coworkers believed that their revolutionary language would virtually eliminate the burdens of coding and debugging. Instead, FORTRAN launched software as a field of study and an industry in its own right. In addition to stimulating the introduction of new languages, FORTRAN encouraged the development of operating systems. Programming languages had already grown into simple operating systems called "monitors." Operating systems since then have been greatly improved so that they support, for example, simultaneously active programs (multiprogramming) and the networking (combining) of multiple computers.
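
As a rough illustration of the interpreter/compiler distinction drawn above, here is a minimal sketch in Python (a modern stand-in, not FORTRAN or the IBM 704 tool chain); the formula string and helper names are invented for the example:

```python
# A minimal sketch contrasting interpretation with compilation, using
# Python's built-in compile/eval as a stand-in for the ideas in the text.

formula = "3.0 * x**2 + 2.0 * x + 1.0"   # a hypothetical scientific formula

def interpret(formula, x):
    # Interpreter-style: the text of the command is re-parsed on every evaluation.
    return eval(formula, {"x": x})

# Compiler-style: translate the text once into a reusable code object
# (playing the role of the article's "object code"), then reuse it.
code = compile(formula, "<formula>", "eval")

def run_compiled(code, x):
    return eval(code, {"x": x})

for x in (0.0, 1.0, 2.0):
    assert interpret(formula, x) == run_compiled(code, x)
```

The point is only that translating the formula once and reusing the translated form is what lets compiled programs approach the speed of hand-coded machine language.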

21 June 2009

Food freezing

The invention: 

It was long known that low temperatures help to protect food against spoiling; the invention that made frozen food practical was a method of freezing items quickly. Clarence Birdseye's quick-freezing technique made possible a revolution in food preparation, storage, and distribution.

The people behind the invention: 

Clarence Birdseye (1886-1956), a scientist and inventor
Donald K. Tressler (1894-1981), a researcher at Cornell University
Amanda Theodosia Jones (1835-1914), a food-preservation pioneer

Feeding the Family

In 1917, Clarence Birdseye developed a means of quick-freezing meat, fish, vegetables, and fruit without substantially changing their original taste. His system of freezing was called by Fortune magazine "one of the most exciting and revolutionary ideas in the history of food." Birdseye went on to refine and perfect his method and to promote the frozen foods industry until it became a commercial success nationwide.

It was during a trip to Labrador, where he worked as a fur trader, that Birdseye first hit upon the idea. Birdseye's new wife and five-week-old baby had accompanied him there. In order to keep his family well fed, he placed barrels of fresh cabbages in salt water and then exposed the vegetables to freezing winds. Successful at preserving vegetables, he went on to freeze a winter's supply of ducks, caribou, and rabbit meat.

In the following years, Birdseye experimented with many freezing techniques. His equipment was crude: an electric fan, ice, and salt water. His earliest experiments were on fish and rabbits, which he froze and packed in old candy boxes. By 1924, he had borrowed money against his life insurance and was lucky enough to find three partners willing to invest in his new General Seafoods Company (later renamed General Foods), located in Gloucester, Massachusetts.

Although it was Birdseye's genius that put the principles of quick-freezing to work, he did not actually invent quick-freezing. The scientific principles involved had been known for some time. As early as 1842, a patent for freezing fish had been issued in England. Nevertheless, the commercial exploitation of the freezing process could not have happened until the end of the 1800's, when mechanical refrigeration was invented. Even then, Birdseye had to overcome major obstacles.

Finding a Niche

By the 1920's, there were still few mechanical refrigerators in American homes. It would take years before adequate facilities for food freezing and retail distribution would be established across the United States. By the late 1930's, frozen food had indeed found a role in commerce but still could not compete with canned or fresh foods. Birdseye had to work tirelessly to promote the industry, writing and delivering numerous lectures and articles to advance its popularity. His efforts were helped by scientific research conducted at Cornell University by Donald K. Tressler and by C. R. Fellers of what was then Massachusetts State College. Also, during World War II (1939-1945), more Americans began to accept the idea: rationing, combined with a shortage of canned foods, contributed to the demand for frozen foods. The armed forces made large purchases of these items as well.

General Foods was the first to use a system of extremely rapid freezing of perishable foods in packages. Under the Birdseye system, fresh foods, such as berries or lobster, were packaged snugly in convenient square containers.
Then, the packages were pressed between refrigerated metal plates under pressure at 50 degrees below zero. Two types of freezing machines were used. The "double belt" freezer consisted of two metal belts that moved through a 15-meter freezing tunnel, while a special salt solution was sprayed on the surfaces of the belts. This double-belt freezer was used only in permanent installations and was soon replaced by the "multiplate" freezer, which was portable and required only 11.5 square meters of floor space compared to the double belt's 152 square meters.

The multiplate freezer also made it possible to apply the technique of quick-freezing to seasonal crops. People were able to transport these freezers easily from one harvesting field to another, where they were used to freeze crops such as peas fresh off the vine. The handy multiplate freezer consisted of an insulated cabinet equipped with refrigerated metal plates. Stacked one above the other, these plates could be opened and closed to receive food products and to compress them with evenly distributed pressure. Each aluminum plate had internal passages through which ammonia flowed and expanded at a temperature of -3.8 degrees Celsius, thus causing the foods to freeze.

A major benefit of the new frozen foods was that their taste and vitamin content were not lost. Ordinarily, when food is frozen slowly, ice crystals form and slowly rupture food cells, altering the taste of the food. With quick-freezing, however, the food looks, tastes, and smells like fresh food. Quick-freezing also cuts down on bacteria.

Impact

During the months between one food harvest and the next, humankind requires trillions of pounds of food to survive. In many parts of the world, an adequate supply of food is available; elsewhere, much food goes to waste and many people go hungry. Methods of food preservation such as those developed by Birdseye have done much to help those who cannot obtain proper fresh foods. Preserving perishable foods also means that they will be available in greater quantity and variety all year round. In all parts of the world, both tropical and arctic delicacies can be eaten in any season of the year.

With the rise in popularity of frozen "fast" foods, nutritionists began to study their effect on the human body. Research has shown that fresh food is the most beneficial. In an industrial nation with many people, however, the distribution of fresh commodities is difficult. It may be many decades before scientists know the long-term effects on generations raised primarily on frozen foods.

FM radio

The invention: 

A method of broadcasting radio signals by modulating the frequency, rather than the amplitude, of radio waves, FM radio greatly improved the quality of sound transmission.

The people behind the invention: 

Edwin H. Armstrong (1890-1954), the inventor of FM radio broadcasting
David Sarnoff (1891-1971), the founder of RCA

An Entirely New System

Because early radio broadcasts used amplitude modulation (AM) to transmit their sounds, they were subject to a sizable amount of interference and static. Since good AM reception relies on the amount of energy transmitted, energy sources in the atmosphere between the station and the receiver can distort or weaken the original signal. This is particularly irritating for the transmission of music. Edwin H. Armstrong provided a solution to this technological constraint.

A graduate of Columbia University, Armstrong made a significant contribution to the development of radio with his basic inventions for circuits for AM receivers. (Indeed, the money Armstrong received from his earlier inventions financed the development of the frequency modulation, or FM, system.) Armstrong was one among many contributors to AM radio. For FM broadcasting, however, Armstrong must be ranked as the most important inventor.

During the 1920's, Armstrong established his own research laboratory in Alpine, New Jersey, across the Hudson River from New York City. With a small staff of dedicated assistants, he carried out research on radio circuitry and systems for nearly three decades. At that time, Armstrong also began to teach electrical engineering at Columbia University. From 1928 to 1933, Armstrong worked diligently at his private laboratory at Columbia University to construct a working model of an FM radio broadcasting system. With the primitive limitations then imposed on the state of vacuum tube technology, a number of Armstrong's experimental circuits required as many as one hundred tubes. Between July, 1930, and January, 1933, Armstrong filed four basic FM patent applications. All were granted simultaneously on December 26, 1933.

Armstrong sought to perfect FM radio broadcasting not to offer radio listeners better musical reception but to create an entirely new radio broadcasting system. On November 5, 1935, Armstrong made his first public demonstration of FM broadcasting in New York City to an audience of radio engineers. An amateur station based in suburban Yonkers, New York, transmitted these first signals. The scientific world began to consider the advantages and disadvantages of Armstrong's system; other laboratories began to craft their own FM systems.

Corporate Conniving

Because Armstrong had no desire to become a manufacturer or broadcaster, he approached David Sarnoff, head of the Radio Corporation of America (RCA). As the owner of the top manufacturer of radio sets and the top radio broadcasting network, Sarnoff was interested in all advances of radio technology. Armstrong first demonstrated FM radio broadcasting for Sarnoff in December, 1933. This was followed by visits from RCA engineers, who were sufficiently impressed to recommend to Sarnoff that the company conduct field tests of the Armstrong system.

In 1934, Armstrong, with the cooperation of RCA, set up a test transmitter at the top of the Empire State Building, sharing facilities with the experimental RCA television transmitter. From 1934 through 1935, tests were conducted using the Empire State facility, to mixed reactions from RCA's best engineers.
AM radio broadcasting already had a performance record of nearly two decades, and the engineers wondered whether this new technology could replace something that had worked so well. This less-than-enthusiastic evaluation fueled the skepticism of RCA lawyers and salespeople. RCA had too much invested in the AM system, both as a leading manufacturer and as the dominant owner of the major radio network of the time, the National Broadcasting Company (NBC). Sarnoff was in no rush to adopt FM. To change systems would risk the millions of dollars RCA was making as America emerged from the Great Depression.

In 1935, Sarnoff advised Armstrong that RCA would cease any further research and development activity in FM radio broadcasting. (Still, engineers at RCA laboratories continued to work on FM to protect the corporate patent position.) Sarnoff declared to the press that his company would push the frontiers of broadcasting by concentrating on research and development of radio with pictures, that is, television. As a tangible sign, Sarnoff ordered that Armstrong's FM radio broadcasting tower be removed from the top of the Empire State Building. Armstrong was outraged. By the mid-1930's, the development of FM radio broadcasting had become a mission for Armstrong, and for the remainder of his life he devoted his considerable talents to its promotion.

Impact

After the break with Sarnoff, Armstrong proceeded with plans to develop his own FM operation. Allied with two of RCA's biggest manufacturing competitors, Zenith and General Electric, Armstrong pressed ahead. In June of 1936, at a Federal Communications Commission (FCC) hearing, Armstrong proclaimed that FM broadcasting was the only static-free, noise-free, and uniform system, both day and night, available, and he argued, correctly, that AM radio broadcasting had none of these qualities.

During World War II (1939-1945), Armstrong gave the military permission to use FM with no compensation. That patriotic gesture cost Armstrong millions of dollars when the military soon became all FM, but it expanded interest in FM radio broadcasting, and the war provided a field test of FM equipment and use.

By the 1970's, FM radio broadcasting had grown tremendously. By 1972, one in three radio listeners tuned into an FM station some time during the day. Advertisers began to use FM radio stations to reach the young and affluent audiences that were turning to FM stations in greater numbers. By the late 1970's, FM radio stations were outnumbering AM stations, and by 1980, nearly half of radio listeners tuned into FM stations on a regular basis. A decade later, FM radio listening accounted for more than two-thirds of audience time. Armstrong's prediction that listeners would prefer the clear, static-free sound offered by FM radio broadcasting had come to pass by the mid-1980's, nearly fifty years after Armstrong had begun his struggle to make FM radio broadcasting a part of commercial radio.
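
A minimal numerical sketch of the difference between the two modulation schemes discussed above, written in Python with NumPy; the carrier, tone, and deviation values are illustrative and are not actual broadcast parameters:

```python
import numpy as np

# Illustrative comparison of amplitude and frequency modulation of a carrier.
fs = 100_000                            # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)          # 10 ms of signal
fc = 10_000                             # carrier frequency, Hz
message = np.sin(2 * np.pi * 440 * t)   # a 440 Hz "audio" tone

# AM: the message rides on the carrier's amplitude, so additive noise in the
# channel directly distorts the recovered audio.
am = (1 + 0.5 * message) * np.cos(2 * np.pi * fc * t)

# FM: the message varies the carrier's instantaneous frequency; the amplitude
# stays constant, which is why static (an amplitude disturbance) can be
# clipped away by the receiver before the audio is recovered.
freq_dev = 2_000                                   # peak frequency deviation, Hz
phase = 2 * np.pi * freq_dev * np.cumsum(message) / fs   # integral of the message
fm = np.cos(2 * np.pi * fc * t + phase)
```

The constant-envelope property of the FM waveform is the technical basis for Armstrong's claim that his system was static-free.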

Fluorescent lighting

The invention: 

A form of electrical lighting in which a mercury discharge inside a phosphor-coated glass tube gives off a cool bluish light and emits ultraviolet radiation that the phosphor converts into visible light.

The people behind the invention: 

Vincenzo Cascariolo (1571-1624), an Italian alchemist and shoemaker
Heinrich Geissler (1814-1879), a German glassblower
Peter Cooper Hewitt (1861-1921), an American electrical engineer

Celebrating the "Twelve Greatest Inventors"

On the night of November 23, 1936, more than one thousand industrialists, patent attorneys, and scientists assembled in the main ballroom of the Mayflower Hotel in Washington, D.C., to celebrate the one hundredth anniversary of the U.S. Patent Office. A transport liner over the city radioed the names chosen by the Patent Office as America's "Twelve Greatest Inventors," and, as the distinguished group strained to hear those names, "the room was flooded for a moment by the most brilliant light yet used to illuminate a space that size." Thus did The New York Times summarize the commercial introduction of the fluorescent lamp. The inventors named included Thomas Alva Edison, Robert Fulton, Charles Goodyear, Charles Hall, Elias Howe, Cyrus Hall McCormick, Ottmar Mergenthaler, Samuel F. B. Morse, George Westinghouse, Wilbur Wright, and Eli Whitney. There was, however, no name to bear the honor for inventing fluorescent lighting. That honor is shared by the many people who participated in a very long series of discoveries.

The fluorescent lamp operates as a low-pressure electric discharge inside a glass tube that contains a droplet of mercury and a gas, commonly argon. The inside of the glass tube is coated with fine particles of phosphor. When electricity is applied to the gas, the mercury gives off a bluish light and emits ultraviolet radiation. When bathed in the strong ultraviolet radiation emitted by the mercury, the phosphor fluoresces (emits light).

The setting for the introduction of the fluorescent lamp began at the beginning of the 1600's, when Vincenzo Cascariolo, an Italian shoemaker and alchemist, discovered a substance that gave off a bluish glow in the dark after exposure to strong sunlight. The fluorescent substance was apparently barium sulfide and was so unusual for that time and so valuable that its formulation was kept secret for a long time. Gradually, however, scholars became aware of the preparation secrets of the substance and studied it and other luminescent materials.

Further studies in luminescence were made by the German physicist Johann Wilhelm Ritter. He observed the luminescence of phosphors that were exposed to various "exciting" lights. In 1801, he noted that some phosphors shone brightly when illuminated by light that the eye could not see (ultraviolet light). Ritter thus discovered the ultraviolet region of the light spectrum. The use of phosphors to transform ultraviolet light into visible light was an important step in the continuing development of the fluorescent lamp.
The British mathematician and physicist Sir George Gabriel Stokes studied the phenomenon as well. It was he who, in 1852, termed the afterglow "fluorescence."

Geissler Tubes

While these advances were being made, other workers were trying to produce a practical form of electric light. In 1706, the English physicist Francis Hauksbee devised an electrostatic generator, which is used to accelerate charged particles to very high levels of electrical energy. He then connected the device to a glass "jar," used a vacuum pump to evacuate the jar to a low pressure, and tested his generator. In so doing, Hauksbee obtained the first human-made electrical glow discharge by "capturing lightning" in a jar.

In 1854, Heinrich Geissler, a glassblower and apparatus maker, opened his shop in Bonn, Germany, to make scientific instruments; in 1855, he produced a vacuum pump that used liquid mercury as an evacuation fluid. That same year, Geissler made the first gaseous conduction lamps while working in collaboration with the German scientist Julius Plücker, who referred to these lamps as "Geissler tubes." Geissler was able to create red light with neon gas filling a lamp and light of nearly all colors by using certain other gases within the lamps. Thus, both the neon sign business and the science of spectroscopy were born.

Geissler tubes were studied extensively by a variety of workers. At the beginning of the twentieth century, the practical American engineer Peter Cooper Hewitt put these studies to use by marketing the first low-pressure mercury vapor lamps. The lamps were quite successful, although they required high voltage for operation, emitted an eerie blue-green light, and shone dimly in comparison with their eventual successor, the fluorescent lamp. At about the same time, systematic studies of phosphors had finally begun. By the 1920's, a number of investigators had discovered that the low-pressure mercury vapor discharge marketed by Hewitt was an extremely efficient source of ultraviolet light, if the mercury and rare gas pressures were properly adjusted. With a phosphor to convert the ultraviolet light back to visible light, the Hewitt lamp made an excellent light source.

Impact

The introduction of fluorescent lighting in 1936 presented the public with a completely new form of lighting that had the enormous advantages of high efficiency, long life, and relatively low cost. By 1938, production of fluorescent lamps was well under way. By April, 1938, four sizes of fluorescent lamps in various colors had been offered to the public, and more than two hundred thousand lamps had been sold.

During 1939 and 1940, two great expositions, the New York World's Fair and the San Francisco International Exposition, helped popularize fluorescent lighting. Thousands of tubular fluorescent lamps formed a great spiral in the "motor display salon," the car showroom of the General Motors exhibit at the New York World's Fair. Fluorescent lamps lit the Polish Restaurant and hung in vertical clusters on the flagpoles along the Avenue of the Flags at the fair, while two-meter-long, upright fluorescent tubes illuminated buildings at the San Francisco International Exposition.

When the United States entered World War II (1939-1945), the demand for efficient factory lighting soared. In 1941, more than twenty-one million fluorescent lamps were sold. Technical advances continued to improve the fluorescent lamp, and by the 1990's this type of lamp supplied most of the world's artificial lighting.

20 June 2009

Floppy disk

The invention: 

An inexpensive magnetic medium for storing and moving computer data.

The people behind the invention: 

Andrew D. Booth (1918- ), an English inventor who developed paper disks as a storage medium
Reynold B. Johnson (1906-1998), a design engineer at IBM's research facility who oversaw development of magnetic disk storage devices
Alan Shugart (1930- ), an engineer at IBM's research laboratory who first developed the floppy disk as a means of mass storage for mainframe computers

First Tries

When the International Business Machines (IBM) Corporation decided to concentrate on the development of computers for business use in the 1950's, it faced a problem that had troubled the earliest computer designers: how to store data reliably and inexpensively. In the early days of computers (the early 1940's), a number of ideas were tried. The English inventor Andrew D. Booth produced spinning paper disks on which he stored data by means of punched holes, only to abandon the idea because of the insurmountable engineering problems he foresaw.

The next step was "punched" cards, an idea first used when the French inventor Joseph-Marie Jacquard invented an automatic weaving loom for which patterns were stored in pasteboard cards. The idea was refined by the English mathematician and inventor Charles Babbage for use in his "analytical engine," an attempt to build a kind of computing machine. Although punched cards were simple and reliable, they were not fast enough, nor did they store enough data, to be truly practical.

The Ampex Corporation demonstrated its first magnetic audiotape recorder after World War II (1939-1945). Shortly after that, the Binary Automatic Computer (BINAC) was introduced with a storage device that appeared to be a large tape recorder. A more advanced machine, the Universal Automatic Computer (UNIVAC), used metal tape instead of plastic (plastic was easily stretched or even broken). Unfortunately, metal tape was considerably heavier, and its edges were razor-sharp and thus dangerous. Improvements in plastic tape eventually produced sturdy media, and magnetic tape became (and remains) a practical medium for storage of computer data.

Still later designs combined Booth's idea of spinning disks with magnetic technology to produce rapidly rotating "drums." Whereas a tape might have to be fast-forwarded nearly to its end to locate a specific piece of data, a drum rotating at speeds up to 12,500 revolutions per minute (rpm) could retrieve data very quickly and could store more than 1 million bits (approximately 125 kilobytes) of data.

In May, 1955, these drums evolved, under the direction of Reynold B. Johnson, into IBM's hard disk unit. The hard disk unit consisted of fifty platters, each 2 feet in diameter, rotating at 1,200 rpm. Both sides of each platter could be used to store information. When the operator wished to access the disk, a read/write head was moved, at his or her command, to the correct disk and to the side of the disk that held the desired data. The operator could then read data from or record data onto the disk. To speed things even more, the next version of the device, similar in design, employed one hundred read/write heads, one for each of its fifty double-sided disks. The only remaining disadvantage was its size, which earned IBM's first commercial unit the nickname "jukebox."

The First Floppy

The floppy disk drive developed directly from hard disk technology.
It did not take shape until the late 1960's under the direction of Alan Shugart (it was announced by IBM as a ready product in 1970). First created to help restart the operating systems of mainframe computers that had gone dead, the floppy seemed in some ways to be a step back, for it operated more slowly than a hard disk drive and did not store as much data. Initially, it consisted of a single thin plastic disk eight inches in diameter and was developed without the protective envelope in which it is now universally encased. The addition of that jacket gave the floppy its single greatest advantage over the hard disk: portability with reliability.

Another advantage soon became apparent: the floppy is resilient to damage. In a hard disk drive, the read/write heads must hover thousandths of a centimeter over the disk surface in order to attain maximum performance. Should even a small particle of dust get in the way, or should the drive unit be bumped too hard, the head may "crash" into the surface of the disk and ruin its magnetic coating; the result is a permanent loss of data. Because the floppy operates with the read/write head in contact with the flexible plastic disk surface, individual particles of dust or other contaminants are not nearly as likely to cause disaster.

As a result of its advantages, the floppy disk was the logical choice for mass storage in personal computers (PCs), which were developed a few years after the floppy disk's introduction. The floppy is still an important storage device even though hard disk drives for PCs have become less expensive, and manufacturers continually develop new floppy formats and new floppy disks that can hold more data.

Consequences

Personal computing would have developed very differently were it not for the availability of inexpensive floppy disk drives. When IBM introduced its PC in 1981, the machine provided as standard equipment a connection for a cassette tape recorder as a storage device; a floppy disk drive was only an option (though an option few did not take). The awkwardness of tape drives, with their slow speed and sequential storage of data, presented clear obstacles to the acceptance of the personal computer as a basic information tool. By contrast, the floppy drive gave computer users relatively fast storage at low cost.

Floppy disks provided more than merely economical data storage. Since they are built to be removable (unlike hard drives), they represented a basic means of transferring data between machines. Indeed, prior to the popularization of local area networks (LANs), the floppy served as a "sneaker net": one merely carried the disk by foot to another computer. Floppy disks were long the primary means of distributing new software to users, and even the very flexible floppy proved quite resilient to the wear and tear of postal delivery. Later, the 3.5-inch disk improved upon the design of the original 8-inch and 5.25-inch floppies by protecting the disk medium within a hard plastic shell and by using a sliding metal door to protect the area where the read/write heads contact the disk.

By the late 1990's, floppy disks were giving way to new data-storage media, particularly CD-ROMs, durable laser-encoded disks that hold more than 700 megabytes of data. As the price of blank CDs dropped dramatically, floppy disks tended to be used mainly for short-term storage of small amounts of data.
Floppy disks were also being used less and less for data distribution and transfer, as computer users turned increasingly to sending files via e-mail on the Internet, and software providers made their products available for downloading on Web sites.
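
A small back-of-the-envelope calculation, using the rotation speeds quoted earlier for the magnetic drum and for IBM's first hard disk unit, shows why rotating storage retrieves data so much faster than tape; the half-revolution figure below is the average wait for a given piece of data to come around under the head:

```python
# Average rotational latency = time for half a revolution, using the
# rotation speeds quoted earlier in the article (drum: 12,500 rpm;
# IBM's fifty-platter hard disk unit: 1,200 rpm).
for name, rpm in [("magnetic drum", 12_500), ("IBM hard disk unit", 1_200)]:
    seconds_per_revolution = 60.0 / rpm
    average_latency_ms = 1000 * seconds_per_revolution / 2
    print(f"{name}: {average_latency_ms:.1f} ms average wait for data to rotate under the head")
# Roughly 2.4 ms for the drum and 25 ms for the hard disk, versus the seconds
# or minutes needed to wind a tape to the right spot.
```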

19 June 2009

Field ion microscope

The invention: 

A microscope that uses ions formed in high-voltage electric fields to view atoms on metal surfaces.

The people behind the invention: 

Erwin Wilhelm Müller (1911-1977), a physicist, engineer, and research professor
J. Robert Oppenheimer (1904-1967), an American physicist

To See Beneath the Surface

In the early twentieth century, developments in physics, especially quantum mechanics, paved the way for the application of new theoretical and experimental knowledge to the problem of viewing the atomic structure of metal surfaces. Of primary importance were the American physicist George Gamow's 1928 theoretical explanation of the field emission of electrons by quantum mechanical means and J. Robert Oppenheimer's 1928 prediction of the quantum mechanical ionization of hydrogen in a strong electric field.

In 1936, Erwin Wilhelm Müller developed his field emission microscope, the first in a series of instruments that would exploit these developments. It was the first instrument to view atomic structures directly, although not the individual atoms themselves. Müller's subsequent field ion microscope utilized the same basic concepts used in the field emission microscope yet proved to be a much more powerful and versatile instrument. By 1956, Müller's invention allowed him to view the crystal lattice structure of metals in atomic detail; it actually showed the constituent atoms.

The field emission and field ion microscopes make it possible to view the atomic surface structures of metals on fluorescent screens. The field ion microscope is the direct descendant of the field emission microscope. In the field emission microscope, the images are projected by electrons emitted directly from the tip of a metal needle, which constitutes the specimen under investigation. These electrons produce an image of the atomic lattice structure of the needle's surface. The needle serves as the electron-donating electrode in a vacuum tube, also known as the "cathode." A fluorescent screen that serves as the electron-receiving electrode, or "anode," is placed opposite the needle. When sufficient electrical voltage is applied across the cathode and anode, the needle tip emits electrons, which strike the screen. The image produced on the screen is a projection of the electron source, that is, of the needle surface's atomic lattice structure.

Müller studied the effect of needle shape on the performance of the microscope throughout much of 1937. When the needles had been properly shaped, Müller was able to achieve magnifications of up to 1 million times. This magnification allowed Müller to view what he called "maps" of the atomic crystal structure of metals, since the needles were so small that they were often composed of only one simple crystal of the material. While the magnification was great, however, the resolution of the instrument was severely limited by the physics of the emitted electrons, which caused the images Müller obtained to be blurred.

Improving the View

In 1943, while working in Berlin, Müller realized that the resolution of the field emission microscope was limited by two factors. The electron velocity, a particle property, was extremely high and uncontrollably random, causing the micrographic images to be blurred. In addition, the electrons had an unsatisfactorily high wavelength. When Müller combined these two factors, he was able to determine that the field emission microscope could never depict single atoms; it was a physical impossibility for it to distinguish one atom from another.
By 1951, this resolution limitation led Müller to develop the technology behind the field ion microscope. In 1952, Müller moved to the United States and founded the Pennsylvania State University Field Emission Laboratory. He perfected the field ion microscope between 1952 and 1956.

The field ion microscope utilized positive ions instead of electrons to create the atomic surface images on the fluorescent screen. When an easily ionized gas—at first hydrogen, but usually helium, neon, or argon—was introduced into the evacuated tube, the emitted electrons ionized the gas atoms, creating a stream of positively charged particles, much as Oppenheimer had predicted in 1928. Müller’s use of positive ions circumvented one of the resolution problems inherent in the use of imaging electrons. Like the electrons, however, the positive ions traversed the tube with unpredictably random velocities. Müller eliminated this problem by cryogenically cooling the needle tip with a supercooled liquefied gas such as nitrogen or hydrogen.

By 1956, Müller had perfected the means of supplying imaging positive ions by filling the vacuum tube with an extremely small quantity of an inert gas such as helium, neon, or argon. By using such a gas, Müller was assured that no chemical reaction would occur between the needle tip and the gas; any such reaction would alter the surface atomic structure of the needle and thus distort the resulting microscopic image. The imaging ions allowed the field ion microscope to resolve the emitter surface to between two and three angstroms, roughly ten times finer than its close relative, the field emission microscope.

Consequences

The immediate impact of the field ion microscope was its influence on the study of metallic surfaces. It is a well-known fact of materials science that the physical properties of metals are influenced by the imperfections in their constituent lattice structures. It was not possible to view the atomic structure of the lattice, and thus the finest detail of any imperfection, until the field ion microscope was developed. The field ion microscope is the only instrument powerful enough to view the structural flaws of metal specimens in atomic detail.

Although the instrument is extremely powerful, the very large electrical fields required in the imaging process restrict its use to only the hardiest of metallic specimens. The field strength of 500 million volts per centimeter exerts an average stress on metal specimens in the range of almost 1 ton per square millimeter. Metals such as iron and platinum can withstand this strain because of the shape of the needles into which they are formed. This limitation, however, makes it extremely difficult to examine biological materials, which cannot withstand the stresses that metals can. A practical by-product of the study of field ionization—field evaporation—eventually gave scientists a way around this limit: by embedding molecules such as phthalocyanine within the metal needle and field-evaporating much of the surrounding metal until the biological material remains at the needle’s surface, surface scientists have been able to view the atomic structures of large biological molecules.
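The quoted stress figure can be checked on the back of an envelope if one assumes the load on the tip is simply the electrostatic (Maxwell) pressure, half the vacuum permittivity times the square of the field strength. The sketch below is that arithmetic, not a figure from Müller's own papers.

    # Back-of-the-envelope check of the stress on the needle tip, assuming the
    # load is the electrostatic (Maxwell) pressure 0.5 * epsilon_0 * E^2.

    EPSILON_0 = 8.854e-12        # vacuum permittivity, in farads per meter
    E_FIELD = 500e6 * 100.0      # 500 million volts per centimeter, in volts per meter

    pressure_pa = 0.5 * EPSILON_0 * E_FIELD ** 2       # pascals
    tons_per_mm2 = (pressure_pa * 1e-6) / 9.81e3       # N/mm^2 -> metric tons-force per mm^2

    # Prints roughly 11 GPa, on the order of 1 ton per square millimeter,
    # consistent with the figure quoted in the text.
    print(f"{pressure_pa / 1e9:.1f} GPa ~ {tons_per_mm2:.1f} t/mm^2")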

18 June 2009

Fiber-optics

The invention: The application of glass fibers to electronic communications and other fields to carry large volumes of information quickly, smoothly, and cheaply over great distances.

The people behind the invention:

Samuel F. B. Morse (1791-1872), the American artist and inventor who developed the electromagnetic telegraph system
Alexander Graham Bell (1847-1922), the Scottish American inventor and educator who invented the telephone and the photophone
Theodore H. Maiman (1927- ), the American physicist and engineer who invented the solid-state laser
Charles K. Kao (1933- ), a Chinese-born electrical engineer
Zhores I. Alferov (1930- ), a Russian physicist and mathematician

The Singing Sun

In 1844, Samuel F. B. Morse, inventor of the telegraph, sent his famous message, “What hath God wrought?” by electrical impulses traveling at the speed of light over a 66-kilometer telegraph wire strung between Washington, D.C., and Baltimore. Ever since that day, scientists have worked to find faster, less expensive, and more efficient ways to convey information over great distances. At first, the telegraph was used to report stock-market prices and the results of political elections. The telegraph was quite important in the American Civil War (1861-1865). The first transcontinental telegraph message was sent by Stephen J. Field, chief justice of the California Supreme Court, to U.S. president Abraham Lincoln on October 24, 1861. The message declared that California would remain loyal to the Union. By 1866, telegraph lines had reached all across the North American continent, and a telegraph cable had been laid beneath the Atlantic Ocean to link the Old World with the New World.

Another American inventor made the leap from the telegraph to the telephone. Alexander Graham Bell, a teacher of the deaf, was interested in the physical way speech works. In 1875, he started experimenting with ways to transmit sound vibrations electrically. He realized that an electrical current could be adjusted to resemble the vibrations of speech. Bell patented his invention on March 7, 1876. On July 9, 1877, he founded the Bell Telephone Company.

In 1880, Bell invented a device called the “photophone.” He used it to demonstrate that speech could be transmitted on a beam of light. Light is a form of electromagnetic energy. It travels in a vibrating wave. When the amplitude (height) of the wave is adjusted, a light beam can be made to carry messages. Bell’s invention included a thin mirrored disk that converted sound waves directly into a beam of light. At the receiving end, a selenium resistor connected to a headphone converted the light back into sound. “I have heard a ray of sun laugh and cough and sing,” Bell wrote of his invention. Although Bell proved that he could transmit speech over distances of several hundred meters with the photophone, the device was awkward and unreliable, and it never became popular as the telephone did. Not until one hundred years later did researchers find important practical uses for Bell’s idea of talking on a beam of light. Two other major discoveries needed to be made first: the development of the laser and of high-purity glass.

In 1960, Theodore H. Maiman, an American physicist and electrical engineer at Hughes Research Laboratories in Malibu, California, built the first laser. The laser produces an intense, narrowly focused beam of light that can be adjusted to carry huge amounts of information. The word itself is an acronym for light amplification by the stimulated emission of radiation.
It soon became clear, though, that even bright laser light can be broken up and absorbed by smog, fog, rain, and snow. So in 1966, Charles K. Kao, an electrical engineer at the Standard Telecommunications Laboratories in England, suggested that glass fibers could be used to transmit message-carrying beams of laser light without disruption from weather.

Fiber Optics Are Tested

Optical glass fiber is made from common materials, mostly silica, soda, and lime. The inside of a delicate silica glass tube is coated with a hundred or more layers of extremely thin glass. The tube is then heated to 2,000 degrees Celsius and collapsed into a thin glass rod, or preform. The preform is then pulled into thin strands of fiber. The fibers are coated with plastic to protect them from being nicked or scratched, and then they are covered in flexible cable.

The earliest glass fibers contained many impurities and defects, so they did not carry light well. Signal repeaters were needed every few meters to energize (amplify) the fading pulses of light. In 1970, however, researchers at the Corning Glass Works in New York developed a fiber pure enough to carry light at least one kilometer without amplification.

The telephone industry quickly became involved in the new fiber-optics technology. Researchers believed that a bundle of optical fibers as thin as a pencil could carry several hundred telephone calls at the same time. Optical fibers were first tested by telephone companies in big cities, where the great volume of calls often overloaded standard underground phone lines.

On May 11, 1977, American Telephone & Telegraph Company (AT&T), along with Illinois Bell Telephone, Western Electric, and Bell Telephone Laboratories, began the first commercial test of fiber-optics telecommunications in downtown Chicago. The system consisted of a 2.4-kilometer cable laid beneath city streets. The cable, only 1.3 centimeters in diameter, linked an office building in the downtown business district with two telephone exchange centers. Voice and video signals were coded into pulses of laser light and transmitted through the hair-thin glass fibers. The tests showed that a single pair of fibers could carry nearly six hundred telephone conversations at once, very reliably and at a reasonable cost. Six years later, in October, 1983, Bell Laboratories succeeded in transmitting the equivalent of six thousand telephone signals through an optical fiber cable that was 161 kilometers long. Since that time, countries all over the world, from England to Indonesia, have developed optical communications systems.

Consequences

Fiber optics has had a great impact on telecommunications. A single fiber can now carry thousands of conversations with no electrical interference. These fibers are less expensive, weigh less, and take up much less space than copper wire. As a result, people can carry on conversations over long distances without static and at a low cost. One of the first uses of fiber optics, and perhaps its best-known application, is the fiberscope, a medical instrument that permits internal examination of the human body without surgery or X-ray techniques. The fiberscope, or endoscope, consists of two fiber bundles. One of the fiber bundles transmits bright light into the patient, while the other conveys a color image back to the eye of the physician. The fiberscope has been used to look for ulcers, cancer, and polyps in the stomach, intestine, and esophagus of humans.
Medical instruments, such as forceps, can be attached to the fiberscope, allowing the physician to perform a range of medical procedures, such as clearing a blocked windpipe or cutting precancerous polyps from the colon.
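The repeater-spacing and distance figures quoted earlier in this entry come down to fiber attenuation, which is expressed in decibels per kilometer. A small sketch of that relationship follows; the 20 dB/km value is the loss commonly cited for the 1970 Corning fiber, while the second figure is simply an assumed modern-era loss, not a specification of the 1983 Bell Laboratories cable.

    # Fraction of launched optical power that survives a run of fiber, given the
    # fiber's attenuation in decibels per kilometer.

    def surviving_fraction(loss_db_per_km, length_km):
        """Power out / power in after length_km of fiber with the given loss."""
        total_loss_db = loss_db_per_km * length_km
        return 10.0 ** (-total_loss_db / 10.0)

    # About 1% of the light survives 1 km at 20 dB/km (the figure usually cited
    # for the 1970 Corning fiber), which is why a kilometer without a repeater
    # was worth announcing.
    print(surviving_fraction(20.0, 1.0))

    # With an assumed loss of a few tenths of a dB/km, a measurable fraction of
    # the light still reaches the far end of a 161 km span.
    print(surviving_fraction(0.35, 161.0))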

Fax machine

The invention: Originally known as the “facsimile machine,” a machine that converts written and printed images into electrical signals that can be sent via telephone, computer, or radio.

The person behind the invention:

Alexander Bain (1818-1903), a Scottish inventor

Sending Images

The invention of the telegraph and telephone during the latter half of the nineteenth century gave people the ability to send information quickly over long distances. With the invention of radio and television technologies, voices and moving pictures could be seen around the world as well. Oddly, however, the facsimile process—which involves the transmission of pictures, documents, or other physical data over distance—predates all these modern devices, since a simple facsimile apparatus (usually called a fax machine) was patented in 1843 by Alexander Bain. This early device used a pendulum to synchronize the transmitting and receiving units; it did not convert the image into an electrical format, however, and it was quite crude and impractical. Nevertheless, it reflected the desire to send images over long distances, which remained a technological goal for more than a century. Facsimile machines developed in the period around 1930 enabled news services to provide newspapers around the world with pictures for publication. It was not until the 1970’s, however, that technological advances made small fax machines available for everyday office use.

Scanning Images

Both the fax machines of the 1930’s and those of today operate on the basis of the same principle: scanning. In early machines, an image (a document or a picture) was attached to a roller, placed in the fax machine, and rotated at a slow and fixed speed (which must be the same at each end of the link) in a bright light. Light from the image was reflected from the document in varying degrees, since dark areas reflect less light than lighter areas do. A lens moved across the page one line at a time, concentrating and directing the reflected light to a photoelectric tube. This tube would respond to the change in light level by varying its electric output, thus converting the image into an output signal whose intensity varied with the changing light and dark spots of the image. Much like the signal from a microphone or television camera, this modulated (varying) wave could then be broadcast by radio or sent over telephone lines to a receiver that performed a reverse function. At the receiving end, a light bulb was made to vary its intensity to match the varying intensity of the incoming signal. The output of the light bulb was concentrated through a lens onto photographically sensitive paper, thus re-creating the original image as the paper was rotated.

Early fax machines were bulky and often difficult to operate. Advances in semiconductor and computer technology in the 1970’s, however, made the goal of creating an easy-to-use and inexpensive fax machine realistic. Instead of a photoelectric tube that consumes a relatively large amount of electrical power, a row of small photodiode semiconductors is used to measure light intensity. Instead of a power-consuming light source, low-power light-emitting diodes (LEDs) are used. Some 1,728 light-sensitive diodes are placed in a row, and the image to be scanned is passed over them one line at a time. Each diode registers either a dark or a light portion of the image.
As each diode is checked in sequence, it produces a signal for one picture element, also known as a “pixel” or “pel.” Because many diodes are used, there is no need for a focusing lens; the diode bar is as wide as the page being scanned, and each pixel represents a portion of a line on that page. Since most fax transmissions take place over public telephone system lines, the signal from the photodiodes is transmitted by means of a built-in computer modem in much the same format that computers use to transmit data over telephone lines. The receiving fax uses its modem to convert the audible signal into a sequence that varies in intensity in proportion to the original signal. This varying signal is then sent in proper sequence to a row of 1,728 small wires over which a chemically treated paper is passed. As each wire receives a signal that represents a black portion of the scanned image, the wire heats and, in contact with the paper, produces a black dot that corresponds to the transmitted pixel. As the page is passed over these wires one line at a time, the original image is re-created.

Consequences

The fax machine has long been in use in many commercial and scientific fields. Weather data in the form of pictures are transmitted from orbiting satellites to ground stations; newspapers receive photographs from international news sources via fax; and, using a very expensive but very high-quality fax device, newspapers and magazines are able to transmit full-size proof copies of each edition to printers thousands of miles away so that a publication edited in one country can reach newsstands around the world quickly. With the technological advances that have been made in recent years, however, fax transmission has become a part of everyday life, particularly in business and research environments. The ability to send a copy of a letter, document, or report quickly over thousands of miles means that information can be shared in a matter of minutes rather than in a matter of days. In fields such as advertising and architecture, it is often necessary to send pictures or drawings to remote sites. Indeed, the fax machine has played an important role in providing information to distant observers of political unrest when other sources of information (such as radio, television, and newspapers) are shut down.

In fact, there has been a natural coupling of computers, modems, and fax devices. Since modern faxes are sent as computer data over phone lines, specialized and inexpensive modems (which allow two computers to share data) have been developed that allow any computer user to send and receive faxes without bulky machines. For example, a document—including drawings, pictures, or graphics of some kind—can be created in a computer and transmitted directly to another fax machine. That computer can also receive a fax transmission and either display it on the computer’s screen or print it on the local printer. Since fax technology is now within the reach of almost anyone who is interested in using it, there is little doubt that it will continue to grow in popularity.
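A minimal sketch of the line-scanning step described above: one row of reflected-light readings, one per photodiode, is thresholded into dark or light pixels. The 1,728-element line width is the figure quoted in the text; the threshold value and the simulated readings are illustrative assumptions, and the modem encoding and transmission stages are omitted.

    # Threshold one scanned line of photodiode readings into black/white pixels.
    # Readings are light intensities between 0.0 (no reflected light) and 1.0.

    LINE_WIDTH = 1728     # photodiodes across the page, as quoted in the text
    THRESHOLD = 0.5       # readings below this count as "dark" (assumed value)

    def scan_line(light_readings):
        """Return one bit per pixel: 1 for a dark (printed) spot, 0 for white."""
        return [1 if reading < THRESHOLD else 0 for reading in light_readings]

    # Simulate a line of mostly white paper with a dark band in the middle.
    readings = [0.9] * LINE_WIDTH
    readings[800:928] = [0.1] * 128
    pixels = scan_line(readings)
    print(sum(pixels), "dark pixels out of", LINE_WIDTH)   # -> 128 dark pixels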

ENIAC computer



The invention: 

The first general-purpose electronic digital computer.

The people behind the invention:

John Presper Eckert (1919-1995), an electrical engineer
John William Mauchly (1907-1980), a physicist, engineer, and
professor
John von Neumann (1903-1957), a Hungarian American
mathematician, physicist, and logician
Herman Heine Goldstine (1913- ), an army mathematician
Arthur Walter Burks (1915- ), a philosopher, engineer, and
professor
John Vincent Atanasoff (1903-1995), a mathematician and
physicist

Electronic synthesizer

The invention: A portable electronic device that both simulates the sounds of acoustic instruments and creates entirely new sounds.

The person behind the invention:

Robert A. Moog (1934- ), an American physicist, engineer, and inventor

From Harmonium to Synthesizer

The harmonium, or acoustic reed organ, is commonly viewed as having evolved into the modern electronic synthesizer, which can be used to create many kinds of musical sounds, from the sounds of single or combined acoustic musical instruments to entirely original sounds. The first instrument to be called a synthesizer was patented by the Frenchman J. A. Dereux in 1949. Dereux’s synthesizer, which amplified the acoustic properties of harmoniums, led to the development of the recording organ. Next, several European and American inventors altered and augmented the properties of such synthesizers. This stage of the process was followed by the invention of electronic synthesizers, which initially used electronically generated sounds to imitate acoustic instruments. It was not long, however, before such synthesizers were used to create sounds that could not be produced by any other instrument. Among the early electronic synthesizers were those made in Germany by Herbert Eimert and Robert Beyer in 1953, and the American Olson-Belar synthesizers, which were developed in 1954. Continual research produced better and better versions of these large, complex electronic devices. Portable synthesizers, which are often called “keyboards,” were then developed for concert and home use. These instruments became extremely popular, especially in rock music. In 1964, Robert A. Moog, an electronics professor, created what are thought by many to be the first portable synthesizers to be made available to the public. Several other well-known portable synthesizers, such as ARP and Buchla synthesizers, were also introduced at about the same time. Currently, many companies manufacture studio-quality synthesizers of various types.

Synthesizer Components and Operation

Modern synthesizers make music electronically by building up musical phrases via numerous electronic circuits and combining those phrases to create musical compositions. In addition to duplicating the sounds of many instruments, such synthesizers also enable their users to create virtually any imaginable sound. Many sounds have been created on synthesizers that could not have been created in any other way. Synthesizers use sound-processing and sound-control equipment that controls “white noise” audio generators and oscillator circuits. This equipment can be manipulated to produce a huge variety of sound frequencies and frequency mixtures, in the same way that a beam of white light can be manipulated to produce a particular color or mixture of colors. Once the desired products of a synthesizer’s noise generator and oscillators are produced, percussive sounds that contain all or many audio frequencies are mixed with many chosen individual sounds and altered by using various electronic processing components. The better the quality of the synthesizer, the more processing components it will possess. Among these components are sound amplifiers, sound mixers, sound filters, reverberators, and sound-combination devices. Sound amplifiers are voltage-controlled devices that change the dynamic characteristics of any given sound made by a synthesizer. Sound mixers make it possible to combine and blend two or more manufactured sounds while controlling their relative volumes.
Sound filters affect the frequency content of sound mixtures by increasing or decreasing the amplitude of the sound frequencies within particular frequency ranges, which are called “bands.” Sound filters can be either band-pass filters or band-reject filters, boosting or cutting the frequencies within given ranges (such as treble or bass). Reverberators (or “reverb” units) produce artificial echoes that can have significant musical effects. There are also many other varieties of sound-processing elements, among them sound-envelope generators, spatial locators, and frequency shifters. Ultimately, the sound-combination devices put together the results of the various groups of audio-generating and processing elements, shaping the sound that has been created into its final form.

A variety of control elements are used to integrate the operation of synthesizers. Most common is the keyboard, which provides the name most often used for portable electronic synthesizers. Portable synthesizer keyboards are most often pressure-sensitive devices (meaning that the harder one presses the key, the louder the resulting sound will be) that resemble the black-and-white keyboards of more conventional musical instruments such as the piano and the organ. These synthesizer keyboards produce two simultaneous outputs: control voltages that govern the pitches of oscillators, and timing pulses that sustain synthesizer responses for as long as a particular key is depressed. Unseen but present are the integrated voltage controls that control overall signal generation and processing. In addition to voltage controls and keyboards, synthesizers contain buttons and other switches that can transpose their sound ranges and other qualities. Using the appropriate buttons or switches makes it possible for a single synthesizer to imitate different instruments—or groups of instruments—at different times. Other synthesizer control elements include sample-and-hold devices and random voltage sources, which make it possible, respectively, to sustain particular musical effects and to add various effects to the music that is being played.

Electronic synthesizers are complex and flexible instruments. The various types and models of synthesizers make it possible to produce many different kinds of music, and many musicians use a variety of keyboards to give them great flexibility in performing and recording.

Impact

The development and wide dissemination of studio and portable synthesizers has led to their frequent use to combine the sound properties of various musical instruments; a single musician can thus produce, inexpensively and with a single instrument, sound combinations that previously could have been produced only by a large number of musicians playing various instruments. (Understandably, many players of acoustic instruments have been upset by this development, since it means that they are hired to play less often than they were before synthesizers were developed.) Another consequence of synthesizer use has been the development of entirely original varieties of sound, although this area has been less thoroughly explored, for commercial reasons. The development of synthesizers has also led to the design of other new electronic music-making techniques and to the development of new electronic musical instruments. Opinions about synthesizers vary from person to person—and, in the case of certain illustrious musicians, from time to time.
One well-known musician initially proposed that electronic synthesizers would replace many or all conventional instruments, particularly pianos. Two decades later, though, this same musician noted that not even the best modern synthesizers could match the quality of sound produced by pianos made by manufacturers such as Steinway and Baldwin.
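As a concrete illustration of the signal chain described above (oscillator, envelope-controlled amplifier, filter), here is a minimal sketch of a single subtractive-synthesis voice. All of the parameter values are illustrative assumptions and do not describe any particular Moog or other instrument.

    # A single subtractive-synthesis voice: oscillator -> envelope -> low-pass filter.
    import numpy as np

    SAMPLE_RATE = 44_100  # samples per second

    def sawtooth(freq_hz, duration_s):
        """Naive sawtooth oscillator; rich in harmonics, so there is something to filter."""
        t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
        return 2.0 * (t * freq_hz - np.floor(0.5 + t * freq_hz))

    def envelope(n_samples, attack_s=0.02, release_s=0.3):
        """Simple attack/release loudness contour, standing in for the amplifier stage."""
        env = np.ones(n_samples)
        a, r = int(attack_s * SAMPLE_RATE), int(release_s * SAMPLE_RATE)
        env[:a] = np.linspace(0.0, 1.0, a)
        env[-r:] = np.linspace(1.0, 0.0, r)
        return env

    def lowpass(signal, cutoff_hz):
        """Crude one-pole low-pass filter, standing in for the filter stage."""
        dt = 1.0 / SAMPLE_RATE
        alpha = dt / (dt + 1.0 / (2.0 * np.pi * cutoff_hz))
        out = np.empty_like(signal)
        acc = 0.0
        for i, x in enumerate(signal):
            acc += alpha * (x - acc)
            out[i] = acc
        return out

    note = sawtooth(220.0, 1.0)            # raw oscillator output (A below middle C)
    note = note * envelope(len(note))      # shape loudness over time
    note = lowpass(note, cutoff_hz=800.0)  # remove upper harmonics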

Electron microscope


The invention: 

A device for viewing extremely small objects that
uses electron beams and “electron lenses” instead of the light
rays and optical lenses used by ordinary microscopes.

The people behind the invention:

Ernst Ruska (1906-1988), a German engineer, researcher, and
inventor who shared the 1986 Nobel Prize in Physics
Hans Busch (1884-1973), a German physicist
Max Knoll (1897-1969), a German engineer and professor
Louis de Broglie (1892-1987), a French physicist who won the
1929 Nobel Prize in Physics


14 June 2009

Electroencephalogram

The invention: A system of electrodes that measures brain wave patterns in humans, making possible a new era of neurophysiology.

The people behind the invention:

Hans Berger (1873-1941), a German psychiatrist and research scientist
Richard Caton (1842-1926), an English physiologist and surgeon

The Electrical Activity of the Brain

Hans Berger’s search for the human electroencephalograph (English physiologist Richard Caton had described the electroencephalogram, or “brain wave,” in rabbits and monkeys in 1875) was motivated by his desire to find a physiological method that might be applied successfully to the study of the long-standing problem of the relationship between the mind and the brain. His scientific career, therefore, was directed toward revealing the psychophysical relationship in terms of principles that would be rooted firmly in the natural sciences and would not have to rely upon vague philosophical or mystical ideas.

During his early career, Berger attempted to study psychophysical relationships by making plethysmographic measurements of changes in the brain circulation of patients with skull defects. In plethysmography, an instrument is used to indicate and record by tracings the variations in size of an organ or part of the body. Later, Berger investigated temperature changes occurring in the human brain during mental activity and the action of psychoactive drugs. He became disillusioned, however, by the lack of psychophysical understanding generated by these investigations.

Next, Berger turned to the study of the electrical activity of the brain, and in the 1920’s he set out to search for the human electroencephalogram. He believed that the electroencephalogram would finally provide him with a physiological method capable of furnishing insight into mental functions and their disturbances. Berger made his first unsuccessful attempt at recording the electrical activity of the brain in 1920, using the scalp of a bald medical student. He then attempted to stimulate the cortex of patients with skull defects by using a set of electrodes to apply an electrical current to the skin covering the defect. The main purpose of these stimulation experiments was to elicit subjective sensations. Berger hoped that eliciting these sensations might give him some clue about the nature of the relationship between the physiochemical events produced by the electrical stimulus and the mental processes revealed by the patients’ subjective experience. The availability of many patients with skull defects—in whom the pulsating surface of the brain was separated from the stimulating electrodes by only a few millimeters of tissue—reactivated Berger’s interest in recording the brain’s electrical activity.

Small, Tremulous Movements

Berger used several different instruments in trying to detect brain waves, but all of them used a similar method of recording. Electrical oscillations deflected a mirror upon which a light beam was projected. The deflections of the light beam were proportional to the magnitude of the electrical signals. The movement of the spot of the light beam was recorded on photographic paper moving at a speed no greater than 3 centimeters per second. In July, 1924, Berger observed small, tremulous movements of the instrument while recording from the skin overlying a bone defect in a seventeen-year-old patient. In his first paper on the electroencephalogram, Berger described this case briefly as his first successful recording of an electroencephalogram.
At the time of these early studies, Berger had already used the term “electroencephalogram” in his diary. Yet for several years he had doubts about the origin of the electrical signals he recorded. As late as 1928, he almost abandoned his electrical recording studies. The publication of Berger’s first paper on the human electroencephalogram in 1929 had little impact on the scientific world. It was either ignored or regarded with open disbelief. At this time, even though Berger himself was not completely free of doubts about the validity of his findings, he managed to continue his work. He published additional contributions to the study of the electroencephalogram in a series of fourteen papers. As his research progressed, Berger became increasingly confident and convinced of the significance of his discovery.

Impact

The long-range impact of Berger’s work is incontestable. When Berger published his last paper on the human electroencephalogram in 1938, the new approach to the study of brain function that he had inaugurated in 1929 had gathered momentum in many centers, both in Europe and in the United States. As a result of his pioneering work, a new diagnostic method had been introduced into medicine. Physiology had acquired a new investigative tool. Clinical neurophysiology had been liberated from its dependence upon the functional anatomical approach, and electrophysiological exploration of complex functions of the central nervous system had begun in earnest. Berger’s work had finally received its well-deserved recognition. Many of those who undertook the study of the electroencephalogram were able to bring a far greater technical knowledge of neurophysiology to bear upon the problems of the electrical activity of the brain. Yet the community of neurological scientists has not ceased to look with respect upon the founder of electroencephalography, who, despite overwhelming odds and isolation, opened a new area of neurophysiology.
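For a sense of the scale of these traces, the paper speed fixes how much horizontal distance one cycle of a rhythmic signal occupies. The sketch below works that out; the 10 Hz figure is an assumption, chosen because it is roughly the alpha rhythm Berger later reported, and is not taken from the passage above.

    # Horizontal length of one signal cycle on paper moving at a fixed speed.

    PAPER_SPEED_CM_PER_S = 3.0   # the maximum paper speed quoted in the text

    def trace_wavelength_mm(signal_hz):
        """Millimeters of paper covered by one cycle of a signal at signal_hz."""
        return PAPER_SPEED_CM_PER_S * 10.0 / signal_hz

    # At an assumed 10 Hz rhythm, each cycle spans about 3 millimeters of paper.
    print(trace_wavelength_mm(10.0))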