
20 June 2009

Floppy disk

The invention: Inexpensive magnetic medium for storing and moving computer data.

The people behind the invention:

Andrew D. Booth (1918- ), an English inventor who developed paper disks as a storage medium
Reynold B. Johnson (1906-1998), a design engineer at IBM's research facility who oversaw development of magnetic disk storage devices
Alan Shugart (1930- ), an engineer at IBM's research laboratory who first developed the floppy disk as a means of mass storage for mainframe computers

First Tries

When the International Business Machines (IBM) Corporation decided to concentrate on the development of computers for business use in the 1950's, it faced a problem that had troubled the earliest computer designers: how to store data reliably and inexpensively. In the early days of computers (the early 1940's), a number of ideas were tried. The English inventor Andrew D. Booth produced spinning paper disks on which he stored data by means of punched holes, only to abandon the idea because of the insurmountable engineering problems he foresaw.

The next step was "punched" cards, an idea first used when the French inventor Joseph-Marie Jacquard invented an automatic weaving loom for which patterns were stored in pasteboard cards. The idea was refined by the English mathematician and inventor Charles Babbage for use in his "analytical engine," an attempt to build a kind of computing machine. Although the punched card was simple and reliable, it was not fast enough, nor did it store enough data, to be truly practical.

The Ampex Corporation demonstrated its first magnetic audiotape recorder after World War II (1939-1945). Shortly after that, the Binary Automatic Computer (BINAC) was introduced with a storage device that appeared to be a large tape recorder. A more advanced machine, the Universal Automatic Computer (UNIVAC), used metal tape instead of plastic (plastic was easily stretched or even broken). Unfortunately, metal tape was considerably heavier, and its edges were razor-sharp and thus dangerous. Improvements in plastic tape eventually produced sturdy media, and magnetic tape became (and remains) a practical medium for storage of computer data.

Still later designs combined Booth's spinning paper disks with magnetic technology to produce rapidly rotating "drums." Whereas a tape might have to be fast-forwarded nearly to its end to locate a specific piece of data, a drum rotating at speeds up to 12,500 revolutions per minute (rpm) could retrieve data very quickly and could store more than 1 million bits (or approximately 125 kilobytes) of data. In May, 1955, these drums evolved, under the direction of Reynold B. Johnson, into IBM's hard disk unit.

The hard disk unit consisted of fifty platters, each 2 feet in diameter, rotating at 1,200 rpm. Both sides of each disk could be used to store information. When the operator wished to access the disk, at his or her command a read/write head was moved to the correct disk and to the side of the disk that held the desired data. The operator could then read data from or record data onto the disk. To speed things even more, the next version of the device, similar in design, employed one hundred read/write heads—one for each of its fifty double-sided disks. The only remaining disadvantage was its size, which earned IBM's first commercial unit the nickname "jukebox."

The First Floppy

The floppy disk drive developed directly from hard disk technology.
It did not take shape until the late 1960's under the direction of Alan Shugart (it was announced by IBM as a ready product in 1970). First created to help restart the operating systems of mainframe computers that had gone dead, the floppy seemed in some ways to be a step back, for it operated more slowly than a hard disk drive and did not store as much data. Initially, it consisted of a single thin plastic disk eight inches in diameter and was developed without the protective envelope in which it is now universally encased. The addition of that jacket gave the floppy its single greatest advantage over the hard disk: portability with reliability.

Another advantage soon became apparent: the floppy is resilient to damage. In a hard disk drive, the read/write heads must hover thousandths of a centimeter over the disk surface in order to attain maximum performance. Should even a small particle of dust get in the way, or should the drive unit be bumped too hard, the head may "crash" into the surface of the disk and ruin its magnetic coating; the result is a permanent loss of data. Because the floppy operates with the read/write head in contact with the flexible plastic disk surface, individual particles of dust or other contaminants are not nearly as likely to cause disaster.

As a result of its advantages, the floppy disk was the logical choice for mass storage in personal computers (PCs), which were developed a few years after the floppy disk's introduction. The floppy is still an important storage device even though hard disk drives for PCs have become less expensive. Moreover, manufacturers continually are developing new floppy formats and new floppy disks that can hold more data.

Consequences

Personal computing would have developed very differently were it not for the availability of inexpensive floppy disk drives. When IBM introduced its PC in 1981, the machine provided as standard equipment a connection for a cassette tape recorder as a storage device; a floppy disk drive was only an option (though an option few did not take). The awkwardness of tape drives—their slow speed and the sequential nature of their data storage—presented clear obstacles to the acceptance of the personal computer as a basic information tool. By contrast, the floppy drive gives computer users relatively fast storage at low cost.

Floppy disks provided more than merely economical data storage. Since they are built to be removable (unlike hard drives), they represented a basic means of transferring data between machines. Indeed, prior to the popularization of local area networks (LANs), the floppy was known as a "sneaker network": one merely carried the disk by foot to another computer. Floppy disks were long the primary means of distributing new software to users. Even the very flexible floppy showed itself to be quite resilient to the wear and tear of postal delivery. Later, the 3.5-inch disk improved upon the design of the original 8-inch and 5.25-inch floppies by protecting the disk medium within a hard plastic shell and by using a sliding metal door to protect the area where the read/write heads contact the disk.

By the late 1990's, floppy disks were giving way to new data-storage media, particularly CD-ROMs—durable laser-encoded disks that hold more than 700 megabytes of data. As the price of blank CDs dropped dramatically, floppy disks tended to be used mainly for short-term storage of small amounts of data.
Floppy disks were also being used less and less for data distribution and transfer, as computer users turned increasingly to sending files via e-mail on the Internet, and software providers made their products available for downloading on Web sites.

19 June 2009

Field ion microscope

The invention: A microscope that uses ions formed in high-voltage electric fields to view atoms on metal surfaces.

The people behind the invention:

Erwin Wilhelm Müller (1911-1977), a physicist, engineer, and research professor
J. Robert Oppenheimer (1904-1967), an American physicist

To See Beneath the Surface

In the early twentieth century, developments in physics, especially quantum mechanics, paved the way for the application of new theoretical and experimental knowledge to the problem of viewing the atomic structure of metal surfaces. Of primary importance were American physicist George Gamow's 1928 theoretical explanation of the field emission of electrons by quantum mechanical means and J. Robert Oppenheimer's 1928 prediction of the quantum mechanical ionization of hydrogen in a strong electric field.

In 1936, Erwin Wilhelm Müller developed his field emission microscope, the first in a series of instruments that would exploit these developments. It was to be the first instrument to view atomic structures—although not the individual atoms themselves—directly. Müller's subsequent field ion microscope utilized the same basic concepts used in the field emission microscope yet proved to be a much more powerful and versatile instrument. By 1956, Müller's invention allowed him to view the crystal lattice structure of metals in atomic detail; it actually showed the constituent atoms.

The field emission and field ion microscopes make it possible to view the atomic surface structures of metals on fluorescent screens. The field ion microscope is the direct descendant of the field emission microscope. In the case of the field emission microscope, the images are projected by electrons emitted directly from the tip of a metal needle, which constitutes the specimen under investigation. These electrons produce an image of the atomic lattice structure of the needle's surface. The needle serves as the electron-donating electrode in a vacuum tube, also known as the "cathode." A fluorescent screen that serves as the electron-receiving electrode, or "anode," is placed opposite the needle. When sufficient electrical voltage is applied across the cathode and anode, the needle tip emits electrons, which strike the screen. The image produced on the screen is a projection of the electron source—the needle surface's atomic lattice structure.

Müller studied the effect of needle shape on the performance of the microscope throughout much of 1937. When the needles had been properly shaped, Müller was able to realize magnifications of up to 1 million times. This magnification allowed Müller to view what he called "maps" of the atomic crystal structure of metals, since the needles were so small that they were often composed of only one simple crystal of the material. While the magnification may have been great, however, the resolution of the instrument was severely limited by the physics of emitted electrons, which caused the images Müller obtained to be blurred.

Improving the View

In 1943, while working in Berlin, Müller realized that the resolution of the field emission microscope was limited by two factors. The electron velocity, a particle property, was extremely high and uncontrollably random, causing the micrographic images to be blurred. In addition, the electrons had an unsatisfactorily high wavelength. When Müller combined these two factors, he was able to determine that the field emission microscope could never depict single atoms; it was a physical impossibility for it to distinguish one atom from another.
By 1951, this limitation led him to develop the technology behind the field ion microscope. In 1952, Müller moved to the United States and founded the Pennsylvania State University Field Emission Laboratory. He perfected the field ion microscope between 1952 and 1956.

The field ion microscope utilized positive ions instead of electrons to create the atomic surface images on the fluorescent screen. When an easily ionized gas—at first hydrogen, but usually helium, neon, or argon—was introduced into the evacuated tube, the emitted electrons ionized the gas atoms, creating a stream of positively charged particles, much as Oppenheimer had predicted in 1928. Müller's use of positive ions circumvented one of the resolution problems inherent in the use of imaging electrons. Like the electrons, however, the positive ions traversed the tube with unpredictably random velocities. Müller eliminated this problem by cryogenically cooling the needle tip with a supercooled liquefied gas such as nitrogen or hydrogen.

By 1956, Müller had perfected the means of supplying imaging positive ions by filling the vacuum tube with an extremely small quantity of an inert gas such as helium, neon, or argon. By using such a gas, Müller was assured that no chemical reaction would occur between the needle tip and the gas; any such reaction would alter the surface atomic structure of the needle and thus alter the resulting microscopic image. The imaging ions allowed the field ion microscope to image the emitter surface to a resolution of between two and three angstroms, making it ten times more accurate than its close relative, the field emission microscope.

Consequences

The immediate impact of the field ion microscope was its influence on the study of metallic surfaces. It is a well-known fact of materials science that the physical properties of metals are influenced by the imperfections in their constituent lattice structures. It was not possible to view the atomic structure of the lattice, and thus the finest detail of any imperfection, until the field ion microscope was developed. The field ion microscope is the only instrument powerful enough to view the structural flaws of metal specimens in atomic detail.

Although the instrument may be extremely powerful, the extremely large electrical fields required in the imaging process preclude the instrument's application to all but the hardiest of metallic specimens. The field strength of 500 million volts per centimeter exerts an average stress on metal specimens in the range of almost 1 ton per square millimeter. Metals such as iron and platinum can withstand this strain because of the shape of the needles into which they are formed. Yet this limitation of the instrument makes it extremely difficult to examine biological materials, which cannot withstand the amount of stress that metals can.

A practical by-product in the study of field ionization—field evaporation—eventually permitted surface scientists to view the atomic structures of large biological molecules. By embedding molecules such as phthalocyanine within the metal needle, scientists have been able to view such molecules in atomic detail by field evaporating much of the surrounding metal until only the biological material remains at the needle's surface.
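
As a rough plausibility check on the stress figure quoted above, the electrostatic (Maxwell) pressure on a charged conductor surface is given by the standard formula ε0E²/2. The short Python sketch below evaluates it for a field of 500 million volts per centimeter; the formula and physical constants are textbook values and are not taken from this article.

```python
# Rough check of the electrostatic stress quoted for field ion microscopy.
# Maxwell stress on a conductor surface: sigma = (epsilon_0 * E^2) / 2

EPSILON_0 = 8.854e-12          # vacuum permittivity, F/m
E_FIELD = 500e6 * 100          # 500 million volts per centimeter, in V/m

stress_pa = 0.5 * EPSILON_0 * E_FIELD**2          # pascals (N/m^2)
stress_n_per_mm2 = stress_pa / 1e6                # N/mm^2
stress_tonf_per_mm2 = stress_n_per_mm2 / 9806.65  # metric tons-force per mm^2

print(f"{stress_pa:.3e} Pa  ~= {stress_tonf_per_mm2:.2f} t/mm^2")
# Prints roughly 1.1e10 Pa, i.e. on the order of 1 ton-force per square
# millimeter, consistent with the figure given in the text.
```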

18 June 2009

Fiber-optics

The invention: The application of glass fibers to electronic communications and other fields to carry large volumes of information quickly, smoothly, and cheaply over great distances.

The people behind the invention:

Samuel F. B. Morse (1791-1872), the American artist and inventor who developed the electromagnetic telegraph system
Alexander Graham Bell (1847-1922), the Scottish American inventor and educator who invented the telephone and the photophone
Theodore H. Maiman (1927- ), the American physicist and engineer who invented the solid-state laser
Charles K. Kao (1933- ), a Chinese-born electrical engineer
Zhores I. Alferov (1930- ), a Russian physicist and mathematician

The Singing Sun

In 1844, Samuel F. B. Morse, inventor of the telegraph, sent his famous message, "What hath God wrought?" by electrical impulses traveling at the speed of light over a 66-kilometer telegraph wire strung between Washington, D.C., and Baltimore. Ever since that day, scientists have worked to find faster, less expensive, and more efficient ways to convey information over great distances.

At first, the telegraph was used to report stock-market prices and the results of political elections. The telegraph was quite important in the American Civil War (1861-1865). The first transcontinental telegraph message was sent by Stephen J. Field, chief justice of the California Supreme Court, to U.S. president Abraham Lincoln on October 24, 1861. The message declared that California would remain loyal to the Union. By 1866, telegraph lines had reached all across the North American continent, and a telegraph cable had been laid beneath the Atlantic Ocean to link the Old World with the New World.

Another American inventor made the leap from the telegraph to the telephone. Alexander Graham Bell, a teacher of the deaf, was interested in the physical way speech works. In 1875, he started experimenting with ways to transmit sound vibrations electrically. He realized that an electrical current could be adjusted to resemble the vibrations of speech. Bell patented his invention on March 7, 1876. On July 9, 1877, he founded the Bell Telephone Company.

In 1880, Bell invented a device called the "photophone." He used it to demonstrate that speech could be transmitted on a beam of light. Light is a form of electromagnetic energy. It travels in a vibrating wave. When the amplitude (height) of the wave is adjusted, a light beam can be made to carry messages. Bell's invention included a thin mirrored disk that converted sound waves directly into a beam of light. At the receiving end, a selenium resistor connected to a headphone converted the light back into sound. "I have heard a ray of sun laugh and cough and sing," Bell wrote of his invention.

Although Bell proved that he could transmit speech over distances of several hundred meters with the photophone, the device was awkward and unreliable, and it never became popular as the telephone did. Not until one hundred years later did researchers find important practical uses for Bell's idea of talking on a beam of light. Two other major discoveries needed to be made first: development of the laser and of high-purity glass. Theodore H. Maiman, an American physicist and electrical engineer at Hughes Research Laboratories in Malibu, California, built the first laser. The laser produces an intense, narrowly focused beam of light that can be adjusted to carry huge amounts of information. The word itself is an acronym for light amplification by the stimulated emission of radiation.
It soon became clear, though, that even bright laser light can be broken up and absorbed by smog, fog, rain, and snow. So in 1966, Charles K. Kao, an electrical engineer at the Standard Telecommunications Laboratories in England, suggested that glass fibers could be used to transmit message-carrying beams of laser light without disruption from weather.

Fiber Optics Are Tested

Optical glass fiber is made from common materials, mostly silica, soda, and lime. The inside of a delicate silica glass tube is coated with a hundred or more layers of extremely thin glass. The tube is then heated to 2,000 degrees Celsius and collapsed into a thin glass rod, or preform. The preform is then pulled into thin strands of fiber. The fibers are coated with plastic to protect them from being nicked or scratched, and then they are covered in flexible cable.

The earliest glass fibers contained many impurities and defects, so they did not carry light well. Signal repeaters were needed every few meters to energize (amplify) the fading pulses of light. In 1970, however, researchers at the Corning Glass Works in New York developed a fiber pure enough to carry light at least one kilometer without amplification.

The telephone industry quickly became involved in the new fiber-optics technology. Researchers believed that a bundle of optical fibers as thin as a pencil could carry several hundred telephone calls at the same time. Optical fibers were first tested by telephone companies in big cities, where the great volume of calls often overloaded standard underground phone lines.

On May 11, 1977, American Telephone & Telegraph Company (AT&T), along with Illinois Bell Telephone, Western Electric, and Bell Telephone Laboratories, began the first commercial test of fiber-optics telecommunications in downtown Chicago. The system consisted of a 2.4-kilometer cable laid beneath city streets. The cable, only 1.3 centimeters in diameter, linked an office building in the downtown business district with two telephone exchange centers. Voice and video signals were coded into pulses of laser light and transmitted through the hair-thin glass fibers. The tests showed that a single pair of fibers could carry nearly six hundred telephone conversations at once very reliably and at a reasonable cost.

Six years later, in October, 1983, Bell Laboratories succeeded in transmitting the equivalent of six thousand telephone signals through an optical fiber cable that was 161 kilometers long. Since that time, countries all over the world, from England to Indonesia, have developed optical communications systems.

Consequences

Fiber optics has had a great impact on telecommunications. A single fiber can now carry thousands of conversations with no electrical interference. These fibers are less expensive, weigh less, and take up much less space than copper wire. As a result, people can carry on conversations over long distances without static and at a low cost.

One of the first uses of fiber optics and perhaps its best-known application is the fiberscope, a medical instrument that permits internal examination of the human body without surgery or X-ray techniques. The fiberscope, or endoscope, consists of two fiber bundles. One of the fiber bundles transmits bright light into the patient, while the other conveys a color image back to the eye of the physician. The fiberscope has been used to look for ulcers, cancer, and polyps in the stomach, intestine, and esophagus of humans.
Medical instruments, such as forceps, can be attached to the fiberscope, allowing the physician to perform a range of medical procedures, such as clearing a blocked windpipe or cutting precancerous polyps from the colon.
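
A back-of-the-envelope check on the Chicago trial figures quoted above: if each conversation is carried as a standard 64 kbit/s digital voice channel (an assumption; the article gives only the channel count, not the line rate), a fiber pair handling nearly six hundred conversations must carry roughly 38 Mbit/s, which is the right order of magnitude for early commercial fiber systems.

```python
# Rough capacity estimate for the 1977 Chicago fiber-optics trial described above.
# Assumption (not from the source): each voice conversation is a standard
# 64 kbit/s digital channel.

VOICE_CHANNEL_BPS = 64_000      # bits per second per conversation (assumed)
CONVERSATIONS = 600             # "nearly six hundred" conversations per fiber pair

aggregate_bps = VOICE_CHANNEL_BPS * CONVERSATIONS
print(f"{aggregate_bps / 1e6:.1f} Mbit/s per fiber pair")   # ~38.4 Mbit/s
```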

Fax machine

The invention: Originally known as the "facsimile machine," a machine that converts written and printed images into electrical signals that can be sent via telephone, computer, or radio.

The person behind the invention:

Alexander Bain (1818-1903), a Scottish inventor

Sending Images

The invention of the telegraph and telephone during the latter half of the nineteenth century gave people the ability to send information quickly over long distances. With the invention of radio and television technologies, voices and moving pictures could be heard and seen around the world as well. Oddly, however, the facsimile process—which involves the transmission of pictures, documents, or other physical data over distance—predates all these modern devices, since a simple facsimile apparatus (usually called a fax machine) was patented in 1843 by Alexander Bain. This early device used a pendulum to synchronize the transmitting and receiving units; it did not convert the image into an electrical format, however, and it was quite crude and impractical. Nevertheless, it reflected the desire to send images over long distances, which remained a technological goal for more than a century. Facsimile machines developed in the period around 1930 enabled news services to provide newspapers around the world with pictures for publication. It was not until the 1970's, however, that technological advances made small fax machines available for everyday office use.

Scanning Images

Both the fax machines of the 1930's and those of today operate on the basis of the same principle: scanning. In early machines, an image (a document or a picture) was attached to a roller, placed in the fax machine, and rotated at a slow and fixed speed (which must be the same at each end of the link) in a bright light. Light from the image was reflected from the document in varying degrees, since dark areas reflect less light than lighter areas do. A lens moved across the page one line at a time, concentrating and directing the reflected light to a photoelectric tube. This tube would respond to the change in light level by varying its electric output, thus converting the image into an output signal whose intensity varied with the changing light and dark spots of the image. Much like the signal from a microphone or television camera, this modulated (varying) wave could then be broadcast by radio or sent over telephone lines to a receiver that performed a reverse function. At the receiving end, a light bulb was made to vary its intensity to match the varying intensity of the incoming signal. The output of the light bulb was concentrated through a lens onto photographically sensitive paper, thus re-creating the original image as the paper was rotated.

Early fax machines were bulky and often difficult to operate. Advances in semiconductor and computer technology in the 1970's, however, made the goal of creating an easy-to-use and inexpensive fax machine realistic. Instead of a photoelectric tube that consumes a relatively large amount of electrical power, a row of small photodiode semiconductors is used to measure light intensity. Instead of a power-consuming light source, low-power light-emitting diodes (LEDs) are used. Some 1,728 light-sensitive diodes are placed in a row, and the image to be scanned is passed over them one line at a time. Each diode registers either a dark or a light portion of the image.
As each diode is checked in sequence, it produces a signal for one picture element, also known as a "pixel" or "pel." Because many diodes are used, there is no need for a focusing lens; the diode bar is as wide as the page being scanned, and each pixel represents a portion of a line on that page. Since most fax transmissions take place over public telephone system lines, the signal from the photodiodes is transmitted by means of a built-in computer modem in much the same format that computers use to transmit data over telephone lines. The receiving fax uses its modem to convert the audible signal into a sequence that varies in intensity in proportion to the original signal. This varying signal is then sent in proper sequence to a row of 1,728 small wires over which a chemically treated paper is passed. As each wire receives a signal that represents a black portion of the scanned image, the wire heats and, in contact with the paper, produces a black dot that corresponds to the transmitted pixel. As the page is passed over these wires one line at a time, the original image is re-created, as sketched in the example at the end of this article.

Consequences

The fax machine has long been in use in many commercial and scientific fields. Weather data in the form of pictures are transmitted from orbiting satellites to ground stations; newspapers receive photographs from international news sources via fax; and, using a very expensive but very high-quality fax device, newspapers and magazines are able to transmit full-size proof copies of each edition to printers thousands of miles away so that a publication edited in one country can reach newsstands around the world quickly.

With the technological advances that have been made in recent years, however, fax transmission has become a part of everyday life, particularly in business and research environments. The ability to send quickly a copy of a letter, document, or report over thousands of miles means that information can be shared in a matter of minutes rather than in a matter of days. In fields such as advertising and architecture, it is often necessary to send pictures or drawings to remote sites. Indeed, the fax machine has played an important role in providing information to distant observers of political unrest when other sources of information (such as radio, television, and newspapers) are shut down.

In fact, there has been a natural coupling of computers, modems, and fax devices. Since modern faxes are sent as computer data over phone lines, specialized and inexpensive modems (which allow two computers to share data) have been developed that allow any computer user to send and receive faxes without bulky machines. For example, a document—including drawings, pictures, or graphics of some kind—is created in a computer and transmitted directly to another fax machine. That computer can also receive a fax transmission and either display it on the computer's screen or print it on the local printer. Since fax technology is now within the reach of almost anyone who is interested in using it, there is little doubt that it will continue to grow in popularity.
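
To make the scan, transmit, and print cycle described above concrete, here is a minimal Python sketch. It models one scan line as 1,728 reflected-light readings, thresholds each reading into a black or white pixel as the photodiode bar would, and "prints" the line by darkening the paper wherever a wire receives a black pixel. The threshold value and sample data are illustrative assumptions, not details from the article.

```python
# Minimal model of one fax scan line: photodiode readings -> pixels -> thermal print.
# The 1,728-element line width matches the figure given in the article; the
# threshold and sample data are illustrative assumptions.

LINE_WIDTH = 1_728        # photodiodes (and printing wires) per scan line
BLACK_THRESHOLD = 0.5     # reflected-light level below which a pixel is "black"

def scan_line(reflected_light):
    """Convert reflected-light readings (0.0 = dark, 1.0 = bright) to pixels."""
    assert len(reflected_light) == LINE_WIDTH
    return [1 if level < BLACK_THRESHOLD else 0 for level in reflected_light]

def print_line(pixels):
    """Simulate the receiving side: a '#' is a heated wire darkening the paper."""
    return "".join("#" if p else " " for p in pixels)

# Example: a mostly white page with a dark band in the middle of the line.
readings = [0.9] * LINE_WIDTH
for i in range(800, 900):
    readings[i] = 0.1

pixels = scan_line(readings)          # what the sending fax transmits via its modem
print(print_line(pixels)[780:920])    # the corresponding slice of the printed line
```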

ENIAC computer



The invention: 

The first general-purpose electronic digital computer.

The people behind the invention:

John Presper Eckert (1919-1995), an electrical engineer
John William Mauchly (1907-1980), a physicist, engineer, and
professor
John von Neumann (1903-1957), a Hungarian American
mathematician, physicist, and logician
Herman Heine Goldstine (1913- ), an army mathematician
Arthur Walter Burks (1915- ), a philosopher, engineer, and
professor
John Vincent Atanasoff (1903-1995), a mathematician and
physicist

Electronic synthesizer

The invention: Portable electronic device that both simulates the sounds of acoustic instruments and creates entirely new sounds.

The person behind the invention:

Robert A. Moog (1934- ), an American physicist, engineer, and inventor

From Harmonium to Synthesizer

The harmonium, or acoustic reed organ, is commonly viewed as having evolved into the modern electronic synthesizer, which can be used to create many kinds of musical sounds, from the sounds of single or combined acoustic musical instruments to entirely original sounds. The first instrument to be called a synthesizer was patented by the Frenchman J. A. Dereux in 1949. Dereux's synthesizer, which amplified the acoustic properties of harmoniums, led to the development of the recording organ.

Next, several European and American inventors altered and augmented the properties of such synthesizers. This stage of the process was followed by the invention of electronic synthesizers, which initially used electronically generated sounds to imitate acoustic instruments. It was not long, however, before such synthesizers were used to create sounds that could not be produced by any other instrument. Among the early electronic synthesizers were those made in Germany by Herbert Eimert and Robert Beyer in 1953, and the American Olson-Belar synthesizers, which were developed in 1954. Continual research produced better and better versions of these large, complex electronic devices.

Portable synthesizers, which are often called "keyboards," were then developed for concert and home use. These instruments became extremely popular, especially in rock music. In 1964, Robert A. Moog, an electronics professor, created what are thought by many to be the first portable synthesizers to be made available to the public. Several other well-known portable synthesizers, such as ARP and Buchla synthesizers, were also introduced at about the same time. Currently, many companies manufacture studio-quality synthesizers of various types.

Synthesizer Components and Operation

Modern synthesizers make music electronically by building up musical phrases via numerous electronic circuits and combining those phrases to create musical compositions. In addition to duplicating the sounds of many instruments, such synthesizers also enable their users to create virtually any imaginable sound. Many sounds have been created on synthesizers that could not have been created in any other way.

Synthesizers use sound-processing and sound-control equipment that controls "white noise" audio generators and oscillator circuits. This equipment can be manipulated to produce a huge variety of sound frequencies and frequency mixtures in the same way that a beam of white light can be manipulated to produce a particular color or mixture of colors. Once the desired products of a synthesizer's noise generator and oscillators are produced, percussive sounds that contain all or many audio frequencies are mixed with many chosen individual sounds and altered by using various electronic processing components. The better the quality of the synthesizer, the more processing components it will possess. Among these components are sound amplifiers, sound mixers, sound filters, reverberators, and sound-combination devices. Sound amplifiers are voltage-controlled devices that change the dynamic characteristics of any given sound made by a synthesizer. Sound mixers make it possible to combine and blend two or more manufactured sounds while controlling their relative volumes.
Sound filters affect the frequency content of sound mixtures by increasing or decreasing the amplitude of the sound frequencies within particular frequency ranges, which are called "bands." Sound filters can be either band-pass filters or band-reject filters. They operate by increasing or decreasing the amplitudes of sound frequencies within given ranges (such as treble or bass). Reverberators (or "reverb" units) produce artificial echoes that can have significant musical effects. There are also many other varieties of sound-processing elements, among them sound-envelope generators, spatial locators, and frequency shifters. Ultimately, the sound-combination devices put together the results of the various groups of audio generating and processing elements, shaping the sound that has been created into its final form. (A minimal sketch of such an oscillator-amplifier-filter chain appears at the end of this article.)

A variety of control elements are used to integrate the operation of synthesizers. Most common is the keyboard, which provides the name most often used for portable electronic synthesizers. Portable synthesizer keyboards are most often pressure-sensitive devices (meaning that the harder one presses the key, the louder the resulting sound will be) that resemble the black-and-white keyboards of more conventional musical instruments such as the piano and the organ. These synthesizer keyboards produce two simultaneous outputs: control voltages that govern the pitches of oscillators, and timing pulses that sustain synthesizer responses for as long as a particular key is depressed. Unseen but present are the integrated voltage controls that control overall signal generation and processing.

In addition to voltage controls and keyboards, synthesizers contain buttons and other switches that can transpose their sound ranges and other qualities. Using the appropriate buttons or switches makes it possible for a single synthesizer to imitate different instruments—or groups of instruments—at different times. Other synthesizer control elements include sample-and-hold devices and random voltage sources, which make it possible, respectively, to sustain particular musical effects and to add various effects to the music that is being played.

Electronic synthesizers are complex and flexible instruments. The various types and models of synthesizers make it possible to produce many different kinds of music, and many musicians use a variety of keyboards to give them great flexibility in performing and recording.

Impact

The development and wide dissemination of studio and portable synthesizers has led to their frequent use to combine the sound properties of various musical instruments; a single musician can thus produce, inexpensively and with a single instrument, sound combinations that previously could have been produced only by a large number of musicians playing various instruments. (Understandably, many players of acoustic instruments have been upset by this development, since it means that they are hired to play less often than they were before synthesizers were developed.)

Another consequence of synthesizer use has been the development of entirely original varieties of sound, although this area has been less thoroughly explored, for commercial reasons. The development of synthesizers has also led to the design of other new electronic music-making techniques and to the development of new electronic musical instruments.

Opinions about synthesizers vary from person to person—and, in the case of certain illustrious musicians, from time to time.
One well-known musician initially proposed that electronic synthesizers would replace many or all conventional instruments, particularly pianos. Two decades later, though, this same musician noted that not even the best modern synthesizers could match the quality of sound produced by pianos made by manufacturers such as Steinway and Baldwin.
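
As an illustration of the signal chain described in the components section above (an oscillator feeding a voltage-controlled amplifier shaped by an envelope, followed by a simple filter), here is a minimal Python sketch that computes the samples of a single synthesized note. The waveform, envelope shape, and one-pole filter are illustrative assumptions and do not describe any particular commercial instrument.

```python
# Minimal software model of the oscillator -> amplifier (envelope) -> filter
# chain described above. All parameter values are illustrative assumptions.
import math

SAMPLE_RATE = 44_100  # samples per second

def oscillator(freq_hz, n_samples):
    """Sine-wave oscillator: the basic pitched sound source."""
    return [math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE) for n in range(n_samples)]

def envelope(n_samples, attack=0.05, release=0.3):
    """Simple attack/release envelope, standing in for a voltage-controlled amplifier."""
    a, r = int(attack * SAMPLE_RATE), int(release * SAMPLE_RATE)
    out = []
    for n in range(n_samples):
        if n < a:
            out.append(n / a)                # ramp up while the key is struck
        elif n > n_samples - r:
            out.append((n_samples - n) / r)  # fade out at the end of the note
        else:
            out.append(1.0)
    return out

def low_pass(samples, alpha=0.05):
    """One-pole low-pass filter: a crude 'band' filter that dulls the treble."""
    out, prev = [], 0.0
    for s in samples:
        prev = prev + alpha * (s - prev)
        out.append(prev)
    return out

# One half-second note at 440 Hz (the A above middle C).
n = SAMPLE_RATE // 2
note = low_pass([s * e for s, e in zip(oscillator(440.0, n), envelope(n))])
print(f"{len(note)} samples, peak amplitude {max(abs(x) for x in note):.2f}")
```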

Electron microscope


The invention: 

A device for viewing extremely small objects that
uses electron beams and “electron lenses” instead of the light
rays and optical lenses used by ordinary microscopes.

The people behind the invention:

Ernst Ruska (1906-1988), a German engineer, researcher, and
inventor who shared the 1986 Nobel Prize in Physics
Hans Busch (1884-1973), a German physicist
Max Knoll (1897-1969), a German engineer and professor
Louis de Broglie (1892-1987), a French physicist who won the
1929 Nobel Prize in Physics


14 June 2009

Electroencephalogram

The invention: A system of electrodes that measures brain wave patterns in humans, making possible a new era of neurophysiology.

The people behind the invention:

Hans Berger (1873-1941), a German psychiatrist and research scientist
Richard Caton (1842-1926), an English physiologist and surgeon

The Electrical Activity of the Brain

Hans Berger's search for the human electroencephalograph (English physiologist Richard Caton had described the electroencephalogram, or "brain wave," in rabbits and monkeys in 1875) was motivated by his desire to find a physiological method that might be applied successfully to the study of the long-standing problem of the relationship between the mind and the brain. His scientific career, therefore, was directed toward revealing the psychophysical relationship in terms of principles that would be rooted firmly in the natural sciences and would not have to rely upon vague philosophical or mystical ideas.

During his early career, Berger attempted to study psychophysical relationships by making plethysmographic measurements of changes in the brain circulation of patients with skull defects. In plethysmography, an instrument is used to indicate and record by tracings the variations in size of an organ or part of the body. Later, Berger investigated temperature changes occurring in the human brain during mental activity and the action of psychoactive drugs. He became disillusioned, however, by the lack of psychophysical understanding generated by these investigations.

Next, Berger turned to the study of the electrical activity of the brain, and in the 1920's he set out to search for the human electroencephalogram. He believed that the electroencephalogram would finally provide him with a physiological method capable of furnishing insight into mental functions and their disturbances.

Berger made his first unsuccessful attempt at recording the electrical activity of the brain in 1920, using the scalp of a bald medical student. He then attempted to stimulate the cortex of patients with skull defects by using a set of electrodes to apply an electrical current to the skin covering the defect. The main purpose of these stimulation experiments was to elicit subjective sensations. Berger hoped that eliciting these sensations might give him some clue about the nature of the relationship between the physiochemical events produced by the electrical stimulus and the mental processes revealed by the patients' subjective experience. The availability of many patients with skull defects—in whom the pulsating surface of the brain was separated from the stimulating electrodes by only a few millimeters of tissue—reactivated Berger's interest in recording the brain's electrical activity.

Small, Tremulous Movements

Berger used several different instruments in trying to detect brain waves, but all of them used a similar method of recording. Electrical oscillations deflected a mirror upon which a light beam was projected. The deflections of the light beam were proportional to the magnitude of the electrical signals. The movement of the spot of the light beam was recorded on photographic paper moving at a speed no greater than 3 centimeters per second.

In July, 1924, Berger observed small, tremulous movements of the instrument while recording from the skin overlying a bone defect in a seventeen-year-old patient. In his first paper on the electroencephalogram, Berger described this case briefly as his first successful recording of an electroencephalogram.
At the time of these early studies, Berger already had used the term "electroencephalogram" in his diary. Yet for several years he had doubts about the origin of the electrical signals he recorded. As late as 1928, he almost abandoned his electrical recording studies.

The publication of Berger's first paper on the human electroencephalogram in 1929 had little impact on the scientific world. It was either ignored or regarded with open disbelief. At this time, even when Berger himself was not completely free of doubts about the validity of his findings, he managed to continue his work. He published additional contributions to the study of the electroencephalogram in a series of fourteen papers. As his research progressed, Berger became increasingly confident and convinced of the significance of his discovery.

Impact

The long-range impact of Berger's work is incontestable. When Berger published his last paper on the human electroencephalogram in 1938, the new approach to the study of brain function that he inaugurated in 1929 had gathered momentum in many centers, both in Europe and in the United States. As a result of his pioneering work, a new diagnostic method had been introduced into medicine. Physiology had acquired a new investigative tool. Clinical neurophysiology had been liberated from its dependence upon the functional anatomical approach, and electrophysiological exploration of complex functions of the central nervous system had begun in earnest. Berger's work had finally received its well-deserved recognition.

Many of those who undertook the study of the electroencephalogram were able to bring a far greater technical knowledge of neurophysiology to bear upon the problems of the electrical activity of the brain. Yet the community of neurological scientists has not ceased to regard with respect the founder of electroencephalography, who, despite overwhelming odds and isolation, opened a new area of neurophysiology.

Electrocardiogram

The invention: Device for analyzing the electrical currents of the human heart.

The people behind the invention:

Willem Einthoven (1860-1927), a Dutch physiologist and winner of the 1924 Nobel Prize in Physiology or Medicine
Augustus D. Waller (1856-1922), a German physician and researcher
Sir Thomas Lewis (1881-1945), an English physiologist

Horse Vibrations

In the late 1800's, there was substantial research interest in the electrical activity that took place in the human body. Researchers studied many organs and systems in the body, including the nerves, eyes, lungs, muscles, and heart. Because of a lack of available technology, this research was tedious and frequently inaccurate. Therefore, the development of the appropriate instrumentation was as important as the research itself.

The initial work on the electrical activity of the heart (detected from the surface of the body) was conducted by Augustus D. Waller and published in 1887. Many credit him with the development of the first electrocardiogram. Waller used a Lippmann's capillary electrometer (named for its inventor, the French physicist Gabriel-Jonas Lippmann) to determine the electrical charges in the heart and called his recording a "cardiograph." The recording was made by placing a series of small tubes on the surface of the body. The tubes contained mercury and sulfuric acid. As an electrical current passed through the tubes, the mercury would expand and contract. The resulting images were projected onto photographic paper to produce the first cardiograph. Yet Waller had only limited success with the device and eventually abandoned it.

In the early 1890's, Willem Einthoven, who became a good friend of Waller, began using the same type of capillary tube to study the electrical currents of the heart. Einthoven also had a difficult time working with the instrument. His laboratory was located in an old wooden building near a cobblestone street. Teams of horses pulling heavy wagons would pass by and cause his laboratory to vibrate. This vibration affected the capillary tube, causing the cardiograph to be unclear. In his frustration, Einthoven began to modify his laboratory. He removed the floorboards and dug a hole some ten to fifteen feet deep. He lined the walls with large rocks to stabilize his instrument. When this failed to solve the problem, Einthoven, too, abandoned the Lippmann's capillary tube. Yet Einthoven did not abandon the idea, and he began to experiment with other instruments.

Electrocardiographs over the Phone

In order to continue his research on the electrical currents of the heart, Einthoven began to work with a new device, the d'Arsonval galvanometer (named for its inventor, the French biophysicist Arsène d'Arsonval). This instrument had a heavy coil of wire suspended between the poles of a horseshoe magnet. Changes in electrical activity would cause the coil to move; however, Einthoven found that the coil was too heavy to record the small electrical changes found in the heart. Therefore, he modified the instrument by replacing the coil with a silver-coated quartz thread (string). The movements could be recorded by transmitting the deflections through a microscope and projecting them on photographic film. Einthoven called the new instrument the "string galvanometer."

In developing his string galvanometer, Einthoven was influenced by the work of one of his teachers, Johannes Bosscha. In the 1850's, Bosscha had published a study describing the technical complexities of measuring very small amounts of electricity.
He proposed the idea that a galvanometer modified with a needle hanging from a silk thread would be more sensitive in measuring the tiny electric currents of the heart.

By 1905, Einthoven had improved the string galvanometer to the point that he could begin using it for clinical studies. In 1906, he had his laboratory connected to the hospital in Leiden by a telephone wire. With this arrangement, Einthoven was able to study in his laboratory electrocardiograms derived from patients in the hospital, which was located a mile away. With this source of subjects, Einthoven was able to use his galvanometer to study many heart problems. As a result of these studies, Einthoven identified the following heart problems: blocks in the electrical conduction system of the heart; premature beats of the heart, including two premature beats in a row; and enlargements of the various chambers of the heart. He was also able to study how the heart behaved during the administration of cardiac drugs.

A major researcher who communicated with Einthoven about the electrocardiogram was Sir Thomas Lewis, who is credited with developing the electrocardiogram into a useful clinical tool. One of Lewis's important accomplishments was his identification of atrial fibrillation, the overactive state of the upper chambers of the heart. During World War I, Lewis was involved with studying soldiers' hearts. He designed a series of graded exercises, which he used to test the soldiers' ability to perform work. From this study, Lewis was able to use similar tests to diagnose heart disease and to screen recruits who had heart problems.

Impact

As Einthoven published additional studies on the string galvanometer in 1903, 1906, and 1908, greater interest in his instrument was generated around the world. In 1910, the instrument, now called the "electrocardiograph," was installed in the United States. It was the foundation of a new laboratory for the study of heart disease at Johns Hopkins University.

As time passed, the use of the electrocardiogram—or "EKG," as it is familiarly known—increased substantially. The major advantage of the EKG is that it can be used to diagnose problems in the heart without incisions or the use of needles. It is relatively painless for the patient; in comparison with other diagnostic techniques, moreover, it is relatively inexpensive.

Recent developments in the use of the EKG have been in the area of stress testing. Since many heart problems are more evident during exercise, when the heart is working harder, EKGs are often given to patients as they exercise, generally on a treadmill. The clinician gradually increases the intensity of work the patient is doing while monitoring the patient's heart. The use of stress testing has helped to make the EKG an even more valuable diagnostic tool.

12 June 2009

Electric refrigerator


The invention: 

An electrically powered and hermetically sealed
food-storage appliance that replaced iceboxes, improved production,
and lowered food-storage costs.

The people behind the invention:

Marcel Audiffren, a French monk
Christian Steenstrup (1873-1955), an American engineer
Fred Wolf, an American engineer


Electric clock

The invention: Electrically powered time-keeping device with a quartz resonator that has led to the development of extremely accurate, relatively inexpensive electric clocks that are used in computers and microprocessors.

The person behind the invention:

Warren Alvin Marrison (1896-1980), an American scientist

From Complex Mechanisms to Quartz Crystals

Warren Alvin Marrison's fabrication of the electric clock began a new era in time-keeping. Electric clocks are more accurate and more reliable than mechanical clocks, since they have fewer moving parts and are less likely to malfunction. An electric clock is a device that generates a string of electric pulses. The most frequently used electric clocks are called "free running" and "periodic," which means that they generate a continuous sequence of electric pulses that are equally spaced.

There are various kinds of electronic "oscillators" (materials that vibrate) that can be used to manufacture electric clocks. The material most commonly used as an oscillator in electric clocks is crystalline quartz. Because quartz (silicon dioxide) is a completely oxidized compound (which means that it does not deteriorate readily) and is virtually insoluble in water, it is chemically stable and resists chemical processes that would break down other materials. Quartz is a "piezoelectric" material, which means that it is capable of generating electricity when it is subjected to pressure or stress of some kind. In addition, quartz has the advantage of generating electricity at a very stable frequency, with little variation. For these reasons, quartz is an ideal material to use as an oscillator.

The Quartz Clock

A quartz clock is an electric clock that makes use of the piezoelectric properties of a quartz crystal. When a quartz crystal vibrates, a difference of electric potential is produced between two of its faces. The crystal has a natural frequency (rate) of vibration that is determined by its size and shape. If the crystal is placed in an oscillating electric circuit that has a frequency that is nearly the same as that of the crystal, it will vibrate at its natural frequency and will cause the frequency of the entire circuit to match its own frequency.

Piezoelectricity is electricity, or "electric polarity," that is caused by the application of mechanical pressure on a "dielectric" material (one that does not conduct electricity), such as a quartz crystal. The process also works in reverse; if an electric charge is applied to the dielectric material, the material will experience a mechanical distortion. This reciprocal relationship is called "the piezoelectric effect." The phenomenon of electricity being generated by the application of mechanical pressure is called the direct piezoelectric effect, and the phenomenon of mechanical stress being produced as a result of the application of electricity is called the converse piezoelectric effect.

When a quartz crystal is used to create an oscillator, the natural frequency of the crystal can be used to produce other frequencies that can power clocks. The natural frequency of a quartz crystal is nearly constant if precautions are taken when it is cut and polished and if it is maintained at a nearly constant temperature and pressure. After a quartz crystal has been used for some time, its frequency usually varies slowly as a result of physical changes.
If allowances are made for such changes, quartz-crystal clocks such as those used in laboratories can be manufactured so that they accumulate errors of only a few thousandths of a second per month. The quartz crystals that are typically used in watches, however, may accumulate errors of tens of seconds per year.

There are other materials that can be used to manufacture accurate electric clocks. For example, clocks that use the element rubidium typically would accumulate errors no larger than a few ten-thousandths of a second per year, and those that use the element cesium would experience errors of only a few millionths of a second per year. Quartz is much less expensive than rarer materials such as rubidium and cesium, and it is easy to use in such common applications as computers. Thus, despite their relative inaccuracy, electric quartz clocks are extremely useful and popular, particularly for applications that require accurate timekeeping over a relatively short period of time. In such applications, quartz clocks may be adjusted periodically to correct for accumulated errors.

Impact

The electric quartz clock has contributed significantly to the development of computers and microprocessors. The computer's control unit controls and synchronizes all data transfers and transformations in the computer system and is the key subsystem in the computer itself. Every action that the computer performs is implemented by the control unit. The computer's control unit uses inputs from a quartz clock to derive timing and control signals that regulate the actions in the system that are associated with each computer instruction. The control unit also accepts, as input, control signals generated by other devices in the computer system.

The other primary impact of the quartz clock is in making the construction of multiphase clocks a simple task. A multiphase clock is a clock that has several outputs that oscillate at the same frequency. These outputs may generate electric waveforms of different shapes or of the same shape, which makes them useful for various applications. It is common for a computer to incorporate a single-phase quartz clock that is used to generate a two-phase clock.
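
To put the error figures above in concrete terms, the sketch below converts a crystal's fractional frequency error, expressed in parts per million (ppm), into accumulated clock error, and shows how a chain of divide-by-two stages turns a crystal frequency into one-second ticks. The 32,768 Hz crystal frequency is the value commonly used in wristwatches and is an illustrative assumption, not a figure from this article.

```python
# Convert a quartz crystal's frequency error into accumulated clock error, and
# show the divider chain that turns crystal oscillations into one-second ticks.
# The 32,768 Hz figure and the ppm values are illustrative assumptions.

CRYSTAL_HZ = 32_768            # 2**15, so 15 halvings of the frequency give 1 Hz
SECONDS_PER_MONTH = 30 * 24 * 3600
SECONDS_PER_YEAR = 365 * 24 * 3600

def drift(ppm, interval_seconds):
    """Seconds gained or lost over an interval for a given fractional error."""
    return ppm * 1e-6 * interval_seconds

# A watch-grade crystal off by 1 ppm and a laboratory oscillator off by 0.001 ppm:
print(f"1 ppm     -> {drift(1, SECONDS_PER_YEAR):.0f} s/year")               # tens of seconds per year
print(f"0.001 ppm -> {drift(0.001, SECONDS_PER_MONTH) * 1000:.1f} ms/month") # a few thousandths of a second per month

# Divider chain: each stage halves the frequency; 15 stages reach 1 Hz.
freq, stages = CRYSTAL_HZ, 0
while freq > 1:
    freq //= 2
    stages += 1
print(f"{stages} divide-by-two stages reduce {CRYSTAL_HZ} Hz to {freq} Hz")
```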

09 June 2009

Dolby noise reduction

The invention: Electronic device that reduces noise and improves the signal-to-noise ratio of sound recordings, greatly improving the sound quality of recorded music.

The people behind the invention:

Emil Berliner (1851-1929), a German inventor
Ray Milton Dolby (1933- ), an American inventor
Thomas Alva Edison (1847-1931), an American inventor

Phonographs, Tapes, and Noise Reduction

The main use of record, tape, and compact disc players is to listen to music, although they are also used to listen to recorded speeches, messages, and various forms of instruction. Thomas Alva Edison invented the first sound-reproducing machine, which he called the "phonograph," and patented it in 1877. Ten years later, a practical phonograph (the "gramophone") was marketed by a German, Emil Berliner. Phonographs recorded sound by using diaphragms that vibrated in response to sound waves and controlled needles that cut grooves representing those vibrations into the first phonograph records, which in Edison's machine were metal cylinders and in Berliner's were flat discs. The recordings were then played by reversing the recording process: placing a needle in the groove in the recorded cylinder or disk caused the diaphragm to vibrate, re-creating the original sound that had been recorded.

In the 1920's, electrical recording methods were developed that produced higher-quality recordings, and then, in the 1930's, stereophonic recording was developed by various companies, including the British company Electrical and Musical Industries (EMI). Almost simultaneously, the technology of tape recording was developed. By the 1940's, long-playing stereo records and tapes were widely available. As recording techniques improved further, tapes became very popular, and by the 1960's, they had evolved into both studio master recording tapes and the audio cassettes used by consumers.

Hisses and other noises associated with sound recording and its environment greatly diminished the quality of recorded music. In 1967, Ray Dolby invented a noise reducer, later named "Dolby A," that could be used by recording studios to improve tape signal-to-noise ratios. Several years later, his "Dolby B" system, designed for home use, became standard equipment in all types of playback machines. Later, Dolby and others designed improved noise-suppression systems.

Recording and Tape Noise

Sound is made up of vibrations of varying frequencies—sound waves—that sound recorders can convert into grooves on plastic records, varying magnetic arrangements on plastic tapes covered with iron particles, or tiny pits on compact discs. The following discussion will focus on tape recordings, for which the original Dolby noise reducers were designed. Tape recordings are made by a process that converts sound waves into electrical impulses that cause the iron particles in a tape to reorganize themselves into particular magnetic arrangements. The process is reversed when the tape is played back. In this process, the particle arrangements are translated first into electrical impulses and then into sound that is produced by loudspeakers. Erasing a tape causes the iron particles to move back into their original spatial arrangement.

Whenever a recording is made, undesired sounds such as hisses, hums, pops, and clicks can mask the nuances of recorded sound, annoying and fatiguing listeners. The first attempts to do away with undesired sounds (noise) involved making tapes, recording devices, and recording studios quieter. Such efforts did not, however, remove all undesired sounds.
Furthermore, advances in recording technology increased the problem of noise by producing better instruments that “heard” and transmitted to recordings increased levels of noise. Such noise is often caused by the components of the recording system; tape hiss is an example of such noise. This type of noise is most discernible in quiet passages of recordings, because loud recorded sounds often mask it. Because of the problem of noise in quiet passages of recorded sound, one early attempt at noise suppression involved the reduction of noise levels by using “dynaural” noise suppressors. These devices did not alter the loud portions of a recording; instead, they reduced the very high and very low frequencies in the quiet passages in which noise became most audible. The problem with such devices was, however, that removing the high and low frequencies could also affect the desirable portions of the recorded sound. These suppressors could not distinguish desirable from undesirable sounds. As recording techniques improved, dynaural noise suppressors caused more and more problems, and their use was finally discontinued. Another approach to noise suppression is sound compression during the recording process. This compression is based on the fact that most noise remains at a constant level throughout a recording, regardless of the sound level of a desired signal (such as music). To carry out sound compression, the lowest-level signals in a recording are electronically elevated above the sound level of all noise. Musical nuances can be lost when the process is carried too far, because the maximum sound level is not increased by devices that use sound compression. To return the music or other recorded sound to its normal sound range for listening, devices that “expand” the recorded music on playback are used. Two potential problems associated with the use of sound compression and expansion are the difficulty of matching the two processes and the introduction into the recording of noise created by the compression devices themselves. In 1967, Ray Dolby developed Dolby A to solve these problems as they related to tape noise (but not to microphone signals) in the recording and playing back of studio master tapes. The system operated by carrying out ten-decibel compression during recording and then restoring (noiselessly) the range of the music on playback. This was accomplished by expanding the sound exactly to its original range. Dolby A was very expensive and was thus limited to use in recording studios. In the early 1970’s, however, Dolby invented the less expensive Dolby B system, which was intended for consumers. Consequences The development of Dolby A and Dolby B noise-reduction systems is one of the most important contributions to the high-quality recording and reproduction of sound. For this reason, Dolby A quickly became standard in the recording industry. In similar fashion, Dolby B was soon incorporated into virtually every high-fidelity stereo cassette deck to be manufactured. Dolby’s discoveries spurred advances in the field of noise reduction. For example, the German company Telefunken and the Japanese companies Sanyo and Toshiba, among others, developed their own noise-reduction systems. Dolby Laboratories countered by producing an improved system: Dolby C. The competition in the area of noise reduction continues, and it will continue as long as changes in recording technology produce new, more sensitive recording equipment.
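The compress-on-record, expand-on-playback idea described above can be made concrete with a short sketch. The code below is not Dolby’s actual design (Dolby A splits the signal into several frequency bands and boosts only low-level content); it is a generic 2:1 compander with made-up numbers, shown only to illustrate how boosting quiet material before a noisy channel and cutting it back afterward pushes the channel’s hiss down with it.

# A toy compander illustrating the compress/expand principle described above.
# This is NOT Dolby's circuit; it is a generic 2:1 compander with assumed
# values, shown only to make the noise-reduction arithmetic concrete.

def compress(x):
    """2:1 compression: halves the dynamic range (measured in decibels)."""
    sign = 1 if x >= 0 else -1
    return sign * abs(x) ** 0.5

def expand(y):
    """2:1 expansion: restores the original dynamic range."""
    sign = 1 if y >= 0 else -1
    return sign * y * y

TAPE_HISS = 0.01   # constant noise added by the recording channel (assumed)

def record_and_play(sample, use_compander):
    """Pass one sample through the noisy 'tape', with or without companding."""
    if use_compander:
        return expand(compress(sample) + TAPE_HISS)
    return sample + TAPE_HISS

if __name__ == "__main__":
    quiet_sample = 0.001   # a very low-level musical detail
    plain = record_and_play(quiet_sample, use_compander=False)
    companded = record_and_play(quiet_sample, use_compander=True)
    print("plain tape     :", plain)       # hiss swamps the quiet detail
    print("with compander :", companded)   # detail survives; hiss is greatly reduced

In a real Dolby system the processing is confined to quiet, mostly high-frequency content, so loud passages pass through the tape essentially untouched and no audible side effects are introduced.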

Disposable razor

The invention: An inexpensive shaving blade that replaced the traditional straight-edged razor and transformed shaving razors into a frequent household purchase item. The people behind the invention: King Camp Gillette (1855-1932), inventor of the disposable razor Steven Porter, the machinist who created the first three disposable razors for King Camp Gillette William Emery Nickerson (1853-1930), an expert machine inventor who created the machines necessary for mass production Jacob Heilborn, an industrial promoter who helped Gillette start his company and became a partner Edward J. Stewart, a friend and financial backer of Gillette Henry Sachs, an investor in the Gillette Safety Razor Company John Joyce, an investor in the Gillette Safety Razor Company William Painter (1838-1906), an inventor who inspired Gillette George Gillette, an inventor, King Camp Gillette’s father A Neater Way to Shave In 1895, King Camp Gillette thought of the idea of a disposable razor blade. Gillette spent years drawing different models, and finally Steven Porter, a machinist and Gillette’s associate, created from those drawings the first three disposable razors that worked. Gillette soon founded the Gillette Safety Razor Company, which became the leading seller of disposable razor blades in the United States. George Gillette, King Camp Gillette’s father, had been a newspaper editor, a patent agent, and an inventor. He never invented a very successful product, but he loved to experiment. He encouraged all of his sons to figure out how things work and how to improve on them. King was always inventing something new and had many patents, but he was unsuccessful in turning them into profitable businesses. Gillette worked as a traveling salesperson for Crown Cork and Seal Company. William Painter, one of Gillette’s friends and the inventor of the crown cork, presented Gillette with a formula for making a fortune: Invent something that would constantly need to be replaced. Painter’s crown cork was used to cap beer and soda bottles. It was a tin cap covered with cork, used to form a tight seal over a bottle. Soda and beer companies could use a crown cork only once and needed a steady supply. King took Painter’s advice and began thinking of everyday items that needed to be replaced often. After owning a Star safety razor for some time, King realized that the razor blade had not been improved for a long time. He studied all the razors on the market and found that both the common straight razor and the safety razor featured a heavy V-shaped piece of steel, sharpened on one side. King reasoned that a thin piece of steel sharpened on both sides would create a better shave and could be thrown away once it became dull. The idea of the disposable razor had been born. Gillette made several drawings of disposable razors. He then made a wooden model of the razor to better explain his idea. Gillette’s first attempt to construct a working model was unsuccessful, as the steel was too flimsy. Steven Porter, a Boston machinist, decided to try to make Gillette’s razor from his drawings. He produced three razors, and in the summer of 1899 King was the first man to shave with a disposable razor. Changing Consumer Opinion In the early 1900’s, most people considered a razor to be a once-in-a-lifetime purchase. Many fathers handed down their razors to their sons. Straight razors needed constant and careful attention to keep them sharp.
The thought of throwing a razor in the garbage after several uses was contrary to the general public’s idea of a razor. If Gillette’s razor had not provided a much less painful and faster shave, it is unlikely that the disposable would have been a success. Even with its advantages, public opinion against the product was still difficult to overcome. Financing a company to produce the razor proved to be a major obstacle. King did not have the money himself, and potential investors were skeptical. Skepticism arose both because of public perceptions of the product and because of its manufacturing process. Mass production appeared to be impossible, but the disposable razor would never be profitable if produced using the methods used to manufacture its predecessor. William Emery Nickerson, an expert machine inventor, had looked at Gillette’s razor and said it was impossible to create a machine to produce it. He was convinced to reexamine the idea and finally created a machine that would create a workable blade. In the process, Nickerson changed Gillette’s original model. He improved the handle and frame so that it would better support the thin steel blade. In the meantime, Gillette was busy getting his patent assigned to the newly formed American Safety Razor Company, owned by Gillette, Jacob Heilborn, Edward J. Stewart, and Nickerson. Gillette owned considerably more shares than anyone else. Henry Sachs provided additional capital, buying shares from Gillette. The stockholders decided to rename the company the Gillette Safety Razor Company. It soon spent most of its money on machinery and lacked the capital it needed to produce and advertise its product. The only offer the company had received was from a group of New York investors who were willing to give $125,000 in exchange for 51 percent of the company. None of the directors wanted to lose control of the company, so they rejected the offer. John Joyce, a friend of Gillette, rescued the financially insecure new company. He agreed to buy $100,000 worth of bonds from the company for sixty cents on the dollar, purchasing the bonds gradually as the company needed money. He also received an equivalent amount of company stock. After an investment of $30,000, Joyce had the option of backing out. This deal enabled the company to start manufacturing and advertising. Impact The company used $18,000 to perfect the machinery to produce the disposable razor blades and razors. Originally the directors wanted to sell each razor with twenty blades for three dollars. Joyce insisted on a price of five dollars. In 1903, five dollars was about one-third of the average American’s weekly salary, and a high-quality straight razor could be purchased for about half that price. The other directors were skeptical, but Joyce threatened to buy up all the razors for three dollars and sell them himself for five dollars. Joyce had the financial backing to make this promise good, so the directors agreed to the higher price. The Gillette Safety Razor Company contracted with Townsend & Hunt for exclusive sales. The contract stated that Townsend & Hunt would buy 50,000 razors with twenty blades each during a period of slightly more than a year and would purchase 100,000 sets per year for the following four years. The first advertisement for the product appeared in System Magazine in early fall of 1903, offering the razors by mail order. By the end of 1903, only fifty-one razors had been sold.
Since Gillette and most of the directors of the company were not salaried, Gillette had needed to keep his job as salesman with Crown Cork and Seal. At the end of 1903, he received a promotion that meant relocation from Boston to London. Gillette did not want to go and pleaded with the other directors, but they insisted that the company could not afford to put him on salary. The company decided to reduce the number of blades in a set from twenty to twelve in an effort to increase profits without noticeably raising the cost of a set. Gillette resigned the title of company president and left for England. Shortly thereafter, Townsend & Hunt changed its name to the Gillette Sales Company, and three years later the sales company sold out to the parent company for $300,000. Sales of the new type of razor were increasing rapidly in the United States, and Joyce wanted to sell patent rights to European companies for a small percentage of sales. Gillette thought that that would be a horrible mistake and quickly traveled back to Boston. He had two goals: to stop the sale of patent rights, based on his conviction that the foreign market would eventually be very lucrative, and to become salaried by the company. Gillette accomplished both these goals and soon moved back to Boston. Despite the fact that Joyce and Gillette had been good friends for a long time, their business views often differed. Gillette set up a holding company in an effort to gain back controlling interest in the Gillette Safety Razor Company. He borrowed money and convinced his allies in the company to invest in the holding company, eventually regaining control. He was reinstated as president of the company. One clear disagreement was that Gillette wanted to relocate the company to Newark, New Jersey, and Joyce thought that that would be a waste of money. Gillette authorized company funds to be invested in a Newark site. The idea was later dropped, costing the company a large amount of capital. Gillette was not a very wise businessman and made many costly mistakes. Joyce even accused him of deliberately trying to keep the stock price low so that Gillette could purchase more stock. Joyce eventually bought out Gillette, who retained his title as president but had little say about company business. With Gillette out of a management position, the company became more stable and more profitable. The biggest problem the company faced was that it would soon lose its patent rights. After the patent expired, the company would have competition. The company decided that it could either cut prices (and therefore profits) to compete with the lower-priced disposables that would inevitably enter the market, or it could create a new line of even better razors. The company opted for the latter strategy. Weeks before the patent expired, the Gillette Safety Razor Company introduced a new line of razors. Both World War I and World War II were big boosts to the company, which contracted with the government to supply razors to almost all the troops. This transaction created a huge increase in sales and introduced thousands of young men to the Gillette razor. Many of them continued to use Gillettes after returning from the war. Aside from the shaky start of the company, its worst financial difficulties were during the Great Depression. Most Americans simply could not afford Gillette blades, and many used a blade for an extended time and then resharpened it rather than throwing it away. 
If it had not been for its foreign markets, the company would not have shown a profit during the Great Depression. Gillette’s obstinacy about not selling patent rights to foreign investors proved to be an excellent decision. The company advertised through sponsoring sporting events, including the World Series. Gillette had many celebrity endorsements from well-known baseball players. Before it became too expensive for one company to sponsor an entire event, Gillette had exclusive advertising during the World Series, various boxing matches, the Kentucky Derby, and football bowl games. Sponsoring these events was costly, but sports spectators were the typical Gillette customers. The Gillette Company created many products that complemented razors and blades, including shaving cream and women’s razors, and later diversified into women’s cosmetics, writing utensils, deodorant, and wigs. One of the main reasons for obtaining a more diverse product line was that a one-product company is less stable, especially in a volatile market. The Gillette Company had learned that lesson in the Great Depression. Gillette continued to thrive by following the principles the company had used from the start. The majority of Gillette’s profits came from foreign markets, and its employees looked to improve products and find opportunities in other departments as well as their own.

Dirigible

The invention: A rigid lighter-than-air aircraft that played a major role in World War I and in international air traffic until a disastrous accident destroyed the industry. The people behind the invention: Ferdinand von Zeppelin (1838-1917), a retired German general Theodor Kober (1865-1930), Zeppelin’s private engineer Early Competition When the Montgolfier brothers launched the first hot-air balloon in 1783, engineers—especially those in France—began working on ways to use machines to control the speed and direction of balloons. They thought of everything: rowing through the air with silk-covered oars; building movable wings; using a rotating fan, an airscrew, or a propeller powered by a steam engine (1852) or an electric motor (1882). At the end of the nineteenth century, the internal combustion engine was invented. It promised higher speeds and more power. Up to this point, however, the balloons were not rigid. A rigid airship could be much larger than a balloon and could fly farther. In 1890, a rigid airship designed by David Schwarz of Dalmatia was tested in St. Petersburg, Russia. The test failed because there were problems with inflating the dirigible. A second test, in Berlin in 1897, was only slightly more successful, since the hull leaked and the flight ended in a crash. Schwarz’s airship was made of an entirely rigid aluminum cylinder. Ferdinand von Zeppelin had a different idea: His design was based on a rigid frame. Zeppelin knew about balloons from having fought in two wars in which they were used: the American Civil War of 1861-1865 and the Franco-Prussian War of 1870-1871. He wrote down his first “thoughts about an airship” in his diary on March 25, 1874, inspired by an article about flying and international mail. Zeppelin soon lost interest in this idea of civilian uses for an airship and concentrated instead on the idea that dirigible balloons might become an important part of modern warfare. He asked the German government to fund his research, pointing out that France had a better military air force than Germany did. Zeppelin’s patriotism was what kept him trying, in spite of money problems and technical difficulties. In 1893, in order to get more money, Zeppelin tried to persuade the German military and engineering experts that his invention was practical. Even though a government committee decided that his work was worth a small amount of funding, the army was not sure that Zeppelin’s dirigible was worth the cost. Finally, the committee chose Schwarz’s design. In 1896, however, Zeppelin won the support of the powerful Union of German Engineers, which in May, 1898, gave him 800,000 marks to form a stock company called the Association for the Promotion of Airship Flights. In 1899, Zeppelin began building his dirigible in Manzell at Lake Constance. In July, 1900, the airship was finished and ready for its first test flight. Several Attempts Zeppelin, together with his engineer, Theodor Kober, had worked on the design since May, 1892, shortly after Zeppelin’s retirement from the army. They had finished the rough draft by 1894, and though they made some changes later, this was the basic design of the Zeppelin. An improved version was patented in December, 1897. In the final prototype, called the LZ 1, the engineers tried to make the airship as light as possible. They used a light internal combustion engine and designed a frame made of the light metal aluminum. The airship was 128 meters long and had a diameter of 11.7 meters when inflated.
Twenty-four zinc-aluminum girders ran the length of the ship, being drawn together at each end. Sixteen rings held the body together. The engineers stretched an envelope of smooth cotton over the framework to reduce wind resistance and to protect the gas bags from the sun’s rays. Seventeen gas bags made of rubberized cloth were placed inside the framework. Together they held about 11,300 cubic meters of hydrogen gas, which would lift 11,090 kilograms. Two motor gondolas were attached to the sides, each with a 16-horsepower gasoline engine, spinning four propellers. The test flight did not go well. The two main questions—whether the craft was strong enough and fast enough—could not be answered because little things kept going wrong; for example, a crankshaft broke and a rudder jammed. The first flight lasted no more than eighteen minutes, with a maximum speed of 13.7 kilometers per hour. During all three test flights, the airship was in the air for a total of only two hours, going no faster than 28.2 kilometers per hour. Zeppelin had to drop the project for some years because he ran out of money, and his company was dissolved. The LZ 1 was wrecked in the spring of 1901. A second airship was tested in November, 1905, and January, 1906. Both tests were unsuccessful, and in the end the ship was destroyed during a storm. By 1906, however, the German government was convinced of the military usefulness of the airship, though it would not give money to Zeppelin unless he agreed to design one that could stay in the air for at least twenty-four hours. The third Zeppelin failed this test in the autumn of 1907. Finally, in the summer of 1908, the LZ 4 not only proved itself to the military but also attracted great publicity. It flew for more than twenty-four hours and reached a speed of more than 60 kilometers per hour. Caught in a storm at the end of this flight, the airship was forced to land and exploded, but money came from all over Germany to build another. Impact Most rigid airships were designed and flown in Germany. Of the 161 that were built between 1900 and 1938, 139 were made in Germany, and 119 were based on the Zeppelin design. More than 80 percent of the airships were built for the military. The Germans used more than one hundred for gathering information and for bombing during World War I (1914-1918). Starting in May, 1915, airships bombed Warsaw, Poland; Bucharest, Romania; Salonika, Greece; and London, England. This was mostly a fear tactic, since the attacks did not cause great damage, and the English antiaircraft defense improved quickly. By 1916, the German army had lost so many airships that it stopped using them, though the navy continued. Airships were first used for passenger flights in 1910. By 1914, the Delag (German Aeronautic Stock Company) used seven passenger airships for sightseeing trips around German cities. There were still problems with engine power and weather forecasting, and it was difficult to move the airships on the ground. After World War I, the Zeppelins that were left were given to the Allies as payment, and the Germans were not allowed to build airships for their own use until 1925. In the 1920’s and 1930’s, it became cheaper to use airplanes for short flights, so airships were useful mostly for long-distance flight. A British airship made the first transatlantic airship flight in 1919.
The British hoped to connect their empire by means of airships starting in 1924, but the 1930 crash of the R-101, in which most of the leading English aeronauts were killed, brought that hope to an end. The United States Navy built the Akron (1931) and the Macon (1933) for long-range naval reconnaissance, but both airships crashed. Only the Germans continued to use airships on a regular basis. In 1929, the world tour of the Graf Zeppelin was a success. Regular flights between Germany and South America started in 1932, and in 1936, German airships bearing Nazi swastikas flew to Lakehurst, New Jersey. The tragic explosion of the hydrogen-filled Hindenburg in 1937, however, brought the era of the rigid airship to a close. The U.S. secretary of the interior vetoed the sale of nonflammable helium, fearing that the Nazis would use it for military purposes, and the German government had to stop transatlantic flights for safety reasons. In 1940, the last two remaining rigid airships were destroyed.
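A rough check of the LZ 1 figures quoted earlier in this article (a hull 128 meters long and 11.7 meters in diameter, roughly 11,300 cubic meters of hydrogen, and about 11,090 kilograms of lift) takes only a few lines of arithmetic. The lift value used below is an assumption for illustration: pure hydrogen provides roughly 1.1 kilograms of buoyancy per cubic meter at sea level, and the impure gas available around 1900 yielded somewhat less.

import math

HULL_LENGTH_M = 128.0
HULL_DIAMETER_M = 11.7
GAS_VOLUME_M3 = 11_300        # approximate gas capacity quoted above
LIFT_PER_M3_KG = 1.0          # assumed net lift of period (impure) hydrogen

# Upper bound on hull volume if the hull were a full cylinder; the real hull
# tapered at both ends, so the usable gas volume was smaller.
cylinder_m3 = math.pi * (HULL_DIAMETER_M / 2) ** 2 * HULL_LENGTH_M
gross_lift_kg = GAS_VOLUME_M3 * LIFT_PER_M3_KG

print(f"full-cylinder hull volume: {cylinder_m3:,.0f} cubic meters")   # about 13,800
print(f"estimated gross lift:      {gross_lift_kg:,.0f} kilograms")    # about 11,300

The estimate lands in the same range as the 11,090 kilograms quoted in the text, which is why the gas-bag capacity must be on the order of eleven thousand cubic meters, a figure consistent with the hull dimensions given above.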

Differential analyzer

The invention: An electromechanical device capable of solving differential equations. The people behind the invention: Vannevar Bush (1890-1974), an American electrical engineer Harold L. Hazen (1901-1980), an American electrical engineer Electrical Engineering Problems Become More Complex After World War I, electrical engineers encountered increasingly difficult differential equations as they worked on vacuum-tube circuitry, telephone lines, and, particularly, long-distance power transmission lines. These calculations were lengthy and tedious. Two of the many steps required to solve them were to draw a graph manually and then to determine the area under the curve (essentially, accomplishing the mathematical procedure called integration). In 1925, Vannevar Bush, a faculty member in the Electrical Engineering Department at the Massachusetts Institute of Technology (MIT), suggested that one of his graduate students devise a machine to determine the area under the curve. They first considered a mechanical device but later decided to seek an electrical solution. Realizing that a watt-hour meter such as that used to measure electricity in most homes was very similar to the device they needed, Bush and his student refined the meter and linked it to a pen that automatically recorded the curve. They called this machine the Product Integraph, and MIT students began using it immediately. In 1927, Harold L. Hazen, another MIT faculty member, modified it in order to solve the more complex second-order differential equations (it originally solved only first-order equations). The Differential Analyzer The original Product Integraph had solved problems electrically, and Hazen’s modification had added a mechanical integrator. Although the revised Product Integraph was useful in solving the types of problems mentioned above, Bush thought the machine could be improved by making it an entirely mechanical integrator, rather than a hybrid electrical and mechanical device. In late 1928, Bush received funding from MIT to develop an entirely mechanical integrator, and he completed the resulting Differential Analyzer in 1930. This machine consisted of numerous interconnected shafts on a long, tablelike framework, with drawing boards flanking one side and six wheel-and-disk integrators on the other. Some of the drawing boards were configured to allow an operator to trace a curve with a pen that was linked to the Analyzer, thus providing input to the machine. The other drawing boards were configured to receive output from the Analyzer via a pen that drew a curve on paper fastened to the drawing board. The wheel-and-disk integrator, which Hazen had first used in the revised Product Integraph, was the key to the operation of the Differential Analyzer. The rotational speed of the horizontal disk was the input to the integrator, and it represented one of the variables in the equation. The smaller wheel rolled on the top surface of the disk, and its speed, which was different from that of the disk, represented the integrator’s output. The distance from the wheel to the center of the disk could be changed to accommodate the equation being solved, and the resulting geometry caused the two shafts to turn so that the output was the integral of the input. The integrators were linked mechanically to other devices that could add, subtract, multiply, and divide. Thus, the Differential Analyzer could solve complex equations involving many different mathematical operations.
Because all the linkages and calculating devices were mechanical, the Differential Analyzer actually acted out each calculation. Computers of this type, which create an analogy to the physical world, are called analog computers. The Differential Analyzer fulfilled Bush’s expectations, and students and researchers found it very useful. Although each different problem required Bush’s team to set up a new series of mechanical linkages, the researchers using the calculations viewed this as a minor inconvenience. Students at MIT used the Differential Analyzer in research for doctoral dissertations, master’s theses, and bachelor’s theses. Other researchers worked on a wide range of problems with the Differential Analyzer, mostly in electrical engineering, but also in atomic physics, astrophysics, and seismology. An English researcher, Douglas Hartree, visited Bush’s laboratory in 1933 to learn about the Differential Analyzer and to use it in his own work on the atomic field of mercury. When he returned to England, he built several analyzers based on his knowledge of MIT’s machine. The U.S. Army also built a copy in order to carry out the complex calculations required to create artillery firing tables (which specified the proper barrel angle to achieve the desired range). Other analyzers were built by industry and universities around the world. Impact As successful as the Differential Analyzer had been, Bush wanted to make another, better analyzer that would be more precise, more convenient to use, and more mathematically flexible. In 1932, Bush began seeking money for his new machine, but because of the Depression it was not until 1936 that he received adequate funding for the Rockefeller Analyzer, as it came to be known. Bush left MIT in 1938, but work on the Rockefeller Analyzer continued. It was first demonstrated in 1941, and by 1942, it was being used in the war effort to calculate firing tables and design radar antenna profiles. At the end of the war, it was the most important computer in existence. All the analyzers, which were mechanical computers, faced serious limitations in speed because of the momentum of the machinery, and in precision because of slippage and wear. The digital computers that were being developed after World War II (even at MIT) were faster, more precise, and capable of executing more powerful operations because they were electrical computers. As a result, during the 1950’s, they eclipsed differential analyzers such as those built by Bush. Descendants of the Differential Analyzer remained in use as late as the 1990’s, but they played only a minor role.
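The wheel-and-disk integrator described above lends itself to a simple numerical sketch. The code below is an illustration, not a model of Bush’s actual hardware: each simulated integrator advances its output in proportion to its wheel’s radial position for every small turn of its disk, and two such integrators connected in a feedback loop solve y'' = -y, the kind of setup the Analyzer’s operators built out of real shafts and gears. The class names, step sizes, and initial conditions are all assumptions chosen for the example.

import math

WHEEL_RADIUS = 1.0   # arbitrary units; a real machine would scale with gearing instead

class WheelAndDiskIntegrator:
    def __init__(self, initial_output):
        self.output = initial_output      # accumulated turn of the output shaft
        self.radial_position = 0.0        # distance of the wheel from the disk's center

    def advance(self, disk_increment):
        """Advance the disk by a small turn; the wheel accumulates the product."""
        self.output += (self.radial_position / WHEEL_RADIUS) * disk_increment
        return self.output

def solve_harmonic_oscillator(steps=10000, dx=0.001):
    # y'' = -y with y(0) = 0 and y'(0) = 1; the exact solution is sin(x).
    first = WheelAndDiskIntegrator(initial_output=1.0)   # integrates y'' to get y'
    second = WheelAndDiskIntegrator(initial_output=0.0)  # integrates y'  to get y
    x = 0.0
    for _ in range(steps):
        first.radial_position = -second.output   # feed y back as y'' = -y
        second.radial_position = first.output    # feed y' into the second integrator
        first.advance(dx)
        second.advance(dx)
        x += dx
    return x, second.output

if __name__ == "__main__":
    x, y = solve_harmonic_oscillator()
    print(f"y({x:.1f}) is approximately {y:.4f}; exact sin({x:.1f}) = {math.sin(x):.4f}")

The printed value agrees closely with sin(x); the small drift that remains is loosely analogous to the slippage and wear that limited the precision of the mechanical machines.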

Diesel locomotive

The invention: An internal combustion engine in which ignition is achieved by the use of high-temperature compressed air, rather than a spark plug. The people behind the invention: Rudolf Diesel (1858-1913), a German engineer and inventor Sir Dugald Clerk (1854-1932), a British engineer Gottlieb Daimler (1834-1900), a German engineer Henry Ford (1863-1947), an American automobile magnate Nikolaus Otto (1832-1891), a German engineer and Daimler’s teacher A Beginning in Winterthur By the beginning of the twentieth century, new means of providing society with power were needed. The steam engines that were used to run factories and railways were no longer sufficient, since they were too heavy and inefficient. At that time, Rudolf Diesel, a German mechanical engineer, invented a new engine. His diesel engine was much more efficient than previous power sources. It also appeared that it would be able to run on a wide variety of fuels, ranging from oil to coal dust. Diesel first showed that his engine was practical by building a diesel-driven locomotive that was tested in 1912. In the 1912 test runs, the first diesel-powered locomotive was operated on the track of the Winterthur-Romanshorn rail line in Switzerland. The locomotive was built by a German company, Gesellschaft für Thermo-Lokomotiven, which was owned by Diesel and his colleagues. Immediately after the test runs at Winterthur proved its efficiency, the locomotive—which had been designed to pull express trains on Germany’s Berlin-Magdeburg rail line—was moved to Berlin and put into service. It worked so well that many additional diesel locomotives were built. In time, diesel engines were also widely used to power many other machines, including those that ran factories, motor vehicles, and ships. Diesels, Diesels Everywhere In the 1890’s, the best engines available were steam engines that were able to convert only 5 to 10 percent of input heat energy to useful work. The burgeoning industrial society and a widespread network of railroads needed better, more efficient engines to help businesses make profits and to speed up the rate of transportation available for moving both goods and people, since the maximum speed was only about 48 kilometers per hour. In 1894, Rudolf Diesel, then thirty-five years old, appeared in Augsburg, Germany, with a new engine that he believed would demonstrate great efficiency. The diesel engine demonstrated at Augsburg ran for only a short time. It was, however, more efficient than other existing engines. In addition, Diesel predicted that his engines would move trains faster than could be done by existing engines and that they would run on a wide variety of fuels. Experimentation proved the truth of his claims; even the first working motive diesel engine (the one used in the Winterthur test) was capable of pulling heavy freight and passenger trains at maximum speeds of up to 160 kilometers per hour. By 1912, Diesel, a millionaire, saw the wide use of diesel locomotives in Europe and the United States and the conversion of hundreds of ships to diesel power. Rudolf Diesel’s role in the story ends here, a result of his mysterious death in 1913—believed to be a suicide by the authorities—while crossing the English Channel on the steamer Dresden. Others involved in the continuing saga of diesel engines were the Britisher Sir Dugald Clerk, who improved diesel design, and the American Adolphus Busch (of beer-brewing fame), who bought the North American rights to the diesel engine.
The diesel engine is related to automobile engines invented by Nikolaus Otto and Gottlieb Daimler. The standard Otto-Daimler (or Otto) engine was first widely commercialized by American auto magnate Henry Ford. The diesel and Otto engines are internal-combustion engines. This means that they do work when a fuel is burned and causes a piston to move in a tight-fitting cylinder. In diesel engines, unlike Otto engines, the fuel is not ignited by a spark from a spark plug. Instead, ignition is accomplished by the use of high-temperature compressed air. In common “two-stroke” diesel engines, pioneered by Sir Dugald Clerk, a starter causes the engine to make its first stroke. This draws in air and compresses the air sufficiently to raise its temperature to 900 to 1,000 degrees Fahrenheit. At this point, fuel (usually oil) is sprayed into the cylinder, ignites, and causes the piston to make its second, power-producing stroke. At the end of that stroke, more air enters as waste gases leave the cylinder; air compression occurs again; and the power-producing stroke repeats itself. This process then occurs continuously, without restarting. Impact Proof of the functionality of the first diesel locomotive set the stage for the use of diesel engines to power many machines. Although Rudolf Diesel did not live to see it, diesel engines were widely used within fifteen years after his death. At first, their main applications were in locomotives and ships. Then, because diesel engines are more efficient and more powerful than Otto engines, they were modified for use in cars, trucks, and buses. At present, motor vehicle diesel engines are most often used in buses and long-haul trucks. In contrast, diesel engines are not as popular in automobiles as Otto engines, although European auto makers make much wider use of diesel engines than American automakers do. Many enthusiasts, however, view diesel automobiles as the wave of the future. This optimism is based on the durability of the engine, its great power, and the wide range and economical nature of the fuels that can be used to run it. The drawbacks of diesels include the unpleasant odor and high pollutant content of their emissions. Modern diesel engines are widely used in farm and earth-moving equipment, including balers, threshers, harvesters, bulldozers, rock crushers, and road graders. Construction of the Alaskan oil pipeline relied heavily on equipment driven by diesel engines. Diesel engines are also commonly used in sawmills, breweries, coal mines, and electric power plants. Diesel’s brainchild has become a widely used power source, just as he predicted. It is likely that the use of diesel engines will continue and will expand, as the demands of energy conservation require more efficient engines and as moves toward fuel diversification require engines that can be used with various fuels.
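The compression-ignition step described above can be checked with a rough calculation. The sketch below assumes ideal adiabatic compression of air (no heat loss) with a ratio of specific heats of about 1.4 and a room-temperature intake charge; real cylinders lose heat to their walls, which is why working engines run somewhat cooler, closer to the 900 to 1,000 degrees Fahrenheit cited above. The compression ratios used are typical values chosen only for illustration.

# Rough, idealized estimate of why compression alone can ignite the fuel.
# Assumes ideal adiabatic compression of air; actual in-cylinder temperatures
# are lower because of heat loss to the cylinder walls.

GAMMA = 1.4                 # ratio of specific heats for air (approximate)
INTAKE_TEMP_K = 300.0       # roughly room-temperature intake air

def compressed_air_temp_k(compression_ratio):
    """Ideal adiabatic compression: T2 = T1 * r**(gamma - 1)."""
    return INTAKE_TEMP_K * compression_ratio ** (GAMMA - 1)

def kelvin_to_fahrenheit(t_k):
    return (t_k - 273.15) * 9 / 5 + 32

if __name__ == "__main__":
    for ratio in (14, 16, 20):   # typical diesel compression ratios (assumed)
        t_k = compressed_air_temp_k(ratio)
        print(f"compression ratio {ratio}: about {t_k:.0f} K "
              f"(about {kelvin_to_fahrenheit(t_k):.0f} degrees Fahrenheit)")

Even the most conservative of these figures is far above the few hundred degrees Celsius at which diesel fuel ignites on its own, which is why no spark plug is needed.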