They said it couldn't be done: 7 impossible inventions
NOTHING dates the 1987 movie Wall Street like the $4000 cellphone clutched by financier Gordon Gekko. It was the size of a brick and he could only talk for 30 minutes before having to recharge it.
In the 1980s, it was difficult to imagine the capabilities of today's smartphones. Cellphone users and engineers alike would have considered it unlikely, if not impossible, that so much could be packed into such a small case. To understand why, consider this: if you were to try building an iPhone using equivalent components from the 1980s, just how big would that phone be?
Let's start with the batteries. The Motorola DynaTAC phone used by Gekko had a nickel-cadmium battery that was thicker and more than twice the length of an iPhone.
Second, antennas. The iPhone has a pair of them - one for cellular reception, the other for GPS, Wi-Fi and Bluetooth signals - disguised as the stainless steel frame that forms the phone's rim. The DynaTAC's antenna was less subtle, sticking out 13 centimetres.
The iPhone's GPS receiver is a single chip the size of a small child's fingernail, according to a component analysis by iSuppli, a market research firm in El Segundo, California. Civilian GPS receivers of the mid-1980s would fill a hefty backpack, not counting the car battery needed to power them.
To sense motion and orientation, the iPhone has a three-axis gyroscope and an accelerometer, both in the form of silicon microelectromechanical systems (MEMS) devices mounted on circuit boards. Only mechanical versions were available in the 1980s, and although the accelerometers were small, the gyros of the time were a few centimetres across, and three were needed to monitor motion in three dimensions.
The iPhone doubles as a music player by storing songs in its flash memory. In Gekko's day, the portable audio technology of choice was the Sony Walkman, which would fill your pocket. (Since they're not components, we won't include the hundreds of cassettes required to store the thousands of songs that fit on an iPhone.)
The iPhone 4, released in 2010, includes a pair of digital cameras. Only film cameras were available in the 1980s, and we would need to add two of those. The iPhone can also record digital video. In the 1980s, video capture was a job for a VHS camcorder, which could fit into a small backpack.
A hallmark of the iPhone is a colour touchscreen. The touchscreen's first appearance in a consumer device dates back to 1983: the 23-centimetre screen of Hewlett-Packard's HP-150 personal computer. It was monochrome green, but the technology was there for Gekko to swipe and point with one finger. The downside is that it would have come with a bulky cathode ray tube.
The components for the iPhone à la 1985 we've listed so far would fill a large wheelbarrow. But we have left out something important. "The beauty of the iPhone is that they squeezed desktop and mobile computing down into a phone," says Wayne Lam, a senior analyst at iSuppli.
The processor at the heart of the iPhone 4 can perform up to a billion operations per second (the new iPhone 4S is even zippier). You might have matched that in the mid-80s if you had bought the Cray X-MP, then the world's most powerful supercomputer. But the Cray would have filled an office cubicle and also required an industrial-strength refrigerator to remove the waste heat.
So cancel the wheelbarrow. To haul the 1985 iPhone around, we're going to need a truck. Jeff Hecht
"A waste of time. Nobody will use it, ever" Thomas Edison, US inventor, 1889 On alternating current
Energy-saving light bulbs
LIGHT bulb makers were worried. In 1973, the oil crisis was starting to bite, so people began cutting their electricity use. This slashed bulb sales.
At General Electric, engineer Edward Hammer was assigned to develop an energy-efficient replacement for the incandescent bulb. Hammer wanted to build a bulb based on fluorescent tubes, which generate light when mercury atoms in the vapour inside them are excited by a stream of electrons. His colleagues told him he was wasting his time. The cost would be too high, they warned, and efficiency improvements would be meagre.
"I was told that this lamp wouldn't even work," recalls Hammer. "So really, when I built the first one I wanted to see how bad it was going to be."
One problem was that such a bulb required the maker to curve the fluorescent tube into a spiral. This meant a lot of its light would be reflected multiple times within the coil, creating losses and reducing efficacy. Hammer got around this hurdle by leaving a small gap between each turn of the spiral, so that less light was trapped.
Today, compact fluorescent lamps make up 30 per cent of bulbs sold in the US, up from 1 per cent in 1990. Around 247 million CFLs are used in UK homes. "In energy-efficiency terms we regard the invention of the CFL as the most important development in the history of domestic lighting," says the UK Energy Saving Trust's James Russill. Helen Knight
IN 1920, The New York Times ran an editorial criticising one of the great pioneers of rocketry, the aeronautical engineer Robert Goddard. He had published a report mapping out many of the basic rules of rocket flight. The New York Times was not convinced. It believed that a rocket could not accelerate in space because it could not push against a vacuum. Of Goddard's contention that a rocket could reach the moon, it declared: "That will be believed when it is done." It was an unedifying start for space journalism.
Today, there are over 100 space rocket launches every year. While space travel itself is hardly everyday, our lives are entwined with the technology we have placed in orbit. Planes, trains and automobiles navigate using a constellation of satellites, our weather forecasts come courtesy of sensors zooming high above us, and television, radio and internet-data signals are broadcast from orbiting transmitters.
Isaac Newton could have told The New York Times that a rocket can accelerate in a vacuum. Still, before Goddard it was unthinkable that a rocket could reach escape velocity. Goddard's greatest contribution to rocketry was probably the invention of a nozzle that both cools exhaust gas and accelerates it to hypersonic speeds in one direction - dramatically improving the efficiency of rocket engines.
"It's hard to imagine just how difficult space flight must have seemed to the pioneers of rocketry," says Steve Garber at NASA's History Office. And if getting into space seemed hard, surviving the return journey looked impossible, as any vehicle orbiting Earth would be travelling fast enough to be vaporised completely during re-entry. Two NASA engineers, Harvey Allen and Alfred Eggers, eventually solved the problem with the counterintuitive discovery that a blunt heat shield, rather than a streamlined one, would best survive re-entry.
The New York Times eventually acknowledged its error a few days before Neil Armstrong landed on the moon. "It is now definitely established that a rocket can function in a vacuum as well as in an atmosphere," it wrote. "The Times regrets the error." Justin Mullins
"It would cause passengers, unable to breathe, to die of asphyxia" Dionysius Lardner, University College London, 1830 On high-speed rail travel
SHUJI NAKAMURA stood at the podium before a packed auditorium. He took out his laser pointer, and raised it towards a blank wall. The audience looked up in awe. Dancing above them was a bright blue spot.
That was in 1995, at the Materials Research Society meeting in Boston, Massachusetts. It's hard to imagine why such a simple laser pointer would generate the buzz it did, but the blue laser that Nakamura presented had seemed out of reach only a few years earlier. Today, blue lasers can be found in living rooms all over the world. Without them, you couldn't watch a Blu-ray movie.
Semiconductor lasers convert electric current directly into light. By the 1990s, they were widely used in fibre-optic communications, compact disc players and laser printers. But, to echo Henry Ford, you could get any colour of laser you wanted as long as it was red (or infrared).
Why? Semiconductor lasers (and light-emitting diodes) emit light when an electron carrying current through the crystal drops into a vacant energy state - a "hole" - and releases its energy as a photon. The composition of the crystal determines the amount of energy released, and that, in turn, determines the wavelength of the light.
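The link between that released energy and the colour of the light is Planck's relation for a single photon:

```latex
E = h\nu = \frac{hc}{\lambda}
\quad\Longrightarrow\quad
\lambda = \frac{hc}{E} \approx \frac{1240\ \text{eV nm}}{E\ [\text{eV}]}
```

A red photon at about 650 nanometres corresponds to roughly 1.9 electronvolts, which a gallium-arsenide crystal can supply; blue light near 450 nanometres demands about 2.8 electronvolts, well beyond its reach - hence the need for a wider-bandgap material such as gallium nitride.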
The inherent properties of the gallium-arsenide compounds used in red and infrared semiconductor lasers prevent them from emitting wavelengths in other parts of the spectrum. Physicists had managed to coax some blue light from another material, gallium nitride, but it was feeble.
In 1993 Nakamura produced the first bright blue LEDs at the Nichia Corporation in Japan. His innovation was growing gallium nitride crystals almost free of flaws, so that far more of the electrons' energy emerged as light instead of being lost at defects. Today, these LEDs are used in cheap flashlights, night lights and domestic lighting that is more energy efficient than fluorescent bulbs.
Within two years, Nakamura had made a blue semiconductor laser using the same approach, except that it needed much more drive current than an LED requires. Consumer electronics companies were delighted. It meant they would soon be able to put a high-definition movie on one optical disc. Recording capacity depends on how many spots can fit onto a disc, and shorter wavelengths - at the blue end of the spectrum - allow smaller spots to be created and read back. Goodbye DVD, hello Blu-ray. Jeff Hecht
RECALL how radical Wikipedia seemed 10 years ago. Its contributors receive no payment or training. They grapple with an unintuitive editing system. No one is in charge of fact-checking, article selection or any of the jobs that editors traditionally oversee.
"Everyone 'knew' that people don't work for free, and if they did, they could not make something useful without a boss," wrote the futurist Kevin Kelly in a piece about apparent impossibilities, published on his blog in August.
In fact, even Wikipedia's creators started with a more traditional approach: an online encyclopedia called Nupedia. Backed by entrepreneur Jimmy Wales, Nupedia got off to a sluggish start in March 2000. Most contributors were expected to have some kind of scholarly track record in their subject. Articles had to complete a seven-stage review. By November, just two full-length articles had been published.
Both Wales and Larry Sanger, Nupedia's editor, realised the project was in trouble, so they launched a bold attempt to produce articles more quickly. In January 2001 they started a second encyclopedia to which anyone could contribute, designed as a "feeder" site for Nupedia. Wales feared his expert Nupedia contributors would grumble, but knew he had little choice: "If we didn't do it we would have to close anyway." Thus Wikipedia was born.
Wikipedia exists in more than 280 languages and contains almost 20 million articles. It is the first place many people go for information. It is also proof that, when it comes to the communication of knowledge, not all work has to be paid or managed. Give motivated volunteers the tools they need, and it turns out they can self-organise - which, in itself, is a pretty radical idea. Jim Giles
"This glib supposition is a completely unscientific utopian dream, a childish bugaboo" Robert Millikan, physicist, 1928 On nuclear energy
ONE of the greatest inventions of the digital age is in your pocket, and chances are you don't know it's there. It's called a turbo encoder: a device that allows cellphones as well as satellite televisions to send and receive signals with near-perfect clarity.
Since the invention of the telegraph, engineers have been devising ways to communicate clearly over noisy channels. In 1948 Claude Shannon of Bell Telephone Laboratories proved mathematically that you can transmit a signal with essentially perfect fidelity - despite noise - by adding "redundant" information. It sounds counterintuitive, but you use a similar approach when asked to spell out your name or address. "D as in delta," you may say. Though the "as in delta" is redundant, it avoids confusion between the similar-sounding letters "B" and "D".
To add redundant information to the signal, algorithms called "codes" are used to shuffle and add bits, using patterns known to sender and receiver. The simplest code would repeat the message content a few times, so errors would be obvious. But since repetition balloons the amount of data to be sent, more efficient and complex encoding methods are used.
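The simplest scheme mentioned above - repetition - can be sketched in a few lines of Python. This is a toy illustration only, not the turbo code itself: each bit is sent three times, and the receiver takes a majority vote, so any single flipped bit is corrected at the cost of tripling the data.

```python
def encode(bits, n=3):
    # Repeat every bit n times: [1, 0] -> [1, 1, 1, 0, 0, 0]
    return [b for b in bits for _ in range(n)]

def decode(received, n=3):
    # Majority vote over each group of n repeats
    return [int(sum(received[i:i + n]) > n // 2)
            for i in range(0, len(received), n)]

message = [1, 0, 1, 1]
sent = encode(message)          # 12 bits on the wire for 4 bits of message
sent[4] = 1                     # noise flips one bit in transit
assert decode(sent) == message  # the majority vote still recovers the message
```

The threefold blow-up in traffic is exactly why real systems use far more efficient codes: turbo codes approach the same reliability while adding only a modest amount of redundancy.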
Shannon derived a speed limit for error-free transmission of data, depending on the bandwidth of the channel and on the signal-to-noise ratio. The "Shannon limit" cannot be broken. But it was once thought impossible to even get near it. The best encoding methods fell far short of the limit, and with nobody able to think of any other techniques, it became an article of faith that you could not do better.
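Shannon's speed limit has a precise form. For a channel of bandwidth B (in hertz) and signal-to-noise power ratio S/N, the maximum rate C of error-free transmission, in bits per second, is:

```latex
C = B \log_2\!\left(1 + \frac{S}{N}\right)
```

For example, a 3-kilohertz telephone line with a signal-to-noise ratio of 1000 (30 decibels) has a capacity of about 3000 × log₂(1001), or roughly 30 kilobits per second - no code, however clever, can reliably do better.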
In 1993, Claude Berrou and Alain Glavieux of the French National School of Telecommunications rocked the telecom world by getting to within 10 per cent of the Shannon limit. Their "turbo code" had three main ideas behind it: concatenation, interleaving and message passing.
Concatenation means that each message is encoded twice - using conventional coding. Interleaving means that the message is scrambled before the second encoding. This turns very similar segments (analogous to the sounds of the letters "B" and "D") into more easily distinguished ones. The price is that decoding gets harder, because you have scrambled the message - and this is where message passing comes in.
Berrou and Glavieux showed that you can decode with almost perfect accuracy by having two decoders working in tandem at the receiver's end, passing guesses back and forth. Think of it as a Sudoku puzzle with one decoder keeping track of the rows while the other keeps track of the columns. As the correct decoding of a row becomes more apparent, the arrangement provides more information about the columns as well.
Researchers later realised that a turbo-like method had been proposed in 1960 by Robert Gallager at the Massachusetts Institute of Technology. His scheme was disregarded as infeasible using the circuitry of the time. Now both his and the French team's method are crucial ingredients in 4th-generation cellphone networks and high-definition TV.
Since the Shannon limit cannot be broken, your great-grandkids and beyond will probably use turbo codes too. You have a piece of Star Trek technology in your pocket. Dana Mackenzie
IN THE 1986 Encyclopedia Americana, translator of literature J. M. Cohen was quoted as saying that it is impossible "to imagine a literary-translation machine less complex than the human brain itself, with all its knowledge, reading, and discrimination".
He didn't think much of his computer counterparts, and he wasn't the only one. At the time, machine translation was poor and showing little sign of improvement. Today it is still not perfect, but online translations between more than 60 languages are available at the click of a mouse, and will soon achieve 80 per cent accuracy. What made this possible?
Computerised translation began in the 1950s with the US military looking for an edge in the cold war. The logical place to start seemed to be to program machines with English and Russian vocabularies and grammars, and then hope they would do a good job of comparing the two. The results were often garbled, but the approach stuck for most of the 20th century with only incremental improvements.
In the early 1990s, Pentagon researchers tried a new tack, called statistical machine translation. They started programming computers to look for language patterns instead, with computers inferring translations based on probability. Studying a Chinese-English menu, for example, would turn up several instances of "fried vegetables" and "rice". An algorithm could then reasonably guess how to translate "vegetable fried rice" into Chinese.
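The menu example can be sketched as a minimal Python phrase table. Every phrase and probability here is invented for illustration - real statistical systems also model word reordering and score whole sentences with a language model, both omitted - but the core move is the same: pick the most probable translation each phrase has received in the training data.

```python
# Toy phrase table: English word -> list of (Chinese phrase, probability),
# as might be inferred from a bilingual menu corpus. All entries invented.
phrase_table = {
    "fried": [("炒", 0.9), ("煎", 0.1)],
    "rice": [("饭", 0.8), ("米", 0.2)],
    "vegetable": [("蔬菜", 0.7), ("菜", 0.3)],
}

def translate(sentence):
    # Choose the highest-probability option for each known word,
    # passing unknown words through untouched.
    out = []
    for word in sentence.split():
        options = phrase_table.get(word)
        if options:
            out.append(max(options, key=lambda t: t[1])[0])
        else:
            out.append(word)
    return "".join(out)

print(translate("vegetable fried rice"))  # 蔬菜炒饭
```

Even this crude version captures the key shift: no grammar rules are programmed in; the translations fall out of counting what tends to appear alongside what.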
Things really began to take off in the early 2000s, when massive caches of digital documents became available. In 2003, a researcher called Franz Josef Och of the University of Southern California in Los Angeles entered a Pentagon-sponsored machine translation contest. Och had honed his algorithm using millions of recently digitised UN documents issued in six languages. With perhaps the largest cache of comparative data yet, Och scored top honours.
Google hired Och the next year and used his techniques to relaunch Google Translate. Google now has access to a mind-boggling number of online documents to train its algorithms, and Google Translate routinely achieves top accuracy scores in annual assessments by the US National Institute of Standards and Technology.
Ashish Venugopal of Google Translate says computers will soon handle most translation needs. All that will be left for humans to do is polish complex language such as poetry or legal documents - "providing the last mile", as Venugopal puts it. Peter Nowak
"They'll prove to be a hoax" Lord Kelvin, president of the Royal Society, 1883 On X-rays