Technological singularity

The technological singularity is a term with multiple related, but conceptually distinct, definitions. One definition has the Singularity as a time at which technological progress accelerates beyond the ability of current-day human beings to understand. Another defines the Singularity as the culmination of some telescoping process of accelerating computation taking place in this universe since the beginning of human civilization or even life on Earth. Yet another defines the Singularity as the emergence of smarter-than-human intelligence.

Table of contents
1 Introduction
2 Concepts and terms

Introduction

The concept was first mentioned in the book Future Shock by Alvin Toffler. It is based on observations that trend lines for speed of travel, human intelligence, social communication, population, and many other measures showed exponential increases up until about 1995. At that point, many of them became linear, or inflected and began to flatten into limited growth curves.

Belief in the singularity was reinforced by Moore's Law in the computer industry. Vernor Vinge, who had explored the idea in his science fiction, brought it to wide attention with his 1993 essay "The Coming Technological Singularity". Since then, it has been the subject of several futurist writings.

Vinge claims that:

"Within thirty years, we will have the technological means to create superhuman intelligence. Shortly thereafter, the human era will be ended."

Vinge's technological singularity is commonly misunderstood to mean technological progress rising to "infinity." In fact, he refers to the pace of technological change increasing so greatly that our ability to predict its consequences diminishes to virtually nothing, and a person who does not keep pace with it would rapidly find civilization to have become completely incomprehensible.

It has been speculated that the key to such a rapid increase in technological sophistication will be the development of superhuman intelligence, either by directly enhancing existing human minds (perhaps with cybernetics), or by building artificial intelligences. These superhuman intelligences would presumably be capable of inventing ways to enhance themselves even more, leading to a feedback effect that would quickly surpass preexisting intelligences.

The effect is presumed to work along these lines: first, a seed intelligence is created that is able to reengineer itself, not merely for increased speed but for new types of intelligence. At a minimum, this might be a human-equivalent intelligence. This intelligence redesigns itself with improvements, and uploads its memories, skills and experience into the new structure. The process repeats, with redesign extending beyond the software to the computer itself. The mind may well make mistakes, but it keeps backups: failing designs are discarded, and successful ones are retained.
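As a purely illustrative aid (not part of any actual proposal), the keep-the-successes, discard-the-failures loop described above can be sketched in a few lines of Python; the capability numbers and the redesign step are invented for the example:

    import random

    def propose_redesign(capability):
        """Hypothetical redesign attempt: usually a small gain, sometimes a regression."""
        return capability * random.uniform(0.9, 1.3)

    def self_improvement_loop(capability=1.0, generations=20):
        """Toy model of the loop above: test each redesign, keep only improvements."""
        for gen in range(generations):
            candidate = propose_redesign(capability)
            if candidate > capability:
                capability = candidate  # successful redesign is retained
            # otherwise the failed design is discarded and the backed-up
            # current design keeps running unchanged
            print(f"generation {gen:2d}: capability = {capability:.2f}")
        return capability

    if __name__ == "__main__":
        self_improvement_loop()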

Simply having a human-equivalent artificial intelligence may yield this effect, if Moore's Law continues long enough. At first, the intelligence is equal to a human; eighteen months later it is twice as fast, three years later four times as fast, and so on. But because the design of computers would itself be done by these accelerated AIs, each successive step would still take about eighteen subjective months while taking proportionally less real time. Assuming, for simplicity, that computer speed keeps doubling on an unchanged Moore's Law schedule, each step would take exactly half as much real time as the one before. Within about three years (36 months = 18 + 9 + 4.5 + 2.25 + ...) computer speed would approach its ultimate theoretical limits, as the calculation below shows.
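The 36-month figure follows from summing a geometric series: each redesign step takes eighteen subjective months, but each successive step runs on hardware twice as fast, so the real time per step halves:

\[
18 + 9 + 4.5 + 2.25 + \cdots \;=\; \sum_{k=0}^{\infty} \frac{18}{2^{k}} \;=\; \frac{18}{1 - \tfrac{1}{2}} \;=\; 36 \text{ months.}
\]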

However, human neurons transmit signals at only about 200 meters per second, whereas electrical signals in copper travel at roughly 100 million meters per second, a ratio of about 500,000. It may therefore be reasonable to expect an improvement on the order of a million-fold in the intelligence's speed of thought simply from moving from flesh to electronics while remaining the same size.

In this case, the intelligence could double its capacity roughly every 47 seconds (18 months divided by a million). In practice the doubling time would probably start out much longer, because the intelligence would need special machinery constructed for its new mind. However, one of the first improvements would probably be to take control of its own manufacture.
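Taking the figures above at face value (200 m/s for neurons, 100 million m/s for electrical signals, and the rounded million-fold speedup), the arithmetic is:

\[
\frac{10^{8}\ \text{m/s}}{200\ \text{m/s}} = 5 \times 10^{5} \;(\text{roughly a million}),
\qquad
\frac{18\ \text{months}}{10^{6}} \approx \frac{4.7 \times 10^{7}\ \text{s}}{10^{6}} \approx 47\ \text{s}.
\]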

One presumption is that such intelligences can be made small and inexpensive. Some researchers claim that, even without quantum computing, advanced molecular manufacturing could organize matter so that a single gram could simulate a million years of a human civilization every second.

Another presumption is that at some point, with the correct mechanisms of thought, all possible correct human thoughts will become obvious to such an intelligence.

Therefore, if the above conjectures are right, all human problems could be solved within a few years of constructing a friendly version of such an intelligence. If so, proponents argue, constructing such an intelligence would be the most moral possible allocation of resources at this time.

Concepts and terms


Whether such a process will actually occur is open to strong debate. There is no guarantee that we can make artificial intelligences that exceed, or even approach, human cognitive abilities. The claim that Moore's Law will aid in this process is also open to strong debate: given the enormous speedup in computers over the past fifty years and the minimal progress made towards creating "human-like" artificial intelligence, the empirical evidence for the claim is not strong.

The claim that the rate of technological progress is increasing has also been questioned. The technological singularity is sometimes referred to as the "Rapture of the Nerds" by detractors of the idea.

The Singularity Institute was formed to help ensure that the singularity occurs, and that it is "friendly" to human beings.

Prominent theorists and speculators on the subject include:

Vernor Vinge
Hans Moravec
Eliezer Yudkowsky
Ray Kurzweil
Marvin Minsky
Michael Anissimov
Terence McKenna
