Monday, May 5, 2014

The Singularity

I'm about a third of the way through Capital and will eventually make at least one more post on it. Today I'll write about the singularity.

In earlier posts I suggested that government could be automated and that capitalism could be brought to an end if the right technology existed and were put into effect. This view generally fits within a gradualist framework under which new technology becomes incorporated into our lives without any major shocks, setbacks or unexpected turns of events. Alternatively, there are scenarios in which technological developments could suddenly cause radical shifts either for better or for worse.

The singularity, if you haven't heard of it, is the theoretical moment when artificial intelligence surpasses human intelligence, with unpredictable effects that could change Homo sapiens forever. Some proponents, such as inventor Ray Kurzweil, take this seriously and are planning their lives accordingly. Kurzweil hopes to remain healthy and live as long as possible in order to benefit from coming technology that will make him immortal. Some experts think that Kurzweil is unrealistic about both the biology of human longevity and the capabilities of technology. Others, such as physicist Max Tegmark, are more cautious regarding the singularity. Tegmark's view is that we don't know for certain whether it will occur or what would happen if it did, and he suggests that we discuss it ahead of time rather than wait and see.

I am not well-informed on artificial intelligence or biology, but I think a singularity is likely to occur, though I'm not sure when or what the results will be. Since I don't believe that there is anything special about human intelligence, I see no reason why supercomputers couldn't dramatically outthink humans in the not-too-distant future. They are already better at chess and Jeopardy. Certainly computers could have much larger memories and far greater processing capabilities than any human. Once they can be taught to learn, which does not seem insurmountable and already happens at a rudimentary level, why couldn't they outperform us?
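
As a concrete aside, here is a minimal sketch of what learning at a rudimentary level can look like: a perceptron, one of the oldest machine learning methods, inferring the logical AND rule from four examples. The rule, the learning rate, and the number of passes are arbitrary choices of mine for illustration, not a description of any particular system mentioned above.

    # A perceptron inferring the logical AND rule from examples.
    # The rule, learning rate, and number of passes are arbitrary
    # choices made purely for illustration.
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    weights = [0.0, 0.0]
    bias = 0.0
    rate = 0.1

    for _ in range(20):  # repeat the examples until the rule is learned
        for (x1, x2), target in examples:
            output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
            error = target - output
            # nudge the weights toward the correct answer
            weights[0] += rate * error * x1
            weights[1] += rate * error * x2
            bias += rate * error

    # after training, the perceptron reproduces AND on all four inputs
    for (x1, x2), target in examples:
        output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        print((x1, x2), "->", output)

Nothing here resembles intelligence; the point is only that a machine adjusting itself to match examples is already routine at a small scale, and the open question is how far that scales.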

Among the positive potential outcomes, humans might live much as they do now, but without having to work, and with increased longevity. There could be a benign merger between humans and machines that would create a new species without eradicating what we now think of as human nature. Conflicts might be resolved peaceably, the ecosystem could be managed better, and in theory everyone could be happy.

One negative outcome would be an uncontrolled rampage by supercomputers that don't act in the interests of humans. This has been a subject of science fiction for many years. It could probably be prevented but would require advance planning.

Another negative outcome, and perhaps more likely, would be the use of supercomputers to benefit one group of people but not others. Under this scenario, a small group of wealthy technocrats might rule the world, neutralizing or eliminating their opponents and accelerating their own evolution while excluding others. Or this could occur at a national level, in which case the supercomputers would simply represent the most advanced weaponry.

At present this may all appear too speculative, but I think one of these outcomes is possible. Keep in mind that the type of supercomputer I'm talking about here might be capable of making improved versions of itself, anticipating all human behavior, developing new energy sources that we have been unable to develop, designing and making weapons beyond our comprehension, and obviating the need for human labor of any description. It might even write better novels, short stories, poems, and blog posts than humans.
