Wednesday, March 28, 2018

Life 3.0: Being Human in the Age of Artificial Intelligence II

The book on the whole provides a scattershot view of the future of AI. Tegmark seems to include snippets of just about everything he knows on the subject. While one does get exposure to many aspects of AI, the book lacks focus throughout, and, in my opinion, Tegmark draws far too much from the wide range of science fiction that he has apparently read. Instead of the multiple scenarios that he brings up, I would have preferred a few basic categories: (1) independent superintelligent AI acting benevolently toward humans; (2) independent superintelligent AI acting maliciously toward humans; (3) superintelligent AI controlled by humans and acting benevolently toward humans; and (4) superintelligent AI controlled by humans and acting maliciously toward humans. Since one of the underlying themes of the book is the existential risk associated with AI, I think these would have been a better starting point. He includes many speculative ideas from all sources and organizes them into groups without reaching any definitive conclusions. The book is meant to be a conversation-starter for those who are interested in the topic, and, as such, it leaves each topic too open for my liking. I would have found it more effective if he had restricted himself to probable scenarios, which would have shortened the book considerably. Some chapters veer off into pie-in-the-sky futures that have little likelihood of ever materializing. However, the book warrants attention, since Tegmark is concerned about existential risk and is one of the founders of the Future of Life Institute, one of the very few organizations in the world that studies this important topic.

Tegmark says very little about what I think is one of the most likely scenarios: superintelligent AI controlled by some humans and acting maliciously toward other humans. He spends what I consider too much time on independent superintelligent AI destroying mankind. Where I seem to differ with him is in my understanding of life. Almost the entire book is framed in terms of goals, whether the goals of humans or of superintelligent AI. In my view, goals are a minor aspect of humanity. We are no different from other animals in that we are driven by DNA-encoded behavior, which generally leads us to reach adulthood, engage in sex, have children, and raise them. Goals play no role in this except in the sense that we happen to superimpose an intellectual schema on our behavior; in reality we would most likely behave exactly the same way without any deliberate plans to raise families. Though some aspects of modern society, such as the availability of birth control, have changed the landscape a little, in a biological sense we are hardly any different from people who lived hundreds of years ago. Speaking for myself, I have never been goal-oriented, and it seems possible that Tegmark and his cohort, which includes Elon Musk, are goal-driven in the extreme but hardly representative of most people. They may also be ascribing their goal hysteria to inanimate objects such as superintelligent AI. In my view, the outcomes that we prefer have no meaning outside the human sphere, and it is folly to think that sophisticated computers would have comparable preferences. We only think that living is good and death is bad because we have a biological imperative, and that imperative would not be shared by superintelligent AI unless it were programmed into it. Being dead or alive makes no difference to non-organisms, and it may be that Tegmark is unwittingly engaging in anthropocentric conceit. Thus, I think Tegmark is somewhat misguided in not devoting more attention to the possible abuse of superintelligent AI by an individual or group that doesn't represent the interests of mankind as a whole.

I did not find most of the book objectionable, but I didn't pay close attention to much of it, because I was not interested in many of the subjects. The only section that I thought was completely incorrect was Tegmark's view on intelligent extraterrestrial life. He proposes an obscure statistical model that indicates a low probability of other intelligent life anywhere in the universe. On this front, I go with more mainstream thinking. If one assumes that there is no magical ingredient to the formation of life, and that the evolutionary processes on earth that led to our existence are not unusual, the obvious procedure is to estimate how many sun-like stars there are in the universe and how many of those are likely to possess planetary systems like the solar system (see the rough sketch below). Our sun isn't unusual, and many stars have planets. Thus, given that there are billions of galaxies, each containing billions of stars, it seems likely that earth-like conditions aren't all that rare. Furthermore, there is no reason to dismiss the possibility that life has emerged on planets orbiting stars unlike the sun. At one point, Tegmark refers to himself as crazy, and here I can see why. Another section that I could have done without is the chapter on consciousness. Tegmark remains neutral on the topic, but I find it mostly irrelevant. I think consciousness is simply a biological feature that amounts to little more than self-awareness. As I've said, there is a continuum between small mammals and humans, and there is no marked difference between chipmunk-level consciousness and human-level consciousness. For mammals, consciousness seems to be a byproduct of how the brain operates, and, to me, higher consciousness simply refers to more sophisticated brain function. There is no need to worry about consciousness in AI, since it would not exist unless self-awareness were programmed in.
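To make the mainstream intuition about extraterrestrial life concrete, here is a minimal back-of-envelope sketch, in Python, of the kind of estimate I have in mind. Every count and fraction in it is my own illustrative guess, not a figure from Tegmark's book or from any astronomical survey; the point is only to show what happens when even conservative factors are multiplied together.

    # Rough Drake-style estimate of potentially earth-like planets.
    # Every value below is an illustrative guess, not a measured figure.
    galaxies = 2e12            # order-of-magnitude galaxy count for the observable universe
    stars_per_galaxy = 1e11    # order-of-magnitude average number of stars per galaxy
    frac_sunlike = 0.1         # fraction of stars broadly similar to the sun
    frac_with_planets = 0.5    # fraction of those stars with planetary systems
    frac_earthlike = 0.01      # fraction of such systems with an earth-like planet

    candidates = (galaxies * stars_per_galaxy * frac_sunlike
                  * frac_with_planets * frac_earthlike)
    print(f"Potentially earth-like planets: {candidates:.0e}")  # roughly 1e+20

Even with these deliberately stingy fractions, the product comes out to roughly 10^20 candidate planets, so any model concluding that intelligent life is probably unique to earth must assume that the remaining per-planet odds of life arising are smaller than about one in 10^20, and that assumption strikes me as the weak link in Tegmark's reasoning.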

In a similar vein, there is what I think of as a conceptual misunderstanding among many AI futurists. They envision futures in which, as immortal cyborgs or digitized people, they roam the universe and populate other regions for eternity. It seems to me that they are extrapolating from their current mental states to their future mental states without considering the significant changes that might occur in the process. What if, with superintelligence, they soon know all that they can ever know about the universe? How might that affect their enthusiasm for exploration and discovery? What if, once they have merged with superintelligent entities, immortality suddenly loses its appeal? And if they do in fact become immortal, what would be the point of reproduction? I don't think they have considered the ways in which their current thinking is skewed as it only can be in living organisms, or how their outlook might change. As I said in an earlier post, it is possible that advanced extraterrestrials that reached superintelligence may have opted for death over life.

One of Tegmark's primary purposes in writing this book and founding the Future of Life Institute has been to increase awareness of the situations that could develop as AI advances. My feeling is that if it advances slowly, in incremental steps, and different groups reach comparable technological levels at roughly the same time, it will be possible to enact safeguards in a manner similar to those adopted for biological weapons. However, if AI research makes a sudden major advance that is available only to one group, there is a significant chance that all bets will be off. In that case, the risk of abuse of power would be significant, and there may not be enough time to enact any safeguards. This kind of thinking is so far from public and political awareness that we can only hope for extremely slow and coordinated development of AGI in the coming years.
