Friday, February 24, 2017

The Centre for the Study of Existential Risk

The Centre for the Study of Existential Risk seems like one of the best ideas I've heard in years. It is a fairly new organization based in Cambridge, UK. Recent articles in Wired profile its members and the specific risks that they study. For me, the Centre kills two birds with one stone: it attempts both to understand the universe and to identify the most serious risks facing mankind. What is unusual is that it studies actual threats that are often ignored by governments and profit-based organizations. One of the greatest weaknesses of capitalism is its emphasis on short time horizons. Capitalists usually start by thinking that they can make money through their ideas and hard work, and that if all goes well they, their families and their descendants may benefit financially. Entrepreneurs don't typically give much thought to whether their company will be around in a hundred years or what its lasting impact might be. Capitalism encourages short-term thinking related to making money, which may explain why businessmen are often dismissive of the arts and sciences. Particularly in the U.S., it is common for them to focus on their work and to oversimplify other aspects of the world by assuming, for example, that God will take care of everything else. In extreme cases they may even believe that God favors business, with the implication that government should not interfere with it. Their thinking may be that God intended the U.S. to be the world's model for free enterprise. If you start from a theological point of view like this, the problems associated with climate change aren't necessarily ones for human resolution, since God's will is being followed, and he is the final arbiter on that question. It is therefore important to have independent researchers study existential risk, because the people who inhabit the main channels of power in the world are wedded to economic prosperity and may be indifferent to such risks or have conflicts of interest regarding them.

I enjoy reading through the list of risks. When I mention AI, I usually refer to it as a potential solution to problems, but it may also lead to catastrophic outcomes. There are a few conventional risks that are already well-publicized, such as pandemics, nuclear war, climate change, a collision with an asteroid and food shortages. Then there are obscure ones understood by few, such as the accidental annihilation of the universe during a particle accelerator experiment. I was particularly reassured by CSER's interest in tyrannical leaders. After Donald Trump was elected president, they met to discuss whether this constituted an existential threat. Many organizations are having a hard time coming to terms with the Trump presidency, and although the news has been filled with concerns about him since the election, conventional organizations are not well-positioned to make objective, public statements about him now that he is in power. Everyone has been pressured to treat Trump respectfully because he was elected in a democratic process, and many organizations are reluctant to publicly question his suitability or competence as a political leader.

Though it is hard to say how successful CSER will be, it is better than nothing for the risks that aren't generally studied. Its academic environment has both advantages and disadvantages. If the funding isn't predominantly from special interests, it may stand a better chance of producing useful, objective work than is the case with many other public or private research organizations. However, its affiliation with stodgy, careerist academics could entail some of the pitfalls that are common in academia. For example, given my opinion of academic philosophy, I think it unlikely that philosophers as a group will be helpful in this field.
