[Existential Risk/Opportunity] Singularity Management
April 2015

Contents:
- Pep Talk
- Book Reviews as Things We Can Do
- Book Review: Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
  Reviewed by James Tankersley Jr.

Copyright 2015 Global Risk SIG. Rights, except nonexclusive multiple use, retained by authors. This publication is produced by the Global Risk Reduction Special Interest Group, a SIG within American Mensa. Content expressed here does not reflect the opinions of Mensa, which has no opinions. To join Mensa or just see what it is about, visit www.us.mensa.org. A copy of this publication is available at http://www.global-risk-sig.org/pub.htm.

Pep Talk
By James Blodgett

One of the purposes of this newsletter is to put together a roadmap for making personal contributions to global risk reduction, a map grounded in plausible philosophy. I am even more ambitious than that: following Nick Bostrom, I define global risk to include the risk that humanity will not reach its great promise, so facilitating technological singularities is part of the mix. This sounds like a job beyond even Superman. How can we simple humans contribute to such an enormous task?

The litany of scientific "saints"--people like Newton and Einstein--shows us that individuals can make a difference. Our world is amazingly different from the world of even 300 years ago, mostly because of their work. I have just been reading a charming and genuinely non-religious book about real saints: Phyllis McGinley, Saint Watching. Several of her saints were Jesuits. Jesuits take their religion seriously and act on it, in the world and intellectually, so much so that, interspersed with their successes, they have been preached against and even banned within the formal Catholic hierarchy. For an example of a Jesuit scientist-priest whose philosophies would fit in our group, Google "Teilhard." He got in trouble, too. A few years ago I read the prediction that no Jesuit would ever become pope. Now there is a Jesuit pope. He joked about his electors: "God forgive them, for they don't know what they have done." Now he is shaking things up, apparently to popular acclaim. If we are going to reduce global risk, it would help to take the task as seriously as the Jesuits take their religion.

Religion is a reason some people don't worry about our fate: they think that God is looking out for us. However, even within the Christian tradition, God Himself is a global risk, having once decided to wipe out most of the human species (Noah's flood). I like to think that's a myth, but He might have had reasons. Despite fundamentalists, it seems reasonable to think that God works through things like evolution, evolution that may transcend Earth and wait billions of years for another species in another galaxy, or perhaps even find that intelligent life is a dead end. (The Fermi paradox hints at that.) I like to think that God is rooting for us to work out here, but the evidence suggests that He (if present) is letting the nature He designed take its course. (That evidence is the many disasters and holocausts in which God did not intervene, at least not in ways that leave a clear historical record.) If God likes humanity, we are doing His work, perhaps without Him as a safety net. So we had better do a good job. If God is not present, the result is still that we had better do a good job.

Can limited individuals make contributions that help the whole world? History records many examples of heroes who did exactly that. We have at least some small probability of being the next generation of heroes. I like the logic of expected value (probability times value), a respected criterion (often the ultimate criterion) within decision theory. Even a small probability of saving the world has a high expected value, because the underlying value in that equation is billions of human lives. Even a small probability that our ideas will be heard invokes this logic. And even if we fail, trying to save the world is an interesting hobby.
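To make the arithmetic concrete, here is a minimal sketch of that expected-value logic. The probability and population figures below are assumptions invented purely for illustration, not estimates:

```python
# Expected value = probability * value. Both numbers below are
# illustrative assumptions, not estimates.

p_success = 1e-6        # an assumed one-in-a-million chance that one's effort matters
lives_at_stake = 7e9    # roughly the world population circa 2015

expected_lives_saved = p_success * lives_at_stake
print(f"expected value: {expected_lives_saved:,.0f} lives")  # prints 7,000 lives
```

Even at one-in-a-million odds, the expected value is thousands of lives, which is the point of the argument.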

What if we make a mistake and screw things up? An important principle is: "First, do no harm." However, that is not exactly right. Modern drugs sometimes have side effects; drugs are approved when careful study shows that the good effects outweigh the bad. The present world is less than perfectly safe for the human species, and doing nothing is itself a decision, since "not to decide is to decide." I think our odds are best if we think things out as carefully as we can, then act carefully on our best thoughts, working with others within approved frameworks that encompass our whole species. That is only fair, and it reduces the probability of mistake. The process of determining a direction for collective action can be raucous, but done well, a collective synthesis of diverse ideas is better than the ideas of most individuals. (By contrast, some Greeks thought a philosopher king better than a democracy. The problem is determining who is the true philosopher. But that is no problem: I am available! (This is a joke.))

Book Reviews as Things We Can Do
By James Blodgett

The first step in "saving the world" is learning what others know of the territory. Quite a few people have contributed ideas relevant to this quest; there are many relevant books, papers, and discussions. So the first step is a review of some portion of the literature. A second step is to share what one has learned, adding one's own assessment and perhaps one's own contribution. There may be other steps as we develop a strategy for effective activism, but we can't develop that strategy without knowledge of at least the immediate territory.

Right here in this SIG and in this publication, we can implement something like a graduate seminar in which each participant learns some area and presents results. I suggest reading books and writing reviews of them, reviews to be published here. "Book reviews" is a loose description, since I mean to include ideas expressed in media and in locations other than books.

Writing a review can be a first step on the intellectual path for the reviewer. Published reviews also give the rest of us a glimpse of the material the writer has found, and may encourage us to read or review the original material. Our reviews might even influence the larger world, since this SIG publication is posted on the Internet. A book review published here is also a real citation of your real work; accumulate a bunch of those and you begin to build an intellectual portfolio. There are other ways to contribute, but a bit of this is good preparation for any of them.

There are lots of ways to search the literature. Start with a few basic ideas, and chase them through various publications, using footnotes and references as indicators of what else has been done. The Internet is a good hunting ground, and so are libraries and booksellers. Google, Wikipedia, and human reference librarians are good guides.


Here are a few leads to get you started, consisting of books and papers you might review:

Sir Martin Rees, Our Final Hour: A Scientist's Warning: How Terror, Error, and Environmental Disaster Threaten Humankind's Future In This Century—On Earth and Beyond, Basic Books, 2003.

Willard Wells, Apocalypse When?: Calculating How Long the Human Race Will Survive, Springer, 2009.

Eric E. Johnson, "The Black Hole Case: The Injunction Against the End of the World," Tennessee Law Review, Vol. 76, 2009, pp. 819-908. Preprint available at: http://arxiv.org/abs/0912.5480

Milan M. Cirkovic, Anders Sandberg, & Nick Bostrom, "Anthropic Shadow: Observation Selection Effects and Human Extinction Risks," Risk Analysis, Vol. 30, No. 10, 2010, pp. 1495-1506. Available at: http://www.nickbostrom.com/papers/anthropicshadow.pdf


The following describe potential positive singularities, part of the promise the human species might lose if our civilization collapses or we go extinct. Some may also be protective against risk.

Gerard K. O'Neill, The High Frontier: Human Colonies in Space, William Morrow and Company, 1977.

Philip Metzger et al., "Affordable, Rapid Bootstrapping of Space Industry and Solar System Civilization," Journal of Aerospace Engineering, Vol. 26, No. 1, January 2013, pp. 18-29. Preprint available at: http://www.philipmetzger.com/blog/affordable-rapid-bootstrapping-space-industry-solar-system-civilization/

Stuart Armstrong & Anders Sandberg, "Eternity in six hours: Intergalactic spreading of intelligent life and sharpening the Fermi paradox," Acta Astronautica, Vol. 89, August–September 2013, pp. 1-13. Available at: http://www.fhi.ox.ac.uk/intergalactic-spreading.pdf

Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology, New York: Viking Books, 2005.


Book Review

Superintelligence: Paths, Dangers, Strategies, 343 pages, © 2014 by Nick Bostrom, Oxford University Professor of Philosophy and Director of the Future of Humanity Institute.

Reviewed by James Tankersley Jr., Software Engineer and Assistant Coordinator,
Global Risk Reduction Special Interest Group, a SIG within American Mensa Ltd.

This book focuses on existential risks from a superintelligence explosion (a situation where artificial intelligence comes greatly to exceed the intelligence of humans), how and when this might happen, and strategies that might allow humans to maintain control over machines whose intelligence dwarfs their own.

Professor Bostrom argues that superintelligence may be the greatest existential threat humankind faces, and his book greatly expands on the definition and warning proposed in 1965 by Alan Turing's chief statistician I.J. Good, who helped break the German Enigma code during WWII:

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”
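Good's mechanism is a feedback loop: each improvement in machine intelligence makes the next improvement easier. A toy numerical sketch of that loop (my illustration, not a model from the book, with all parameters assumed) shows how a recursive gain compounds faster than ordinary exponential growth:

```python
# Toy model of Good's feedback loop: each generation's improvement
# factor grows with its current capability, so growth accelerates.
# All numbers are assumptions chosen for illustration.

capability = 1.0        # 1.0 = rough parity with human designers
improvement_rate = 0.1  # assumed gain per generation, scaled by capability

for generation in range(1, 11):
    capability *= 1 + improvement_rate * capability
    print(f"generation {generation:2d}: capability = {capability:.2f}")

# Capability roughly sextuples in ten generations, and the per-generation
# gains keep growing; the continuous analog blows up in finite time.
```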

The book discusses the history of artificial intelligence (AI), starting with a team of professors at Dartmouth College in 1956, and follows the field's growth, its main approaches (including neural networks, genetic algorithms, etc.), and the successes and failures of AI up through the present day.

A large focus of the book is the inevitability of a superintelligence explosion: what forms this intelligence might take (including oracles and “genies”), how soon it is likely to happen, and possible strategies for maintaining some level of control before superintelligent computers pass beyond human control. (The book estimates a 10% chance of a superintelligence explosion by 2030 and a 90% chance by 2100.)

The book is very thorough in its careful and detailed analysis. As a professional computer programmer who once worked for the neural network company HNC Software, I have long been aware of the potential power of artificial intelligence technology, and I found this book fascinating, though possibly a bit pedantic at times.

All in all, I felt this is an important work on the existential risk of superintelligence, and a good starting point for future discussion and analysis of superintelligence safety.

4/19/15