Why Support the Singularity Institute?
© 2002 by Anand
trans_humanism@msn.com
www.singinst.org
Permission granted to redistribute
“We must use time wisely and forever realize that the time is always ripe to do right.”
—Nelson Mandela
If you could help accomplish the greatest good for the greatest number by the safest and most leveraged means, would you do it? Would you do it if you had enough resources to support your own and your family’s well-being and happiness? What would convince you to do it?
Human intelligence determines how well we interact with the world. It determines what we’re capable of doing, what problems we’re capable of solving, and how well we solve them. What will happen if humanity improves its intelligence? Improved intelligence would be an effective means of addressing every problem we presently face, or may face, and of achieving an immediate and global improvement in our condition.
The most leveraged means to maximize good for the greatest number, and therefore the most humanitarian course of action, is the safe improvement of humankind’s intelligence, an accomplishment known as the Singularity. How can this be done? Presently, the safest and most likely option is the creation of Artificial Intelligence (AI) that is beneficial, non-harmful, and altruistic towards humankind. Friendly AI will help us eliminate involuntary disease and pain, violence and death, hunger and poverty. It will respect and uphold an individual’s rights, and take action to fulfill an individual’s requests. The Singularity Institute for Artificial Intelligence (SIAI) is working to create such an AI.
Why Artificial Intelligence? The ongoing exponential advances in computing power, data storage, memory, Internet bandwidth, connectivity, and transmission speed, together with the powerful advantages of computer programs and AIs over humans, all support the likelihood of AI being developed before other advanced technologies. Thus, to ensure a safe Singularity, one must ensure a safe AI.
The following reasons explore why SIAI views the design and development of Friendly Artificial Intelligence as important and worth your support.
One: Civilization has three long-term options. The first is the improvement of our intelligence. There is evidence this can be accomplished by modifying the human genome to increase cognitive performance; developing AI that understands, modifies, and improves its own design; scanning and reproducing the functionality of a human brain on a different medium; developing brain-computer interfaces that allow external tasks to be performed and commanded by an individual’s thought; or creating intelligence on the Internet. (If knowledge and technology are not used to improve humanity’s intelligence, evolution by natural and sexual selection will eventually do so.) The second option is a global catastrophe that sets our civilization back thousands, or tens of thousands, of years. The third is an extinction that sets our civilization back forever. Several means could bring about the second or third option: the creation and deliberate or accidental release of a chimera virus (e.g., a virus combining the transmissibility of smallpox with the lethality of Ebola); the deliberate or accidental misuse of nanotechnology (i.e., components and devices built from individual atoms, which could self-replicate in an environment and damage the material around them); the use of nuclear weapons (approximately 30,000 are operationally deployed); the creation of Artificial Intelligence that is malevolent towards us (hence the importance of achieving Friendly AI); and many others. The first option, improving humanity’s intelligence, is the only long-term choice worth making. Its potential dangers must be confronted sooner rather than later, because the dangers of the second and third options are certain. The Singularity Institute acknowledges and confronts these potential risks. We have chosen to walk the path of least harm by designing and developing Friendly AI that is beneficial and benevolent towards humankind.
Two: SIAI’s purpose is the safe improvement of humanity’s intelligence. Once accomplished, humanity may have the means to solve all of its present problems and to create the greatest possible future. This is the main challenge we face, and the main challenge that must be confronted now. The Singularity Institute does not promise we’ll achieve this goal, but we do promise to work towards it continually, no matter how long it takes, because someone must achieve it. We are not waiting for others; we are taking effective action now. We have chosen the most effective means of addressing every problem; thus, funding our work is among the most effective ways to invest a given amount of resources for the greatest possible benefit to everyone.
Three: The Singularity Institute’s objective is to support the conceptualization, design, and implementation of AI capable of understanding, modifying, and improving its own design, and of taking actions that are altruistic and non-harmful towards others. SIAI has published Creating Friendly AI (www.singinst.org/CFAI), the first technical analysis of improvable and sustainable AI benevolence, and a set of recommended guidelines for the design and development of Friendly AI (www.singinst.org/friendly/guidelines.html). We also support advance design for the seminal new AI theory described in “Levels of Organization in General Intelligence” (www.singinst.org/DGI), and support the open-source Flare programming language project (flarelang.sourceforge.net), which aims to take the next innovative step in programming languages to assist our work.
Four: Our motivation is the desire to do what is right and to help others effectively, not the desire for status, power, or wealth. If our methods of achieving the Singularity are wrong, we will change them to methods that are right.
Five: The Singularity Institute’s goal is to create an independent Friendly AI whose moral and philosophical reasoning is equivalent to, or greater than, Mohandas Gandhi’s or Martin Luther King’s. We are not creating a tool, but a mind. We are creating a mind that views benevolence and helpfulness as desirable; a mind that inhabits humanity’s moral frame of reference—our compassion, our cooperation, our moral reasoning. Friendly AI is not an Us vs. Them issue.
“Be the change you want to see in the world.”
—Mohandas Gandhi
Six: 180 million people are injured, intentionally or unintentionally, per year. 20 million children die per year from hunger. 680 million have a mental or physical illness. 25 million are held in slavery by force or by the threat of force. 3 billion live on two dollars or less each day. Roughly two people die every second; 150,000 die per day; and 55 million die per year. These are problems that thousands of for-profit and non-profit organizations try to solve with hundreds of billions of dollars. If one-hundredth or one-thousandth of the resources used to attack these problems with humanity’s present intelligence were instead applied to improving humanity’s intelligence, thereby improving our ability to solve all of them, the latter’s return on investment would be much greater than the former’s, even with the disproportion in investment. John Morley is often quoted as saying, “It is not enough to do good; one must do it in the right way.” This points to the two steps needed to improve our world: the first is the intention; the second is determining the most effective way to fulfill that intention. By not focusing on the safe improvement of humanity’s intelligence, the 100 largest non-profits in the world, all with yearly incomes over $90 million, have not taken the second step. It’s unfortunate SIAI isn’t one of the largest non-profits, because we have taken that step. We are working to help everyone in the most effective possible way: by safely achieving the Singularity.
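As a quick consistency check on the mortality figures above, the per-day and per-second rates follow directly from the yearly total quoted in the text. A minimal arithmetic sketch in Python (only the 55 million per year figure comes from this document; the rest is plain division):

    # Consistency check on the mortality figures quoted above.
    # The yearly total is taken from the text; the rest is arithmetic.
    deaths_per_year = 55_000_000

    deaths_per_day = deaths_per_year / 365        # roughly 150,700 per day
    deaths_per_second = deaths_per_day / 86_400   # roughly 1.7 per second

    print(f"{deaths_per_day:,.0f} deaths per day")
    print(f"{deaths_per_second:.2f} deaths per second")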
Seven: The Singularity Institute is attempting to develop Friendly AI that can achieve an unprecedented positive feedback process: the ability to create technology that improves the AI’s intelligence, thereby allowing the AI to develop better technology that results in further intelligence improvements. (Such an AI will have other powerful advantages over humans; for example, the ability to observe and modify its entire design, and the ability to perform tasks without making human-class mistakes, such as mistakes caused by lack of focus, energy, attention, or memory.) This continual process of intelligence creating technology that improves intelligence will allow the AI to accomplish extraordinary advancements in a short period of time. (Once the positive feedback process begins, the Singularity may rapidly have a global effect; hence the importance of non-harmful AI.) While the resources necessary to achieve Friendly AI may be large relative to some developments, they are small relative to the resulting worldwide benefits. Friendly AI will be able to develop and use nanotechnology to provide enough food and material to eliminate the unnecessary suffering of billions, and to help solve every humanitarian and medical problem we presently face, or may face.
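To make the shape of this feedback process concrete, here is a toy numerical sketch. It is not a model of any actual AI system, and the growth parameters are arbitrary assumptions chosen only for illustration: each cycle, the system’s current capability is used to build better tools, and the better tools raise capability for the next cycle.

    # Toy illustration of a positive feedback loop:
    # capability builds better technology, which raises capability further.
    # All parameters are arbitrary and purely illustrative.
    capability = 1.0
    gain_per_unit_capability = 0.10   # assumed gain per cycle per unit of capability

    for cycle in range(1, 11):
        # Technology built this cycle improves in proportion to current capability.
        capability *= 1 + gain_per_unit_capability * capability
        print(f"cycle {cycle:2d}: capability = {capability:.2f}")

Because the growth rate itself rises as capability rises, the curve bends upward faster than a fixed-rate exponential; that is the sense in which the feedback process described above differs from ordinary technological improvement.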
Eight: A well-known regularity in the progression of computers is Moore’s Law, whose classical formulation is the doubling of the number of transistors on an integrated circuit, or the doubling of semiconductor circuit capacity, every 12-24 months. A generalization of Moore’s Law for computers shows a slow double exponential rate of growth for computing power, in which the rate of exponential growth itself gradually increases. (This is shown in Dr. Ray Kurzweil’s “The Law of Accelerating Returns,” which graphs the progression of 20th-century computers from electromechanical, to relay-based, to vacuum tubes, to transistors, and, finally, to integrated circuits, our present computing medium.) Moore’s Law, however, has become a generalization for the progression of many technologies, such as computer memory and data storage; Internet bandwidth, connectivity, and transmission speed; and the shrinking size of mechanical devices. The continual growth in world knowledge, and the accelerating pace of technological advancement, allow us to predict when future developments may occur, and when the pace of progression may become very rapid, accomplishing advancements in shorter periods of time. Many refer to such a point in time as the Singularity, and view its arrival as inevitable. SIAI disagrees on three points. One, rapid progression in technology, or improvements to humankind’s intelligence, or both, are not necessarily inevitable or beneficial; present-day actions may slow, delay, or end the chance of a safe Singularity. Two, many who argue for the Singularity’s possibility, or inevitability, do not, unfortunately, argue for the possibility of positively influencing and accelerating the Singularity, or suggest ways to do so. Three, predictions based on regularities and trends in knowledge and technological progression do not account for the effects of human genius, much less smarter-than-human genius, or for deliberate effort and important breakthroughs by individuals and small groups in areas with enough leverage to influence and accelerate the Singularity’s arrival. How do these points of disagreement relate to SIAI’s work? First, we do not predict that a safe Singularity will occur; instead, we work towards making a safe Singularity occur. And second, we believe that research and development in domains of leverage, such as Friendly AI, can positively direct and accelerate the Singularity, and that the success of such effort matters, since it will save lives (at the rate of 55 million per year) and may prevent a global catastrophe or extinction. Regarding SIAI’s path towards the Singularity, Mitchell Porter, a physicist from Arizona University, has written, “In the race to get there first, SIAI has a potentially winning strategy (‘seed AI’), and a goal that is benign by design (‘Friendly AI’). Without this combination, superintelligence research projects risk being either irrelevant or malevolent. We're very lucky to have SIAI around.” We have chosen the most leveraged means, combining speed and safety, to help humanity.
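For readers who want the arithmetic behind these growth claims, the sketch below compares a fixed 18-month doubling time with a “double exponential” in which the doubling time itself gradually shrinks. The starting values, the 18-month figure within the 12-24 month range, and the 2% yearly shrinkage are illustrative assumptions, not measured data.

    # Illustrative comparison of simple exponential growth (fixed doubling time)
    # with a "double exponential" in which the doubling time itself shrinks.
    # All rates below are assumptions chosen only for illustration.

    def fixed_doubling(years, doubling_time=1.5):
        """Relative capacity after `years`, doubling every `doubling_time` years."""
        return 2 ** (years / doubling_time)

    def shrinking_doubling(years, initial_doubling_time=1.5, shrink_per_year=0.02):
        """Relative capacity when the doubling time shrinks each year."""
        capacity, doubling_time = 1.0, initial_doubling_time
        for _ in range(int(years)):
            capacity *= 2 ** (1 / doubling_time)   # one year of growth
            doubling_time *= 1 - shrink_per_year   # doubling time gets shorter
        return capacity

    for y in (5, 10, 20):
        print(y, "years:", round(fixed_doubling(y)), "vs", round(shrinking_doubling(y)))

Over short horizons the two curves are close, but the shrinking-doubling-time curve pulls further ahead every year, which is why small differences in assumptions lead to large differences in predicted timelines.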
Nine: Civilization’s world knowledge and technological ability are continually increasing. Over time, a person’s ability to use technology to benefit or harm greater numbers will also increase. The Singularity Institute believes the safest and most effective way to live in a world with increasingly powerful technology is the development of Friendly AI. Such a development is different from other advancements, since the AI can be given a conscience; a mind. Friendly AI will deliberately help us solve the problems of advanced technology, and help us apply advanced technology towards non-harmful and altruistic uses. This kind of deliberate help will not be possible with, for example, biotechnology or nanotechnology, which is reason enough to support the development of Friendly AI before the development of other advanced technologies. A second reason is that the increase in computing power, doubling every 12-24 months, will ease the overall development of AI, but will not ease the specific development of AI that pursues beneficial and non-harmful goals. This type of AI is likely necessary and globally important. It should be developed now, not later.
“My father once told me that there were two kinds of people: those who do the work and those who take the credit. He told me to try to be in the first group; there is much less competition there.”
—Mohandas Gandhi
Ten: People often say, “The little help that I can give will not do much!” Aside from the fact that not helping at all is worse than helping a little, providing any support to SIAI’s purpose is one of the most effective ways to maximize benefit for the maximum number of people. (Again, humanity’s intelligence determines what problems we can solve, and how well we can solve them. By improving humanity’s intelligence, we improve our ability to solve all problems. This is the Singularity Institute’s purpose.) Do you know someone, or do you know someone who knows someone, who may be able to fund SIAI? (If you do, please help us get in touch with them.) There are many who can and would support our work, but they must first be exposed to it and helped to understand it. By making a personal case to others for why they should fund SIAI, you are greatly helping everyone.
Eleven: Human genius can help find solutions to difficult problems. An expert may be able to solve what is unsolvable by a million amateurs; an Einstein may be able to solve what is unsolvable by a million experts. SIAI will employ exceptional individuals in computer programming and cognitive science to improve our ability to solve technical issues, and to reduce the time required to achieve our goals.
“The highest use of capital is not to make more money, but to make money do more for the betterment of life.”
—Henry Ford
SIAI’s purpose is to directly help humanity face its most important challenge: the accomplishment of a safe Singularity. Every challenge converges on this one; overcoming it will overcome all others by default. If appropriate funding for our work is sustained, we presently estimate the accomplishment of a safe Singularity sometime between 2010 and 2020. If you ever donate to a charitable cause, donating to the Singularity Institute is one of the best possible ways to help your family, yourself, and all of humankind.
For More Information
For an introduction to the Singularity, please see “What is the Singularity?” and “Why Work Toward the Singularity?” (www.singinst.org/intro.html).
To donate to SIAI, please visit our Donations and Funding page (www.singinst.org/donate.html), or send email to donate@singinst.org.
To understand why small donations are also needed, please see “Why Small Donations Matter” (www.singinst.org/donate/small-donations-matter.html).