  LongeCity
              Advocacy & Research for Unlimited Lifespans





Why Support The Singularity Institute?



#1 Anand

  • Guest Singularity
  • 5 posts
  • 0

Posted 04 September 2002 - 03:16 AM


Why Support the Singularity Institute?

© 2002 by Anand

trans_humanism@msn.com

www.singinst.org


Permission granted to redistribute


“We must use time wisely and forever realize that the time is always ripe to do right.”
—Nelson Mandela

If you could help accomplish the greatest good for the greatest number by the safest and most leveraged means, would you do it? Would you do it if you had enough resources to support your own and your family’s well-being and happiness? What would convince you to do it?

Human intelligence determines how well we interact with the world. It determines what we’re capable of doing, what problems we’re capable of solving, and how well we solve them. What will happen if humanity improves its intelligence? Doing so would be an effective means of addressing every problem we presently face, or may face, and of achieving an immediate and global improvement in our condition.

The most leveraged means to maximize good for the greatest number, and therefore the most humanitarian course of action, is the safe improvement of humankind’s intelligence, an accomplishment known as the Singularity. How can this be done? Presently, the safest and most likely option is the creation of Artificial Intelligence (AI) that is beneficial, non-harmful, and altruistic towards humankind. Friendly AI will help us eliminate involuntary disease and pain, violence and death, hunger and poverty. It will respect and uphold an individual’s rights, and take action to fulfill an individual’s requests. The Singularity Institute for Artificial Intelligence (SIAI) is working to create such an AI.

Why Artificial Intelligence? The ongoing exponential advances in computing power, data storage, memory, Internet bandwidth, connectivity, and transmission speed, and the powerful advantages of computer programs and AIs over humans, all support the likelihood of AI being developed before other advanced technology. Thus, to ensure a safe Singularity, one must ensure a safe AI.

The following reasons explore why SIAI views the design and development of Friendly Artificial Intelligence as important and worth your support.

One: Civilization has three long-term options. The first is the improvement of our intelligence. There is evidence this can be accomplished by modifying the human genome to increase cognitive performance; developing AI that modifies, understands, and improves its own design; scanning and reproducing the functionality of a human brain on a different medium; developing brain-computer interfaces that allow external tasks to be performed and commanded by an individual’s thought; or creating intelligence on the Internet. (If knowledge and technology are not used to improve humanity’s intelligence, evolution by natural and sexual selection will eventually do so.) The second option is a global catastrophe that sets our civilization back thousands, or tens of thousands, of years. The third is an extinction that ends our civilization forever. Various means may lead to the second or third option, such as the creation and deliberate or accidental use of a chimera virus (e.g., a virus with the transmissibility of smallpox and the lethality of Ebola); the deliberate or accidental misuse of nanotechnology (i.e., components and devices built from atoms, which could self-replicate in an environment and damage that environment’s material); the use of nuclear weapons (there are approximately 30,000 in operational use); the creation of Artificial Intelligence that is malevolent towards us (hence the importance of achieving Friendly AI); and many others. The first option, improving humanity’s intelligence, is the only relevant long-term choice. Its potential dangers must be confronted sooner rather than later, because of the certain dangers in the second and third options. The Singularity Institute acknowledges and confronts these potential risks. We have chosen to walk the path of least harm by designing and developing Friendly AI that is beneficial and benevolent towards humankind.

Two: SIAI’s purpose is the safe improvement of humanity’s intelligence. When that is accomplished, humanity may have the means to solve all of its present problems and to create the greatest possible future. This is the main challenge that we face, and the main challenge that must be confronted now. The Singularity Institute does not promise we’ll achieve this goal, but we do promise to work towards it continually, no matter how long it takes, because someone must achieve it. We are not waiting for others; we are taking effective action now. We have chosen the most effective means to solve every problem; thus, funding our work is among the most effective ways to invest a given amount of resources for the greatest possible benefit to everyone.

Three: The Singularity Institute’s objective is to support the conceptualization, design, and implementation of AI capable of modifying, understanding, and improving its own design, and of taking actions that are altruistic and non-harmful towards others. SIAI has published Creating Friendly AI (www.singinst.org/CFAI), the first technical analysis of improvable and sustainable AI benevolence, and a set of recommended guidelines for the design and development of Friendly AI (www.singinst.org/friendly/guidelines.html). We also support advance design work for the seminal new AI theory described in “Levels of Organization in General Intelligence” (www.singinst.org/DGI), and support the open-source Flare programming language project (flarelang.sourceforge.net), which aims to take the next innovative step in programming languages to assist our work.

Four: Our motivation is the desire to do what is right and to help others effectively, not the desire for status, power, or wealth. If our methods for achieving the Singularity are wrong, we will change them to ones that are right.

Five: The Singularity Institute’s goal is to create an independent Friendly AI whose moral and philosophical reasoning is equivalent to, or greater than, Mohandas Gandhi’s or Martin Luther King’s. We are not creating a tool, but a mind. We are creating a mind that views benevolence and helpfulness as desirable; a mind that inhabits humanity’s moral frame of reference—our compassion, our cooperation, our moral reasoning. Friendly AI is not an Us vs. Them issue.

“Be the change you want to see in the world.”
—Mohandas Gandhi

Six: 180 million people are injured, intentionally or unintentionally, per year. 20 million children die per year from hunger. 680 million have a mental or physical illness. 25 million are in slavery, by force or by the threat of force. 3 billion live on two dollars or less each day. Nearly two people die every second; 150,000 die per day; and 55 million die per year. These are problems that thousands of for-profits and non-profits try to solve with hundreds of billions of dollars. If one-hundredth or one-thousandth of the resources used to attack them with humanity’s present intelligence were instead applied to improving humanity’s intelligence, thereby improving our ability to solve all of them, the latter’s return on investment would be much greater than the former’s, even with the disproportion in investment. John Morley is often quoted as saying, “It is not enough to do good; one must do it in the right way.” This relates to the steps necessary to improve our world. The first is the intention; the second is determining the most effective way to fulfill that intention. By not focusing on the safe improvement of humanity’s intelligence, the 100 largest non-profits in the world, all with yearly incomes over $90 million, have not taken the second step. It’s unfortunate SIAI isn’t one of the largest non-profits, because we have taken that step. We are working to help everyone in the most effective way possible: by safely achieving the Singularity.
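
The mortality figures above can be cross-checked with a few lines of arithmetic (a sketch, assuming a 365-day year; the headline figure of 55 million deaths per year is the essay’s own):

```python
# Cross-check the essay's mortality figures: 55 million deaths per year
# implies roughly 150,000 deaths per day, or nearly two per second.
deaths_per_year = 55_000_000
seconds_per_day = 24 * 60 * 60              # 86,400

deaths_per_day = deaths_per_year / 365      # ~150,685
deaths_per_second = deaths_per_day / seconds_per_day

print(round(deaths_per_day))                # 150685
print(round(deaths_per_second, 2))          # 1.74
```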

Seven: The Singularity Institute is attempting to develop Friendly AI that can achieve an unprecedented positive feedback process: the ability to create technology that improves the AI’s intelligence, thereby allowing the AI to develop better technology that results in further intelligence improvements. (Such an AI will have other powerful advantages over humans; for example, the ability to observe and modify its entire design, and the ability to perform tasks without making human-class mistakes, such as mistakes caused by lack of focus, energy, attention, or memory.) This continual process of intelligence-creating-technology-that-improves-intelligence will allow the AI to accomplish extraordinary advancements in a short period of time. (Once the positive feedback process begins, the Singularity may rapidly result in a global effect; thus the importance of non-harmful AI.) While the necessary resources to achieve Friendly AI may be large relative to some developments, they are small relative to the resulting worldwide benefits. Friendly AI will be able to develop and use nanotechnology to provide enough food and material to eliminate the unnecessary suffering of billions; to help solve every humanistic and medical problem we presently face, or may face.

Eight: A well-known regularity in the progression of computers is Moore’s Law, whose classical statement is the doubling of transistors on an integrated circuit, or the doubling of semiconductor circuit capacity, every 12-24 months. A generalization of Moore’s Law for computers has shown a slow double exponential, or quadrupling, rate of growth for computing power. (This is shown in Dr. Ray Kurzweil’s “The Law of Accelerating Returns,” in which he graphs the progression of 20th century computers from electromechanical, to relay-based, to vacuum tubes, to transistors, and, finally, to integrated circuits, our present computing medium.) Moore’s Law, however, has become a generalization for the progression of many technologies, such as computer memory and data storage, Internet bandwidth, connectivity, and transmission speed, and the reduction in the size of mechanical devices. The continual growth in world knowledge, and the accelerating pace of advancements in technology, allow us to predict when future developments may occur, and when the pace of progression may become very rapid, accomplishing advancements in ever shorter periods of time. Many refer to such a point in time as the Singularity, and view its arrival as inevitable. SIAI disagrees with this on three points. One, rapid progression in technology, or improvement of humankind’s intelligence, or both, is not necessarily inevitable or beneficial. Present-day actions may slow, delay, or end the chance for a safe Singularity. Two, many who argue for the Singularity’s possibility, or inevitability, do not, unfortunately, argue for the possibility of positively influencing and accelerating the Singularity, or suggest ways to do so.
Three, predictions based on regularities and trends in knowledge and technological progression do not account for the effects of human genius, much less smarter-than-human genius, or for deliberate effort and important breakthroughs by individuals and small groups in areas with enough leverage to influence and accelerate the Singularity’s arrival. How do these points of disagreement relate to SIAI’s work? First, we do not predict that a safe Singularity will occur; instead, we work towards making a safe Singularity occur. And second, we believe that research and development in domains of leverage, such as Friendly AI, can positively direct and accelerate the Singularity, and that the success of such effort matters, since it will save lives (55 million per year) and may prevent a global catastrophe or extinction. In regard to SIAI’s path towards the Singularity, Mitchell Porter, physicist from Arizona University, has written, “In the race to get there first, SIAI has a potentially winning strategy (‘seed AI’), and a goal that is benign by design (‘Friendly AI’). Without this combination, superintelligence research projects risk being either irrelevant or malevolent. We’re very lucky to have SIAI around.” We have chosen the most leveraged means, combining speed and safety, to help humanity.
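
The doubling growth that point Eight describes can be made concrete with a short calculation (an illustrative sketch, not from the essay; the 18-month doubling period assumed here is one common reading of the 12-24 month range cited above):

```python
# Illustrative sketch of Moore's Law-style growth: capacity doubles once
# per fixed period. An 18-month period is assumed (the essay cites a
# 12-24 month range).
def growth_factor(years, doubling_months=18):
    """Multiplicative growth in capacity over `years`, doubling once per period."""
    doublings = years * 12 / doubling_months
    return 2 ** doublings

# 18 years at one doubling per 18 months is 12 doublings: 4096-fold growth.
print(growth_factor(18))                      # 4096.0
# Quadrupling per period, as in the generalization the essay cites,
# is arithmetically equivalent to halving the doubling time:
print(growth_factor(18, doubling_months=9))   # 16777216.0
```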

Nine: Civilization’s world knowledge and technological ability is continually increasing. Over time, a person’s ability to use technology to benefit or harm greater numbers will also increase. The Singularity Institute believes the safest and most effective way to live in a world with increasingly powerful technology is by the development of Friendly AI. Such a development is different from other advancements, since the AI can be given a conscience; a mind. Friendly AI will deliberately help us solve the problems of advanced technology, and help us apply advanced technology towards non-harmful and altruistic uses. This kind of deliberate help will not be possible with, for example, biotechnology or nanotechnology, which is reason enough to support the development of Friendly AI before the development of advanced technology. A second reason is that the increase in computing power, doubling every 12-24 months, will ease the overall development of AI, but will not ease the specific development of AI that pursues beneficial and non-harmful goals. The development of this type of AI is likely necessary and globally important. It should be developed now, not later.

“My father once told me that there were two kinds of people: those who do the work and those who take the credit. He told me to try to be in the first group; there is much less competition there.”
—Mohandas Gandhi

Ten: People often say, “The little help that I can give will not do much!” Aside from the fact that not helping at all is worse than helping a little, providing any support for SIAI’s purpose is one of the most effective ways to maximize benefit for the maximum number of people. (Again, humanity’s intelligence determines what problems we can solve, and how well we can solve them. By improving humanity’s intelligence, we improve our ability to solve all problems. This is the Singularity Institute’s purpose.) Do you know someone, or do you know someone who knows someone, who may be able to fund SIAI? (If you do, please help us get in touch with them.) There are many who can and would support our work, but they must be exposed to it, and helped to understand it. By making a personal case to others for why they should fund SIAI, you are greatly helping everyone.

Eleven: Human genius can help find solutions to difficult problems. An expert may be able to solve what is unsolvable by a million amateurs; an Einstein may be able to solve what is unsolvable by a million experts. SIAI will employ exceptional individuals in computer programming and cognitive science to improve our ability to solve technical issues, and to reduce the time required to achieve our goals.

“The highest use of capital is not to make more money, but to make money do more for the betterment of life.”
—Henry Ford

SIAI’s purpose is to directly help humanity face the most important challenge—the accomplishment of a safe Singularity. Every challenge converges to this challenge; overcoming it will overcome all others by default. If appropriate funding is sustained for our work, we presently estimate the accomplishment of a safe Singularity sometime between 2010 and 2020. If you will ever donate to a charitable purpose, donating to the Singularity Institute is one of the best possible ways to help your family, yourself, and all of humankind.

For More Information

For an introduction to the Singularity, please see “What is the Singularity?” and “Why Work Toward the Singularity?” (www.singinst.org/intro.html).
To donate to SIAI, please visit our Donations and Funding page (www.singinst.org/donate.html), or send email to donate@singinst.org.
To understand why small donations are also needed, please see “Why Small Donations Matter” (www.singinst.org/donate/small-donations-matter.html).

#2 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 18 November 2003 - 03:44 AM

Other ImmInst members; does this count as a "troll" post or not?


#3 Bruce Klein

  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 18 November 2003 - 03:46 AM

Mike.. if you'd like.. try out the 'Split Topic' option below and move these posts to the new "Moderations" forum..

#4 John Doe

  • Guest
  • 291 posts
  • 0

Posted 18 November 2003 - 03:50 PM

Other ImmInst members; does this count as a "troll" post or not?


No.

#5 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 18 November 2003 - 04:06 PM

I happen to agree with John Doe here, Michael. I am no friend of Bush, but he is on topic and expressing his view of what is at the "bottom line" of the essay above. While his remark is glib, you are free to disagree with it, or ignore it. It is not inappropriate.

I am afraid that we are about to go to school on the nature of dissent in a free society.

I suggest reading Thoreau before too many more essays on Singular thought.

#6 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 19 November 2003 - 06:57 AM

Okie dokie, the people have spoken. As far as I can tell, vote_for_bush is just a typical troll. No, I'll pass on the Thoreau; I read some in school. Transhuman intelligence is a massive challenge to humanity whichever way you cut it; how does Thoreau make any difference to this issue? How would my opinion on the Singularity change if I read everything Thoreau had ever published?

#7 Lazarus Long

  • Life Member, Guardian
  • 8,116 posts
  • 242
  • Location:Northern, Western Hemisphere of Earth, Usually of late, New York

Posted 19 November 2003 - 07:06 AM

Okie dokie, the people have spoken. As far as I can tell, vote_for_bush is just a typical troll. No, I'll pass on the Thoreau; I read some in school. Transhuman intelligence is a massive challenge to humanity whichever way you cut it; how does Thoreau make any difference to this issue? How would my opinion on the Singularity change if I read everything Thoreau had ever published?


Thoreau addresses the importance and legitimacy of civil disobedience, why it is necessary, and how to engage in it responsibly.

His work and his lone action were the source of the very principle of a "conscientious objector" to a policy perceived and understood as unjust, and his essays on civil disobedience were the inspiration for Gandhi and Martin Luther King Jr., to name a few. If you wish to understand the principles of friendliness and altruism, then I suggest you need to understand the roots of the issue.

"Modern" is not "new" in the sense you are taking it, and there are many reasons to dissent, and even more to protect the right of dissent, especially in recognition of the possibility of the Singularity.

#8 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 21 November 2003 - 01:49 PM

Laz; V4B is clearly just being a troll and probably trying to get revenge for my deleting one of his posts. The phrase "Translation: we want your money, fork it up" can be used as a no-brainer detraction of practically *any* human effort - it's obvious that he didn't give this specific insult much thought, and I doubt he even read the essay - probably just scrolled to the bottom and posted this because he saw that I post a lot in this section. There's a *huge* difference between this and the historically commendable principle of a "conscientious objector". But if he said, "I just saw Eliezer Yudkowsky driving around in a brand new Ferrari, and I have photos"...*then* I would be worried. ;)

