Morality of Artificial Intelligence

Assign to a class (with edits).

by Steve Omohundro, Ph.D., president of Self-Aware Systems, a Silicon Valley think tank aimed at bringing human values to emerging technologies. This talk examines the origins of human morality and its future development to cope with advances in artificial intelligence. It begins with a discussion of the dangers of philosophies that put ideas ahead of people. It presents Kohlberg's 6 stages of human moral development, evidence for recent advances in human morality, the theory underlying co-opetition, recent advances in understanding the sexual and social origins of altruism, and the 5 human moral emotions and their relationship to political systems. It then considers the likely behavior of advanced AI systems, showing that they will want to understand and improve themselves, will have drives toward self-preservation and resource acquisition, and will be vigilant in avoiding corruption and addiction. It ends with a description of the 3 primary challenges that humanity faces in guiding future technology toward human-positive ends.

Posting this while the match with Lee Sedol is still going on; he just won game 4 (but lost the three games before that). This video has little to do with the morality of A.I., but it does show at least some of the motivations of the people involved (to aid our experts, improve humanity, etc.). One key point that came up in the Q&A: games have definitive scores, so "winning" is obvious, but in the real world that's not the case. One idea presented was to use human feedback ("Good job, A.I.!") as the "score". That's a really interesting idea for keeping A.I. on a leash. Basically, task it with improving human well-being based on our feedback to it.

An argument made here that _NOT_ developing A.I. could be considered an immoral action because of the potential future benefit to human well-being that A.I. could bring us.

“We have a moral imperative to continue reaping the promise [of artificial intelligence] while we control the peril. I tend to be optimistic, but that doesn't mean we should be lulled into a lack of concern.”

...we have no other choice, lest we accept a scenario in which a totalitarian government controls AI. He stated it simply: "The best way to keep [artificial intelligence] safe is in fact widely distributed, which is what we are seeing in the world today."

One of my favorite sources of future technology news.

"...the results reveal a view among experts that AI systems will probably (over 50%) reach overall human ability by 2040-50, and very likely (with 90% probability) by 2075. From reaching human ability, it will move on to superintelligence in 2 years (10%) to 30 years (75%) thereafter. The experts say the probability is 31% that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity."

I hope they make this into a full-length movie. Looks and sounds incredible. I really like the idea of telling the story from the perspective of the machines, much as I, Robot and others have done before it.

Great short (15 minutes) overview of the coming impact of automation on human society.

Just over 3 minutes getting into the real challenges of mind uploading, immortality, and what effect this will have on how we humans understand morality. Great stuff.

This two-part series by Wait But Why is one of the best summaries of all the information we currently have on what Artificial Superintelligence could mean for humanity. This may be a very, very important post for our species.

Nice, funny summary of Artificial Superintelligence concerns.

So good. Just... so good. Language warning.

Discussing some of the challenges around the morality of A.I., including how we humans haven't yet solved many of the ethical challenges around us.