From Dr. Zimbardo in his recent AMA on Reddit:
Mawkish asks:
If you could conduct any human bahaviour [sic] experiment, without risk to those participating, what would it be? What is your hypothesis for how it would turn out?
Zimbardo:
The answer to this provocative question is given in the introduction to chp 16 in my Lucifer Effect book (2007) where I invited anyone to perform a Reverse Milgram experiment. Milgram was able to demonstrate the relative ease with which ordinary people, 1000 of them, could be systematically led to administer increasingly dangerous levels of shock to an innocent victim by means of gradually raising the shock level with each trial by only 15 volts, until by the end of 30 shocks the voltage was raised to a near lethal 450 volts. At least 2 of every 3 participants went all the way down that slippery slope.
Now can we demonstrate the opposite, that ordinary people can be gradually led to engage in increasingly “good” socially redeeming deeds up to a point of engaging in extremely altruistic, heroic actions, which initially they assert they would never be willing to do?
It would have to be well crafted with early assessments of the prosocial value of each target action on the way up the slippery slope of goodness. It might have to be individually tailored to the values and interests of the target person, thus for some giving one’s time is precious, for others it would be money, or working in undesirable conditions, or with an unattractive population of people, etc.
It would be sad to conclude that it is easier to get ordinary people to do evil, than to do heroic actions, so I personally welcome someone to systematically take up my challenge, and I will serve as free consultant.
If you are in experimental psychology, taking him up on this offer sounds like a good way to make your career.
(And perhaps do some good along the way.)
“What are the important problems in your field? Why aren’t you working on them?”
So I read this and started brainstorming. None of the names I came up with were particularly good, but I did happen to produce a short mnemonic for explaining the agenda and research focus of the Singularity Institute.
A one word acronym that unfolds into a one sentence elevator pitch:
CRISIS: Catastrophic Risks In Self-Improving Software
“So, what do you do?”
“We do CRISIS research, that is, we work on figuring out and trying to manage the catastrophic risks that may be inherent to self-improving software systems. Consider, for example...”
There are lots of fun ways to play with this term to make it memorable in conversation.
It has some urgency to it, and it’s fairly concrete.
It compactly combines the goals of catastrophic risk reduction and self-improving systems research.
Bonus: You practically own this term already.
An incognito Google search returns no hits for “Catastrophic Risks In Self Improving Software” when the phrase is in quotes. Without quotes, the top hits include the Singularity Institute, the Singularity Summit, and intelligencexplosion.com. Nick Bostrom and the Oxford group are also in there; I don’t think he would mind too much.