It would be nice if the name reflected the SI’s concern that the dangers come not just from cunning killer robots escaping a secret government lab, a Skynet gone amok, or a Frankenstein monster constructed by a mad scientist, but from recursive self-improvement (“intelligence explosion”) of an initially innocuous and not-very-smart contraption.
I am also not sure whether the qualifier “artificial” conveys the right impression, as the dangers might come from an augmented human brain suddenly developing the capacity for recursive self-improvement, or from some other creation that does not look like a collection of silicon gates.
If I understand it correctly, SI wants to ensure “safe recursive self-improvement” of an intelligence of any kind, “safe” for the rest of the (human?) intelligences existing at that time, though not necessarily for the self-improver itself.
Of course, a name like “Society For Safe Recursive Self-Improvement” is both unwieldy and unclear to an outsider. (And the acronym sounds like Parseltongue.) Maybe there is a way to phrase it better.
I am also not sure whether the qualifier “artificial” conveys the right impression, as the dangers might come from an augmented human brain suddenly developing the capacity for recursive self-improvement, or some other creation that does not look like a collection of silicon gates.
The Singularity Institute folks do consider the dangers to come from “artificial” things. They don’t (unless I am very much mistaken) consider a human brain capable of recursive self-improvement. Whole Brain Emulation FOOMing would fall under their scope of concern, but that certainly qualifies as “artificial”.