As I said about a previous discussion with Ben Goertzel, they seem to agree about the dangers, but not about how much the Singularity Institute might affect the outcome.
To rephrase the primary disagreement: “Yes, AIs are incredibly, world-threateningly dangerous, but there’s nothing you can do about it.”
This seems to be based on a limited view of what sorts of AI minds are possible or likely, such as an anthropomorphized baby that can be taught and studied much as human children are.
To rephrase the primary disagreement: “Yes, AIs are incredibly, world-threateningly dangerous, but there’s nothing you can do about it.”
Is that really a disagreement? If the current SingInst can’t make direct contributions, AGI researchers can, by not pushing AGI capability progress. This issue is not addressed; the heuristic of endorsing technological progress has too much support in researchers’ minds for them to take seriously the possible consequences of following it in this instance.
In other words, there are separate questions of whether current SingInst is irrelevant and whether AI safety planning is irrelevant. If the status quo is to try out various things and see what happens, there is probably room for improvement over this process, even if particular actions of SingInst are deemed inadequate. Pointing out possible issues with SingInst doesn’t address the relevance of AI safety planning.
This seems to be based on a limited view of what sorts of AI minds are possible or likely, such as an anthropomorphized baby that can be taught and studied much as human children are.
The key difference between AI and other software is learning: even current narrow AI systems require long learning/training times, and they learn only specific, narrow functionalities.
Considering this, many (perhaps most?) AGI researchers believe that any practical human-level AGI will require an educational process much like the one human children go through.
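A minimal sketch of the point about narrow learning (illustrative only, not from the discussion): a tiny perceptron trained from scratch on a single task. It needs many passes over its training data to converge, and what it “knows” afterward is just the one input-output mapping it was trained on; nothing carries over to any other task.

```python
def train_perceptron(samples, epochs=100, lr=0.1):
    """Train weights [w1, w2, bias] on ((x1, x2), label) samples."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):  # many repeated passes are needed to learn
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
            err = label - pred
            # Standard perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            w[2] += lr * err
    return w

def predict(w, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0

# Narrow training data: the logical AND function, and nothing else.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(and_data)
print([predict(w, a, b) for (a, b), _ in and_data])  # reproduces AND
```

The learned weights encode only this single Boolean function; asking the model about anything outside its training distribution yields noise, which is the sense in which narrow systems learn “specific narrow functionalities.”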
In other words, there are separate questions of whether current SingInst is irrelevant and whether AI safety planning is irrelevant. If the status quo is to try out various things and see what happens, there is probably room for improvement over this process, even if particular actions of SingInst are deemed inadequate. Pointing out possible issues with SingInst doesn’t address the relevance of AI safety planning.
Agreed. But it does mean SI “loses the argument”. Yahtzee!
Does anyone really think that? What about when “you” refers to a whole bunch of people?