As far as I can tell, Eliezer and Nate rely on no results of experiments done on AI models (REAIMs) to conclude that superintelligence is dangerous. And if some clever young person (or, more realistically, a series of clever young people, each building on the work of their predecessors) comes up with a good plan for creating an aligned superintelligence (which probably won't happen any year soon), that plan probably also will rely on no REAIMs, nor will Eliezer and Nate require any REAIMs to conclude that the plan is safe.
Experiments and tests are very useful; human engineers and designers facing sufficiently difficult challenges will usually choose to use them; but sufficiently capable people are far from helpless even in domains where they cannot do experiments (because the experiments would be too risky, or because they would rely on GPUs and data centers that have been banned by international agreement).
For more information, a person could do worse than read Eliezer's "Einstein's Arrogance".