Does this argument extend to crazy ideas like Scanless Whole Brain Emulation, or would ideas like that require so much superintelligence to complete that the value of initial human progress would end up essentially negligible?
The key is still to distinguish good from bad ideas.
In the linked post, you essentially make the argument that “whole brain emulation-based artificial intelligence is safer than LLM-based artificial superintelligence”. That’s a claim that may or may not be true. One aspect of spending more time with that idea would be to think more critically about whether it holds.
However, even if it were true, it wouldn’t help in a scenario where we already have LLM-based artificial superintelligence.