The key is still to distinguish good from bad ideas.
In the linked post, you essentially make the argument that "whole brain emulation artificial intelligence is safer than LLM-based artificial superintelligence". That's a claim that might or might not be true. One aspect of spending more time with that idea would be to think more critically about whether it's actually true.
However, even if it were true, it wouldn't help in a scenario where we already have LLM-based artificial superintelligence.