The question becomes: do you expect “If you change random bits and try to run it, it mostly just breaks” to hold up?
My suspicion is that the answer is likely no, and this is actually a partial crux for why I’m less doomy than others on AI risk, especially risk from misalignment.
My general expectation is that most of the difficulty is hardware plus ethics. In particular, the hardware for running a human brain just does not exist right now, primarily because of the memory bottleneck/Von Neumann bottleneck on GPUs; at the current state of affairs, fitting an upload would require deleting a lot of memory from a human brain.
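To make the memory claim concrete, here is a rough back-of-envelope sketch. All the figures below are my own order-of-magnitude assumptions (a commonly cited ~10^14 synapse count, one float32 weight per synapse, an 80 GB HBM GPU), not numbers from the original comment:

```python
# Back-of-envelope: can a single GPU hold a human connectome's parameters?
# Every number here is a rough order-of-magnitude assumption, not a measurement.

synapses = 1e14          # ~100 trillion synapses (commonly cited estimate)
bytes_per_synapse = 4    # optimistic: one float32 weight, no synaptic state

brain_bytes = synapses * bytes_per_synapse   # ~4e14 bytes, i.e. ~400 TB
gpu_hbm_bytes = 80e9                         # assume an 80 GB HBM GPU

gpus_needed = brain_bytes / gpu_hbm_bytes
print(f"Memory needed: {brain_bytes / 1e12:.0f} TB")
print(f"GPUs (80 GB each) just to hold the weights: {gpus_needed:.0f}")
# Roughly 400 TB and ~5000 GPUs, before any synaptic dynamics, delays,
# or neuromodulation; hence the point that fitting an upload on today's
# hardware would mean deleting a lot of the brain's memory.
```

Even under these optimistic assumptions, the weights alone overflow a single GPU by more than three orders of magnitude, which is the sense in which the memory bottleneck, rather than raw compute, looks like the binding constraint.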
I disagree about the hardware difficulty of uploading-with-reverse-engineering. The short version of one aspect of my perspective is here; the longer version, with some flaws, is here; and the fixed version of the latter exists as a half-complete draft that maybe I’ll finish sooner or later. :)