I’ll highlight that the brain and a hypothetical AI might not use the same primitives—they’re on very different hardware, after all
Sure. There are a lot of levels at which algorithms can differ.
- quicksort.c compiled by clang versus quicksort.c compiled by gcc
- quicksort optimized to run on a CPU vs quicksort optimized to run on an FPGA
- quicksort running on a CPU vs mergesort running on an FPGA
- quicksort vs a different algorithm that doesn’t involve sorting the list at all
There are people working on neuromorphic hardware, but I don’t put much stock in anything coming of it in terms of AGI (the main thrust of that field is low-power sensors). So I generally think it’s very improbable that we would copy brain algorithms at the level of firing patterns and synapses (like the first bullet-point or less). I put much more weight on the possibility of “copying” brain algorithms at like vaguely the second or third bullet-point level. But, of course, it’s also entirely possible for an AGI to be radically different from brain algorithms in every way. :-)
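To make the last bullet concrete, here is a minimal sketch (the task and function names are my own, chosen for illustration): the same problem, detecting duplicates in a list, solved once via sorting and once by an algorithm that never sorts at all.

```python
def has_duplicates_by_sorting(xs):
    """Sort first, then scan adjacent pairs for a repeat."""
    s = sorted(xs)
    return any(a == b for a, b in zip(s, s[1:]))

def has_duplicates_by_hashing(xs):
    """No sorting anywhere: a hash set does all the work."""
    return len(set(xs)) != len(xs)
```

Both compute the same answer, but they differ at the deepest of the four levels above: not just in hardware or compilation, but in the basic operations the algorithm is built from.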
My guess here is that there are some instrumentally convergent abstractions/algorithms which both a brain and a hypothetical AGI need to use. But a brain will have implemented some of those as hacks on top of methods which evolved earlier, whereas an AI could implement those methods directly. So for instance, one could imagine the brain implementing simple causal reasoning as a hack on top of pre-existing temporal sequence capabilities. When designing an AI, it would probably make more sense to use causal DAGs as the fundamental representation, and then implement temporal sequences as degenerate stick-DAGs which don’t support many (if any) counterfactuals.
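As a toy sketch of that design choice (all names here are hypothetical, invented for illustration): take the causal DAG as the primitive data structure, and recover a temporal sequence as the degenerate case of a chain-shaped DAG where each event's only parent is the event before it.

```python
from dataclasses import dataclass, field

@dataclass
class CausalDag:
    # parents[node] = list of direct causes of that node
    parents: dict = field(default_factory=dict)

    def add_edge(self, cause, effect):
        self.parents.setdefault(cause, [])
        self.parents.setdefault(effect, []).append(cause)

def stick_dag(events):
    """A temporal sequence as a degenerate causal DAG: a single chain
    in which each event's only parent is the previous event."""
    dag = CausalDag()
    for earlier, later in zip(events, events[1:]):
        dag.add_edge(earlier, later)
    return dag
```

The point of the sketch is the direction of the dependency: the sequence is a special case of the DAG, rather than the DAG being a hack layered on top of sequence machinery.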
Possibly better example: tree search and logic. Humans seem to handle these mostly as hacks on top of pattern-matchers and trigger-action pairs, but for an AI it makes more sense to implement tree search as a fundamental.
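For contrast, here is what "tree search as a fundamental" looks like when implemented directly rather than as a hack on pattern-matching, a bare-bones minimax over a game tree given as nested tuples (a toy sketch, not anyone's actual proposal):

```python
def minimax(node, maximizing=True):
    """Explicit tree search as a primitive: recursively evaluate a game
    tree given as nested tuples, with numeric payoffs at the leaves."""
    if isinstance(node, (int, float)):  # leaf node
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)
```

A pattern-matcher approximates this with cached trigger-action pairs; the recursive version makes the search structure itself the basic operation.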