Meat brains are badly constrained in ways that non-meat brains need not be.
Agreed; but there’s an overbroad reading of this claim that I worry people encountering it (e.g. in the guise of Eliezer’s argument about “the space of all possible minds”) can inadvertently fall into: assuming that, just because we can’t imagine them, no constraints apply to any class of non-meat brains.
The movie that runs through our minds when we imagine “AGI recursive self-improvement” goes something like a Hollywood hacker movie, with the AI in the role of the hacker. It’s sitting at a desk wearing mirrorshades, scanning its own code for the line that sets the parameter “number of working memory items”. When it finds it, it goes “aha!”, bumps the number up, and suddenly becomes twice as powerful as before.
That is vivid, but probably not how it works. For instance, “number of working memory items” can be a functional description of the system without there being any easily identifiable bit of code where it’s determined, in an AI’s substrate just as well as in a human mind.
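A toy illustration of that point (a sketch only, not a claim about any real AI system): in a Hopfield-style associative memory, “how many patterns the network can store” is a perfectly good functional description of the system, yet no line of the code sets it. The capacity emerges from the network size and the interference between stored patterns; there is nothing to find and edit.

```python
import random

def train_hopfield(patterns):
    """Standard Hebbian weight construction. Note that nothing here
    sets a 'capacity' parameter; how many patterns the net can hold
    emerges from the size n and crosstalk between patterns."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state):
    """One synchronous update step of the network."""
    n = len(state)
    return [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
            for i in range(n)]

random.seed(0)
n, k = 64, 3  # 64 neurons, 3 stored patterns (well under capacity)
patterns = [[random.choice([-1, 1]) for _ in range(n)] for _ in range(k)]
w = train_hopfield(patterns)
for p in patterns:
    out = recall(w, p)
    overlap = sum(a * b for a, b in zip(p, out)) / n
    print(overlap)  # close to 1.0: stored patterns are (near-)fixed points
```

If you keep adding patterns, recall degrades and then collapses, even though no “number of memories” variable ever changed; the functional property lives in the whole weight matrix at once.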