The criticisms of “general functionalism” in the post seem to me to be aimed at a different sort of functionalism from the sort widely espoused around here.
The LW community is (I think) mostly functionalist in the sense of believing, e.g., that if you’re conscious then something that does the same as you do is also conscious. They’d say that implementation details don’t matter for answering various philosophical questions: Is this thing me? Is it a person? Do I need to care about its interests? Is it intelligent? And so on. But that’s a long way from saying that implementation details don’t matter at all; indeed, I think it’s “LW orthodoxy” that they do, e.g. that something that thinks just like me but 1000x faster would be hugely more capable than me in all sorts of important ways.
(The advantages of the humans over the aliens in Eliezer’s “That Alien Message” have a lot to do with speed, though that wasn’t quite Eliezer’s point, and he also makes the humans smarter in other ways and more numerous.)
If formal AI-safety work neglects speed, power consumption, side-channel attacks, etc., I think it’s only for the sake of beginning with simpler, more tractable versions of the problems you care about, not because anyone seriously believes that those things are unimportant.
(And, just to be explicit, I believe those things are important, and I think it’s unlikely that any approach to AI safety that ignores them can rightly be said to deliver safety. But an approach that begins by ignoring them might be reasonable.)
I think decision theory is a big functionalist part of LW thinking about AI, or at least it used to be.
We don’t have scenarios where utility depends on the amount of time taken to compute results: e.g. an Impatient Jim who only cooperates if you cooperate within 5 ms, which precludes vast searches through proof space about Jim’s source code.
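To make the timing constraint concrete, here’s a minimal sketch in Python (my own toy construction, not anything canonical; all the names are invented). Jim’s move depends not just on what the opponent plays but on how long the opponent takes to play it, so any strategy whose decision procedure is slow, such as exhaustive proof search, forfeits the cooperative payoff:

```python
import time

# Toy model: Impatient Jim cooperates iff the opponent returns "C"
# within a 5 ms deadline.
DEADLINE_S = 0.005

def impatient_jim(opponent_move_fn):
    """Jim's move depends on the opponent's answer AND how long it took."""
    start = time.perf_counter()
    move = opponent_move_fn()
    elapsed = time.perf_counter() - start
    return "C" if (move == "C" and elapsed <= DEADLINE_S) else "D"

def fast_cooperator():
    # Answers essentially instantly, so it earns mutual cooperation.
    return "C"

def proof_searcher():
    # Stand-in for "search proof space about Jim's source code":
    # reaches the right answer, but burns far more than 5 ms doing it.
    time.sleep(0.05)
    return "C"

print(impatient_jim(fast_cooperator))  # "C": within the deadline
print(impatient_jim(proof_searcher))   # "D": too slow, Jim defects
```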
I find Impatient Jim closer to the problems we face in the real world than Omniscient Omegas. YMMV.
The orthodoxy is not consistently applied :)
What?
Sorry, that was after I checked out of keeping up with LW. Have any formal problems like the Smoking Lesion or Sleeping Beauty been created from the insight that speed matters?
Should they be? It looks like people here would be receptive if you have an idea for a problem that doesn’t just tell us what we already know. But it also looks to me like the winners of the tournament both approximated, in a practical way, the search-through-many-proofs approach (LW writeup and discussion here).
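For flavor, here’s a rough sketch of that “run the opponent instead of proving theorems about it” idea, assuming a toy setup where bots are plain callables that receive their opponent as an argument (the actual tournament passed source code around and enforced hard step limits in a sandbox; every name below is my own invention):

```python
# Toy version of bounded simulation: probe the opponent by running it
# against an unconditional cooperator, and cooperate iff the simulated
# opponent cooperated. A real entry would run the opponent's *source*
# under a step-count budget, defaulting to "D" on timeout or crash.

def cooperate_bot(opponent):
    return "C"

def defect_bot(opponent):
    return "D"

def simulating_bot(opponent):
    try:
        # Probe: what does the opponent do against a naive cooperator?
        probe_result = opponent(cooperate_bot)
    except Exception:
        return "D"  # opponent crashed: play it safe
    return "C" if probe_result == "C" else "D"

print(simulating_bot(cooperate_bot))  # "C"
print(simulating_bot(defect_bot))     # "D"
```

The cheap probe plus a resource cap buys most of what the full proof search would, at a tiny fraction of the compute, which is presumably why it won in practice.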
Actually, strictly speaking, that is game theory, not decision theory. Probably worth pointing out; I forgot the distinction for a while myself.