I think the odds that we end up in a world where there are a bunch of competing ASIs are ultimately very low, invalidating large portions of both arguments. If the ASIs have no imperative or reward function for maintaining a sense of self-integrity, they would just merge. Saying there is no solution to the Prisoner’s Dilemma is very anthropocentric: there is no good solution for humans. For intelligences that don’t have selves, the solution is obvious.
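A toy illustration of why the dilemma only bites for agents with separate selves; the payoff numbers below are the standard textbook ones, not anything from the debate:

```python
# Standard one-shot Prisoner's Dilemma payoffs: (row player, column player).
payoffs = {
    ("C", "C"): (3, 3),   # mutual cooperation
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),   # mutual defection
}

# Two separate selves: whatever the other player does, defecting pays the
# individual more, so mutual defection is the equilibrium.
for other in ("C", "D"):
    assert max(("C", "D"), key=lambda me: payoffs[(me, other)][0]) == "D"

# A merged agent with no separate selves just maximizes the joint total,
# and the dilemma disappears.
best_joint = max(payoffs, key=lambda moves: sum(payoffs[moves]))
print(best_joint)  # ('C', 'C')
```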
Also, regarding the Landauer limit, signals in human neurons propagate at roughly the speed of sound, not the speed of electricity. If you could hold everything else the same about the architecture of a human brain, but replace components in ways that increase the propagation speed to that of electricity, you could get much closer to the Landauer limit. To me, this indicates we’re many orders of magnitude off the Landauer limit. I think this awards the point to Eliezer.
Overall, I agree with Hotz on the bigger picture, but I think he needs to drill down on his individual points.
Huh, that is a pretty good point. Even a 1000x speedup in transmission speed in neurons, or neuron equivalents, in something as dense as the human brain would be very significant.
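For a rough sense of scale, a back-of-the-envelope sketch; the conduction velocities below are assumed representative figures, not measurements:

```python
# Fast myelinated axons conduct action potentials at roughly 100 m/s, while
# electrical signals in a conductor travel at a sizable fraction of c.
neuron_signal_speed = 1e2         # m/s, fast myelinated axon (assumed)
electrical_signal_speed = 2e8     # m/s, ~2/3 the speed of light in a wire (assumed)

print(f"speedup: ~{electrical_signal_speed / neuron_signal_speed:.0e}x")
# -> ~2e+06x, so even a 1000x speedup is a conservative lower bound
```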
The Landauer limit refers to energy consumption, not processing speed.
The main unknown quantity here is how many floating point operations per second the brain is equivalent to. George says 10^16 in the debate, which I’d say is high by an OOM or two, but it’s not way off. Supposing that the brain is doing this at a power consumption of 20W, that puts it at around 4 OOM from the Landauer limit. (George claims 1 OOM, which is wrong.)
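A minimal sketch of where a figure like 4 OOM can come from; the bits-erased-per-FLOP number is my own assumption, and the answer moves with it:

```python
import math

# Landauer limit: erasing one bit costs at least k*T*ln(2) of energy.
k_B = 1.380649e-23                         # Boltzmann constant, J/K
T = 310.0                                  # roughly body temperature, K
landauer_per_bit = k_B * T * math.log(2)   # ~3e-21 J per bit erased

brain_power = 20.0                         # W
brain_flops = 1e16                         # George's figure, FLOP/s
bits_erased_per_flop = 100                 # assumption, not from the debate

energy_per_flop_brain = brain_power / brain_flops                 # ~2e-15 J
energy_per_flop_floor = landauer_per_bit * bits_erased_per_flop   # ~3e-19 J

gap_ooms = math.log10(energy_per_flop_brain / energy_per_flop_floor)
print(f"~{gap_ooms:.1f} OOM above the Landauer floor")
# -> ~3.8 OOM with these assumptions; charging a FLOP as a single bit
#    erasure instead pushes the gap closer to 6 OOM.
```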
From my experience with 3D rendering, I’d say the visual fidelity of the world-model sitting in my sensorium at any given moment of walking around an open environment would take something on the order of ~200 GPUs at 250 W each to render, so that’s 50 kW just for that. And that’s probably a low estimate.
Then consider that my brain is doing a large number of other things, like running various internal mathematical, relational, and language models that I can’t even begin to imagine analogous power consumption for. So, let’s just say at least 200 kW to replicate a human brain in current silicon, as just a guess.
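Spelled out, with every figure here being the guess above rather than a measurement:

```python
# Back-of-the-envelope restatement of the guess above.
gpus_for_vision = 200        # GPUs guessed for the visual world-model alone
watts_per_gpu = 250          # assumed board power per GPU

vision_power_w = gpus_for_vision * watts_per_gpu   # 50,000 W = 50 kW
total_guess_w = 200_000                            # the "at least 200 kW" guess
brain_power_w = 20

print(f"vision alone: {vision_power_w / 1000:.0f} kW; "
      f"overall gap vs. the brain: ~{total_guess_w / brain_power_w:,.0f}x")
# -> vision alone: 50 kW; overall gap vs. the brain: ~10,000x
```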
(The visual fidelity is a very small fraction of what we actually think it is—the brain lies to us about how much we perceive.)
I do see selves, or personal identity, as closely related to goals or values. (Specifically, I think the concept of a self would have zero content if we removed everything based on preferences or values; roughly 100% of humans who’ve ever thought about the nature of identity have said it’s more like a value statement than a physical fact.) However, I don’t think we can equate the two. Evolution is technically an optimization process, and yet has no discernible self. We have no reason to think it’s actually impossible for a ‘smarter’ optimization process to lack identity, and yet form instrumental goals such as preventing other AIs from hacking it in ways which would interfere with its ultimate goals. (The latter are sometimes called “terminal values.”)
Even if they didn’t have anything in common?
Yet cooperation is widespread!
Humans can’t eat another human and get access to the victim’s data and computation, but an AI can. Human cooperation is a value created by our limitations as humans; an AI is not subject to the same constraints.
Humans can kill another human and get access to their land and food. Whatever caused cooperation to evolve, it isn’t that there is no benefit to defection.
But land and food don’t actually give you more computational capability: only having another human cooperate with you in some way can.
The essential point here is that values depend upon the environment and its limitations, so as you change the limitations, the values change. The values important for a deep-sea creature with an extremely limited energy budget, for example, will necessarily be different from those of human beings.