The multiple-realizability of computation “cuts the ties” to the substrate, and in Sahil’s view these ties to the substrate are important. This idea leads him to predict, for example, that LLMs will be too “stuck in simulation” to engage very willfully in their own self-defense.
Overall, the arguments on FGF’s side seem very weak. I generally agree with CGF’s counterarguments, but would emphasize more that “Doesn’t that seem somehow important?” is not a good argument when there are many differences between a human brain and an LLM. It seems like a classic case of privileging the hypothesis.
I’m curious what it is about Sahil that causes you to pay attention to his ideas (and collaborate in other ways), sometimes (as in this case) in opposition to your own object-level judgment. E.g., what works of his impressed you and might be interesting for me to read?
Many copies of me are probably stuck in simulations around the multiverse, and I/we are still “engaging willfully in our own self-defense,” e.g., by reasoning about who might be simulating us and for what reasons, and trying to be helpful/interesting to our possible simulators. This is a direct counterexample to Sahil’s prediction.