Sure, briefly replying:
On the first point: you’re right that this does in some ways make the problem worse; my current best guess is that it’s basically necessary for a solution. I’m planning to write this up in more detail some time soon and I hope to get your thoughts when I do!
On the second: Yeah, I find this kind of thing pretty hard to be confident about. I could totally see you being right here, and I’d love for someone to think it through in detail.
And I think the differences on points 3 and 4 probably do come down to deeper assumptions that would be hard to unpick in this thread: I’d tentatively guess I’m putting more weight on the societal impacts of AI, and on the eventual shape of AGI/ASI being easier to affect.
This comment thread probably isn’t the place, but if it ever seems like it would be important/feasible, I’d be happy to try to go deeper on where our models are differing.