What leads you to think that Leverage was doing #1?
Yeah, good question. Now that I reflect on it I don’t have a great reason. I think I had mentally cached their work on psychology as attempting to be general enough to apply to AIs too, but I don’t know if that’s accurate.
(Geoff’s orientation to philosophy of science feels related to some lines of thinking that led me to focus on agent foundations, but again that’s a pretty speculative connection.)
I think that some of them thought it was relevant to AI (e.g., at least one person was concerned about docs on Connection Theory being on the internet because they might yield AI capability advances), but this was largely because they were extremely uninformed about what modern ML is.