This is a great question; I’m definitely going to think about it more the next time I consider the prospects for automating AI safety research.
One part of it is that the LessWrong rationalist tradition is generally focused on individual excellence and great-man theories, so a lot of those proposals feel unnatural for people here to think about.
Incidentally, my personal experience with people who are really into philosophy of science has been quite negative on average: they tend to have confusing worldviews and takes that seem bizarre to me, and I tend not to find them useful to talk to. The metascience people have seemed pretty reasonable to me, but often they haven’t seemed that AI-focused.
Can you give examples?
Yeah, totally fair on the philosophy of science thing; I’ve mostly talked to AI and metascience people who invoke principles from philosophy of science, which makes more sense to me. It’s a bit like how virtue ethics is nice to discuss with certain AI safety people, while it’s less enjoyable to talk to a professor of virtue ethics (maybe; my sample size isn’t very large).
(I think James Evans from Knowledge Lab is a cool person at the intersection of AI and metascience. His main work is on knowledge and improving science, and over the last three years he has pivoted to how AI can help with this. An example of something he wrote is this article on Agentic AI and the next intelligence explosion.)