I’m not sure it will result in a wake-up call for AI researchers.
See this thread yesterday between Chris Painter and Dean Ball:
Chris: I have the sense that, as of the last 6 months, a lot of tech now thinks an intelligence explosion is more plausible than they did previously. But I don’t feel like I’m hearing a lot about that changing people’s minds on the importance of alignment and control research.
Dean: do people really think alignment and control research is unimportant? it seems like a big part of why opus is so good is the approach ant took to aligning it, and like basically everyone recognizes this?
Chris: I’m not sure they think it’s unimportant. It’s more that around a year ago a lot of people would’ve said something like “Well, some people are really nervous about alignment and control research and loss of control etc, but that’s because they have this whole story of AI foom and really dramatic self-improvement. I think that story is way overstated, these models just don’t speed me up that much today, and I think we’ll have issues with autonomy for a long time, it’s really hard.” So, they often stated their objection in a way that made it sound like rapid progress on AI R&D automation would change their mind. To be clear, I think there are stronger objections that they could have raised and could still raise, like “we will then move to hardware bottlenecks, which will require AI proliferation for a true speed up to materialize”. Also, sorry if it’s rough for me to not be naming specific names, would just take time to pull examples.