How surprising is this to alignment community professionals (e.g. people at MIRI, Redwood Research, or similar)? From an outside view, the volatility/flexibility and the movement away from pure growth and commercialization seem unexpected and could be to alignment researchers' benefit (although it's difficult to see the repercussions at this point). It's surprising to me because I don't know the inner workings of OpenAI, but I'm surprised that it seems similarly surprising to the LW/alignment community as well.
Perhaps the insiders are still digesting and formulating a response, or want to keep hot takes to themselves for other reasons. If not, I'm curious whether there is actually so little information flowing between alignment communities and companies like OpenAI that this would be as surprising to insiders as it is to an outsider. For example, there seem to be many people at Anthropic who are directly in, or culturally aligned with, the LW/rationality community, and I expected the same to be true to a lesser extent for OpenAI.
I understood there was real distance between the groups, but still, I had a more connected model in my head, and that model is challenged by this news and by the response in the first day.
I still don't follow why EY assigns seemingly <1% chance to non-earth-destroying outcomes in 10-15 years (I'm not sure the figure is actually 1%, but EY didn't argue with the ~0% comments on last year's "Death with dignity" post). This seems to treat fast takeoff, i.e. unrestricted, rapid recursive design of AIs by AIs, as the inevitable path forward. But compute bottlenecks alone seem likely to slow things down, and there may be other bottlenecks we can't think of yet. That is just one obstacle; why isn't more probability mass assigned to it? Surely there are further obstacles that aren't obvious (and that we perhaps shouldn't talk about).
It feels like we have a communication failure between different cultures. Even if EY thinks the top industry brass is incentivized to ignore the problem, there are many (non-alignment-oriented) researchers capable of grasping the 'security mindset' who could be won over. Both in this interview and in the referenced Chollet response, EY's arguments don't always help the other party bridge from their view over to his. Instead they go off on 'nerdy/rationalist-y' tangents and idioms that end up as walls rather than bridges: they do little to advance the main point, and mostly serve the argument by showing that EY is smart and knowledgeable about this field and others.
Are there any publicly digestible arguments out there for this level of confident pessimism that would be useful for industry folk? By publicly digestible, I'm thinking of the style of popular books like Superintelligence or Human Compatible.