Also, I am just surprised I seem to be the only one making this fairly obvious point (?), and it raises some questions about our group epistemics.
First and foremost, I want to acknowledge the frustration and more combative tone in this post, and ask whether it is more a pointer towards confusion about how we can be getting this so wrong?
I think more people are in a similar camp to you, but it feels really hard to change the group epistemics around this belief? It feels quite core, and even after longer conversations with people about the underlying problems with the models, I find it hard to pull them out of the AGI IS COMING attractor state. If you look at the AI Safety community as an information network, there are certain clusters that are quite tightly coupled in terms of epistemics, and for me timelines seem to be one of these dividing lines. The talk about it has become a bit more like politics, where it is war and arguments are soldiers.
I don’t think this is anyone’s intention, but our emotions usually create our frame, and if you believe that AGI might come in two years and that we’re probably going to die, it is very hard to remain calm.
The second problem is that claims about timelines and the reasoning capacity of models are very hard to forecast empirically, and I think it often comes down to an individual’s views on philosophy of science. What frames are you using to predict useful real-world progress? How well are those frames coupled with raw performance on MMLU or Humanity’s Last Exam? These are complicated questions and hard to answer, so I think a lot of people just fall back on vibes.
The attractor state of those vibes is a more anxious one, and so we get a collective cognitive effect where fear in an information network amplifies itself.
I do not know what is right. I do know that it can be hard to have a conversation about shorter timelines with someone who holds shorter timelines, because of a state of justifiable emotional tension.
This all seems right—this is probably my most (and only?) “combative” post and I wish I’d toned it down a bit.
(You can always change the epistemic note at the top to include this! I think it might improve the probability of a disagreeing person changing their mind.)
Yeah.