Riceissa’s question was brief, so I’ll add a bunch of my thoughts on this topic.
I also remember there was something of a hush around the broader x-risk network on the topic of timelines, sometime around the time of FLI’s second AI conference. Since then I’ve received weird mixed signals about what people think, delivered in hushed tones of being very worried/scared. The explicit content is of a similar type to Sam Altman’s line “if you believe what I believe about the timeline to AGI and the effect it will have on the world, it is hard to spend a lot of mental cycles thinking about anything else”, but it is rarely accompanied by an explanation of the reasoning that led to that view.
I think that you can internalise models of science, progress, computation, ML, and geopolitics, and start to feel like “AGI being built” is part of your reality, your world-model, and then figure out what actions you want to take in the world. I’ve personally thought about it a bit and come to some of my own conclusions, and I’ve generally focused on plans designed for making sure AGI goes well. This is the important and difficult work of incorporating abstract, far ideas into your models of near-mode reality.
But it also seems to me that a number of x-risk people looked at many of the leaders getting scared, and that is why they believe the timeline is short. This is how a herd turns around and runs in fear from an oncoming jaguar: most members of the herd don’t stop to check for themselves; they trust that everyone else is running for good reason. More formally, this is known as an information cascade. It is often the rational thing to do when people you trust act as if something dangerous is coming at you. You don’t stop and actually examine the evidence yourself.
(I personally experience such herd behaviour commonly when using the train system in the UK. When a train is cancelled and 50 people are waiting beside it, I normally can’t see the board that announces which platform the replacement train leaves from, as it’s only visible to a few of the people, yet very quickly all 50 people are moving to the new platform. I also see it when getting off a train at an unfamiliar station, where most people don’t really know which way to walk to get out of the building: coming straight off the train, is it left or right? The first few people tend to make a judgement, and basically everyone else follows them. I’ve sometimes done it myself: been the first off, started walking confidently in a direction, and had everyone confidently follow me. It always feels a little magical for a moment, because I know I just took a guess.)
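The cascade dynamic above is easy to see in a toy model. Here is a minimal sketch in the spirit of the standard sequential-decision model of information cascades; the function name and parameters are my own invention for illustration. Each agent gets a noisy private signal about the true state, sees all earlier agents’ choices, and rationally copies the herd once the public evidence outweighs any single private signal:

```python
import random

def run_cascade(n_agents=50, signal_accuracy=0.7, seed=0):
    """Simulate a simple information cascade.

    Each agent privately observes a signal about the true state
    (True = 'danger ahead'), correct with probability `signal_accuracy`,
    and also sees every earlier agent's action. Once the net weight of
    public actions exceeds the weight of one private signal, following
    the herd is the rational choice, and private evidence stops mattering.
    Returns the list of actions taken, in order.
    """
    rng = random.Random(seed)
    true_state = True
    actions = []
    for _ in range(n_agents):
        # Private signal: correct with probability `signal_accuracy`.
        signal = true_state if rng.random() < signal_accuracy else not true_state
        # Net public evidence: +1 for each 'danger' action, -1 for each 'safe'.
        lead = sum(1 if a else -1 for a in actions)
        if lead >= 2:        # herd evidence outweighs my signal: copy 'danger'
            action = True
        elif lead <= -2:     # herd evidence says 'safe': copy that instead
            action = False
        else:                # public evidence is weak: follow my own signal
            action = signal
        actions.append(action)
    return actions
```

A notable property of this model is that the herd typically locks into unanimous behaviour within the first handful of decisions, and with nontrivial probability locks onto the wrong answer: the first two or three agents’ noise determines what everyone else does.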
But the unusual thing about our situation is that when you ask the leaders of the pack why they think a jaguar is coming, they’re very secretive about it. In my experience, many clued-in people will explicitly recommend not sharing information about timelines. I’m thinking of OpenPhil, OpenAI, MIRI, FHI, and so on. I don’t think I’ve ever talked to people at CFAR about timelines.
To add more detail to my claim that many consider it ‘the’ decision-relevant variable, here are two quotes. Ray Arnold is a colleague and friend of mine, and two years ago he wrote a good post on his general updates about these subjects, which said the following:
Claim 1: Whatever your estimates two years ago for AGI timelines, they should probably be shorter and more explicit this year.
Claim 2: Relatedly, if you’ve been waiting for concrete things to happen for you to get worried enough to take AGI x-risk seriously, that time has come. Whatever your timelines currently are, they should probably be influencing your decisions in ways more specific than periodically saying “Well, this sounds concerning.”
Qiaochu also talked about it as the decision-relevant question:
[Timelines] are the decision-relevant question. At some point timelines get short enough that it’s pointless to save for retirement. At some point timelines get short enough that it may be morally irresponsible to have children...
Ray talks in his post about how much of his belief on this topic comes from trusting another person closer to the action, which is a perfectly reasonable thing to do, though I’ll point out again that it’s also (if lots of people do it) herd behaviour. Qiaochu talks about how he never figured out the timeline to AGI with an explicit model, even though he takes short timelines very seriously, which also sounds like a process that involves a lot of trusting others.
It’s okay to keep secrets, and in a number of cases it’s of crucial importance. Much of Nick Bostrom’s career is about how some information can be hazardous, and about how not all ideas are safe at our current level of wisdom. But it’s important to note that “short timelines” is a particular idea that has had the herd turn around and run in fear to solve an urgent problem, while there have been a lot of explicit recommendations not to give people the information they’d need to make a good judgement about it. Those two things together are always worrying.
It’s also very unusual for this community. We’ve been trying to make things go well with respect to AGI for over a decade, and until recently we’ve put all our reasoning out in the open. Eliezer and Bostrom published so much. And yet now this central decision node, “the decision-relevant variable”, is hidden from the view of most people involved. It’s quite strange, and it’s generally the sort of situation that is at risk of abuse by whatever process gets to decide what the ‘ground truth’ is. I don’t believe the people involved in being secretive about AI timelines have spent anywhere near as much time thinking about the downsides of secrecy, or put in the work to mitigate them. Of course, I can’t really tell, given the secrecy.
All that said, as you can see in the quotes/links that Robby and I provided elsewhere in this thread, I think Eliezer has made the greatest attempt of basically anyone to explain how he models timelines, and he wrote very explicitly about his updates after AlphaGo Zero. And the Fire Alarm post was really, really great. In my personal experience, the reasoning in the quotes above is fairly consistent with how Eliezer reasoned about timelines before the deep learning revolution.
I think one factor likely to be highly relevant is that companies like DeepMind face a natural incentive to obscure understanding of their progress and to be the sole arbiters of what is going to happen. I know they’re very careful about requiring all visitors to their offices to sign NDAs, and about requiring employees to get permission for any blog posts about AI they plan to publish. I’d guess a substantial amount of this effect comes from there, but I’m not sure.
Edit: I edited this comment a bunch of times because I initially wrote it quickly, and didn’t quite like how it came out. Sorry if anyone was writing a reply. I’m not likely to edit it again.
Edit: I think it’s likely I’ll turn this into a top level post at some point.
FWIW, I don’t feel this way about timelines anymore. I’m now a lot more pessimistic, and think the estimates are mostly just noise.