Kurzweil predicted a singularity around 2040. That’s only 18 years away, so in order for us to hit that date, things have to start getting weird now.
I think this post underestimates the amount of “fossilized” intelligence on the internet. The “big model” transformer craze is like humans discovering coal and having an industrial revolution. There are limits to the coal, though, and I suspect the late 2020s and early 2030s might see one final AI winter as we bump into those limits and someone has to make AI that doesn’t just copy what humans already do.
But that puts us on track for 2040, and the hardware will continue to move forward, meaning that if there is a final push around 2040, the progress in those last few years may eclipse everything that came before.
As for alignment/safety, I’m still not sure whether the thing ends up self-aligning or something similarly pleasant, or whether alignment just becomes a necessary part of making a useful system as we move forward and lies/confabulation become more of a problem. I think 40% doom is reasonable at this stage because (1) we don’t know how likely these pleasant scenarios are and (2) we don’t know how the sociopolitical side will go: will there be funding for safety research or not? Will people care? With such huge uncertainties I struggle to deviate much from 50/50, though for anthropic reasons I predicted a 99% chance of success on Metaculus.
I’m curious as to what you think “getting weird” might mean. From my perspective, things are already “getting weird”. Three years ago, AI couldn’t generate good art, write college essays, write code, solve Minerva problems, beat players at StarCraft II, or generalise across multiple domains. Now, it can do all of those things. People who work in the field have trouble keeping up. People outside the field are frequently blindsided by things that appear to come out of nowhere, like “Did you know that I can generate artwork from text prompts?” and “Did you know I can use GPT-3 to write a passable essay?” and, just for me a few weeks ago, “Holy shit, GitHub Copilot just answered the question I was going to use as a linear algebra exercise.”
So, my definition of “weird” is something like “It’s hard for professionals in a field to keep up with developments, and non-professionals will be frequently blindsided by seemingly discontinuous jumps”, and I think ML has been doing that over the last few years.
What would you consider “getting weird” to mean?
No, I think you misunderstood me: I do agree that things are “getting weird”; I’m just saying that this is to be expected if we’re going to hit the 2040 date.
in order for us to hit that date things have to start getting weird now.
I don’t think this is necessary. Isn’t the point of exponential growth that a period of apparent normalcy can be followed by rapid, dramatic changes? Example: the area of a pond covered by lily pads doubles each day, yet the growth only becomes noticeable in the last several doublings.
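To make that intuition concrete, here is a minimal sketch in Python; the 48-day horizon and the 1% “noticeable” threshold are illustrative assumptions, not numbers from the comment above:

```python
# A minimal sketch of the lily-pad intuition: with daily doubling, coverage
# stays imperceptible for most of the period and then saturates in the last
# handful of doublings. The 48-day horizon and the 1% "noticeable" threshold
# are illustrative assumptions, not taken from the comment.

def coverage_by_day(total_days: int) -> list[float]:
    """Fraction of the pond covered on each day, given full coverage on the final day."""
    return [2.0 ** (day - total_days) for day in range(1, total_days + 1)]

if __name__ == "__main__":
    days = 48
    cover = coverage_by_day(days)
    first_noticeable = next(d for d, c in enumerate(cover, start=1) if c >= 0.01)
    print(f"Fully covered on day {days}.")
    print(f"Coverage first exceeds 1% on day {first_noticeable}, "
          f"so only the last {days - first_noticeable + 1} days look 'weird'.")
    print(f"Coverage one week before the end (day {days - 7}): {cover[days - 8]:.3%}")
```

Run as-is, this reports that coverage only crosses the 1% threshold on day 42 of 48: essentially all of the visible change happens in the final week, even though the doubling rate never changes.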
I’d love to hear about why anthropic reasoning made such a big difference for your prediction-market prediction. EDIT: Nevermind. Well played.