What do you mean by genius human engineers?
Part of my model is that like 25% of the math and CS PhDs on earth, and especially the ones that win Nobel prizes, will be working on this problem.
I don’t know. I think my median is FOOM in 2 years? This is an ass-number though. I don’t feel super confident.
I’m at like 90% probability that it happens within 10 years, and 95% that it happens within 35 years?
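(A minimal sketch of the arithmetic here, and my construction rather than anything from the conversation: it checks whether a single lognormal over years-to-FOOM can pass through all three stated quantiles, i.e. median 2 years, 90% by 10 years, 95% by 35 years.)

```python
from math import log, exp
from statistics import NormalDist

# Stated quantiles: median 2y, 90% within 10y, 95% within 35y.
# Hypothetical exercise: fit a lognormal to the first two quantiles
# and see where that fit puts the third.
nd = NormalDist()
mu = log(2)                                # the median pins down mu
sigma = (log(10) - mu) / nd.inv_cdf(0.90)  # the 90% quantile pins down sigma (~1.26)
implied_95 = exp(mu + sigma * nd.inv_cdf(0.95))
print(f"Implied 95% quantile: {implied_95:.1f} years")  # ~15.8 years
```

(The lognormal through the first two points puts the 95% quantile near 16 years, well inside the stated 35; that is, the stated numbers carry a fatter right tail than a single lognormal, which fits the hedged "ass-number, not super confident" framing.)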
(To be a bit more transparent: I had not previously considered “a hundred thousand technical PhDs all try really hard to crack AGI”; I haven’t heard that argument/scenario before and I don’t know what to think about that and it would substantially shorten my timelines. I don’t yet understand why that seems likely / what scenario about that seems likely to you.)
Why I think that might happen:
Background: almost nothing that most humans do actually requires fluid intelligence. Most people, most of the time, are executing routine cognitive operations. And most of the people who are using their fluid intelligence on the job could do just as well or better if they had a massive memory of case studies to extrapolate from instead of attempting novel reasoning.
Most of earth’s geniuses currently spend most of their time doing routine cognitive operations—pattern matching from their prior experience to solve problems, often in the context of automatable tasks like implementing experiments or solving engineering problems. When those classes of task are automated, it will free up the capacity of the geniuses.
At this point, most of the work in the world will be automated or in the process of being automated. Science and tech development will be going faster than ever in human history. It will be obvious to the whole world that AI is a really big deal.
Also, it will be obvious to many people that there’s something missing: the AIs are doing more and better design and engineering, faster, than human civilization ever did, and they’re accelerating the science, but they’re not doing the science. There will be enormous financial and strategic incentives to crack that.
Ok, thanks. I think I’ll probably have to chew on this scenario to say much of use. (I mean, I’ve thought about related things, but haven’t asked myself about this scenario.) My initial reaction is skepticism, which I think comes from a combo of:
LLMs are somewhat less useful than you seem to think
Humans apply somewhat more GI than you seem to think
The most important stuff would still be bottlenecked on human GI and would be hard to accelerate; you can’t just “free up” the humans in a super-liquid, fungible way
If this were happening, some pretty strong political forces would be at play, including hopefully / kinda probably (??) a strong push to stop the spiral
But I’m not super confident about any of that. It’s strategically relevant but ATM I don’t have much novel perspective to offer, and it seems to need some other expertise (e.g. a good understanding of politics, of science and tech research, and similar).
Ok, I see. Now, regarding your disjunction earlier of (in my words)
A. (NAA (nonAGI AI) takeover) You can get strategic takeover AI without AGI
B. (AGI soon easy) Gippity+ will soon be AGI by adding a bit more ~mundane human research juice
C. (AGI soon hard) Gippity+++ will soon be AGI by adding a couple big insights
First, to clarify, I think the discourse on this thread is that I asked you about
“why talk about nonAGI AI”
and you said
“because 2) maybe we already have AGI basically (scenario B, AGI soon easy), and because 1) you could get transformative AI with current nonAGI AI (scenario A, NAA takeover)”,
and now we are discussing what NAA looks like and how timelines look in NAA takeover world.
Now I’m wondering, what are your very approximate relative probabilities of these things? E.g. is one of them 90% of the source of your confidence (I mean, 90% of your prob mass) in FOOM within 10 years? If they are roughly equal, I would raise my eyebrow and say “that seems kinda strange, unless there’s a shared factor such as you thinking that actually we basically have ~AGI in current systems; if so, could you clarify that shared factor”.
As stated, these don’t have to sum to 1. B and C are mutually exclusive, but A can be true even if B or C is also true.
(I also object a bit to calling “strong fluid intelligence” “AGI.”
Part of what’s at stake is how far you can get with basically just specialized knowledge and the ability to train new specialized knowledge. It would be surprising to me, but not out of the question, if there were almost nothing an AI with more fluid intelligence could do that such an AI couldn’t. But I only object a bit.)
Ass numbers:
A: 80%
B: 40%
C: 30%
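(To make the “don’t have to sum to 1” point concrete, a minimal coherence check on these numbers; this is my arithmetic on the informal event definitions above, not how the numbers were produced.)

```python
# Ass numbers from above; this sketch only checks their mutual consistency.
p_a, p_b, p_c = 0.80, 0.40, 0.30

# B and C are mutually exclusive, so P(B) + P(C) may not exceed 1.
assert p_b + p_c <= 1.0  # 0.70, fine

# A can hold alongside B or C, so the disjunction is only bounded:
lower = max(p_a, p_b + p_c)        # a union is at least as likely as any sub-union
upper = min(1.0, p_a + p_b + p_c)  # union bound, capped at 1
print(f"P(A or B or C) lies in [{lower:.2f}, {upper:.2f}]")  # [0.80, 1.00]
```

(So the three numbers are mutually consistent, and they pin “at least one of A/B/C” at 0.80 or higher.)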
If they are roughly equal, I would raise my eyebrow and say “that seems kinda strange, unless there’s a shared factor such as you thinking that actually we basically have ~AGI in current systems; if so, could you clarify that shared factor”.
I mean that’s kind of fair. But I in fact don’t have a lot of precise ability to distinguish between “one key idea is missing” and “only engineering schlep is missing”. Those worlds look very similar to me, and so get similar amounts of mass.