None of this exists now, though. Speculating about the future when it depends on all these unknowns and never-before-seen capabilities is dangerous: you're virtually certain to be wrong. The uncertainty comes from all the moving parts in your model. Like you have:
Immense amounts of compute easily available
Accurate simulations of the world
Fully automated AGI: no humans helping at all, and the model never gets stuck or crashes from a bug in the underlying framework
ASI enormously past human capabilities. Not just a modest amount.
The reason you are probably wrong is just probability: if each step has a 50 percent chance of being right, the whole thing is 0.5^4. Don't take this as me saying you're wrong.
And only with all these pieces in place are humans maybe doomed and soon to cease to exist. Therefore we should stop everything today.
While if just one piece is wrong, then this is the wrong choice to make. Right?
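A quick sanity check on the conjunction arithmetic above (a toy sketch, using the stated 50 percent figure and assuming the four pieces are independent):

```python
# Toy model of the conjunction argument: four pieces, each with an
# (assumed) 50% chance of being right, treated as independent.
p_each = 0.5
n_pieces = 4

# Probability that ALL pieces hold is the product of the individual ones.
p_all = p_each ** n_pieces
print(p_all)  # 0.0625: about a 6% chance the full scenario holds
```

This is the "you're virtually certain to be wrong" math made explicit; the independence assumption is the part the reply below pushes back on.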
You're also up against a pro-technology prior. Meaning I think you would have to actually prove the above, demo it, to convince people this is the actual world we are in.
That's because “future tech, instead of turning out to be overhyped, is going to be so amazing and perfect it can kill everyone quickly and easily” goes against all the priors where tech turned out to be underwhelming and not that good. Like convincing someone the wolf is real when there have been probably a million false alarms.
I don't know how to think about this correctly. Like I feel like I should be weighing the mountain of evidence I mentioned, but if I do that then humans will always die to the ASI. Because there's no warning. The whole threat model is that these are capabilities that are never seen prior to a certain point.
The whole threat model is that these are capabilities that are never seen prior to a certain point.
Yep, that's how ChatGPT is a big deal for waking up policymakers, even though it's not exactly relevant. I see two paths to a lasting pause. First, LLMs keep getting smarter and something object-level scary happens before there are autonomous open-weight AGIs, and policymakers shut down big models. Second, 1e29 FLOPs is insufficient with LLMs, or LLMs stop getting smarter earlier and 1e29 FLOPs models are not attempted, and models at the scale that's reached by then don't get much smarter. It's still unlikely that people won't quickly find a way of using RL to extract more and more useful work out of the kind of data LLMs are trained on, but it doesn't seem impossible that it might take a relatively long time.
Immense amounts of compute easily available
The other side to the argument for AGI on an RTX 2070 is that the hardware that was sufficient to run humanity's first attempt at AGI is sufficient to do much more than that when it's employed efficiently.
Fully automated AGI: no humans helping at all, and the model never gets stuck or crashes from a bug in the underlying framework
This is the argument's assumption: the first AGI should be sufficiently close to this to fix the remaining limitations, making full autonomy reliable, including at research. That could require another long training run, if cracking online learning directly would take longer than that run.
ASI enormously past human capabilities. Not just a modest amount.
I expect this, but it's not necessary for the development of a deep technological culture using a serial speed advantage at a very smart human level.
Accurate simulations of the world
This is more an expectation based on the rest than an assumption.
The reason you are probably wrong is just probability: if each step has a 50 percent chance of being right, the whole thing is 0.5^4.
These things are not independent.
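The point about non-independence can be made concrete with a toy model (my own illustrative numbers, not from the discussion): suppose the four pieces share a common driver, e.g. capabilities keep scaling, so they tend to hold or fail together.

```python
# Toy model of correlated pieces: a shared driver makes all four pieces
# likely together. Numbers are illustrative assumptions only.
p_driver = 0.5            # P(common driver holds)
p_piece_given_driver = 0.95
p_piece_given_not = 0.05

# Marginal probability of any single piece (law of total probability)
# still comes out to 50%, matching the independent model above.
p_piece = (p_driver * p_piece_given_driver
           + (1 - p_driver) * p_piece_given_not)
print(p_piece)  # 0.5

# But the conjunction of all four pieces is far above 0.5^4 = 0.0625,
# because conditional on the driver they are nearly certain together.
p_all = (p_driver * p_piece_given_driver ** 4
         + (1 - p_driver) * p_piece_given_not ** 4)
print(round(p_all, 3))  # 0.407
```

Each piece alone still looks like a coin flip, yet the scenario as a whole is roughly a 40 percent proposition rather than a 6 percent one, which is why multiplying the step probabilities understates correlated scenarios.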
Speculating about the future when it depends on all these unknowns and never-before-seen capabilities is dangerous: you're virtually certain to be wrong.
That’s an argument about calibration. If you are doing the speculation correctly, not attempting to speculate is certain to leave a less accurate picture than doing it.
If you feel there are further issues to discuss, PM me for a dialogue.