I’ll just say that this doesn’t read like Linch.
Amalthea
I agree that based on this we should assume large jumps in capability to be possible (if and when we get said algorithmic progress). I think this doesn’t directly address the ‘scaling will lead to AGI’ claim though: It’s at least plausible that large enough LLMs can be “generally intelligent enough” to outperform humans across the board on general reasoning tasks.
I think it’s a good argument, but Anthropic doesn’t seem quite aligned enough to make it work. E.g. they don’t seem to have been pushing for a coordinated Pause to any real extent (and if they don’t think this would be a good idea, they haven’t clarified their position as far as I know).
The study seems to be about what is predicted by experts to be possible, not what is possible afaict?
Is there a mechanism to explicitly run a proposed agreement by the regulator to get their OK?
You’d need the alternative workable approach to not be basically runnable on GPUs, which is maybe plausible, but seems optimistic?
(E.g. anything that can run on a computer would most likely profit quite a bit from the cheap GPU compute even if it’s overall more complex and the current optimizations aren’t as targeted)
I agree with the general concern, but it’d be clearly a move in the right direction on that front?
With this kind of proposal I’m more worried that it could lead to a unilateral slowdown just after having spurred China to be much more aggressive on AI.
This is the “stacked S-curves” effect often seen in the maturing of (usually ordinary) technologies. It’s perhaps slightly unusual that it’s more pronounced and “discrete” right now (relatively few innovations leading to large amounts of progress).
The other angles are probably already out there, but haven’t been given the chance to shine while the current paradigm can be sufficiently leveraged, so I’m not very hopeful about progress stalling by itself.
We really need the update! I was going to share this with someone who has just now been hit with the full emotional force of realizing what’s going on with AI… but even this passage right at the beginning doesn’t seem so applicable anymore:
> Some combination of ‘we run out of training data and ways to improve the systems, and AI systems max out at not that much more powerful than current ones’ and ‘turns out there are regulatory and other barriers that prevent AI from impacting that much of life or the economy that much’ could mean that things during our lifetimes turn out to be not that strange. These are definitely world types my model says you should consider plausible.

Maybe still plausible, but imo less likely at this point than even “all the politicians and world leaders are going to wake up and are going to implement a halfway sensible solution”.
Narration doesn’t work on this.
I think it’s unclear that this was overall bad for Anthropic/Amodei if you factor in the reputational and ideological boost they got (“aura farming” according to roon).
I recently thought something like “community notes, but for the internet” would be awesome, but you’d need a critical mass of people.
Using the kind of thing presented by OP for bootstrapping, combined with some mechanism to use (in the near term) humans as the ultimate arbiters of reliability, could be pretty fun.
Sure, assuming the development of your cure doesn’t have substantial negative externalities, which is the whole point of the AI debate. I understand that your stance is “the risks are not that high”, but it’s worth pointing out that this is really a core assumption that the rest of your position is based on.
I’d venture an uninformed guess that in 95% or so of these cases the problem isn’t “taking ideas seriously” but rather people suspending proper judgement due to some emotional or social effect.
One point that I tend to believe is true, but that I don’t see raised much:
The straightforward argument (machines that are smarter than us might try to take over) is intuitively clear to many people, but for whatever reason many people have developed memetic antibodies against it (e.g. talk about the AI bubble, “AI risk is all sci-fi”, “technological progress is good”, “we just won’t program it that way”, etc.).
In my personal experience, the people I talk to with a relatively basic education and who are not terminally online are much more intuitively concerned about AI than either academics or people in tech, since they haven’t absorbed so much of the bad discourse.
(The other big reason for people not taking the issue seriously is people not feeling the AGI, but there’s been less of that recently.)
Sorry if that was weirdly obscure. I was asking because the principal reason I go out of my way to avoid rain is that I’m worried my phone would get wet and potentially die (and I’ve been somewhat sad about having to forego the experience of braving the rain at points). But it’s possible that this is not a big issue with current devices (and maybe never was)!
One general point I’ve heard in this regard is that Japan’s debt is mostly held by large Japanese companies, and so carries a much smaller risk for the government.
Do you carry a smartphone with you on those occasions?
Wow, kudos—I genuinely wasn’t sure what was going on with the essay since the tone was indeed almost Borges-style (or perhaps a more grounded Bjartur).