Tenoke
Separately, I am not sure what your comment is supposed to be doing.
My comment is a pretty neutral response to the central claim ‘Dario probably doesn’t believe in superintelligence’ (which you specify you actually believe, and isn’t just a clickbait headline) and to the arguments for it. Do you react like this to all comments that disagree with you? May I suggest just engaging with the arguments in the comment rather than having such a kneejerk reaction?
> Unless you mean to claim that I am wrong about how the thing he’s describing in MoLG is actually compatible with the kind of superintelligence I’m imagining?
Your definition is:
> Roughly speaking, that the returns to intelligence past the human level are large, in terms of the additional affordances they would grant for steering the world, and that it is practical to get that additional intelligence into a system.
As I said, I think ‘a country of supergeniuses in a datacenter’ fits, yes:
>A country of geniuses in a datacenter is pretty clearly “Superintelligence” and he pretty clearly believes in it. He seems rather to believe that Superintelligence wouldn’t solve everything quite as quickly as others think.
And MoLG itself says:
> We could summarize this as a “country of geniuses in a datacenter”.
You can also use more recent sources, e.g. 2 months ago here, where he discusses the topic with Demis Hassabis. They differ on timelines, how fast it’d achieve things, etc., but clearly they both believe in it.
I suspect you wouldn’t believe a claim that it has improved things 4x (or whatever) months down the line either, and I struggle to see under what scenario you’d believe them, just as you don’t trust this survey.
Relatedly, what do you believe the current improvement is for them from using their pre-Mythos models compared to if they used no AI? Is it close to nothing?
Also, don’t your estimates (that if it were 4x, timelines would be shortened by X) forget that this survey compares to using no AI at all, while current timelines are (I hope) already based on researchers using AI?
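To make the arithmetic concrete, here is a minimal worked example; the 1.5x baseline is a purely hypothetical number I am assuming for illustration:

\[
\frac{4\times\ \text{(measured vs. no AI)}}{1.5\times\ \text{(speedup already baked into current timelines)}} \approx 2.7\times\ \text{additional speedup}
\]

So even taking such a survey at face value, timelines that already assume AI-assisted research would shorten by roughly 2.7x, not 4x.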
This from June lists a lot of people who have read it, including Stephen Fry, Grimes, professors, etc. Separately, on Twitter seemingly anyone who was someone in the scene had given their opinion after having read it.
Any thread from the first announcement onward had people saying they’d already read it. From the same thread (and that was early on):
> Many people (like >100 is my guess), with many different viewpoints, have read the book and offered comments.
Note that IFP (a DC-based think tank) recently had someone deliver 535 copies of their new book to every US Congressional office.
More endorsements, and there are also a lot of Twitter personalities who mentioned reading it, which I won’t hunt down. It definitely felt like a lot more than 50. I’m not arguing it’s a bad or good strategy, just that it felt a bit off to wait for months on a ‘pre-order’ when anyone I might see on Twitter who would’ve been interested had already read it.
While I did it mostly out of support and to reduce x-risk, pre-ordering “If Anyone Builds It, Everyone Dies” has been one of the more frustrating book-ordering experiences I’ve had. The main purpose of the pre-order campaign looks to have been successful enough, and the actual reading experience doesn’t matter all that much, but still:
I pre-ordered in mid-May as soon as I heard about it, and since then it’s been months of nearly everyone on the Internet having already read it. Then pre-order prices were lowered (barely relevant, but it seems a bit backwards), and now that it’s been ‘out’, I still don’t have the book (or even an estimated shipping date, from Amazon Germany), while everyone who hadn’t yet posted about it has been posting reviews etc.
This is kind of annoying, as I’m not reading any of the commentary now; reading the book firsthand when I’ve already pre-ordered it would seem to make more sense. But by the time I even get it, it’ll be well after most of the initial conversation has happened, so at this point I’m having a worse experience for having pre-ordered it.
Again, that experience is not that important, and I’ve benefited a lot from Eliezer’s other writing before, but it’s disappointing enough to vent in at least one comment before taking the L and moving on.
While I believe SC2 and Dota would fall today given sufficient effort, the models at the time didn’t quite perform superhumanly, and as far as I am aware no community bots do either.
One of the reasons it’s plausible that today’s or tomorrow’s LLMs could produce brief simulations of consciousness, or even qualia, is that the same thing happens with dreams in humans. Dreams are likely some sort of information processing/compression/garbage collection, yet they still produce (badly) simulated experiences as a clear side effect of working with human experience data.
I still want something even closer to GiveWell but for AI Safety (though it is easier to find where to donate now than it used to be). Hell, I wouldn’t mind if LW itself had recommended charities in a prominent place (though I guess LW now mostly asks for Lightcone donations instead).
Thanks for sharing this. Based on the About page, my ‘vote’ as an EU citizen working in an ML/AI position could conceivably count for a little more, so it seems worth doing. I’ll put it in my backlog and aim to get to it on time (it does seem like a lengthy task).
If you don’t know who to believe, then falling back on prediction markets, or at least expert consensus, is not the worst strategy.
Do you truly not believe that, for your own life (to use the examples there), solving aging, curing all disease, and solving energy would be even more valuable? To you? Perhaps you don’t believe those are possible, but then that’s where the whole disagreement lies.
And if you are talking about Superintelligent AGI and automation, why even talk about output per person? I thought you at least believe people get automated out, and output is thus decoupled from them?
Does he not believe in AGI and Superintelligence at all? Why not just say that?
> AI could cure all diseases and “solve energy”. He mentions “radical abundance” as a possibility as well, but beyond the R&D channel
This is clearly about Superintelligence, and the mechanism through which it would happen in that scenario is straightforward and often talked about. And if he disagrees, he either doesn’t believe in AGI (or at least advanced AGI), or believes that solving energy and curing disease are not that valuable? Or he is purposefully talking about a pre-AGI scenario while arguing against post-AGI views?
> to lead to an increase in productivity and output *per person*
This quote certainly suggests this. It’s just hard to tell whether it’s due to bad reasoning or on purpose, to promote his start-up.
AI 2027 is more useful for the arguments than for the specific year, but even if you aren’t as aggressive, prediction markets (or at least Manifold) predict a 61% chance before 2030, 65% before 2031, and 73% by 2033.
I, similarly, can see it happening slightly later than 2027-2028 because some specific issues take longer to solve than others, but I see no reason to think a timeline beyond 2035 like yours, let alone 30 years, is grounded in reality.
It also doesn’t help that when I take your arguments and apply them to what would then have seemed like a very optimistic 2020 forecast about progress in 2025 (or even Kokotajlo’s last forecast), those same arguments would have similarly rejected what has actually happened.
I believe he means rationality-associated discourse, and it’s not like there are so many contenders.
There’s indeed been no one with that level of reach who has spread as much misinformation and started as many negative rumors in the space as David Gerard and RW. Whoever the second-closest contender is, they’re likely not even close.
You can trace back to him A LOT of the negative press online that LW, EY, and a ton of other places and people have gotten. If it weren’t for RW, LW would be much, much more respected.
It’s hard for me to respect a Safety-ish org so obviously wrong about the most important factors of their chosen topic.
I won’t judge a random celebrity for expecting e.g. very long timelines but an AI research center? I’m sure they are very cool people but come on.
As in, ultimately more people are likely to like their condition and agree (comparatively more) with the AI’s decisions, while having roughly equal rights.
Democratic in the ‘favouring or characterized by social equality; egalitarian’ sense (one of the definitions from Google), rather than anything about elections or whatever.
For example, I recently wrote a Short Story of my Day in 2035 in the scenario where things continue mostly like that and we get positive AGI that’s similar enough to current trends. There, people influenced the initial values (mainly via The Spec) and can in theory vote to make some changes to The Spec that governs the general AI’s values, but in practice by that point AGI controls everything and it’s more or less set in stone. Still, it overall mostly tries to fulfil people’s desires (overly optimistic that we go this route, I know).
I’d call that more democratic than one that upholds CCP values specifically.
Western AI is much more likely to be democratic and to place humanity’s values a bit higher up. A Chinese one is much more likely to put CCP values and control higher up.
But yes, if it’s the current US administration specifically, neither option is that optimistic.
While showing the other point of view and all that is a reasonable practice, it’s disappointing of Dwarkesh to use his platform specifically to promote this anti-safety start-up.
There used to be a lot of arguments about AI timelines 5+ years ago of the sort ‘if AI is coming, why are the markets not reacting?’. We’re now on the other side, already within the time horizon that markets react to, where the markets themselves are pointing in the direction of AGI, and people instead wonder how to discount that (e.g. by saying it is a bubble, or that trends must slow).