Moving, but I have to wonder whether rushing was net positive ex ante in expected lives, or even in expected lives saved from the Titanic, not counting the Carpathia’s passengers. Quite plausibly it was, and the Carpathia’s captain knew it and wouldn’t have done it otherwise, but I can’t tell that just from reading the story.
Sure, whatever.
Do you think the distinction Jared is making isn’t real?
(I see a parallel to the current discourse about violence: some people assume transgression must be profitable, without having a reasonable story about how, and despite the obvious problems with any such story.)
Thanks! The distinction between “generating capabilities” and “hoovering up capabilities” is another small click for me.
‘why isn’t the thought process you’re using to say that surprised, and downvoted, by how little stuff we needed to make LLMs?’
After I read this comment, my hasty-guess-of-a-Tsvi-model replies: ‘the big surprise is that “solid performance on a wide range of technical tasks is not that connected to GI.” This surprise sufficiently explains the surprise of ~easily achieving that performance. Any ex ante expectation that those tasks required lots of understanding would/should have been mediated by expecting they required GI. Given that they don’t require GI, it’s [not surprising? / not relevantly surprising?] that they don’t require much understanding.’
This comment felt like it made a better model of your views click. ISTM you think something like:
All the impressive ML results so far have only worked either in a narrow subspace around the training data (e.g. LLMs, still mostly the case even with RL), or in very small worlds (e.g. pure-RL game-players). There has been ~zero progress on fluid/general intelligence. Therefore, extrapolating straight lines on graphs predicts ~zero progress on fluid/general intelligence by doing more of the same kind of thing. The induction on increasing ‘intelligence’ that lots of other people appeal to only works by inappropriate compression.
It’s still likely that we live in something like the 2011-Yudkowsky world as described in this tweet, with AGI to come from a lot of accumulation of insight. ML successes misleadingly make that world look falsified, if you aren’t tracking what they are and aren’t successes at.
(Implied) The fact that [ML results so far required surprisingly little understanding-of-intelligence] is not significant evidence that [other-things-you-might-expect-to-require-understanding, e.g. fluid intelligence, will require less understanding]. If we’ve learned something about how little understanding-of-intelligence was needed to build things that succeed on some tasks, this still just doesn’t say much about AGI.
(Or maybe you don’t believe that ‘fact’ about ML results so far, idk.)
Intuitively-to-me, there should be a big inductive update on this level, even if induction on ‘intelligence’ doesn’t work.
Like, it’s evidence against the way of thinking that says understanding of intelligence is important. When you say (implicitly) ‘we probably need lots of AGI seedstuff’, I want to say ‘why isn’t the thought process you’re using to say that surprised, and downvoted, by how little stuff we needed to make LLMs?’.
I didn’t mean it’s not name-calling, I meant “the administration” is not “half the population”.
if (our best model of) the laws of physics fit on a postcard, what exactly does it mean to need to do an experiment, in principle? You need experiments to nail down the laws. Beyond that, they’re convenient for reducing computational requirements
The general point is solid, but you also need experiments to learn contingent things within physics, e.g. how biology works.
My guess is that taking seriously the very concept of manipulation often makes [the type of person you’re talking about] uncomfortable, because it undercuts an ethically load-bearing abstraction of rational agency, and its fuzziness threatens to license paternalism with (what might feel like) no principled limit. (This is a genuinely hard problem.) This cashes out similarly to ‘people who are manipulated deserve it’, but I think isn’t quite the same thing.
aggressive name-calling at half of the population
Without prejudice to your larger point, the OP is literally not doing this.
You might not be aware LW is open-source?
Though note the difference between ‘single-PC compute after more expensive experimental work’ (what it sounds like Steven is predicting, and Habryka is assuming) and ‘single-PC compute without that’ (what Adam is predicting).
It sounds like I have a higher expectation of a much more efficient paradigm (a la e.g. Steven Byrnes) being feasibly discovered through purely theoretical work (though not necessarily single-2026-laptop efficient, or discovered on any particular schedule), which is coloring my takes here.
I agree that stigma is important and would reduce the level of intervention needed to shut down independent research. It’s only very recently that I’ve seen any discussion of stigma as load-bearing in pause scenarios, so I wasn’t thinking of it.
I don’t super understand why “AI chips that cost $1k+ can only run signed code” would be invasive in any meaningful way. I don’t really think it would change anyone’s life in any particularly meaningful way.
I was thinking of it as more invasive in that it affects (by limiting what code they can run) far more actors than in the nuclear case (what, reactor operators and uranium handlers?). If unrestricted general-purpose CPUs are still readily available, it does seem like nothing much would change in practice & the important freedoms would be preserved; combined with only a few chipmakers actually being liable for compliance, I can see calling this not more invasive.
Both Android phones and iPhones can only run signed code, and the vast majority of gaming happens on game consoles that can only run signed code.
(I do think it’s probably meaningful that these aren’t legal mandates, and more meaningful that unrestricted platforms are also readily available.)
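(Side note for anyone who hasn’t run into code signing: below is a minimal sketch, in Python, of what “can only run signed code” cashes out to mechanically. I’m assuming an Ed25519-style check baked into chip firmware, and every name in it is hypothetical; real secure-boot schemes add key hierarchies, revocation, and attestation on top of this.)

```python
# Illustrative sketch only: what "refuses to run unsigned code" means mechanically.
# All names are hypothetical; real secure-boot / attestation schemes are far more involved.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def load_if_signed(binary: bytes, signature: bytes,
                   vendor_public_key_bytes: bytes) -> bool:
    """Allow `binary` to run only if `signature` verifies against the
    vendor public key baked into the (hypothetical) chip firmware."""
    vendor_key = ed25519.Ed25519PublicKey.from_public_bytes(vendor_public_key_bytes)
    try:
        vendor_key.verify(signature, binary)  # raises if the signature doesn't match
    except InvalidSignature:
        return False  # unsigned or tampered code: the chip refuses to execute it
    return True       # signed code: hand off to the execution engine
```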
For example the IAEA has heavily curtailed research into how to build nuclear weapons more cheaply and efficiently, which seems like it applies pretty straightforwardly to algorithmic progress.
I assume very few people are interested in doing independent research into improving nuclear weapons. If institutional AI algorithmic research were effectively banned, all else equal, I assume many more people would be interested in independently researching it, which would require more in-practice restriction on speech to curtail. (Based on your tweets I’m guessing you think that curtailing independent research wouldn’t be necessary and aren’t considering it here; but this may be a background disagreement with people saying invasive restrictions would be needed.)
A simple code-signing regime where high-performance chips are limited by a code-signing regime seems like it would also not be “drastically more invasive than the IAEA”.
Controlling widely-used hardware and only allowing approved code to run on it does seem drastically more invasive, sufficiently obviously so that I have no idea where you’re coming from here. If this only applied to the largest supercomputers I might not call it more invasive, but the whole premise of this thread is not-that.
Why do you think the idea is for only Washingtonians to participate?
Are the preservation and the discount card both fully transferable, not just in the sense of [designating someone else to be preserved], but [designating someone else to control them as if they’d bought them] (so that they’re resellable assets)?
if you had your window shade down you wouldn’t even notice it
You wouldn’t notice a change in apparent gravity, but you would (at least I would) notice the angular acceleration, like when entering or exiting a banked turn.
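(For concreteness, a back-of-the-envelope check; the coordinated-turn assumption and the ~20° bank angle are mine, not from the parent comment:)

```latex
% Sketch: coordinated turn at bank angle \theta, so lift stays perpendicular
% to the wings and "down" in the cabin stays aligned with the floor.
\begin{align*}
  L\cos\theta &= mg
    && \text{vertical force balance (lift } L\text{, mass } m\text{)} \\
  n = \frac{L}{mg} &= \frac{1}{\cos\theta}
    && \text{load factor: apparent gravity in units of } g \\
  \theta = 20^\circ \implies n &\approx 1.06
    && \text{only about a 6\% change in magnitude, easy to miss}
\end{align*}
% What you do feel is the roll onset when entering or exiting the turn,
% i.e. the angular acceleration about the roll axis, not a shifted "down".
```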
This is a problem with profit-maximization, or large corporations, or something, not billionaires — Facebook and Walmart would face the same incentives and do those same things even if their ownership were more distributed.
I think the rest of these points are colorable and appreciate you saying them, but
politicians offending foreign countries is not, in any sense of the word, an exceptional situation that demands exceptional action. Reagan famously joked that he was about to nuke the USSR, triggering an escalation in the alert state across East Asia.
Threatening to take territory from an ally by force is far beyond “offending foreign countries”, not precedented to my knowledge, and very bad.
This doesn’t follow. Something could be part of the default human-values package, but also reliably discarded under reflection.