Though note the difference between ‘single-PC compute after more expensive experimental work’ (what it sounds like Steven is predicting, and Habryka is assuming) and ‘single-PC compute without that’ (what Adam is predicting).
Nick_Tarleton
It sounds like I have more expectation of a much more efficient paradigm (a la e.g. Steven Byrnes) being feasibly discovered through purely theoretical work (though not necessarily single-2026-laptop efficient, or discovered on any particular schedule), which is coloring my takes here.
I agree that stigma is important and would reduce the level of intervention needed to shut down independent research. It’s only very recently that I’ve seen any discussion of stigma as load-bearing in pause scenarios, so I wasn’t thinking of it.
I don’t super understand why “AI chips that cost $1k+ can only run signed code” would be invasive in any meaningful way. I don’t really think it would change anyone’s life in any particularly meaningful way.
I was thinking of it as more invasive in affecting (by limiting what code they can run) far more actors (as opposed to, what, reactor operators and uranium handlers?, in the nuclear case). If unrestricted general-purpose CPUs are still readily available, it does seem like nothing much would change in practice & the important freedoms would be preserved; combined with only a few chipmakers actually being liable for compliance, I can see calling this not more invasive.
Both Android phones and iPhones can only run signed code, and the vast majority of gaming happens on game consoles that can only run signed code.
(I do think it’s probably meaningful that these aren’t legal mandates, and more meaningful that unrestricted platforms are also readily available.)
For example, the IAEA has heavily curtailed research into how to build nuclear weapons more cheaply and efficiently, which seems like it applies pretty straightforwardly to algorithmic progress.
I assume very few people are interested in doing independent research into improving nuclear weapons. If institutional AI algorithmic research were effectively banned, all else equal, I assume many more people would be interested in independently researching it, which would require more in-practice restriction on speech to curtail. (Based on your tweets I’m guessing you think that curtailing independent research wouldn’t be necessary and aren’t considering it here; but this may be a background disagreement with people saying invasive restrictions would be needed.)
A simple regime where high-performance chips can only run signed code seems like it would also not be “drastically more invasive than the IAEA”.
Controlling widely-used hardware and only allowing approved code to run on it does seem drastically more invasive, sufficiently obviously so that I have no idea where you’re coming from here. If this only applied to the largest supercomputers I might not call it more invasive, but the whole premise of this thread is not-that.
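For concreteness, the gating mechanism being debated here can be sketched in a few lines. This is a toy illustration, not a description of any actual proposal: real secure-boot schemes (phones, consoles) use asymmetric signatures checked against a vendor public key burned into the hardware; a shared-key HMAC stands in for that below purely to show the shape of “refuse to run unsigned code”.

```python
import hmac
import hashlib

# Hypothetical stand-in for a hardware root of trust; a real chip would
# hold only a public key and verify an asymmetric signature instead.
VENDOR_KEY = b"vendor-root-key"

def sign(code: bytes) -> bytes:
    """What an approved party would do before distributing code."""
    return hmac.new(VENDOR_KEY, code, hashlib.sha256).digest()

def chip_will_run(code: bytes, signature: bytes) -> bool:
    """What the chip's boot/loader logic would check before executing."""
    expected = hmac.new(VENDOR_KEY, code, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

approved = b"approved training job"
assert chip_will_run(approved, sign(approved))          # signed code runs
assert not chip_will_run(b"other code", sign(approved)) # anything else is refused
```

The policy question in this thread is then about who holds the signing key and which hardware the check is mandated on, not about the mechanism itself.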
Why do you think the idea is for only Washingtonians to participate?
Are the preservation and the discount card both fully transferable, not just in the sense of [designating someone else to be preserved], but [designating someone else to control them as if they’d bought them] (so that they’re resellable assets)?
if you had your window shade down you wouldn’t even notice it
You wouldn’t notice a change in apparent gravity, but you would (at least I would) notice the angular acceleration, like when entering or exiting a banked turn.
This is a problem with profit-maximization, or large corporations, or something, not billionaires — Facebook and Walmart would face the same incentives and do those same things even if their ownership were more distributed.
I think the rest of these points are colorable and appreciate you saying them, but
politicians offending foreign countries is not, in any sense of the word, an exceptional situation that demands exceptional action. Reagan famously joked that he was about to nuke the USSR, reportedly triggering a brief escalation in the alert state of Soviet forces in the Far East.
Threatening to take territory from an ally by force is far beyond “offending foreign countries”, not precedented to my knowledge, and very bad.
More recently, Tao on a different problem:
Recently, the application of AI tools to Erdős problems passed a milestone: an Erdős problem (#728 https://www.erdosproblems.com/728) was solved more or less autonomously by AI (after some feedback from an initial attempt), in the spirit of the problem (as reconstructed by the Erdős problem website community), with the result (to the best of our knowledge) not replicated in existing literature (although similar results proven by similar methods were located).
So it seems we now have a non-example, or something closer to one (depending on how you weight “similar results proven by similar methods”).
To elaborate on this, I think one of the arguments for verbal consent was along these lines: “Some women panic freeze and become silent when they’re uncomfortable, they aren’t capable of saying no in that state.”
This should be an argument for affirmative consent, which isn’t the same as verbal consent (like, you mention “waiting for… nonverbal makeout initiation”). I do see people conflate them or background-assume that the one must be the other, which I think makes these conversations a lot worse.
IME in this domain, in epistemology-for-humans terms that may or may not translate easily into Solomonoff/MDL, taking a compact generator of a complex phenomenon too seriously — like, concentrating probability too strongly on its predictions, not taking anomalies seriously enough or looking hard enough for them, insufficiently expecting there to be more to say that sounds different in kind — is asking to be wrong, and not doing that is a way to be less wrong.
(Looking for compact generators but not taking them too seriously is good, but empirically seems to require more skill or experience.)
You don’t need to single out a specific complex theory to say “this simple theory is concentrating probability too strongly”, or to expect there to be some complex theory that pays for itself.
Look, I don’t like dealing with the sort of stuff I called “deep nonconsent” in this post. Sure, I’m quite kinky in bed, but in the rest of the mating process?
I believe you, but I also find the conjunction of [your kinks] and [choosing the ‘nonconsent’ frame/terminology] interesting, and would guess that it’s not a coincidence. In particular, I believe I’ve observed across people that [things like the former] are associated with [biases and blind spots biasing toward things like the latter].
(And I feel like ‘nonconsent’ as frame/terminology is strained at best, and… bad, muddled in a worrying way, in a way that rhymes with what t.g.t.a. is saying.)
Another I played with was e.g. “blame avoidance”, i.e. something-like-ladybrain really wants any dating/sex to happen in a way which is “not her fault”. That seems to mostly generate the same predictions.
Do you think it has some disadvantage, such that you didn’t choose to mention it at all in the OP?
So yeah, I am totally ready to believe there’s some other nearby generator, and if you have one which also better explains some additional things then please state it, I want to know it.
A very nearby guess: women tending to prefer a ‘patient’ role in dating/mating, and/or tending to prefer men who take and are good at an ‘agent’ role — this has broader explanatory power for common male/female roles and attraction patterns.
(But also all the things you’re talking about, while anecdotally real, seem at least less broadly/strongly true to me than they do to you; women sending inexplicit-and-deniable-but-strong signals of interest seems more common and not a turnoff; flirting seems more symmetrical (not predicted by either of these models); etc.)
Reading this gave me the same feeling I used to get reading Brent’s stuff
FWIW, while I can see this if I try, it feels very different to me in that I haven’t seen John do anything like be hostile to out-of-frame data once presented, or play games to dismiss it, or send what I read as signals that he will do so.
It’s not anti-inductive to believe that sets of phenomena in [some domain] are unlikely to have simple explanations at [some level of abstraction].
WRT social behavior, it does seem to me that ‘a whole lot of things pushing toward the same or similar results’ is remarkably common, and, once it has hit on one explanation, the OP is too dismissive of others for my priors. (At the same time, I totally think [simple theories that help explain significant sets of dynamics] are real.)
I’m also curious about other observations, from James or others who agreed. (I’m not against the claim, but haven’t noticed it, and can’t think of examples. Bad if true.)
(… I guess there is a social-and-memetic cluster around Aella that rubs me the wrong way, for reasons that kinda rhyme with this? But I wouldn’t describe it the way James did, and definitely don’t see that cluster on LW itself much or see it as LW consensus reality.)
The medical system is full of safeguards to make very sure that no procedure it does will ever hurt you.
I agree that the medical system often has valuable safeguards that DIYing a treatment doesn’t, but this strikes me as way too optimistic about medical error & incompetence & bad epistemology.
Is the “nonconsent” of rape(play) fantasies (‘anticonsent’?) the same as the “nonconsent” of the other dynamics here (‘aconsent’?)?
(Probably tangential but:)
even relatively simple precautions would drastically increase the likelihood you would survive
This seems wrong to me, for most people, in the event of a prolonged supply chain collapse, which seems a likely consequence of large-scale nuclear war. It could be true given significant probability on either a limited nuclear exchange, or quick recovery of supply chains after a large war.
Some people seemingly just have a detail-less prior that the USG is more powerful than anyone else in whatever domain (e.g. has broken all encryption algorithms); without further information I’d assume this is more of that.
You might not be aware LW is open-source?