Absolute-zero-based suffering.
Does this imply that wireheading perfectly solves the problem, absent traditional Buddhist worries like reincarnation, which RTB presumably eschews?
Players naturally distinguish “legitimate” actions (swinging sword, drinking potion) from “illegitimate” ones (using console commands to spawn items). This isn’t in the game’s code—the engine doesn’t care. It’s a social distinction we impose based on our intuitions about fair play and authentic experience. We’ve collectively decided that some causal interventions are kosher and others are “cheating,” even though they’re all just bits flipping in RAM.
It’s worth mentioning speedrunning here. When players decide to optimize some aspects of gameplay (e.g. getting to the victory screen as fast as possible), this leads to weird interactions with the apparent ontology of the game.
From one point of view, it doesn’t matter what developers intended (and we can’t be completely certain anyway, cf. the “death of the author”), so any legitimate inputs (ones you can make while actually playing, so console commands excluded) are treated as fair play, up to arbitrary code execution (ACE), which essentially means exploiting bugs to reprogram the game on the fly and make it load desired events. This often requires high skill to execute competently, offering opportunities for dedicated competition. While such “gameplay” usually results in a confusing on-screen mess for the uninitiated, many consider “glitched” speedruns legitimate, and hundreds of thousands of people regularly watch them during Games Done Quick charity marathons on Twitch, marveling at what hides “behind the curtain” of beloved games.
However, another approach to speedrunning is to exclude some types of especially game-breaking bugs, in order to approximate the intended playing experience for the competition. Both kinds are popular, as are discussions about which is more legitimate—another way that gaming makes people engage in amateur philosophy, usually without realizing it, producing much confused nonsense in the process. Kind of like actual philosophy, except more amusing and less obscurantist.
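For anyone wondering what “reprogramming the game on the fly” looks like mechanically, here’s a deliberately toy sketch (no real game or engine; every name below is invented): a single unchecked write is enough to spill into adjacent state and send the game wherever the runner wants.

```python
# Toy model of game memory: eight inventory slots followed immediately by the
# "which scene to load next" value. Entirely invented for illustration.
memory = ["empty"] * 8 + ["intro_scene"]
SCENE_SLOT = 8

def write_item(slot: int, item_id: str) -> None:
    # The bug: no bounds check, so a write past the inventory silently lands
    # in whatever happens to sit next to it.
    memory[slot] = item_id

write_item(2, "potion")          # normal play: item goes into slot 2
write_item(8, "credits_scene")   # "glitched" play: the write spills into the scene value
print(memory[SCENE_SLOT])        # -> "credits_scene", i.e. straight to the ending
```

Real ACE setups are vastly more involved (the corrupted values have to end up forming valid instructions or pointers), but the underlying principle is the same kind of out-of-bounds spillage.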
One might also do, say, a thought experiment with alien civilisations untouched by whites’ hands and unaware of the oppression system.
Even though their supposed oppressor classes are unlikely to look like white males, that doesn’t guarantee the absence of platonic toxic whiteness & masculinity.
What #1, #2, #4 have in common is that they are harder to check experimentally unless you are immersed in the area, plus the potential difficulty of publishing results that threaten to invalidate the dominant narrative.
Indeed.
@Noosphere89’s discussion of ways for people to turn ideologically crazy
As it happens, I had a bit of a back-and-forth with the author in the comments.
It’s usually much easier to bullshit value claims than epistemic claims.
Sure, if we compare the sets of all value claims with all epistemic claims. However, the controversial epistemic claims aren’t typical, they’re selected for both being difficult to verify and having obvious value implications. Consider the following “factual” claims that are hacking people’s brains these days:
that there are innate gender identities entirely independent of biological sex;
that elite consensus undermines civilization by enforcing luxury beliefs;
that so-called objective truthseeking perpetuates a system of oppression invented by dead white men;
that safetyists are trying to stifle technological progress by means of totalitarian control.
It’s not clear to me that “putting truth first” is a reliable enough defense for ordinary people in the face of that.
Nah, the weird idea is AI x-risk, something that almost nobody outside of LW-sphere takes seriously, even if some labs pay lip service to it.
I’m surprised that you’re surprised. To me you’ve always been a go-to example of someone exceptionally good at both original seeing and taking weird ideas seriously, which isn’t a well-trodden intersection.
We need an epistemic-clarity-win that’s stable at the level of a few dozen world/company leaders.
If you disagree with the premise of “we’re pretty likely to die unless the political situation changes A Lot”, well, it makes sense if you’re worried about the downside risks of the sort of thing I’m advocating for here. We might be political enemies some of the time, sorry about that.
These propositions seem in tension. I think that we’re unlikely to die, but agree with you that without an “epistemic-clarity-win” your side won’t get its desired policies implemented. Of course, the beauty of asymmetric weapons is that if I’m right and you’re wrong, epistemic clarity would reveal that and force you to change your approaches. So, we don’t appear to be political enemies, in ways that matter.
general public making bad arguments
My point is that “experts disagree with each other, therefore we’re justified in not taking it seriously” is a good argument, and this is what people mainly believe. If they instead offer bad object-level arguments, then sure, dismissing those is fine and proper.
Yoshua Bengio or Geoffrey Hinton do take AI doom seriously, and I agree that their attitude is reasonable (though for different reasons than you would say)
I agree that their attitude is reasonable, conditional on superintelligence being achievable in the foreseeable future. I personally think this is unlikely, but I’m far from certain.
And I think AI is exactly such a case, where conditional on AI doom being wrong, it will be for reasons that the general public mostly won’t know/care to say, and will still give bad arguments against AI doom.
Most people are clueless about AI doom, but they have always been clueless about approximately everything throughout history, and get by through having alternative epistemic strategies of delegating sense-making and decision-making to supposed experts.
Supposed experts clearly don’t take AI doom seriously, considering that many of them are doing their best to race as fast as possible, therefore people don’t either, an attitude that seems entirely reasonable to me.
Also, you haven’t linked to your comment properly; when I notice the link it goes to the post rather than your comments.
Thank you, fixed.
My core claim here is that most people, most of the time, are going to be terrible critics of your extreme idea. They will say confused, false, or morally awful things to you, no matter what idea you have.
I think that most unpopular extreme ideas have good simple counterarguments. E.g. for Marxism it’s that whenever people attempt it, this leads to famines and various extravagant atrocities. Of course, “real Marxism hasn’t been tried” is the go-to counter-counterargument, but even if you are a true believer, it should give you pause that it has been very difficult to implement in practice, and it’s reasonable for people to be critical by default because of those repeated horrible failures.
The clear AI implication I addressed elsewhere.
the only divided country left after Germany
China/Taiwan seem to be (slightly) more so these days, after Kim explicitly repudiated the idea of reunification.
publishing the evidence is prosocial, because it helps people make higher-quality decisions regarding friendship and trade opportunities with Mallory
And by the same token, subsequent punishment would be prosocial too. Why, then, would Alice want to disclaim it? Because, of course, in reality the facts of the matter regarding whether somebody deserves punishment are rarely unambiguous, so it makes sense for people to hedge. But that’s basically wanting to have your cake and eat it too.
The honorable thing for Alice to do would be to weigh the reliability of the evidence that she possesses, and disclose it only if she thinks that it’s sufficient to justify the likely punishment that would follow it. No amount of nuances of wording and tone could replace this essential consideration.
Feels true to me, but what’s the distinction between theoretical and non-theoretical arguments?
Having decent grounding for the theory at hand would be a start. To take the ignition of the atmosphere example, they did have a solid enough grasp of the underlying physics, with validated equations to plug numbers into. Another example would be global warming, where even though nobody has great equations, the big picture is pretty clear, and there were periods when the Earth was much hotter in the past (but still supported rich ecosystems, which is why most people don’t take the “existential risk” part seriously).
Whereas, even the notion of “intelligence” remains very vague, straight out of philosophy’s domain, let alone concepts like “ASI”, so pretty much all argumentation relies on analogies and intuitions, also prime philosophy stuff.
Policy has also ever been guided by arguments with little related maths, for example, the MAKING FEDERAL ARCHITECTURE BEAUTIFUL AGAIN executive order.
I mean, sure, all sorts of random nonsense can sway national policy from time to time, but strictly-ish enforced global bans are in an entirely different league.
Maybe the problem with AI existential risk arguments is that they’re not very convincing.
Indeed, and I’m proposing an explanation why.
I think that the primary heuristic that prevents drastic anti-AI measures is the following: “A purely theoretical argument about a fundamentally novel threat couldn’t seriously guide policy.”
There are, of course, very good reasons for it. For one, philosophy’s track record is extremely unimpressive, with profound, foundational disagreements between groups of purported subject matter experts continuing literally for millennia, and philosophy being the paradigmatic domain of purely theoretical arguments. For another, plenty of groups throughout history predicted an imminent catastrophic end of the world, yet the world stubbornly persists even so.
Certainly, it’s not impossible that “this time it’s different”, but I’m highly skeptical that humanity will just up and significantly alter the way it does things. For the nuclear non-proliferation playbook to become applicable, I expect that truly spectacular warning shots will be necessary.
There are tons of groups with significant motivation to publish just about anything detrimental to transgender people
In academia? Come on now. If those people post their stuff on Substack, or even in some bottom-tier journal, nobody else would notice or care.
Well, there does seem to be no shortage of trans girls at any rate
Transgender people, total, between both transmasc and transfem individuals, make up around 0.5% of the population of the US.
Among youth aged 13 to 17 in the U.S., 3.3% (about 724,000 youth) identify as transgender, according to the first Google link (https://williamsinstitute.law.ucla.edu/publications/trans-adults-united-states/). In any case, when we’re talking about at least hundreds of thousands, “no shortage” seems like a reasonable description.
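As a quick sanity check on those two figures (my own back-of-the-envelope arithmetic; the ~22 million size of the US 13-17 cohort is my approximation, not from the report):

```python
# Rough consistency check of the quoted figures.
us_youth_13_17 = 22_000_000                 # approximate US population aged 13-17
share_identifying_as_trans = 0.033          # 3.3% per the linked report
print(us_youth_13_17 * share_identifying_as_trans)  # ~726,000, close to the quoted ~724,000
```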
And again, the number of trans people in high level sports is in the double digit numbers.
So far.
Based on https://pmc.ncbi.nlm.nih.gov/articles/PMC10641525/ trans women get well within the expected ranges for cis women within around 3-4 years.
Yes Requires the Possibility of No. Do you think that such a study would be published if it happened to come to the opposite conclusion?
And, given how few trans women there are
Well, there does seem to be no shortage of trans girls at any rate, so these issues are only going to become more salient.
I agree, and yet it does seem to me that self-identified EAs are better people, on average. If only there was a way to harness that goodness without skirting Wolf-Insanity quite this close...
Offsetting makes no sense in terms of utility maximisation.
Donating less than 100% of your non-essential income also makes no sense in terms of utility maximization, and yet pretty much everybody is guilty of it, what’s up with that?
As it happens, people just aren’t particularly good at this utility maximization thing, so they need various crutches (like the GWWC pledge) to do at least better than most, and offsetting seems like a not-obviously-terrible crutch.
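To spell out the arithmetic behind the quoted claim, here’s a toy comparison with entirely made-up effectiveness numbers:

```python
# Hypothetical numbers, only to illustrate why a strict utility maximizer never
# earmarks money for offsets: every marginal dollar goes to whichever option
# produces the most good per dollar.
budget = 100                         # dollars of non-essential income
good_per_dollar_offset = 1.0         # made-up: good done per dollar spent on offsets
good_per_dollar_top_charity = 5.0    # made-up: good done per dollar to the top charity

offset_then_donate = 20 * good_per_dollar_offset + 80 * good_per_dollar_top_charity
all_to_top_charity = budget * good_per_dollar_top_charity

print(offset_then_donate, all_to_top_charity)   # 420.0 vs 500.0
```

Of course, the same logic says to donate the whole budget rather than keep any of it, which is exactly the standard almost nobody meets.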
Yeah, but this doesn’t have much to do with conscription. Getting the moribund industrial capacity up to speed does make sense on the other hand.
There are similarities, but the space of hardware solutions is much bigger.
But surely something in the vicinity should work? In any case, I’m pretty sure that most people don’t want to exist in a permanent state of pure bliss, whatever it means, and wouldn’t take a drug to that effect, so the problem description seems lacking. I’m not claiming to be able to produce a better one, though.