There’s a story, the reason why you have a lot of anime that goes in very weird sexual directions is that Japan is an incredibly sexually repressive society, so the Japanese channel their compressed libido towards art.
I don’t really believe this story, but I think the pattern it exemplifies is at play here.
Rats have very ambitious goals of fixing the world. They see that the world is in pretty bad shape and demands fixing. But they can’t fix it. No one else can fix it either (nihil supernum, deep atheism), and even if they could, it would likely be bad, because their values are not yours (even deeper atheism). So you yearn for a world in which you smash those limitations. “Power” (or a specific kind of it) is the thing you need, and its value is not bounded, so the ceiling of the superstimulus is largely your imagination, perhaps constrained by some ontological assumptions. That’s why this theme recurs through ratfic so much.
This seems somewhat obvious to me[1] and so I am somewhat surprised that people in the comments mostly seem to explain it in terms of game-theoretic (or other) realism or people having consequentialist-ish views, etc.
Brandon Sanderson’s books have some interesting variants of this as well.
Spoilers for the Mistborn series (to the best of my recollection and with some consulting of the fandom wiki) (putting in collapsible for now, because I can’t get spoiler blocks to work with this new editor):
At the end of The Well of Ascension, Vin gets hold of the power of Ruin (one of the Shards, roughly god-like entities of the universe). She knows that she can use it to eliminate lots of the world’s atrocities etc. Sazed, informed by a misty ghost, tells her that “it’s a trap” and she needs to release the power. Some other misty ghost appears out of nowhere and stabs Elend, nearly lethally, apparently to force Vin to accept the power, so that she can save her loved one. But she doesn’t do that, releasing the power, which turns out to free its prior wielder, who is the bad guy and now goes on to destroy the world. He was also the spirit that told Sazed to stop Vin. Whereas the ghost that stabbed Elend was the Vessel of another Shard, Preservation, a good guy (approximately).
At the end of the next book, The Hero of Ages, Sazed accepts the powers of Ruin and Preservation, ascending to godhood and becoming Harmony. He approximately fixes the world, but then in the following series, it turns out that he’s not as powerful as we might have predicted, and also the divine power that he has been wielding starts shaping his mind, so that he becomes more interested in things being Harmonious than in “goodness”.
ETA: it’s plausibly relevant that Sazed, at the moment of Ascension, absorbed all the knowledge that had been stored for millennia in his Coppermind “amulets”, so that he could learn from the mistakes of the past generations and not screw things up as much.
IDK what the moral is supposed to be here, if any. “Sometimes you need to power-grab because otherwise someone else will power-grab in your place, but also beware because power corrupts and divine power corrupts in an ungodly way, so attaining instrumentally convergent goals is of limited value if it meddles with your utility function in unendorsed ways, be it due to some contingent peculiarity of your mental structure, or some more general fact of how minds work.”?
(There might also be something along those lines in other Sanderson cosmere books, but it’s been long since I’ve read any and I’m not up to date.)
“You can just do things,” yes, really, but that doesn’t imply that you always should, or that you have high likelihood of said things causing the results you’d prefer.
IDK man. I mostly don’t care that much about either. I’m extroverted but quite picky about people and don’t particularly feel drawn to “human-shaped things” in general. I don’t particularly hate corporations, but surely corporate capitalism seems very far from ideal. And their drive doesn’t seem alien or sociopathic to me.
But now I realize that I’m actually confused by what you mean by “human-shaped things”.
Scott’s old post Concept-Shaped Holes Can Be Impossible To Notice says that concept-shaped holes can be impossible to notice, in oneself as well as others. You might be very off-base when estimating how much things that seem obvious and straightforward to you are obvious and straightforward to others. They might very well not be. See also: the curse of knowledge.
I learned that writing something up or starting a conversation about a thing that seemed [obvious and therefore not worth talking about] can reveal that this thing is not as [obvious and therefore not worth talking about] as it seemed to me.
So, if you’re experiencing that sort of thing as a blocker (“I don’t have any particularly interesting/novel ideas”), you might want to gain empirical info about which ideas are worth writing about. Getting feedback is crucial for cultivating intellectual activity.
Ironically, as I was contemplating posting this shortform, it occurred to me to ask: “Is it really worth posting? Shouldn’t this already be in the LW-ish water supply?”.
ETA: A class of micro-examples is when you bang your head against the wall of some math concept until pieces of it start clicking, and once everything has clicked into place, every important aspect of the new concept seems to refer to, nearly imply, the rest of the concept, as do various other mathematical structures that are now connected to this concept. The concept has become truly part of you, circularly justified, so entangled with the rest of your mind that it’s hard for you to see how one can live without it.
A major way this example is non-representative is that presumably, we typically remember that we once did not know X and now we do know X, whereas many domains in which the thing I’m describing above tends to occur are those in which we didn’t even notice the transition from [not knowing X] to [knowing X] (or [not thinking in Y way] to [thinking in Y way]). So a better adjacent example is a more meta one: the general way of thinking that you acquire by studying math/logic, physics, chemistry, programming, analytical-ish philosophy: conceptual clarity and control over concepts with simple and precise reference.
Of course, not all of this is acquired or not acquired in an obviously educational way. Some people are “just” born with different minds (or with a predisposition to develop their minds differently). Some people have (acquired) some synesthesia-like thing that changes how they imagine/[relate to] some concepts (and likely also somewhat meaningfully alters the way they think about them), etc.
We’re also bad OOD and many of our supposed advantages over them boil down to our distribution differences (embodiment and first-person-first data).
Kind of and yeah?
I agree we’re much better OOD than them but not so much that I think there’s no comparison.
I wouldn’t say “there’s no comparison”[1], but I do think it looks like a “qualitative” difference. What exactly it is would require a more involved explication of the concept, which might be infohazardous.
To be honest I think there’s some chance this happens to research math as a whole, if we don’t adapt. It’s possible we end up with an equilibrium where the tools are worse than human mathematicians but good enough to “justify” massive cuts and loss of human capital.
On the one hand, yeah. On the other hand, the rest of the story (AFAICT based on your description) isn’t really that sci-fi, let alone “hard”, except insofar as it’s set up by the time travel. You could just as well write a story about the Spaniards ultra-strategizing about efficiently conquering the Mexica or the Inca.
We have ways to measure unemployment. They classify some people as unemployed and some people as part of the labor force, and the number of the former divided by the number of the latter is the unemployment rate, which today hovers around 5% (depending on the country). On this measure, 50% of people are permanently unemployable if they are too useless to leave the unemployed category by the time they stop counting as participating in the labor force.
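To make the ratio concrete, here is a toy computation with made-up numbers (purely illustrative, not actual statistics from any country):

```python
# Hypothetical headcounts, purely illustrative.
unemployed = 5_000
labor_force = 100_000  # employed + unemployed; people not seeking work are excluded

# The unemployment rate is simply the ratio of the two counts.
unemployment_rate = unemployed / labor_force
print(f"{unemployment_rate:.1%}")  # → 5.0%
```

Note that someone who gives up looking for work drops out of the denominator entirely, which is why “permanently unemployable” and “counted as unemployed” can come apart.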
(Also, to be clear, Toby was the first one who used the phrase “50% of people permanently unemployable” without defining clearly what he meant by this, and I responded to it without reflecting that maybe it’s good to clarify (and then Kaarel did the same after me). So, I appreciate you pushing me for clarification, but depending on your interest, you might care more about what Toby means by that, not what I mean by that.)
Fair enough. EA groups were on my mind because I noticed the contrast between the Polish community and the general diffuse global EA vibe[1] and so it seemed worth pointing out as a potentially tappable resource.
Yeah, AI safety groups might be seen as more credible, but I’m not sure about that (but they’re surely at least as credible as the local countrywide EA, unless something weird happens?).
I consider TESCREAL to be pointing at a real social cluster, but the Gebru-Bender-Torres cluster’s reporting of it is so off-base that they borderline don’t deserve any engagement.
I mean, do you count people who got convinced by people in the LW/rationalists circle?
If so, you would have many examples. I don’t know the timelines of Brad Sherman, Neil deGrasse Tyson, Bernie Sanders, and similar “outsiders” who have been waving IABIED around, but surely some of them think timelines shorter than 50 years are plausible.
I firmly believe that the OP’s author should have reduced the uncertainty at least to a Lifland-like estimate.
Moreover, Kokotajlo’s timeline implies a 50% chance of TED-AI before Jan 2031 or before Oct 2032; Eli’s timeline implies a 50% chance of TED-AI before Feb 2035 or Apr 2036.
IDK what you mean by “TED-AI” but, in case you haven’t noticed, Ord’s median seems to be 2038, which is like 2 or 3 years later than Lifland.
I think everyone should have a distribution that is roughly this shape. Here’s mine:
Another idea, for completeness: Hanson’s hard-to-access but legal store with strongly discouraged substances. Might add enough friction to disincentivize many on the margin, without being enough to provide a possible source of profit for criminals.
One caveat I would add is that while I believe your list of examples to be representative of the general class of phenomena you’re trying to point at, there are notable exceptions: problems which can be solved by whacking moles. I don’t have specific examples off the top of my head, but, like, it would be weird if there were no cases where the player has like 100 available strategies, and the designer doesn’t know about those 100 strategies, so they are whacking the moles by banning them one by one, and eventually they succeed.
Of course, this doesn’t solve the problem for other classes of drugs; I’m not necessarily a fan of legalizing, regulating, and taxing fentanyl. So how else might we do this?
Worth mentioning that fentanyl is mostly a mole that popped up because of the crackdown on “traditional” opioids, so the black market came up with a compound that was, first, not-yet-illegal, but also (I think) ~100 times more potent than heroin, so even if you might get caught, the amount of high-inducing material that you can produce with it if you don’t get caught might offset the risks (EV >> 0).
Some other thoughts:
Years ago, I was reading Bostrom’s old papers about the ethics of human enhancement. When talking about augmenting human intelligence, he said something like “smarter humans would be able to design and properly follow more complicated, but globally better, tax systems”. I returned to this sometime recently (unsure what prompted the return) and thought that, well, sure, there must be some gains to be had by complicating the taxes (or more generally the governance structure[1]) in an intelligent way that humans are currently insufficiently intelligent to make work. But the bigger gains are most likely in intelligent simplification and refactoring of the governance spaghetti code that is powering our civilization. E.g., it’s fairly plausible that we’d 80/20 taxation improvements by just figuring out how to escape the current equilibrium where LVT is an abnormality, rather than one of the main pillars of governance.
Apparently, people systematically overlook subtractive changes. FWIW, it seems to be a “natural” human instinct that if something doesn’t seem to work, you fix it by adding more stuff, rather than figuring out what causes the problem, and then eliminating it.
One way to implement “legislative garbage collection” is to make the retention of rules/laws costly. IIRC, the way it worked in the Icelandic Commonwealth was that every year, at the Althing, one of the law-speakers was supposed to recite the entire law from memory, and if he missed/changed something and nobody protested, then that became the law. In this way, things that actually mattered to people were preserved, and things that didn’t weren’t. This is not a strong recommendation, and there are, of course, good counter-considerations (e.g., the Chesterton’s fence sort of stuff).
One example might be ranked-choice voting, the biggest problem of which, as far as I know, is that a lot of people seem to be unable to understand how it works.
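Since the complaint is precisely that people don’t understand the mechanism, here is a minimal sketch of instant-runoff voting, the most common ranked-choice variant. The function name, ballot format, and numbers are my own illustration (and ties at the bottom are broken arbitrarily), not a production-grade implementation:

```python
from collections import Counter

def instant_runoff(ballots):
    """Instant-runoff voting: each ballot ranks candidates from most to
    least preferred. Repeatedly eliminate the candidate with the fewest
    first-choice votes until some candidate holds a strict majority."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot's top surviving choice.
        tally = Counter(
            next(c for c in ballot if c in candidates)
            for ballot in ballots
            if any(c in candidates for c in ballot)
        )
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):
            return leader
        # No majority yet: drop the weakest candidate and redistribute.
        candidates.discard(min(tally, key=tally.get))

# Illustrative election: A leads on first choices (4 vs 3 vs 2), but once
# C is eliminated, C's voters transfer to B, who wins 5-4.
ballots = [["A", "B", "C"]] * 4 + [["B", "C", "A"]] * 3 + [["C", "B", "A"]] * 2
print(instant_runoff(ballots))  # → B
```

The non-obvious part, and plausibly the source of the confusion, is that the first-choice leader (A here) can lose once lower preferences are redistributed.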
[1] partly through introspection + the Copernican principle, FWIW