LessWrong team member / moderator. I’ve been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I’ve been interested in improving my own epistemic standards and helping others to do so as well.
Raemon (Raymond Arnold)
It won’t affect your own karma. I’m not sure offhand about the coauthor’s.
Neato, this was a clever use of LLMs.
I don’t think so.
My current best guess of what’s going on is a mix of:
It’s actually fairly cognitively demanding for me to play at my peak. When I do beat level 3 with full health, I typically feel like my brain just overclocked itself. So I think during normal play, I start out playing “medium hard”, and if I notice that I’m losing I start burning more cylinders or something. And if I start off playing quite hard, I get kinda tired by level 3.
But also, there’s a survivorship bias of “sometimes if I’ve taken damage I just give up and start over”, which may mean I’m forming an incorrect impression of how well I’d have done.
Raemon’s Deliberate (“Purposeful?”) Practice Club
Maybe to elaborate: I’ve had a lot of neurotypical friends, and a lot of autistic friends, and barely any of them have ever called me up years later to talk if we didn’t have some kind of social context. It seems like this is not a thing people do very often.
I think it’s not just an autism thing but something of an atomic modernity thing.
You’re the one who asked “why did Screwtape invent his own terminology”, but I don’t know what words you think there was an existing terminology for. From my perspective you’re the one who didn’t include terms.
I don’t know which terms you didn’t understand, or which terms you’d advocate replacing them with.
I think this part of HPMOR predates CFAR?
A claim I’ve heard habryka make before (I don’t know myself) is that there are actual rules to the kind of vague-deception that goes on in DC. And something like, while it’s a known thing that a politician will say “we’re doing policy X” when they don’t end up doing policy X, if you misrepresent who you’re affiliated with, this is an actual norm violation. (i.e. it’s lying about the Simulacrum 3 level, which is the primary level in DC)
I liked the first half of this article a lot, but thought the second half didn’t quite flesh it out with clear enough examples. I do like that it spells out the problem well, though.
One note:
I don’t trust an arbitrary uploaded person (even an arbitrary LessWrong reader) to be “wise enough” to actually handle the situation correctly. I do think there are particular people who might do a good enough job.
Melting all the GPUs and then shutting down doesn’t actually count, I think (and I don’t think it was intended to be the original example). Then people would just build more GPUs. It’s an important part of the problem that the system continues to melt all GPUs (at least until some better situation is achieved), and that the part where the world is like “hey, holy hell, I was using those GPUs” and tries to stop the system, is somehow resolved (either by having world governments bought into the solution, or having the system be very resistant to being stopped).
(Notably, you do eventually need to be able to stop the system somehow once you do know how to build aligned AIs, so you don’t lose most of the value of the future.)
Oh lol I also just now got the pun.
Oh lol whoops.
fwiw, while the end of Ants and Grasshopper was really impactful to me, I did feel like the first half was “worth the price of admission”. (Though yeah, this selkie story didn’t accomplish that for me). I can imagine an alt ending to the grasshopper one that focused on “okay, but, like, literally today right now, what do I do with all these people who want resources from me that I can’t afford to give?”.
lol at the spellchecker choking on “Rumpelstiltskin” and not offering any alternate suggestions.
Yeah as I was writing it I realized “eh, okay it’s not exactly AI, it’s… transhumanism broadly?” but then I wasn’t actually sure what cluster I was referring to and figured AI was still a reasonable pointer.
I also did concretely wonder “man, how is he going to pack an emotional punch sticking to this agency/decision-theory theme?”. So, lol at that.
An idea fragment that just came to me is to showcase how the decision theory applies to a lot of different situations, some of which are transhuman, but not in an escalating way that makes the transhumanism feel like the whole point of the story. The transhuman angle gives it “ultimate stakes”, by virtue of making the numbers really big. And that was important to why the grasshopper story was so haunting to me. But, it doesn’t have to end on that note.
I guess it doesn’t accomplish the goal my original comment was getting at, but one solution here is for the last parable to be something like “the earliest human (or life form, if you can justify it for dogs or chimps or something) that ever faced this sort of dilemma.” And that gives it a kind of primal, mythic Ur quality that has weight in part because of the transhumanist future that descends from it, but centers it in something much more mundane and makes even the mundanest version of it still feel important.
That feels like cheating, though, because it’s still drawing weight from the transhuman element. But it’s at least a different angle, and if the different vignettes aren’t in “order of ascending futurism” it could be more about the decisionmaking itself.
(The story “Spinning Silver” is coming to mind here, btw, and might be worth reading for inspiration ((and because it’s just good on its own)). It’s a novel that’s essentially a retelling of “Rumpelstiltskin”, but about a Jewish moneylender who faces various choices of how to relate to other townspeople ((who are treating her badly, antisemitically)), and has to adopt a kind of coldness to actually enforce them paying her back, with escalating stakes.)
Spoiler response:
Man, I started reading and was like “Wait, is this one still a metaphor for AI, or is it just actually about Selkies?”. Halfway in, I was like “oh cool, this is kind of about different ways of conceptualizing agency/decision-theory-ish stuff, which is AI-adjacent while also kind of its own topic. I like this variety while still sticking to some kind of overarching theme of ‘parables about AI-adjacent philosophy.’”
Then I got 2/3rds in and was like “oh lol, it just totally is about AI again.” I do think the topic and story here were good/important things that I could use help thinking through, although part of me is sad it didn’t somehow go in a different direction.
Rough guess, ~45 hours.