TL;DR: ablations are good.
Canaletto
Fascinating read, in retrospect.
I love reading drama between users here and in other places, and I’m slightly ashamed of it. It triggers the same appeal as reading fiction, but I think it’s an otherwise useless thing to do.
People fight, argue, express positions about their opponents’ positions about their positions. Take offence, give offence. Some are right, some are wrong, some are mad, some are funny, some are boring.
But all of this is fundamentally about people relating to people. So particular.
Do you agree, historian? Go do something else, for real. Why do you even pay attention to this shortform?
>in that the algorithmic complexity (or rather, some generalization of algorithmic complexity to possibly uncomputable universes/mathematical objects) of Tegmark 4 as a whole is much lower than that of any specific universe within it like our apparent universe. (This is similar to the fact that the program tape for a UTM can be shorter than that of any non-UTM, as it can just be the empty string, or that you can print a history of all computable universes with a dovetailing program, which is very short.) Therefore it seems simpler to assume that all of Tegmark 4 exists rather than only some specific universe.
Shouldn’t it be about compressing my perceptual stream? And if there is a really simple but very large universe with many copies of my perceptual stream embedded in it, then doesn’t most of the complexity get squeezed into pointing at them?
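For concreteness, the “dovetailing program” mentioned in the quoted argument really is this short. A minimal sketch, assuming a toy `step(program, n)` callback standing in for “simulate this program/universe out to step n” (that callback and the binary-string encoding of programs are my illustrative assumptions, not anything from the quoted comment):

```python
from itertools import count, islice

def all_programs():
    # Enumerate every finite binary string: "", "0", "1", "00", "01", ...
    yield ""
    for n in count(1):
        for i in range(2 ** n):
            yield format(i, f"0{n}b")

def dovetail(step):
    # At stage s, run each of the first s programs for s steps, so every
    # (program, step-count) pair is eventually reached; runs forever by design.
    for s in count(1):
        for program in islice(all_programs(), s):
            step(program, s)
```

The enumerator itself is a handful of lines; all the complexity of any particular universe lives in the index needed to locate it (or, per the reply above, to locate a perceptual stream) inside the output.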
But what if they deleted the training set too? Actually, it was probably the other way around: first delete the illegal training data, then the model that contains the proof that they had illegal training data.
The Correct Alien, I think, should have made a few more funny errors.
Like, it names “love” and “respect” and “imitation” as alternatives to corrigibility, but all of them are kinda right? Should have thrown in some funny wrong guesses, like “cosplay” or “compulsive role play of behaviors your progenitors did”.
Or, for example, considering that the alien already thought about how humans are short-lived: “error correcting/defending/preserving the previous progenitors’ advice”. That way of relating to your progenitors should have made it impossible for the Inebriated Alien to overwrite human motivations, because by now they are self-preserving wrong ones.
Come to think of it, those are kind of right too. I’m bad at making plausible errors.
>slowing down for 1,000 years here in order to increase the chance of success by like 1 percentage point is totally worth it in expectation.
Is it? What meaning of “worth it” is used here? If you put it to a vote, as an option, I expect it would lose. People don’t care that much about the happiness of distant future people.
And above all, the rule:
>Put forth the same level of desperate effort that it would take for a theist to reject their religion.
>Because if you aren’t trying that hard, then—for all you know—your head could be stuffed full of nonsense as bad as religion.
I don’t think it was particularly hard for me to part ways with religion? 15-year-old me just accumulated too much of a sense that it was total bullshit. It was important enough to be promoted to my direct attention, but wrong enough for me to recognize it as such.
Hmmm. Maybe I was just not that invested in the boons that a religious worldview gives you. That there is somebody looking out for you, that everything goes according to a good plan after all. I was not emotionally attached to this for some reason.
Am I just emotionally invested in different kinds of stuff or am I just good at discarding wrong beliefs? Or maybe there is something wrong with the “emotional attachment” part of me.
Hmm. Yeah, it sure looks rigged as hell to be resolved by self-consistency/reflection to the side of “care about everyone”, but surely there is some percentage of kids who come out of this with reflectively stable redhead hatred? Or, I don’t know, “nobody deserves care, not just reds, but you should pretend to care in polite society”?
I’m not sure what the point of learning to draw like that is. You could just as well close one eye and imagine that you’re tracing a photograph.
Draw whatever, I’d rather see people reinvent techniques than learn them.
How about more uhh soft uncontrollability? Like, not “it subverted our whole compute and feeds us lies” but more “we train it to do A, which it sees as only telling it to do A, and does A, but its motivations are completely untouched”.
Morality as a Coordination Subsidy and Morality as a Public Good.
A night-watchman state, distributed and embedded in people’s heads, vs. doing something a lot of people want done, regardless of whether it’s cleaner streets or children having homes.
The first thing did a lot of flaking off and transferring into the second one, it seems like. Or maybe it didn’t; maybe it was a process that shaped desires compatible with #1 out of assorted #2-type things.
Anthropic, GDM, and xAI say nothing about whether they train against Chain-of-Thought (CoT), while OpenAI claims they don’t.
It sounds more like there is some kind of moderator who throttles smart things in an intelligent, targeted way. Which is my headcanon.
I overall agree with this framing, but I think even in Before, sufficiently bad mistakes can kill you, and in After, sufficiently small mistakes wouldn’t. So it’s mostly a claim about how strongly mistakes start to be amplified at some point.
Took long enough. If you actually read such made-up setups, you have the constant “who the fuck came up with this goofy shit” thought. They should have started pulling actual cases from usage and testing on them long ago. Surely Anthropic has lots; it can also use the ones from LMArena or whatever.
>Pigs in their natural state wouldn’t contribute to a human society as trade partners either, and neither would humans in a superintelligent world.
Sure. We should not count on ASI coordination subsidies being passed on to humans either. It sounds like people should use their dominant power now to control what kind of ASI is built.
You can argue that resolving that vague bundle of norms and intuitions into a “care about everything” moral framework would make it easier to point to that goal or something? Or to peer-pressure the ASI into adopting it? If this would seriously work, it would be desirable to do, and moral to advocate for. Or maybe not?? If it gets this universe tiled with the simplest morally relevant beings having a great time all the time or something.
Also, yeah, a lottery for shrimp cryopreservation for later uplift might be an interesting way to give them more share/negotiating power.
>but freezing to death is likely still quite unpleasant, and not something I’d do for fun,
Kind of a nitpick about this particular example, but… I’d freeze to death at least a couple of times if it wouldn’t damage my long-term preferences significantly? Idk, it just seems cool to try to yolo* Everest, if I get restored to full health afterwards by ASI nanobot sludge or whatever.
The point is, freezing to death is bad (for me at least) because of my preferences, and not in major part because of my experiences.
* if you still have only one health bar
Shrimp welfare is a misappropriation of coordination subsidies.
And it is therefore threatening. It tries to lay claim to some moral, altruistic pool of desires that people are encouraged to adopt to everyone’s benefit. But shrimp aren’t people; they would not contribute themselves and would not even grow up to be such entities.
You can see a lot of moral claims as “being a good trade partner” stuff, in practice. Return favors, respect property, defer to superiors, divide resources fairly. I think it’s a fairly intuitive observation that gets embodied in many norms required in polite society? Virtues play into this too: “being the kind of person who returns favors and is nice in their dealings” is more legible and simple.
Such motives are present in many moralistic stories in most cultures. E.g. https://www.journals.uchicago.edu/doi/full/10.1086/701478
Now that you’ve said it, I have a strong urge to cut it out.
I guess you can frame it as “wanting to impress yourself by putting yourself in the place of an idol” or “the people who set the trends are cool, and everybody is impressed by them, but to do that you need to defy the existing trendsetters” or something.
And why did I write this comment? I think it’s kinda funny and subversive and smart (and therefore impressive). A reason more respectable to myself would be that I’m posting my thoughts for peer review or something, and that is conducive to having less wrong ones.
I guess I want to think of myself as searching for groups of people who would be impressed by the correct things about myself, instead of internalizing what things are impressive from the groups of people around me. Both are true to some degree.