But what if they deleted the training set too? Actually, it was probably the other way around: first delete the illegal training data, then the model that contains the proof that they had illegal training data.
The Correct Alien, I think, should have made a few more funny errors.
Like, it names “love” and “respect” and “imitation” as alternatives to corrigibility, but all of them are kinda right? Should have thrown in some funny wrong guesses, like “cosplay” or “compulsive role play of behaviors your progenitors did”.
Or, for example, considering that the alien already thought about how humans are short-lived: “error correcting/defending/preserving the previous progenitors’ advice”. That way of relating to your progenitors should have made it impossible for the Inebriated Alien to overwrite human motivations, because by now they are self-preserving wrong ones.
Come to think of it, those are also kind of right. I’m bad at making plausible errors.
>slowing down for 1,000 years here in order to increase the chance of success by like 1 percentage point is totally worth it in expectation.
Is it? What meaning of “worth it” is used here? If you put it to a vote, as an option, I expect it would lose. People don’t care that much about the happiness of distant future people.
And above all, the rule:
>Put forth the same level of desperate effort that it would take for a theist to reject their religion.
>Because if you aren’t trying that hard, then—for all you know—your head could be stuffed full of nonsense as bad as religion.
I don’t think it was particularly hard for me to part ways with religion? 15-year-old me just accumulated too much sense that it’s total bullshit. It was important enough to be promoted to my direct attention, but wrong enough for me to recognize it as such.
Hmmm. Maybe I was just not that invested in the boons that a religious worldview gives you: that there is somebody looking out for you, that everything goes according to a good plan after all. I was not emotionally attached to this for some reason.
Am I just emotionally invested in different kinds of stuff or am I just good at discarding wrong beliefs? Or maybe there is something wrong with the “emotional attachment” part of me.
Hmm. Yeah, it sure looks rigged as hell to be resolved by self-consistency/reflection toward the side of “care about everyone”, but surely there is some percentage of kids who come out of this with reflectively stable redhead hatred? Or, I don’t know, “nobody deserves care, not just reds, but you should pretend to care in polite society”?
I’m not sure what the point of learning to draw like that is. Might as well close one eye and imagine that you’re tracing a photograph.
Draw whatever, I’d rather see people reinvent techniques than learn them.
How about more uhh soft uncontrollability? Like, not “it subverted our whole compute and feeds us lies” but more “we train it to do A, which it sees as only telling it to do A, and does A, but its motivations are completely untouched”.
Morality as a Coordination Subsidy and Morality as a Public Good.
A night-watchman state, distributed and embedded into heads, VS doing something a lot of people want done, regardless of whether it’s cleaner streets or children having homes.
The first thing did a lot of flaking off and transferring into the second one, it seems like. Or maybe it didn’t; maybe it was a process that shaped desires compatible with #1 out of assorted #2-type things.
>Anthropic, GDM, and xAI say nothing about whether they train against Chain-of-Thought (CoT) while OpenAI claims they don’t
It sounds more like there is some kind of moderator who throttles smart things in an intelligent, targeted way. Which is my headcanon.
I overall agree with this framing, but I think even in the Before, sufficiently bad mistakes can kill you, and in the After, sufficiently small mistakes wouldn’t. So it’s mostly a claim about how strongly mistakes would start to be amplified at some point.
Took long enough. If you actually read such made-up setups, you get the constant “who the fuck came up with this goofy shit” thought. They should have started pulling actual cases from usage and testing on them long ago. Surely Anthropic has lots; it can also use the ones from Lmarena or whatever.
>Pigs in their natural state wouldn’t contribute to a human society as trade partners either, and neither would humans in a superintelligent world.
Sure. We should not count on ASI coordination subsidies being passed on to humans either. It sounds like people should use their dominant power now to control what kind of ASI is built.
You can argue that resolving that vague bundle of norms and intuitions into a “care about everything” moral framework would make it easier to point at that goal or something? Or to peer pressure ASI into adopting it? If this would seriously work, it would be desirable to do, and moral to advocate for. Or maybe not?? If it gets this universe tiled with the simplest morally relevant beings having a great time all the time or something.
Also, yeah, a lottery for shrimp cryopreservation for uplift might be an interesting way to give them more share/negotiating power.
>but freezing to death is likely still quite unpleasant, and not something I’d do for fun,
Kind of a nitpick about this particular example, but… I’d freeze to death at least a couple of times if it would not damage my long-term preferences significantly? Idk, it just seems cool to try to yolo* Everest, if I get restored to full health afterwards by ASI nanobot sludge or whatever.
The point is, freezing to death is bad (for me at least) because of my preferences, and not in major part because of my experiences.
* if you still have only one health bar
Shrimp welfare is a misappropriation of coordination subsidies.
And is therefore threatening. It tries to lay claim to some moral, altruistic pool of desires that people are encouraged to adopt to everyone’s benefit. But shrimp aren’t people; they would not contribute themselves, and would not even grow up to be such entities.
You can see a lot of moral claims as “being a good trade partner” stuff, in practice. Return favors, respect property, defer to superiors, divide resources fairly. I think it’s a fairly intuitive observation, one that gets embodied in many norms required in polite society? Virtues play into this too: “being the kind of person who returns favors and is nice in their dealings” is more legible and simple.
Such motives are present in many moralistic stories in most cultures. E.g. https://www.journals.uchicago.edu/doi/full/10.1086/701478
>So, how do you think it could be made stable?
By “stable” I meant “able to exist at all”, as opposed to being conquered / merged into a singleton or something similar. And I didn’t make a claim about the extent to which it’s likely, more about how desirable it is. And what (value-based, not existential/pragmatic) problems you would have to deal with in such a state.
I don’t have a detailed plan / expectation of how exactly you could work to achieve such a state. It just seems vaguely possible; I don’t think I can offer you any new insight on that.
>It would not be stable.
That is beside my point. I think you can make it stable, but anyway.
>up until the most vicious actor creates their preferred world in the whole light cone—which might well also involve lots of suffering
There are some reasons to think the default trajectory, of a pragmatic victor, just gets this evolution-created world duplicated many more times. That might be the baseline you have to improve on. Torture clusters might be a worse outcome, born of, uhh, a large but not quite sufficient ability to steer the values of the agent(s) that dominate.
>distribution of rapidly and unevenly expanding unregulated power does not contain a stable equilibrium
It might be stable? The question is, would it be a good one.
Mind Crime might flip the sign of the future.
If the future contains high tech, underregulated compute, and diverse individuals, then it’s likely it will contain incredibly high amounts of the most horrendous torture / suffering / despair / abuse.
It’s entirely possible you could have 10^15 human-years per second of real time on a small local compute cluster. If such amounts of compute are freely available to individuals, no questions asked, then it’s probable some of them will decide to use it to explore undesirable mental states.
*slaps roof of a server rig* This bad boy can sample, evaluate, and discard as many chickens as there have been in their whole history, in just two minutes. In fact, it’s doing it right now. And I have another 200 of them. Why? Uhhhhh, chicken-backed crypto, of course.
For context, you can estimate that over all of history so far there were around 10^12 chicken-hours. It’s such a small number if you have any kind of advanced compute substrate.
Considerations like this heavily depend on how you view it: more like a ratio to all experiences, or as an absolute priority over good states.
This consideration might just straight up overwhelm your prioritization of today’s not-very-scalable sufferings. And it’s not a very longtermist worry; this would start to be a major consideration this century, and probably in the next 20 years.
Libertarian proposals like https://www.transhumanaxiology.com/p/the-elysium-proposal have such a flaw: they contain vast amounts of the worst sufferings. “Hell on my property is none of your business!” It’s pretty bleak tbh.
EDIT: “10^15 human-years per second of real time” is unlikely. Given a football field of solar panels, you can probably do at most 10^-3 human-years per second of real time, so 18 OOMs up from that would probably look like a substantial investment of energy, noticeable on the scale of the solar system.
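A minimal back-of-envelope sketch of that scaling in Python; the field area, panel output, and solar luminosity figures below are my own rough assumptions, and only the 10^-3 and 10^15 rates come from the comment above:

```python
# Rough energy check: all constants are assumptions except the two rates from the comment.
FIELD_AREA_M2 = 5_000          # assumed area of a football field
PANEL_AVG_W_PER_M2 = 50        # assumed average (not peak) solar panel output
BASELINE_RATE = 1e-3           # human-years per second per football field (from the EDIT)
TARGET_RATE = 1e15             # human-years per second (the original claim)

SUN_LUMINOSITY_W = 3.8e26      # total power output of the Sun
SUNLIGHT_ON_EARTH_W = 1.7e17   # total sunlight intercepted by Earth

baseline_power_w = FIELD_AREA_M2 * PANEL_AVG_W_PER_M2            # ~2.5e5 W
# Assume power scales linearly with simulated human-years per second.
target_power_w = baseline_power_w * (TARGET_RATE / BASELINE_RATE)

print(f"power needed: {target_power_w:.1e} W")                                           # ~2.5e23 W
print(f"fraction of the Sun's output: {target_power_w / SUN_LUMINOSITY_W:.1e}")          # ~7e-4
print(f"multiple of sunlight hitting Earth: {target_power_w / SUNLIGHT_ON_EARTH_W:.1e}")  # ~1.5e6
```

Under these assumptions you end up needing on the order of a thousandth of the Sun’s total output, roughly a million times all the sunlight that reaches Earth, which is indeed “noticeable on the scale of the solar system”.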
Shouldn’t it be about compressing my perceptual stream? And if there is a really simple but very large universe with many copies of my perceptual stream embedded in it, then most of the complexity gets squeezed into pointing at them?