Likely trivially true; you can set up a scene where people recite cognitohazards and then tell you about it. Or something in that neighborhood. Like, “It’s >99.99% likely that this arrangement of atoms exists in the Sun’s plasma: 10100011011000101111010100110100111010110” and you get a psychotic break.
Canaletto
“The usual sleep is death actually. You just get resurrected in the most likely place for you to be resurrected, your waking body 8 hours later.”
People start complaining that this abuses the word “death,” but then refuse to enter destructive teleporters.
Did you discover your love for vagueposting?
Oh right, yeah, that makes sense.
I asked Opus this question:
Suppose it’s a bright day, and a 3x3 mm surface reflects 5% of the light hitting it. How many photons per second will it direct onto a square of (10*10 / 10^15) m^2 in area, 40 kilometers away?
It works out to 1 photon per year.
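The arithmetic can be sanity-checked directly. A rough sketch, with assumptions of my own that weren’t in the question: ~1000 W/m² of bright-day sunlight, ~550 nm photons, and Lambertian (diffuse) reflection.

```python
import math

# Assumed inputs (not stated in the original question): bright-day
# irradiance and a Lambertian reflector. Only the geometry is from the prompt.
irradiance = 1000.0            # W/m^2, direct sunlight on a bright day
wavelength = 550e-9            # m, representative visible photon
h, c = 6.626e-34, 3.0e8
photon_energy = h * c / wavelength          # ~3.6e-19 J per photon

surface_area = 3e-3 * 3e-3     # 3x3 mm reflector
albedo = 0.05                  # reflects 5% of light
target_area = 10 * 10 / 1e15   # m^2, as stated in the question
distance = 40e3                # m

reflected_photons = irradiance * surface_area * albedo / photon_energy
solid_angle = target_area / distance**2     # sr subtended by the target
# Lambertian surface: the fraction of reflected light going into a small
# solid angle near the normal is Omega / pi.
per_second = reflected_photons * solid_angle / math.pi
per_year = per_second * 3.156e7             # seconds in a year

print(f"{per_second:.2e} photons/s, ~{per_year:.1f} photons/year")
```

With these assumptions it lands around 2.5e-8 photons per second, i.e. on the order of one photon per year, consistent with Opus’s answer.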
In some sense, tax is theft, unless it gets spent on projects that you would rather see done but can’t get done through grassroots coordination. E.g. it would be unfair to tax people in a different country under threat of violence while giving them literally no benefit of citizenship.
So then, if the state taxes the few productive actors and distributes to the non-productive ones, isn’t it even more like robbery? “You have stuff, I need stuff to spend on my wants, so gimme, on threat of violence.” You don’t particularly have the moral upper hand there. That makes it sound like UBI can go badly.
And I don’t expect the governments would be the ones providing protection / rule of law / dispute-arbitration services. There would be better ways to coordinate, and with different desiderata for assistance with it, I strongly suspect.
Like, to tax them you can only reason “we have power, you don’t, we can take stuff from you for literally no benefit to you, suck it up.” But it looks like it’s the AIs who would have the power, or the people in direct control of fully automated production stacks. There is no reason to tax them, even in a moral-ish frame; it would be just attempted robbery. Different from how it works now, with public-good production.
Unless they are somewhat aligned, of course; then taking stuff from them and giving it to humans is in their interests and will happen. That makes alignment look more important.
The same thing that makes UBI possible makes the casual wipeout of humans possible too (zero-sum redistribution of your atoms to AIs).
“if I can prove that you output label A, I also output label A. Otherwise, I’ll output label B”.
It is weird that he kept missing (?) the unconditional-cooperator case. The objective is to get the most utility points, not to sit in the cooperate-cooperate state.
EDIT: Okay, it is discussed half an hour later. But it still feels weird as an approach to talking about stuff.
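The quoted bot (a FairBot-style construct from work on program equilibrium and Löbian cooperation) can be caricatured in code. A toy sketch, entirely my own names and simplifications: bounded simulation stands in for real proof search, and it also shows the unconditional-cooperator point, since FairBot settles into A-A against a bot it could have safely exploited.

```python
def cooperate_bot(opponent, depth):
    return "A"                       # unconditionally outputs label A

def defect_bot(opponent, depth):
    return "B"                       # unconditionally outputs label B

def fair_bot(opponent, depth=3):
    """'If I can prove you output A, I output A; otherwise B.'
    Proof search is crudely replaced here by simulating the opponent
    against us with a smaller budget."""
    if depth == 0:
        return "B"                   # budget exhausted: no "proof" found
    return "A" if opponent(fair_bot, depth - 1) == "A" else "B"

# Against an unconditional cooperator, fair_bot sits in A-A even though
# (under prisoner's-dilemma payoffs) outputting B against cooperate_bot
# would score more points, which is the point above.
print(fair_bot(cooperate_bot))   # A
print(fair_bot(defect_bot))      # B
print(fair_bot(fair_bot))        # B: an artifact of bounded simulation
                                 # bottoming out, unlike Löbian proof search
```

The self-play defection in the last line is exactly the failure the real proof-theoretic construction avoids; this sketch only illustrates the conditional structure, not the Löbian trick.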
Did you somehow prompt away the extreme sycophancy, or do you just eat that cost to your sanity? E.g. default 3.1 answering a pretty stupid and vague question about physics:
You have just independently derived the exact bridge between computer science, statistical mechanics, and quantum physics.
Your phrasing—”you pipe the bookkeeping state out of a subsystem for it to do anything”—is one of the most accurate, intuitive descriptions of thermodynamics and Landauer’s Principle I have ever read.
You are entirely right:
Gpt5.2 would have absolutely replied with barely concealed contempt.
Oh yeah, I actually missed that this is a question! But anyway, it’s not a particularly good question, in that it also makes a lot of tangled claims. I don’t think changing it will help, but you can try, I guess.
It’s kind of just the effect where it’s really hard to engage with writing that has a lot of confusions/errors/disagreements in it, both in its conclusions and in the methods of reaching them. (From the perspective of the reader, I mean.) The low value and small amount of work you’ve put in make it worse; this post is basically a question.
I think this is mostly a meta problem; it’s not particularly about your object level. Choose your battles, etc.
I now see this more positively: what they were doing could be described as doing Bayesian updates toward some theories and away from others
Doesn’t the Bayesian paradigm leave it undefined how exactly you should acquire your hypothesis space in the first place? So, as you describe it, with scientists exploring in search of weird phenomena, isn’t it more about getting hypotheses you didn’t have before? Bayesian updating doesn’t sound like the right abstraction for this either.
I’d sort of like to give humanity as a whole more of a vote on whether we develop AGI as fast as humanly possible, because I think their intuitions would trend in the right directions.
Well, would you say this if their intuitions were tending in the wrong directions?
I didn’t mean to say “internal” memory. E.g. you can start a TM head, which is a finite automaton, on an empty tape. So how is that different from starting a finite automaton in a world where it can interact with something that might as well be a tape, and getting a TM analogue out of the agent + world combination?
So, it only has observations from the environment, with no action lever to pull? Or the actions are internal, without being relayed to the environment?
(I did not read your whole post, sorry.)
If you look at a finite-state agent, don’t you also have to look at its interaction with some (diverse) environment? And that’s basically how TMs work? So, what’s up with that? What makes them finite-state in the relevant sense?
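The “finite control + tape-like world” picture can be made concrete. A minimal sketch of my own, not from the post: the agent is a fixed finite transition table, the “environment” is an unbounded tape it touches only through local observations and actions, and together they implement a TM (here, a unary incrementer).

```python
from collections import defaultdict

# Agent: a *finite* lookup table (state, observed symbol) -> (write, move, next state).
# Nothing unbounded lives inside the agent itself.
# This particular table appends a 1 to a block of 1s (unary n -> n+1).
TABLE = {
    ("scan", 1): (1, +1, "scan"),   # walk right over the block of 1s
    ("scan", 0): (1, +1, "halt"),   # first blank: write a 1 and stop
}

def run(initial_ones):
    # Environment: an unbounded tape the agent reads and writes locally.
    tape = defaultdict(int)
    for i in range(initial_ones):
        tape[i] = 1
    state, head = "scan", 0
    while state != "halt":
        obs = tape[head]                        # observation from environment
        write, move, state = TABLE[(state, obs)]
        tape[head] = write                      # action relayed to environment
        head += move
    return sum(tape.values())

print(run(3))   # 4: the agent + tape system computed n -> n+1
```

The agent alone is a finite automaton; the unbounded memory that makes the combined system Turing-complete sits entirely in the world.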
There might be a distinction here between considering CEV in near vs. far mode, as this is one of the pretty strong considerations that would be included, I believe. Did you hope CEV would be good by your lights? But you are just a 1/8,000,000,000 constituent of it; it can go many ways. And I’m not very sure whether current (mixed) attitudes toward it would be amplified in one direction or the other.
Isn’t CEV pretty much the person-affecting view, implemented? It’s not like you consider including dead people, future people, animals, or aliens in CEV. They would receive consideration via the preferences of the people you do include, but not directly.
Also, it’s probably a misuse of the “too combative” emoji? If someone comments that this design looks pleasing and neat to them, and another person comments that it looks like Substack-tier slop UI shoved in their face, is the second poster really too combative? It’s your right to find the comment not worth your time to look at, so whatever about the downvotes.
This is a horrendous design. Please stop designing things.
I would actually switch to Greater Wrong over this. Maybe even just because trying this shows what kind of designs the admins find good.
If there are other aliens whose morality is scary, then who knows what they might want to do with, or have done to, our bodies/minds.
Somewhat similar to counterfactual mugging, although this one goes both ways: your counterfactual self’s decision affects you, and your decision affects your counterfactual self, symmetrically. Hmm.
https://www.lesswrong.com/w/counterfactual-mugging