To predict if a human ends up happy with something or not?
If you want to think about the outcomes of a counterfactual, it’s just a conditional whose antecedent didn’t happen.
But that’s not the problem Rationalists have.
So what is the problem?
So you’re neither saying it’s not a counterfactual (despite it not involving either subjective or objective probability), nor are you saying there is a problem with nobody being motivated to think about them.
So what are you saying?
So you’re saying that it is a counterfactual (despite not involving subjective or objective probability), but that the problem is that nobody is motivated to think about said counterfactual?
Also, I found the post full of good ideas that really hit home.
So if I’m not a Yudkowskian rationalist and I want to say that if, in Game of Life, the configuration of cells had been different (so instead of configuration1, it had been configuration2), the outcome would’ve also been different (outcome2 instead of outcome1), that’s not a counterfactual? (Since it’s not defined in terms of subjective or objective probability.)
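To make the example concrete, here’s a minimal Python sketch (the two configurations and the fixed 3×3 grid are made up for illustration): stepping Life from configuration1 yields outcome1, and stepping it from configuration2 yields a different outcome2.

```python
# One Game of Life step on a small fixed grid (cells outside the grid count as dead).
# configuration1/configuration2 are invented examples, not anything from the discussion.

def step(grid):
    rows, cols = len(grid), len(grid[0])

    def live_neighbors(r, c):
        return sum(
            grid[r + dr][c + dc]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0) and 0 <= r + dr < rows and 0 <= c + dc < cols
        )

    # Standard rules: a live cell survives with 2 or 3 live neighbors;
    # a dead cell becomes alive with exactly 3.
    return [
        [1 if (grid[r][c] and live_neighbors(r, c) in (2, 3))
             or (not grid[r][c] and live_neighbors(r, c) == 3) else 0
         for c in range(cols)]
        for r in range(rows)
    ]

configuration1 = [[0, 1, 0],  # a vertical "blinker"
                  [0, 1, 0],
                  [0, 1, 0]]
configuration2 = [[1, 1, 0],  # a stable "block"
                  [1, 1, 0],
                  [0, 0, 0]]

outcome1 = step(configuration1)
outcome2 = step(configuration2)
print(outcome1)  # [[0, 0, 0], [1, 1, 1], [0, 0, 0]] (the blinker turned horizontal)
print(outcome2)  # [[1, 1, 0], [1, 1, 0], [0, 0, 0]] (the block stayed a block)
```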
This shows up just as plain text.
The idea is that as long as the state-machine-which-is-you continues from its last state somewhere, your consciousness survives (since that state machine running is you being conscious). Quantum suicide splits the state-machine-which-is-you into n state machines, and at the same time destroys all except 1.
That means there will be a state machine continuing the same computation from the last state, which is just another way of saying that your consciousness survives.
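As a toy illustration of that framing (everything below is made up; it’s just the “continue from the last state” intuition, not a claim about how minds actually work):

```python
import copy

# Toy state machine: the "state" is everything needed to continue the computation.

class StateMachine:
    def __init__(self, state):
        self.state = dict(state)

    def step(self):
        # Continue the computation from whatever the last state was.
        self.state["t"] += 1
        return self.state

machine = StateMachine({"t": 41})

# "Quantum suicide" in this framing: split into n copies...
n = 5
branches = [copy.deepcopy(machine) for _ in range(n)]

# ...and destroy all except one.
survivor = branches[0]
del branches

# The surviving copy continues the same computation from the last state.
print(survivor.step())  # {'t': 42}
```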
It all depends on the ontological status of other Everett branches. If they’re truly real (and not just something that appears in the calculations), quantum suicide should work.
Oh, so AI already writes code that Google uses. Amazing.
Your instance is the pattern, and the pattern is moved to the computer.
Since consciousness is numerically identical to the pattern (or, more precisely, to the pattern being processed), the question of how to get my consciousness into the computer after the pattern is already there doesn’t make sense. The consciousness is already there, because the consciousness is the pattern, and the pattern is already there.
The reason why sexual harm is (correctly) considered a serious kind of harm distinct from other harms might be evolutionary: in the ancestral environment, being physically injured or hurt in some other way might’ve been much less of a predictor of that person’s future than being raped (and possibly also a predictor of a much less seriously bad future).
I like the name Focusing: it creates the feeling that the technique is powerful, since it signals the technique is strong enough to get away with using a less frequent meaning of “focus” (rather than the usual one, i.e., to concentrate). Or maybe I’m perceiving it incorrectly (English isn’t my first language), but I do like it.
With the appropriate prompt, it understands both the queries and the difference between truth and deception.
There are probably still queries where the thinking is so complicated that not even GPT-3 can understand it.
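For concreteness, here’s a hypothetical sketch of the kind of prompt I mean (the examples and wording are invented; only the few-shot truth-vs-deception setup matters):

```python
# A hypothetical few-shot prompt; nothing here is an actual prompt from the discussion.
# The idea: show the model the truth/deception distinction before the real query.

prompt_template = """\
Q: Alice says the vase broke by itself, but she knocked it over herself. Is Alice being truthful or deceptive?
A: Deceptive.

Q: Bob says it rained yesterday, and it did rain yesterday. Is Bob being truthful or deceptive?
A: Truthful.

Q: {query}
A:"""

print(prompt_template.format(
    query="Carol says she paid the bill, but she never did. Is Carol being truthful or deceptive?"
))
```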
To my emotional brain, cuckolding is about an order of magnitude worse than rape.
This could mean either:
a) I read about the psychological effects of rape, and I concluded that they were one tenth as bad as what I’d experience if I learned that my child isn’t mine.
b) Regardless of what either truly feels like, I feel the hypothetical of learning my child isn’t mine as an order of magnitude worse than the hypothetical of rape.
If you mean a), you’re probably extraordinarily sensitive, and when reading about a psychological impact on a normal person, you should multiply the estimate by a constant k >> 1 to find out how it would affect you. Being raped would probably break you completely.
If you mean b), it seems to me you’re not engaging with the topic, which is about the actual psychological damage, rather than our emotional estimates of it.
No (since I’d silence specific statements, not specific people). Nevertheless, thank you for your answer.
You might want to read the relevant article on Wikipedia, and then consider if learning that your child isn’t yours would truly have the same effect on you.
Are you one of the people in this thread who have never been raped, and yet write about how much less traumatizing than commonly believed it would feel?
(You don’t have to respond.)
There is no other way it could work: the AI would know the difference between the actual world and the hallucinations it caused itself by sending data to its own sensors. For that reason, that data wouldn’t cause its model of the world to update, and so it wouldn’t get any utility from it.
It shows 5 years 12 months (instead of 6 years 0 months), which seems to be the same kind of bug I pointed out 23 days ago.
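As a guess at the failure mode (not the site’s actual code), computing whole years first and then rounding the leftover months separately lets the months reach 12:

```python
# A guessed reproduction of the bug, not the site's actual code.

DAYS_PER_YEAR = 365.25
DAYS_PER_MONTH = DAYS_PER_YEAR / 12

def format_age_buggy(days):
    years = int(days // DAYS_PER_YEAR)
    months = round((days - years * DAYS_PER_YEAR) / DAYS_PER_MONTH)
    return f"{years} years {months} months"  # months can round up to 12

def format_age_fixed(days):
    # Round to whole months first, then split, so 72 months becomes 6 years 0 months.
    total_months = round(days / DAYS_PER_MONTH)
    return f"{total_months // 12} years {total_months % 12} months"

almost_six_years = int(6 * DAYS_PER_YEAR) - 3  # a few days short of six years
print(format_age_buggy(almost_six_years))      # 5 years 12 months
print(format_age_fixed(almost_six_years))      # 6 years 0 months
```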
You don’t have the data to make this conclusion (or a similar one). You haven’t explored how traumatizing it would be to be raped, so merely observing that being treated as if you had been traumatized was itself traumatizing isn’t enough to conclude that if you were raped and then treated as if you were traumatized, most of the trauma (or even a significant part of it) would come from the latter.
It’s entirely possible that the more progressive parts of society aren’t mistaken, and that rape is so traumatizing that your experience of being treated as traumatized wouldn’t come anywhere close to making a meaningful contribution to the entirety of the trauma.
I don’t know how it’s actually done, because I don’t understand AI, but the conceptual difference is this:
The AI has a mental model of the world. If it fakes data into its sensors, it will know what it’s doing, and its mental model will still reflect that the actual world is unchanged. Its utility won’t go up, just as a person feeding fake data to their sensory organs wouldn’t actually be happy (as long as they care about the actual world), because they’d know that all they’ve created for themselves is a virtual reality, and that’s not what they care about.
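A toy sketch of that conceptual difference (all names below are made up, and real AI systems don’t decompose this cleanly): utility is computed over the model of the actual world, and the agent knows which readings it injected itself, so those readings never reach the model.

```python
# Toy sketch only: utility is a function of the agent's world model, and the
# agent tracks which sensor readings it caused itself. All names are invented.

class Agent:
    def __init__(self):
        self.world_model = {"paperclips": 0}  # what the agent believes is actually real
        self.self_injected = set()            # readings the agent sent to its own sensors

    def fake_sensor_data(self, reading_id, reading):
        # The agent knows it generated this reading itself...
        self.self_injected.add(reading_id)
        self.observe(reading_id, reading)

    def observe(self, reading_id, reading):
        # ...so the reading doesn't update its model of the actual world.
        if reading_id in self.self_injected:
            return
        self.world_model.update(reading)

    def utility(self):
        # Utility is over the model of the actual world, not over raw sensor data.
        return self.world_model["paperclips"]

agent = Agent()
agent.fake_sensor_data("r1", {"paperclips": 10**6})
print(agent.utility())  # 0: the hallucination bought no utility

agent.observe("r2", {"paperclips": 3})  # a genuine observation
print(agent.utility())  # 3
```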