What do I get out of any of this?
If Bob asked this question, it would show he’s misunderstanding the point of Alice’s critique—unless I’m missing something, she claims he should, morally speaking, act differently.
Responding “What do I get out of any of this?” to that kind of critique is either a misunderstanding, or a rejection of morality (“I don’t care if I should be, morally speaking, doing something else, because I prefer to maximize my own utility.”).
Edit: Or also, possibly, a rejection of Alice (“You are so annoying that I’ll pretend this conversation is about something else to make you go away.”).
The author shares how terrible it feels that X is true, without bringing arguments for X being true in the first place (based on me skimming the post). That can bypass the reader’s fact-check (because why would he write about how bad it made him feel that X is true if it wasn’t?).
It feels to me like he’s trying to combine an emotional exposition (no facts, talking about his feelings) with an expository blogpost (explaining a topic), while trying to grab the best of both worlds (the persuasiveness and emotions of the former and the social status of the latter) without the substance to back it up.
omnizoid’s post was poorly chosen as an example of where not to take EY’s side. He two-boxes on Newcomb’s problem, and any confident statements he makes about rationality or decision theory should, for that reason, be ignored entirely.
Of course, you’re going meta rather than claiming he’s object-level right, but I’m not sure using an obviously wrong post to take his side on the meta level is a good idea.
To understand why FDT is true, it’s best to start with Newcomb’s problem. Since you believe you should two-box, it might be best to debate Newcomb’s problem with somebody first. Debating FDT at this stage seems like a waste of time for both parties.
(The visual fidelity is a very small fraction of what we actually think it is—the brain lies to us about how much we perceive.)
CharacterAI bots don’t show as public until some condition is fulfilled, which I don’t remember right now.
ChatGPT-3.5 (when prompted the perfect way) outperforms CharacterAI, and I assume GPT-4 with the right prompt would as well; you might check those options out. (I haven’t tried a therapist specifically, though.)
That depends on how we define “information”—for one definition of information, qualia are information (and also everything else is, since we can only recognize something by the pattern it presents to us).
But for another definition of information, there is a conceptual difference—for example, morphine users report knowing they are in pain, but not feeling the quale of pain.
The “purpose” of most martial arts is to defeat other martial artists of roughly the same skill level, within the rules of the given martial art.
This is false—the reason they were created was self-defense. That you can have people of similar weight and belt color spar/fight each other in contests is only a side effect of that.
“Beginner’s luck” is a thing in almost all games. It’s usually what happens when someone tries a strategy so weird that the better player doesn’t immediately understand what’s going on.
That doesn’t work in chess if the difference in skill is large enough. If it did, anyone could simply make up n sufficiently weird strategies and, without any skill, win any title, or even the World Chess Championship (where n is the number of victories needed).
If you’re saying it works as a matter of random fluctuation (i.e. a player without skill could win, say, 0.5% of games against Magnus Carlsen, because these weird strategies almost never work but sometimes do), that wouldn’t be useful against an AI: it would still almost certainly win, or, more realistically, simply model us well enough to know when we’d try the weird strategy.
Two points of order, without going into any specific accusations or their absence:
The post is transphobic, which anticorrelates with being correct/truthful/objective.
It seems optimized for smoothness/persuasion, which, based on my experience, also anticorrelates with both truth and objectivity.
That’s seemingly quite a convincing reason why you can’t be born too early. But what occurs to me now is that the problem can be about where you are, temporally, in relation to other people. (So you were still born on the same day, but depending on the total size of the civilization, m, the probability of having no more than n people precede you is (n/m)·100%.)
Depending on how “anthropic problem” is defined, that could potentially be true either for all, or for some anthropic problems.
How does one tell if they “are trans”
If introspection doesn’t help, maybe a specialized therapist would. I can’t offer any rationalist-level advice on how to find out if you’re transgender that you couldn’t google yourself. Good luck finding out, however.
Edit: It seems that whoever downvoted this completely correct comment has been psychologically empowered by the barrage of the transphobic crackpot comments in this thread, but it seems such is the price for having an anonymous upvote/downvote system.
Autogynephilia and being transgender are two distinct phenomena, the latter being caused by having a brain of the opposite gender.
Experiencing the former doesn’t mean the latter is also secretly the former.
Surely you’re not saying that the point of arresting him is to prevent him from winning an election. Surely you’re not saying that.
There are many points to arresting criminals. Making it harder for them to amass power is one of them, and winning an election is one way of amassing power.
Do you think such a taboo is likely to increase or decrease the risk from dictators taking over?
Increase, for the reasons I enumerated.
Maybe you could claim that would-be dictators are more likely than good candidates to have committed crimes
That’s one factor, yes.
On the other hand, if there is no such taboo, then a dictator who has already been elected is more likely to appoint cronies who will prosecute his political opponents for whatever might stick to them—even if they don’t stick, the prosecution itself can be damaging and onerous.
That doesn’t work for the reasons I gave (to repeat them very briefly: the dictator will not respect any informal taboos, or formal ones, for that matter).
Aside from not respecting informal taboos, he will be helped by other Republicans in other branches of the government to get away with both overstepping his authority and committing outright crimes.
The idea you’re describing is the exact opposite of how social interaction and systems of power work. It was generated and released into the wild by bad-faith actors invested in people falsely believing it. (Another such idea is “we have to give the Babyeaters a platform and host their hate speech on our servers, so that people can see how terrible they are and stop supporting them,” which also works, in reality, the other way around.)
Do you simultaneously know what it’s like when something looks red, and also believe that you don’t have qualia?
Are you taking into account the simulated exams? It doesn’t look like it mostly generates false facts?
It seemed to be key that I fight well by some metrics
That couldn’t be the case: it would leave you, even with a black belt, vulnerable to people who can’t fight, which would defeat the purpose of martial arts. Whichever technique you use, you use it in response to what the other person is currently doing. You don’t simply execute a technique that depends on the person fighting well by some metrics, and then get defeated when it turns out they are, in fact, only in the 0.001st percentile of fighting well by any metric we can imagine.
(That said, I’m really happy for your victories—maybe they weren’t quite as well-trained.)
This has me wondering whether an AI would have significant difficulty winning against humans who act inconsistently and suboptimally in some ways, without acting like utter idiots randomly all the time.
I’m thinking the AI would predict the way in which the other person would act inconsistently and suboptimally.
If there were multiple paths to victory for the human and the AI could block only one (thereby seemingly giving the human the option to out-random the AI by picking one of the unguarded paths to victory), the AI would be better at predicting the human than the human would be at randomizing.
People are terrible at being unpredictable. I remember a 10+ year-old rock-paper-scissors predictor that guessed a human’s “random” choice over a series of games. The humans had no chance.
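A minimal sketch of how such a predictor can work (this is my reconstruction, not the actual program; the names and the first-order Markov model are assumptions — real versions typically track longer histories):

```python
import random
from collections import defaultdict

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}  # BEATS[m] beats m

class MarkovPredictor:
    """First-order Markov predictor: guesses the opponent's next move
    from how often each move has followed their previous move."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.last = None  # opponent's previous move

    def play(self):
        if self.last is None or not self.counts[self.last]:
            return random.choice(MOVES)  # no data yet, play randomly
        # Most frequent follow-up to the opponent's last move...
        likely = max(MOVES, key=lambda m: self.counts[self.last][m])
        return BEATS[likely]  # ...and the move that counters it

    def observe(self, move):
        if self.last is not None:
            self.counts[self.last][move] += 1
        self.last = move
```

Against a human whose “random” choices have any first-order pattern (e.g. rarely repeating the same move twice in a row), the predictor’s edge compounds over a long series.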
I found two statements in the article that I think are well-defined enough and go into your argument:
1. “The birth rank discussion isn’t about if I am born slightly earlier or later.”
How do you know? I think it’s exactly about that. I have x% probability of being born within the first x% of all humans (assuming all humans are the correct reference class—if they’re not, the problem isn’t in considering ourselves a random person from a reference class, but choosing the wrong reference class).
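The claim can be checked with a quick simulation (a hypothetical sketch of my own; it assumes you are a uniformly random draw from all m humans, which is exactly the reference-class assumption in question):

```python
import random

def prob_in_first_fraction(total_humans, x_percent, trials=100_000):
    """Estimate the probability that a uniformly random birth rank
    falls within the first x_percent of all total_humans."""
    cutoff = total_humans * x_percent / 100
    hits = sum(random.randrange(total_humans) < cutoff for _ in range(trials))
    return 100 * hits / trials  # as a percentage
```

For any x, the estimate comes out near x%, which is just what “x% probability of being born within the first x% of all humans” says; the substantive question remains whether “all humans” is the right reference class.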
2. “Nobody can be born more than a few months away from their actual birthday.”
When reasoning probabilistically, we can imagine other possible worlds. We’re not talking about something being the case while at the same time not being the case. We imagine other possible worlds (created by the same sampling process that created our world) and compare them to ours. In some of those possible worlds, we were born sooner or later.
That’s true, but the definition of probability isn’t inapplicable to everything. From that, in conjunction with us being able to make probabilistic predictions about ourselves, it follows that we are a random member of at least one reference class, which means that our soul has been selected at random from all possible souls in a specific reference class (if that’s what you meant by that).
By definition of probability, we can consider ourselves a random member of some reference class. (Otherwise, we couldn’t make probabilistic predictions about ourselves.) The question is picking the right reference class.
Back in the 2000s, the official version was that ingesting a pepper-grain-sized amount of infected tissue is enough to be infected with BSE, so maybe the decomposition of the proteins isn’t perfect (in the sense that some molecules might not be taken apart). Ingestion of infected tissue is still the official mode of transmission.