Raemon, I had a long time to think on this and I wanted to break down a few points. I hope you will respond and help me clarify where I am confused.
By expected value, do you mean it in the mathematical sense? For example, take a casino game where you have a slight edge in EV. (This happens when the house grants custom rules to high rollers, in roulette with computer assistance, and in blackjack.)
This doesn’t mean an individual playing with positive EV will accumulate money until they are banned from playing. They can absolutely hit a string of bad luck and go broke.
Similarly, a person using rationality in their life can have bad luck and receive a bad outcome.
Some of the obvious ones: if cryonics has a 15 percent chance of working, then in 85 percent of futures the money was wasted. The current drugs that extend lifespan in rats and other organisms, which the medical-legal establishment is slow-walking into human trials, may not work in humans, or they may work but a side effect kills an individual rationalist.
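The positive-EV-but-broke point can be sketched with a quick Monte Carlo simulation. The numbers here (a 51% win probability per flat $1 bet, a $20 bankroll, a $200 walk-away target) are illustrative assumptions, not real casino figures:

```python
import random

def goes_broke(p_win=0.51, bankroll=20, target=200):
    """Flat $1 bets on a game with a 1% player edge.
    Returns True if the player is ruined before reaching the target."""
    while 0 < bankroll < target:
        bankroll += 1 if random.random() < p_win else -1
    return bankroll == 0

random.seed(0)
trials = 1000
ruined = sum(goes_broke() for _ in range(trials))
print(f"EV per $1 bet: +$0.02, yet ruin rate = {ruined / trials:.0%}")
```

This is the classic gambler's ruin setup: with these particular numbers the player goes broke in roughly 45% of runs despite the positive edge, because the bankroll is small relative to the bet size.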
With that said there’s another issue here.
There are the assumptions behind rationality, and the heuristics and algorithms this particular group tries to use.
Assumptions:
The world is causal.
You can extract general patterns from past events and reuse them.
Individual humans, no matter their trappings of authority, must have a mechanism in order to know what they claim.
Knowing more decision-relevant information when making a decision improves your odds; it’s not all luck.
Rules society wants you to follow that are not written into criminal law may not be to your benefit to obey. Example: “go to college first”.
It’s just us. Many human constructs are simply made up, carry no information content whatsoever, and can be ignored. Examples are the idea of “generations” and of course all religion.
Gears-level models. How does A cause B? If there is no causal connection, it is possible someone is mistaken.
Reason with numbers. Any effective decision-making process can be described and implemented as numbers and written rules, reasoning in the open. Given sufficient compute, this will always beat a human “going with their gut”.
I have others, but this seems like a start.
Algorithms:
Try to apply Bayes’ theorem.
Prediction markets, and expressing opinions as probabilities.
What do they claim to know, and how do they know it? (Specific to humans.) This lets you dismiss the advice of whole classes of people when they have no empirical support or are paid to work against you.
Psychologists with their unvalidated and ineffective “talk therapy”; psychiatrists, in many cases, with their crude methods of manipulating entire classes of receptors and their lack of empirical tools to monitor treatment; real estate agents; stock brokers pushing specific securities; and all religion employees.
Note that each of the above is mostly unhelpful, but there are edge cases. I would trust a psychologist that was an AI system validated against a million patients’ outcomes; a psychiatrist using fMRI or implanted brain electrodes; a real estate agent with no incentive to push me toward an immediate purchase; a stock-advice system with open-source code; and a religion employee who can show the communication device used to contact their deity, or their supernatural powers.
Sorry for the long paragraph, but these are heuristics. A truly rational ASI is going to simulate it all out. We humans can at best check whether someone is misleading us by looking for outright impossibilities.
Is someone we are debating even responding to our arguments? For example, authority figures simply don’t engage with questions on cryonics or existential AI risk, or they offer meaningless platitudes that do not respond to the question asked. Someone doing this is potentially wrong in their opinion.
Is an authority figure with a possibly wrong, deeply held belief even updating that belief as invalidating evidence becomes available? Does any authority figure at a medical research establishment even know that 21CM recently revived a working kidney after cryopreservation? Would it alter their opinion if they were told?
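The “apply Bayes’ theorem” item above can be made concrete with the kidney example. A minimal sketch, where the prior and both likelihoods are made-up illustrative numbers, not claims about the actual probabilities:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(hypothesis | evidence) via Bayes' theorem."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Hypothetical numbers: prior that cryonics can work, and how likely
# an organ-revival result would be under each hypothesis.
prior = 0.15
posterior = bayes_update(prior, p_evidence_if_true=0.6, p_evidence_if_false=0.2)
print(f"{prior:.0%} -> {posterior:.0%}")  # 15% -> 35%
```

The point is the mechanism, not the specific output: anyone holding a belief about cryonics should move at least somewhat on learning the evidence, because the evidence is more likely in worlds where the hypothesis is true.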
If the assumptions are true, and you pick the best algorithm available, you will win relative to other humans in expected value. Rationality is winning.
That doesn’t mean you, as an individual, can’t die of a heart attack despite the correct diet while AI stocks are in a winter, so that you never see the financial benefits. (A gears-level model would say A, AI company capital, can lead to B, goods and services from AI, which feed back into A; thus owning shares is a stake in a potentially unbounded feedback loop.)
I’m not sure I understood the point you’re making.
A point which might be related: I’m not just saying “systemized winning still involves luck of the dice” (i.e. just because it’s positive EV doesn’t mean you’ll win). I’m saying “studying systemized winning might be negative EV (for a given person at a given point in history).”
Illustrative example: an aspiring doctor from the distant past might have looked at a superstitious shaman and thought “man, this guy’s arguments make no sense. Shamanism seems obviously irrational.” And the aspiring doctor goes off to reason about medicine from first principles… and invents leeching/bloodletting. He might have some methods/mindsets that are “an improvement” over the shaman’s mindset, but the shaman might have generations of accumulated cultural tips/tricks that tend to work even if his arguments for them are really bad. See Book Review: The Secret Of Our Success, although also the counterpoint Reason isn’t magic.