The general “method of rationality” does not require any specific theorem to be true. Rationality works so long as the universe has causality. All rationality says is that, given that the actions an agent can take have some causal effect on the outcome the universe produces, the agent can estimate which actions best serve its goals. And the agent should do that by definition, since this is what “winning” is.
We have many demonstrations of such agents today, from simple control systems to cutting-edge deep-learning game players. And we as humans should aspire to act as rationally as we can.
This is where non-mainstream actions come into play: for example, cryonics is rational, taking risky drugs that may slow aging is rational, and so on. The case for them is strong enough that any rational estimate of the outcome of your actions says you should be doing these things. Another bit of non-mainstream thought is that we don’t have to be certain of an outcome to chase it. For example, if cryonics has a 1% chance of working, mainstream thought says we should just take the 99% case of it failing as THE expected outcome, declare it “doesn’t work”, and not do it. But a 1% chance of not being dead is worth the expense for most people.
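The 1% argument is just an expected-value comparison. A minimal sketch, where the dollar figures for cost and for the value placed on revival are invented assumptions, not numbers from this thread:

```python
# Toy expected-value comparison for a low-probability, high-value bet.
# All numbers are illustrative assumptions, not real cryonics figures.

p_success = 0.01          # assumed chance cryonics works
value_of_revival = 50e6   # assumed dollar-equivalent value of not being dead
cost = 100_000            # assumed lifetime cost of the arrangement

ev_sign_up = p_success * value_of_revival - cost
ev_do_nothing = 0.0

print(ev_sign_up)  # 400000.0 -- positive, so the bet is worth taking under these assumptions
```

The mainstream move described above amounts to rounding `p_success` down to zero, which makes the expected value simply `-cost`; the disagreement is about whether that rounding is legitimate, not about the arithmetic.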
No theorems are required, only that the laws of physics allow us to rationally compute what to do. [Note that religious beliefs state the opposite of this. For example, were an invisible being pulling the strings of reality, then merely “thinking” in a way that being doesn’t like might cause it to give you bad outcomes. Mainstream religions contain various “hostile to rationality” memes: some state you should stop thinking, others that you should “take it on faith” that everything your local church leader says is factual, and so on.]
> …the agent can estimate the optimal outcome for the agent’s goals. And the agent should do that by definition as this is what “winning” is.
And the agent can learn to do that better. In a universe where intuition and practical experience beat explicit reasoning, there is no point in teaching or learning rationality. So there are actually a couple of further assumptions.
> …if cryonics has a 1% chance of working, mainstream thought says we should just take the 99% case of it failing as THE expected outcome, declare it “doesn’t work”, and not do it. But a 1% chance of not being dead is worth the expense for most people.
Which also requires some unstated assumptions. You are assuming that it is a win merely to be successfully revived from cryonics, but you would also need to assume that you are not in a hell world, and even if you are not, that you would be happy having no social ties. Also, there are trade-offs between the amount of money spent on cryonics and leaving money to your descendants. It actually takes a rather unusual person, one who has a lot of spare money and few social connections, to value cryonics.
This is one of the tenets of rationality. If the best information available and the best method for assessing probability clearly say something different from “mainstream” opinion, then probably the mainstream is simply wrong.
During the pandemic there were many examples of this: modeling an exponential process is easy to do with math, but mainstream decision makers often failed to follow the predictions, usually relying on linear models or incorrect “intuition and practical experience”.
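The failure mode is easy to show numerically. A toy sketch, with invented parameters (a 3-day doubling time and 100 initial cases, both assumptions for illustration), comparing an exponential projection against the “intuitive” linear extrapolation of the first week’s growth:

```python
# Toy comparison of an exponential epidemic projection against a linear
# extrapolation of its early growth. Parameters are invented for
# illustration; they are not real pandemic figures.

doubling_days = 3.0    # assumed doubling time
initial_cases = 100.0  # assumed starting case count

def exponential(day: float) -> float:
    """Cases on a given day under constant exponential growth."""
    return initial_cases * 2 ** (day / doubling_days)

# The "intuitive" linear model: extend the average daily increase
# observed during the first week.
daily_increase = (exponential(7) - exponential(0)) / 7

def linear(day: float) -> float:
    return initial_cases + daily_increase * day

for day in (7, 14, 21, 28):
    print(f"day {day}: exponential {exponential(day):,.0f}, linear {linear(day):,.0f}")
```

By week four the two models disagree by more than an order of magnitude, which is why a decision maker who reasons linearly from early data is consistently, and badly, surprised.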
As a side note, there are many famous examples where this fails: intuition and practical experience usually fail when contrasted with well-collected, large-scale data (technically, they always fail). Another element of rationality is that it’s not enough to be right; you have to have reached the right conclusion by the right process. For example, it is incorrect to hit on 20 in blackjack: even if you win the hand, you were still wrong to do it, unless you have a way of seeing the next card.
(Or in more explicit terms: the policy you use needs to be evidence-based and the best available, and its effectiveness measured over large data sets, not local and immediate-term outcomes. This means that sometimes having the best chance of winning means you lose.)
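The hit-on-20 claim can be checked by enumerating the next card. A sketch assuming an effectively infinite deck, so each of the 13 ranks is equally likely:

```python
from fractions import Fraction

# Check the hit-on-20 claim by enumerating the next card drawn.
# Assumes an effectively infinite deck, so each of the 13 ranks is
# equally likely. The ace counts as 1 here, since counting it as 11
# would bust a hand of 20 anyway.
card_values = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10]

hand = 20
p_not_bust = Fraction(sum(hand + v <= 21 for v in card_values), len(card_values))

print(p_not_bust)  # 1/13 -- only an ace avoids busting
```

Hitting busts roughly 92% of the time, while standing on 20 already wins against almost every dealer outcome, so winning one such hand after hitting does not make the decision right.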
As for a “hell world”, known human history has had very few humans living in “hell” conditions for long. And someone can make new friends and family. So these objections are not rational.
Wrong about their values, or wrong about the actions they should take to maximize their values? Is it inconceivable that someone with strong preferences for maintaining their social connections, etc., could correctly reject cryonics?
> As for a “hell world”, known human history has had very few humans living in “hell” conditions for long.
But you can still have a preference for experiencing zero torture.
Wrong about the actions they should take to maximize their values.
It’s inconceivable because it’s a failure of imagination. Someone who has many social connections now will potentially be able to make many new ones were they to survive cryo. Moreover, reflecting on past successes requires one to still exist to remember them.
Could a human exist who should rationally say no to cryo? In theory yes, but probably none has ever existed. As long as someone extracts any positive utility at all from a future day of existing, then continuing to exist is better than death. And while, yes, certain humans live in chronic pain, any technology able to rebuild a cryo patient can almost certainly fix the problem causing it.
You would need to say that, out of 100 billion humans, someone lived with a problem that can’t be fixed, who suffers more by existing than by not existing. This is a paradox, and I say no such person exists, as all such problems are brain or body faults that can be fixed.
No. https://www.lesswrong.com/posts/bAQDzke3TKfQh6mvZ/halpern-s-paper-a-refutation-of-cox-s-theorem
> It actually takes a rather unusual person, a person who has a lot of spare money and few social connections, to value cryonics.

Or normal people are just wrong.
> As long as someone extracts any positive utility at all from a future day of existing then continuing to exist is better than death.
Waking from cryo is equivalent to exile. Exile is a punishment.
Yes. It doesn’t matter, though.
> Could a human exist that should rationally say no to cryo? In theory yes but probably none have ever existed.
You are assuming selfishness. A person has to trade off the cost of cryo against the benefit of leaving money to their family or to charity.
Now you are assuming benevolent motivations.