If it turns out that the whole MIRI/LessWrong memeplex is massively confused, what would that look like?
Note that in the late 19th century, many leading intellectuals followed a scientific/rationalist/atheist/utopian philosophy, socialism, which later turned out to be a horrible way to arrange society. See my article on this. (And it’s not good enough to say that we’re really rational, scientific, altruist, utilitarian, etc, in contrast to those people—they thought the same.)
So, how might we find that all these ideas are massively wrong?
Well, why do you think socialism is so horribly wrong? During the 20th century socialists more or less won and got what they wanted. Things like social security, governmental control over business, and redistribution of wealth in general are all socialist. All of this may be bad from some point of view, but that view is in no way the mainstream opinion.
Moreover, those guys whom you mention in your article called themselves communists and Marxists. At most, they considered socialism an intermediate stage on the way to communism. And communism went bad because it was founded on wrong assumptions about how both the economy and human psychology work. So, which MIRI/LessWrong assumptions could be wrong and cause a lot of harm? Well, here are some examples.
1) Building FAI is possible, and there is a reliable way to tell if it is truly FAI before launching it. Result if wrong: paperclips.
2) Building FAI is much more difficult than building AI in general. Launching a random AI is civilization-level suicide. Result if this idea becomes widespread: we don’t launch any AI before civilization runs out of resources or collapses for some other reason.
3) Consciousness is a sort of optional feature; intelligence can work just as well without it. We can reliably say whether a given intelligence is a person. In other words, the real world works the same way as in Peter Watts’s “Blindsight”. Results if wrong: many, among them the classic sci-fi AI rebellion.
4) Signing up for cryonics is generally a good idea. Result if widespread: the costs significantly contribute to worldwide economic collapse.
Under the assumption that cryonics patients will never be unfrozen, cryonics has two effects. Firstly, resources are spent on freezing people, keeping them frozen and researching how to improve cryonics. There may be fringe benefits to this (for example, researching how to freeze people more efficiently might lead to improvements in cold chains, which would be pretty snazzy). There would certainly be real resource wastage.
The second effect is increasing the rate of circulation of the currency; freezing corpses that will never be revived is pretty close to burying money, as Keynes suggested. Widespread, sustained cryonic freezing would certainly have stimulatory, and thus inflationary, effects; I would anticipate a slightly higher inflation rate and an ambiguous effect on economic growth. The effects would be very small, however, as cryonics is relatively cheap and would presumably grow cheaper. The average US household wastes far more money and real resources by not recycling, not closing curtains, and allowing food to spoil.
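(For the scale of that circulation effect, the textbook quantity-theory identity MV = PQ gives the arithmetic. A toy sketch in Python, with all numbers made up:)

```python
# Quantity theory of money: M * V = P * Q.  Holding the money stock M
# and real output Q fixed, a rise in velocity V shows up in the price
# level P, i.e. the "stimulatory, and thus inflationary" effect above.
M, Q = 1000.0, 5000.0   # hypothetical money stock and real output
for V in (5.0, 5.1):    # velocity before / after a small stimulus
    P = M * V / Q
    print(f"V = {V:.1f} -> price level P = {P:.3f}")
# V = 5.0 -> price level P = 1.000
# V = 5.1 -> price level P = 1.020
```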
How does this connect with the funding process of cryonics? When someone signs up and buys life insurance, they forgo consuming the premiums during their lifetime, in effect investing them in the wider economy via the insurance company’s holdings of bonds etc.; when they die and the insurance is cashed in for cryonics, some of it gets used on the process itself, but a lot goes into the trust fund, where again it is invested in the wider economy. The trust fund uses the return for expenses like liquid nitrogen, but it’s supposed to use only part of the return (so the endowment builds up and there’s protection against disasters), and in any case society’s gain from the extra investment should exceed the fund’s return (after all, why would anyone offer the fund investments on which they would take a loss and overpay the fund?). And this gain ought to compound over the long run.
So it seems to me that the main effect of cryonics on the economy is to increase long-term growth.
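(To make the compounding argument concrete, here is a toy model in Python. Every number in it, premium, returns, spend rate, is a hypothetical placeholder rather than an actual cryonics-provider figure; the point is only the direction of the effect:)

```python
# Toy model: premiums accumulate inside the insurer's investments until
# death; part of the payout buys the freezing, the rest is endowed in a
# trust fund that spends only part of its return.  All numbers are
# hypothetical placeholders.
annual_premium = 500        # $/year paid into life insurance
years_paying = 40           # years of premium payments
insurer_return = 0.04       # annual return on the insurer's bond portfolio
payout_to_trust = 0.7       # share of the payout endowed in the trust
trust_return = 0.05         # trust fund's annual investment return
trust_spend_rate = 0.03     # share of the fund spent yearly on upkeep
years_frozen = 100

# Premiums compound inside the insurer's portfolio until death.
payout = sum(annual_premium * (1 + insurer_return) ** (years_paying - t)
             for t in range(years_paying))

# Part of the payout pays for the freezing itself; the rest is endowed.
fund = payout * payout_to_trust
for _ in range(years_frozen):
    fund *= (1 + trust_return)      # investment return
    fund *= (1 - trust_spend_rate)  # liquid nitrogen etc.

print(f"insurance payout at death: ${payout:,.0f}")
print(f"trust endowment after {years_frozen} years: ${fund:,.0f}")
# As long as trust_return > trust_spend_rate, the endowment (capital
# invested in the wider economy) compounds rather than shrinking.
```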
Money circulates more when used for short-term consumption than for long-term investment, no? So I’d expect a shift from the former to the latter to slow economic growth.
I don’t follow. How can consumption increase economic growth when it comes at the cost of investment? Investment is what creates economic output.
Economic activity, i.e. positive-sum trades, is what generates economic output (that and direct labour). Investment demand and consumption demand can both lead to economic activity. As I understand it, the available evidence is that in the current economy a marginal dollar will produce a greater increase in economic activity in consumption than in investment.
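(The textbook arithmetic behind that claim is the Keynesian spending multiplier: each dollar spent becomes someone’s income, part of which is re-spent. A minimal sketch, with the marginal propensity to consume as a made-up number:)

```python
# Spending multiplier: a dollar of new demand is re-spent at rate MPC
# (marginal propensity to consume), so total activity per dollar is the
# geometric series 1 + MPC + MPC**2 + ... = 1 / (1 - MPC).
mpc = 0.8  # hypothetical marginal propensity to consume

total = sum(mpc ** n for n in range(1000))  # truncated geometric series
print(total)          # ~5.0
print(1 / (1 - mpc))  # closed form, also ~5.0
```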
I think you are failing to make a crucial distinction: positive-sum trades do not generate economic activity, they are economic activity. Investment generates future opportunities for such trades.
There is such a thing as overinvestment. There is also such a thing as underconsumption, which is what we have right now.
Can you define either one without reference to value judgements? If not, I suggest you make explicit the value judgement involved in saying that we currently have underconsumption.
Yes, due to those being standard terms in economics. Overinvestment occurs when investment is poorly allocated due to overly-cheap credit and is a key concept of the Austrian school. Underconsumption is the key concept of Keynesian economics and the economic views of every non-idiot since Keynes; even Friedman openly declared that “we are all Keynesians now”. Keynesian thought, which centres on the possibility of prolonged deficient demand (like what caused the recession), wasn’t wrong, it was incomplete; the reason fine-tuning by demand management doesn’t work simply wasn’t known until we had the concept of the vertical long-run Phillips curve. Both of these ideas are currently being taught to first-year undergraduates.
I think the whole MIRI/LessWrong memeplex is not massively confused.
But conditional on it turning out to be very very wrong, here is my answer:
A. MIRI
The future does indeed take radical new directions, but these directions are nothing remotely like the hard-takeoff de-novo-AI intelligence explosion which MIRI now treats as the max-prob scenario. Any sci-fi fan can imagine lots of weird futures, and maybe some other one will actually emerge.
MIRI’s AI work turns out to trigger a massive negative outcome—either the UFAI explosion they are trying to avoid, or something else almost as bad. This may result from fundamental mistakes in understanding, or because of some minor bug.
It turns out that the UFAI explosion really is the risk, but that MIRI’s AI work is just the wrong direction; e.g., it turns out that building a community of AIs in rough power balance, or experimenting by trial and error with nascent AGIs, is the right solution.
B. CfAR
It turns out that the whole CfAR methodology is far inferior in instrumental outcomes to, say, Mormonism. Of course, CfAR would say that if another approach is instrumentally better, they would adopt it. But if they only find this out years down the road, this could be a massive failure scenario.
It turns out that epistemologically non-rational techniques are instrumentally valuable. Cf. Mormonism. Again, CfAR knows this, but in this failure scenario, they fail to reconcile the differences between the two types of rationality they are trying for.
Again, I think that the above scenarios are not likely, but they’re my best guess at what “massively wrong” would look like.
MIRI failure modes that all seem likely to me:
They talk about AGI a bunch and end up triggering an AGI arms race.
AI doesn’t explode the way they talk about, causing them to lose credibility on the importance of AI safety as well. (Relatively slow-moving) disaster ensues.
The future is just way harder to predict than everyone thought it would be… we’re cavemen trying to envision the information age, and all of our guesses are way off the mark in ways we couldn’t have possibly foreseen.
Uploads come first.
A few that come to mind:
Some religious framework being basically correct. Humans having souls, an afterlife, etc.
Antinatalism as the correct moral framework.
Romantic ideas of the ancestral environment are correct and what feels like progress is actually things getting worse.
The danger of existential risk peaked with the cold war and further technological advances will only hasten the decline.
It could be that it’s just impossible to build a safe FAI under the utilitarian framework and all AGIs are UFAIs.
Otherwise the LessWrong memeplex has the advantage of being very diverse. When it comes to a subject like politics we do have people with mainstream views but we also have people who think that democracy is wrong. Having such a diversity of ideas makes it difficult for all of LessWrong to be wrong.
Some people paint a picture of LessWrong as a crowd of people who believe that everyone should do cryonics. In reality most of the participants aren’t signed up for cryonics.
Take a figure like Nassim Taleb. He’s frequently quoted on LessWrong so he’s not really outside the LessWrong memeplex. But he’s also a Christian.
There are a lot of memes floating around in the LessWrong memeplex that are present at a basic level but that most people don’t take to their full conclusion.
It’s a topic that’s very difficult to talk about. Basically you try out different ideas and look at the effects of those ideas in the real world. Mainly because of QS (Quantified Self) data I delved into the system of Somato-Psychoeducation. The data I measured was improvement in a health variable. It was enough to get over the initial barrier to go inside the system. But now I can think inside the system, and there’s a lot going on which I can’t put into good metrics.
There is, however, no way to explain the framework in an article. Most people who read the introductory book don’t get the point until they’ve spent years experiencing the system from the inside.
It’s the very nature of things genuinely outside the memeplex that they’re not easily expressible by ideas inside the memeplex in a way that won’t be misunderstood.
That’s not the LW memeplex being wrong; that’s just a LW meme which is slightly more pessimistic than the more customary “the vast majority of all AIs are unfriendly, but we might be able to make this work” view. I don’t think any high-profile LWers who believed this would be absolutely shocked to find out that it was too optimistic.
MIRI-LW being plausibly wrong about AI friendliness is more like: “Actually, all the fears about unfriendly AI were completely overblown. Self-improving AIs don’t actually ‘FOOM’ dramatically… they simply get smarter at the same exponential rate that the rest of the humans+tech system has been getting smarter all this time. There isn’t much practical danger of them rapidly outracing the rest of the system, seizing power, and turning us all into paperclips, or anything like that.”
If that sort of thing were true, it would imply that a lot of prominent rationalists have been wasting time (or at least, doing things which end up being useful for reasons entirely different from the reasons they were supposed to be useful for).
If it’s impossible to build FAI, that might mean that one should in general discourage technological development to prevent AGI from being built.
That might mean building a moral framework that allows for the effective prevention of technological development. I do think that differs significantly from the current LW memeplex.
What I mean is... the difference between “FAI is possible but difficult” and “FAI is impossible and all AI are uFAI” is like the difference between “a narrow subset of people go to heaven instead of hell” and “every human goes to hell”. Those two beliefs are mostly identical.
Whereas “FOOM doesn’t happen and there is no reason to worry about AI so much” is analogous to “belief in afterlife is unfounded in the first place”. That’s a massively different idea.
In one case, you’re committing a little heresy within a belief system. In the other, the entire theoretical paradigm was flawed to begin with. If it turns out that “all AI are UFAI” is true, then LessWrong/MIRI would still be a lot more correct about things than most other people interested in futurology / transhumanism, because they got the basic theoretical paradigm right. (Just as, if it turned out hell existed but not heaven, religionists of many stripes would still have reason to be fairly smug about the accuracy of their predictions, even if none of the actions they advocated made a difference.)
Mostly identical as far as theology is concerned, but very different in terms of the optimal action. In the first case, you want (from a selfish-utilitarian standpoint) to ensure that you’re in the narrow subset. In the second, you want to overthrow the system.
We should be wary of ideologies that involve one massive failure point....crap.
Could you elaborate/give-some-examples?
What are some ideologies that do/don’t have (one massive failure point)/(Lots of small failure points)?
The one I was thinking of was capitalism vs communism. I have had many communists tell me that communism only works if we make the whole world do it. A single point of failure.
I wouldn’t call that a single point of failure, I’d call that a refusal to test it and an admission of extreme fragility.
That’s kind of surprising to me. A lot of systems have proportional tipping points, where a change is unstable up to a certain proportion of the sample but suddenly turns stable after that point. Herd immunity, traffic congestion, that sort of thing. If the assumptions of communism hold, that seems like a natural way of looking at it.
A structurally unstable social system just seems so obviously bad to me that I can’t imagine it being modeled as such by its proponents. I suppose Marx didn’t have access to dynamical systems theory, though.
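(For what it’s worth, here is a minimal sketch of such a proportional tipping point as a dynamical system: a made-up threshold-adoption model, nothing from Marx.)

```python
# Threshold-adoption toy model: the adopting fraction x grows when it is
# above a critical mass and shrinks when below, so x = 0 and x = 1 are
# stable while the tipping point in between is unstable.
def step(x, tipping_point=0.3, rate=0.5):
    # Euler step of dx/dt = rate * x * (1 - x) * (x - tipping_point)
    return x + rate * x * (1 - x) * (x - tipping_point)

for x0 in (0.25, 0.35):  # start just below / just above the tipping point
    x = x0
    for _ in range(200):
        x = step(x)
    print(f"start {x0:.2f} -> settles near {x:.2f}")
# start 0.25 -> settles near 0.00
# start 0.35 -> settles near 1.00
```

Starting just below the threshold collapses back to zero; just above it, adoption saturates, which is presumably the intuition behind “it only works if everyone does it”.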
This is what some modern communists say, and it is just an excuse (and in fact wrong; it will not work even in that case). Early communists actually believed the opposite: that the example of one communist nation would be enough to convert the whole world.
It’s been a while since I read Marx and Engels, but I’m not sure they would have been speaking in terms of conversion by example. IIRC, they thought of communism as a more-or-less inevitable development from capitalism, and that it would develop somewhat orthogonally to nation-state boundaries but establish itself first in those nations that were most industrialized (and therefore had progressed the furthest in Marx’s future-historical timeline). At the time they were writing, that would probably have meant Britain.
The idea of socialism in one country was a development of the Russian Revolution, and is something of a departure from Marxism as originally formulated.
Define “massively wrong”. My personal opinions (stated w/o motivation for brevity):
Building AGI from scratch is likely to be unfeasible (although we don’t know nearly enough to discard the risk altogether)
Mind uploading is feasible (and morally desirable) but will trigger intelligence growth of marginal speed rather than a “foom”
“Correct” morality is low Kolmogorov complexity and conforms with radical forms of transhumanism
Infeasibility of “classical” AGI and feasibility of mind uploading should be scientifically provable.
So: My position is very different from MIRI’s. Nevertheless I think LessWrong is very interesting and useful (in particular I’m all for promoting rationality) and MIRI is doing very interesting and useful research. Does it count as “massively wrong”?
We might find out by trying to apply them to the real world and seeing that they don’t work.
Well, it is less common now, but I think the community’s slow retreat from the position that instrumental rationality is the applied science of winning at life is one of the cases where beliefs had to be corrected to better match the evidence.
Is it? I mean, I’d happily say that the LW crowd as a whole does not seem particularly good at winning at life, but that is and should be our goal.
Speaking broadly, the desire to lead a happy / successful / interesting life (however winning is defined) is a laudable goal shared by the vast majority of humans. The problem was that some people took the idea further and decided that winning is a good measure of whether someone is a good rationalist, as debunked by Luke here. There are better examples, but I can’t find them now.
Also, my two cents: while a rational agent may have some advantage over an irrational one in a perfect universe, the real world is so fuzzy and full of noisy information that even if superior reasoning and decision-making skills really do improve your life, the improvements are likely to be far less impressive than advertised by hopeful proponents of systematized winning.
I think that post is wrong as a description of the LW crowd’s goals. That post talks as if one’s akrasia were a fixed fact that had nothing to do with rationality, but in fact a lot of the site is about reducing or avoiding it. Likewise intelligence; that post seems to assume that your intelligence is fixed and independent of your rationality, but in reality this site is very interested in methods of increasing intelligence. I don’t think anyone on this site is just interested in making consistent choices.
It would look like a failure to adequately discount for inferential chain length.
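(Concretely, and as a toy illustration with the per-step reliability invented: if each link in an inferential chain is independently 90% likely to be sound, confidence should decay geometrically with chain length.)

```python
# Confidence in a conclusion reached through n inferential steps,
# each independently sound with the same per-step reliability.
reliability = 0.9  # hypothetical probability that each step is sound
for n in (1, 5, 10, 20):
    print(f"{n:2d} steps -> confidence {reliability ** n:.2f}")
#  1 steps -> confidence 0.90
#  5 steps -> confidence 0.59
# 10 steps -> confidence 0.35
# 20 steps -> confidence 0.12
```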
By their degree of similarity to ancient religious, mythological, and sympathetic-magic forms with the nouns swapped out.