I think your pooh-pooh’ing of such infantile and amateurish efforts as these is silly, when the reasoning behind your dismissal is entirely bogus.
I hope I am not “pooh-pooh’ing”. There do seem to be a number of points on which I disagree. I feel a bit as though I am up against a propaganda machine—or a reality distortion field. Part of my response is to point out that the other side of the argument has vested interests in promoting a particular world view—and so its views on the topic should be taken with multiple pinches of salt.
Why don’t you refocus your criticism on the more legitimate weakness of existential-risk mitigation: that it is highly likely to be irrelevant (either futile or unnecessary), since by its own prediction the relevant risks are highly complex and hard to mitigate, and people in general are highly unlikely either to understand the issues or to cooperate on them.
I am not sure I understand fully—but I think the short answer is because I don’t agree with that. What risks there are, we can collectively do things about. I appreciate that it isn’t easy to know what to do, and am generally supportive and sympathetic towards efforts to figure that out.
Probably my top recommendation on that front so far is corporate reputation systems. We have these huge, powerful creatures lumbering around on the planet, and governments provide little infrastructure for tracking their bad deeds. Reviews and complaints scattered around the internet are just not good enough. If there’s much chance of corporation-originated intelligent machines, reputation-induced cooperation would help encourage these entities to be good and do good.
If our idea of an ethical corporation is one whose motto is “don’t be evil”, then that seems to be a pretty low standard. We surely want our corporations to aim higher than that.
One important aspect of corporate reputation is what it’s like to work there—and this matters at the department level and below.
Abusive work environments cause a tremendous amount of misery, and there’s no reliable method of finding out whether a job is likely to land you in one.
This problem is made worse if leaving a job makes a potential employee seem less reliable.
Another aspect of a universal reputation system is that there needs to be some method of updating and verification. Credit agencies are especially notable for being sloppy.
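As a toy illustration only (the class and scoring rule here are hypothetical, not a real system), one way to make verification central is to record every claim with a verified flag, and have the reputation score count only the verified reports—so unchecked data, the kind credit agencies are sloppy about, carries no weight:

```python
from dataclasses import dataclass, field

@dataclass
class Report:
    claim: str
    verified: bool = False  # unverified claims carry no weight
    score: int = 0          # e.g. -1 for a bad deed, +1 for a good one

@dataclass
class CorporateReputation:
    name: str
    reports: list = field(default_factory=list)

    def add(self, report: Report) -> None:
        self.reports.append(report)

    def reputation(self) -> int:
        # Only verified reports count toward the score, so a
        # sloppy or malicious unverified claim cannot move it.
        return sum(r.score for r in self.reports if r.verified)

acme = CorporateReputation("Acme")
acme.add(Report("dumped waste", verified=True, score=-1))
acme.add(Report("rumor of fraud", verified=False, score=-1))
print(acme.reputation())  # -1: the unverified rumor is ignored
```

Updating then amounts to flipping a report’s verified flag (or appending a correction) rather than silently rewriting history, which keeps the record auditable.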
What risks there are, we can collectively do things about.
Not necessarily. The risk might be virtually unstoppable, like a huge oil tanker compared to the force of a single person swimming in the water trying to slow it down.
What I mean is that, in my opinion, most of the risks under discussion are not like that. Large meteorites are a bit like that—but they are not very likely to hit us soon.
The usual Singularity Institute line is that it is worth trying too, I believe. As to what p(success) is, the first thing to do would be to make sure that the parties involved mean the same thing by “success”. Otherwise, comparing values would be rather pointless.
This all reminds me of the Dirac delta function. Its width is infinitesimal, but its area is 1. Sure, it’s worth trying in the “Dirac delta function” sense.
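The analogy can be made concrete with the standard box approximation to the delta function: a rectangle of width eps and height 1/eps. As eps shrinks toward zero—the probability of success becoming vanishingly small—the area (probability times payoff) stays fixed at 1:

```python
# Box approximation to the Dirac delta: width eps, height 1/eps.
# However small eps gets, the area (width * height) stays 1 --
# the shape narrows to nothing while its integral is preserved.
for eps in [1.0, 0.1, 0.001]:
    height = 1.0 / eps
    area = eps * height
    print(f"eps={eps}: area={area}")
```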
Ok, I see. Well, that’s just a big factual disagreement then.
Agreed that there are vested interests potentially biasing reasoning.