I’m not getting the impression that the actions of the actual Snowden are going to succeed in stopping universal surveillance.
Equally, if someone leaked information about an unsafe AI being developed by a superpower (or by a cabal of many countries, like the intelligence services that cooperate with the NSA), that would likely only make the backing governments try to speed up the project (now that it is public, it may have lost some of its lead in a winner-take-all game) and to hide it better.
If someone is smart enough to build the first ever AGI, and they don’t care about Friendliness, are they likely to have any goals other than literally taking over the world? And are they going to care about public opinion, or a few commentators condemning their ethics?
So while all of this is probably a step in the right direction and worthwhile, I don’t think it would be nearly enough of a safeguard.
In this posting, I am not focusing on the value of a leak.
I am focusing on the value of making the project leaders suspicious that all the smart people are potentially against them.
To compare to the Snowden story: Hacker culture is anti-authoritarian. Yet the NSA has relied on the fact that there are good tech people who are willing to join a government agency. Now, the NSA is probably wondering how widespread the “ideologically incorrect” hacker ethos really is in their organization.
Comparing it to AGI: What if (1) only a handful of people were good enough to make AGI progress and (2) an anti-authoritarian ideology were known to be widespread among such people?
Actually, I wonder how the NSA and similar agencies have managed to get the best cryptographers, hackers, and other mathematical and creative people to work there. To be sure, there are some fine math PhDs and the like who are glad to sign on, but if it is true that the smartest people, like Turing or Feynman, are irreverent rule-breakers, then how does the NSA recruit such people?
(I asked the question on Quora, but I am not quite satisfied with the answer.)
World War II created a situation where even the rule-breaker types were willing to join the fight against Hitler.
Sure, I am not suggesting this as an adequate safeguard on its own.
Reportedly (going by the top Google result), there are 5 million people with high security clearances in the US, including 500,000 outside contractors, as Snowden was. And yet in the last two decades there have been very few ethics-driven leaks (10? 20?), and none of them were on the same scale as Snowden’s. And that was before Snowden and the current crackdown on whistleblowing in the US; I expect the rate of whistleblowing and leaking to go down over time, not up.
This is strong evidence that defectors who leak data are extremely rare. You can never eliminate all leaks among millions of people, but so far the government has accomplished much more than I would have expected. Project leaders should not worry unduly.
Snowden’s leaks were apparently driven by specific ethics, not general hacker anti-authoritarianism. Whistleblowing designed to expose illegal or unethical conduct is probably correlated with anti-authoritarianism, but they are not the same thing.
I’m not convinced that intelligence is correlated with rule-breaking or anti-authoritarianism. What’s your evidence, aside from anecdotes about individuals like Turing and Feynman?
Maybe they shouldn’t worry, but they always do.
Note the new rules imposed since the Snowden leaks, such as the two-man rule for accessing sensitive files, and a stronger concern for vetting (which inevitably slows down recruitment and excludes some good people).
In general, bureaucracies always respond to scandals with excessive new rules that slow down all work for everyone.
All it takes is a reasonable probability of one leak, and project leaders get uptight.
It’s a good question, and other than a general impression and a wide variety of slogans (“think outside the box,” “be an individual,” etc.), I don’t have any evidence.
But that hasn’t stopped them. Indeed, similar programs are only expanding.
Not that I particularly object to any attempt to raise awareness of this issue; quite the opposite, in fact. This objection is based on your own analogy.
You realise, of course, that achieving this precise effect was a lot of the point of Wikileaks.
*(corrected per gjm below)
Either you copied the wrong thing into that quotation, or I am disastrously failing to understand your point.
(For the benefit of future readers in case it was indeed a mistake and David fixes it: At the moment his comment begins with the following in a blockquote: “The current official image of Bernice Summerfield, as used on the Bernice Summerfield Inside Story book, published October 2007”. Bernice Summerfield is a character in Doctor Who and not, so far as I know, in any way associated with the NSA or AGI or mentioned anywhere else in this thread.)
I’m guessing that the quoted material was actually meant to be “I am focusing on the value of making the project leaders suspicious that all the smart people are potentially against them”.
You are of course quite correct. I have edited my post. Thank you :-)
I am getting the impression that they were a good start, probably the best that he—a single guy—could have done. Certainly many more people are aware of it and many of them are pissed. Big companies are unhappy too.
It’s certainly a great deal for one person to have accomplished (and most of the data he leaked hasn’t been released yet). Nevertheless, we don’t yet know if government surveillance and secrecy will be reduced as a result.
This is a pretty much impossible criterion to satisfy.
Just as with AGI defectors, what you get might not be ideal or proper or satisfying or even sufficient—but that’s what you got. Working with that is much preferable to waiting for perfection.
The criterion of “will surveillance and secrecy be reduced as a result” is the only relevant one. Naturally we can’t know results in advance for certain, and that means we can’t grade actions like Snowden’s in advance either. We do the best we can, but it’s still legitimate to point out that the true impact of Snowden’s actions is not yet known, when the discussion is about how much you expect to benefit from similar actions taken by others in the future.
Keep in mind that leakers will be a problem for FAI projects as well.