An interesting metaphor, given how the balrog basically went back to sleep after eating the local (and only the local) dwarves. And after some clumsy hobbitses managed to wake him up again, he was safely disposed of by a professional. At no point did the balrog threaten the very existence of Middle-earth.
In the first draft of The Lord of the Rings, the Balrog ate the hobbits and destroyed Middle-earth. Tolkien considered this ending unsatisfactory, if realistic, and wisely decided to revise it.
“You keep using that word, I do not think it means what you think it means”
I think Houshalter thinks it means “given the premises, is this a way things are likely to turn out?”. It might be true that “balrog eats hobbits, destroys Middle-earth” is a realistic outcome given everything up to the release of the balrog as premise.
So you are using the word in the sense that a balrog “realistically” can be killed only by a very specific magic sword, or, say, Ilúvatar “realistically” decides that all this is too much and puts his foot down (with an audible splat!)? X-)
I’m not using the word at all in this thread, so far as I can recall. FWIW neither of those seems super-realistic to me given Tolkien’s premises.
Well, yes, by “you” I meant “all you people” :-D
I think the appropriate word in the context is “plausible”.
Taking a small step towards seriousness: yes, Ilúvatar suddenly taking an interest in Middle-earth isn’t terribly plausible, but super-specificity has its place in Tolkien’s world: the only way Sauron can be defeated is by dropping some magical jewelry into a very specific place.
That was a Shallow Balrog. Everyone knows a Balrog’s strength and hunger increase as you dig deeper, and the dwarves were digging pretty deep to get the mithril out.
Yeah, you know why Deep Balrogs are so rare? Every time someone manages to find and wake one and he climbs out of the pit and starts to eat Middle-earth, a certain all-seeing Eye goes “MY WORLD! SPLAT!” and there is one less Deep Balrog around.
I think this may have started to be less useful as an analogy for AI safety now.
Because it didn’t go the way you liked?
Er, no. Because we don’t (so far as I know) have any reason to expect that if we somehow produce a problematically powerful AI anything like an “all-seeing Eye” will splat it.
(Why on earth would you think my reason for saying what I said was “because it didn’t go the way [I] liked”? It seems a pointlessly uncharitable, as well as improbable, explanation.)
Because there are plenty of all-seeing eye superpowers in this world. Not everyone is convinced that the very real, very powerful security regimes around the world would suddenly be rendered inept when the opponent is a computer instead of a human being.
My comment didn’t contribute any less than yours to the discussion, which is rather the point. The validity of an allegory depends on the accuracy of the setup and rules, not the outcome. You seemed happy to engage until it was pointed out that the outcome was not what you expected.
Those “very real, very powerful security regimes around the world” are surprisingly inept at handling a few million people trying to migrate to other countries, and similarly inept at handling the crime waves and the political fallout that migration generates.
And if you underestimate how much of a threat a mere “computer” could be, read the “Friendship is Optimal” stories.
I’ve read the sequences on friendliness here and find them completely unconvincing, with a lack of evidence and a one-sided view of the problem. I’m not about to start generalizing from fictional evidence.
I’m not sure I agree with your assessment of the examples you give. There are billions of people who would like to live in first-world countries but don’t. I’d say immigration controls have been particularly effective if only a few million people are crossing borders illegally in a world of 7 billion. And most of the immigration issues the world faces today, such as the Syrian refugees, concern asylum-seekers who are in fact being admitted, just in larger numbers than the systems were designed to support. Also, the failure modes are different. If you let the wrong person in, what happens? Statistically speaking, nothing of great consequence.
Crime waves? We are currently in one of the lowest periods of violence per capita. I think the powers that be have been doing quite a good job, actually.
there are plenty of all-seeing eye superpowers in this world
Oh, I see. OK then.
My impression was that it was generally agreed that superintelligences sufficiently visible and slow-growing to be squelched by governments and the like aren’t much of a threat; the balrog-like (hypothetical) ones are the ones that emerge too quickly and powerfully to be so easily stopped. So the threats you have in mind aren’t in the “balrog” category at all, for me.
You seemed happy to engage until it was pointed out that the outcome was not what you expected.
My first comment in the balrog discussion was the one you took exception to. The point at which you say I stopped being “happy to engage” is the point at which I started engaging. The picture you’re trying to paint is literally the exact opposite of the truth.
My impression was that it was generally agreed that superintelligences sufficiently visible and slow-growing to be squelched by governments and the like aren’t much of a threat; the balrog-like (hypothetical) ones are the ones that emerge too quickly and powerfully to be so easily stopped.
I don’t think that’s the case. A superintelligence doesn’t have to be balrog-like to advance to the point where it’s too big to fail and thus not easily regulated by the government.
EY et al. focus more on the threat of a superintelligence that can improve itself fast and gain a lot of power in a short amount of time, but that’s not the only concerning scenario.
When a bank like HSBC can launder drug and terrorist money without any of its officials going to prison for it, the amount of control that a government could exert over a big company run by a complex AI might also be quite limited.
When the superintelligence becomes good enough at making money and buying politicians, it doesn’t have to worry so much about government action, and has enough time to grow slowly.
How much does Putin cost? Or the Chinese Politbureau?
You have at least two options: either buy Putin, or hire someone to replace him, whichever is cheaper. It’s not like Putin single-handedly rules his country; he relies on his army, police, secret services, etc. All these institutions probably have many people who would enjoy replacing Putin at the top of the pyramid. Throw in some extra money (“if you are going to replace Putin, here are a few extra billions to bribe whoever needs to be bribed to help you with the coup”).
Or the Chinese Politbureau?
I am not familiar with the internal structure of the Chinese Politbureau, but I would guess this one is easier. There are probably competing factions, so you will support the one more friendly to you.
But there is always the option to ignore both Putin and the Chinese Politbureau, and upload yourself to a computer center built in some other country.
Correct, and yet Putin rules with hardly a challenge to his supremacy.
Money is not very useful when you’re dead.
If you are looking at an AGI that manages investments at a company like Goldman Sachs effectively, it doesn’t even need to know how to buy politicians directly. If it makes a lot of money for Goldman Sachs, there are other people at Goldman who can do the job of buying politicians.
When Ray Dalio of Bridgewater Associates wants to build an AI that can replace him after he retires, it’s not clear whether any government can effectively regulate it.
My impression was that it was generally agreed that superintelligences sufficiently visible and slow-growing to be squelched by governments and the like aren’t much of a threat; the balrog-like (hypothetical) ones are the ones that emerge too quickly and powerfully to be so easily stopped.
Ah, now we are at the crux of the issue. That is not generally agreed upon, at least not outside of the Yudkowsky-Bostrom echo chamber. You’ll find plenty of hard-takeoff skeptics even here on LessWrong, let alone in wider AI circles, where hard-takeoff scenarios are given much less credence.
I think you have misunderstood me. I was not intending to say that hard-takeoff scenarios are likely (for what it’s worth, I don’t think they are) but that they are what was being analogized to balrogs here.
(Of course a slow-takeoff controllable-by-governments superintelligence can still pose a threat—e.g., some are worried about technological unemployment, or about those who own the AI(s) ending up having almost all the world’s resources. But these are different, not very balrog-like, kinds of threat.)
Only on LW: disputes about ways in which an AI is like (or unlike) a balrog X-D
Well, we’ve had a basilisk already. Apparently we’re slowly crawling backwards through alphabetical order. Next up, perhaps, Bahamut or Azathoth.
Azathoth, check.
Is there a directory of the gods and monsters somewhere? If not, I think I’ll start one.
I dunno :-) Didn’t we just have a discussion about controlling the (Beta) AI by putting the fear of God (Alpha AI) into it?
Oh, but that’s an entirely different proposition—that’s about the Deep Balrogs believing that an all-seeing Eye will splat them if they try to eat Middle-earth. (Also, I didn’t get the impression that the “fear of God” proposal was regarded as terribly convincing by most readers...)
Well, they are Maiar and so definitely should have a clue.