I am trying to imagine the weakest dangerous Google Search successor.
Probably this: Imagine that the search engine is able to model you. Adding such ability would make sense commercially, if the producers want to make sure that the customers are satisfied with their product. Let’s assume that the computing power is too cheap and they added too much of this ability. Now the search engine could e.g. find a result with highest rank, but then predict that seeing this result would make you disapointed, so it chooses another result instead, with somewhat lower rank, but with high predicted satisfaction. For the producers this may seem like a desired ability (tailored, personally relevant search results).
As an undesired side-effect, the search engine would de facto gain an ability to lie to you, convincingly. For example, let’s say that the function for measuring customer satisfaction only includes emotional reaction, and doesn’t include things like “a desire to know truth, even if it’s unpleasant”. That could happen for various reason, such as the producers not giving a fuck about our abstract desires, or concluding that abstract desires are mostly a hypocrisy but emotions are honest. Now as a side-effect, instead of unpleasant truth, the search engine would return a comfortable lie, if available. (Because the answer which makes the customer most happy is selected.)
Perhaps people would become aware of this, and would always double-check the answers. But suppose that the search engine is insanely good at modelling you, so it can also predict how specifically are you going to verify the questions, and whether you succeed or fail to find the truth. Now we get the more scary version which lies to you if and only if you are unable to find out that it lied. Thus to you, the search engine will seem completely trustworthy. All answers you have ever received, if you verified them, you learned that they were correct. You are only surprised to see that the search engine sometimes delivers wrong answers to other people; but in such situations you are always unable to convince the other people that those answers were wrong, because the answers are perfectly aligned with their existing beliefs. You could be smart enough to use an outside view to suspect that maybe something similar is happening to you, too. Or you may conclude that the other people are simply idiots.
Let’s imagine even more powerful search engine, and more clever designers, who instead of individual satisfaction with search results try to optimize for general satisfaction with their product in the population as a whole. As a side effect of this, now the search engine would only lie in ways that make society as a whole more happy with the results, and where the society as a whole is unable to find out what is happening. So for example, you could notice that the search engine is spreading a false information, but you would not be able to convince a majority of other people about it (because if the search engine would predict that you could, it would not have displayed the information at the first place).
Why could this be dangerous? A few “noble lies” here and there, what’s the worst thing that could happen? Imagine that the working definition of “satisfaction” is somewhat simplistic and does not include all human values. And imagine an insanely powerful search engine that could predict the results of its manipulation centuries ahead. Such engine could gently push the whole humanity towards some undesired attractor, such as a future where all people are wireheaded (from the point of view of the search engine: customers are maximally satisfied with the outcome), or just brainwashed in a cultish society which supports the search engine because the search engine never contradicts the cult teaching. That pushing would be achieved by giving higher visibility to pages supporting the idea (especially if the idea would seem appealing to the reader), lower visibility to pages explaining the dangers of the idea; and also on more meta levels, e.g. giving higher visibility to pages explaining personal scandals related to the people prominently explaining the dangers of the idea, etc.
Okay, this is stretching the credibility at a few places, but I tried to find a hypothetical scenario where a too powerful but still completely transparently designed Google Search successor would doom humanity.
I am trying to imagine the weakest dangerous Google Search successor.
Probably this: Imagine that the search engine is able to model you. Adding such ability would make sense commercially, if the producers want to make sure that the customers are satisfied with their product. Let’s assume that the computing power is too cheap and they added too much of this ability. Now the search engine could e.g. find a result with highest rank, but then predict that seeing this result would make you disapointed, so it chooses another result instead, with somewhat lower rank, but with high predicted satisfaction. For the producers this may seem like a desired ability (tailored, personally relevant search results).
As an undesired side-effect, the search engine would de facto gain an ability to lie to you, convincingly. For example, let’s say that the function for measuring customer satisfaction only includes emotional reaction, and doesn’t include things like “a desire to know truth, even if it’s unpleasant”. That could happen for various reason, such as the producers not giving a fuck about our abstract desires, or concluding that abstract desires are mostly a hypocrisy but emotions are honest. Now as a side-effect, instead of unpleasant truth, the search engine would return a comfortable lie, if available. (Because the answer which makes the customer most happy is selected.)
Perhaps people would become aware of this, and would always double-check the answers. But suppose that the search engine is insanely good at modelling you, so it can also predict how specifically are you going to verify the questions, and whether you succeed or fail to find the truth. Now we get the more scary version which lies to you if and only if you are unable to find out that it lied. Thus to you, the search engine will seem completely trustworthy. All answers you have ever received, if you verified them, you learned that they were correct. You are only surprised to see that the search engine sometimes delivers wrong answers to other people; but in such situations you are always unable to convince the other people that those answers were wrong, because the answers are perfectly aligned with their existing beliefs. You could be smart enough to use an outside view to suspect that maybe something similar is happening to you, too. Or you may conclude that the other people are simply idiots.
Let’s imagine even more powerful search engine, and more clever designers, who instead of individual satisfaction with search results try to optimize for general satisfaction with their product in the population as a whole. As a side effect of this, now the search engine would only lie in ways that make society as a whole more happy with the results, and where the society as a whole is unable to find out what is happening. So for example, you could notice that the search engine is spreading a false information, but you would not be able to convince a majority of other people about it (because if the search engine would predict that you could, it would not have displayed the information at the first place).
Why could this be dangerous? A few “noble lies” here and there, what’s the worst thing that could happen? Imagine that the working definition of “satisfaction” is somewhat simplistic and does not include all human values. And imagine an insanely powerful search engine that could predict the results of its manipulation centuries ahead. Such engine could gently push the whole humanity towards some undesired attractor, such as a future where all people are wireheaded (from the point of view of the search engine: customers are maximally satisfied with the outcome), or just brainwashed in a cultish society which supports the search engine because the search engine never contradicts the cult teaching. That pushing would be achieved by giving higher visibility to pages supporting the idea (especially if the idea would seem appealing to the reader), lower visibility to pages explaining the dangers of the idea; and also on more meta levels, e.g. giving higher visibility to pages explaining personal scandals related to the people prominently explaining the dangers of the idea, etc.
Okay, this is stretching the credibility at a few places, but I tried to find a hypothetical scenario where a too powerful but still completely transparently designed Google Search successor would doom humanity.