When Holden wrote his criticism of SIAI, he also made the point that SIAI is overly optimistic when it comes to creating an FAI.
Holden: I believe that the probability of an unfavorable outcome—by which I mean an outcome essentially equivalent to what a UFAI would bring about—exceeds 90% in such a scenario. I believe the goal of designing a “Friendly” utility function is likely to be beyond the abilities even of the best team of humans willing to design such a function.
SIAI considers the problem of creating FAI solvable. That view can be described as believing that there's good in the machine: if only we program them right, then they will be good.
Those journalists think that belief is naive.
The description isn't nice, but I have seen worse in my own contact with the German media while promoting Quantified Self in the German press.