I would like to suggest that the problems of “values” and “poor predictions” should not be treated as potentially resolvable problems, for the following reasons:
-
Humans include infants, young children, and growing adults, who (for brevity of construction) develop toward their natural physical and mental potential until, at most, about 19 years of age. Holding this, it is no longer logically valid to treat the “values problem” as a problem for developing an AI/Oracle AI, because before 19 years of age a person’s values cannot be known; their development is still at its onset. Beyond being merely a theoretical ideal, it might even prove dangerous to assign or align values to humans, for the sake of the natural development of human civilization.
-
Given the current status quo of “Universal Basic Education” and the “values development” argument above, it is not a logical claim that humans would be able to predict AI/Oracle AI behaviour, at a time when not even AI researchers can predict with full certainty the potential of an Oracle AI, or of an AI developing itself into an AGI (a remote case, but one that cannot yet be dismissed as having no potential). I therefore hold the “poor predictions” problem to be logically irresolvable as well.
However, halting development on account of either of these two cases, especially the “poor predictions” one, would not be logical for academic purposes.
Sir, could you please tell me whether the PDF on “Oracle AI” by Sir Nick Bostrom, which you mention taking out every year and asking how much safety it would buy, is the same as “Thinking inside the box: using and controlling an Oracle AI”? If so, has your perspective changed over the years, given that your comment dates to August 2008? And if you have been referring to a PDF other than the one I came across, please provide that PDF along with your perspectives. Thank you!