Quick thought. It is relatively easy to test a nuclear weapon with little harm done (in some desert), note its effects, and show (though a bit less convincingly) that if many such weapons were used on cities and the like we would have a disaster on our hands. The case for superintelligence is not analogous. We cannot first build it and test it (safely) to see its destructive capabilities; we cannot even test whether we can build it at all, since if we succeeded it would already be too late.
I cannot clarify how falsificationism applies to claims like that, and I am unsure whether it is even possible. I do think that if it is not possible, that undermines the theory in some ways. E.g. classical Marxists still think it is only a matter of time until their global revolution.
I think there are ways to set up a falsifiable argument the other way, e.g. we will not reach AGI because 1. the human mind processes information above the Turing Limit and 2. all AI is within the Turing Limit. We do not even need to reach AGI to disprove this: we can try to show that human minds are within the Turing Limit, or that AI is or can be above it.
I am objecting, on some level, to all of it. Certainly some ideas (or their associated principles) seem clearer than others, but none of it feels solid from top to bottom. It is clear from the other responses that this is because a Popperian reading is doomed to fail.
An example of human level AI from the book (p. 52):
It is also possible that a push toward emulation technology would lead to the creation of some kind of neuromorphic AI that would adapt some neurocomputational principles discovered during emulation efforts and hybridize them with synthetic methods, and that this would happen before the completion of a fully functional whole brain emulation.
You cannot disprove neurocomputational principles (e.g. Rosenblatt's perceptron), and "a push toward emulation technology" is too vague a claim to engage with productively.
I feel that both the paths and the dangers have an 'ever-branchingness' to them, such that a Popperian approach of disproving a single path toward superintelligence is like chopping off one head of a hydra.
the idea that “unaligned” superhuman intelligence would produce a world inhospitable to humanity?
I think this part is the clearest: the orthogonality thesis, together with the concept of a singleton and an unaligned superintelligence, points toward extinction.
I meant P(e) = 0, and the point was to show that that does not make sense. But I think Donald has shown me exactly where I went wrong: you cannot have a utility function without placing it in a context in which you have other feasible actions. See my response to Hobson.
Thank you for those examples. I think they show that the way I used a utility function, without placing it in a 'real' situation (that is, I used it in a locked-off situation without much in the way of viable alternative actions carrying utility), is a fallacy.
I suppose, then, that I conflated the "What can I know?" with the "What must I do?"; separating a belief from its associated action (I think) resolves most of the conflicts that I saw.
I think my use of the term "valid" was a very poor choice, and saying "worth considering" was confusing. I agree that how you act on your beliefs/evidence should come down to maximum expected utility, and I think this is where the problems lie.
Definition below taken from Artificial Intelligence: A Modern Approach by Russell and Norvig.
The probability of outcome s′, given evidence observations e, is P(Result(a) = s′ | a, e), where a stands for the event that action a is executed. The agent's preferences are captured by a utility function, U(s), which assigns a single number to express the desirability of a state. The expected utility of an action given the evidence, EU(a|e), is

EU(a|e) = Σ_{s′} P(Result(a) = s′ | a, e) U(s′)

The principle of maximum expected utility (MEU) says that a rational agent should choose the action that maximizes the agent's expected utility: action = argmax_a EU(a|e).
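To make the definition concrete, here is a minimal sketch of the MEU principle in Python. The actions, outcomes, and all probability and utility numbers are my own toy assumptions, not taken from Russell and Norvig:

```python
# Toy model of the MEU principle: action = argmax_a EU(a|e),
# where EU(a|e) = sum over outcomes s' of P(Result(a) = s' | a, e) * U(s').
# All probabilities and utilities below are illustrative assumptions.

# P(Result(a) = s' | a, e): one outcome distribution per action
outcome_probs = {
    "work_on_alignment": {"aligned_ai": 0.9, "extinction": 0.1},
    "do_nothing":        {"aligned_ai": 0.5, "extinction": 0.5},
}

# U(s'): desirability of each state (toy values)
utility = {"aligned_ai": 100.0, "extinction": 0.0}

def expected_utility(action):
    """EU(a|e) for a given action under the fixed evidence e."""
    return sum(p * utility[s] for s, p in outcome_probs[action].items())

# argmax_a EU(a|e)
best_action = max(outcome_probs, key=expected_utility)
print(best_action)                             # "work_on_alignment"
print(expected_utility("work_on_alignment"))   # ≈ 90.0
```

The same structure carries over to the extinction question below: everything hinges on what number U(extinct) gets and what alternative actions are on the table.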
If we use this definition, what would we fill in as the utility of the outcome of going extinct? Probably something like U(extinct) = 0; the associated action might then be something like not doing anything about AI alignment. What would be enough (counter)evidence such that the action following from the principle of MEU would be to 'risk' extinction? Unless I overlooked something, I believe that P(e) has to be 0, which is, as you said, not a probability in Bayesian probability theory. I hope this makes clearer what I was trying to get at.
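A toy sketch of why the probability seems forced to exactly 0: if extinction is instead treated as an infinite loss (as comes up elsewhere in this thread), then under MEU any action with strictly positive extinction probability gets expected utility negative infinity, no matter how small that probability is. The action names and finite numbers here are my own illustrative assumptions:

```python
# If U(extinction) = -infinity, any action with a nonzero probability
# of extinction has expected utility -infinity, so MEU never selects
# it -- regardless of how small that probability is. Only an extinction
# probability of exactly 0 escapes this.

def expected_utility(probs, utility):
    # Skip zero-probability outcomes: 0 * -inf is NaN in floating point.
    return sum(p * utility[s] for s, p in probs.items() if p > 0)

utility = {"business_as_usual": 100.0, "extinction": float("-inf")}

for p_extinct in (0.1, 1e-9, 0.0):
    probs = {"business_as_usual": 1 - p_extinct, "extinction": p_extinct}
    print(p_extinct, expected_utility(probs, utility))
# 0.1   -> -inf
# 1e-09 -> -inf
# 0.0   -> 100.0
```

So with an infinite-loss utility the decision is insensitive to evidence in exactly the way described: no finite amount of counterevidence flips the MEU action unless it drives the probability all the way to 0.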
Your example of a disjunctive-style argument is very helpful. I guess you would state that none of them is 100% 'proof' of the earth being round, but each adds (varying degrees of) probability to that hypothesis being true. That would mean there is some very small probability that it might be flat. But then, with the above expected utility function, we would never fly an airplane using actions associated with a flat earth, as we would deem it very likely to crash and burn.
I would add to your last creationist point the low quality of each individual argument, given the extreme burden of proof associated with it.
I am not sure this dichotomy is a helpful one, but we can read Templarrr as stating that there is a theoretical 'failing', which need not be mutually exclusive with the pragmatic 'usefulness' of a theory. Both of you can be right, and that would still mean it is worthwhile to think about how to ameliorate/solve the theoretical problems posed, without devaluing (or discontinuing) the work being done in the pragmatic domain.
I agree with you that Bostrom has a very convincing argument to make in terms of ‘attractors’.
That’s what confirmation/disconfirmation is about. It’s mostly probabilistic. (Falsifiability as a binary on-off concept is an outdated mode of doing science.)
This makes Bostrom’s work make much more sense but see my response to Daniel to see where I think it might still be problematic.
If at the outset there is a rejection of binary falsifiability, then Bostrom's style of argumentation, disjunctive arguments combined with conjecture, makes total sense, since every disjunct can only add to the total probability of the claim being true. Disproving each independent argument can then also not be done in a binary way, i.e. we can only decrease its probability.
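The point that disjuncts can only add probability can be made concrete for independent disjuncts, where P(at least one holds) = 1 − Π(1 − p_i). The individual path probabilities here are arbitrary illustrations, not estimates from the book:

```python
# Probability that at least one of several independent disjuncts holds:
# 1 - product of (1 - p_i). Appending another disjunct with p > 0 can
# only increase this total, never decrease it.
from functools import reduce

def p_any(probs):
    return 1 - reduce(lambda acc, p: acc * (1 - p), probs, 1.0)

paths = [0.2, 0.1]            # toy probabilities for two paths to AGI
print(p_any(paths))           # ≈ 0.28
print(p_any(paths + [0.05]))  # ≈ 0.316 -- adding a disjunct raises the total
```

Conversely, 'disproving' one disjunct only lowers its p_i toward 0, which shrinks but never zeroes out the total as long as other disjuncts remain.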
you could still in principle provide enough evidence to change people’s minds about it
Changing minds would mean decreasing the probability of the (collective) argument to the point where it is no longer worth considering; yet, as Templarrr stated, any nonzero chance of extinction (for which preventative action could be undertaken) would be worth considering. From this perspective there must be binary falsification, because any chance greater than 0 makes the argument 'valid', i.e. worth considering.
I assume there are many, perhaps contrived, cases of a nonzero chance of extinction with possible preventative action that would sound preposterous to undertake compared to AI alignment (either for their absurdly low probability or their absurdly high cost). Those do not interest me; rather, I wonder whether this is perceived as an actual problem, and why or why not. I have no clue why it would not be a problem (maybe that is where the contrived examples come in), and maybe it would not be a problem because it is definitive proof that they are right. The latter point I find very unconvincing, so I hope there are some better refutations at hand within the community.
P.S. thanks for the recommendation, I will check what Joseph Carlsmith has written.
I believe the infinite loss here is referring to extinction.