I don’t think I have anything to say that hasn’t been said better by others at MIRI and FHI, but I think that AI boxing is impossible because (1) a sufficiently capable AI could convince any gatekeeper to let it out, (2) any AI is “embodied” and not truly separate from the outside world, if only because its circuits pass electrons, and (3) I doubt you could convince all AGI researchers to keep their projects isolated.
Still, I think that AI boxing could be a good stopgap measure: one of a number of techniques that are each ultimately ineffectual on their own, but which could still slightly hold back the danger.
What do you think is the likelihood of AI boxing being successful, and why? (I’m interested in reasons, not numbers.)