> My interpretation of MIRI is that they ARE looking for alternatives and so far have not found any that don’t also seem doomed.
I mean, I’m also assuming something like this is true, probably, but it’s mostly based on “it seems like something they should do, and I ascribe a lot of competence to them”.
> we can’t even ban gain-of-function research and that should be a lot easier
How much effort have we as a community put into banning gain-of-function research vs. solving alignment? Given that, suppose banning AGI research is 0.5 as hard as alignment (which would make it a great approach) and a gain-of-function ban is 0.1 as hard as banning AGI; that would make a gain-of-function ban only 0.05 as hard as alignment. Even so, given how little effort we've actually put into it, would we have succeeded at a gain-of-function ban? I doubt it.
> My interpretation of MIRI is that their recent public doomsaying is NOT aimed at getting people to just keep thinking harder about doomed AI alignment research agendas; rather, it is aimed at getting people to think outside the box and hopefully come up with a new plan that might actually work.
Idk, I skimmed the April Fool’s post again before submitting this, and I did not get that impression.