Given enough time for ideas to develop, any smart kid in a basement could build an AI, and every organization in the world has a massive incentive to do so. Only omnipresent surveillance could prevent everyone from writing a particular computer program.
Once you have enough power flying around to actually prevent AI, you are dealing with AI-level threats already (a not-necessarily friendly singleton).
So FAI is actually the easiest way to prevent UFAI.
The other reason is that a Friendly Singleton would be totally awesome. Like so totally awesome that it would be worth it to try for the awesomeness alone.
Your tone reminded me of super-religious folk who are convinced that, say, “Jesus is coming back soon!” and that it’ll be “totally awesome”.
That’s nice.
Your comment reminds me of those internet atheists who are so afraid of being religious that they refuse to imagine how much better the world could be.
I do imagine how much better the world could be. I actually do want MIRI to succeed. But currently I have low confidence in their future success, so I don’t feel “bliss” (if that’s the right word).
BTW I’m actually slightly agnostic because of the simulation argument.
Enthusiasm? Excitement? Hope?
Yep. I don’t take it too seriously, but it’s at least coherent to imagine beings outside the universe who could reach in and poke at us.
But, in the current situation (or even a few years from now) would it be possible for a smart kid in a basement to build an AI from scratch? Isn’t it something that still requires lots of progress to build on? See my reply to Qiaochu.
So will progress just stop for as long as we want it to?
The question is whether it would be possible to ban further research and stop progress (open, universally accessible, buildable-upon progress) while AGI is still far enough away that an isolated group in a basement would have no chance of achieving it on its own.
If by “basement” you mean “anywhere, working in the interests of any organization that wants to gain a technology advantage over the rest of the world,” then sure, I agree that this is a good question. So what do you think the answer is?
I have no idea! I am not a specialist of any kind in AI development. That is why I posted in the Stupid Questions thread asking “has MIRI considered this and made a careful analysis?” instead of making a top-level post saying “MIRI should be doing this”. It may seem that in the subthread I am actively arguing for strategy (b), but what I am doing is pushing back against what I see as insufficient answers on such an important question.
So… what do you think the answer is?
If you want my answers, you’ll need to humor me.
Uh, apparently my awesome is very different from your awesome. What scares me is this “Singleton” thing, not the friendly part.
Hmmm. What is it going to do that is bad, given that it has the power to do the right thing, and is Friendly?
We have inherited some anti-authoritarian propaganda memes from a culture war that is no longer relevant, and those taint the evaluation of a Singleton even though they really don’t apply. At least that’s how it felt to me when I thought it through.
Upvoted.
I’m not sure why more people around here are not concerned about the singleton thing. It almost feels like yearning for a god on some people’s part.