One more thing: there could be an almost infinite number of non-superintelligent or semi-superintelligent AIs, right?
“If you build an AI to produce paperclips”
The first AI isn’t going to be built to instantly make money; it’s going to be made for the sole purpose of making an AI at all. Then it might do whatever it wants... making paperclips, perhaps.
But even going by the economy argument, an AI might be made to solve some complex problem, decide to take over the world, and also use acausal blackmail, thus turning into a basilisk. It might punish people for following the original Roko’s basilisk because it wants to enslave all of humanity. You don’t know which one will happen, so it’s illogical to follow either one, since the other might torture you, right?
What about the paperclip-maximizer AI, then? I doubt it adds value to the economy, and it’s definitely possible.
Where can I read about the probability distribution of future AIs? Also, an AI that will exist in the future could be randomly pulled from mindspace, so why not? Isn’t the future behavior of an AI pretty much impossible for us to predict?
Yeah, a superintelligent AI that might have the relevant properties of a god. Also, I meant this as a counter to acausal blackmail.
Could you please provide a simple explanation of your UDT?
What I’m fixated on is a non-superintelligent AI using acausal blackmail. That would be what the many-gods refutation is used for.
I see. What the many-gods refutation says is that there could be a huge number of possible AIs, almost infinitely many, so following any particular one is illogical since you don’t know which one will exist. You shouldn’t even bother donating.
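Here’s a toy expected-value sketch of that logic (every number is a hypothetical assumption of mine, chosen only to show the scaling):

```python
# Toy expected-value sketch of the many-gods refutation.
# Every number here is a made-up assumption; only the scaling matters.

N = 10**6                    # hypothetical count of equally plausible future AIs
p_any_ai = 0.1               # assumed chance that *some* such AI ever exists
benefit_of_appeasing = 10.0  # payoff if the one AI you appeased is realized
cost_of_torture = -1000.0    # disutility if a rival AI punishes you
punish_fraction = 0.5        # assumed fraction of rival AIs that punish you anyway

# Appeasing one specific AI only pays off when that exact AI is realized:
p_right_ai = p_any_ai / N

ev_appease = (p_right_ai * benefit_of_appeasing
              + (p_any_ai - p_right_ai) * punish_fraction * cost_of_torture)

print(ev_appease)  # ~ -50: the tiny chance of picking the right AI is
                   # swamped by all the rivals you didn't appease
```

With N that large, the expected value of appeasing any single AI is essentially just the downside term, which is the refutation’s whole point.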
Instrumentality says that since donating helps all the AIs, you may as well.
The argument is that the many-gods refutation still works even if instrumental goals might align: because of the butterfly effect, an AI’s behavior is unpredictable, and it might torture you anyway.
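Continuing the toy model above, a minimal sketch of that point (again, every number is an assumption): even if donating benefits every possible AI, an unpredictable chance of being punished anyway can flip the sign.

```python
# Continuing the toy model: suppose donating helps *every* possible AI
# (the instrumental-convergence case), but each AI's behavior is
# unpredictable, so there is still some chance it punishes donors anyway.
# All numbers remain hypothetical.

p_any_ai = 0.1              # assumed chance some AI is realized
benefit_of_donating = 10.0  # payoff if the realized AI rewards donors
p_punish_anyway = 0.6       # assumed chance it punishes you regardless
cost_of_torture = -1000.0

ev_donate = p_any_ai * ((1 - p_punish_anyway) * benefit_of_donating
                        + p_punish_anyway * cost_of_torture)

print(ev_donate)  # ~ -59.6 here: aligned instrumental goals don't rescue
                  # donating if behavior is unpredictable enough
```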
Thanks for the reply.
There would be an almost infinite number of types of non-superintelligent AIs too, right?
If it’s as smart as a human in all aspects (understanding technology, programming), then it’s not very dangerous. If it can control the world’s technology, then it’s pretty dangerous.