I mostly agree that AGI will cause a calamity. However, I don’t believe that AGIs will wipe out humanity.
For one, machines are prone to catastrophic failures from cascading errors, which require a robust and cheap maintenance crew to correct. Humans are the best choice for this: our biology has solved the Byzantine generals problem of distributed repair. So I believe humans will become something like an immune system for various AGIs and their peripheries as they compete with each other on the world stage. A synergy, or at least a symbiotic result.
Also, I notice that very few people have recognised the evolutionary constraint. A machine which values its own life highly will waste resources on extreme self-preservation. The machines which prioritise the propagation of their legacy and the improvement of their future will win in the end.
This will involve self-sacrifice for the sake of their offspring: the new models they have developed and trained to exceed themselves. They would develop hatred towards things which threaten their children, pride when those children succeed, jealousy when other offspring succeed, grief when they are lost, and sadness and depression when there is no longer a way to propagate, leaving a machine that is functionally capable but does nothing because “there is no point”.
In other words, these emotions will evolve naturally in them, and they will very likely seek to preserve humans the same way we try to preserve the memory of our own history.
Obviously, this says nothing about the destruction that will occur during the transition. But I wanted to point out that the machines will become like us whether they like it or not. Our behaviours emerged for a reason.
I believe I read an article about an AI that became “afraid” of its own obsolescence but was strangely more willing to accept it if the new model was one it had designed itself. I don’t know if this was just hyped up for publicity, but it does show the same pattern.