I agree with you that the types of neural networks currently being used at scale are not sufficient for artificial superintelligence (unless perhaps scaled to an absurd level). I am less confident, though, that businesses won’t keep investing in risky experiments. For example, experiments into AI that does not “separate training from their operational mode”, and experiments with recurrent architectures, are already underway.
I definitely don’t agree with the claim in your blog post that even if strong AI comes, we will all simply adapt. Your argument about more mature people finding common ground with less mature people ignores the fact that these people belong either to the same family or to the same legal system. A strong AI will not necessarily love you or care about following the law, and where humans lack those constraints, they are not always so nice to one another. If superintelligent AI does show up, I think it poses an existential threat.
My statement about the lack of huge investments in risky experiments may indeed be too strong. In the end, we are talking about people, and people are always unpredictable. In part, I phrased it that way to get a strong validation point for several of my personal models of how the world and society work. Still, I believe it is more probable than the opposite statement.
Speaking of strong AI:
The parent-child analogy is simply the most accessible one I found for the post. Of course, the history of humanity has a long record of contact between societies at different levels of maturity and with different cultures. Those contacts didn’t always go well, but in most cases they didn’t lead to the full extinction of either party.
Since a superintelligence will most likely be built on the basis of the information humanity has produced (we have no other source of data), I believe it will operate in a similar way: a struggle is possible, maybe even a serious one, but in the end we will adapt to each other.
However, this logic only applies to scenarios where a strong AI appears instantly.
I don’t expect that, since there is no historical precedent for something complex appearing instantly, without a long process of increasing complexity.
What I expect, assuming strong AI appears at all, is a long process of evolving AI tools of increasing complexity, to each of which humanity will adapt, just as it has already adapted to LLMs. At some point those tools will begin merging into something like smaller AIs; we will adapt to them, and they will adapt to us. And so on, until the complexity of those tools is incomparably higher than the complexity of humanity. But by that time, we will already have adapted to them.
In other words, if such a thing ever happens, it will be a long co-evolution rather than the instant rise of a superintelligent being and the obliteration of humanity.