To be clear, that last paragraph is a summary of the argument that I find most convincing. I consider each of the following to be self-evident.
The human brain was coughed up by natural selection, which was only weakly selecting for intelligence in the able-to-shape-the-world sense. It runs at about 100 Hz on about 20 watts of glucose, and it communicates with the outside world through sensory and motor channels whose bandwidths range from kB/s to a few MB/s at best.
The above is as true for Einstein and von Neumann as for anyone else. Given the limits of natural selection, it is unreasonable to think we’ve ever seen anything like the limit of what is possible with human-brain-level hardware and resource consumption.
If we achieve anything like AI with human-level reasoning, which is certainly possible given that humans are an existence proof, it will be unavoidably superhuman at the moment of its creation, with no need for anything like RSI/FOOM. At minimum it will run on hardware several OOMs faster, with several OOMs more working memory and probably long-term memory, and it will be able to grow its available hardware and energy consumption many OOMs beyond what humans possess. Let’s say it starts out able to handle 1000 threads in parallel.
If I were facing an entity as smart as me, which had an hour of subjective thinking time per second and could run a thousand parallel trains of thought for that hour, I would lose to it in any contest that relies on reaction time or thinking.
In the time it takes me to read a book, such a system would have enough time to read a million books. It could watch videos and listen to audio at a rate of years per minute, possibly millennia per minute if greater working memory lets it absorb an entire video at once the way we can glance at a single photo. It would know everything humans have ever recorded and digitized online, with as much understanding as any human could extract from those recordings, and it would possess every learnable mental skill at the highest level that can be taught to the smartest student.
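As a rough sanity check on those numbers, here is a back-of-envelope sketch that uses only the two figures already stated above (an hour of subjective thinking per wall-clock second, i.e. a 3600x speedup, and 1000 parallel threads); everything else is arithmetic derived from those assumptions.

```python
# Back-of-envelope check of the claims above. The only inputs are the two
# figures already stated in this post: an hour of subjective thinking per
# wall-clock second (a 3600x speedup) and 1000 parallel trains of thought.

speedup = 3600   # subjective seconds per wall-clock second
threads = 1000   # parallel trains of thought

# Books: in the wall-clock time it takes a human to read one book, each
# thread gets `speedup` book-readings' worth of subjective time, so the
# actual length of the book cancels out of the ratio.
books_per_human_book = speedup * threads
print(f"books read while a human reads one: {books_per_human_book:,}")  # 3,600,000

# Video/audio: subjective hours available per wall-clock minute, summed
# across threads (each thread absorbing different material).
subjective_hours_per_minute = (speedup / 60) * threads  # 1 min -> 60 subjective hours per thread
years_per_minute = subjective_hours_per_minute / (24 * 365)
print(f"years of video/audio per wall-clock minute: {years_per_minute:.1f}")  # ~6.8
```

Those come out to a few million books and roughly seven years of video per wall-clock minute, consistent with the rounded figures in the paragraph above.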
If we succeed in aligning AI to human interests, the result will include systems that understand what we want, and how we think, as well as or better than we ever could. As such we cannot, in general, surprise such a system with any plan we can come up with. It will have long since anticipated all plausible such plans, estimated their likelihoods, and developed appropriate countermeasures wherever doing so is possible in principle.
If such a system wants humans to thrive, we will. If it doesn’t, there’s not a whole lot we could do about it. So, for humans to thrive, we need at minimum to first ensure the system starts out wanting humans to thrive by the time it becomes AGI with human-level reasoning. Then, we need to ensure that no such system is ever in a position where it would take an action that could result in human extinction, whether we’re able to anticipate the scenario or not. Otherwise, eventual extinction is a near certainty given enough rolls of the dice. The arguments offered to date that suggest ways of attaining such assurances are, for now, at least as weak as the arguments for doom, and in my opinion weaker.