Yeah, this sounds like a reasonable description of the importance of extremely high-quality data. That training data limit I ended on is not trivial by any means.
80 percent confidence seems unsupported by evidence, given how poor human data is. For clear and convincing evidence of that, look at the meta-analyses of prior studies, or the constant "well actually" rebuttals to facts people thought they knew (and then a rebuttal to the rebuttal, until in the end nobody knows anything). A world where analyzing all the data humans have on a subject leaves most people less confident than before they started is not one where we have the data to train a superintelligence.
Such a machine will be as confused as we are, even if it has the memory to simultaneously hold every single assumption as both true and false and to keep track of the combinatorial explosion of possibilities.
To describe the problem succinctly: if the problem in front of you is one only a superintelligence can solve, and your beliefs about all the variables form a tree with hundreds of millions of possibilities (medical problems will be like this), you may have the cognitive capacity of a superintelligence, but in actual effectiveness your actions will be barely better than a human's. Functionally, that is not an ASI.
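To make the combinatorial explosion concrete, here is a minimal sketch; the variable counts and branching factors are illustrative assumptions on my part, not numbers from the original. Even a modest number of uncertain variables puts the hypothesis space in the hundreds of millions or billions.

```python
# Minimal sketch of how a belief tree explodes combinatorially.
# Assumption (illustrative): each uncertain variable in a problem like
# a medical diagnosis has a handful of plausible values.
from math import prod

# 30 binary uncertainties already exceed a billion joint hypotheses.
binary_vars = 30
print(2 ** binary_vars)  # 1073741824 possibilities

# Mixed branching: 15 variables with 3-5 plausible values each.
branching = [3, 4, 5] * 5
print(prod(branching))   # 777600000 possibilities
```

Tracking every branch is memory-feasible for such a machine, but holding all the branches says nothing about which one is true; resolving that takes data, not capacity.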
Getting the data is straightforward in principle: you just need billions of robots. You replicate every study and experiment humans ever did, with robots this time, and you replicate human body failures with "reference bodies" that are artificial and consistent in behavior. All data analysis is done from raw data, every conclusion always takes into account the data from all prior experiments, and there is no p-hacking.
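As a sketch of what "all conclusions take into account all prior experiment data" could mean in practice: pool the raw observations from every experiment and re-estimate from scratch, rather than meta-analyzing each study's published summary. The data layout, the synthetic data, and the simple difference-of-means estimator below are my assumptions, not anything specified in the original.

```python
# Hedged sketch: pooled analysis over raw per-experiment records,
# assuming each experiment logs (treated, outcome) pairs. Synthetic data.
import random

random.seed(0)

def run_experiment(n, true_effect):
    """Simulate one experiment's raw (treated, outcome) records."""
    rows = []
    for _ in range(n):
        treated = random.random() < 0.5
        outcome = (true_effect if treated else 0.0) + random.gauss(0, 1)
        rows.append((treated, outcome))
    return rows

# Every new conclusion re-analyzes the raw data of ALL prior experiments.
all_rows = []
for n in [50, 200, 1000]:  # three experiments of growing size
    all_rows += run_experiment(n, true_effect=0.3)
    treated = [y for t, y in all_rows if t]
    control = [y for t, y in all_rows if not t]
    effect = sum(treated) / len(treated) - sum(control) / len(control)
    print(f"pooled n={len(all_rows):5d}  estimated effect={effect:+.3f}")
```

Because the estimate is always recomputed over the full pooled raw data with one pre-declared estimator, there is no per-study analysis to selectively report, which is the sense in which p-hacking drops out.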
We don't have the robots yet, though Amazon robotics is apparently on an exponential trajectory, having added 750k robots in the last 2 years, more than in all prior years combined.
Assuming the trajectory continues, it would be about 22 years until 1 billion robots. Takeoff, but not foom.
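A rough check of the 22-year figure; the starting fleet size and doubling period below are my inferences from the line above (750k added in 2 years, more than all prior years combined, suggests a fleet around 1.5M doubling roughly every 2 years), not stated values.

```python
# Back-of-envelope check on "22 years to 1 billion robots".
# Assumed: ~1.5M robots today, fleet doubling every 2 years.
from math import log2

start = 1.5e6          # assumed current fleet size
target = 1e9
doubling_period = 2.0  # years, assumed

doublings = log2(target / start)      # ~9.4 doublings needed
years = doublings * doubling_period   # ~19 years
print(f"{doublings:.1f} doublings -> ~{years:.0f} years")
```

That lands in the same ballpark as the 22-year figure; a slightly slower doubling (about 2.3 years) or a smaller starting fleet reproduces 22 exactly.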