building fusion power plants, treating and preventing cancer, high-temperature superconductors, programmable contracts, genetic engineering, fluctuations in the value of money, biological and artificial neural networks.
vs
building bridges and skyscrapers, treating and preventing infections, satellites and GPS, cars and ships, oil wells and gas pipelines and power plants, cell networks and databases and websites.
Note that there is a way to split these sets into “problems where we can easily perform experiments, both real and simulated” and “problems where experimentation is extremely expensive and sometimes unethical”.
Perhaps the element making these problems less tractable is that we cannot easily obtain a lot of good-quality information about the problem itself.
For fusion, you need giga-dollars to actually tinker with plasmas at the scale where you would get net power. For cancer, you can easily find a way to kill cancer in a lab dish or a lab rat, but there are no functioning mockups of human bodies (yet) to try your approach on; there are also government barriers that create shortages of workers and slow down any trial of new ideas. For HTSC, the physical models predict these materials poorly, and it is not certain that a solution even exists at STP. Programmable contracts are easy to write but difficult to prove impervious to assault. Genetic engineering is easy to do at small scales but difficult to do on complex creatures like humans, due to the same barriers behind cancer treatment. For money fluctuations, there are hostile and irrational agents blocking you from learning clean information about how the system works, so your model will be confused by the noise they are injecting [in real economies]. And biological NNs face the information barrier; artificial NNs seem to be tractable, they are just new.
How is this relevant? Well, to me it sounds like even if we invent a high-end AGI, it will still be throttled in solving these problems until the right robotics/mockups are built for the AGI to get the information it needs to solve them.
The AGI will not be able to formulate a solution merely by reading human writings and journals on these subjects; we will need to authorize it to build thousands of robotic research systems with which it generates its own experiments, filling in the gaps in our knowledge until it has learned enough to solve them.
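As a minimal, made-up sketch of what “generating its own experiments to fill in the gaps” could look like (nothing here comes from an existing system; the threshold task and all names are invented for illustration), consider an agent that keeps the set of hypotheses still consistent with its data and always runs the probe whose outcome it can least predict, so each costly experiment cuts away as much of the remaining space as possible:

```python
import random

# Toy setup: the unknown is a hidden threshold in [0, 100); an "experiment"
# asks whether the true threshold lies below a chosen probe value.
candidates = list(range(100))           # hypotheses still consistent with the data
true_threshold = random.randrange(100)  # hidden from the agent

def run_experiment(probe):
    """Stand-in for a costly real-world measurement."""
    return true_threshold < probe

def pick_experiment(cands):
    """Pick the probe that splits the surviving hypotheses most evenly,
    i.e. the experiment whose outcome the agent can least predict."""
    return min(cands, key=lambda p: abs(sum(h < p for h in cands) - len(cands) / 2))

experiments = 0
while len(candidates) > 1:
    probe = pick_experiment(candidates)
    outcome = run_experiment(probe)
    candidates = [h for h in candidates if (h < probe) == outcome]
    experiments += 1

print(f"converged to threshold {candidates[0]} after {experiments} experiments")
```

In this toy setup the agent converges in roughly log2(100) ≈ 7 experiments; the point is only that it is experiment selection, not more reading, that closes the information gap.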
Agree. I like to split the empirical problems out using levels of abstraction:
Traversal problems: each experiment is expensive, or it isn’t clear how to generate a new experiment from old ones because of a lack of visibility into the controlled variables.
Representation space problems: the search space is too large and our experiments don’t reliably screen off large portions of it, so we can’t expect to converge in any reasonable time (see the back-of-the-envelope sketch after this list).
Intentional problems: we’re not even clear on what we’re trying to do or whether our representation of what we’re trying to do matches the natural categories of the solution space such that we are even testing real things when we design the experiment.
Implementation problems: we can’t build the tooling or control the variables we need to control, even if we are pretty sure what they are. Measurement problems mean we can’t distinguish between important final or intermediate outcomes (e.g., error bars).
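To put rough numbers on the representation-space point (the figures below are arbitrary, chosen only for illustration): if each experiment screens off half of the remaining hypothesis space, the number of experiments grows with the logarithm of the space; if each one only rules out a fixed chunk, it grows linearly, which for a large space means never converging in practice.

```python
import math

# Arbitrary illustrative numbers for the representation-space point:
# how many experiments it takes to narrow a hypothesis space of size N,
# depending on how much each experiment screens off.
N = 10**12  # size of the hypothesis space

# Case 1: each experiment rules out half of what remains (ideal screening).
halving = math.ceil(math.log2(N))

# Case 2: each experiment only rules out a fixed block of 1,000 hypotheses.
fixed_block = math.ceil(N / 1000)

print(f"halving experiments needed:     {halving}")        # 40
print(f"fixed-block experiments needed: {fixed_block:,}")  # 1,000,000,000
```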
Does the phrase “levels of abstraction” imply that those four problems form some kind of hierarchy? If so, could you explain how that hierarchy works?
https://en.wikipedia.org/wiki/David_Marr_(neuroscientist)#Levels_of_analysis
It is often characterized as three levels, but if you read his book, the algorithmic level is split into traversal and representation (which is highly useful as a way to think about algorithms in general). As four levels it also corresponds to Aristotle’s four causes: final, formal, efficient, and material.
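A toy illustration of that split (the pathfinding task here is mine, not an example from Marr’s book): the computational-level question is just “is there a route from a to b?”; the representation is the choice to encode the problem as an adjacency list; the traversal is the separate choice of how to walk that structure, e.g. breadth-first versus depth-first.

```python
from collections import deque

# Representation: the problem encoded as an adjacency list.
graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}

def reachable_bfs(start, goal):
    """One traversal choice over the representation: breadth-first."""
    frontier, seen = deque([start]), {start}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            return True
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

def reachable_dfs(start, goal):
    """Another traversal choice over the same representation: depth-first."""
    frontier, seen = [start], {start}
    while frontier:
        node = frontier.pop()
        if node == goal:
            return True
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

print(reachable_bfs("a", "d"), reachable_dfs("a", "d"))  # True True
```

Changing either choice independently (a different encoding, or a different walk over the same encoding) changes the algorithm’s behavior without touching the computational-level spec.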
So intentional problems would be markets, where noise is being injected and any clear pattern is being drained dry by automated systems, preventing you from converging to a model. Or public/private-key encryption, where you aren’t supposed to be able to solve it? (But possibly you can.)
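As an invented toy model of that draining effect (not a claim about any real market): suppose a price signal is predictive at first, but every time automated systems notice and trade on it they weaken it a little, so a model fit to early data drifts out of date instead of converging.

```python
import random

# Invented toy dynamics: a price signal starts out predictive, but every time
# automated traders exploit it, its strength decays, so a model fit on early
# data never stays valid.
random.seed(0)
strength = 1.0
observed = []
for t in range(200):
    price_move = strength + random.gauss(0, 1)  # signal plus noise
    observed.append(price_move)
    if price_move > 0.5:   # an automated system notices the pattern and trades on it,
        strength *= 0.97   # draining a little of the signal each time

early = sum(observed[:50]) / 50
late = sum(observed[-50:]) / 50
print(f"apparent signal in early data: {early:.2f}, in late data: {late:.2f}")
```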