To whoever just downvoted this comment to −2: care to explain why? I’m saying our universe limits the practical utility of increasing intelligence, and therefore WBEs would be more suited than human beings for creating AIs. Perhaps I am wrong about these assumptions.
If WBEs create UFAIs, they will hopefully do so only after all of the refinements to themselves are complete, after they’ve had the thinking equivalent of millions of years to work on the problem, and so on.
If one assumes that intelligence has diminishing utility past a certain point, as it approaches some asymptote described as “maximum useful intelligence in the Universe as constrained by the laws of physics”, then UFAIs by definition would NOT be akin to gods relative to the WBEs.
The WBEs would be nearly as intelligent (just not as efficient, perhaps, requiring far more hardware and energy to accomplish the same task), and so the UFAIs would lack the power to fool their handlers and repattern the solar system in their image.
Why do the acronyms need to be expanded? It seems you understand them.
If one assumes that intelligence has diminishing utility past a certain point, [...] then UFAIs by definition would NOT be akin to gods relative to the WBEs.
The second doesn’t follow from the first. The existence of a limit doesn’t imply that it is small. (E.g., what if the limit were 10^1000 times more intelligent than a WBE?)
Ah, the implicit assumption here is “no wild new physics”. I’m assuming that the universe actually does respect the speed of light, that there are no major security exploits that let you cheat and violate conservation of energy, momentum, etc, etc, etc.
Assumption: the best way to survive inside the universe is to pattern matter the way you want it, and to collect free energy as needed to perform work.
Conclusion: increasing intelligence has diminishing returns. If it would take 1000 years for a human molecular engineer to design a good mechanism for patterning matter, it would take about 8 hours for a WBE of that engineer sped up a million-fold. “Good” mechanisms are within a percentage point or so of the “best” mechanism that physics will allow.
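As a quick sanity check on that figure (my own back-of-the-envelope arithmetic, not part of the original exchange): a million-fold speed-up compresses 1000 years into roughly

$$\frac{1000 \times 8766\ \text{hours}}{10^{6}} \approx 8.8\ \text{hours},$$

which is in line with the “about 8 hours” claimed above.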
Assuming the WBEs have already occupied the easily accessible matter before they create UFAIs, and that their engineering is “good” but not excellent, UFAIs, no matter how much smarter, will be limited in what they can practically do.
Analogy: if you’re out in the woods and possess only a hatchet as a tool, there is a finite number of things you can accomplish, even with infinite intelligence.
Counterargument: since I, the author of this post, am not really all that smart compared to the hypothetical entities we are discussing, I cannot safely say this.