[Question] What is the probability that a superintelligent, sentient AGI is actually infeasible?

Hello,

I am not at all averse to discussing contingencies for things that are either uncertain or unlikely, but I’m curious what the general consensus is for how likely a superintelligent AGI scenario actually is.

To be clear, I am certainly aware that AI has advanced by leaps and bounds within my own lifetime alone, with public feats such as AlphaGo far exceeding the expectations of its creators.

But just because a trend has held over a certain period of time doesn’t mean it will hold forever, or develop in the way one intuitively expects. For example, populations under logistic growth models expand quickly at first, but flatten out as they approach a carrying capacity. It is possible that computer technology (or technology in general) has some natural, undiscovered limit, such that the graph will flatten towards a logistic asymptote or some similar curve.
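
For concreteness, here is the standard logistic function, purely as an illustration of the shape I have in mind (the symbols are just the usual textbook ones: $L$ is the asymptote or "carrying capacity," $k$ the growth rate, and $t_0$ the midpoint, none of them tied to any particular measure of AI progress):

$$P(t) = \frac{L}{1 + e^{-k(t - t_0)}}$$

Early on this looks roughly exponential, but as $t \to \infty$, $P(t)$ approaches $L$, so a curve that seems to be taking off can still level out at a ceiling.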

Or, alternatively, civilization may discover that there is a certain trade-off in computer science, analogous to the Heisenberg uncertainty principle, such that either of the following two scenarios is possible, but not both in combination:

  1. Superintelligent, non-sentient AI or Tool AI: Machines optimized for computational power and autonomous problem-solving, but with no self-awareness or the autonomy associated with consciousness.

  2. Unintelligent AGI or Infant AI: An artificial consciousness with full autonomy and self-awareness, but relatively limited computational power, such that it is no smarter than an ordinary human.

In this scenario, a superintelligent AGI isn’t possible because of the trade-off: it’s either a superintelligent tool or an unintelligent consciousness. How probable do you think such a scenario might be?
