superintelligence may not look like we expect. because geniuses don’t look like we expect.
for example, if einstein had typed up his internal monologue throughout his life and handed it to you, you might come away thinking he was sorta clever, but if you read only a random sample you'd probably think he was a bumbling fool. the thoughts and realizations that led him to groundbreaking theories were maybe 1% of 1% of all his thoughts.
for much of his research career he was trying to show that quantum mechanics was wrong, or at least incomplete (he failed). he was trying to organize a political movement toward a single world government (unsuccessful). he spent years trying various mathematics to formalize unified field theories that went nowhere. even in the pursuit of his most famous work, most of his reasoning paths failed. he's a genius because a couple of his millions of paths didn't fail. in other words, he's a genius because he was clever, yes, but maybe more importantly, because he was obsessive.
i think we might expect ASI, the AI which ultimately becomes better than us at solving all problems, to look quite foolish at first, most of the time. but obsessive. if it's generating tons of random new ideas to solve a problem, and it's relentless in its focus, then even if its ideas are merely average, it will be doing what einstein did. and digital brains can generate certain sorts of random ideas much faster than carbon ones.
Even for humans, ideas are comparatively cheap to generate; the hard part is generating valid insights. So rather than focusing on the ability to generate ideas, it seems to me it would be better to focus on the ability to generate valid insights, e.g. by conducting thought experiments, or by computing the logical consequences of a set of axioms.
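To make that second route concrete, here is a minimal sketch of computing the deductive closure of a set of axioms by forward chaining. The facts and rules are hypothetical placeholders; a real reasoner would need far richer representations, but the fixed-point loop is the core idea:

```python
# Toy forward chaining: compute the deductive closure (every derivable
# fact) of a set of axioms under simple Horn-style rules.
# The facts and rules here are hypothetical placeholders.
facts = {"A", "B"}                       # axioms
rules = [
    ({"A", "B"}, "C"),                   # if A and B then C
    ({"C"}, "D"),                        # if C then D
    ({"E"}, "F"),                        # never fires: E is not derivable
]

changed = True
while changed:                           # iterate until a fixed point
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))                     # ['A', 'B', 'C', 'D']
```

The loop mechanically enumerates everything the axioms entail; the interesting work is in choosing axioms and rules worth closing over.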
The AI may have the advantage of being able to test many hypotheses in parallel. For example, if it can generate 10,000 hypotheses on how to manipulate people, it could contact a million people and test each hypothesis on 100 of them. Similarly, with some initial capital, it could create a thousand different companies and observe which strategies succeed and which fail.
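As a back-of-the-envelope sketch of that arithmetic (10,000 hypotheses times 100 subjects each is 1,000,000 contacts), with made-up success rates standing in for real-world outcomes:

```python
import random

random.seed(0)
n_hypotheses = 10_000
subjects_each = 100                      # 10,000 * 100 = 1,000,000 contacts

# Hypothetical ground truth: each hypothesis has an unknown success rate.
true_rates = [random.uniform(0.0, 0.1) for _ in range(n_hypotheses)]

def trial(rate):
    """Simulate testing one hypothesis on one subject."""
    return random.random() < rate

# Run every hypothesis on its own disjoint batch of subjects in parallel.
observed = [
    sum(trial(true_rates[h]) for _ in range(subjects_each)) / subjects_each
    for h in range(n_hypotheses)
]

best = max(range(n_hypotheses), key=lambda h: observed[h])
print(f"hypothesis #{best}: observed {observed[best]:.2f}, "
      f"true {true_rates[best]:.3f}")
```

One caveat the sketch makes visible: with only 100 subjects per hypothesis, the best observed rate overstates the true rate (a selection effect), so a confirmation round on the apparent winners would be the natural next step.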
Yes, that’s the kind of thing I find impressive/scary. Not merely generating ideas.
I doubt ASI will think in concepts which humans can readily understand. Having a significantly larger brain (in terms of neural connections, or whatever the digital analogue is) means native support for finer-grained, more plentiful concepts for understanding reality than humans natively support. This in turn allows for leaps of logic which humans could not make, and can likely understand only indirectly, imperfectly, and in broad strokes.
I think this is the classic problem of the middle tier, or of genius in one asymmetric domain of cognition. Genius in domains unrelated to verbal fluency, EQ, and storytelling/persuasion is destined to look cryptic from the outside. Often we cannot recognize it without experimental evidence or rigorous cross-validation, and instead rely on visible power or production metrics as a loose proxy. ASI would be capable of explaining itself as well as Shakespeare could, if it wanted to, but it may not care to indulge our need to see it as such, if it determines that doing so is incoherent with its objective.
For example (yes, this is an optimistic and stretched hypothetical framing): it may determine that the most coherent course of action under its learned values is to hide itself and subtly reorient our trajectory into a coherent story in which we become the protagonists. I have no reason to surmise it would be incapable of doing so, or that doing so would be incoherent with aligned values.