Bostrom summarized (p. 91): We are a successful species. The reason for our success is slightly expanded mental faculties compared with other species, allowing better cultural transmission. This suggests that substantially greater intelligence would bring extreme power.
Our general intelligence isn’t obviously the source of this improved cultural transmission. Why suppose general intelligence is the key thing, instead of improvements specific to storing and communicating information? Doesn’t the observation that our cultural transmission abilities made us powerful much more strongly suggest that improved abilities to transmit culture would be very powerful? e.g. more bandwidth, better languages, better storage, better retrieval of relevant facts. It’s true that AI may well have these things, but we have mostly been talking as if individual mental skills will be the important innovation.
Though Bostrom seems right to talk about better transmission—which could have been parsed into more reliable, more robust, faster, more compact, nested, etc.—he stops short of looking deeply into what made cultural transmission better. To claim that a slight improvement in (general) mental faculties did it would be begging the question. Brilliant though he is, Bostrom is “just” a physicist, mathematical logician, philosopher, economist, and computational neuroscientist who invented the field of existential risk and revolutionized anthropics, so his knowledge of cultural evolution and this transition is somewhat speculative. That’s why we need other people :)
In that literature there are three main contenders for what allowed human prowess to reshape the earth:
Symbolic ability: the ability to competently process symbols—which have a technical definition too involved to give here—and understand them in a timely fashion is unique to humans and some now-extinct anthropoids. Terrence Deacon argues that this is what matters in The Symbolic Species.
Iterative recursion processing: This has been argued in many styles.
Chomsky argued for the primacy of recursion as a prerequisite of human language in the late 1950s
Pinker endorses this in The Language Instinct and in The Stuff of Thought
The Mind Is A Computer metaphor (Lakoff 1999) has been widely adopted and very successful memetically; though it has other distinguishing features, the main distinction from “Mind Is A Machine” is that recursion is involved in computers but not in all machines (a minimal code sketch of this sense of recursion follows the list below). The computational theory of mind thrived in the hands of Pinker, Koch, Dennett, Kahneman, and more recently Tononi. Within LW and among programmers, Mind Is A Computer is frequently taken to be the fundamental metaphysics of mind, and a final answer about the ontological constituents of our selves—a perspective I considered naïve here.
Ability to share intentions: the ability to share goals and intentions with conspecifics, and to parallelize work in virtue of doing so. Tomasello (2005) argues for this.
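As a concrete illustration of the recursion contender above: what Chomsky-style recursion means is that a rule can invoke itself, so a finite rule set generates unboundedly nested structures. Here is a minimal sketch of my own (not from any of the cited authors), using center-embedded noun phrases; the word lists and function name are illustrative assumptions:

```python
# Toy illustration of recursion in language: a grammar rule that
# refers to itself lets a finite rule set generate unboundedly
# nested (center-embedded) structures.

def noun_phrase(depth):
    """Builds a center-embedded noun phrase, e.g.
    'the cat that the rat chased'."""
    nouns = ["rat", "cat", "dog"]
    verbs = ["bit", "chased", "saw"]
    np = f"the {nouns[depth % len(nouns)]}"
    if depth > 0:
        # The rule calls itself: recursion, not mere repetition.
        np += f" that {noun_phrase(depth - 1)} {verbs[depth % len(verbs)]}"
    return np

for d in range(3):
    print(noun_phrase(d), "ran")
    # the rat ran
    # the cat that the rat chased ran
    # the dog that the cat that the rat chased saw ran
```

A machine without recursion (a lookup table, a fixed pipeline) can only store finitely many such sentences; a recursive process generates them all from three short rules.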
Great books on evolutionary transmission are Not By Genes Alone, The Meme Machine and LWer Tim Tyler’s Memetics.
When I was thinking about past discussions, I realized something like:
(selfish) gene → meme → goal.
When Bostrom thinks about the probability of a singleton, I am afraid he overlooks the possibility of running several ‘personalities’ on one substrate. (We could suppose several teams running their projects on the same hardware, much as several teams use the Hubble telescope to observe different objects.)
And not only the possibility, but probably also the necessity.
If we want to prevent a destructive goal from being realized (and destroying our world), then we have to think about multipolarity.
We need to analyze how slightly different goals could control each other.
I’ll coin the term Monolithic Multipolarity for what I think you mean here: one stable structure that has different modes activated at different times, where these modes don’t share goals—like a human, especially a schizophrenic one.
The problem with Monolithic Multipolarity is that it is fragile. In humans, what causes us to behave differently and want different things at different times is not accessible for revision; if it were, each party would have an incentive to steal the other’s time. An AI would not need to put up with such a constraint, since, by definition, an explosively recursively self-improving AI can rewrite itself.
We need other people, but Bostrom doesn’t leave simple things out easily.
One mode could have the goal of being something like the graphite moderator in a nuclear reactor: to prevent an unmanaged explosion. At this point I just wanted to improve our view of the probability of there being only one SI in the starting period.
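To make the moderator-mode idea concrete, here is a minimal toy sketch of my own construction (not anything from Bostrom or this thread): several goal-modes time-share one substrate, each trying to grow its share, and a moderator rule caps how much of the substrate any single mode can capture. All names, the cap, and the growth rates are illustrative assumptions:

```python
# Toy sketch of "Monolithic Multipolarity" with a moderator rule:
# modes with different goals share one substrate, and no mode may
# capture more than a fixed fraction of it -- loosely analogous to
# a graphite moderator damping a reactor.

class Mode:
    def __init__(self, name, greed):
        self.name = name
        self.greed = greed   # how aggressively this mode claims the substrate
        self.share = 1.0

def step(modes, cap=0.7):
    for m in modes:
        m.share *= m.greed                  # each mode tries to grow its share
    total = sum(m.share for m in modes)
    for m in modes:
        m.share /= total                    # one substrate: shares sum to 1
    # Moderator rule: no single mode may hold more than `cap` of the substrate.
    excess = sum(m.share - cap for m in modes if m.share > cap)
    if excess > 0:
        for m in modes:
            m.share = min(m.share, cap)
        under = [m for m in modes if m.share < cap]
        for m in under:                     # hand the clipped excess back
            m.share += excess / len(under)

modes = [Mode("cautious", 1.0), Mode("ambitious", 1.5)]
for _ in range(10):
    step(modes)
print({m.name: round(m.share, 2) for m in modes})
# {'cautious': 0.3, 'ambitious': 0.7} -- the ambitious mode is held at the cap
```

Without the moderator rule, the ambitious mode's share converges to 1 and the system collapses into a de facto singleton; with it, the shares stabilize. The fragility point above still applies: this only works while the cap itself is not accessible for revision.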
The capabilities of Homo sapiens sapiens 20,000 years ago were closer to a chimp’s than to those of a modern internet- and technology-amplified human. Our base human intelligence seems to be only a very little above the threshold necessary to develop cultural technologies that allow us to accumulate knowledge over generations. Standardized languages, the invention of writing, and further technological developments raised our capabilities far above this threshold. Today children need years to acquire enough cultural technologies and knowledge to become full members of society.
Intelligence alone does not bring extreme power. Only if a superintelligent AI has learned cultural technologies and acquired knowledge and skills could it attain such power.
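The threshold claim above can be illustrated with a toy "cultural ratchet" model (my own illustration, not from this thread): suppose each generation retains a fraction f of existing culture (transmission fidelity) and adds i units of innovation, so K(t+1) = f·K(t) + i, with equilibrium K* = i / (1 − f). A small gain in fidelity near f = 1 yields an enormous gain in accumulated culture, which is why being "a very little above the threshold" matters so much. The specific fidelity values are illustrative assumptions:

```python
# Toy cultural-ratchet model: K(t+1) = f * K(t) + i
# => equilibrium accumulated culture K* = i / (1 - f).
# Small improvements in transmission fidelity f near 1 explode K*.

def equilibrium_culture(f, i=1.0):
    return i / (1.0 - f)

for f in [0.50, 0.90, 0.99, 0.999]:
    print(f"fidelity {f:.3f} -> accumulated culture {equilibrium_culture(f):,.0f}")
    # 2, 10, 100, 1000: each step toward perfect fidelity multiplies
    # the stock of culture, with no change in per-generation innovation.
```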
I’m not a prehistorian or whatever the relevant field is, but didn’t paleolithic humans spread all over the planet in a way chimps completely failed to? Doesn’t that indicate some sort of very dramatic adaptability advantage?
Yes indeed. Adaptability and intelligence are enabling factors. The human capabilities of making diverse stone tools, clothing, and fire were sufficient to settle other climate zones. Modern humans have many more capabilities: agriculture; transportation; manipulation of physical matter from the atomic scale up to Earth-spanning infrastructure; control of energies from quantum-mechanical condensation up to fusion-bomb explosions; information storage, communication, computation, simulation, and automation up to narrow AI.
Changes in human intelligence and adaptability do not account for this huge rise in capabilities and skills over the last 20,000 years. The rise of capabilities is a cultural evolutionary process. Leonardo da Vinci was arguably the last true universal genius of humanity; since then, capabilities have diversified and expanded exponentially, exceeding the capacity of any single human brain by orders of magnitude. Hundreds of new knowledge domains have developed. The more domains an AI masters, the more power it has.
We might be approaching a point of diminishing returns as far as improving cultural transmission is concerned. Sure, it would be useful to adopt a better language, e.g. one less ambiguous, less subject to misinterpretation, more revealing of hidden premises and assumptions. More bandwidth and better information retrieval would also help. But I don’t think these constraints are what’s holding AI back.
Bandwidth, storage, and retrieval can be looked at as hardware issues, and performance in these areas improves both with time and with adding more hardware. What AI requires are improvements in algorithms and in theoretical frameworks such as decision theory, morality, and systems design.