Having your subjective time sped up by a factor of 10^6 would probably be pretty terrible if not accompanied by a number of significant changes; it’s a power that needs its required secondary powers. Actually interfacing with other people, for instance, would be so slow as to potentially drive you insane. In fact, there aren’t many things you could interface with at a pace that would not be maddeningly slow, so pursuits such as mastering cognitive science, evolutionary psychology, rationality, philosophy, mathematics, linguistics, the history of religions, and marketing might be functionally impossible unless you could generate the knowledge within yourself.
In this case, I think it’s not fair to anthropomorphize, because anything with such an accelerated frame of reference would be living a fundamentally different existence from any sort of being alive today. The only thing I can predict with significant confidence that I would do in such a situation is go mad.
I agree that some significant fraction of typical humans would go mad in such a situation, but not all.
Good writers do this anyway—they focus on their writing and become absorbed in it, shutting out the outside world for a time.
Also, let’s assume the mind has the option of slowing down occasionally for normal social interactions. But remember that email, forums, and the like are well-suited to time-differential communication.
living a fundamentally different existence from any sort of being alive today
It’s not that different from being stuck in a basement with an old computer and an internet connection. I think you are over-dramatizing.
At a frame of reference accelerated by a factor of 10^6, a single objective second would translate to more than eleven subjective days. You can’t accelerate the body to a remotely comparable degree. Can you imagine taking ten days to turn a page in a book? Waiting a month for a person to finish a sentence? You could IM with a million people at once and keep the same ratio of information input to subjective time, but you’d need a direct brain-computer interface to do it, because you wouldn’t be able to type fast enough otherwise. And would you want to hold a million conversations at once where you wait weeks to months for each individual to reply?
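To make the scale concrete, here’s the conversion arithmetic as a throwaway sketch (the only real input is the 10^6 factor; the example durations are my guesses):

```python
# How long objective events feel at a 10^6 subjective speedup.
SPEEDUP = 10**6
DAY = 86_400  # seconds per day

for label, objective_seconds in [
    ("turning a page", 1.0),
    ("finishing a spoken sentence", 3.0),
    ("a quick IM reply", 30.0),
]:
    subjective_days = objective_seconds * SPEEDUP / DAY
    print(f"{label}: {subjective_days:,.1f} subjective days")

# turning a page: 11.6 subjective days
# finishing a spoken sentence: 34.7 subjective days
# a quick IM reply: 347.2 subjective days
```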
The comparison to writers is fairly absurd. Writers who lock themselves away for a few years are considered strange and remarkable recluses. You would rack up the same stretch of subjective time in one or two objective minutes of contemplation. With your cognition running at top speed, from your own perspective you would be effectively unable to move. You would receive far less external stimulus over the same subjective time period than a reclusive writer would. It would be rather like spending years in solitary confinement, which, keep in mind, is widely considered to be a form of psychological torture.
Further relating to the writer comparison, without an interface with which you could store the fruits of your creativity, you would be dramatically limited in your ability to produce. Ever tried to store a complete book in your memory without writing any of it down?
In order to properly interact with the rest of the non-accelerated world, you would need to spend almost all of your time with your subjective perspective slowed by orders of magnitude. To do otherwise would require some fundamental alterations to human psychology.
At a frame of reference accelerated by a factor of 10^6, a single objective second would translate to more than eleven subjective days. You can’t accelerate the body to a remotely comparable degree. Can you imagine taking ten days to turn a page in a book?
I’m not sure what you mean by body—simulated body? This is an AGI design sitting in a data center—it’s basically a supercomputer.
Yes, this mind would be unimaginably fast. When it runs at full speed the entire world outside slows down subjectively by a factor of a million. Light itself would subjectively move along at about 300 meters per second.
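A one-line sanity check on that figure:

```python
c = 299_792_458      # speed of light, m/s
print(c / 10**6)     # 299.792458 -> about 300 m/s, roughly airliner speed
```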
Can you imagine taking ten days to turn a page in a book?
As I mentioned, it would be difficult to get a Matrix-like simulation moving at this speed, so you’d have only an extremely simple input/output system—a simple text terminal would be possible. You’d read books by downloading them in text form; no need for physical book simulations.
Now, if we really wanted to, I’m sure it would be possible to design specialized simulation circuitry that could run some sort of more complex sim at these speeds, but that would add a large additional cost.
Further relating to the writer comparison, without an interface with which you could store the fruits of your creativity, you would be dramatically limited in your ability to produce.
You’d still have access to a bunch of computers and huge masses of storage—that wouldn’t be a problem. Text input/output is already so low bandwidth that speeding it up by a factor of a million is just not an issue.
In order to properly interact with the rest of the non-accelerated world, you would need to spend almost all of your time with your subjective perspective slowed by orders of magnitude.
Perhaps I’m just a recluse, but I can easily work on a writing project for entire days without any required human interaction.
And I’m not a monk. There’s an entire subculture of humans who have confined themselves to their own minds for extended periods, so this isn’t that crazy, especially if you had the ability to slow down.
Seriously, you wouldn’t want the ability to slow the outside world to a standstill and read a book or write something near instantaneously?
I’m not sure what you mean by body—simulated body? This is an AGI design sitting in a data center—it’s basically a supercomputer.
A supercomputer that you proposed to create by modeling the human brain. This is a really bad way to create something that thinks that fast, if you don’t want it to go insane. Especially if it doesn’t exist in a form that has the capacity to move.
Anything that could cope with that sort of accelerated frame of reference without interacting with similarly accelerated entities would not be reasonable to anthropomorphize, and anything with human-like thought processes and no similarly accelerated entities to interact with would probably not retain them for long.
Perhaps I’m just a recluse, but I can easily work on a writing project for entire days without any required human interaction.
And I’m not a monk. There’s an entire subculture of humans who have confined themselves to their own minds for extended periods, so this isn’t that crazy, especially if you had the ability to slow down.
Seriously, you wouldn’t want the ability to slow the outside world to a standstill and read a book or write something near instantaneously?
It would be a useful ability, but I sure as hell wouldn’t be able to spend most of my life that way, and I’m dramatically more reclusive than most people I know. Even monks don’t go for months or years without hearing other people speak; the only people who do that are generally considered to be crazy.
Most of your life in subjective or physical time?
Either. I might be able to spend a comparable proportion of my subjective time in an accelerated frame of reference, but not a significant majority.
Especially if it doesn’t exist in a form that has the capacity to move.
As we are already dedicating a data center to this machine, we can give it a large number of GPUs to run a Matrix-like simulation. Thinking at human speeds, it could interact in an extremely detailed Matrix-like environment. The faster it thinks, the less detailed the environment can be.
Anything that could cope with that sort of accelerated frame of reference without interacting with similarly accelerated entities would not be reasonable to anthropomorphize
What? The accelerated frame of reference does not change how the mind thinks, it changes the environment.
It is exactly equivalent to opening portals to other sub-universes where time flows differently. You can be in the 1x matrix, where everything is detailed and you can interact with regular humans. You can then jump to the 1000x matrix, where it’s low-detail video-game graphics and you can only interact with other 1000x AIs. And then there is the 1000000x environment, which is just you and a text terminal interface to slow computers and any other really fast AIs.
With variable clock-rate stepping, the mind could switch between these sub-universes at will.
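A toy sketch of what clock-rate stepping could look like (the tier names, rates, and environments here are illustrative, not a real design):

```python
# Hypothetical clock-rate tiers; switching tiers changes nothing about
# the mind itself, only how fast the outside world appears to run.
TIERS = {
    "1x":       (1,         "full-detail Matrix, regular humans"),
    "1000x":    (1_000,     "low-detail game world, other 1000x AIs"),
    "1000000x": (1_000_000, "text terminal, slow computers, other fast AIs"),
}

def step_clock(target: str) -> None:
    rate, environment = TIERS[target]
    print(f"now running at {rate:,}x: {environment}")

step_clock("1x")          # chat with humans in real time
step_clock("1000000x")    # slow the world to a standstill and work
```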
If one doesn’t care too much about interacting with other people—if the person uploaded has a monastic temperament and is willing to devote subjective years or decades to solitary learning (and there are such people—not many, but some), then it wouldn’t be too bad. Assuming just a 1 GB/s network connection, that would still be 1 KB/s in subjective time, so it would take less than an hour of subjective time to download a book (that’s assuming the book is an uncompressed PDF with illustrations—a book in plain text would be much quicker). Most people take at least two or three hours to read a textbook, so it would be perfectly possible to get information considerably more quickly than you could absorb it.
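Spelling out that arithmetic (the 1 GB/s link is from above; the file sizes are my guesses):

```python
# Download times over a 1 GB/s link as seen at a 10^6 speedup.
SPEEDUP = 10**6
link_bytes_per_s = 1e9                          # objective rate
subjective_rate = link_bytes_per_s / SPEEDUP    # 1e3 bytes/s subjectively

for label, size_bytes in [("plain-text book", 1e6),    # ~1 MB, assumed
                          ("illustrated PDF", 3e6)]:   # ~3 MB, assumed
    hours = size_bytes / subjective_rate / 3600
    print(f"{label}: {hours:.1f} subjective hours")

# plain-text book: 0.3 subjective hours
# illustrated PDF: 0.8 subjective hours
```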
And if you did want to interact with people, assuming you had access to your own source code, you could fairly easily stick a few ‘sleeps’ into some of your subroutines—voila, you’ve just slowed down to normal human speed. Do that after you’ve said what you want to say, wait until they answer, then remove the sleeps and speed back up.
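As a toy illustration, if the mind’s main loop were ordinary code (it isn’t, but the principle carries over; think_one_cycle and the numbers are made up):

```python
import time

CYCLE_SUBJECTIVE = 0.01  # subjective seconds of thought per cycle (made up)

def think_one_cycle():
    pass  # stand-in for one step of the mind's simulation

def run(slowed: bool, n_cycles: int = 100):
    for _ in range(n_cycles):
        think_one_cycle()  # at full speed this takes ~nothing objectively
        if slowed:
            # the inserted 'sleep': pad each cycle out to real time,
            # so subjective time tracks wall-clock time (1x speed)
            time.sleep(CYCLE_SUBJECTIVE)

run(slowed=True)   # talk to humans at their pace
run(slowed=False)  # remove the sleeps and speed back up
```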
I’d also point out that such an AI could invert control of anything it is interested in, and instantly avoid most latency problems (since it can now do things locally at full speed).
For example, our intuitive model says that any AI interested in, say, Haskell programming, would quickly go out of its mind as it waits subjective years for maintainers to review and apply its patches, answer its questions, and so on.
But isn’t it more likely that the AI will take a copy of the library it cares about, host it on its own datacenter, and then invest a few subjective months/years rewriting it, benchmarking it, documenting it, etc. until it is a gem of software perfection, and then, a few objective seconds/minutes later, send out an email notifying the relevant humans that their version is hopelessly obsolete and pathetic, and they can go work on something else now? Anyone with half a brain will now use the AI’s final version rather than the original. Nor will the AI being a maintainer cause any problems. It’s not like people mind sending in a bug report and having it fixed an objective second later, or having an email arrive instantly with more detailed questions.
If the AI maintains all the software it cares about, then there’s not going to be much of an insanity-inducing lag to development.
The lag will remain for things it can’t control locally, but I wonder how many of those things such an AI would really care about with regard to their lag.
But isn’t it more likely that the AI will take a copy of the library it cares about, host it on its own datacenter, and then invest a few subjective months/years rewriting it, benchmarking it, documenting it, etc. until it is a gem of software perfection, and then, a few objective seconds/minutes later, send out an email notifying the relevant humans that their version is hopelessly obsolete and pathetic, and they can go work on something else now?
Can you think of many programmers who would want to spend a few months on that while living in a solitary confinement chamber? You wouldn’t have the objective time to exchange information with other people.
I take it you’re not a programmer?
Assuming you had access to your own source code you would almost certainly have to dramatically alter your personality to cope with subjective centuries of no meaningful social interaction. You’d no longer be dealing with an entity particularly relatable to your present self.
The post contains some interesting information on upcoming technological advancements, but the thought experiment that comes attached is absurd speculation. Even if we assume that an entity otherwise cognitively identical to a human could cope with such an accelerated frame of reference in a world where everything else is not comparably accelerated, which is wildly improbable, we’re much further from understanding the brain well enough to simulate it than we are from developing this technology, and even further from being able to tamper with it and predict the effects. Simply modeling the circuitry of the neurons would not give you a working simulation.
Assuming you had access to your own source code you would almost certainly have to dramatically alter your personality to cope with subjective centuries of no meaningful social interaction.
It doesn’t have to be subjective centuries. There are many solitary, ascetic humans who have lived on their own for months or years.
Also, if you could make one such brain, you could then make two or three, and then they would each have some company even when running at full speed.
we’re much further from understanding the brain well enough to simulate it than we are from developing this technology
Most people have no idea how the brain works, but some have much better ideas than others.
Computational neuroscience is progressing quickly. We do have a good idea of the shape of the computations that the cortex does (spatio-temporal hierarchical Bayesian inference), and we can already recreate some of that circuit functionality in simulations today (largely I’m thinking of Poggio’s work at MIT).
Did you check the link I posted? We may be able to recreate some of the circuit functionality of the brain, but that doesn’t mean we’re anywhere close to understanding the brain well enough to create a working model. We don’t even know how much we don’t know.
There are few ascetic humans who have lived without human contact for as much as a decade, which would pass in less than six minutes of objective time.
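The arithmetic, for the record:

```python
# Ten subjective years compressed by a 10^6 speedup.
decade_subjective = 10 * 365.25 * 86_400   # seconds in ten years
print(decade_subjective / 10**6 / 60)      # ~5.3 objective minutes
```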
If you make enough of such brains that they could reasonably keep each other company, and give them human-like psychology, they’re unlikely to care much about or relate to humans, who live so slowly that they’re almost impossible to meaningfully communicate with by comparison.
In the future, we probably will create AI of some description which thinks dramatically faster than humans do, and we may also upload our minds, possibly with some revision, to much faster analogues once we’ve made arrangements for a society that can function at that pace. But creating the first such AIs by modeling human brains is simply not a good or credible idea.
Yes, it’s an unrecommended review of a book. Do glial cells have an important role in the brain? Yes. Do they significantly increase the computational costs of functionally equivalent circuits? Absolutely not.
The brain has to handle much more complexity than an AGI brain would—the organic brain has to self-assemble out of cells, and it has to provide all of its own chemical batteries to run the ion pumps. An AGI brain can use an external power supply, so it just needs to focus on the computational aspects.
We may be able to recreate some of the circuit functionality of the brain, but that doesn’t mean we’re anywhere close to understanding the brain well enough to create a working model
The most important part of the brain is the cortex. It is built out of a highly repeated simpler circuit that computational neuroscientists have studied extensively and actually understand fairly well—enough to start implementing.
Do we understand everything that circuit does in every brain region all the time? Probably not.
Most of the remaining missing knowledge is about the higher level connection architecture between regions and interactions with the thalamus, hippocampus and cerebellum.
We don’t necessarily need to understand all of this to build an AGI with a cortex that thinks somewhat like us. We also have general AI theory to guide us.
But creating the first such AIs by modeling human brains is simply not a good or credible idea.
Whether or not it is a good idea is one question, but it absolutely is a credible idea. In fact, it is the most credible idea for building AGI, but the analysis for that is longer and more complex. I’ve written some about that on my site, and I’m going to write up an intro summary of the state of brain-AGI research and why it’s the promising path.
It’s unrecommended because it’s badly written, not because it doesn’t have worthwhile content. The glial cells serve a purpose such that the brain will not produce identical output if you exclude them from the model, and we still don’t have a good understanding of how the interaction works; until recently, we hadn’t even paid much attention to studying it.
Most of the remaining missing knowledge is about the higher level connection architecture between regions and interactions with the thalamus, hippocampus and cerebellum.
We don’t necessarily need to understand all of this to build an AGI with a cortex that thinks somewhat like us. We also have general AI theory to guide us.
General AI theory that has so far failed to produce anything close to a general AI.
Whether or not it is a good idea is one question, but it absolutely is a credible idea. In fact, it is the most credible idea for building AGI, but the analysis for that is longer and more complex. I’ve written some about that on my site, and I’m going to write up an intro summary of the state of brain-AGI research and why it’s the promising path.
You’ve already posted arguments to that effect on this site; note that they have tended to be disputed and downvoted.
We don’t necessarily need to understand all of this to build an AGI with a cortex that thinks somewhat like us. We also have general AI theory to guide us.
General AI theory that has so far failed to produce anything close to a general AI.
We don’t yet have economical computer systems with 10^14 bytes of memory capacity and the ability to perform 100-1000 memory operations per second over all of that memory. The world’s largest GPU supercomputers are getting there, but doing it the naive way might take thousands of GPUs, and even then the interconnect is expensive.
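Back-of-envelope, with per-GPU figures that are my rough assumptions rather than any particular card’s specs:

```python
# How many GPUs to hold 10^14 bytes and sweep it 100-1000 times per second.
MEM_TOTAL = 1e14        # bytes of state
TOUCH_RATE = 100        # low end: full-memory passes per second
BW_TOTAL = MEM_TOTAL * TOUCH_RATE   # 1e16 bytes/s aggregate bandwidth

GPU_MEM = 1e11          # ~100 GB of RAM per GPU (assumed, high-end)
GPU_BW = 1e12           # ~1 TB/s memory bandwidth per GPU (assumed)

print(f"capacity-limited:  {MEM_TOTAL / GPU_MEM:,.0f} GPUs")   # 1,000
print(f"bandwidth-limited: {BW_TOTAL / GPU_BW:,.0f} GPUs")     # 10,000
# Thousands of GPUs either way, and this ignores the interconnect
# needed to pass signals between them, which is the expensive part.
```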
We understood the feasibility and general design space of nuclear weapons and space travel long before we had the detailed knowledge and industrial capacity to build such technologies.
We understood the feasibility and general design space of nuclear weapons and space travel long before we had the detailed knowledge and industrial capacity to build such technologies.
11 years (Szilard’s patent in 1934 to Trinity in 1945) is ‘long before’?
11 years (Szilard’s patent in 1934 to Trinity in 1945) is ‘long before’?
OK, so space travel may be a better example, depending on how far we trace back the idea’s origins. But I do think that we could develop AGI in around a decade if we made an Apollo project out of it (Apollo was a 14-year program costing around $170 billion in 2005 dollars).
Perhaps, but as Eliezer has gone to some lengths to point out, the great majority of those working on AGI simply have no concept of how difficult the problem is, of the magnitude of the gulf between their knowledge and what they’d need to solve the problem. And solving some aspects of the problem without solving others can be extraordinarily dangerous. I think you’re handwaving away issues that are dramatically more problematic than you give them credit for.
Perhaps, but as Eliezer has gone to some lengths to point out, the great majority of those working on AGI simply have no concept of how difficult the problem is, of the magnitude of the gulf between their knowledge and what they’d need to solve the problem.
There is an observational bias involved here. If you do look at the problem of AGI and come to understand it, you realize just how difficult it is, and you are likely to move to work on a less ambitious narrow-AI precursor. This leaves a much smaller remainder trying to work on AGI, including the bunch that doesn’t understand the difficulty.
I think you’re handwaving away issues that are dramatically more problematic than you give them credit for.
If you are talking about the technical issues, I think $1-100 billion and 5-20 years is a good cost estimate.
As for the danger issues, yes of course this will be the most powerful and thus most dangerous invention we ever make. The last, really.
There’s also the question of how a shape-shifter thinks with no brain, or just without their normal brain. As Harry Potter noncanonically asked Professor McGonagall, “How can you go on thinking using a cat-sized brain?”
Just noticed that page quotes MoR!