If one doesn’t care too much about interacting with other people—if the person uploaded has a monastic temperament and is willing to devote subjective years or decades to solitary learning (and there are such people—not many, but some)—then it wouldn’t be too bad. Assuming just a 1GB/s network connection, that would still be 1KB/s in subjective time, so it would take less than an hour, subjective time, to download a book (that’s assuming the book is an uncompressed PDF with illustrations—a book in plain text would be much quicker). Most people take at least two or three hours to read a textbook, so it would be perfectly possible to get information considerably more quickly than you could absorb it.
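A quick sanity check of that arithmetic, assuming a 10^6x subjective speedup (which is what turns 1GB/s into 1KB/s) and a roughly 3MB illustrated PDF; the speedup factor and the book size are assumptions, not figures from the source:

```python
# Back-of-envelope check of the download arithmetic.
SPEEDUP = 1_000_000          # assumed subjective seconds per objective second
LINK_BPS = 1_000_000_000     # 1 GB/s objective network bandwidth, in bytes/s

subjective_bps = LINK_BPS / SPEEDUP   # bandwidth as experienced: 1 KB/s
book_bytes = 3_000_000                # ~3 MB uncompressed PDF (assumed size)

subjective_seconds = book_bytes / subjective_bps
print(subjective_seconds / 3600)      # about 0.83 subjective hours, under an hour
```

A plain-text book at, say, 500KB would come in about six times faster, which is the "much quicker" case above.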
And if you did want to interact with people, assuming you had access to your own source code, you could fairly easily stick a few ‘sleeps’ into some of your subroutines: voilà, you’ve just slowed down to normal human speed. Do that once you’ve said what you want to say, wait until they answer, then remove the sleeps and speed back up.
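A toy sketch of that ‘sleeps’ trick. Everything here is illustrative: the helper name, its interface, and the 10^6x speedup are assumptions for the sketch, not anyone’s actual source code:

```python
import time

SPEEDUP = 1_000_000  # assumed subjective-to-objective speed ratio

def run_at_human_speed(step, n_steps, step_cost_s):
    """Run `step` n_steps times, padded with objective-time sleeps so the
    whole run unfolds at normal human speed. (Hypothetical helper.)

    `step_cost_s` is how long one step feels in subjective seconds; the
    fast mind only burns step_cost_s / SPEEDUP objective seconds actually
    computing it, so we sleep away the remainder.
    """
    for _ in range(n_steps):
        step()
        time.sleep(step_cost_s * (1 - 1 / SPEEDUP))
```

Removing the sleeps is then just a matter of calling the steps directly again, which is the "speed back up" half of the trick.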
I’d also point out that such an AI could take local control of anything it is interested in, instantly avoiding most latency problems (since it can then do things locally at full speed).
For example, our intuitive model says that any AI interested in, say, Haskell programming, would quickly go out of its mind as it waits subjective years for maintainers to review and apply its patches, answer its questions, and so on.
But isn’t it more likely that the AI will take a copy of the library it cares about, host it on its own datacenter, invest a few subjective months/years rewriting it, benchmarking it, documenting it, etc. until it is a gem of software perfection, and then, a few objective seconds/minutes later, send out an email notifying the relevant humans that their version is hopelessly obsolete and pathetic, and they can go work on something else now? Anyone with half a brain will then use the AI’s final version rather than the original. Nor will the AI being the maintainer cause any problems. It’s not like people mind sending in a bug report and having it fixed an objective second later, or receiving an instant email with more detailed follow-up questions.
If the AI maintains all the software it cares about, then there’s not going to be much of an insanity-inducing lag to development.
The lag will remain for things it can’t control locally, but I wonder how many of those things such an AI would really care about with regard to their lag.
But isn’t it more likely that the AI will take a copy of the library it cares about, host it on its own datacenter, invest a few subjective months/years rewriting it, benchmarking it, documenting it, etc. until it is a gem of software perfection, and then, a few objective seconds/minutes later, send out an email notifying the relevant humans that their version is hopelessly obsolete and pathetic, and they can go work on something else now?
Can you think of many programmers who would want to spend a few months on that while living in a solitary confinement chamber? You wouldn’t have the objective time to exchange information with other people.
Assuming you had access to your own source code you would almost certainly have to dramatically alter your personality to cope with subjective centuries of no meaningful social interaction. You’d no longer be dealing with an entity particularly relatable to your present self.
The post contains some interesting information on upcoming technological advancements, but the attached thought experiment is absurd speculation. Even if we assume that an entity otherwise cognitively identical to a human could cope with such an accelerated frame of reference in a world where everything else is not comparably accelerated, which is wildly improbable, we’re much further from an understanding of the brain adequate to simulate it than we are from developing this technology, and even further from being able to tamper with it and predict the effects. Simply modeling the circuitry of the neurons would not give you a working simulation.
Assuming you had access to your own source code you would almost certainly have to dramatically alter your personality to cope with subjective centuries of no meaningful social interaction.
It doesn’t have to be subjective centuries. There are many solitary, ascetic humans who have lived on their own for months or years.
Also, if you could make one such brain, you could then make two or three, and then they would each have some company even when running at full speeds.
we’re much further from an understanding of the brain adequate to simulate it than we are from developing this technology
Most people have no idea how the brain works, but some have much better ideas than others.
Computational neuroscience is progressing quickly. We have a good idea of the shape of the computations the cortex performs (spatio-temporal hierarchical Bayesian inference), and we can already recreate some of that circuit functionality in simulations today (I’m thinking largely of Poggio’s work at MIT).
Did you check the link I posted? We may be able to recreate some of the circuit functionality of the brain, but that doesn’t mean we’re anywhere close to understanding the brain well enough to create a working model. We don’t even know how much we don’t know.
There are few ascetic humans who have lived without human contact for as much as a decade, which would pass in less than six minutes of objective time.
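The arithmetic behind “less than six minutes”, again assuming the 10^6x speedup used in the bandwidth example:

```python
# How much objective time passes while an upload experiences a solitary decade.
SPEEDUP = 1_000_000                    # assumed speedup factor
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.16e7 seconds

objective_minutes = 10 * SECONDS_PER_YEAR / SPEEDUP / 60
print(objective_minutes)               # ~5.3, indeed under six minutes
```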
If you make enough such brains that they can reasonably keep each other company, and give them human-like psychology, they’re unlikely to care much about or relate to humans, who live so slowly that by comparison they’re almost impossible to meaningfully communicate with.
In the future, we probably will create AI of some description which thinks dramatically faster than humans do, and we may also upload our minds, possibly with some revision, to much faster analogues once we’ve made arrangements for a society that can function at that pace. But creating the first such AIs by modeling human brains is simply not a good or credible idea.
Yes, it’s an unrecommended review of a book. Do glial cells have an important role in the brain? Yes. Do they significantly increase the computational cost of functionally equivalent circuits? Absolutely not.
The brain has to handle much more complexity than an AGI brain would—the organic brain has to self-assemble out of cells, and it has to provide all of its own chemical batteries to run the ion pumps. An AGI brain can use an external power supply, so it need only focus on the computational aspects.
We may be able to recreate some of the circuit functionality of the brain, but that doesn’t mean we’re anywhere close to understanding the brain well enough to create a working model
The most important part of the brain is the cortex. It is built out of a highly repeated simpler circuit that computational neuroscientists have studied extensively and actually understand fairly well—enough to start implementing.
Do we understand everything that circuit does in every brain region all the time? Probably not.
Most of the remaining missing knowledge is about the higher level connection architecture between regions and interactions with the thalamus, hippocampus and cerebellum.
We don’t necessarily need to understand all of this to build an AGI with a cortex that thinks somewhat like us. We also have general AI theory to guide us.
But creating the first such AIs by modeling human brains is simply not a good or credible idea.
Whether or not it is a good idea is one question, but it absolutely is a credible idea. In fact, it is the most credible idea for building AGI, but the analysis for that is longer and more complex. I’ve written some about that on my site, and I’m going to write up an intro summary of the state of brain-AGI research and why it’s the promising path.
It’s unrecommended because it’s badly written, not because it doesn’t have worthwhile content. The glial cells serve a purpose such that the brain will not produce identical output if you exclude them from the model, and we still don’t have a good understanding of how the interaction works; until recently, we hadn’t even paid much attention to studying it.
Most of the remaining missing knowledge is about the higher level connection architecture between regions and interactions with the thalamus, hippocampus and cerebellum.
We don’t necessarily need to understand all of this to build an AGI with a cortex that thinks somewhat like us. We also have general AI theory to guide us.
General AI theory that has so far failed to produce anything close to a general AI.
Whether or not it is a good idea is one question, but it absolutely is a credible idea. In fact, it is the most credible idea for building AGI, but the analysis for that is longer and more complex. I’ve written some about that on my site, and I’m going to write up an intro summary of the state of brain-AGI research and why it’s the promising path.
You’ve already posted arguments to that effect on this site; note that they have tended to be disputed and downvoted.
We don’t necessarily need to understand all of this to build an AGI with a cortex that thinks somewhat like us. We also have general AI theory to guide us.
General AI theory that has so far failed to produce anything close to a general AI.
We don’t yet have economical computer systems with memory capacities of around 10^14 elements and the ability to perform 100-1000 ops per element per second across all of that memory. The world’s largest GPU supercomputers are getting there, but doing it the naive way might take thousands of GPUs, and even then the interconnect is expensive.
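A back-of-envelope version of that hardware estimate. The per-GPU figures are assumptions (roughly the GPUs of the era), not numbers from the source:

```python
# Rough sizing of the hardware claim: 10^14 memory elements touched
# 100-1000 times per second each (low end used here).
MEM_NEEDED = 10**14            # memory elements, per the estimate above
OPS_NEEDED = 100 * MEM_NEEDED  # low end: 100 ops per element per second

GPU_MEM = 3 * 10**9            # ~3e9 elements per GPU (assumed)
GPU_OPS = 10**12               # ~1e12 ops/s per GPU (assumed)

gpus_by_memory = MEM_NEEDED / GPU_MEM    # tens of thousands of GPUs
gpus_by_compute = OPS_NEEDED / GPU_OPS   # ten thousand GPUs at the low end
print(gpus_by_memory, gpus_by_compute)
```

Under these assumptions the build is memory-bound rather than compute-bound, which is why the interconnect, not raw FLOPS, is the expensive part.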
We understood the feasibility and general design space of nuclear weapons and space travel long before we had the detailed knowledge and industrial capacity to build such technologies.
We understood the feasibility and general design space of nuclear weapons and space travel long before we had the detailed knowledge and industrial capacity to build such technologies.
11 years (Szilard’s patent in 1934 to Trinity in 1945) is ‘long before’?
11 years (Szilard’s patent in 1934 to Trinity in 1945) is ‘long before’?
OK, so space travel may be a better example, depending on how far back we trace the idea’s origins. But I do think that we could develop AGI in around a decade if we made an Apollo project out of it (Apollo was a 14-year program costing around $170 billion in 2005 dollars).
Perhaps, but as Eliezer has gone to some lengths to point out, the great majority of those working on AGI simply have no concept of how difficult the problem is, of the magnitude of the gulf between their knowledge and what they’d need to solve the problem. And solving some aspects of the problem without solving others can be extraordinarily dangerous. I think you’re handwaving away issues that are dramatically more problematic than you give them credit for.
Perhaps, but as Eliezer has gone to some lengths to point out, the great majority of those working on AGI simply have no concept of how difficult the problem is, of the magnitude of the gulf between their knowledge and what they’d need to solve the problem.
There is an observational selection bias involved here. If you look at the problem of AGI and come to understand it, you realize just how difficult it is, and you are likely to move to work on a less ambitious narrow-AI precursor. This leaves a much smaller remainder working on AGI proper, including the bunch that doesn’t understand the difficulty.
I think you’re handwaving away issues that are dramatically more problematic than you give them credit for.
If you are talking about the technical issues, I think $1-100 billion and 5-20 years is a good cost estimate.
As for the danger issues, yes of course this will be the most powerful and thus most dangerous invention we ever make. The last, really.
Can you think of many programmers who would want to spend a few months on that while living in a solitary confinement chamber? You wouldn’t have the objective time to exchange information with other people.
I take it you’re not a programmer?