I read that article once, and some parts of it more than once, but I still fail to see how it's relevant here. It must be, though, since two people have already linked to it.
The point is that even moderate intelligence, sped up enough, can potentially yield large gains. For example, if you took a moderately smart human (say, an average Less Wrongian) and made them think a hundred times as fast, they'd be pretty damn productive, even if their overall creativity were not much higher. Now, we don't know the minimum processing power it takes to run an intelligence. Imagine, for example, that it turned out you could simulate a roughly human-level intelligence in real time on an old 486, and that the main obstacle was just figuring out the algorithms. In that case, a cheap commercial machine today could run that AI at around a thousand times human speed. Now, you may object that you find it implausible that an AI could run in real time on a 486. That's fine. Do you think it's plausible it could run on a machine today if we had the algorithms? OK. Then imagine we find those algorithms twenty years from now: the same end result. Unless you believe we will coincidentally discover how to build general AI at just the moment we have precisely the processing power to run it, AIs will likely be quite fast little buggers.
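The back-of-envelope arithmetic behind that "thousand times as fast" claim can be sketched as follows. The throughput figures here are rough illustrative assumptions (not measurements), chosen only to show the shape of the argument:

```python
# Back-of-envelope speedup estimate: if an AI ran in real time on a 486,
# how much faster would it run on later commodity hardware?
# Both MIPS figures below are rough, illustrative assumptions.

mips_486 = 54            # assumed: roughly a 486DX2-66 class machine
mips_modern = 50_000     # assumed: a later commodity desktop CPU

speedup = mips_modern / mips_486
print(f"Estimated speedup over real time: ~{speedup:.0f}x")
```

The exact numbers don't matter; the point is that any large hardware overhang at the moment the algorithms arrive translates directly into a many-fold subjective speedup.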
That's a misconception. We're not trying to simulate human or human-like brains. In my opinion, neural networks and the like are dead ends. The AI project I'm currently working on will (theoretically) be able to run on any machine. The thing is, on a very fast machine it can spend extra time analyzing problems, while on a slow one it will probably have to spend most of its time figuring out how to solve the problem without wasting so much power. So yes, there is a definite advantage to speed, but the system will always be as efficient as possible given the power it has. So measuring intelligence by how well it does compared to a human isn't practical; by that standard, a calculator could be argued to be thousands of times faster than a human.
That's a response that relies on specific models of AI. If one can construct any AI that functionally resembles a human, then speed of this sort will matter.
To actually simulate the brain, you have to simulate all its complex chemical reactions and neurons. You can simplify by simulating only the algorithms the neurons implement, but that's still ten billion things to update every millisecond or less. To make it faster you could use hashing, cut out unnecessary parts, use compression, and so on, but it's still too much for the hardware we have to work with now. Even if you let the human you're simulating modify his own program, that only makes things worse, since he could easily make a mistake. You can take the processes the brain performs, or appears to perform, and model them in a computer. You might achieve the same results or better, but it's not a human. That's not the point, though. I don't want a computer that can do only the things I can do and nothing else; I want one that is good at things I can't do. Essentially, all I care about is results. To get them I will have to use an entirely different system, and then you're comparing apples and oranges. A human brain doesn't run on a serial machine, so you can't compare the two systems.
To simulate a neuron you don't necessarily need to simulate every chemical reaction inside it. We have pretty decent models of how neurons behave. While there are serious potential gaps in our understanding of the brain (for example, there's evidence that glial cells matter for cognition, but we don't really understand what they are doing), we don't need to model every chemical reaction to make a good approximation. Yes, that is still a lot of simulation, but that's only a reason we can't do it now; it isn't a reason we can't do it in the future.
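It's worth making the "still a lot of simulation" concrete. Taking the ten-billion-neurons-per-millisecond figure from above and adding an assumed per-update cost (the 100 operations per update is my own order-of-magnitude placeholder, not an established number), the total works out as:

```python
# Rough order-of-magnitude cost of a simplified whole-brain simulation.
# The counts and rates are assumptions for illustration, not measurements.

neurons = 10**10          # ~10 billion neurons (the figure used above)
updates_per_sec = 1_000   # assumed: one state update per neuron per millisecond
ops_per_update = 100      # assumed: a simplified neuron model, ~100 ops/update

ops_per_sec = neurons * updates_per_sec * ops_per_update
print(f"~{ops_per_sec:.0e} operations per second required")
```

Even under these generous simplifications the requirement lands around 10^15 operations per second, which illustrates both why this is out of reach for commodity hardware today and why it is a hardware problem rather than a problem in principle.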
Even if you let the human you're simulating modify his own program, that only makes things worse, since he could easily make a mistake. You can take the processes the brain performs, or appears to perform, and model them in a computer. You might achieve the same results or better, but it's not a human. That's not the point, though. I don't want a computer that can do only the things I can do and nothing else; I want one that is good at things I can't do.
I'm confused about how your statements here relate to what we were discussing earlier in this thread about efficiency. Having many human-brain equivalents running much faster than humans would still help a lot. Since the earlier claim was about using these entities to improve technologies, it should be clear that having them would help enormously. To give one somewhat futuristic (and ethically questionable) example: if every desktop had a system that let you either simulate one brain a hundred times as fast as a human, or simulate a hundred brains at normal speed, do you not think that technology would be very helpful?
Having many human-brain equivalents running much faster than humans would still help a lot.
But unless it can be done (and there's that dang speed-of-light thing, as well as our lack of understanding of our own brains), it's not practical. I'm confused about where this argument is going, but my original point was to defend simple systems that are not based on biology in any way beyond the occasional genetic algorithm. If you have a way to build a "brain box," no one's stopping you; go ahead (well, there are ethical considerations, but you could get around those if you dropped emotions and the like). Ever heard of Eurisko? (It's actually how I found this site.) It achieved impressive engineering feats without being based in any way on actual models of the brain.
I'm confused about where this argument is going, but my original point was to defend simple systems that are not based on biology in any way beyond the occasional genetic algorithm.
This is confusing, given that a few posts up we were discussing how AI would improve efficiency on many different levels, starting with your initial post in this thread. The point is that fast simulated brains will produce a greater increase in efficiency than the same humans thinking through those ideas slowly. And the upshot is that this logic works fine even for an AI that isn't a simulation of a human brain, as long as it can act at least like a minimally scientifically productive human.
I'm familiar with Eurisko, but I don't see how it is at all relevant here.