It can simultaneously be the case that whole-brain emulation is unlikely to arise first and that pursuing stable whole-brain emulation is more likely to give rise to a positive singularity than pursuing Friendly AI.
The alleged facts that you cite about software engineering don’t seem relevant here: as far as I know the current state of general artificial intelligence research is still very primitive.
I would refer back to FAWS’s response to one of your comments from nine months ago.
It is possible that whole-brain emulation could be safer—but humans include psychopaths. Having agents whose brains we don’t understand in charge would be a terrible situation—removing one of our safeguards. Human brains are known to be very bad—stupid, unreliable, etc. The idea that engineered machine intelligence would be worse is, of course, possible—but doesn’t seem to be very likely. Engineered machine intelligence would be much more configurable and controllable.
Anyway, the safety of WBE seems likely to be irrelevant—if it is sufficiently likely to be beaten. We can imagine all kinds of time-consuming super-safe fantasy contraptions—but there’s a clock ticking.
Large-scale bioinspiration is used infrequently by engineers: an aeroplane is not a scanned bird, a car is not a scanned horse, a submarine is not a scanned fish, girders are not scanned logs, ropes are not scanned vines—and so on. We scan when we really want a copy. Photos, videos, audio. In this case, a copy is exactly what we don’t want. We have cheap human brains. The main vacancies are for inhuman entities—things like a Google datacentre, or a Roomba—for example.
IMO, we can see from the details of the situation that scanning isn’t going to happen in this case. If we get a fruit-fly brain scanned and working, someone is bound to scale it up 10,000 times, and then hack it into a shape that is useful for something. There are innumerable short-cuts like that on the path—and some of them seem bound to be taken.
Thinking of “general artificial intelligence” as a field is an artefact. “Artificial General Intelligence” is a marketing term used by a break-away group who were apparently irritated by the barriers to presenting at mainstream machine intelligence conferences. The rest of the field is enormous—by comparison with that splinter group—and the efforts of that mainstream on machine learning seem highly relevant to overall development to me.
Machine intelligence research started to get serious in the 1950s. My projected mode “arrival time” [sic] is 2025. If correct, that makes the field about 80% “there”, timewise. Of course, it may not look as though it is close yet—but that could well be an artefact of exponential growth processes, which appear to reach the destination all at once, in a dramatic final surge.
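The “80% there” figure follows from simple elapsed-time arithmetic. A minimal sketch, with my own assumed dates (a ~1956 start for serious research, and the comment being written around 2011—neither date is stated above):

```python
# Hedged sketch of the "80% there, timewise" arithmetic.
# Assumed dates (mine, not the comment's): serious research begins ~1956,
# the comment is written ~2011, projected mode arrival time is 2025.
start, now, arrival = 1956, 2011, 2025

# Fraction of the start-to-arrival interval already elapsed.
fraction = (now - start) / (arrival - start)
print(f"{fraction:.0%}")  # prints "80%"
```

Shifting the assumed start or writing date by a few years moves the figure only modestly, so the rough “80%” claim is not sensitive to the exact choices.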
FAWS’s comment seems practically all wrong to me—each paragraph after the first one.