writing essays and SF short short stories about digital civilizations which climb to transcendence within a few human days or hours (I have examined your blog); a little vague about exactly what a “positive Singularity” might be, except a future where the good things happen and the bad things don’t.
The most recent post on my blog is indeed a very short story, but it is the only such post. Most of the blog is concerned with particular technical ideas and near-term predictions about the impact of technology on specific fields, namely the video game industry. As a side note, several of the game industry blog posts have been published. The single recent, hastily written story was more about illustrating the out-of-context problem and the speed differential, which I think are the most well-grounded important generalizations we can make about the Singularity at this point. We all must make quick associative judgements to conserve precious thought-time, but please be mindful of generalizing from a single example and lumping my mindstate into the "just like me 15 years ago" category. I'm not trying to take an argumentative stance by saying this, I'm just requesting it: I value your outlook.
Yes, my concept of a positive Singularity is definitely vague, but that of a Singularity less so, and within this one can draw a positive/negative delineation.
But is it rational to anticipate: immortality; existence becoming transcendentally better or worse than it is;
Immortality with the caveat of continuous significant change (evolution in mindstate) is rational, and it is a pretty widely accepted inherent quality of future AGI. Mortality is not an intrinsic property of minds-in-general; it's a particular feature of our evolutionary history. On the whole, there's a reasonable argument that its net utility was greater before the arrival of language and technology.
Uploading is a whole other animal. At this point I think physics permits it, but it will be considerably more difficult than AGI itself and would come sometime after (though of course, time acceleration must be taken into account). However, I do think skepticism is reasonable, and I accept that it may prove to be impossible in principle at some level, even if that proof is not apparent now. (I have one article about uploading and identity on my blog.)
If you haven’t seen them, you should pay a visit to Dale Carrico’s writings on “superlative futurology”.
I will have to investigate Carrico’s “superlative futurology”.
Imagination guides human future. If we couldn’t imagine the future, we wouldn’t be able to steer the present towards it.
there are also many who aspire to something resembling sainthood, and whose notion of what is possible for the current inhabitants of Earth exhibits an interpersonal utopianism hitherto found only in the most benevolent and optimistic religious and secular eschatologies
Yes, and this is the exact branch of transhumanism that I subscribe to, in part simply because I believe it has the most potential, but moreover because I find it has the strongest evolutionary support. That may sound like a strange claim, so I should qualify it.
Worldviews have been evolving since the dawn of language. Realism (the extent to which a worldview is consistent with evidence and actually explains the way the world was, the way it is, and the way it can be in the future) is only one aspect of the fitness landscape that shapes the evolution of worldviews and ideas.
Worldviews also must appeal to our sense of what we want the world to be, as opposed to what it actually is. The scientific worldview is effective exactly because it allows us to think rationally and cleanly divorce is-isms from want-isms.
AGI is a technology that could amplify ‘our’ knowledge and capability to such a degree that it could literally enable ‘us’ to shape our reality in any way ‘we’ can imagine. This statement is objectively true or false, and its veracity has absolutely nothing to do with what we want.
However, any reasonable prediction of the outcome of such technology will necessarily be nearly equivalent to highly evolved religious eschatologies. Humans have had a long, long time to evolve highly elaborate conceptions of what we want the world to become, if we only had the power. A technology that gives us such power will enable us to actualize those previous conceptions.
The future potential of Singularity technologies needs to be evaluated on purely scientific grounds, but everyone must be aware that the outcome and impact of such technologies will necessarily take the shape of our old dreams of transcendence, and this in no way, shape, or form constitutes a legitimate argument concerning the feasibility or timelines of said technologies.
In short, many people, when they hear about the Singularity, reach this irrational conclusion: "that sounds like religious eschatologies I've heard before, therefore it's just another instance of that." You can trace the evolution of ideas and show that the Singularity inherits conceptions of what-the-world-can-become from past gnostic transcendental mythology or Christian utopian millennialism or whatever, but using that to dismiss the predictions themselves is irrational.
I had enthusiasm a decade ago when I was in college, but this faded and receded into the back of my mind. Lately, it has been returning.
I look at the example of someone like Eliezer and I see one who was exposed to the same ideas, in around the same timeframe, but did not relegate them to a dusty shelf and move on with a normal life. Instead he took it upon himself to alert the world and to do what he could to create that better imagined future. I find this admirable.
But enthusiasm for spreading the singularity gospel, the desire to set the world aflame with the “knowledge” of immortality through mind uploading (just one example)… that, almost certainly, achieves nothing deeply useful.
Naturally, I strongly disagree, but I'm confused as to whether you doubt 1) that the world's outcome would improve with greater awareness, or 2) that increasing awareness is worth any effort.
I think this is just unrealistic, and usually the product of some young person who realizes that maybe they can save themselves and their friends from death and drudgery if all this comes to pass, so how can anyone not be interested in it?
Most people are interested in it. Last I recall, well over 50% of Americans are Christians and believe that just through acceptance of a few rather simple memes and living a good life, they will be rewarded with an unimaginably good afterlife.
I've personally experienced introducing the central idea to previously unexposed people in the general atheist/agnostic camp, and seeing it catch on. I wonder if you have had similar experiences.
I was once at a party at some film producer's house, and I saw The Singularity is Near sitting alone as a centerpiece on a bookstand as you walk in. It made me realize that perhaps there is hope for wide-scale recognition in a reasonable timeframe. Ideas can move pretty fast in this modern era.
Computing hardware is a fact, but consciousness in a program is not yet a fact and
I’ve yet to see convincing arguments showing “consciousness in a program is impossible”, and at the moment I don’t assign special value to consciousness as distinguishable from human level self-awareness and intelligence.
The idea of the first superintelligent process following a particular utility function explicitly selected to be the basis of a humane posthuman order I consider to be a far more logical approach to achieving the best possible outcome, than just wanting to
My position is not to just "promote the idea of immortality through mind uploading, or reverse engineering the brain"—those are only some specific component ideas, although they are important. But I do believe promoting overall awareness does increase the probability of a positive outcome.
I agree with the general idea of ethical or friendly AI, but I find some of the details sorely lacking. Namely, how do you compress a supremely complex concept, such as a "humane posthuman order" (which itself is a funny play on words—don't you think) into a simple particular utility function? I have not seen even the beginnings of a rigorous analysis of how this would be possible in principle. I find this to be the largest defining weakness in the SIAI's current mission.
To put it another way: whose utility function?
To many technical, Singularity-aware outsiders (such as myself) reading into FAI theory for the first time, the idea that the future of humanity can be simplified down into a single utility function or a transparent, cleanly causal goal system appears delusional at best, and potentially dangerous.
I find it far more likely (and I suspect that most of the Singularity-aware mainstream agrees), that complex concepts such as “humane future of humanity” will have to be expressed in human language, and the AGI will have to learn them as it matures in a similar fashion to how human minds learn the concept. This belief is based on reasonable estimates of the minimal information complexity required to represent concepts. I believe the minimal requirements to represent even a concept as simple as “dog” are orders of magnitude higher than anything that could be cleanly represented in human code.
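The "orders of magnitude" claim can be made concrete with a rough, purely illustrative calculation. The figures below are assumptions for the sake of the sketch (the model size is a ballpark, not a measurement), comparing a crisp dictionary-style rule for "dog" against the parameter count of a small learned classifier that can actually recognize dogs:

```python
import math

# Back-of-envelope comparison (hypothetical, illustrative numbers only):
# how much information might go into representing a concept like "dog"
# as a hand-written rule versus as a learned statistical model.

definition = (
    "A domesticated carnivorous mammal with a long snout, "
    "an acute sense of smell, and a barking voice."
)
handwritten_bytes = len(definition.encode("utf-8"))  # a dictionary-style rule

# A small image classifier that can reliably pick out dogs in photos
# typically needs millions of learned parameters (assume ~5e6 at 4 bytes each).
learned_bytes = 5_000_000 * 4

orders_of_magnitude = round(math.log10(learned_bytes / handwritten_bytes))

print(f"hand-written rule: ~{handwritten_bytes} bytes")
print(f"learned model:     ~{learned_bytes:,} bytes")
print(f"gap: roughly {orders_of_magnitude} orders of magnitude")
```

Whatever the exact numbers, the gap between the two representations is the point: the verbal rule does not actually contain enough information to recognize a dog.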
However, the above criticism is in the particulars of implementation, and doesn’t cause disagreement with the general idea of FAI or ethical AI. But as far as actual implementation goes, I’d rather support a project exploring multiple routes, and brain-like routes in particular—not only because there are good technical reasons to believe such routes are the most viable, but because they also accelerate the path towards uploading.
I agree with the general idea of ethical or friendly AI, but I find some of the details sorely lacking. Namely, how do you compress a supremely complex concept, such as a "humane posthuman order" (which itself is a funny play on words—don't you think) into a simple particular utility function? I have not seen even the beginnings of a rigorous analysis of how this would be possible in principle.
Ironically, the idea involves reverse-engineering the brain—specifically, reverse-engineering the basis of human moral and metamoral cognition. One is to extract the essence of this, purifying it of variations due to the contingencies of culture, history, and the genetics and life history of the individual, and then extrapolate it until it stabilizes. That is, the moral and metamoral cognition of our species is held to instantiate a self-modifying decision theory, and the human race has not yet had the time or knowledge necessary to take that process to its conclusion. The ethical heuristics and philosophies that we already have are to be regarded as approximations of the true theory of right action appropriate to human beings. CEV is about outsourcing this process to an AI which will do neuroscience, discover what we truly value and meta-value, and extrapolate those values to their logical completion. That is the utility function a friendly AI should follow.
I’ll avoid returning to the other issues for the moment since this is the really important one.
I agree with your general elucidation of the CEV principle, but this particular statement stuck out like a red flag:
One is to extract the essence of this, purifying it of variations due to the contingencies of culture, history,
Our morality and ‘metamorality’ already exists, the CEV in a sense has already been evolving for quite some time, but it is inherently a cultural & memetic evolution that supervenes on our biological brains. So purging it of cultural variations is less than wrong—it is cultural.
The flaw then is assuming there is a single evolutionary target for humanity’s future, when in fact the more accurate evolutionary trajectory is adaptive radiation. So the C in CEV is unrealistic. Instead of a single coherent future, we will have countless many, corresponding to different universes humans will want to create and inhabit after uploading.
There will be convergent cultural effects (trends we see now), but there will also be powerful divergent effects imposed by the speed of light once posthuman minds start thinking thousands or millions of times accelerated. This is a constraint of physics with interesting implications; more on this toward the bottom of this post.
If one single religion and culture had taken over the world, a universal CEV might have a stronger footing. The dominant religious branch of the west came close, but not quite.
It's more than just a theory of right action appropriate to human beings; it's also a question of what you do with all the matter, how you divide resources, political and economic structure, and so on.
Given the success of Christianity and related worldviews, we have some guess at features of the CEV: people generally will want immortality in virtual-reality paradises, and they are quite willing (even happy) to trust an intelligence far beyond their own to run the show, though they have a particular interest in seeing it take a human face. Also, even though willing to delegate ultimate authority, they will want to take an active role in helping shape universes.
The other day I was flipping through channels and happened upon some late-night Christian preacher channel. He was talking about the New Jerusalem and all that, and there was one bit I found amusing: he said humans would join God's task force and help shape the universe, and would be able to zip from star system to star system without anything as slow or messy as a rocket.
I found this amusing because, in a way, it's accurate: physical space travel will be too slow for beings that think a million times accelerated and have molecular-level computers for virtual-reality simulation.
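The divergence-by-light-speed point above is simple arithmetic. A sketch, using a hypothetical million-fold speedup and everyday distances, shows how even planetary-scale communication becomes subjectively glacial:

```python
# Back-of-envelope: subjective communication lag for accelerated minds.
# Assumptions (illustrative figures): a mind running at a 1,000,000x speedup,
# exchanging one-way signals over ordinary planetary distances.

C = 299_792_458  # speed of light, m/s

def subjective_delay(distance_m, speedup):
    """Subjective seconds experienced while a one-way signal is in flight."""
    return (distance_m / C) * speedup

speedup = 1_000_000

# Opposite side of Earth (~20,000 km, roughly)
earth = subjective_delay(2.0e7, speedup)
# Earth to Moon (~384,000 km)
moon = subjective_delay(3.84e8, speedup)

print(f"across Earth: {earth / 3600:.1f} subjective hours")
print(f"to the Moon:  {moon / 86400:.1f} subjective days")
```

At that speedup, a message to the other side of the planet takes the better part of a subjective day to arrive, and a lunar round trip takes subjective weeks—local communities of fast minds would naturally drift apart culturally, which is the divergent pressure claimed above.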
Our morality and ‘metamorality’ already exists, the CEV in a sense has already been evolving for quite some time, but it is inherently a cultural & memetic evolution that supervenes on our biological brains. So purging it of cultural variations is less than wrong—it is cultural.
Existing human cultures result from the cumulative interaction of human neurogenetics with the external environment. CEV as described is meant to identify the neurogenetic invariants underlying this cultural and memetic evolution, precisely so as to have it continue in a way that humans would desire. The rise of AI requires that we do this explicitly, because of the contingency of AI goals. The superior problem-solving ability of advanced AI implies that advanced AI will win in any deep clash of directions with the human race. Better to ensure that this clash does not occur in the first place, by setting the AI’s initial conditions appropriately, but then we face the opposite problem: if we use current culture (or just our private intuitions) as a template for AI values, we risk locking in our current mistakes. CEV, as a strategy for Friendly AI, is therefore a middle path between gambling on a friendly outcome and locking in an idiosyncratic cultural notion of what’s good: you try to port the cognitive kernel of human ethical progress (which might include hardwired metaethical criteria of progress) to the new platform of thought. Anything less risks leaving out something essential, and anything more risks locking in something inessential (but I think the former risk is far more serious).
Mind uploading is another way you could try to humanize the new computational platform, but I think there’s little prospect of whole human individuals being copied intact to some new platform, before you have human-rivaling AI being developed for that platform. (One might also prefer to have something like a theory of goal stability before engaging in self-modification as an uploaded individual.)
Instead of a single coherent future, we will have countless many, corresponding to different universes humans will want to create and inhabit after uploading.
I think we will pass through a situation where some entity or coalition of entities has absolute power, thanks primarily to the conjunction of artificial intelligence and nanotechnology. If there is a pluralistic future further beyond that point, it will be because the values of that power were friendly to such pluralism.