I have a compute-market startup called vast.ai, and I’m working towards aligned AI. Currently seeking networking, collaborators, and hires—especially top-notch CUDA/GPU programmers.
My personal blog: https://entersingularity.wordpress.com/
My father was a forensic psychiatrist heavily involved in some of these cases, testifying for the defense of the accused. The moral panic phenomenon is real and complex, but there’s a more basic failure of rationality underlying the whole movement: the false belief in the inherent veracity of children.
Apparently juries and judges alike took the testimony of children at face value. The problem was that the investigative techniques of the social workers invariably elicited the desired reactions in the children. In law you have the concept of leading the witness, but that doesn’t apply to investigations of child abuse. The children are taken away from their parents and basically locked up with the investigators until they tell them what they want to hear. It wasn’t even necessarily deliberate—from what I understand, in many cases the social workers just had a complete lack of understanding of how they were conditioning the children to fabricate complex and in many cases outright ridiculous stories. It’s amazing how similar the whole scare was to historical accounts of the witch trials. Although as far as I know, in the recent scare nobody was put to death (but I could be wrong about that, and incalculable damage was certainly done nonetheless).
What is the outcome that you want to socially engineer into existence? What is it that you want the world to realize?
A global positive Singularity, as opposed to annihilation or the many other likely scenarios.
I’m reading through and catching up on this thread, and rather strongly agree with your statement:
Eliezer and others at SIAI to assign (relatively) large amounts of probability mass to the scenario of a small set of people having some “insight” which allows them to suddenly invent AGI in a basement. In other words, they tend to view AGI as something like an unsolved math problem, like those on the Clay Millennium list, whereas it seems to me like a daunting engineering task analogous to colonizing Mars (or maybe Pluto).
However, pondering it again, I realize there is an epistemological spectrum ranging from math on the one side to engineering on the other. Key insights into new algorithms can undoubtedly speed up progress, and such new insights often can be expressed as pure math, but at the end of the day it is a grand engineering (or reverse engineering) challenge.
However, I’m somewhat taken aback when you say, “the notion that AGI is only decades away, as opposed to a century or two.”
A century or two?
AIXI-shaped magic bullet?
AIXI’s contribution is more philosophical than practical. I find a depressing over-emphasis here on Bayesian probability theory as the ‘math’ of choice versus computational complexity theory, which is the proper domain.
The most likely outcome of a math breakthrough will be some rough lower and/or upper bounds on the shape of the intelligence-versus-space/time-complexity function. And right now the most likely bet seems to be that the brain is pretty well optimized at the circuit level, and that the best we can do is reverse engineer it.
EY and the math folk here reach a very different conclusion, but I have yet to find his well-considered justification. I suspect that the major reason the mainstream AI community doesn’t subscribe to SIAI’s math-magic-bullet theory is that they hold the same position outlined above: i.e., that when we get the math theorems, all they will show is what we already suspect: human-level intelligence requires X memory bits and Y bit ops/second, where X and Y are roughly close to brain levels.
This, if true, kills the entirety of the software recursive self-improvement theory. The best that software can do is approach the theoretical optimum complexity class for the problem, and then after that point all one can do is fix it into hardware for a further large constant gain.
I explore this a little more here
Edge detection is rather trivial. Visual recognition, however, is not, and there certainly are benchmarks and comparable results in that field. Have you browsed the recent pubs of Poggio et al. at the MIT vision lab? There is lots of recent progress, with results matching human levels on quick recognition tasks.
Also, vision is not a tiny part of intelligence. It’s the single largest functional component of the cortex, by far. The cortex uses the same essential low-level optimization algorithm everywhere, so understanding vision at the detailed level is a good step towards understanding the whole thing.
And finally and most relevant for AGI, the higher visual regions also give us the capacity for visualization and are critical for higher creative intelligence. Literally all scientific discovery and progress depends on this system.
“visualization is the key to enlightenment” and all that
Why is AGI a math problem? What is abstract about it?
We don’t need math proofs to know if AGI is possible. It is; the brain is living proof.
We don’t need math proofs to know how to build AGI—we can reverse engineer the brain.
This view arises from what I understand about the “modular” nature of the human brain: we think we’re a single entity that is “flexible enough” to think about lots of different things, but in reality our brains consist of a whole bunch of highly specialized “modules”, each able to do some single specific thing.
The brain has many different components with specializations, but the largest and (in humans) dominant portion, the cortex, is not really specialized at all in the way you outline.
The cortex is no more specialized than your hard drive.
It’s composed of a single repeating structure and associated learning algorithm that appears to be universal. The functional specializations that appear in the adult brain arise due to topological wiring proximity to the relevant sensory and motor connections. The V1 region is not hard-wired to perform mathematically optimal Gabor-like edge filters. It automatically evolves into this configuration because it is the optimal configuration for modelling the input data at that layer, and it evolves thus solely based on exposure to said input data from retinal ganglion cells.
You can think of cortical tissue as a biological ‘neuronium’. It has a semi-magical emergent capacity to self-organize into an appropriate set of feature detectors based on what it’s wired to. (more on this)
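To make the self-organization claim concrete, here is a minimal sketch in Python (numpy only) of the general idea: a generic Hebbian-style rule with no edge detector built in converges onto an oriented, edge-like filter purely from exposure to edge-structured input. The synthetic patch generator and all parameters are illustrative assumptions, and Oja’s rule here is a toy stand-in for whatever the cortex actually does:

```python
import numpy as np

rng = np.random.default_rng(0)
P = 8  # patches are P x P pixels

def random_edge_patch():
    """Synthetic edge-structured input: a randomly oriented, offset step edge."""
    angle = rng.uniform(0.0, np.pi)
    offset = rng.uniform(-2.0, 2.0)
    ys, xs = np.mgrid[0:P, 0:P] - (P - 1) / 2.0
    signed_dist = xs * np.cos(angle) + ys * np.sin(angle) - offset
    patch = np.tanh(signed_dist)           # smooth step across the edge
    return (patch - patch.mean()).ravel()  # zero-mean, flattened to a vector

# One linear 'neuron' with random initial weights, trained by Oja's rule:
# plain Hebbian learning plus a normalizing decay term.
w = rng.normal(size=P * P)
w /= np.linalg.norm(w)
lr = 0.01

for _ in range(20_000):
    x = random_edge_patch()
    y = w @ x                  # neuron's response to this patch
    w += lr * y * (x - y * w)  # Oja's rule: Hebbian term minus decay

# Reshaped back to a patch, the learned weights form an oriented,
# edge-like filter; the particular orientation it settles on is arbitrary.
print(np.round(w.reshape(P, P), 2))
```

The point of the toy is the division of labor: the shape of the learned filter comes from the statistics of the input data, not from anything hand-wired into the learning rule.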
All that being said, the inter-regional wiring itself is currently less understood and is probably more genetically predetermined.
Learning is the capacity to build complex unconscious machinery for dealing with novel problems. That’s the whole point of AGI.
And learning is equivalent to absorbing memes. The two are one and the same.
Don’t you realize the default scenario?
The default scenario is some startup or big company or mix thereof develops strong AGI for commercialization, attempts to ‘control’ it, fails, and inadvertently unleashes a god upon the earth. To a first approximation, the type of AGI we are discussing here could just be called a god. Nanotechnology is based on science, but it will seem like magic.
The question then is what kind of god do we want to unleash.
Religions are worldviews. The Singularity is also a worldview, and one whose future prediction is quite different from the older, more standard linear atheist scientific worldview, where the future is unknown but probably like the past, AI has no role, etc.
I read the “by (some) definition” and I find it actually supports the cluster-mapping utility of the god term as it applies to AIs. “Scary powerful optimization process” just doesn’t instantly convey the proper power relation.
Nonetheless, I do consider your point about public image to be important. But I’m not convinced that one needs to hide fully behind the accepted confines of the scientific magisterium and avoid the unspoken words.
Science tells us how the world was, is, and can become. Religion/Mythology/Science Fiction tells us what people want the world to be.
Understanding the latter domain is important for creating good AI and CEV and all that.
Fine, if you take memes to be just symbolic-level transferable knowledge (which, thinking it over, I agree with), then at a more detailed level learning involves several sub-processes, one of which is the rapid transfer of memes into short-term memory.
A ‘few clues’ sounds like a gross underestimation. It is the only working example, so it certainly contains all the clues, not just a few. The question of course is how much of a shortcut is possible. The answer to date seems to be: none to slim.
I agree engineers reverse engineering will succeed way ahead of full emulation, that wasn’t my point.
Edit: removed a faulty argument at the end pointed out by wedrifid.
I am talking about optimality for AGI in particular with respect to circuit complexity, with the typical assumption that a synapse is vaguely equivalent to a transistor, or at most maybe ten transistors. If you compare on that level, the brain looks extremely efficient given how slow the neurons are. Does this make sense?
The brain’s circuits have around 10^15 transistor equivalents running at a speed of 10^3 cycles per second: roughly 10^18 transistor-cycles per second.
A typical modern CPU has 10^9 transistors running at a speed of 10^9 cycles per second: also roughly 10^18 transistor-cycles per second.
Our CPUs’ strength is not their circuit architecture or software—it’s the raw speed of CMOS, a million-X substrate advantage. The learning algorithm, the way in which the cortex rewires in response to input data, appears to be a pretty effective universal learning algorithm.
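As a sanity check on those round numbers, here is a back-of-envelope sketch; the synapse count, the transistors-per-synapse equivalence, and the clock rates are all rough assumptions carried over from the argument above, not measurements:

```python
# Back-of-envelope comparison of raw switching throughput. The
# synapse-to-transistor equivalence (here 10 transistors per synapse)
# and the round-number clock rates are assumptions, not measurements.
synapses = 1e14               # ~10^14 synapses in a human brain
transistors_per_synapse = 10  # assumed upper-end equivalence -> ~10^15 total
brain_rate_hz = 1e3           # neurons update at roughly kHz timescales

cpu_transistors = 1e9         # a typical modern CPU
cpu_rate_hz = 1e9             # ~GHz clock

brain_throughput = synapses * transistors_per_synapse * brain_rate_hz
cpu_throughput = cpu_transistors * cpu_rate_hz

print(f"brain: {brain_throughput:.0e} transistor-cycles/s")          # ~1e+18
print(f"cpu:   {cpu_throughput:.0e} transistor-cycles/s")            # ~1e+18
print(f"substrate speed ratio: {cpu_rate_hz / brain_rate_hz:.0e}x")  # ~1e+06
```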
I liked this, will reply when I have a chance.
It’s certainly a possibility, ranging from the terrifying, if it’s created as something like a central intelligence agent, to the beneficial, if it’s created as a more transparent public achievement, like landing on the moon.
The potential for an arms race seems to contribute to the possibility of doom.
The government seems on par with the private sector in terms of likelihood, but I don’t have a strong notion of that. At this point it is already some sort of blip on their radar, even if small.
Then some questions: how long would Moore’s law have to continue into the future with no success in AGI for that to show that the brain is well optimized for AGI at the circuit level?
I’ve made some attempts to show rough bounds on the brain’s efficiency; are you aware of some other approach or estimate?
writing essays and SF short short stories about digital civilizations which climb to transcendence within a few human days or hours (I have examined your blog); a little vague about exactly what a “positive Singularity” might be, except a future where the good things happen and the bad things don’t.
The most recent post on my blog is indeed a very short story, but it is the only such post. Most of the blog is concerned with particular technical ideas and near-term predictions about the impact of technology on specific fields, namely the video game industry. As a side note, several of the game industry blog posts have been published. The single recent, hastily written story was more about illustrating the out-of-context problem and the speed differential, which I think are the most well-grounded important generalizations we can make about the Singularity at this point. We all must make quick associative judgements to conserve precious thought-time, but please be mindful of generalizing from a single example and lumping my mindstate into the “just like me 15 years ago” bin. I’m not trying to take an argumentative stance by saying this, I’m just requesting it: I value your outlook.
Yes, my concept of a positive Singularity is definitely vague, but that of a Singularity less so, and within this one can draw a positive/negative delineation.
But is it rational to anticipate: immortality; existence becoming transcendentally better or worse than it is;
Immortality with the caveat of continuous significant change (evolution in mindstate) is rational, and it is a pretty widely accepted inherent quality of future AGI. Mortality is not an intrinsic property of minds-in-general; it’s a particular feature of our evolutionary history. On the whole, there’s a reasonable argument that its net utility was greater before the arrival of language and technology.
Uploading is a whole other animal, and at this point I think physics permits it, but it will be considerably more difficult than AGI itself and would come sometime after (but of course, time acceleration must be taken into account). However, I do think skepticism is reasonable, and I accept that it may prove to be impossible in principle at some level, even if this proof is not apparent now. (I have one article about uploading and identity on my blog)
If you haven’t seen them, you should pay a visit to Dale Carrico’s writings on “superlative futurology”.
I will have to investigate Carrico’s “superlative futurology”.
Imagination guides human future. If we couldn’t imagine the future, we wouldn’t be able to steer the present towards it.
there are also many who aspire to something resembling sainthood, and whose notion of what is possible for the current inhabitants of Earth exhibits an interpersonal utopianism hitherto found only in the most benevolent and optimistic religious and secular eschatologies
Yes, and this is the exact branch of transhumanism that I subscribe to, in part simply because I believe it has the most potential, but moreover because I find it has the strongest evolutionary support. That may sound like a strange claim, so I should qualify it.
Worldviews have been evolving since the dawn of language. Realism, the extent to which a worldview is consistent with evidence and actually explains the way the world was, is, and can be in the future, is only one aspect of the fitness landscape which shapes the evolution of worldviews and ideas.
Worldviews also must appeal to our sense of what we want the world to be, as opposed to what it actually is. The scientific worldview is effective exactly because it allows us to think rationally and cleanly divorce is-isms from want-isms.
AGI is a technology that could amplify ‘our’ knowledge and capability to such a degree that it could literally enable ‘us’ to shape our reality in any way ‘we’ can imagine. This statement is objectively true or false, and its veracity has absolutely nothing to do with what we want.
However, any reasonable prediction of the outcome of such technology will necessarily be nearly equivalent to highly evolved religious eschatologies. Humans have had a long, long time to evolve highly elaborate conceptions of what we want the world to become, if we only had the power. A technology that gives us such power will enable us to actualize those previous conceptions.
The future potential of Singularity technologies needs to be evaluated on purely scientific grounds, but everyone must be aware that the outcome and impact of such technologies will necessarily take the shape of our old dreams of transcendence, and this in no way, shape, or form is anything resembling a legitimate argument concerning the feasibility and timelines of said technologies.
In short, many people, when they hear about the Singularity, reach this irrational conclusion—“that sounds like religious eschatologies I’ve heard before, therefore it’s just another instance of that.” You can trace the evolution of ideas and show that the Singularity inherits conceptions of what-the-world-can-become from past Gnostic transcendental mythology or Christian utopian millennialism or whatever, but using that to dismiss the predictions themselves is irrational.
I had enthusiasm a decade ago when I was in college, but it faded and receded into the back of my mind. Lately, it has been returning.
I look at the example of someone like Eliezer and I see one who was exposed to the same ideas, in around the same timeframe, but did not relegate them to a dusty shelf and move on with a normal life. Instead he took it upon himself to alert the world and attempt to do what he could to create that better imagined future. I find this admirable.
But enthusiasm for spreading the singularity gospel, the desire to set the world aflame with the “knowledge” of immortality through mind uploading (just one example)… that, almost certainly, achieves nothing deeply useful.
Naturally, I strongly disagree, but I’m confused as to whether you doubt 1) that the world outcome would improve with greater awareness, or 2) that increasing awareness is worth any effort.
I think is just unrealistic, and usually the product of some young person who realizes that maybe they can save themselves and their friends from death and drudgery if all this comes to pass, so how can anyone not be interested in it?
Most people are interested in it. Last I recall, well over 50% of Americans are Christians and believe that just through acceptance of a few rather simple memes and living a good life, they will be rewarded with an unimaginably good afterlife.
I’ve personally experienced introducing the central idea to previously unexposed people in the general atheist/agnostic camp, and seeing it catch on. I wonder if you have had similar experiences.
I was once at a party at some film producer’s house and I saw The Singularity is Near sitting alone as a centerpiece on a bookstand as you walk in, and it made me realize that perhaps there is hope for wide-scale recognition in a reasonable timeframe. Ideas can move pretty fast in this modern era.
Computing hardware is a fact, but consciousness in a program is not yet a fact and
I’ve yet to see convincing arguments showing “consciousness in a program is impossible”, and at the moment I don’t assign special value to consciousness as distinguishable from human-level self-awareness and intelligence.
The idea of the first superintelligent process following a particular utility function explicitly selected to be the basis of a humane posthuman order I consider to be a far more logical approach to achieving the best possible outcome, than just wanting to
My position is not to just “promote the idea of immortality through mind uploading, or reverse engineering the brain”—those are only some specific component ideas, although they are important. But I do believe promoting overall awareness does increase the probability of a positive outcome.
I agree with the general idea of ethical or friendly AI, but I find some of the details sorely lacking. Namely, how do you compress a supremely complex concept, such as a “humane posthuman order” (which itself is a funny play on words—don’t you think), into a simple particular utility function? I have not seen even the beginnings of a rigorous analysis of how this would be possible in principle. I find this to be the largest defining weakness in SIAI’s current mission.
To put it another way: whose utility function?
To many technical, Singularity-aware outsiders (such as myself) reading into FAI theory for the first time, the idea that the future of humanity can be simplified down into a single utility function or a transparent, cleanly causal goal system appears to be delusional at best, and potentially dangerous.
I find it far more likely (and I suspect that most of the Singularity-aware mainstream agrees), that complex concepts such as “humane future of humanity” will have to be expressed in human language, and the AGI will have to learn them as it matures in a similar fashion to how human minds learn the concept. This belief is based on reasonable estimates of the minimal information complexity required to represent concepts. I believe the minimal requirements to represent even a concept as simple as “dog” are orders of magnitude higher than anything that could be cleanly represented in human code.
However, the above criticism is in the particulars of implementation, and doesn’t cause disagreement with the general idea of FAI or ethical AI. But as far as actual implementation goes, I’d rather support a project exploring multiple routes, and brain-like routes in particular—not only because there are good technical reasons to believe such routes are the most viable, but because they also accelerate the path towards uploading.
The success or failure of adding more hardware might give an indication of how hard it is to find the target of intelligence in the search space
For every computational system and algorithm, there is a minimum level of space-time complexity in which this system can be encoded. As of yet we don’t know how close the brain is to the minimum space-time complexity design for an intelligence of similar capability.
Let’s make the question more specific: what’s the minimum bit representation of a human-equivalent mind? If you think the brain is far off that, how do you justify that?
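For a rough sense of scale, here is a back-of-envelope sketch of the brain-side number; both figures are illustrative assumptions, and the true minimum could sit well below this raw total if the brain’s representation is highly redundant:

```python
# Crude estimate of the raw bits needed to specify a brain-like synaptic
# network. Both numbers are illustrative assumptions, not measurements.
synapses = 1e14        # commonly cited order of magnitude for the human brain
bits_per_synapse = 5   # assume ~32 distinguishable weight levels per synapse

total_bits = synapses * bits_per_synapse
print(f"~{total_bits:.0e} bits, i.e. about {total_bits / 8 / 1e12:.0f} TB")
```

Whether a human-equivalent mind can be compressed far below that raw figure is exactly the open question.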
Of course more hardware helps: it allows you to search through the phase space faster. Keep in mind the enormity of the training time.
I happen to believe the problem is ‘mostly down to software’, but I don’t see that as a majority view—the Moravec/Kurzweil view that we need brain-level hardware (within an order of magnitude or so) seems to be the majority position at this point.
I am fascinated by applying the ethic of reciprocity to simulationism, but is a bidirectional transfer the right approach?
Can we deduce the ethics of our simulator with respect to simulations by reference to how we wish to be simulated? And is that the proper ethics? This would be projecting the ethics up.
Or rather should we deduce the proper ethics from how we appear to be simulated? This would be projecting the ethics down.
The latter approach would lead to a different set of simulation ethics, probably based more on historicity and utility, e.g. “Simulations should be historically accurate.” This would imply that simulating past immorality and tragedy is not unethical if it is accurate.
Greetings All.
I’ve been a Singularitarian since my college years more than a decade ago. I still clearly remember the force with which that worldview and its attendant realizations colonized my mind.
At that time I was strongly enamored with a vision of computer graphics advancing to the point of pervasive, Matrix-like virtual reality, with that medium becoming the creche from which superhuman artificial intelligence would arise (the Matrix of Gibson’s Neuromancer, as this was before the film of the same name). Actually, I still have that vision, and although it has naturally changed, we do finally appear to be on the brink of a major revolution in graphics, and perhaps of the attendant display tech needed to materialize said vision.
Anyway, I studied computer graphics, immersed myself in programming and figured making a video game startup would be a good first step to amassing some wealth so that I could then do the ‘real work’ of promoting the Singularity and doing AI research. I took a little investment, borrowed some money, and did consulting work on the side. After four years or so the main accomplishment was taking a runner up prize in a business plan competition and paying for a somewhat expensive education. That isn’t as bad as it sounds though—I did learn a good deal of atypical knowledge.
Eventually I threw in the towel with the independent route and took a regular day job as a graphics programmer in the industry. After working so much on startups I had some fun with life for a change. I went to a couple of free ‘workshops’ at a strange house where some unusual guys with names like ‘Mystery’ and ‘Style’ taught the game, back before Style wrote his book and that community blew up. I found some interesting roommates (not affiliated with the above), and moved into a house in the Hollywood Hills. One of our neighbors had made a fortune from a website called Sextoy.com and threw regular pool parties, sometimes swinger parties. Another regular life in LA.
Over the years I had this mounting feeling that I was wasting my life, that there was something important I had forgotten. I still read and followed some of the Singularity-related literature, but wasn’t that active. But occasionally it would come back and occupy my mind, albeit temporarily. Kurzweil’s TSIN reactivated my attention, and I attended the Singularity Summit in 2008 and 2010. I already had a graphics blog and had written some articles for gaming publications, but in the last few years I started reading more neuroscience and AI. I have a deep respect for the brain’s complexity, but I’m still somewhat surprised at the paucity of large-scale research and the concomitant general lack of success in AGI. I’m not claiming (as of yet) to have some deep revolutionary new metamathematical insight, but a graphics background gives one a particular visualizing intuition and toolbox for optimizing simulations that should come in handy.
All that being said, and even though I’m highly technical by trade, I actually think the engineering challenge is the easier part of the problem (if only relatively), and I’m more concerned with the social engineering challenge. From my current reading, I gather that EY and the SIAI folks here believe that it is all rolled up into the FAI task. I agree with the importance of the challenge, but I do not find the most likely hypothesis to be: SIAI develops Friendly AI before anyone else in the world develops AI in general. I do not think that SIAI currently holds >50% of the lottery tickets, not even close.
However, I do think the movement can win regardless, if we can win on the social engineering front. To me it now seems that the most likely hypothesis is that the winning ticket will be some academic team or startup in this decade or the next, and thus the winning ticket (with future hindsight) is currently held by someone young. So it is a social engineering challenge.
The Singularity challenges everything: our social institutions, politics, religion, economic infrastructure, all of our current beliefs. I share the deep concern about existential risk and Hard Takeoff scenarios, although perhaps differing in particulars with typical viewpoints I’ve seen on this site.
How can we get the world to wake up?
I somehow went to two Singularity Summits without ever reading LessWrong or Overcoming Bias. I think I had read partway through EY’s Seed AI doc at some point previously, but that was it. I went to school with some folks who are now part of LessWrong or SIAI (Anna, Steve, Jennifer), and was pointed to this site through them. I’ve quite enjoyed reading through most of the material so far, and I don’t think I’m halfway through yet, although I don’t see a completion meter anywhere.
I’m somewhat less interested in raw ‘Bayesianism’ as enlightenment, and in Evo Psych. I used to be more into Evo Psych when I was into the game, but I equate that with my childish years. I do believe it has some utility in understanding the brain, but not nearly as much as neuroscience or AI themselves.
Also, as an aside, I’m curious about the note for theists. From what I gather, many LWers find the Simulation Argument to work. If so, that technically makes you a deist, and theism is just another potential hypothesis. It’s actually even a potentially testable hypothesis. And even without the Simulation Argument, the Singularity seriously challenges strict atheism—most plausible Singularity-aware eschatologies result in some black-hole deity spawning new universes: a god in every useful sense of the term at the end of our timeline.
I’ve always felt this great isolation imposed by my worldview: something one cannot discuss in polite company. Of course, that isolation was only ever self-imposed, and this site has opened my mind to the possibility that there are many now who have ventured along similar lines.