Journal of Consciousness Studies issue on the Singularity
...has finally been published.
Contents:
Uziel Awret—Introduction
Susan Blackmore—She Won’t Be Me
Damien Broderick—Terrible Angels: The Singularity and Science Fiction
Barry Dainton—On Singularities and Simulations
Daniel Dennett—The Mystery of David Chalmers
Ben Goertzel—Should Humanity Build a Global AI Nanny to Delay the Singularity Until It’s Better Understood?
Susan Greenfield—The Singularity: Commentary on David Chalmers
Robin Hanson—Meet the New Conflict, Same as the Old Conflict
Francis Heylighen—Brain in a Vat Cannot Break Out
Marcus Hutter—Can Intelligence Explode?
Drew McDermott—Response to ‘The Singularity’ by David Chalmers [this link is a McDermott-corrected version, and therefore preferred to the version that was published in JCS]
Jürgen Schmidhuber—Philosophers & Futurists, Catch Up!
Frank Tipler—Inevitable Existence and Inevitable Goodness of the Singularity
Roman Yampolskiy—Leakproofing the Singularity: Artificial Intelligence Confinement Problem
The issue consists of responses to Chalmers (2010). Future volumes will contain additional articles from Shulman & Bostrom, Igor Aleksander, Richard Brown, Ray Kurzweil, Pamela McCorduck, Chris Nunn, Arkady Plotnitsky, Jesse Prinz, Susan Schneider, Murray Shanahan, Burt Voorhees, and a response from Chalmers.
McDermott’s chapter should be supplemented with this, which he says he didn’t have space for in his JCS article.
Tipler paper
Wow, that’s all kinds of crazy. I’m not sure exactly how much, since I’m not a mathematical physicist (MWI and quantum mechanics implied by Newton? Really?), but one big red flag for me is pp. 187–188, where he doggedly insists that the universe is closed, even though as far as I know the current cosmological consensus is the opposite, and I trust the cosmologists a good deal more than a fellow who tries to prove his Christianity with his physics.
(This is actually convenient for me: a few weeks ago I was wondering on IRC what the current status of Tipler’s theories was, given that he had clearly stated they were valid only if the universe were closed and the Higgs boson’s mass fell within certain values, IIRC, but I was feeling too lazy to look it all up.)
And the extraction of a transcendent system of ethics from a Feynman quote...
This is just too wrong for words. It is like saying that looking both ways before crossing the street is obviously part of rational street-crossing—a moment’s thought will convince the reader (Dark Arts)—and so we can collapse Hume’s fork and promote looking both ways to a universal meta-ethical principle that future AIs will obey!
Show me this morality in the AIXI equation or GTFO!
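For reference, the equation in question is, roughly (Hutter’s standard formulation, quoted from memory; see Hutter’s papers for the exact statement):

$$
a_k := \arg\max_{a_k}\sum_{o_k r_k}\cdots\max_{a_m}\sum_{o_m r_m}\bigl[r_k+\cdots+r_m\bigr]\sum_{q\,:\,U(q,a_1\ldots a_m)=o_1 r_1\ldots o_m r_m}2^{-\ell(q)}
$$

where U is a universal Turing machine, q ranges over environment programs, ℓ(q) is program length, and the o_i and r_i are observations and rewards up to the horizon m. There is no term in it for anyone’s welfare except insofar as it shows up on the reward channel.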
A map from domain to range, a proof in propositional logic, or a series of lambda expressions and reductions all come to mind...
One man’s modus ponens is another man’s modus tollens. That the ‘honestly’ requires other entities is proof that this cannot be an ethical system which encompasses all rational beings.
Any argument that rests on a series of rhetorical questions is untrustworthy. Specifically, sure, I can in 5 seconds come up with a reason they would not preserve us: there are X mind-states we can be in while still maintaining identity or continuity; there are Y (Y < X) that we would like or would value; with infinite computing power, we will exhaust all Y. At that point, by definition, we could choose to not be preserved. Hence, I have proven we will inevitably choose to die even if uploaded to Tipler’s Singularity.
(Correct and true? Dunno. But let’s say this shows Tipler is massively overreaching...)
What a terrible paper altogether. This was a peer-reviewed journal, right?
The quote that stood out for me was the following:
Now, all that’s well and good, except for one, tiny, teensy little flaw: there is no such thing as aether. Michelson and Morley proved that quite conclusively in 1887. Tipler, in this case, appears to be basing his argument on a theory that was discredited over a century ago. Yes, some of the conclusions of aetheric theory are superficially similar to the conclusions of relativity. That, however, doesn’t make the aetheric theory any less wrong.
Hi, Quanticle. You state that “there is no such thing as aether. Michelson and Morley proved that quite conclusively in 1887.” For the details on how General Relativity is inherently an æther theory, see the following paper by physicist and mathematician Prof. Frank J. Tipler and mathematician Maurice J. Dupré:
Maurice J. Dupré and Frank J. Tipler, “General Relativity as an Æther Theory”, International Journal of Modern Physics D, Vol. 21, No. 2 (Feb. 2012), Art. No. 1250011, 16 pp., doi:10.1142/S0218271812500113, bibcode: 2012IJMPD..2150011D, http://webcitation.org/6FEvt2NZ8 . Also at arXiv:1007.4572, July 26, 2010, http://arxiv.org/abs/1007.4572 .
Argh.
Also, this makes me wonder if the SIAI’s intention to publish in philosophy journals is such a good idea. Presumably part of the point was for them to gain status by being associated with respected academic thinkers. But this isn’t really the kind of thinking anyone would want to be associated with...
The way I look at it: if this sort of thing can survive peer review, what do people make of work whose authors either did not try to pass peer review or could not pass it? They probably think pretty poorly of it.
I can’t speak to this particular article, but oftentimes special editions of journals, like this one (i.e. effectively a symposium on the work of another), are not subjected to rigorous peer review. The responses are often solicited by the editors and there is minimal correction or critique of the content of the papers, certainly nothing like you’d normally get for an unsolicited article in a top philosophy journal.
But, to reiterate, I can’t say whether or not the Journal of Consciousness Studies did that in this instance.
On the one hand, this is the cached defense that I have for the Sokal hoax, so now I have an internal conflict on my hands. If I believe that Tipler’s paper shouldn’t have been published, then it’s unclear why Sokal’s should have been.
Oh dear, oh dear. How to resolve this conflict?
Perhaps rum...
Does anyone think that visibility among philosophers has any practical impact on the solution of technical problems? Apparently the people who could possibly cause harm in the near term are AI researchers, but many of these people are just adding to the Internet noise or working on their own projects.
Gaining visibility is a good thing when what’s needed is social acceptance, or when more people are needed to solve a problem. Publishing in peer-reviewed (philosophical) journals can bring more scholars to the cause, but more people caring about AI is not a good thing per se.
Some things even peer-review can’t cure. I looked through a few of their back-issues and was far from impressed. On the other hand, this ranking puts them above Topoi, Nous, and Ethics. I’m not even sure what that means—maybe their scale is broken?
Maybe there’s some confounding factor—like sudden recent interest in Singularity/transhumanist topics forcing the cite count up?
Unlikely, they have been highly ranked for a long time and singularity/transhumanist topics are only a very small part of what JCS covers.
Tipler did some excellent work in mathematical relativity before going off the rails shortly thereafter.
I’m very grateful to the undergraduate professor of mine that introduced me to Penrose and Tipler as a freshman. I think at that time I was on the cusp of falling into a similar failure state, and reading Shadows of the Mind and The Physics of Immortality shocked me out of what would have been a very long dogmatic slumber indeed.
And yet humans kill each other. His only possible retort is that some humans are not rational. Better hope that nobody builds an “irrational” AI.
Hi, Gwern. You asked, “… MWI and quantum mechanics implied by Newton? Really?” Yes: the Hamilton-Jacobi Equation, which is the most powerful formulation of Newtonian mechanics, is, like the Schrödinger Equation, a multiverse equation. Quantum Mechanics is the unique specialization of the Hamilton-Jacobi Equation obtained by imposing the requirement that determinism be maintained. The Hamilton-Jacobi Equation on its own is indeterministic: when particle trajectories cross paths a singularity is produced (i.e., the values in the equations become infinite), and so it is not possible to predict, even in principle, what happens after that. On the inherent multiverse nature of Quantum Mechanics, see physicist and mathematician Prof. Frank J. Tipler’s following paper:
Frank J. Tipler, “Quantum nonlocality does not exist”, Proceedings of the National Academy of Sciences of the United States of America, Vol. 111, No. 31 (Aug. 5, 2014), pp. 11281-11286, doi:10.1073/pnas.1324238111, http://www.pnas.org/content/111/31/11281.full.pdf , http://webcitation.org/6WeupHQoM .
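For concreteness, the two equations being compared, in their textbook forms (my summary, not taken from Tipler’s papers):

$$
\frac{\partial S}{\partial t} + H\!\left(q,\frac{\partial S}{\partial q},t\right) = 0 \qquad \text{(Hamilton–Jacobi)}
$$

$$
i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi \qquad \text{(Schrödinger)}
$$

Writing ψ = R e^{iS/ħ} and separating real and imaginary parts turns the Schrödinger Equation into the classical Hamilton–Jacobi equation plus an extra “quantum potential” term of order ħ², which is the formal sense in which the two are related.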
Regarding the universe necessarily being temporally closed according to the known laws of physics: all the proposed solutions to the black hole information issue except for Prof. Tipler’s Omega Point cosmology share the common feature of using proposed new laws of physics that have never been experimentally confirmed—and indeed which violate the known laws of physics—such as with Prof. Stephen Hawking’s paper on the black hole information issue which is dependent on the conjectured String Theory-based anti-de Sitter space/conformal field theory correspondence (AdS/CFT correspondence). (See S. W. Hawking, “Information loss in black holes”, Physical Review D, Vol. 72, No. 8 [Oct. 15, 2005], Art. No. 084013, 4 pp.) Hence, the end of the universe in finite proper time via collapse before a black hole completely evaporates is required if unitarity is to remain unviolated, i.e., if General Relativity and Quantum Mechanics—which are what the proofs of Hawking radiation derive from—are true statements of how the world works.
Pertaining to your comments doubting “a universal meta-ethical principal that future AIs will obey!”: Prof. Tipler is quite correct regarding his aforecited discussion on ethics. In order to understand his point here, one must keep in mind that the Omega Point cosmology is a mathematical theorem per the known physical laws (viz., the Second Law of Thermodynamics, General Relativity, and Quantum Mechanics) that requires sapient life (in the form of, e.g., immortal superintelligent human-mind computer-uploads and artificial intelligences) take control over all matter in the universe, for said life to eventually force the collapse of the universe, and for the computational resources of the universe (in terms of both processor speed and memory space) to diverge to infinity as the universe collapses into a final singularity, termed the Omega Point. Said Omega Point cosmology is also an intrinsic component of the Feynman-DeWitt-Weinberg quantum gravity/Standard Model Theory of Everything (TOE) correctly describing and unifying all the forces in physics, of which TOE is itself mathematically forced by the aforesaid known physical laws. Thus, existence itself selects which ethics is correct in order for existence to exist. Individual actors, and individuals acting in groups, can of course go rogue, but there is a limit to how bad things can get: e.g., life collectively cannot choose to extirpate itself.
You go on to state, “there are X mind-states we can be in while still maintaining identity or continuity; there are Y (Y < X) that we would like or would value; with infinite computing power, we will exhaust all Y. At that point, by definition, we could choose to not be preserved. Hence, I have proven we will inevitably choose to die even if uploaded to Tipler’s Singularity.” Yet if Y is infinite, then this presents no problem to literal immortality. Traditional Christian theology has maintained that Y is indeed infinite.
Interestingly, the Omega Point final singularity has all the unique properties (quiddities) claimed for God in the traditional religions. For much more on Prof. Tipler’s Omega Point cosmology and the details on how it uniquely conforms to, and precisely matches, the cosmology described in the New Testament, see my following article, which also addresses the societal implications of the Omega Point cosmology:
James Redford, “The Physics of God and the Quantum Gravity Theory of Everything”, Social Science Research Network (SSRN), Sept. 10, 2012 (orig. pub. Dec. 19, 2011), 186 pp., doi:10.2139/ssrn.1974708, https://archive.org/download/ThePhysicsOfGodAndTheQuantumGravityTheoryOfEverything/Redford-Physics-of-God.pdf , http://sites.google.com/site/physicotheism/home/Redford-Physics-of-God.pdf .
Additionally, in the below resource are different sections which contain some helpful notes and commentary by me pertaining to multimedia wherein Prof. Tipler explains the Omega Point cosmology and the Feynman-DeWitt-Weinberg quantum gravity/Standard Model TOE.
James Redford, “Video of Profs. Frank Tipler and Lawrence Krauss’s Debate at Caltech: Can Physics Prove God and Christianity?”, alt.sci.astro, Message-ID: jghev8tcbv02b6vn3uiq8jmelp7jijluqk[at sign]4ax[period]com , July 30, 2013, https://groups.google.com/forum/#!topic/alt.sci.astro/KQWt4KcpMVo , http://archive.is/a04w9 .
Not to rescue Tipler, but:
None of these possibilities seems to exclude also being a series of imperative sentences.
In much the same way rhetorically asking ‘After all, what is a computer program but a proof in an intuitionistic logic?’ doesn’t rule out ‘a series of imperative sentences’.
The “AIXI equation” is not an AI in the relevant sense.
Fine, ‘show me this morality in a computable implementation of AIXI using the speed prior or GTFO’ (what was it called, AIXI-tl?).
That also isn’t an AI in the relevant sense, as it doesn’t actually exist. Tipler would simply deny that such an AI would be able to do anything, for Searlian reasons. You can’t prove that an AIXI-style AI will ever work, and it’s presumably part of Tipler’s argument that it won’t, so simply asserting that it will work is sort of pointless. I’m just saying that if you want to engage with his argument you’ll have to get closer to it, ’cuz you’re not yet in bowshot range. If your intention was to repeat the standard counterargument rather than show why it’s correct, then I misinterpreted your intention; apologies if so.
The AIXI proofs seem pretty adequate to me. They may not be useful, but that’s different from not working.
More to the point, nothing in Tipler’s paper gave me the impression he had so much as heard of AIXI, and it’s not clear to me that he does accept Searlian reasons—what is that, by the way? It can’t be Chinese room stuff since Tipler has been gung ho on uploading for decades now.
It’s really not obvious that if you run an AIXI-like AI it will actually do anything other than self-destruct, no matter how much juice you give it. There have been various papers on this theme recently and it’s a common LW meme (“AIXI drops an anvil on its head”).
By “Searlian reasons” I mean something like emphasizing the difference between syntax and semantics and the difficulty of the grounding problem as representative of this important dichotomy between narrow and general intelligence that philosophers of mind get angry with non-philosophers of mind for ignoring.
I don’t think Tipler’s not having heard of AIXI is particularly damning, even if true.
I don’t think it’s obvious it would self-destruct—any more than it’s obvious humans will not self-destruct. (And that anvil phrase is common to Eliezer.) The papers you allude to apply just as well to humans.
I believe you are the one who is claiming AIXI will never work, and suggesting Tipler might think like you.
You might enjoy reading this for more context.
Yes: nonsense.
Daniel Dennett’s “The Mystery of David Chalmers” quickly dismissed the Singularity without really saying why:
and then spent the rest of his paper trying to figure out why Chalmers isn’t a type-A materialist.
By the way, procrastinating on the internet may be the #1 factor delaying the Singularity. Before we make the first machine capable of programming better machines, we may make a dozen machines capable of distracting us so much that we never accomplish anything beyond that point.
People need cool names to take ideas seriously, so let’s call this apex of human invention “Procrastinarity”. Formally: the better tools people can make, the more distraction they provide, so there is a limit for a human civilization where there is so much distraction that no one is able to focus on making better tools. (More precisely: even if some individuals can focus at this point, they will not find enough support, friends, mentors, etc., so without the necessary scientific infrastructure they cannot meaningfully contribute to human progress.) This point is called Procrastinarity, and all real human progress stops there. A natural disaster may eventually reduce humanity to pre-Procrastinarity levels, but if humans overcome those problems, they will just reach another Procrastinarity phase. We will reach the first Procrastinarity within the next 30 years, with probability 50%.
There’s another such curve, incidentally. I’ve been reading up on scientific careers, and there’s solid-looking evidence that a modern scientist makes his best discoveries about a decade later than scientists did in the early 1900s. This is a problem because productivity drops off in the 40s and is pretty small in the 50s and later, and this has remained constant (despite the small improvements in longevity over the 20th century).
So if your discoveries only really begin in your late 20s and you face a deadline in your 40s, and each century we lose a decade, this suggests that within two centuries most of a scientist’s career will be spent being trained, learning, helping out on other experiments, and in general just catching up!
We might call this the PhDalarity—the rate at which graduate and post-graduate experience is needed before one can make a major discovery.
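A throwaway toy model of that arithmetic (every number below is an illustrative assumption, not data from the reading I alluded to):

```python
# Toy model: productive window today runs from age 28 to 45 (assumed),
# and the starting age slips by 10 years per century (assumed).
def productive_years(centuries_from_now, start_today=28, end=45, delay_per_century=10):
    start = start_today + delay_per_century * centuries_from_now
    return max(0, end - start)

for c in (0, 1, 2):
    print(c, "centuries from now:", productive_years(c), "productive years")
# 0 -> 17, 1 -> 7, 2 -> 0: under these made-up parameters the window
# closes within roughly two centuries, which is all the hand-wave above claims.
```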
As a former teacher I have noticed some unfortunate trends in education (it may be different in different countries), namely that it seems to be slowing down. On one end there is public pressure to make school easier for small children, such as not giving them grades in the first year. On the other end there is pressure to send everyone to university, both for signalling (by having more people in universities we can pretend to be smart, even if the price is dumbing down university education) and for reducing unemployment (more people in school means fewer people in the unemployment registry).
While I generally approve of a friendlier environment for small children and more opportunities to get higher education, the result seems to be shifting education to a later age. Students learn less in high school (some people claim otherwise, but e.g. the math curriculum has been reduced in recent decades) and many people think that’s OK, because they can still learn the necessary things at university, can’t they? So the result is a few “child prodigies” and a majority of students who are kept in school only for legal or financial reasons.
Yeah, people live longer and prolong their childhoods, but their peak productivity does not shift accordingly. We feel there is enough time, but that’s because most people underestimate how much there is to learn.
OTOH there is a saying—just learn where and how to get the information you need.
And there is a lot of truth in that. It gets easier every day to learn something (anything) when you need it.
The market value of knowledge could easily be grossly overestimated.
It’s easy to learn something when you need it… if the inferential distance is short. Problem is, it often isn’t. Second problem: it is easy to find information, but it is more difficult to separate correct from incorrect information if the person has no background knowledge. Third problem: the usefulness of some things becomes obvious only after a person learns them.
I have seen smart people try to jump across a large inferential gap and fail. For example, there are many people who taught themselves programming from internet tutorials and experiments. They can do many impressive things, yet fail at something rather easy later on, because they have no concept of “finite state automata” or “context-free grammars” or “the halting problem”—the things that may seem like useless academic knowledge at university, but which let you quickly classify groups of problems into categories with already-known, rather easy solutions (or, in the last case, known to be generally unsolvable). Lack of proper abstractions slows their learning, and they invent their own bad analogies. In theory, there are enough materials online to let them learn everything properly, but that would take a lot of time and someone’s guidance. And that’s exactly what schools are for: they select materials, offer guidance, and connect you with other people studying the same topic.
In my opinion, a good “general education” is one that makes inferential distances shorter on average. Mathematics is very important, because it takes good basic knowledge to understand statistics, and without statistics you can’t understand scientific results in many fields. A recent example: in a local Mensa group there was a discussion on the web about whether IQ tests are really necessary, since most people already know what their IQ is. I dropped in a link to an article saying that the correlation between self-reported IQ and the measured value is less than 0.3. I thought that would settle the question. Well, it did, kind of… because the discussion switched to whether “correlation 0.3” means “0.3%” or “30%”. I couldn’t make this up. IMHO a good education should prevent such things from happening.
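(For reference, since this is exactly the kind of thing a good general education should cover: a Pearson correlation is a dimensionless coefficient,

$$
r_{XY} = \frac{\operatorname{Cov}(X,Y)}{\sigma_X\,\sigma_Y} \in [-1,1],
$$

so “0.3” is neither “0.3%” nor “30%”; if you want a percentage-flavoured quantity, r² = 0.09 is the share of variance in one variable linearly accounted for by the other.)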
Though I agree that the conversion from “knowledge” to “money” is overestimated, or at least is not very straightforward.
You are advocating a strategically devised network of knowledge which would always offer you support from the nearest base when you are wandering in previously unknown land. “Here come the marines”—you can always count on that.
Well, in science you can’t. Sometimes you must fight the marines as enemies, and you are often so far out that nobody even knows where you are. You are on your own, and all the heavy equipment is both useless and too expensive to carry.
That is the situation when the stakes are high, when it really matters. When it doesn’t, it doesn’t matter anyway.
I think we can plausibly fight this by improving education to compress the time necessary to teach concepts. Hardly any modern education uses the Socratic method to teach, which in my experience is much faster than conventional methods, and could in theory be executed by semi-intelligent computer programs (the Stanford machine learning class embedding questions part way through their videos is just the first step).
Also, SENS.
Even better would be http://en.wikipedia.org/wiki/Bloom%27s_2_Sigma_Problem incidentally, and my own idée fixe, spaced repetition.
Like Moore’s Law, at any point proponents have a stable of solutions for tackling the growth; they (or enough of them) have been successful for Moore’s Law, and it has indeed continued pretty smoothly, so if they were to propose some SENS-style intervention, I’d give them decent credit for it. But in this case, the overall stylized evidence says that nothing has reversed the changes up until I guess the ’80s at which point one could begin arguing that there’s underestimation involved (especially for the Nobel prizes). SENS and online education are great, but reversing this trend any time soon? It doesn’t seem terribly likely.
(I also wonder how big a gap there will be between the standard courses and the ‘cutting edge’—if we make substantial gains in teaching the core courses, but there’s a ‘no man’s land’ of long-tail topics too niche to program and maintain a course on, which extends all the way out to the actual cutting edge, then the results might be more like a one-time improvement.)
Thanks for the two sigma problem link.
http://arstechnica.com/web/news/2009/04/study-surfing-the-internet-at-work-boosts-productivity.ars
The article says that internet use boosts productivity only if it is done less than 20% of the time. How is this relevant to real life? :D
Also the article suggests that the productivity improvement is not caused by internet per se, but by having short breaks during work.
So I think many people are beyond the point where internet use could boost their productivity.
Sue’s article is here: She won’t be me.
Robin’s article is here: Meet the New Conflict, Same as the Old Conflict—see also O.B. blog post
Francis’s article is here: A brain in a vat cannot break out: why the singularity must be extended, embedded and embodied.
Marcus Hutter: Can Intelligence Explode?.
I thought the idea that machine intelligence would, on safety grounds, be developed in virtual worlds was pretty daft. I explained this at the time:
However, Francis’s objections to virtual worlds seem even more silly to me. I’ve been hearing that simulations aren’t real for decades now—and I still don’t really understand why people get into a muddle over this issue.
Hanson link doesn’t seem to work.
It seems to be back now.
Schmidhuber paper
Brief overview of Goedel machines; sort of a rebuke of other authors for ignoring the optimality results for them and AIXI etc.
On falsified predictions of AI progress:
Pessimism:
The Hard Problem dissolved?
A Gödel machine, if one were to exist, surely wouldn’t do something so blatantly stupid as posting to the Internet a “recipe for practically feasible self-improving Gödel machines or AIs in form of code into which one can plug arbitrary utility functions”. Why can’t humanity aspire to this rather minimal standard of intelligence and rationality?
Similar theme from Hutter’s paper:
If AIXI had the option of creating an AIXI (which by definition has the goal of maximizing its own rewards), or creating a different AI (non-AIXI) that had the goal of serving the goals of its creator instead, surely it would choose the latter option. If AIXI is the pinnacle of intelligence (as Hutter claims), and an AIXI wouldn’t build another AIXI, why should we? Because we’re just too dumb?
I like lines of inquiry like this one and would like it if they showed up more.
I’m not sure what you mean by “lines of inquiry like this one”. Can you explain?
I guess it’s not a natural kind, it just had a few things I like all jammed together compactly:
Decompartmentalizes knowledge between domains, in this case between AIXI AI programmers and human AI programmers.
Talks about creation qua creation rather than creation as some implicit kind of self-modification.
Uses common sense to carve up the questionspace naturally in a way that suggests lines of investigation.
An AIXI might create another AIXI if it could determine that the rewards would coincide sufficiently, and it couldn’t figure out how to get as good a result with another design (under real constraints).
I’m sure you can come up with several reasons for that.
That was meant to be rhetorical… I’m hoping that the hypothetical person who’s planning to publish the Gödel machine recipe might see my comment (ETA: or something like it, if such an attitude were to become common) and think “Hmm, a Gödel machine is supposed to be smart and it wouldn’t publish its own recipe. Maybe I should give this a second thought.”
If someone in IT is behaving monopolistically, a possible defense by the rest of the world is to obtain and publish their source code, thus reducing the original owner’s power and levelling things a little. Such an act may not be irrational—if it is a form of self-defense.
Suppose someone has built a self-improving AI, and it’s the only one in existence (hence they have a “monopoly”). Then there might be two possibilities, either it’s Friendly, or not. In the former case, how would it be rational to publish the source code and thereby allow others to build UFAIs? In the latter case, a reasonable defense might be to forcibly shut down the UFAI if it’s not too late. What would publishing its source code accomplish?
Edit: Is the idea that the UFAI hasn’t taken over the world yet, but for some technical or political reason it can’t be shut down, and the source code is published because many UFAIs are for some reason better than a single UFAI?
I don’t think the FAI / UFAI distinction is particularly helpful in this case. That framework implies that this is a property of the machine itself. Here we are talking about the widespread release of a machine with a programmable utility function. Its effects will depend on the nature and structure of the society into which it is released (and on the utility functions that are used with it), rather than being solely attributes of the machine itself.
If you are dealing with a secretive monopolist, nobody on the outside is going to know what kind of machine they have built. The fact that they are a secretive monopolist doesn’t bode well, though. Failing to share is surely one of the most reliable ways to signal that you don’t have the interests of others at heart.
Industrial espionage or reverse engineering can’t shut organisations down—but it may be able to liberate their technology for the benefit of everyone.
So we estimate based on what we anticipate about the possible state of society.
If it’s expected that sharing AGI design results in everyone dying, not sharing it can’t signal bad intentions.
The expectations and intentions of secretive organisations are usually unknown. From outside, it will likely seem pretty clear that a secretive elite having sole access to the technology is more likely to result in massive wealth and power inequalities than what would happen if everyone had access. Large wealth and power inequalities seem undesirable.
Secretive prospective monopolists might claim all kinds of nonsense in the hope of defending their interests. The rest of society can be expected to ignore such material.
That seems more likely than a secretive monopolistic agent keeping the technology for themselves from the beginning—and obliterating all potential rivals.
Keeping the technology of general-purpose inductive inference secret seems unlikely to happen in practice. It is going to go into embedded devices—from which it will inevitably be reverse engineered and made publicly accessible. Also, it’s likely to arise from a public collaborative development effort in the first place. I am inclined to doubt whether anyone can win while keeping their technology on a secure server—try to do that and you will just be overtaken—or rather, you will never be in the lead in the first place.
Not pessimism, realism, is my assessment. You have to apply your efforts where they will actually make a difference.
Roman V Yampolskiy paper
Pretty good overview of the AI boxing problem with respect to covert channels; possibly the first time I’ve seen Eliezer’s experiments cited, or Stuart Armstrong’s Dr. Evil anthropic attack.
Given the length of the paper, I kind of expected there to be no mention of homomorphic encryption, as the boxing proposal that seems most viable, but to my surprise I read
Important modules? Er, why not just the whole thing? If you have homomorphic encryption working and proven correct, the other measures may add a little security, but not a whole lot.
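For anyone unfamiliar with why homomorphic encryption is relevant to boxing: it lets you compute on data without ever decrypting it. A toy illustration of the homomorphic property, using textbook RSA’s multiplicative homomorphism (nothing like a real fully homomorphic scheme, and not what Yampolskiy proposes; just the flavour):

```python
# Toy demo: textbook RSA is multiplicatively homomorphic --
# E(m1) * E(m2) mod n decrypts to (m1 * m2) mod n.
# Tiny, insecure parameters, purely for illustration.
p, q = 61, 53
n = p * q                       # 3233
phi = (p - 1) * (q - 1)         # 3120
e = 17                          # public exponent, coprime with phi
d = pow(e, -1, phi)             # private exponent (Python 3.8+)

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

m1, m2 = 42, 7
c = (enc(m1) * enc(m2)) % n     # multiply ciphertexts only
assert dec(c) == (m1 * m2) % n  # decrypts to the product of the plaintexts
```

A fully homomorphic scheme extends this to arbitrary computation on ciphertexts, which is what would let you run ‘the whole thing’ encrypted rather than just important modules.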
It says:
Well, weren’t they? That was the whole point, I had the impression on SL4...
Really? I was unaware that Moore’s law was an actual physical law. Our state of the art has already hit the absolute physical limit of transistor design—we have single-atom transistors in the lab. So, if you’ll forgive me, I’ll be taking the claim that “Moore’s law ensures that today’s fastest supercomputer speed will be the standard laptop computer speed in 20 years” with a grain of salt.
Now, perhaps we’ll have some other technology that allows laptops twenty years hence to be as powerful as today’s supercomputers. But to just handwave that enormous engineering problem away by saying, “Moore’s law will take care of it,” is fuzzy thinking of the worst sort.
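For scale, a minimal sketch of what naive Moore’s-law extrapolation actually promises over 20 years (the doubling period here is an assumption; 18–24 months is the figure usually quoted):

```python
# Naive Moore's-law extrapolation: speedup factor after 20 years.
doubling_period_years = 2     # assumed
years = 20
print(2 ** (years / doubling_period_years))   # 1024.0, i.e. ~three orders of magnitude
```

Whether three orders of magnitude is enough to turn today’s fastest supercomputer into a standard laptop, and whether the doubling can even continue at that pace, is exactly what is being questioned here.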
True. But this one would not make the top 20 list of most problematic statements from the Tipler paper.
Indeed. For example, I raised my eyebrows when I came across the 2007 claim we already have enough. But that was far from the most questionable claim in the paper, and I didn’t feel like reading Tipler 2007 to see what lurked within.
I like Goertzel’s succinct explanation of the idea behind Moore’s Law of Mad Science:
Also, his succinct explanation of why Friendly AI is so hard:
Another choice quote that succinctly makes a key point I find myself making all the time:
His proposal for Nanny AI, however, appears to be FAI-complete.
Also, it is strange that despite paragraphs like this:
...he does not anywhere cite Bostrom (2004).
It’s a very different idea from Yudkowsky’s “CEV” proposal.
It’s reasonable to think that a nanny-like machine might be easier to build than other kinds—because a nanny’s job description is rather limited.
A quote from Dennett’s article, on the topic of consciousness:
This reminds me of the time I took shrooms and my intuition about whether or not Mary acquires knowledge when she is given a color TV turned out to be different when high than when sober. This was interesting, but it didn’t change my judgment on qualia because I had never credited my intuitions on the matter, anyway. (Because, you know, science.)
Damien Broderick paper
Most of the rest is summaries of various Singularity/transhuman scenarios; I did like his descriptions of Stross’s Accelerando (modulo the point that obviously AI-neko is narrating the whole thing).
In “Leakproofing...”
“To reiterate, only safe questions with two possible answers of even likelihood which are independently computable by people should be submitted to the AI.”
Oh come ON. I can see ‘independently computable’, but requiring single bit responses that have been carefully balanced so we have no information to distinguish one from the other? You could always construct multiple questions to extract multiple bits, so that’s no real loss; and with awareness of Bayes’ theorem, getting an exact probability balance is essentially impossible on any question we’d actually care about.
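To put the information-theoretic point behind that objection in symbols (standard textbook material, not from the paper): a yes/no answer whose prior probability of being ‘yes’ is p leaks

$$
H(p) = -p\log_2 p - (1-p)\log_2(1-p)
$$

bits, which is strictly positive for every p in (0, 1) and equals 1 bit only at the perfectly balanced p = 1/2; and k such questions can leak up to k bits in total. So the restriction caps the rate per question, but cannot reduce the channel to zero.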
In my opinion, the most relevant article was Drew McDermott’s, and I’m surprised that such an emphasis on analyzing the computational complexity of approaches to ‘friendliness’ and self-improving AI has not been more common. For that matter, I think computational complexity has more to tell us about cognition, intelligence, and friendliness in general, not just in the special case of self-improving optimization/learning algorithms, and could completely modify the foundational assumptions underlying ideas about intelligence/cognition and the singularity.
I’m thinking of specific concepts from Yudkowsky and others in the singularity/FAI crowd that seem uncontroversial at first glance but become unconvincing when analyzed in the light of computational complexity. One example is the space of possible minds, an assumption propping up many of the arguments for the negative consequences of careless AI engineering. Seen from the perspective of computability, that space does represent the landscape of theoretically possible intelligent agents, and at first glance those sensitive and wise enough to care where in that landscape most successful AI engineering projects will end up are alarmed at the needle in the haystack that is our target for a positive outcome. But if you put on your computational complexity hat and start to analyze not just the algorithms representing AI systems themselves, but the engineering processes that work toward producing those agents/systems, a very different landscape takes shape, one that drastically constrains the space of possible minds that are (a) in a comparable cognitive class with humans, and (b) reachable by a feasible engineering approach on a timescale T < heat death of our universe. (I include the evolution of natural history on Earth among the engineering processes that output intelligence.)
This is but one example of how the neglect of computational complexity, and, to be frank, the neglect of time as a very important factor overall, has influenced the thinking of the SIAI/LessWrong et al. crowd. This neglect leads to statements such as Yudkowsky’s claim that an AI could be programmed on a circa-early-2000s desktop computer, which I find extremely hard to believe. It also leads to timeless decision theories, which I don’t think will turn out to matter much. Scott Aaronson has made a career out of stressing computational complexity for understanding the deep nature of quantum mechanics, and the same should apply to all natural phenomena, cognition and AI among them.
I wish I could read the Dennett article online. If Chalmers has a philosophical nemesis it has to be Dennett. Though he probably sees it otherwise, I contend that Dennett’s hard materialism is losing ground daily in the academic and philosophical mainstream even as Chalmers’ non-reductive functionalism gains in appreciation. (Look at Giulio Tononi’s celebrated IIT theory of consciousness with its attendant panpsychism for just one example. And that’s in the hard sciences, not philosophy.)
I’m ascertaining from the comments here that Dennett is no fan of the Singularity. I suspect that Dennett dislikes Singularity thought because of its teleological implications about evolution. A truly teleological universe with mind as a non-physical feature opens up a whole host of philosophical reevaluations that I doubt Dennett is willing to explore. (To be fair, Chalmers doesn’t explore these metaphysical concerns either. Broderick’s lovely essay on science fiction and the Singularity gets closest to exploring this new ontological possibility space.)
Of the articles in the journal, at least Tipler thinks big, real big, and takes his arguments to their logical conclusion. Unfortunately, Tipler is convinced he has “proved” what can only properly be seen as suggestive and interesting speculation about future evolution. He even tries to deflate Hume’s entire fact/value distinction while at it, clearly biting off more than he can chew in such a brief essay. (I plan to read his book to see if he gives his Hume discussion a more complete treatment.) Separate from his arguments, there is the aura of quack about Tipler (as there is with other Singularity celebrities like Aubrey de Grey and even Ray Kurzweil) and yet he’s a quack who still may just be right, if not in exact detail then in his general train of thought. It’s a radical idea that forces even the most secular of rationalists to think of a future that may only be described as, in some sense, divine.
Many of those people are believers who are already completely sold on the idea of a technological singularity. I hope some sort of critical examination is forthcoming as well.
Schmidhuber, Hutter and Goertzel might be called experts. But I dare to argue that statements like “progress towards self-improving AIs is already substantially beyond what many futurists and philosophers are aware of” are almost certainly bullshit.
You can be certain if you wish. I am not. As I am not sure that there isn’t a supervirus somewhere, I can’t be certain that there isn’t a decent self-improver somewhere. Probably not, but …
Both ARE possible, according to my best knowledge, so it wouldn’t be wise to be too sure in any direction.
As you are.
According to the technically correct, but completely useless, lesswrong style rationality you are right that it is not wise to say that it is “almost certainly bullshit”. What I meant to say is that given what I know it is unlikely enough to be true to be ignored and that any attempt at calculating the expected utility of being wrong will be a waste of time, or even result in spectacular failure.
I currently feel that the whole business of using numerical probability estimates and calculating expected utilities is incredibly naive in most situations, and at best gives your beliefs a veneer of respectability that is completely unjustified. If you think something is almost certainly bullshit, then say so, and don’t try to make up some number. Because the number won’t capture the reflective equilibrium of various kinds of evidence, preferences, and intuition that is being compressed into calling something almost certainly bullshit.
Well, given what you think you know. It is always the case, with everyone, that they estimate from the premises of what they think they know. It just can’t be any different.
Somewhere in the chain of logical conclusions there might be an error. Or there might not be. And there might be an error in the premises. Or not.
Saying “oh, I know you are wrong, based on everything I stand for” is not good enough. You should explain to us why a breakthrough in self-optimization is as unlikely as you claim. Just as the next guy, who thinks it is quite likely, should explain his position too. They do so.
P.S. I don’t consider myself a “lesswronger” at all. I disagree too often and have no “site patriotism”.
My comment was specifically aimed at the kind of optimism that people like Jürgen Schmidhuber and Ben Goertzel seem to be displaying. I have asked other AI researchers about their work, including some who have worked with them, and they disagree.
There are mainly two possibilities here. That it takes a single breakthrough or that it takes a few breakthroughs, i.e. that it is a somewhat gradual development that can be extrapolated.
In the case that the development of self-improving AIs is stepwise, I doubt that their optimism is justified, simply because they are unable to show any achievements. All achievements in AI so far are either the result of an increase in computational resources or, in the case of e.g. IBM Watson or the Netflix algorithm, the result of throwing everything we have at a problem to brute-force a solution. None of those achievements are based on a single principle like an approximation of AIXI. Therefore, if people like Schmidhuber and Goertzel have made stepwise progress and are extrapolating it to conclude that more progress will amount to general intelligence, then where are the results? They should be able to market even partial achievements.
In the case that the development of self-improving AIs demands a single breakthrough or new mathematical insights, I simply doubt their optimism, on the grounds that such predictions amount to pure guesswork: nobody knows when such a breakthrough will be achieved or at what point new mathematical insights will be discovered.
And regarding the proponents of a technological Singularity: 99% of their arguments consist of handwaving and claims that physical possibility implies feasibility. In other words, bullshit.
Everybody on all sides of this discussion is a suspected bullshit trader or bullshit producer.
That includes me, you, Vinge, Kurzweil, Jürgen S., Ben Goertzel—everybody is a suspect, including the investigators on any side.
Now, I’ll state my position. The whole AI business is an Edisonian project, not an Einsteinian one. I don’t see a need for enormous scientific breakthroughs before it can be done. No, to me it looks like: we have had Maxwell’s equations for some time now—can we build an electric lamp?
Edison is just one among many who claim it is almost done in their lab. It is not certain what the real situation in Menlo Park is. The fact that an apprentice who left Edison says there is no hope for a light bulb is not very informative. Nor is the fact that another apprentice still working there is euphoric. It doesn’t even matter what the Royal Society back in old England has to say. Or a simple peasant.
You just can’t meta-judge very productively.
But you can judge: is it possible to have something like an electrically driven lamp? Can you build a nuclear fusion reactor? Can you build an intelligent program?
If it is possible, how hard is it to actually build one of those? It may take a long time, even if it is possible. It may take a short time, too.
The only real question is: can it be done, and if yes, how? If no, that’s also fine. Then it just can’t be done.
But you have to stay on the topic, not the meta-topic, I think.
To me it looks like AGI researchers are simply rubbing amber with fur while claiming that they are on the verge of building a full-scale electricity-producing fusion power plant.
It is possible to create a Matrix style virtual reality. It is possible to create antimatter weapons. That doesn’t mean that it is feasible. It also says nothing about timeframes.
The real question is whether we should bother to worry about possibilities that could just as well be 500, 5,000, or 5 million years in the future, or might never come about in the way we think.
It has been done in 2,500 years (provided that the fusion is still outsourced to the Sun). What guarantees are there that in this case we will CERTAINLY NOT be 100 times faster?
It does not automatically mean that it is either unfeasible or far, far in the future.
Even if it were certain that it is far, far away—and it isn’t that certain at all—it would still be a very important topic.
I am aware of that line of reasoning and reject it. Each person has about a 1 in 12000 chance of having an unruptured aneurysm in the brain that could be detected and then treated after having a virtually risk free magnetic resonance angiography. Given the utility you likely assign to your own life it would be rational to undergo such a screening. At least it would make much more sense than signing up for cryonics. Yet you don’t do it, do you?
There are literally thousands of activities that are rational given their associated utilities. But that line of reasoning, although technically correct, is completely useless, because 1) you can’t really calculate shit, 2) it’s impossible to do for any agent that isn’t computationally unbounded, and 3) you’ll just end up sprinkling enough mathematics and logic over your fantasies to give them a veneer of respectability.
Expected utility maximization in combination with consequentialism is the ultimate recipe for extreme and absurd decisions and actions. People on lesswrong are fooling themselves by using formalized methods to evaluate informal evidence and pushing the use of intuition onto a lower level.
The right thing to do is to use the absurdity heuristic and discount crazy ideas that are merely possible but can’t be evaluated due to a lack of data.
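To see why I say you can’t really calculate shit, here is the sort of expected-value arithmetic that argument invites, with every figure below a made-up placeholder rather than a researched estimate:

```python
# Placeholder expected-value sketch -- all numbers are hypothetical assumptions.
p_aneurysm        = 1 / 12000    # prevalence figure quoted above
p_fatal_if_missed = 0.5          # assumed
value_of_life     = 5_000_000    # assumed, in dollars
scan_cost         = 500          # assumed

expected_benefit = p_aneurysm * p_fatal_if_missed * value_of_life
print(expected_benefit, scan_cost)   # ~208 vs 500
# The comparison flips sign entirely depending on which numbers you guess,
# which is the point: the formalism just launders the guesses.
```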
Does this make sense? How much does the scan cost? How long does it take? What are the costs and risks of the treatment? Essentially, are the facts as you state them?
I don’t think so. Are you thinking of utilitarianism? If so, expected utility maximization != utilitarianism.
Ok what’s the difference here? By “utilitarianism” do you mean the old straw-man version of utilitarianism with bad utility function and no ethical injunctions?
I usually take utilitarianism to be consequentialism + max(E(U)) + sane human-value metaethics. Am I confused?
The term “utilitarianism” refers to maximising the combined happiness of all people. The page says:
So: that’s a particular class of utility functions.
“Expected utility maximization” is a more general framework from decision theory. You can use any utility function with it—and you can use it to model practically any agent.
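In symbols, one rough way to draw the line (my paraphrase, not a quote from any of the papers):

$$
\text{Expected utility maximisation:}\quad a^* = \arg\max_a \sum_o P(o \mid a)\,U(o)\ \text{ for an arbitrary } U
$$

$$
\text{Utilitarianism:}\quad U(o) = \sum_i w_i\,u_i(o)\ \text{ for some aggregate of individual welfares } u_i
$$

So utilitarianism picks out a particular family of utility functions, while expected utility maximisation is agnostic about which U you plug in.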
Utilitarianism is a pretty nutty personal moral philosophy, IMO. It is certainly very unnatural—due partly to its selflessness and lack of nepotism. It may have some merits as a political philosophy (but even then...).
Thanks.
Is there a name for expected utility maximisation over a consequentialist utility function built from human value? Does “consequentialism” usually imply normal human value, or is it usually a general term?
See http://en.wikipedia.org/wiki/Consequentialism for your last question (it’s a general term).
The answer to your “Is there a name...” question is “no”—AFAIK.
I get the impression that most people around here approach morality from that perspective; it seems like something that ought to have a name.
My understanding from long-past reading of elective whole-body MRIs was that they were basically the perfect example of iatrogenics & how knowing about something can harm you / the danger of testing. What makes your example different?
(Note there is no such possible danger from cryonics: you’re already ‘dead’.)
Really? Some have been known to exaggerate to stimulate funding. However, many people (including some non-engineers) don’t put machine intelligence that far off. Do you have your own estimates yet, perhaps?
That’s one of those statements-that-are-so-vague-they-are-bound-to-be-true. “Substantially” is one problem, and “many” is another.