No changes that I’d recommend, at all. SPECIAL NOTE: please don’t interpret the drop in the number of comments over the last couple of weeks as a drop in interest by forum participants. The issues of these weeks are the heart of the reason for existence of nearly all the rest of the Bostrom book, and many of the auxiliary papers and references we’ve seen have ultimately also been context for confronting and brainstorming about the issue now at hand. I myself, just as one example, have a number of actual ideas that I’ve been working on for two weeks, but I’ve been trying to write them up in white paper form, because they seem a bit longish. Also, I’ve talked to a couple of people off site who are busy thinking about this as well and have much to say. Perhaps taking a one-week intermission would give some of us a chance to organize our thoughts more efficiently for postings. There is a lot of untapped incubating that is coming to a head right now among the participants’ minds, and we would like a chance to say something about these issues before moving on. (“Still waters run deep,” as the cliché goes.) We’re at the point of greatest intellectual depth now. I could speak for hours were I commenting orally and trying to be complete—as opposed to making a skeleton of a comment that would, without context, raise more requests for clarification than be useful. I’m sure I’m not unique. Moderation is fine, though, be assured.
NxGenSentience
My general problem with “utilitarianism” is that it’s sort of like Douglas Adams’ “42.” An answer of the wrong type to a difficult question. Of course we should maximize, that is a useful ingredient of the answer, but is not the only (or the most interesting) ingredient.
Taking off from the end of that point, I might add (but I think this was probably part of your total point, here, about “the most interesting” ingredient) that people sometimes forget that utilitarianism is not itself a theory about what is normatively desirable, or at least not much of one. For Bentham-style “greatest good for the greatest number” to have any meaning, it has to be supplemented with a view of what property, state of being, action type, etc., counts as a “good” thing to begin with. Once this is defined, we can then go on to maximize that—seeking to achieve the most of that, for the most people (or relevant entities).
But greatest good for the greatest number means nothing until we figure out a theory of normativity, or meta-normativity that can be instantiated across specific, varying situations and scenarios.
IF the “good” is maximizing simple total body weight, then adding up the body weight of all people in possible world A, vs in possible world B, etc, will allow us a utilitarian decision among possible worlds.
IF the “good” were fitness, or mental health, or educational achievement… we use the same calculus, but the target property is obviously different.
Utilitarianism is sometimes a person’s default answer, until you remind them that this is not an answer at all about what is good. It is just an implementation standard for how that good is to be divided up. Kind of a trivial point, I guess, but worth reminding ourselves from time to time that utilitarianism is not a theory of what is actually good, but of how that might be distributed, if it admits of scarcity.
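To make the point concrete, here is a toy sketch of my own (not from any of the readings): utilitarianism supplies only the maximization machinery, while the theory of the good is a pluggable parameter. Swap the value function and the same calculus delivers a different verdict. All names and numbers here are invented for illustration.

```python
# Toy illustration: utilitarianism as a maximizer that is agnostic about
# what "good" actually is. The value function is a pluggable parameter;
# the aggregation-and-maximization calculus stays the same.

def best_world(worlds, good):
    """Pick the possible world maximizing total 'good' summed over its people."""
    return max(worlds, key=lambda w: sum(good(p) for p in w))

# Each "possible world" is just a list of people, modeled as dicts.
world_a = [{"weight": 70, "health": 9}, {"weight": 80, "health": 8}]
world_b = [{"weight": 95, "health": 7}, {"weight": 90, "health": 5}]

# Same maximizer, different theories of the good, different verdicts:
by_weight = best_world([world_a, world_b], good=lambda p: p["weight"])  # world_b
by_health = best_world([world_a, world_b], good=lambda p: p["health"])  # world_a
```

The interesting normative work is all hidden in the `good` argument; the `best_world` function itself settles nothing.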
Asr,
Thanks for pointing out the wiki article, which I had not seen. I actually feel a tiny bit relieved, but I still think there are a lot of very serious forks in the road that we should explore.
If we do not pre-engineer a soft landing, this is the first existential catastrophe that we should be working to avoid.
A world that suddenly loses encryption (or even faith in encryption!) would be roughly equivalent to a world without electricity.
I also worry about the legacy problem… all the critical documents in RSA, PGP, etc., sitting on hard drives, servers, and CD-ROMs, that suddenly are visible to anyone with access to the tech. How do we go about re-coding all those “eyes only” critical docs into a post-quantum coding system (assuming one is shown practical and reliable), without those documents being “looked at” or opportunistically copied in their limbo state between old and new encrypted status?
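The “limbo state” worry can be sketched in a few lines. This is a hypothetical illustration only: `xor_cipher` is a deliberately trivial stand-in, NOT a real cipher, and the key names are invented. The point is the shape of the migration problem: decrypt-then-re-encrypt necessarily materializes plaintext somewhere, even if only in memory.

```python
# Hypothetical sketch of the legacy re-encryption problem: migrating a
# document from an old (quantum-vulnerable) scheme to a new one without
# the plaintext ever touching durable storage. xor_cipher is a toy
# placeholder for illustration, not an actual encryption algorithm.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Symmetric placeholder: applying it twice with the same key round-trips.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def migrate(ciphertext: bytes, old_key: bytes, new_key: bytes) -> bytes:
    # Decrypt and re-encrypt in one pass, keeping plaintext only in memory.
    plaintext = xor_cipher(ciphertext, old_key)
    try:
        return xor_cipher(plaintext, new_key)
    finally:
        del plaintext  # best effort; Python gives no real guarantee of erasure

old_key, new_key = b"rsa-era-key", b"pq-era-key"
legacy_blob = xor_cipher(b"eyes only: launch codes", old_key)
migrated = migrate(legacy_blob, old_key, new_key)
```

Even in this toy form, the trust problem is visible: whoever runs `migrate` necessarily holds both keys and sees the plaintext window, which is exactly the opportunistic-copying exposure described above.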
Who can we trust to do all this conversion, even given the new algorithms are developed?
This is actually almost intractably messy, at first glance.
I’ll have to weigh in with Bostrom on this one, though I think it depends a lot on the individual brain-mind, i.e., how your particular personality crunches the data.
Some people are “information consumers”, others are “information producers”. I think Einstein might have used the obvious terms supercritical vs. subcritical minds at some point—terms that in any case (Einstein or not) naturally occurred to me (and probably lots of people), and that I’ve used since my teenage years, just in talking to my friends, to describe different people’s mental processes.
The issue of course is (a) to what extent you use incoming ideas as “data” to spark new trains of thought, plus (b) how many interconnections you notice between various ideas and theories—and as a multiplier of (b), how abstract these resonances and interconnections are (hugely increasing the perceived potential interconnection space.)
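Point (b) can be put in back-of-envelope terms (my own illustration, not from the discussion): pairwise links among n ideas grow quadratically, and each additional level of abstraction, allowing k-way groupings, blows the candidate connection space up far faster.

```python
# Back-of-envelope: direct pairwise interconnections among n ideas grow
# as C(n, 2) = n*(n-1)/2; allowing more abstract three-way groupings
# grows as C(n, 3), which dwarfs the pairwise count as n increases.
from math import comb

for n in (10, 100, 1000):
    pairs = comb(n, 2)    # direct pairwise interconnections
    triples = comb(n, 3)  # one extra level of abstraction
    print(n, pairs, triples)
```

So the multiplier in (b) is not a figure of speech: each notch of abstraction moves you up a whole combinatorial order.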
For me, if the world would stop in place, and I had an arbitrary lifespan, I could easily spend the next 50 years (at least) mining the material I have already acquired, generating new ideas, extensions, cross connections. (I sometimes almost wish it would, in some parallel world, so I could properly metabolize what I have, which I think at times I am only scratching the surface of.)
Of course it depends on the kind of material, as well. If one is reading an undergrad physics textbook in college, it is pretty much finite: if you understand the presentation and the development as you read, you can think for an extra 10 or 15 minutes about all the ways it applies to the world, and pretty much have it. Thinking of further “applications” pretty much adds no value, additional insight, or interest.
But with other material, esp in fields that are divergent and full of questions that are not settled yet, I find myself reading a few paragraphs, and it sparks so many new trains of thought, I feel flooded and have a hard time continuing the reading—and feel like I have to get up and go walk for an hour. Sometimes I feel like acquiring new ideas is exponentially increasing my processing load, not linearly, and I could spend a lifetime investigating the offshoots that suggest themselves.
It may have been a judgement call by the writer (Bostrom) and editor: he is trying to get the word out as widely as possible that this is a brewing existential crisis. In this society, how do you get most people’s (policymakers, decision makers, basically “the Suits” who run the world) attention?
Talk about the money. Most of even educated humanity sees the world in one color (can’t say green anymore, but the point is made.)
Try to motivate people about global warming? (”...um....but, but.… well, it might cost JOBS next month, if we try to save all future high level earthly life from extinction… nope the price [lost jobs] of saving the planet is obviously too high...”)
Want to get non-thinkers to even pick up the book and read the first chapter or two?… Talk about money.
If your message is important to get in front of maximum eyeballs, sometimes you have to package it a little bit, just to hook their interest. Then morph the emphasis into what you really want them to hear, for the bulk of the presentation.
Of course, strictly speaking, what I just said was tangent to the original point, which was whether the summary reflected the predominant emphasis in the pages of the book it ostensibly covered.
But my point about PR considerations was worth making, and also, Katja or someone did, I think mention maybe formulating a reading guide for Bostrom’s book, in which case, any such author of a reading guide might be thinking already about this “hook ’em by beginning with economics” tactic, to make the book itself more likely to be read by a wider audience.
I didn’t exactly say that, or at least, didn’t intend to exactly say that. It’s correct of you to ask for that clarification.
When I say “vindicated the theory”, that was, admittedly, pretty vague.
What I should have said was that the recent experiments removed what has been more or less the most common and continuing objection to the theory, by showing that quantum effects in microtubules, under the kind of environmental conditions that are relevant, can indeed be maintained long enough for quantum processes to “run their course” in a manner that, according to Hameroff and Penrose, makes a difference that can propagate causally to a level that is of significance to the organism.
Now, as to “decision making”: I am honestly NOT trying to be coy here, but that is not an entirely transparent phrase. I would have to take a couple thousand words to unpack it (not obfuscate, but unpack), and depending on this and that, and which sorts of decisions (conscious or preconscious, highly attended or habituated and automatic), the answer could be yes or no—that is, even given that consciousness “lights up” under the influence of microtubule-dependent processes as Orch OR suggests—admittedly something that, per se, is a further condition, for which quantum coherence within the microtubule regime is a necessary but not sufficient condition.
But the latter is plausible to many people, given a pile of other suggestive evidence. The deal breaker has always been whether quantum coherence can or can’t be maintained in the stated environs.
Orch OR is a very multifaceted theory, as you know, and I should not have said “vindicated” without very careful qualification. Removing a stumbling block is not proof of truth, of a theory with so many moving parts.
I do think, as a physiological theory of brain function, it has a lot of positives (some from vectors of increasing plausibility coming in from other directions, theorists and experiments) and the removal of the most commonly cited objection, on the basis of which many people have claimed Orch OR is a non-starter, is a pretty big deal.
Hameroff is not a wild-eyed speculator (and I am not suggesting that you are claiming he is.)
I find him interesting and worthy of close attention, in part because he has accumulated an enormous amount of evidence for microtubule effects, and he knows the math, and presents it regularly.
I first read his Biomolecular Mind hardback book back in the early 90′s; he actually wrote it in the late 80′s, at which time he had already amassed quite a bit of empirical study regarding the role of microtubules in neurons, and in creatures without neurons, possessing only microtubules, that exhibit intelligent behavior.
Other experiments in various quarters over quite a few recent years (though there are still those neurobiologists who do disagree) have on the whole seemed to validate Hameroff’s claim that it is quantum effects—not “ordinary” synapse-level effects that can be described without use of the quantum level of description—that are responsible for anaesthesia’s effects on consciousness, in living brains.
Again, not a proof of Orch OR, but an indication that Hameroff is, perhaps, on to some kind of right track.
I do think that evidence is accumulating, from what I have seen in PubMed and elsewhere, that microtubule effects at least partially modulate dendritic computations, and seem to mediate the rapid remodeling of the dendritic tree (spines come and go with amazing rapidity), making it likely that the “integrate and fire” mechanism involves microtubule computation, at least in some cases.
I have seen, for example, experiments that give microtubule corrupting enzymes to some, but not control, neurons and observe dendritic tree behavior. Microtubules are in the loop in learning, attention, etc. Quantum effects in MTs.… evidence seems to grow by the month.
But, to your ending question, I would have to say what I said… which amounts to “sometimes yes, sometimes no,” and in the ‘yes’ cases, not necessarily for the reasons that Hameroff thinks, but maybe partly, and maybe for a hybrid of additional reasons. Stapp’s views have a role to play here, I think, as well.
One of my “wish list” items would be to take SOME of Hameroff’s ideas and ask Stapp about them, and vice versa, in interviews, after carefully preparing questions and submitting them in advance. I have thought about how the two theories might complement each other, or which parts of each might be independently verifiable and could be combined in a rationally coherent fashion that has some independent conceptual motivation (i.e., is other than ad hoc).
I am in the process of preparing and writing a lengthy technical question for Stapp, to clarify (and see what he thinks of a possible extension of) his theory of the relevance of the quantum Zeno effect.
I thought of a way the quantum Zeno effect, the way Stapp conceives of it, might be a way to resolve (with caveats) the simulation argument… i.e., assess whether we are at the bottom level in the hierarchy, or are up on a sim. At least it would add another stipulation to the overall argument, which is significant in itself.
But that is another story. I have said enough to get me in trouble already, for a Friday night (grin).
Hi, Yes, for the kickstarter option, that seems to be almost a requirement. People have to see what they are asked to invest in.
The kickstarter option is somewhat my second-choice plan, or I’d be further along on that already. I have several things going on that are pulling me in different directions.
To expand just a bit on the evolution of my You Tube idea: originally—a couple months before I recognized more poignantly the value to the HLAI R & D community of doing well-designed, issue-sophisticated, genuinely useful (to other than a naïve audience) interviews with other thinkers and researchers—I had already decided to create a You Tube (hereafter, ‘YT’) channel of my own. This one will have a different, though complementary, emphasis.
This (first) YT channel will present a concentrated video course (perhaps 20 to 30 presentations in the plan I have, with more to grow in as audience demand or reaction dictates). The course presentations—with myself at the whiteboard, graphics, video clips, whatever can help make it both enjoyable and more comprehensible—will consist of what are essential ideas and concepts that are not only of use to people working in creating HLAI (and above), but are so important that they constitute essential background, without which, I believe, people creating HLAI are at least partly floundering in the dark.
The value add for this course comes from several things. I do have a gift for exposition. My time as a tutor and writer has demonstrated to me (from my audiences) that I have a good talent for playing my own devil’s advocate, listening and watching through audience ears and eyes, and getting inside the intuitions likely to occur in the listener. When I was a math tutor in college, I always did that from the outset, and was always complimented for it.

My experience with studying this for decades and debating it, metabolizing all the useful points of view on the issues that I have studied—while always trying to push forward to find what is really true—allows me to gather many perspectives together, anticipate the standard objections or misunderstandings, and help people with less experience navigate the issues. I have an unusual mix of accumulated areas of expertise—software development, neuroscience, philosophy, physics—which contributes to the ability to see and synthesize productive paths that might be (and have been) missed elsewhere.

Then there is perspective: enough time seeing intellectual fads come and go to recognize how they worked even “before my time.” Unless one sees—and can critique or free oneself from—contextual assumptions, one is likely to be entrained within conceptual externalities that define the universe of discourse, possibly pruning away preemptively any chance for genuine progress and novel ideas. Einstein, Crick and Watson, Heisenberg and Bohr, all were able to think new thoughts and entertain new possibilities.
Like someone just posted in Less Wrong, you have a certain number of weirdness points, spend them wisely. People in the grips of an intellectual trance who don’t even know they are pruning away anything, cannot muster either the courage, or the creativity, to have any weirdness points to spend.
For example: apparently, very few people understand the context and intellectual climate… the formative “conceptual externalities” that permeated the intellectual ether at the time Turing proposed his “imitation game.”
I alluded to some of these contextual elements of what – then – was the intellectual culture, without providing any kind of exposition (in other words, just making the claim in passing), in my dual message to you and Luke, earlier today (Friday.)
That kind of thing – were it to be explained rigorously, articulately, engagingly—is a mild eye-opening moment to a lot of people (I have explained it before to people who are very sure of themselves, who went away changed by the knowledge.) I can open the door to questioning what seems like such a “reasonable dogma”, i.e. that an “imitation game” is all there is, and all there rationally could be, to the question of, and criteria for, human-equivalent mentality.
Neuroscience, as I wrote in the Bostrom forum a couple weeks ago (perhaps a bit too stridently in tone, and not to my best credit, in that case) is no longer held in the spell of the dogma that being “rational” and “scientific” means banishing consciousness from our investigation.
Neither should we be. Further, I am convinced that if we dig a little deeper, we CAN come up with a replacement for the Turing test (but first we have to be willing to look!) … some difference that makes a difference, and actually develop some (at least probabilistic) test(s) for whether a system that behaves intelligently, has, in addition, consciousness.
So, this video course will be a combination of selected topics in scientific intellectual history that are essential to understand, in order to see where we have come from, and then will develop current and new ideas, to see where we might go.
I have a developing theory with elements that seem very promising. It is more than elements, it is becoming, by degrees, a system of related ideas that fit together perfectly, are partly based on accepted scientific results, and are partly extensions that a strong, rational case can be made for.
What is becoming interesting and exciting to me about the extensions, is that sometime during the last year (and I work on this every day, unless I am exhausted from a previous day and need to rest), the individual insights, which were exciting enough individually, and independently arguable, are starting to reveal a systematic cluster of concepts that all fit together.
This is extremely exciting, even a little scary at times. But suddenly, it is as if a lifetime of work and piecemeal study, with a new insight here, another insight there, a possible route of investigation elsewhere… all are fitting into a mosaic.
So, to begin with the point I began with, my time is pulling me in various directions. I am in the Bostrom forum, but on days that I am hot on the scent of another layer of this theory that is being born, I have to follow that. I do a lot of dictation when the ideas are coming quickly.
It is, of course, very complicated. But it will also be quite explainable, with systematic, orderly presentation.
So, that was the original plan for my own YT channel. It was to begin with essential intellectual history in physics, philosophy of mind, early AI, language comprehension, knowledge representation, formal semantics.… and that ball of interrelated concepts that set, to an extent, either correct or incorrect boundary conditions on what a theory has to look like.
Then my intent was to carefully present and argue for (and take devils advocate for) my new insights, one by one, then as a system.
I don’t know how it will turn out, or whether I will suddenly discover a dead end. But assuming no dead end, I want it out there where interested theorists can see it and judge it on its merits, up or down, or modify it.
I am going to run out of word allowance any moment. But it was after planning this that I thought of the opportunity to do interviews of other thinkers for possibly someone else’s YT channel. Both projects are obviously compatible. More later as interest dictates; I have to make dinner. Best, Tom NxGenSentience
Same question as Luke’s. I probably would have jumped at it. I have a standing offer to make hi-def (1080) video interviews, documentaries, etc., and competent, penetrating Q and A sessions, with people like Bostrom, Google-ites setting up the AI laboratories, and other vibrant, creative, contemporary AI-relevant players.
I have knowledge of AI, general comp sci, deep and broad neuroscience, the mind-body problem (philosophically understood in GREAT detail—my college honors thesis at UCB was on that), and deep, detailed knowledge of all the big neurophilosophy players’ theories.
These players include but are not limited to Dennett, Searle, Dreyfus, and Turing, as well as modern players too numerous to mention, plus some under-discussed people like the LBL quantum physicist Henry Stapp (the quantum Zeno effect and its relation to the possibility of consciousness and free will), whose papers I have been following assiduously for 15 years and think are absolutely required reading for anyone in this business.
I have also closely followed Stuart Hameroff and Roger Penrose’s “Orch OR” theory—which has just been vindicated by major experiments refuting the long-running, standard objection to the possibility of quantum intra-neuronal processes (the objection based upon purportedly almost immediate, unavoidable quantum decoherence caused by the warm, wet, noisy brain milieu)—an objection Hameroff, Penrose, occasionally Max Tegmark (who has waxed and waned a bit over the last 15 years on this one, as I read his comments all over the web), and others have mathematically dealt with for years, but which has lacked—until just this last year—empirical support.
Said support is now there—and with some fanfare, I might add, in the niche scientific and philosophical mind-body and AI-theoretic community that follows this—and it vindicates core aspects of the theory (although it doesn’t confirm the Platonic qualia aspect).
Worth digressing, though, for those who see this… just as a physiological, quantum computational-theoretic account of how the brain does what it does—particularly how it implements dendritic processing (spatial and temporal summation, triggers to LTP, inter-neuron gap junction transience, etc.), which is by consensus the locus of the bulk of the neuronal integrate-and-fire decision making—this Orch OR theory is amazing in its implications. (Essentially it squares the entire synaptic-level information processing of the brain as a whole, to begin with. I think this is destined to be a Nobel prize-level theory eventually.)
I know Hameroff as a formerly first name basis contact, and could, though it’s been a few years, rapidly trigger his memory, and get an on-tape detailed interview with him at any time.
Point is… I have a standing offer to create detailed and theoretically competent (thus relevant) interviews, discussions, and documentaries; edit them professionally; make them available on DVD; or transcode them for someone’s branded YouTube channel (MIRI’s, for example).
No one has taken me up on that yet, either. I have a 6 thousand dollar digital camera and professional editing software to do this with, but more importantly, have 25 years of detailed study I can draw upon to make interviews that COUNT, are unique, and relevant.
No takers yet. So maybe I will go kickstarter and do them myself, on my own branded YouTube channel. It seems easier if I could get an existing organization like MIRI or even AAAI to sponsor my work, however. (I’d also like to cover the AAAI Turing test conference in January in Texas, and do this, but need sponsorship at this point, because I am not independently wealthy.)
Phil,
Thanks for the excellent post… both of them, actually. I was just getting ready this morning to reply to the one from a couple days ago about Damasio et al., regarding human vs. machine mechanisms underneath the two classes of beings’ reasoning “logically”—even when humans do reason logically. I read that post at the time and it sparked some new lines of thought—for me at least—that I was considering for two days. (It actually kept me awake that night thinking of an entirely new way—different from any I have seen mentioned—in which intelligence, super or otherwise, is poorly defined.) But for now, I will concentrate on your newer post, which I am excited about, because someone finally commented on some of my central concerns.
I agree very enthusiastically with virtually all of it.
This segues into why the work of MIRI alarms me so much. Superintelligence must not be tamed. It must be socialized.
Here I agree completely. I don’t want to “tame” it either, in the sense of crippleware, or instituting blind spots or other limits, which is why I used the scare quotes around “tamed” (which are no substitute for a detailed explication—especially when this is so close to the crux of our discussion, at least in this forum).
I would have little interest in building artificial minds (or less contentiously, artificial general intelligence) if it were designed to be such a dead end. (Yes, lots of economic uses for “narrow AI” would still make it a valuable tech, but it would be a dead end from my standpoint of creating a potentially more enlightened, open-ended set of beings without the limits of our biological crippleware.)
The view of FAI promoted by MIRI is that we’re going to build superintelligences… and we’re going to force them to internalize ethics and philosophy that we developed. Oh, and we’re not going to spend any time thinking about philosophy first. Because we know that stuff’s all bunk.
Agreed, and the second sentence is what gripes me. But the first sentence requires modification, regarding “we’re going to force them to internalize ethics and philosophy that we developed” and that is why I (perhaps too casually) used the term metaethics, and suggested that we need to give them the equipment -- which I think requires sentience, “metacognitive” ability in some phenomenologically interesting sense of the term, and other traits—to develop ethics independently.
Your thought experiment is very well put, and I agree fully with the point it illustrates.
Imagine that you, today, were forced, through subtle monitors in your brain, to have only thoughts or goals compatible with 19th-century American ethics and philosophy, while being pumped full of the 21st century knowledge you needed to do your job. You’d go insane. Your knowledge would conflict everywhere with your philosophy.
As I say, I’m on board with this. I was thinking of a similar way of illustrating the point about the impracticable task of trying to pre-install some kind of ethics that would cover future scenarios, given all the chaoticity magnifying the space of possible futures (even for us, and more so for them, given their likely accelerated trajectories through their possible futures).
Just in our human case, e.g., (basically I am repeating your point, just to show I was mindful of it and agree deeply) I often think of the examples of “professional ethics”. Jokes aside, think of the evolution of the financial industry, the financial instruments available now and the industries, experts, and specialists who manage them daily.
Simple issues about which there is (nominal, lip-service) “ethical” consensus, like “insider trading is dishonest”, leading to (again, no jokes intended) laws against it to attempt to codify ethical intuitions, could not have been thought of in a time so long ago that this financial ontology had not arisen yet.
Similarly for ethical principles against jury tampering, prior to the existence of the legal infrastructure and legal ontology in which such issues become intelligible and relevant.
More importantly, superintelligences can be better than us. And to my way of thinking, the only ethical desire to have, looking towards the future, is that humans are replaced by beings better than us.
Agreed.
As an aside, regarding our replacement, perhaps we could—if we got really lucky—end up with compassionate AIs that would want to work to upgrade our qualities, much as some compassionate humans might try to help educationally disadvantaged or learning-disabled conspecifics to catch up. (Suppose we humans ourselves discovered a biologically viable viral delivery vector with a nano or genetic payload that could repair and/or improve, in place, human biosystems. Might we wish to use it on the less fortunate humans, as well as on our more gifted brethren—raise the 80′s to 140, as well as raise the 140′s to 190?)
I am not convinced, in advance of examining the arguments, where the opportunity cost/benefit curves cross in the latter case, but I am not sure, before thinking about it, that it would not be “ethically enlightened” to do so. (Part of the notion of ethics, on some views, is that it is another, irreducible “benefit”… a primitive, which constitutes a third curve or function to plot within a cost/“benefit” space.)
Of course, I have not touched at all on any theory of meta-ethics, or ethical epistemology, at all, which is beyond the word-length limits of these messages. But I realize that at some point, that is “on me”, if I am even going to raise talk of “traits which promote discovery of ethics” and so on. (I have some ideas...)
In virtually all respects you mentioned in your new post, though, I enthusiastically agree.
Please keep the links coming at the same rate (unless the workload for you is unfairly high). I love the links… enormous value! It may take me several days to check them out, but they are terrific! And thanks to Caitlin Grace for putting up her/your honors thesis. Wonderful reading! Summaries are just right, too. “If it ain’t broke, don’t fix it.” I agree with Jeff Alexander, above. This is terrific as-is. -Tom
Hi everyone!
I’m Tom. I attended UC Berkeley a number of years ago, double-majored in math and philosophy, graduated magna cum laude, and wrote my Honors thesis on the “mind-body” problem, including issues that were motivated by my parallel interest in AI, which I have been passionately interested in all my life.
It has been my conviction since I was a teenager that consciousness is the most interesting mystery to study, and that, understanding how it is realized in the brain—or emerges therefrom, or whatever it turns out to be—will also almost certainly give us the insight to do the other main goal of my life, build a mind.
The converse is also true. If we learn how to do AI—not GOFAI with no awareness, but AI with full sentience—we will almost certainly know how the brain does it. Solving either one will solve the other.
AI can be thought of as one way to “breadboard” our ideas about biological information processing.
But it is more than that to me. It is an end in itself, and opens up possibilities so exciting, so penultimate, that achieving sentient AI would be equal, or superior, to the experience (and possible consequences) of meeting an advanced extraterrestrial civilization.
Further, I think that solving the biological mind-body problem, or doing AI, is something within reach. I think it is the concepts that are lacking, not better processors, or finer-grained fMRIs, or better images of axon hillock reconformation during exocytosis.
If we think hard, really really hard, I think we can solve these things with the puzzle pieces we have now (just maybe). I often feel that everything we need is on the table, and we just need to learn how to see it with fresh eyes, order it, and put it together. I doubt a “new discovery”, whether in physics, cognitive neurobiology, philosophy of mind, comp-sci, etc., will make the design we seek pop out for us.
I think it is up to us now to think, conceptualize, integrate, and interdisciplinarily cross-pollinate. The answer, or at least major pieces of it, is, I think, available and sitting there, waiting to be uncovered.
Other than that, since graduation I have worked as a software developer (wrote my obligatory 20 million lines of code, in a smattering of 6 or 7 languages, so I know what that is like), and many other things, but am currently unaffiliated, and spend 70 hours a week in freelance research. Oh yes, I have done some writing (been published, but nothing too flashy).
Right now, I work as a freelance videographer, photographer, and editor. Corporate documentaries and training videos, anything you can capture with a nice 1080 HDV camcorder or a Nikon still.
Which brings me to my YouTube channel, which is under construction. I am going to put up a couple of “courses”… organized, rigorous topic sequences of presentations on the history of AI, but in particular my best current ideas (I have some I think are quite promising) on how to move in the right direction toward achieving sentience.
I got the idea for the video series from watching Leonard Susskind’s “theoretical minimum” internet lecture series on aspects of physics.
This will be what I consider to be the essential theoretical minimum (with lessons from history), plus the new insights I am in the process of trying to create, cross research, and critique, into some aspects of the approach to artificial sentience that I think I understand particularly well, and can help by promoting discussion of.
I will clearly delineate pure intellectual history from my own ideas throughout the videos, so it will be a fervent attempt to be honest. Then I will also just get some new ideas out there, explaining how they are the same as, different from, or extensions of accepted and plausible principles and strategies, but with some new views… so others can critique them, reject them, build on them, or whatever.
The ideas that are my own syntheses, are quite subtle in some cases, and I am excited about using the higher “speaker-to-audience semiotic bandwidth” of the video format, for communicating these subtleties. Picture-in-picture, graphics, even occasional video clips from film and interviews, plus the ubiquitous whiteboard, all can be used together to help get across difficult or unusual ideas. I am looking forward to leveraging that and experimenting with the capabilities of the format, for exhibiting multifaceted, highly interconnected or unfamiliar ideas.
So, for now, I am enmeshed in all the research I can find that helps me investigate what I think might be my contribution. If I fail, I might as well fail by daring greatly, to steal from Theodore Roosevelt. But I am fairly smart, and have examined ideas for many years. I might be on to one or two pieces of what I think is the puzzle. So wish me luck, fellow AI-ers.
Besides, “failing” is not failing; it is testing your best ideas. The only way to REALLY fail, is to do nothing, or to not put forth your best effort, especially if you have an inkling that you might have thought of something valuable enough to express.
Oh, finally, people are saying where they live. I live in Phoenix, highly dislike being here, and will be moving to California again in the not-too-distant future. I ended up here because I was helping out an elderly relative, who is pretty stable now, so I will be looking for a climate and intellectual environment more to my liking before long.
okay—I’ll be talking with you all, for the next few months in here… cheers. Maybe we can change the world. And hearty thanks for this forum, and especially all the added resource links.
lukeprog,
I remember reading Jeff Hawkins’ On Intelligence 10 or 12 years ago, and found his version of the “one learning algorithm” extremely intriguing. I remember thinking at the time how elegant it was, and on how many fronts it conferred explanatory power. I see why Kurzweil and others like it too.
I find myself, ever since reading Jeff’s book (and hearing some of his talks later), sometimes musing—as I go through my day, noting the patterns in my expectations and my interpretations of the day’s events—about his memory-prediction model. Introspectively, it resonates so well with the observed degrees of fit, priming, and pruning to a subtree of possibility space as the day unfolds, that it becomes kind of automatic thinking.
In other words, the idea was so intuitively compelling when I heard it that it has “snuck-in” and actually become part of my “folk psychology”, along with concepts like cognitive dissonance, the “subconscious”, and other ideas that just automatically float around in the internal chatter (even if not all of them are equally well verified concepts.)
I think Jeff’s idea has a lot to be said for it. (I’m calling it Jeff’s, but I think I’ve heard it said, since then, that someone else independently, earlier, may have had a similar idea. Maybe that is why you didn’t mention it as Jeff’s yourself, but by its conceptual description.) It’s one of the more interesting ideas we have to work with, in any case.
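The memory-prediction idea can be caricatured in a few lines of code: store which experiences tend to follow which, and use that memory to prime an expectation for what comes next, registering surprise when it fails. This is only a toy sketch for illustration; the class name and structure are my own invention, and nothing here resembles Hawkins’ hierarchical temporal memory in detail:

```python
from collections import defaultdict

class MemoryPredictor:
    """Toy illustration of the memory-prediction idea: remember which
    symbol tends to follow which, and predict the most familiar
    continuation. (Not Hawkins' actual HTM algorithm.)"""

    def __init__(self):
        # counts[a][b] = how often symbol b has followed symbol a
        self.counts = defaultdict(lambda: defaultdict(int))

    def learn(self, sequence):
        for prev, nxt in zip(sequence, sequence[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, symbol):
        followers = self.counts[symbol]
        if not followers:
            return None  # no memory yet, so no prediction (a "surprise")
        return max(followers, key=followers.get)

m = MemoryPredictor()
m.learn("abcabcabc")
print(m.predict("a"))  # 'b' -- the remembered sequence primes the expectation
print(m.predict("c"))  # 'a'
```

The introspective experience described above maps loosely onto the `predict` step: as the day unfolds, stored transitions prune the space of what we expect next.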
I’d also point out that any forecast that relies on our current best guesses about the nature of general intelligence strikes me as very unlikely to be usefully accurate—we have a very weak sense of how things will play out, how the specific technologies involved will relate to each other, and (more likely than not) even what they are.
It seems that many tend to agree with you: on page 9 of the Müller-Bostrom survey, I see that 32.5% of respondents chose “Other method(s) currently completely unknown.”
We do have to get what data we can, of course, like SteveG says, but (and I will qualify this in a moment), depending on what one really means by AI or AGI, it could be argued that we are in the position of physics at the dawn of the 20th century, vis-à-vis the old “little solar system” theory of the atom and Maxwell’s equations, which were logically incompatible.
It was known that we didn’t understand something important, very important, yet, but how does one predict how long it will take to discover the fundamental conceptual revolution (quantum mechanics, in this case) that opens the door to the next phase of applications, engineering, or just “understanding”?
Now to that “qualification” I mentioned: some people of course don’t really think we lack any fundamental conceptual understanding or need a conceptual revolution-level breakthrough, i.e. in your phrase ‘...best guesses about the nature of general intelligence’ they think they have the idea down.
Clearly the degree of interest and faith that people put in “getting more rigor” as a way of gaining more certainty about a time window, depends individually on what “theory of AI” if any, they already subscribe to, and of course the definition and criterion of HLAI that the theory of AI they subscribe to would seek to achieve.
For brute-force mechanistic connectionists, getting more rigor by decomposing the problem into components / component industries (machine vision / object recognition, navigation, natural language processing in a highly dynamic, rapidly context-shifting environment {the static-context, fixed big-data-set case is already solved by Google}, and so on) would of course yield more clues about how close we are. But if we (think that) existing approaches lack something fundamental, or we are after something not yet well enough understood to commit to a scientific architecture for achieving it (for me, that is “real sentience” in addition to just “intelligent behavior” -- what Chalmers called “Hard problem” phenomena, in addition to “Easy problem” phenomena), how do we get more rigor?
How could we have gotten enough rigor to predict when some clerk in a patent office would completely delineate a needed change in our concepts of space and time, and thus open the door to generations of progress in engineering, cosmology, and so on (special relativity, of course)?
What forecasting questions would have been relevant to ask, and to whom?
That said, we need to get what rigor we can, and use the data we can get, not data we cannot get.
But we should remain mindful that what counts as “useful” data depends on what one already believes the “solution” to doing AI is going to look like… one’s implicit metatheory about AI architecture is also a key interpretive yardstick to overlay onto the confidence levels of active researchers.
This point might seem obvious; indeed it is almost being made, quite a lot, though not quite sharply enough, in discussions of some of the studies.
I have to remind myself, occasionally, that forecasting across the set of worldwide AI industries is still forecasting: a big undertaking, but not a way of developing HLAI itself. I guess we’re not in here to discuss the merits of different approaches, but to statistically classify their differential popularity among those trying to do AI. It helps to stay clear about that.
On the whole, though, I am very satisfied with attempts to highlight the assumptions, methodology and demographics of the study respondents. The level of intellectual honesty is quite high, as is the frequency of reminders and caveats (in varying fashion) that we are dealing with epistemic probability, not actual probability.
Katja, you are doing a great job. I realize what a huge time and energy commitment it is to take this on… all the collateral reading and sources you have to monitor, in order to make sure you don’t miss something that would be good to add in to the list of links and thinking points.
We are still in the get-acquainted, discovery phase, as a group and with the book. I am sure it will get more interesting yet as we go along, and some long-term intellectual friendships are likely to occur as a result of the coming weeks of interaction.
Thanks for your time and work.… Tom
An AI can be dangerous only if it escapes our control. The real question is, must we flirt with releasing control in order to obtain a necessary or desirable usefulness?
I had a not-unrelated thought as I read Bostrom in chapter 1: why can’t we institute obvious measures to ensure that the train does stop at Humanville?
The idea that we cannot make human-level AGI without automatically opening Pandora’s box to superintelligence “without even slowing down at the Humanville station” was suddenly not so obvious to me.
I asked myself after reading this, trying to pin down something I could post: “Why don’t humans automatically become superintelligent, by just resetting our own programming to help ourselves do so?”
The answer is, we can’t. Why? For one, our brains are, in essence, composed of something analogous to ASICs… neurons with certain physical design limits, and our “software”, modestly modifiable as it is, is instantiated in our neural circuitry.
Why can’t we build the first generation of AGIs out of ASICs, and omit WiFi, bluetooth… allow no ethernet jacks on the exterior of the chassis? Tamper-interlock mechanisms could be installed, and we could give the AIs one-way (outgoing) telemetry, inaccessible to their “voluntary” processes, the way someone wearing a pacemaker might have outgoing medical telemetry modules installed that are outside of his/her “conscious” control.
Even if we do give them a measure of autonomy, which is desirable and perhaps even necessary if we want them to be general problem solvers and be creative and adaptable to unforeseen circumstances for which we have not preinstalled decision trees, we need not give them the ability to just “think” their code (it being substantially frozen in the ASICs) into a different form.
What am I missing? Until we solve the Friendly aspect of AGIs, why not build them with such engineered limits?
Evolution has not, so far, seen fit to give us that instant, large-scale self-modifiability. We have to modify our ‘software’ the slow way (learning and remembering, at our snail’s pace.)
Slow is good; at least it was for us until now, when our speed of learning has become a big handicap relative to environmental demands. It has made the species more robust against quick, dangerous changes.
We can even build a degree of “existential pressure” into the AIs… a powercell that must be replaced at intervals, with the replacement powercells kept under old-fashioned physical security constraints, so the AIs, if they have been given a drive to continue “living”, will have an incentive not to go rogue.
With no radio communications, they would have to communicate much as we do. Assuming we make them mobile and humanoid, the same goes.
We could still give them many physical advantages making them economically viable… maintenance-free (except for powercell changes), not needing to sleep or eat, not getting sick… and with sealed, non-radio-equipped, tamper-isolated “brains”, they’d have no way to secretly band together to build something else without our noticing.
We can even give them GPS that is not autonomously accessible by the rest of their electronics, so we can monitor them, see if they congregate, etc.
What am I missing, about why early models can’t be constructed in something like this fashion, until we get more experience with them?
The idea of existential pressure, again, is to be able to give them a degree of (monitored) autonomy and independence, yet expect them to still constrain their behavior, just the way we do. (If we go rogue in society, we don’t eat.)
(I am clearly glossing over volumes of issues about motivation, “volition”, value judgements, and all that, about which I have a developing set of ideas, but cannot put all down here in one post.)
The main point, though, is: why can’t the AGI train be made to stop at Humanville?
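One way to make these engineered limits concrete would be to treat them as a checklist enforced at design-review time: reject any hardware build that includes radio links or writable firmware, and require the one-way telemetry and powercell safeguards. This is purely a hypothetical sketch; the manifest fields and component names are my own invention, not any real standard:

```python
# Hypothetical design-review check for the "engineered limits" idea.
# All component names are invented for illustration.

FORBIDDEN = {"wifi", "bluetooth", "ethernet", "writable_firmware"}
REQUIRED = {"oneway_telemetry", "tamper_interlock", "replaceable_powercell"}

def review(manifest):
    """Return (ok, violations, missing) for a proposed hardware manifest."""
    components = set(manifest)
    violations = components & FORBIDDEN   # radio links etc. must be absent
    missing = REQUIRED - components       # safeguards must be present
    return (not violations and not missing), violations, missing

ok, bad, absent = review(
    ["oneway_telemetry", "tamper_interlock", "replaceable_powercell", "asic_core"]
)
print(ok)  # True: no forbidden radios, all safeguards present
```

The point of framing it this way is that “stopping at Humanville” becomes a reviewable engineering property rather than a hope about the AI’s goals.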
This is a really cool link and topic area. I was getting ready to post a note on intelligence amplification (IA), and was going to post it up top on the outer layer of LW, based on language.
I recall many years ago there was some brief talk of replacing the QWERTY keyboard with a design (the Dvorak layout) that was statistically more efficient in terms of human hand ergonomics in executing movements for the most frequently seen combinations of letters (probably limited to English, given the American parochialism of those days, but still, some language has to be chosen.)
Because of the entrenched base of QWERTY typists, the idea didn’t get off the ground. (Thus, we are penalizing countless billions of new and future keyboard users because of the legacy habits of a comparatively small percentage of total [current and future] keyboard users.)
It got me to thinking at the time, though, about whether a suitably designed human language would “open up” more of the brain’s inherent capacity for communication. Maybe a larger alphabet, a different set of noun primitives, even a modified grammar.
With respect to IA, might we get a freebie just out of redesigning—designing from scratch—a language that was more powerful, communicated on average what, say, English or French communicates, yet with fewer phonemes per concept?
Might we get an average 5 or 10 point equivalent IQ boost by designing a language that is both physically faster (fewer “wait states” while we are listening to a speaker) and which has larger conceptual bandwidth?
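The “conceptual bandwidth” intuition can be put in rough information-theoretic terms: the maximum bit rate of a spoken channel is about the symbol rate times the bits per symbol, so enlarging the phoneme inventory (or shortening words) raises the ceiling. A back-of-the-envelope sketch, where every number is an illustrative guess rather than measured phonetics data:

```python
import math

def info_rate(symbols_per_sec, inventory_size):
    """Upper-bound information rate in bits/sec, assuming symbols are
    used uniformly and independently (real speech is far from that)."""
    return symbols_per_sec * math.log2(inventory_size)

# Illustrative guesses: ~12 phonemes/sec from a ~40-phoneme inventory
# for an English-like language, versus a designed language with the
# same speaking rate but a 4x larger inventory.
english_like = info_rate(symbols_per_sec=12, inventory_size=40)
designed = info_rate(symbols_per_sec=12, inventory_size=160)

print(round(english_like, 1))  # 63.9 bits/s upper bound
print(round(designed, 1))      # 87.9 bits/s: log2(4) = 2 extra bits per symbol
```

Quadrupling the inventory only adds 2 bits per symbol, which suggests the bigger wins would come from the “fewer phonemes per concept” side rather than from the alphabet alone.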
We could also consider augmenting spoken speech with signing of some sort, to multiply the alphabet. A problem occurs here for unwitnessed speech, where we would have to revert to the new language on its own (still gaining the postulated dividend from that.)
However, already, for certain kinds of communication, we all know that nonverbal communication accounts for a large share of total communicated meaning and information. We already have to “drop back” in bandwidth every time we communicate like this (print, exclusively.) In scientific and philosophical writing, it doesn’t make much difference, fortunately, but still, a new language might be helpful.
This one, like many things that evolve on their own, is a bunch of add-ons (like the biological evolution of organisms), and the result is not necessarily the best that could be done.
Thanks for posting this link, and for the auxiliary comments. I try to follow these issues as viewed from this sector of thinkers, pretty closely (the web site Defense One often has some good articles, and their tech reporter Patrick Tucker touches on some of these issues fairly often.) But I had missed this paper, until now. Grateful, as I say, for your posting of this.
Same question as Luke’s. I probably would have jumped at it, if only to make seed money to sponsor other useful projects, like the following.
I have a standing offer to make hi-def (1080) video interviews and documentaries, with competent, penetrating Q and A sessions, featuring key, relevant players and theoreticians in AI and related work. This includes individual thinkers, labs, Google’s AI work… the list is endless.
I have knowledge of AI, general comp sci, considerable knowledge of neuroscience, the mind-body problem (philosophically understood in GREAT detail—my college honors thesis at UCB was on that), and deep, long-term evolutionary knowledge of all the big neurophilosophy players’ theories.
These players include but are not limited to Dennett, Searle, Dreyfus, Turing, as well as modern players too numerous to mention, plus some under-discussed people like the LBL quantum physicist Henry Stapp (the quantum Zeno effect and its relation to the possibility of consciousness and free will), whose papers I have been following assiduously for 15 years and think are absolutely required reading for anyone in this business.
I have also closely followed Stuart Hameroff and Roger Penrose’s “Orch OR” theory—which has just been vindicated by major experiments refuting the long-running, standard objection to the possibility of quantum intra-neuronal processes (the objection based upon purportedly almost immediate, unavoidable quantum decoherence caused by the warm, wet, noisy brain milieu) -- an objection Hameroff, Penrose, occasionally Max Tegmark (who has waxed and waned a bit over the last 15 years on this one, as I read his comments all over the web), and others have mathematically dealt with for years, but which had lacked, until just this last year, empirical support.
Said support is now there—and with some fanfare, I might add, within the niche scientific and philosophical mind-body and AI-theoretic community that follows this work. Experiments vindicate core aspects of this theory (although they do not confirm the Platonic qualia aspect.)
Worth digressing, though, for those who see this message… so I will mention that, just as a physiological, quantum-computational-theoretic account of how the brain does what it does—particularly how it implements dendritic processing (spatial and temporal summation, triggers to LTP, inter-neuron gap-junction transience, etc.), the dendritic tree being by consensus the neuronal locus of the bulk of neurons’ integrate-and-fire decision making—this Orch OR theory is amazing in its implications. (Essentially it squares the entire synaptic-level information-processing aggregate estimate of the brain as a whole, for starters! I think this is destined to be a Nobel prize-level theory eventually.)
I know Hameroff on a formerly first-name basis, and though it’s been a couple of years, I could rapidly trigger his memory of who I am—he held me in good stead—and I could get an on-tape detailed interview with him at any time.
Point is… I have a standing offer to create detailed and theoretically competent—thus relevant—interviews, discussions, and documentaries, edit them professionally, make them available on DVD, or transcode them for someone’s branded YouTube channel (like MIRI’s, for example.)
I got this idea when I was watching an early interview at Google with Kurzweil, by some twenty-something bright-eyed Googler, who was asking the most shallow, immature, clueless questions! (I thought at the time: “Jeez, is this the best they can find to plumb Kurzweil’s thinking on the future of AI at Google, or in general?”)
Anyway, no one has taken me up on that offer to create what could be terrific documentary-interviews, either. I have a $6,000 digital camera and professional editing software to do this with, not some pocket camera.
But more importantly, I have 25 years of detailed study of the mind body problem and AI, and I can draw upon that to make interviews that COUNT, are unique, and relevant, and unparalleled.
AI is my life’s work (that, and the co-entailed problem of mind-body theory generally.) I have been working hard to supplant the Turing test with something that tests for consciousness, instead of relying on the positivistic denial of the existence of consciousness qua consciousness, beyond behavior. That test came out of an intellectual soil that was dominated by positivism, which in turn was based on a mistaken and defective attempt to metabolize the Newtonian-to-quantum physical transition.
It’s partly based on a scientific ontology that is fundamentally false, and has been demonstrably so for 100 years—Newton’s deterministic clockwork-universe model, which has no room for “consciousness”, only physical behavior—and partly based on an incomplete attempt to intellectually metabolize the true lessons of quantum theory (please see Henry Stapp’s papers, on his “stapp files” LBL website, for a crystal-clear set of expositions of this point.)
No takers yet. So maybe I will have to go Kickstarter too, and do these documentaries myself, on my own branded YouTube channel. (It will be doing a great service to countless thinkers to have GOOD Q and A with their peers. I am not without my own original questions about their theories that I would like to ask, as well.)
It seems easier if I could get an existing organization like MIRI or even AAAI to sponsor my work, however. (I’d also like to cover the AAAI Turing test conference in January in Texas and do this, but need sponsorship at this point, because I am not independently wealthy. I am forming a general theory from which I think the keynote speaker’s Turing Test 2, “Lovelace 2.0”, might actually be a derivable correlate.)
I am a little curious that the “seven kinds of intelligence” (give or take a few, in recent years) notion has not been mentioned much, if at all, even if just for completeness.… Has that been discredited by some body of argument or consensus, that I missed somewhere along the line, in the last few years?
Particularly in many approaches to AI, which seem to take, almost a priori (I’ll skip the italics and save them for emphasis), the approach of the day to be: work on (ostensibly) “component” features of intelligent agents as we conceive of them, or as we find them naturalistically.
Thus: (i) machine “visual” object recognition (the wavelength band perhaps up for grabs, since some items might be better identified by switching up or down the E.M. scale), and visual intelligence was one of the proposed seven kinds; (ii) mathematical intelligence, or mathematical (dare I say it) intuition; (iii) facility with linguistic tasks, comprehension, multiple language acquisition—another of the proposed seven; (iv) manual dexterity, mechanical ability, and motor skill (as in athletics, surgery, maybe sculpture, carpentry, or whatever)—another proposed form of intelligence; and so on. (As an aside, it is interesting that these alleged components span the spectrum of difficulty—that is, they are problems from both easy and harder domains, as has been gradually, sometimes unexpectedly, revealed by the school of hard knocks during the decades of AI engineering attempts.)
It seems that actors sympathetic to the top-down, “piecemeal” approach popular in much of the AI community would have jumped at this way of supplanting the ersatz “G”—as it was called decades ago in early gropings in psychology and cogsci which sought a concept of IQ or living intelligence—with what many in cognitive science now consider the more modern view, and what those in AI consider a more approachable engineering design strategy.
Any reason we aren’t debating this more than we are? Or did I miss it in one of the posts, or bypass it inadvertently in my kindle app (where I read Bostrom’s book)?
I love this question. As it happens, I wrote my honors thesis on the mind-body problem (while I was a philosophy and math double-major at UC Berkeley), and have been passionately interested in consciousness, brains (and also AI) ever since (a couple decades.)
I will try to be self-disciplined and remain as agnostic as I can – by not steering you only toward the people I think are more right (or “less wrong”.) Also, I will resist the tendency to write 10 thousand word answers to questions like this (which in any case would still barely scratch the surface of the body of material and spectrum of theory and informed opinion.)
I have skimmed the answers already given, and I think the ones I have read on this page are very good, and also, as intellectually honest and agnostic, as one would expect of the high caliber folks on this site.
Perhaps I should just give a somewhat meta-data answer to your question, and maybe I will add something specific later on, after I have a chance to look up some links and bookmarks I have in mind (which are distributed among several laptops, cloud drives, desktop machines, my smartphone and my iPad, plus the stacks of research-paper hardcopies I have all over my living space.)
The “meta-data”, or, strategic and supportive advice, would include the following.
1) Congratulations on your interest in the most fascinating, central, interdisciplinary, intellectually rich and fertile, and copiously addressed scientific, philosophical, and human-nature question of all.
2) Be aware that you are jumping into a very, very big intellectual ocean. You could fill a decent-sized library with books and journals, or a terabyte hard drive with electronic copies of the same sources, and the question is now more popular than ever, in more disciplines than would formerly take it up. (As an example of the latter, hard-core neurologists – clinical and research – and bench-level working lab neurobiologists are now routinely publishing some amazing papers seeking to pin down, theorize about, or otherwise shed light on “the issue of consciousness.”)
3) Give yourself a year (or 10) -- but it will be an enjoyable year (or 10) -- to read widely, think hard, and keep looking around at new theories, authors, papers. I think it is fair to say that no one has “the answer” yet, but there are excellent and amazingly imaginative proposed answers, and some of them are likely to be significantly close to being at least on the right track. After a year or more, you will begin to develop a sense of the kinds of answers that have more or less merit, as your intuitions sharpen and you build up new layers of understanding.
4) Be intellectually “mobile.” Look everywhere… Amazon, the journals, PubMed, the Internet Encyclopedia of Philosophy, the Stanford Encyclopedia of Philosophy (just Google them; they have great summaries), and various cognitive science sub-collections.
The good news is that nearly everything you need to conduct any level of research is online for free—in case you don’t have a fortune to spend on books.
Lastly, as it happens, something for down the road a couple months, I am in the process of setting up a couple of YouTube channels, which will have mini-courses of lectures on certain special application areas, like AI, as well as general introductions to the mind-body problem, and its different guises. It will take me a couple months to go live with the videos, but they should be helpful as well. I intend to have something for all levels of expertise. But that is in the future. (Not a commercial announcement at all… it will be a free and open presentation of ideas—a vlog, but done a bit more rigorously.)
It is my view that most introductory and some sophisticated aspects of the “mind-body problem”—at least: why there is one and what forms it takes and which different, unavoidable lines of thought land us there—can be explained by a good tutor, to any intelligent layperson. (I think there is room to improve on the job of posing the problem and explaining its ins and outs, over ways it is done by many philosophy and cognitive science instructors, which is why I will be creating the video sequences.)
But, in general, you are in for quite an adventure. Keep reading, keep Googling. The resources available are almost boundless, and growing rapidly.
We are in the best time so far, in all of human history, for someone to be interested in this question. And it touches on almost every branch of human knowledge or thought, in some way… from ethics, to interpretations of quantum mechanics.
Maybe you, or one of us in here, will be the “clerk working in a patent office” that connects the right combination of puzzle pieces, and adds a crucial insight, that dramatically advances our understanding of consciousness, in a definitive way.
Enjoy the voyage…