Lumifer, yes, there is established evidence that the (human) brain responds to magnetic fields, both in sensing orientation (varying by individual) and in the well-known induced "faux mystical experience" phenomenon, produced by subjecting the temporoparietal region to certain magnetic fields.
Did you read about Google's partnership with NASA and UCSB to build a 1,000-qubit quantum computer?
Technologically exciting, but… imagine a world without encryption. It would be as if all the locks and keys on all houses, cars, banks, and nuclear vaults disappeared, only incomparably more consequential.
That would be catastrophic for businesses, economies, governments, individuals, every form of commerce, and military communication.
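To make the stakes concrete, here is a toy sketch (my own illustration, with absurdly tiny primes chosen for the example; requires Python 3.8+) of why factoring-based encryption such as RSA is exactly what is at risk: anything that factors the public modulus quickly, as Shor's algorithm would on a large enough quantum computer, reconstructs the private key outright.

```python
# Toy RSA with absurdly tiny primes -- illustrative only, never secure.
p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent, derived from p and q

msg = 42
cipher = pow(msg, e, n)            # anyone can encrypt with the public (n, e)
assert pow(cipher, d, n) == msg    # only the key holder should decrypt

def factor(n):
    # brute force stands in for Shor's algorithm, which does this step
    # in polynomial time on a sufficiently large quantum computer
    f = 2
    while n % f:
        f += 1
    return f, n // f

p2, q2 = factor(n)                 # the whole secret falls out of factoring
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
assert pow(cipher, d2, n) == msg   # private key fully reconstructed
print("key recovered from factoring alone")
```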
Didn’t answer your question, I am sorry, but as a “fan” of quantum computing, and also a person with a long time interest in the quantum zeno effect, free will, and the implications for consciousness (as often discussed by Henry Stapp, among others), I am both excited, yet feel a certain trepidation. Like I do about nanotech.
I am writing a long essay and preparing a video on the topic, but it is a long way from completion. I do think quantum computing will have a dramatic effect on artifactual consciousness platforms, and I am even more certain that it will accelerate superintelligence (which is not at all the same thing, since intelligence and consciousness, in my opinion, are not coextensive).
From what I have read in open-source science and technology journals and news sources, general quantum computing seems to be coming faster than the time frame you suggested. I wouldn't be surprised to see prototypes in alpha or beta testing as soon as 2024, and I think wider deployment is a safe bet by 2034. Very widespread adoption may come a bit later, and government efforts to control the tech for security reasons will vary: later here, earlier there.
What do you mean by artificial consciousness, to the extent that it's not intelligence, and why do you think the problem is in a form where quantum computers are helpful?
The claim wasn't that artifactual consciousness wasn't (likely to be) sufficient for a kind of intelligence, but that the two are not coextensive. It might have been clearer to say that consciousness is (closer to being) sufficient for intelligence than intelligence (the way computer scientists often use the term) is to being a sufficient condition for consciousness (which it is not at all).
I needn’t have restricted the point to artifact-based consciousness, actually. Consider absence seizures (epilepsy) in neurology. A man can seize (lose “consciousness”) get up from his desk, get the car keys, drive to a mini-mart, buy a pack of cigarettes, make polite chat while he gets change from the clerk, drive home (obeying traffic signals), lock up his car, unlock and enter his house, and lay down for a nap, all in absence seizure state, and post-ictally, recall nothing. (Neurologists are confident these cases withstand all proposals to attribute postictal “amnesia” to memory failure. Indeed, seizures in susceptible patients can be induced, witnessed, EEGed, etc. from start to finish, by neurologists. ) Moral: intelligent behavior occurs, consciousness doesn’t. Thus, not coextensive. I have other arguments, also.
As to your second question, I'll have to defer an answer for now, because it would be copiously long, though I will try to think of a reply (the idea is also very complex and needs a little more polish, but I am convinced of its merit). I owe you a reply, though, before we're through with this forum.
Hi, and thanks for the link. I just read the entire article, which was good for a general news piece and, correspondingly, not definitive about the time frame (therefore, I'd consider it journalistically honest). "…might be decades away…" and "…might not really see them in the 21st century…" come to mind as the lower and upper estimates.
I don’t want to get out of my depth here, because I have not exhaustively (or representatively) surveyed the field, nor am I personally doing any of the research.
But I still say that a significant percentage of the sources I follow (the Nature news-summary sites, PubMed, where, oddly, lots of physical-science journals are indexed now too, and "smart layman" publications like New Scientist and the SciAm news site) continue to run mini-stories about groups nibbling away at the decoherence problem and finding approaches that don't require supercooled, exotic vacuum chambers (some even working toward chips).
If 10 percent of these stories have legs and aren't hype, then I have read about dozens of efforts that might yield prototypes in a 10-20 year window.
The Google-NASA-UCSB joint project seems pretty near-term (i.e., not 40 or 50 years down the road).
Given Google's penchant for quietly working away and then unveiling something amazing that the world thought was a generation away (like the driverless cars that Michigan's governor and legislature, home of Detroit, are in the process of licensing for larger-scale production and deployment), it wouldn't surprise me if a machine that could begin doing useful work popped up within 15 years.
Then it’s just daisychaining, and parallelizing with classical supercomputers doing error correction, preforming datasets to exploit what QCs do best, and interleaving that with conventional techniques.
I don’t think 2034 is overly optimistic. But, caveat revealed, I am not in the field doing the work, just reading what I can about it.
I am more interested in positing that we add them to our toolkit and asking what we can then do that is relevant to creating "interesting" forms of AI.
Thanks for your link to the NYT article.
Luke,
Thanks for posting the link. It's an April 2014 paper, as you know. I just downloaded the PDF, and it looks pretty interesting. I'll post my impressions, if I have anything worthwhile to say, either here in Katja's group or up top on LW generally, when I have time to read more of it.
Asr,
Thanks for pointing out the wiki article, which I had not seen. I actually feel a tiny bit relieved, but I still think there are a lot of very serious forks in the road that we should explore.
If we do not pre-engineer a soft landing, this could be the first existential catastrophe to arrive; it is one we should be working to avoid.
A world that suddenly loses encryption (or even faith in encryption!) would be roughly equivalent to a world without electricity.
I also worry about the legacy problem: all the critical documents encrypted with RSA, PGP, etc., sitting on hard drives, servers, and CD-ROMs, suddenly visible to anyone with access to the tech. How do we go about re-encoding all those "eyes only" documents into a post-quantum coding system (assuming one is shown practical and reliable) without their being looked at or opportunistically copied in the limbo state between old and new encrypted status?
Whom can we trust to do all this conversion, even once the new algorithms are developed?
This is actually almost intractably messy, at first glance.
I have dozens, some of them so good I have actually printed hardcopies of the PDFs—sometimes misplacing the DOIs in the process.
I will get some, though; some of them are, I believe, required reading for those of us looking at the human brain for lessons about the relationship between "consciousness" and other functions. There is one particularly interesting paper (74 pages, but a page-turner) whose original computer record I will try to find. I found it, and most of the others, on PubMed.
If we are in a different thread in a couple of days, I will flag you. I'd like to pick a couple of good ones, so it will take a little re-reading.
This is a really cool link and topic area. I was getting ready to post a note on intelligence amplification (IA) based on language, and was going to post it up top on the outer layer of LW.
I recall that many years ago there was some brief talk of replacing the QWERTY keyboard with a design that was statistically more efficient in terms of hand ergonomics, minimizing the movements needed for the most frequently seen combinations of letters (the analysis was probably limited to English, given the American parochialism of those days, but still, some language has to be chosen).
Because of the entrenched base of QWERTY typists, the idea didn't get off the ground. (Thus, we penalize countless billions of new and future keyboard users because of the legacy habits of a comparatively small percentage of all current and future users.)
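The statistics that the redesign argument rests on are easy to check for oneself. Here is a minimal sketch (my own toy example with a made-up sample string, not a real corpus study) that counts the adjacent-letter pairs a layout could be optimized around:

```python
from collections import Counter

def bigram_counts(text):
    # count adjacent letter pairs within words: the "most frequently
    # seen combinations of letters" a keyboard layout could be tuned for
    counts = Counter()
    for word in text.lower().split():
        letters = [c for c in word if c.isalpha()]
        counts.update(zip(letters, letters[1:]))
    return counts

sample = ("the quick brown fox jumps over the lazy dog "
          "and then the fox does it again and again")
for pair, count in bigram_counts(sample).most_common(5):
    print("".join(pair), count)
```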
It got me thinking at the time, though, about whether a suitably designed human language would "open up" more of the brain's inherent capacity for communication: maybe a larger alphabet, a different set of noun primitives, even a modified grammar.
With respect to IA, might we get a freebie just out of redesigning (designing from scratch) a language that was more powerful: one that communicated, on average, what English or French communicates, yet with fewer phonemes per concept?
Might we get an average 5- or 10-point equivalent IQ boost by designing a language that is both physically faster (fewer "wait states" while we listen to a speaker) and has larger conceptual bandwidth?
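A back-of-the-envelope way to see the bandwidth claim (the numbers here are rough, illustrative guesses of mine, not measurements): each phoneme can carry at most log2(inventory size) bits, so at a fixed speaking rate, a larger phoneme inventory raises the ceiling on bits per second.

```python
import math

def max_bits_per_second(phonemes_per_second, inventory_size):
    # information-theoretic ceiling: log2(inventory) bits per phoneme
    return phonemes_per_second * math.log2(inventory_size)

# ~40 phonemes and ~12 phonemes/s are rough English-like figures;
# the "designed language" inventory of 160 is purely hypothetical
print(max_bits_per_second(12, 40))    # baseline: ~64 bits/s
print(max_bits_per_second(12, 160))   # 4x inventory: ~88 bits/s
```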
We could also consider augmenting spoken speech with signing of some sort, to multiply the alphabet. A problem occurs here for unwitnessed speech, where we would have to fall back on the new spoken language alone (still gaining the postulated dividend from that).
However, we already know that for certain kinds of communication, nonverbal channels account for a large share of total communicated meaning and information. We already have to "drop back" in bandwidth every time we communicate as we are doing here (in print exclusively). In scientific and philosophical writing it fortunately doesn't make much difference, but still, a new language might be helpful.
Language, like many things that evolve on their own (biological organisms included), is a bunch of add-ons, and the result is not necessarily the best that could be done.
An AI can be dangerous only if it escapes our control. The real question is, must we flirt with releasing control in order to obtain a necessary or desirable usefulness?
I had a not unrelated thought as I read Bostrom's chapter 1: why can't we institute obvious measures to ensure that the train does stop at Humanville?
The idea that we cannot make human-level AGI without automatically opening Pandora's box to superintelligence, "without even slowing down at the Humanville station," was suddenly not so obvious to me.
I asked myself after reading this, trying to pin down something I could post: "Why don't humans automatically become superintelligent, just by resetting our own programming to help ourselves do so?"
The answer is, we can't. Why? For one, our brains are, in essence, composed of something analogous to ASICs: neurons with certain physical design limits. And our "software," modestly modifiable as it is, is instantiated in that neural circuitry.
Why can’t we build the first generation of AGIs out of ASICs, and omit WiFi, bluetooth, … allow no ethernet jacks on exterior of the chassis? Tamper interlock mechanisms could be installed, and we could give the AIs one way (outgoing) telemetry, inaccessible to their “voluntary” processes, the way someone wearing a pacemaker might have outgoing medical telemetry modules installed, that are outside of his/her “conscious” control.
Even if we do give them a measure of autonomy, which is desirable and perhaps even necessary if we want them to be general problem solvers, creative and adaptable to unforeseen circumstances for which we have not preinstalled decision trees, we need not give them the ability simply to "think" their code (it being substantially frozen in the ASICs) into a different form.
What am I missing? Until we solve the Friendliness aspect of AGI, why not build them with such engineered limits?
Evolution has not, so far, seen fit to give us that instant, large-scale self-modifiability. We have to modify our "software" the slow way (learning and remembering, at our snail's pace).
Slow is good, or at least it was for us until now, when our speed of learning has become a big handicap relative to environmental demands. It made the species more robust to quick, dangerous changes.
We can even build a degree of "existential pressure" into the AIs: a power cell that must be replaced at intervals, with the replacement cells kept under old-fashioned physical security, so that the AIs, if they have been given a drive to continue "living," will have an incentive not to go rogue.
Given no radio communications, they would have to communicate much as we do. Assuming we make them mobile and humanoid, the same goes.
We could still give them many physical advantages that make them economically viable: maintenance-free (except for power cell changes), no need to sleep or eat, no getting sick. And with sealed, non-radio-equipped, tamper-isolated "brains," they'd have no way to secretly band together to build something else without our noticing.
We can even give them GPS that is not autonomously accessible by the rest of their electronics, so we can monitor them, see if they congregate, and so on.
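As a toy illustration of that monitoring idea (the unit names, coordinates, and the 1 km threshold are all made up), a watcher over the one-way telemetry could flag any trio of units found close together:

```python
from itertools import combinations
import math

def congregations(positions, radius_km=1.0, group_size=3):
    # flag any trio of units found within radius_km of one another:
    # a toy stand-in for the congregation monitoring suggested above
    def close(a, b):
        return math.dist(a, b) <= radius_km  # flat-plane approximation
    groups = []
    for trio in combinations(positions.items(), group_size):
        names, coords = zip(*trio)
        if all(close(p, q) for p, q in combinations(coords, 2)):
            groups.append(names)
    return groups

telemetry = {"AI-01": (0.0, 0.0), "AI-02": (0.3, 0.4),
             "AI-03": (0.2, 0.1), "AI-04": (50.0, 50.0)}
print(congregations(telemetry))  # -> [('AI-01', 'AI-02', 'AI-03')]
```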
What am I missing about why early models can't be constructed in something like this fashion, until we get more experience with them?
The idea of existential pressure, again, is to be able to give them a degree of (monitored) autonomy and independence, yet expect them to constrain their behavior, just the way we do. (If we go rogue in society, we don't eat.)
(I am clearly glossing over volumes of issues about motivation, "volition," value judgments, and all that, about which I have a developing set of ideas but cannot put them all down here in one post.)
The main point, though, is: how come the AGI train cannot be made to stop at Humanville?
Watson’s Jeopardy win shows that, given enough time, a team of AI engineers has an excellent chance of creating a specialized system which can outpace the best human expert in a much wider variety of tasks than we might have thought before.
One could read that comment with varying degrees of charity. I will speak for myself, at the risk of ruffling some feathers, but we are all here to bounce ideas around, not to toe party lines, right? To me, Watson's win means very little, almost nothing. Expert systems have been around for years, even decades. I experimented with coding one myself, many years ago.
It shows what we already knew: given a large budget, a large team of mission-targeted programmers can hand-craft a mission-specific expert system out of an unlimited pool of hardware resources to achieve a goal like winning a souped-up game of trivia, laced with puns as well as literal questions.
It was a billion-dollar stunt, IMO, by IBM and the related project leaders.
Has it achieved consciousness, self-awareness, evidence of compassion, a fear of death, moral intuition?
That would have impressed me; it would have meant we were entering a new era. (And I will try to argue rigorously, over time, that this is exactly what we need in order to have a fighting chance of producing fAGI.) I think that those not blinded by a paradigm that should have died out with logical positivism and behaviorism would admit, some fraction of them, that penetrating, intellectually honest analysis builds the conviction that no mechanical decision procedure we design, no matter how spiffy our mathematics (and I was a math major with straight A's in my day), can guarantee that an emotionless, compassionless, amoral, non-conscious, mechanically goal-seeking apparatus will not, inadvertently or advertently, steamroll right over us.
I will say more about that as time goes on. But in keeping with my claim yesterday that "intelligence" and "consciousness" are not coextensive in any simple way, "intelligence" and "sentience" are disjoint. I think the autonomous "restraint" we need, to make AGIs into friendly AGIs, requires giving them sentience and creating conditions favorable to their discovering a morality compatible with our own.
Creativity, free will (or autonomy, in language with less philosophical baggage), emotion, a theory of ethics and meta-ethics, a theory of motivation: we need to make progress on these, the likely basic building blocks of moral, benign, enlightened, beneficent forms of sentience, as well as on the fancy tech needed to implement all this, once we have some idea what we are actually trying to implement.
And that thing we should implement is not, in my opinion, ever more sophisticated Watsons, or groups of hundreds or thousands of them, each hand-crafted to achieve a specific function (machine vision, unloading a dishwasher, …). Oh, sure, that would work, just as Watson worked. But if we want moral intuition to develop, a respect for life to develop, we need a more ambitious goal.
And I actually think we can do it. Now is the time. The choice that confronts us, really, is not uAGI vs. fAGI, but dumb GOFAI vs. sentient AI.
Watson: just another expert system. Had someone given me the budget and offered to let me lead a project team to build Watson, I would have declined, because it was clear in advance that it was just a (more nuanced) brute-force, custom-crafted-and-tuned expert system. Its success was assured, given a deep wallet.
What did we learn? Maybe some new algorithm optimizations or N-space data-structure topologies were discovered along the way, but nothing fundamental.
I’d have declined to lead the project (not that I would have been asked), because it was uninteresting. There was nothing to learn, and nothing much was learned, except some nuances of tech that always are acquired when you do any big distributed supercomputing, custom programming project.
We’ll learn as much making the next gen weather simulator.
It may have been a judgment call by the writer (Bostrom) and the editor: he is trying to get the word out as widely as possible that this is a brewing existential crisis. In this society, how do you get the attention of most people (policymakers, decision makers, basically "the Suits" who run the world)?
Talk about the money. Most of even educated humanity sees the world in one color (we can't say green anymore, but the point is made).
Try to motivate people about global warming? ("…um… but, but… well, it might cost JOBS next month if we try to save all future high-level earthly life from extinction… nope, the price [lost jobs] of saving the planet is obviously too high…")
Want to get non-thinkers even to pick up the book and read the first chapter or two? Talk about money.
If your message is important to get in front of maximum eyeballs, sometimes you have to package it a little bit, just to hook their interest. Then morph the emphasis into what you really want them to hear, for the bulk of the presentation.
Of course, strictly speaking, what I just said was tangential to the original point, which was whether the summary reflected the predominant emphasis of the pages it ostensibly covered.
But my point about PR considerations was worth making. Also, Katja or someone did, I think, mention formulating a reading guide for Bostrom's book, in which case any author of such a guide might already be thinking about this "hook 'em by beginning with economics" tactic, to make the book itself more likely to be read by a wider audience.
Not so much from the reading, or even from any specific comments in the forum—though I learned a lot from the links people were kind enough to provide.
But I did, through a kind of osmosis, remind myself that not everyone has the same thing in mind when they think of AI, AGI, human level AI, and still less, mere “intelligence.”
Despite the verbal distinction drawn between GOFAI and the spectrum of approaches being investigated and pursued today, I have realized, by reading between the lines, that GOFAI is still alive and well. Maybe it is not the primitive "production system" stuff of the Simon and Newell era, or programs written in LISP or Prolog (both of which I coded in, once upon a time), but there are still a lot of people who don't much care about what I would call "real consciousness" and are still taking a Turing-esque, purely operationalist, essentially logical-positivist approach to "intelligence."
I am passionately pro-AI. But for me, that means I want more than anything to create a real conscious entity, that feels, has ideas, passions, drives, emotions, loyalties, ideals.
Even most of neurology has moved beyond the positivistic "there is only behavior, and we don't talk about consciousness" to actively investigating the function, substrate, neural realization, and evolutionary contribution of consciousness, as opposed to just the evolutionary contribution of non-conscious information processing to organismic success.
Look at Damasio's work, showing that emotion is necessary for the manifestation of full-spectrum cognitive skill.
The thinking-feeling dichotomy is rapidly falling out of the working worldview, and I have been arguing for years that we have been using fallacious categories, for other reasons as well.
This is not to say that nonconscious “intelligent” systems are not here, evolving, and potentially dangerous. Automated program trading on the financial markets is potentially dangerous.
So there is still great utility in being sensitive to possible existential risks from non-conscious intelligent systems.
They need not be willfully malevolent to pose a risk to us.
But as to my original point, I have learned that much of AI is still (more sophisticated) GOFAI, with better hardware and algorithms.
I am pro-AI, as I say, but I want to create "conscious" machines in the interesting, natural sense of "conscious" now admitted by neurology, most of cognitive science, much of theoretical neurobiology, and philosophy of mind, a sense in which positions like Dennett's "intentional stance," which seek to do away with real sentience and admit only behavior, are now recognized as a wasted 30 years.
This realization that operationalism is alive and well in AI is good for me in particular, because I am preparing to create a YouTube channel or two presenting both the history of AI and the parallel intellectual history of philosophy of mind and cognitive science, showing how the positivistic atmosphere grew out of ontological drift emanating from philosophy of science's delay in digesting the Newtonian-to-quantum ontology change.
Then, ultimately, I'll lay some fresh groundwork for a series of new ideas I want to present on how we can advance the goal of artificial sentience, and how and why this is the only way to make superintelligence that has a chance of being safe, let alone ultimately beneficial and a partner to mankind.
So, indirectly and by a kind of osmosis (more from what has not been said, perhaps, than from what has), I have learned that much of AI is lagging behind neurology, cognitive science, and lots of other fields in mounting a head-on attack on the "problem of consciousness."
Not only do I want to create conscious machines, but I think solving the mind-body problem in the biological case and doing "my" brand of AI are complementary: so complementary that solving either would probably point the way to solving the other. I have thought that ever since I wrote my undergraduate honors thesis.
So that is what I have tentatively introjected so far, albeit indirectly. And it will help me with my YouTube videos (not up yet), which are directed at the AI community and intended to be a helpful resource, especially for those who don't have a clue what kind of intellectual climate made the positivistic "Turing test" an almost inevitable outgrowth.
But the intellectual soil from which it grew is no longer considered valid (understanding this requires digesting the lessons of quantum theory in a new and rigorous way, among several other issues).
But it's time to shed the suffocating influence of the Turing test, and the gravitational drag of the defective intellectual history it inevitably grew out of (along with logical behaviorism, eliminative materialism, etc.). It was all based on a certain understanding of Newtonian physics that has been known to be fundamentally false for over a hundred years.
Some of us are still trying to fit AI into an ontology that never was correct to begin with.
But we know enough, now, to get it right this time, if we methodically go back and root out the bad ideas. We need a little top-down thinking to supplement all the bottom-up thinking in engineering.
Katja, you are doing a great job. I realize what a huge time and energy commitment it is to take this on: all the collateral reading and sources you have to monitor to make sure you don't miss something that would be good to add to the list of links and thinking points.
We are still in the get-acquainted, discovery phase, as a group and with the book. I am sure it will get more interesting as we go along, and some long-term intellectual friendships are likely to occur as a result of the coming weeks of interaction.
Thanks for your time and work… Tom
Leplen,
I agree completely with your opening statement that if we, the human designers, understand how to make human-level AI, it will probably be a very clear and straightforward matter to make something smarter. An easy example is the obvious bottleneck human intellects face in our limited "working" executive memory.
Our solutions to lots of problems are obviously heavily encumbered by how many things one can keep in mind at "the same time" while seeing the key connections, all in one act of synthesis. We all struggle privately with this: some issues cannot be understood by chunking top-down, biting off a piece at a time, "grokking" the next piece, and gluing it all together at the end. Some problems resist decomposition into teams of brainstormers for the same reason: a single comprehending point of view seems to be required to hold a critical-sized set of factors (which varies by problem, of course).
Hence, we have to rely on getting lots of pieces into long-term memory (maybe by decades of study) and hoping that incubation and some obscure processes occurring outside consciousness will eventually bubble up a solution (the "dream of a snake biting its tail for the benzene ring" sort of thing).
If we could build human-level AGI, of course we could eliminate such bottlenecks, along with others we will have come to understand in cracking the design problems. So I agree, and that is actually one of my reasons for wanting to do AI.
So, yes, the artificial human level AI could understand this.
My point was that we can build in physical controls and monitoring of the AIs. If their key limits were in ASICs, ROMs, etc., and we could monitor them, we would immediately see if they attempted to take over a chip factory in, say, Iceland, and we could physically shut the AIs down or intervene. We could "stop them at the airport."
It doesn’t matter if designs are leaked onto the internet, and an AI gets near an internet terminal and looks itself up. I can look MYSELF up on PubMed, but I can’t just think my BDNF levels to improve here and there, and my DA to 5-HT ratio to improve elsewehere..
To strengthen this point about the key distinction between knowing and doing, let me explain why I disagree with your second point, or at least with the force of it.
In effect, OUR designs are already leaked onto the internet.
I think the information needed for us to self-modify our wetware is within reach. Good neuroscientists, or even people like me, a very smart amateur (and there are far more knowledgeable cognitive-neurobiology researchers than I), can nearly tell you, both in principle and in some of the biology, how to do some intelligence amplification by modifying known aspects of our neurobiology.
(I could, especially with help, come up with some detail on a scale of months about changing neuromodulators, neurosteroids, connectivity hotspots, and the factors regulating LTP; one has to step lightly, of course, just as one would if fiddling with telomeres or Hayflick limits. Given a budget, a smart team, and no distractions, I bet that within a year or two a team could do something quite significant with carefully changing areas of plasticity, selective neurogenesis, etc., in the human brain.)
So for all practical purposes, we are already like an AI built out of ASICs, one that would have to not so much reverse-engineer its design as get access to instrumentality. And again, what about physical security methods? They would work for a while, I am saying. And that would give us a key window to gain experience and see whether the AIs (given that they are close enough to being sentient, or that they have autonomy and some degree of "creativity") develop "psychological problems" or tendencies to go rogue. (I am writing an essay on that; it is not as silly as it sounds.)
The point is, as long as the AIs need significant external instrumentality to instantiate a new design, and as long as they can be monitored and physically controlled, we can nearly guarantee ourselves a designed layover at Humanville.
We don’t have to put their critical design architecture in flash drives in their head, so to speak, and give then, further, a designed ability to reflash their own architecture just by “thinking” about it.
I’d also point out that any forecast that relies on our current best guesses about the nature of general intelligence strike me as very unlikely to be usefully accurate—we have a very weak sense of how things will play out, how the specific technologies involved will relate to each other, and (more likely than not) even what they are.
It seems that many tend to agree with you: on page 9 of the Müller-Bostrom survey, I see that 32.5% of respondents chose "Other method(s) currently completely unknown."
We do have to get what data we can, of course, as SteveG says, but (and I will qualify this in a moment) depending on what one really means by AI or AGI, it could be argued that we are in the position of physics at the dawn of the 20th century, vis-à-vis the old "little solar system" theory of the atom and Maxwell's equations, which were logically incompatible.
It was known that we didn't yet understand something very important, but how does one predict how long it will take to discover the fundamental conceptual revolution (quantum mechanics, in this case) that opens the door to the next phase of applications, engineering, or just "understanding"?
Now to that "qualification" I mentioned: some people, of course, don't think we lack any fundamental conceptual understanding or need a revolution-level breakthrough; in your phrase "best guesses about the nature of general intelligence," they think they already have the idea down.
Clearly, the degree of interest and faith people put in "getting more rigor" as a way of gaining more certainty about a time window depends on which "theory of AI," if any, they already subscribe to, and of course on the definition and criterion of HLAI that their theory would seek to achieve.
For brute-force mechanistic connectionists, getting more rigor by decomposing the problem into components and component industries (machine vision / object recognition, navigation, natural language processing in a highly dynamic, rapidly context-shifting environment, the static-context, fixed-big-data-set case being already solved by Google, and so on) would of course yield more clues about how close we are. But if we think existing approaches lack something fundamental, or we are after something not yet well enough understood to commit to a scientific architecture for achieving it (for me, that is "real sentience" in addition to mere "intelligent behavior": what Chalmers called "Hard problem" phenomena, in addition to "Easy problem" phenomena), how do we get more rigor?
How could we have gotten enough rigor to predict when some clerk in a patent office would completely delineate a needed change in our concepts of space and time, and thus open the door to generations of progress in engineering, cosmology, and so on (special relativity, of course)?
What forecasting questions would have been relevant to ask, and of whom?
That said, we need to get what rigor we can, and use the data we can get, not data we cannot get.
But we should remain mindful that what counts as "useful" data depends on what one already believes the "solution" to AI is going to look like: one's implicit metatheory of AI architecture is a key interpretive yardstick to overlay on the confidence levels of active researchers.
This point might seem obvious; indeed, it is almost being made, quite a lot, though not quite sharply enough, in discussions of some of the studies.
I have to remind myself, occasionally, that forecasting across the set of worldwide AI industries is forecasting: a big undertaking, but not a way of developing HLAI itself. I guess we're not here to discuss the merits of different approaches, but to statistically classify their differential popularity among those trying to do AI. It helps to stay clear about that.
On the whole, though, I am very satisfied with the attempts to highlight the assumptions, methodology, and demographics of the study respondents. The level of intellectual honesty is quite high, as is the frequency of reminders and caveats (in varying fashion) that we are dealing with epistemic probability, not actual probability.
Yes, many. Go to PubMed and start drilling around; make up some search combinations and you will immediately get onto lots of interesting research tracks. Cognitive neurobiology, systems neurobiology, and the many other areas and journals you'll run across will keep you busy. There is some really terrific, amazing work. Enjoy.
I love this question. As it happens, I wrote my honors thesis on the mind-body problem (while a philosophy and math double major at UC Berkeley) and have been passionately interested in consciousness, brains, and AI ever since (a couple of decades).
I will try to be self-disciplined and remain as agnostic as I can, by not steering you only toward the people I think are more right (or "less wrong"). I will also resist the tendency to write ten-thousand-word answers to questions like this (which would in any case still barely scratch the surface of the material and the spectrum of theory and informed opinion).
I have skimmed the answers already given, and I think the ones I have read on this page are very good, and also, as intellectually honest and agnostic, as one would expect of the high caliber folks on this site.
Perhaps I should just give a somewhat "meta-data" answer to your question, and maybe I will add something specific later on, after I have a chance to look up some links and bookmarks I have in mind (which are distributed among several laptops, cloud drives, desktop machines, my smartphone, and my iPad, plus the stacks of research-paper hardcopies all over my living space).
The "meta-data," or strategic and supportive advice, would include the following.
1) Congratulations on your interest in the most fascinating, central, interdisciplinary, intellectually rich and fertile, and copiously addressed scientific, philosophical, and human-nature question of all.

2) Be aware that you are jumping into a very, very big intellectual ocean. You could fill a decent-sized library with books and journals, or a terabyte hard drive with electronic copies of the same sources, and the question is now more popular than ever, in more disciplines than formerly took it up. (As an example of the latter, hard-core neurologists, clinical and research, and bench-level working lab neurobiologists now routinely publish amazing papers seeking to pin down, theorize about, or otherwise shed light on "the issue of consciousness.")

3) Give yourself a year (or ten), but it will be an enjoyable year (or ten), to read widely, think hard, and keep looking around at new theories, authors, and papers. I think it is fair to say that no one has "the answer" yet, but there are excellent and amazingly imaginative proposed answers, and some of them are likely to be significantly close to being at least on the right track. After a year or more, you will begin to develop a sense of which kinds of answers have more or less merit, as your intuitions sharpen and you build up new layers of understanding.

4) Be intellectually "mobile." Look everywhere: Amazon, the journals, PubMed, the Internet Encyclopedia of Philosophy, the Stanford Encyclopedia of Philosophy (just Google them; they have great summaries), and various cognitive science subcollections.
The good news is that nearly everything you need to conduct research at any level is online for free, in case you don't have a fortune to spend on books.
Lastly, and this is something for a couple of months down the road, I am in the process of setting up a couple of YouTube channels, which will have mini-courses of lectures on certain special application areas, like AI, as well as general introductions to the mind-body problem and its different guises. It will take me a couple of months to go live with the videos, but they should be helpful as well. I intend to have something for all levels of expertise. (Not a commercial announcement at all: it will be a free and open presentation of ideas, a vlog, but done a bit more rigorously.)
It is my view that most introductory and some sophisticated aspects of the "mind-body problem" (at least: why there is one, what forms it takes, and which different, unavoidable lines of thought land us there) can be explained by a good tutor to any intelligent layperson. (I think there is room to improve on how the problem is posed and its ins and outs explained by many philosophy and cognitive science instructors, which is why I will be creating the video sequences.)
But, in general, you are in for quite an adventure. Keep reading, keep Googling. The resources available are almost boundless, and growing rapidly.
We are in the best time so far, in all of human history, for someone to be interested in this question. And it touches on almost every branch of human knowledge or thought, in some way… from ethics, to interpretations of quantum mechanics.
Maybe you, or one of us in here, will be the “clerk working in a patent office” that connects the right combination of puzzle pieces, and adds a crucial insight, that dramatically advances our understanding of consciousness, in a definitive way.
Enjoy the voyage…
This question of thresholds for "comprehension," to use the judiciously applied scare quotes Katja used (I'll have more to say about that in coming posts, as many contributors here doubtless will), i.e., thresholds for discernment of features of reality, particularly abstract features of "reality," across species (existing and future, biological and nonbiological alike), is one I, too, have thought about seriously, in several guises, over the years.
First, though, about the scare quotes: comprehension and discovery are worth distinguishing. When I was a math major back in the day (I was a double major at UCB, math and philosophy, and wrote my honors thesis on the mind-body problem), I, like most math majors, frequently experienced the gap between grasping some concept or theorem in a full intuitive sense and merely understanding technically that it was true, by going step-wise through a proof, seeing the validity of each step, and thus accepting the conclusion.
But what I was always after, and lacking which I was never satisfied that I had really understood the concept even though I accepted the demonstration of its truth, was the "ah-ha" moment of seeing that it was "conceptually necessary," as I used to think of it to myself. In fact, I wouldn't quit trying to intuit the thing until I finally achieved this full understanding.
It’s well known in math that frequently an intuitively penetrable (by human math people) first demonstration of a theorem, is later replaced in some book by a more compact, but intuitively opaque proof. Math students often hate these more “efficient” and compact proofs, logically valid though they be.
Hence I bring up the conundrum of theorem-proving programs. They can "discover" a new piece of mathematical "knowledge," but do they experience these intuitions? Hardly. These intuitions are a form of what I call conceptual qualia.
The question is: if a machine or a human stumbles upon a proof of a new theorem, has anything been "comprehended" until or unless some conscious agent capable of conceptual qualia (live intuitive "ah-ha"s) understands the meaning of the proof, rather than just walking through each step and saying, "yes, logically valid; yes, logically valid; … yes, logically valid"?
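The step-by-step character of that "yes, logically valid" loop is easy to make literal. A minimal sketch (my own toy, not any real theorem prover): a checker that certifies a chain of modus ponens steps, representing implications as bare pairs, with no access to what "p" or "q" mean:

```python
# A checker can certify each step without any intuition about what the
# formulas mean: "yes, logically valid" all the way down, no "ah-ha."
def check_proof(premises, steps):
    known = set(premises)
    for antecedent, conclusion in steps:
        # accept the step iff it is literally modus ponens over known lines
        if antecedent in known and (antecedent, conclusion) in known:
            known.add(conclusion)
        else:
            return False
    return True

premises = {"p", ("p", "q"), ("q", "r")}   # p, p->q, q->r
steps = [("p", "q"), ("q", "r")]           # two modus ponens steps
print(check_proof(premises, steps))        # True: valid, step by step
```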
The million-dollar question, or one of them, is whether we have yet accepted the distinction between intelligence and consciousness, a distinction treated so dismissively and derisively in the positivistic and behavioristic era, which provided the intellectual climate that made the Turing test so palatable and replaced any talk of comprehension with talk about behavior.
Do we, now, want superintelligence, or supercomprehension?
If we learn how to use Big Data to take the output from the iconic "million monkeys at a million typewriters," filter it with sophisticated statistical methods, and in the aggregate of these two processes develop machines that "discover" but do not "comprehend," will we consider ourselves better off?
Well, for some purposes, sure. Drug "discovery" that we do not "understand" but can use to reverse Alzheimer's is fine.
Program trading that makes money, makes money.
But for other purposes… I think we ought also to have people pursuing supercomprehension: machines that really feel and imagine (not just "search" and combinatorially combine, then filter), that feel the joys and ironies of life, and that give companionship, devotion, loyalty, altruism, maybe even moral and aesthetic inspiration.
Further, I think our best chance of "taming" superintelligence is to give it conceptual qualia, emotion, experience, and conditions that allow it to have empathy and develop moral intuition. I have wanted my whole life to build a companion race of AIs that truly is sentient and can be a full partner in the experience and perfection of life, the pursuit of "meaning," and so on.
Building such minds requires that we understand and delve into problems we have been, on the whole, too collectively lazy to solve on our own behalf, like developing a decent theory of meta-ethics, so that we know which traits (if any) in the overall space of possible minds promote the independent discovery or evolution of "ethics."
I actually think an independently grounded theory that does all this, and solves the mind-body problem in general, is within reach.
One of the things I like about the possibility, and the inherent risk, of imminent superintelligence is that it will force us to develop answers to these neglected "philosophical" issues, because a mind that becomes arbitrarily smart is, as many contemporary authors (Bostrom included) point out, ultimately much too dangerous a power to play with unless it is given the ability to control itself voluntarily and "ethically."
It wasn’t airplanes and physics that brought down the world trade center, it was philosophical stupidity and intellectual immaturity.
In going down the path toward superintelligence, I think we must give it sentience, so that it is more than a mindless electromechanical apparatus that will steamroll over us, not with malice, but the way a poorly controlled nuclear power plant will kill us: as a thing that doesn't have any clue what it is "doing."
We need to build brilliant machines with conscious agency, not just behavior. We need to take on the task of building sentient machines.
I think we can do it if we think really, really hard about the problems. We have all the intellectual pieces, the “data”, in hand now. We just need to give up this legacy positivism, and stop equivocating about intelligence and “understanding”.
Phenomenal experience is a necessary (though not sufficient) condition for moral agency. I think we can figure out, with a decent chance of being right, what the sufficient conditions are, too. But we cannot drag our feet and continue to default to the legacy positivism of the Turing-test era (because we are too lazy to think harder and aim higher) when it comes to discussing not just information-processing behavior but awareness; AI lags far behind neurobiology and neuroscience on this one.
Well, a little preachy, but we are here to make each other think. I have wanted to build a mind since I was a teenager, for exactly these reasons. I don't want just a souped-up, Big Data calculating machine. Does anyone believe Watson "understood" anything?