I tend to believe that most fictional characters are living in malicious computer simulations, to satisfy my own pathological desire for consistency. I now believe that Harry is living in an extremely expensive computer simulation.
Another extremely serious problem is that there is next to no particularly effective effort in philosophical academia to disregard confused questions, and to move away from naive linguistic realism. The number of philosophical questions of the form ‘is x y’ that can be resolved by ‘depends on your definition of x and y’ is deeply depressing. There does not seem to be a strong understanding of how important it is to remember that not all words correspond to natural, or even (in some cases) meaningful categories.
Any discussion of what art is. Any discussion of whether or not the universe is real. Any conversation about whether machines can truly be intelligent. More specifically, the ship of Theseus thought experiment and the related sorites paradox are entirely definitional, as is Edmund Gettier’s problem of knowledge. The (appallingly bad, by the way) swamp man argument by Donald Davidson hinges entirely on the belief that words actually refer to things. Shades of this pop up in Searle’s Chinese room and other bad thought experiments.
I could go on, but that would require me to actually go out and start reading philosophy papers, and goodness knows I hate that.
Well, I’m sorry. Please fill out a conversational complaint form and put it in the box, and an HR representative will mail you a more detailed survey in six to eight weeks.
I agree entirely that meaningful questions exist, and made no claim to the contrary. I do not believe, however, that as an institution, modern philosophy is particularly good at identifying those questions.
In response to your questions,
Yes, absolutely.
Yes, mostly. There are different kinds of existence, but the answer you get out will depend entirely on your definitions.
Yes, mostly. There are different kinds of possible artificial intelligence, but the question of whether machines can -truly- be intelligent depends exclusively upon your definition of intelligence.
As a general rule, if you can’t imagine any piece of experimental evidence settling a question, it’s probably a definitional one.
Sure, there are absolutely philosophers who aren’t talking about absolute nonsense. But as an industry, philosophy has a miserably bad signal-to-noise ratio.
I agree that the answers to these questions depend on definitions, but then, so does the answer to the question, “how long is this stick?”
There’s a key distinction that I feel you may be glossing over here. In the case of the stick question, there is an extremely high probability that you and the person you’re talking to, though you may not be using exactly the same definitions, are using definitions that are closely enough entangled with observable features of the world to be broadly isomorphic.
In other words, there is a good chance that, without either of you adjusting your definitions, you and the neurotypical human you’re talking to will be able to come up with some answer that both of you find satisfying, and that will allow you to meaningfully predict future experiences.
With the three examples I raised, this isn’t the case. There are a host of different definitions, which are not closely entangled with simple, observable features of the world. As such, even if you and the person you’re talking to have similar life experiences, there is no guarantee that you will come to the same conclusions, because your definitions are likely to be personal, and the outcome of the question depends heavily upon those definitions.
Furthermore, in the three cases I mentioned, unlike the stick, for many possible (and common!) positions it’s not at all clear what evidence could persuade you to change your mind. This is a telltale sign of a confused question.
Err… science deals with questions you can settle with evidence? I’m not sure what you’re getting at here.
I would strongly disagree.
My interpretation of these experiments is that they make a lot of sense if you consider morality from a system-1 and system-2 perspective. If we actually sit down and think about it, humans tend to have somewhat convergent answers to moral dilemmas, which tend toward the utilitarian (in this case, don’t shock the man). That’s a system-2 response.
However, in the heat of the moment, faced with a novel situation, we resort to fast, cheap system-1 heuristics for our moral intuitions. Some of those heuristics are ‘what is everyone else doing?’, ‘what is the authority figure telling us to do?’, and ‘what have I done in similar situations in the past?’ Normally, these work pretty well. However, in certain corner cases, they produce behavior that system-2 would never condone: lynch mobs, authoritarian cruelty, and the unfortunate results of Milgram’s experiments.
People didn’t decide, rationally, that it was morally right to torture a man to death for the sake of an experiment they knew nothing about and were paid a few dollars to participate in, and this paper is silly to suggest otherwise. They did it because they were under stress, and the strongest influence in their head was the ancestral heuristic of ‘keep your head down, do what you’re told, they must know what they’re doing.’
There are a number of other possible explanations for that detail. For example:
“The experiment requires that you continue” invokes the larger apparatus of Science. It gives the impression that something much larger than you is afoot, and that ALL of it is expecting you to shut up and do what you’re told.
“You have no other choice, you must go on”—that rankles. Of course there’s a choice. We pattern match it to a moral choice, and system 2 comes in and makes the right call.
The best lesson you can learn from these experiments, as depressing as they are, is that when you feel rushed, and there’s life and death at stake, and you don’t feel you have time to breathe, the best possible thing you can do is to stop, sit down on the floor, clear your head, and take a moment to really try to think about what you’re doing.
AI “Boxing” and Utility Functions
It’s actually worse than that. Humans do not scale well to more computing power. A good AI could expand the depth of its search trees, in principle, logarithmically with compute power (possibly a bit better with Monte Carlo approaches). If you throw ten times more processing power at an AI, it could, at the bare minimum, meaningfully extend the depth or detail of its planning. The same is not true of human neurology. All an em can do with more processing power is run faster, which has limited value. A human can do things a chimp just can’t, even if the chimp has a really long time to think about it. The human brain was not designed to scale with processing power, to run on a linear computer, or to be modular and improvable. De novo AI is just (probably) going to run circles around us.
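To make that scaling claim concrete, here’s a minimal sketch (the branching factor of 10 and the node budgets are my own illustrative assumptions, nothing more):

```python
import math

# Minimal sketch, under assumed numbers: exhaustive search over a tree with
# branching factor b visits about b**d nodes to reach depth d, so the depth
# reachable within a node budget N is d ~= log_b(N). Compute buys planning
# depth only logarithmically -- but a planner can spend it directly.

def reachable_depth(node_budget: float, branching_factor: int) -> float:
    """Search depth reachable by brute-force lookahead within a node budget."""
    return math.log(node_budget, branching_factor)

base_budget = 1e9  # assumed baseline node budget
for multiplier in (1, 10, 100):
    depth = reachable_depth(base_budget * multiplier, branching_factor=10)
    print(f"{multiplier:>4}x compute -> depth ~{depth:.1f}")
# 1x -> 9.0, 10x -> 10.0, 100x -> 11.0: each 10x of compute buys one extra
# ply at b=10, which goes straight into deeper planning -- while an em
# running the same brain 10x faster gains no extra depth at all.
```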
Growing new neurons at extremely accelerated rates IS a process known to happen in adults: we normally call it brain cancer.
That’s obviously a little spurious, but it is a good indication that making the brain more intelligent is not trivial. I don’t doubt that it is possible to bootstrap an em up to higher intelligence, but figuring out how to do that while preserving personal identity and not causing insanity, seizures, neurogenesis-related noise, or other undesirable effects is probably going to take a long time. I think Eliezer was on the right track in describing em bootstrapping as ‘a desperate race between how smart you are and how crazy you are.’ The human brain evolved to work under a fairly narrow design spec. When you change any part of it in a dramatic fashion, all the normal regulatory mechanisms are no longer guaranteed, or even likely, to work.
De novo AI, by virtue of an (almost certainly) simpler underlying algorithm, has none of these issues. Expanding to use new computational resources is likely to be a matter of tweaking parameters in a mathematical function that could fit on a T-shirt if you printed small enough. They’ll always have a huge advantage, in that they were designed for this, and we definitely were not.
No, it is trivial, we do it all the time as I already said: it’s called ‘learning’. With much learning, brain regions change size; what do you think is going on there?
Oh, definitely, the brain is capable of neurogenesis (to degrees that are a function of age) -- but you’ll notice that learning new things does not cause the brain to increase in intelligence dramatically. There are a number of core brain regions that seem pretty thoroughly hardwired. And, again, if you want to tweak things outside of normal ranges, you’re definitely voiding the warranty. The whole thing might, and likely will, break for no obvious reason unless you do it just exactly right. That takes a lot of time, and is not guaranteed to be efficient.
If you want to bootstrap as fast as possible, sure.
If we’re in an intellectual arms race against de novo uFAI, I’d say yes, we do. And we’re probably going to lose.
Okay, sure, but here’s the hitch:
Even if you gave me a whole bunch of nanobots that could rewire my brain any way I wanted, I would have no clue how to do that. I’m not sure the modern establishment of neurology has any good idea of how you’d do that. I know for sure that nobody on Earth knows how to do it in a safe way that is guaranteed not to cause psychosis, seizures, or other glitches down the line. It’s going to take serious, in-depth, and expensive research to figure out how to make these changes in a sane way.
I would only trust this strategy with hyper-neuromorphic artificial intelligence. And that’s unlikely to FOOM uncontrollably anyway. In general, the applicability of such a strategy depends on the structure of the AI, but the line at which it might be applicable is a tiny hyperbubble in mind space centered around humans. Anything more alien than that, and it’s a profoundly naive idea.
Human intelligence (probably) evolved in a social arms race. We tend to favor explanations that involve agents (with wills, intentions, etc), which can be appealed to, manipulated, or placated. When something happens through (effectively) mindless processes, we look for a will to attribute it to. It’s natural, and comforting. It just probably isn’t true.
Just another cognitive bias. Move along. Nothing to see here.
Christ I need to get my grandparents signed up for cryo.
Well, consider this: it takes only a very small functional change to the human brain to make ‘raising it as a human child’ a questionable strategy at best. Crippling a few features of the brain produces sociopaths who, notably, cannot be reliably inculcated with our values, despite sharing 99.99etc% of our own neurological architecture.
Mind space is a tricky thing to pin down in a useful way, so let’s just say the bubble is really tiny. If the changes you’re making are larger than the changes between a sociopath and a neurotypical human, then you shouldn’t employ this strategy. Trying to use it on any kind of de novo AI without anything analogous to our neurons is foolhardy beyond belief. So much of our behavior is predicated on things that aren’t and can’t be learned, and trying to program all of those qualities and intuitions by hand so that the AI can be properly taught our value scheme looks broadly isomorphic to the FAI problem.
An AGI that is not either deeply neuromorphic or possessing a well-defined and formally stable utility function sounds like… frankly, one of the worst ideas I’ve ever heard. I’m having difficulty imagining a way you could demonstrate the safety of such a system, or trust it enough at any point to give it enough resources to learn. Considering that the fate of intelligent life in our future light cone may hang in the balance, standards of safety must obviously be very high! Intuition is, I’m sorry, simply not an acceptable criterion on which to wager at least billions, and perhaps trillions, of lives. The expected utility math does not wash if you actually expect OpenCog to work.
On a more technical level, human values are broadly defined as some function over a typical human brain. There may be some (or many) optimizations possible, but not such that we can rely on them. So, for a really good model of human values, we should not expect to need less than the entropy of a human brain. In other words, nobody, whether they’re Eliezer Yudkowsky with his formalist approach or you, is getting away with less than about ten petabytes of good training samples. Those working on uploads can skip this step entirely, but neuromorphic AI is likely to be fundamentally less useful.
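For what it’s worth, here’s one hedged back-of-envelope consistent with that ten-petabyte figure (the synapse count and bytes-per-synapse are rough assumptions on my part; published estimates vary by an order of magnitude or more):

```python
# Rough assumed figures, not measurements: ~1e15 synapses (estimates range
# from ~1e14 to ~1e15) and ~10 bytes per synapse for a weight plus sparse
# connectivity. Under those assumptions, a brain-scale model of human values
# lands at the ~10 petabyte scale cited above.
synapses = 1e15
bytes_per_synapse = 10
total_bytes = synapses * bytes_per_synapse  # 1e16 bytes
print(f"~{total_bytes / 1e15:.0f} PB")  # -> ~10 PB
```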
And this assumes that every bit of evidence can be mapped directly to a bit in a typical human brain map. In reality, for a non-FOOMed AI, the mapping is likely to be many orders of magnitude less efficient. I suspect, but cannot demonstrate right now, that a formalist approach starting with a clean framework along the lines of AIXI is going to be more efficient. Quite aside from that, even assuming you can acquire enough data to train your machine reliably, you still need it to do… something. Human values include a lot of unpleasant qualities. Simply giving it human values and then allowing it to grow to superhuman intellect is grossly unsafe. Ted Bundy had human values. If your plan is to train it on examples of only nice people, then you’ve got a really serious practical problem of how to track down >10 petabytes of really good data on the lives of saints. A formalist approach like CEV, for all the things that bug me about it, simply does not have that issue, because its utility function is defined as a function of the observed values of real humans.
In other words, for a system as alien as the architecture of OpenCog, even if we assume that the software is powerful and general enough to work (which I’m in no way convinced of), attempting to inculcate it with human values is extremely difficult, dangerous, and just plain unethical.
This isn’t surprising. It’s been pretty clear for a while that initial synaptic graphs are random / arbitrary, and then are later pruned and strengthened / weakened by learning.
This reasoning has always struck me as deeply and profoundly silly.
The AI might also be in a computer simulation where the dark lord of the matrix might destroy us for not devoting all of our resources to building cheesecakes. In fact, so could we. I don’t see it influencing our behavior any, nor should it. You’re privileging the hypothesis.
As for the second part, you might also encounter an alien intelligence that you can’t protect yourself from, because you exhausted so many resources leaving humanity alive, slowing down your bootstrapping. That’s the thing about aliens.