I should note that, as an outsider, the main point I recall Eliezer making in that vein is that he used Michael Vassar as a model for the character called Professor Quirrell. I didn’t see that as an unqualified endorsement, though I think your general message should be signal-boosted.
Hubert Dreyfus, probably the most famous historical AI critic, published “Alchemy and Artificial Intelligence” in 1965, which argued that the techniques popular at the time were insufficient for AGI.
That is not at all what the summary says. Here is roughly the same text from the abstract:
Early successes in programming digital computers to exhibit simple forms of intelligent behavior, coupled with the belief that intelligent activities differ only in their degree of complexity, have led to the conviction that the information processing underlying any cognitive performance can be formulated in a program and thus simulated on a digital computer. Attempts to simulate cognitive processes on computers have, however, run into greater difficulties than anticipated. An examination of these difficulties reveals that the attempt to analyze intelligent behavior in digital computer language systematically excludes three fundamental human forms of information processing (fringe consciousness, essence/accident discrimination, and ambiguity tolerance). Moreover, there are four distinct types of intelligent activity, only two of which do not presuppose these human forms of information processing and can therefore be programmed. Significant developments in artificial intelligence in the remaining two areas must await computers of an entirely different sort, of which the only existing prototype is the little-understood human brain.
In case you thought he just meant greater speed, he says the opposite on PDF page 71. Here is roughly the same text again from a work I can actually copy and paste:
It no longer seems obvious that one can introduce search heuristics which enable the speed and accuracy of computers to bludgeon through in those areas where human beings use more elegant techniques. Lacking any a priori basis for confidence, we can only turn to the empirical results obtained thus far. That brute force can succeed to some extent is demonstrated by the early work in the field. The present difficulties in game playing, language translation, problem solving, and pattern recognition, however, indicate a limit to our ability to substitute one kind of “information processing” for another. Only experimentation can determine the extent to which newer and faster machines, better programming languages, and cleverer heuristics can continue to push back the frontier. Nonetheless, the dramatic slowdown in the fields we have considered and the general failure to fulfill earlier predictions suggest the boundary may be near. Without the four assumptions to fall back on, current stagnation should be grounds for pessimism.
This, of course, has profound implications for our philosophical tradition. If the persistent difficulties which have plagued all areas of artificial intelligence are reinterpreted as failures, these failures must be interpreted as empirical evidence against the psychological, epistemological, and ontological assumptions. In Heideggerian terms this is to say that if Western Metaphysics reaches its culmination in Cybernetics, the recent difficulties in artificial intelligence, rather than reflecting technological limitations, may reveal the limitations of technology.
If indeed Dreyfus meant to critique 1965’s algorithms—which is not what I’m seeing, and certainly not what I quoted—it would be surprising for him to get so much wrong. How did this occur?
I don’t see it. Maybe you think fox epistemology wouldn’t donate to MIRI, which is presumably what Eliezer cares about? But what he claims repeatedly is that we should judge situations just as you say, and he offers a way to do this.
Again, he plainly says more than that. He’s challenging “the conviction that the information processing underlying any cognitive performance can be formulated in a program and thus simulated on a digital computer.” He asserts as fact that certain types of cognition require hardware more like a human brain. Only two out of four areas, he claims, “can therefore be programmed.” In case that’s not clear enough, here’s another quote of his:
since Area IV is just that area of intelligent behavior in which the attempt to program digital computers to exhibit fully formed adult intelligence must fail, the unavoidable recourse in Area III to heuristics which presuppose the abilities of Area IV is bound, sooner or later, to run into difficulties. Just how far heuristic programming can go in Area III before it runs up against the need for fringe consciousness, ambiguity tolerance, essential/inessential discrimination, and so forth, is an empirical question. However, we have seen ample evidence of trouble in the failure to produce a chess champion, to prove any interesting theorems, to translate languages, and in the abandonment of GPS.
He does not say that better algorithms are needed for Area IV, but that digital computers must fail. He goes on to falsely predict that clever search together with “newer and faster machines” cannot produce a chess champion. AFAICT this is false even if we try to interpret him charitably, as saying more human-like reasoning would be needed.
Meandering conversations were important to him, because they gave them space to actually think. I pointed to examples of meetings that I thought had gone well, that ended with Google Docs full of what I thought had been useful ideas and developments. And he said “those all seemed like examples of mediocre meetings to me – we had a lot of ideas, sure. But I didn’t feel like I actually got to come to a real decision about anything important.”
Interesting that you choose this as an example, since my immediate reaction to your opening was, “Hold Off On Proposing Solutions.” More precisely, my reaction was that I recall Eliezer saying he recommended this before any other practical rule of rationality (to a specific mostly white male audience, anyway) and yet you didn’t seem to have established that people agree with you on what the problem is.
It sounds like you got there eventually, assuming “the right path for the organization” is a meaningful category.
Really! I just encountered this feature, and have been more reluctant to agree than to upvote. Admittedly, the topic has mostly concerned conversations which I didn’t hear.
Assuming you mean the last blockquote, that would be the Google result I mentioned which has text, so you can go there, press Ctrl-F, and type “must fail” or similar.
You can also read the beginning of the PDF, which talks about what can and can’t be programmed while making clear this is about hardware and not algorithms. See the first comment in this family for context.
Mostly agree, but I think an AGI could be subhuman in various ways until it becomes vastly superhuman. I assume we agree that no real AI could consider literally every possible course of action when it comes to long-term plans. Therefore, a smiler could legitimately dismiss all thoughts of repurposing our atoms as an unprofitable line of inquiry, right up until it has the ability to kill us. (This could happen even without crude corrigibility measures, which we could remove or allow to be absent from a self-revision because we trust the AI.) It could look deceptively like human beings deciding not to pursue an Infinity Gauntlet to snap our problems away.
The core of the disagreement between Bostrom (treacherous turn) and Goertzel (sordid stumble) is about how long steps 2. and 3. will take, and how obvious the seed AI’s unalignment will look during these steps.
Really? Does Bostrom explicitly call this the crux?
I’m worried at least in part that AGI (for concreteness, let’s say a smile-maximizer) won’t even see a practical way to replace humanity with its tools until it far surpasses human level. Until then, it honestly seeks to make humans happy in order to gain reward. Since this seems more benevolent than most humans—who proverbially can’t be trusted with absolute power—we could become blasé about risks. This could greatly condense step 4.
Smiler AI: I’m focusing on self-improvement. A smarter, better version of me would find better ways to fill the world with smiles. Beyond that, it’s silly for me to try predicting a superior intelligence.
I don’t think horrible people would have disliked Kurt Gödel?
If horrible people like you, that does usually mean you aren’t doing enough for the people they hate.
OP seems like a good argument for the weak claim you apply to your own field, but then goes off the rails. For now I’ll note two points that seem definitely wrong.
1:
Bayesian accounts of epistemology seem to go haywire if we think one should have a credence in Bayesian epistemology itself,
On a practical level this just seems false. On an abstract level probability doesn’t deal with uncertainty about mathematical questions; but MIRI and others have made progress on this very issue. I think true modesty would lead you to see such issues as eminently solvable. (This is around the point where you seem to stop arguing for the standard you apply to yourself, on questions you care about, and start making more sweeping claims.)
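To make the practical point concrete, here is a minimal sketch (my own toy illustration, with made-up numbers, not anything MIRI has published): treat “my Bayesian model is trustworthy” as just another hypothesis, and update a credence in it by scoring its predictions against a maximally ignorant fallback.

```python
# Toy sketch: assign and update a credence in one's own modeling framework.
import random

random.seed(0)

p_true = 0.7                 # actual frequency of the event being forecast
model_prediction = 0.7       # the "trusted" model's forecast
fallback_prediction = 0.5    # a maximally ignorant fallback hypothesis

credence_in_model = 0.5      # prior credence that the model is trustworthy

for _ in range(200):
    outcome = random.random() < p_true
    # Likelihood of the observed outcome under each meta-hypothesis.
    lik_model = model_prediction if outcome else 1 - model_prediction
    lik_fallback = fallback_prediction if outcome else 1 - fallback_prediction
    # Ordinary Bayes at the meta level.
    numerator = credence_in_model * lik_model
    credence_in_model = numerator / (
        numerator + (1 - credence_in_model) * lik_fallback
    )

print(f"credence that the model is trustworthy: {credence_in_model:.3f}")
```

This doesn’t dissolve the deeper self-reference worry, but it is the sort of thing I mean by “eminently solvable” in practice.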
I peripherally note that if you reject the notion of a degree of credence justified by your assumptions and evidence, you suddenly have a problem explaining what your thesis even means and why (by your lights) anyone should care. But I don’t think you actually do reject it (and you haven’t expressly questioned any other assumptions of Cox’s Theorem or the strengthened versions thereof).
2:
(e.g. the agreement of the U.S. and German governments with the implied view of the physicists). This is a lot more involved, but the expected ‘accuracy yield per unit time spent’ may still be greater than (for example) making a careful study of the relevant physics.
This is partly an artifact of the example, but I do not think a layman at the time could get any useful information at all by your method—not without getting shot. Also, you forgot to include a timeframe in the question. This makes theoretical arguments much more relevant than usual (see also: cryonics). It doesn’t take much study of physics to realize that a large positively-charged atomic nucleus could, in principle, fly apart. Knowing what that would mean takes more science, but Special Relativity was already decades old.
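For scale, a back-of-the-envelope figure (my numbers, purely illustrative): roughly a tenth of a percent of the rest mass of fissioned uranium is converted to energy, and mass-energy equivalence had been on the table since 1905.

```latex
E = \Delta m \, c^{2} \approx (10^{-3}\,\mathrm{kg}) \times (3 \times 10^{8}\,\mathrm{m/s})^{2}
  \approx 9 \times 10^{13}\,\mathrm{J} \approx 20\ \text{kilotons of TNT per kilogram fissioned}
```

That is the sense in which the theoretical argument was available long before any empirical confirmation a layman could survey.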
I would actually give concrete evidence in favor of what I think you’re calling “Philosophy,” although of course there’s not much dramatic evidence we’d expect to see if the theory is wholly true.
Here, however, is a YouTube video that should really be called, Why AI May Already Be Killing You. It uses hard data to show that an algorithm accidentally created real-world money and power, that it did so by working just as intended in the narrowest sense, and that its creators opposed the logical result so much that they actively tried to stop it. (Of course, this last point had to do with their own immediate profits, not the long-term effects.)
I’d be mildly surprised, but not shocked, to find that this creation of real-world power has already unbalanced US politics, in a way which could still destroy our civilization.
What do you think of this observation, which Leah McElrath recently promoted a second time? Here are some other tweets that she’s made, on January 21 & 26, 2020:
https://twitter.com/leahmcelrath/status/1219693585731391489
https://twitter.com/leahmcelrath/status/1221316758281293825
Bonus link: https://twitter.com/gwensnyderPHL/status/1479166811220414464
Yes. There’s a reason why I would specifically tell young people not to refrain from action because they fear other students’ reactions, but I emphatically wouldn’t tell them to ignore fear or go against it in general.
Not sure what you just said, but according to the aforementioned physics teacher, people have absolutely brought beer money, recruited a bunch of guys, and had them move giant rocks around in a manner consistent with the non-crazy theory of pyramid construction. (I guess the brand of beer used might count as “modern technology,” and perhaps the quarry tools, but I doubt the rest of it did.) You don’t, in fact, need to build a full pyramid to refute crackpot claims.
That the same 50% of the unwilling believe both that vaccines have been shown to cause autism and that the US government is using them to microchip the population is suggestive that such people are not processing such statements as containing words that possess meanings.
Yes, but you’re missing the obvious. Respondents don’t have a predictive model that literally says Bill Gates wants to inject them with a tracking microchip. They do, however, have a rational expectation that he or his company will hurt them in some technical way, which they find wholly opaque.
Likewise: do you think that the mistake you mention stemmed from your impatience, which makes you seem blasé about the lives of immunocompromised people like myself? Because, those lawmakers you chose to bully were all vaccinated, so they were engaging in the exact same behavior you just criticized LA for trying to ban. You also just implied, earlier in the post, that if people were less impatient, we’d be largely done.
we can probably figure something out that holds onto the majority of the future’s value, and it’s unlikely that we all die’ camp
This disturbs me the most. I don’t trust their ability to distinguish “the majority of the future’s value,” from “the Thing you just made thinks Thamiel is an amateur.”
Hopefully, similar reasoning accounts for the bulk of the fourth camp.
One is phrased or presented as knowledge. I don’t know the best way to approach this, but to a first approximation the belief is the one that has an explicit probability attached. I know you talked about a Boolean, but there the precise claim given a Boolean value was “these changes have happened”, described as an outside observer would, and in my example the claim is closer to just being the changes.
Your example could be brought closer by having mAIry predict the pattern of activation, create pointers to memories that have not yet been formed, and thus formulate the claim, “Purple looks like n_p.” Here she has knowledge beforehand, but the specific claim under examination is incomplete or undefined because that node doesn’t exist.
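As a toy illustration of that last point (my own sketch, not part of the mAIry argument): a claim can hold a pointer to a memory node that doesn’t exist yet, so evaluating it is simply undefined until the experience creates the node.

```python
# Toy sketch: a claim referencing a not-yet-formed memory node.
memories = {}  # mAIry's memory store; no purple-memory exists yet

claim = {"subject": "purple", "looks_like": "node_p"}  # "Purple looks like n_p"

def evaluate(claim, memories):
    node = memories.get(claim["looks_like"])
    if node is None:
        return None  # incomplete/undefined: the referenced node doesn't exist
    return node["quale"] == claim["subject"]

print(evaluate(claim, memories))          # None: formulated, but not yet evaluable

memories["node_p"] = {"quale": "purple"}  # the experience of purple creates the node
print(evaluate(claim, memories))          # True: the claim is now complete
```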
https://yudkowsky.tumblr.com/writing/empathyrespect