Fighting the Taliban also fulfills the purpose of funneling money to friends and supporters.
One of the major problems Western nations have run into in the past half century is that we end up fighting wars where (a) we don’t just want to kill everyone, and (b) there is no strong central control of the opposition (or at least none we want to preserve), so we’re effectively forced into the last scenario above.
This argument only supports your main point (“command and control by far most important”) insofar as future wars will also be exclusively asymmetric. That assumption, though, is problematic even today. The US isn’t spending billions of dollars on stealth fighters and bombers to fight the Taliban.
How can an AI that is 10 times as smart and innovative as Elon Musk not be godlike? xD
But seriously, if an AI is really capable of making such great headway in weapons technology, it is then surely capable of bootstrapping itself to superintelligence.
In the limit of large swarms of cheap, small drones, the attacker always has an intrinsic advantage. The attacking drones are trying to hit large, relatively slow moving targets while the defender is trying to “hit a bullet with another bullet”. The only scalable countermeasure in my mind are directed energy weapons; you can’t get faster or smaller than elementary particles. If a laser is fast and accurate enough to shoot down mosquitoes out of the air, it can probably shoot down drones, too.
The US has gained a lot of experience in asymmetric warfare in the last few decades, but due to the Long Peace no one can be sure of which military technologies actually work well in the context of a symmetric war between major powers; none of it has really been validated. So the “lead” the US has over the rest is somewhat theoretical.
If you buy into the Great Stagnation theory then a 20-year lead today should be a lesser deal than in 1900.
Drones, yes, Terminators less so. It depends on whether AI technology can thread the needle of being powerful enough to navigate a very complex environment but not general enough to be a superintelligence. I kinda doubt that such a gap even exists.
If you extrapolated those straight lines further, doesn’t it mean that even small businesses will be able to afford training their own quadrillion-parameter models just a few years after Google?
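To make that extrapolation concrete, here is a rough back-of-the-envelope sketch; every number in it (the frontier-run cost, the small-business budget, the yearly price-performance gain) is an assumption of mine, not a figure from the post:

```python
import math

# All numbers below are illustrative assumptions, not sourced figures.
google_budget = 1e9       # assumed cost of a frontier (quadrillion-parameter) training run
small_biz_budget = 1e5    # assumed amount a small business could spend on one run
yearly_improvement = 10   # assumed price-performance gain per year, read off the trend lines

# Years between "Google can afford it" and "a small business can afford it":
lag_years = math.log(google_budget / small_biz_budget, yearly_improvement)
print(f"lag ~ {lag_years:.1f} years")  # ~4 years at 10x/year; ~13 years at only 2x/year
```

The “a few years after Google” claim only holds if the lines really are that steep; at a more modest doubling every year or two, the lag stretches to a decade or more.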
Is density even relevant when your computations can be run in parallel? I feel like price-performance will be the only relevant measure, even if that means slower clock cycles.
You can listen to his thoughts on AGI in this video.
I find that he has an exceptionally sharp intuition about why deep learning works, from the original AlexNet to Deep Double Descent. You can see him predicting the progress in NLP here.
“Why isn’t it an AGI?” here can be read as “why hasn’t it done the things I’d expect from an AGI?” or “why doesn’t it have the characteristics of general intelligence?”, and there’s a subtle shade of difference here that requires two different answers.
For the first, GPT-3 isn’t capable of goal-driven behaviour.
Why would goal-driven behavior be necessary for passing a Turing test? It just needs to predict human behavior in a limited context, which was what GPT-3 was trained to do. It’s not an RL setting.
and by saying that GPT-3 definitely isn’t a general intelligence (for whatever reason), you’re assuming what you set out to prove.
I would like to dispute that by drawing an analogy to how fire was defined before modern chemistry. We didn’t know exactly what fire was, but it was a “you know it when you see it” kind of deal. It’s not helpful to pre-commit to a certain benchmark, like we did with chess: at one point we were sure beating the world champion in chess would be a definitive sign of intelligence, but Deep Blue came and went and we now agree that chess AIs aren’t general intelligence. I know this sounds like moving the goalposts, but then again, the point of contention here isn’t whether OpenAI deserves some brownie points or not.
“Passing the Turing test with competent judges” is an evasion, not an answer to the question – a very sensible one, though.
It seems like you think I made that suggestion in bad faith, but I was being genuine with that idea. The “competent judges” part was so that the judges, you know, actually ask adversarial questions, which is the point of the test. Cases like Eugene Goostman should get filtered out. I would grant that the AI be allowed to train on a corpus of adversarial queries from past Turing tests (though I don’t expect this to help), but the judges should also have access to this corpus so they can try to come up with questions orthogonal to it.
I think the point at which our intuitions depart is: I expect there to be a sharp distinction between general and narrow intelligence, and I expect the difference to resolve very unambiguously in any reasonably well designed test, which is why I don’t care too much about precise benchmarks. Since you don’t share this intuition, I can see why you feel so strongly about precisely defining these benchmarks.
I could offer some alternative ideas in an RL setting though:
An AI that solves Snake perfectly on any map (maps should be randomly generated and split between training and test sets), or
An AI that solves unseen Chronotron levels at test time within a reasonable amount of game time (say <10x human average) while being trained on a separate set of levels
I hope you find these tests fair and precise enough, or that they at least give a sense of what I’m looking for in an agent with “reasoning ability”. To me these tasks demonstrate why reasoning is powerful and why we should care about it in the first place. Feel free to disagree though.
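To be concrete about the first test, here is a minimal sketch of the protocol I have in mind; the map generator, grid size, and seed counts are all just placeholders:

```python
import random

def generate_map(seed, width=20, height=20, wall_fraction=0.1):
    """Randomly scatter walls on a grid; the seed fully determines the map."""
    rng = random.Random(seed)
    return {(x, y)
            for x in range(width) for y in range(height)
            if rng.random() < wall_fraction}

# Split the map seeds so the agent is only ever evaluated on maps it never trained on.
all_seeds = list(range(10_000))
random.Random(0).shuffle(all_seeds)
train_seeds, test_seeds = all_seeds[:9_000], all_seeds[9_000:]

train_maps = [generate_map(s) for s in train_seeds]  # the agent trains on these
test_maps = [generate_map(s) for s in test_seeds]    # "solves perfectly" is judged on these
```

The point of the split is that a solution which merely memorizes map-specific patterns should fail on the held-out maps, while an agent that has actually abstracted the structure of the game shouldn’t care which map it’s handed.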
Yeah, the terms are always a bit vague; as far as existence proofs for AGI go, there are already humans and evolution, so my definition of a harbinger would be something like ‘a prototype that clearly shows no more conceptual breakthroughs are needed for AGI’.
I still think we’re at least one breakthrough away from that point, though that belief is dampened by the position of Ilya Sutskever, whose opinion I greatly respect. But either way, GPT-3 in particular just doesn’t stand out to me from the rest of the DL achievements over the years, from AlexNet to AlphaGo to OpenAI5.
And yes, I believe there will be fast takeoff.
I don’t think GPT-3 is a harbinger. I’m not sure there ever will be a harbinger (at least to the public); I’m leaning towards no. An AI system that passes the Turing test wouldn’t be a harbinger; it would be the real deal.
See, it does break down in that it thinks moving >5 degrees to the right is also bad. What’s going on with the “car locks”, or the “algorithm”? I agree that’s weird. But the concept is still understood, and, AFAICT, is not “just associating” (in the way you mean it).
That’s the exact opposite of the impression I got from this new segment. In what world is confusing “right” and “left” a demonstration of reasoning over mere association? How much more wrong could GPT-3 have gotten the answer? “Turning forward”? No, that wouldn’t appear in the corpus. What’s the concept that’s being understood here?
And why wouldn’t it be amazing for some (if not all) of its rolls to exhibit impressive-for-an-AI reasoning?
Because GPT-3 isn’t using reasoning to arrive at those answers? Associating gravity with falling doesn’t require reasoning; determining whether something would fall in a specific circumstance does, but that leaves only a small space of answers, so guessing right a few times and wrong at other times (as GPT-3 does) isn’t evidence of reasoning. The reasoning doesn’t have to do any work of locating the hypothesis because you’re accepting vague answers and frequent wrong answers.
I didn’t mean to imply we should wait for AI to pass the Turing test before doing alignment work. Perhaps the disagreement comes down to you thinking “We should take GPT-3 as a fire-alarm for AGI and must push forward AI alignment work”, whereas I’m thinking “There is and will be no fire-alarm, and we must push forward AI alignment work”.
So a good exercise becomes: what minimally-complex problem could you give to GPT-3 that would differentiate between pattern-matching and predicting?
Passing the Turing test with competent judges. If you feel like that’s too harsh yet insist on GPT-3 being capable of reasoning, then ask yourself: what’s still missing? It’s capable of both pattern recognition and reasoning, so why isn’t it an AGI yet?
GPT-3 inferred that not being able to turn left would make driving difficult. Amazing.
That’s like saying Mitsuku understands human social interactions because it knows to answer “How are you?” with “I’m doing fine thanks how are you?”. Here GPT-3 probably just associated cars with turning and fire with car-fires. Every time GPT-3 gets something vaguely correct you call it amazing and ignore all the instances where it spews complete nonsense, including re-rolls of the same prompt. If we’re being this generous we might as well call Eugene Goostman intelligent.
Consistency, precision and transparency are important. They’re what set reasoning apart from pattern matching and why we care about reasoning in the first place. They’re what grant us the power to detonate a nuke or send a satellite into space on the first try.
In a world where 90% of scientists just assume that science works like a religion, a 96%-4% consensus is not a good indicator for implementing policy, it’s an indicator that the few real scientists are almost evenly split on the correct solution.
Why would that cause a 96%-4% split and not a 60%-40% split?
In an Aristotelian framework, dropping 3 very heavy and well-lacquered balls towards Earth and seeing that they fall at a constant speed, barring any wind, is enough to say
FG = G * m1 * m2 / r^2
is a true scientific theory.
You mean increasing speed?
Even on the margin, anything that costs Facebook users also makes it less valuable for its remaining users—it’s a negative feedback loop.
I think you meant to say “positive feedback loop”. “Negative” refers to self-stabilizing, not bad/undesirable or the sign of the change.
I think I have found an example of my third design element:
Patterns that require abstract reasoning to discern
The old Nokia game Snake isn’t technically a board game, but it’s close enough if you take out the reaction-time element. The optimal strategy is to follow a Hamiltonian cycle; that way you’ll never run into a wall or yourself until the snake literally covers the entire playing field. But a reinforcement learning algorithm wouldn’t be able to make this abstraction; you would never stumble onto the optimal strategy just by chance. Unfortunately, as I suggested in my answer, the pattern is too rigid, which allows a hard-coded AI to solve the game.
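To illustrate what I mean by a hard-coded AI exploiting the rigid pattern, here is a minimal sketch of the Hamiltonian-cycle strategy; it assumes a grid with an even width, and the function name and grid size are just for illustration:

```python
def hamiltonian_cycle(width, height):
    """Boustrophedon Hamiltonian cycle on a width x height grid (width must be even):
    sweep rows 1..height-1 up and down, column by column, then come back along row 0."""
    assert width % 2 == 0, "this simple construction needs an even grid width"
    cycle = []
    for x in range(width):
        ys = range(1, height) if x % 2 == 0 else range(height - 1, 0, -1)
        cycle.extend((x, y) for y in ys)
    cycle.extend((x, 0) for x in range(width - 1, -1, -1))  # return along row 0
    return cycle  # visits every cell exactly once; the last cell is adjacent to the first

# A snake that always steps to the next cell of this cycle never hits a wall or itself
# until it literally fills the board.
path = hamiltonian_cycle(10, 10)
assert len(set(path)) == 10 * 10
```

On a fixed rectangular board this shortcut solves the game outright, which is exactly the rigidity I’m complaining about.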