Basically, AI professionals seem to be trying to manage the hype cycle carefully.
Ignorant people tend to be more all-or-nothing than experts. By default, they’ll see AI as “totally unimportant or fictional”, “a panacea, perfect in every way”, or “a catastrophe, terrible in every way.” And they won’t distinguish between different kinds of AI.
Currently, the hype cycle has gone from “professionals are aware that deep learning is useful” (c. 2013) to “deep learning is AI and it is wonderful in every way and you need some” (c. 2015?) to “maybe there are problems with AI? burn it with fire! Nationalize! Ban!” (c. 2019).
Professionals who are still working on the “deep learning is useful for certain applications” project (which is pretty much where I sit) are quite worried about the inevitable crash when public opinion shifts from “wonderful panacea” to “burn it with fire.” When the public opinion crash happens, legitimate R&D is going to lose funding, and that will genuinely be unfortunate. Everyone savvy knows this will happen. Nobody knows exactly when. There are various strategies for dealing with it.
Accelerate the decline: this is what Gary Marcus is doing.
Carve out a niche as an AI Skeptic (who is still in the AI business himself!). Then, when the funding crunch comes, his companies will be seen as “AI that even the skeptic thinks is legit” and have a better chance of surviving.
Be Conservative: this is a less visible strategy but a lot of people are taking it, including me.
Use AI only in contexts that are well justified by evidence, like rapid image processing to replace manual classification. That way, when the funding crunch happens, you’ll be able to say you’re not just using AI as a buzzword, you’re using well-established, safe methods that have a proven track record.
Pivot Into Governance: this is what a lot of AI risk orgs are doing.
Benefit from the coming backlash by becoming an advisor to regulators. Make a living not by building the tech but by talking about its social risks and harms. I think this is actually a fairly weak strategy because it’s parasitic on the overall market for AI. There’s no funding for AI think tanks if there’s no funding for AI itself. But it’s an ideal strategy for the cusp period when we’re just shifting from blind enthusiasm to blind panic.
Preserve Credibility: this is what Yann LeCun is doing and has been doing from day 1 (he was a deep learning pioneer and promoter even before the spectacular empirical performance results came in).
Try to forestall the backlash. Frame AI as good, not bad, and try to preserve the credibility of the profession as long as you can. Argue (honestly but selectively) against anyone who says anything bad about deep learning for any reason.
Any of these strategies can be pursued while saying true things! In fact, assuming you really are an AI expert, the smartest thing to do in the long run is to say only true things, and use connotation and selective focus to define your rhetorical strategy. Reality has no branding; there are true things to say that comport with all four strategies. Gary Marcus is a guy in the “AI Skeptic” niche saying things that are, afaik, true; there are people in that niche who are saying false things. Yann LeCun is a guy in the “Preserve AI Credibility” niche who says true things; when Gary Marcus says true things, Yann LeCun doesn’t deny them, but criticizes Marcus’s tone and emphasis. Which is quite correct; it’s the most intellectually rigorous way to pursue LeCun’s chosen strategy.
in retrospect, 6 years later:
wow, I was way too bearish about the “mundane” economic/practical impact of AI.
“AI boosters”, whatever their incentives, were straightforwardly directionally correct in 2019 that AI was drastically “underrated” and had tons of room to grow. Maybe “AGI” was the wrong way of describing it. Certainly, some people seem to be in an awful hurry to round down human capacities for thought to things machines can already do, and they make bad arguments along the way. But at the crudest level, yeah, “AI is more important than you think, let me use whatever hyperbolic words will get that into your thick noggin” was correct in 2019.
also the public figures I named can no longer be characterized as only “saying true things.” Polarization is a hell of a drug.
I would totally agree they were directionally correct; I under-estimated AI progress. I think Paul Christiano got it about right.
I’m not sure I agree about the use of hyperbolic words being “correct” here; surely, “hyperbolic” contradicts the straightforward meaning of “correct”.
Part of the state I was in around 2017 was: there were lots of people around me saying “AGI in 20 years”, by which they meant a thing that shortly afterward FOOMs and eats the sun or something. I thought this was wrong, and a strange set of belief updates (one that was not adequately justified, and where some discussions were suppressed because “maybe it shortens timelines”). And I stand by “no FOOM by 2037”.
The people I know these days who seem most thoughtful about the AI that’s around and where it might go (“LLM whisperer” / cyborgism cluster) tend to think “AGI already, or soon” plus “no FOOM, at least for a long time”. I think there is a bunch of semantic confusion around “AGI” that makes people’s beliefs less clear, with “AGI is what makes us $100 billion” as a hilarious example of “obviously economically/politically motivated narratives about what AGI is”.
So, I don’t see these people as validating “FOOM soon” even if they’re validating “AGI soon”, and the local rat-community thing I was objecting to was something that would imply “FOOM soon”. (Although, to be clear, I was still under-estimating AI progress.)
What? How exactly is this a way of dealing with the hype bubble bursting? It seems like if it bursts for AI, it bursts for “AI governance”?
Am I missing something?
Never mind. It seems like I should have just kept reading.