Student in fundamental and applied mathematics, interested in theoretical computer science and AI alignment
Technoprogressive, biocosmist, rationalist, defensive accelerationist, longtermist
Is insider trading allowed on Manifold?
As a reality check, “any company which funds research into AGI” here would mean all the big tech companies (MAGMA). Far more people use those products than personally know AGI developers. Switching to a different browser/search engine/operating system, installing an ad blocker, etc. is a much easier ask than social ostracism. Those companies’ revenues collapsing would end the AI race overnight, while AGI developers merely being left with a social circle of techno-optimists only wouldn’t.
Ok, let’s say we get most of the 8 billion people in the world to ‘come to an accurate understanding of the risks associated with AI’, such as the high likelihood that ASI would cause human extinction.
Then, what should those people actually do with that knowledge?
Boycotting any company which funds research into AGI would be (at least in this scenario) both more effective and, for the vast majority of people, more tractable than “ostracizing” people whose social environment is already largely dominated by… other AI developers and like-minded SV techno-optimists.
I think it’s the combination of a temporal axis and a (for lack of a better term) physical violence axis.
I don’t think the point of hunger strikes is to achieve immediate material goals, but publicity/symbolic ones.
It is vanishingly unlikely that all other major AI companies would agree to do so without the US government telling them to; such a statement would be helpful, but only as a way to communicate their position, not because of the commitment itself. Why not ask them to ask the government to stop everyone (maybe conditional on China agreeing to stop everyone in China)?
This seems to be exactly the point of the demand? This is a demand that would be cheap (perhaps even negative-cost) for DeepMind to accept (because the other AI companies wouldn’t agree to it), and it would also be a major publicity win for the Pause AI crowd. Even as someone skeptical of the hunger strikes, I think this is a very smart move.
automated AI safety research, biosecurity, cybersecurity (including AI control), possibly traditional transhumanism (brain-computer interfaces, intelligence augmentation, whole brain emulation)
My point was that in the first stages of AI-induced job loss, it might not be clear to everyone (either due to genuine epistemic uncertainty or due to partisan bias) whether the job loss was induced by AI or by whichever political grievance they previously preferred. This was just an aside and not important to my broader point, though.
? Protectionism (whether against AI, or immigration, or trade) is often justified by concerns about job loss.
I think this is a silly argument, comparable to saying that if you don’t want to bite the bullet of Esoteric Hitlerism you aren’t a true right-winger, or that if you don’t want to bite the bullet of Posadism you aren’t a true left-winger. Yud, as of right now, believes we should research intelligence augmentation technology so that supercharged AI safety researchers can build Friendly AI, right?
Sorry, typo. I didn’t mean to make a connection between those two; it’s just that many developing countries have higher unemployment rates for reasons that are not really relevant to what we’re talking about here.
Consider an axis where on one end you’ve got Shock Level Four and on the opposite end you’ve got John Zerzan. Anything in between is some gradation of gray where you accept some proportion p of all available technology.
Do you believe mass unemployment will jump from ~0-10% in developed countries to 90% overnight? If not, the political question of whether to respond to unemployment increases with redistribution or with protectionism (of any kind – it likely won’t be immediately clear that AI, and not other political grievances, is responsible) will be particularly salient in the short term.
I think anti-tech vs. pro-tech is in fact going to become a more important political axis, orthogonal to the left-right axis, as time goes on (and the OP seems like clear evidence of that?), and the position you suggest is just ‘centrism’ on that axis. See fallacy of gray.
Reiterating what I said above: “conservatives” should be taboo’d here. It appears to me that this faction is flashy but does not have enough political capital or leverage to decide Republican policy relative to the tech right and the neocons, and could only serve as a tie-breaker on issues where Ds and (other) Rs disagree (e.g. antitrust policy). On the flip side, it’s worthwhile talking about how to interact with anti-techs, whether they are left-coded (deep greens), right-coded (national conservatives), or whatever the anti-AI artists are.
I’m pretty happy if they’re just on board with “stop building AGI” for whatever reason.
Thank you for editing (the sentence was cut short in an earlier version). Reiterating what I said to @habryka with the same remark:
Even from a PauseAI standpoint (which isn’t my stance, but I do think global compute governance would be a good thing if achievable), I don’t see nationalists (some of whom want the US to leave the United Nations) pushing for global compute governance with China. This is really only convincing from a specifically StopAI standpoint, where you push for a national ban because you believe everyone, regardless of {prior political beliefs, risk tolerance, likelihood of ending up as a winner post-intelligence-curse}, will agree on stopping AGI and not taking part in an arms race if exposed to the right arguments, and you expect people everywhere else on Earth to also push for a national ban in each of their own countries without any coordination.
Your scenario above was that most of the 8 billion people in the world would come to believe that ASI would, with high likelihood, cause human extinction. I think it’s very reasonable to believe that this would make it considerably easier to coordinate on making alternatives to MAGMA products more usable in this world, as network effects and economies of scale are largely the bottleneck here.