cousin_it
Thanks for the response. It does not convince me, because my view of how elites will act when informed about AI x-risk is based on actual examples that happened. If you have a specific counterargument to that, can you summarize it here? (Though of course I’d prefer if you didn’t search for counterarguments and just realized that I’m right, but hey, we can’t always get everything we want.)
My go-to example is cheese. I still have a vivid memory of going to a US supermarket and buying a packet of Kraft… something, then coming back to my hotel room, taking a bite of the thing and becoming horrified. In Switzerland every cheese-like thing in the supermarket is actually cheese.
Add to that universal compulsory health insurance, public transport everywhere, laws making it difficult to fire or evict a person, minimum capital of 20K CHF to start an LLC, and you’ll see clearly whether Switzerland is libertarian or not.
(In case it’s not clear, I think Switzerland’s non-libertarian approach is a very good thing overall. With the exception of policies that make it harder to build more housing, which are as bad as everywhere else.)
I disagree with the plan. My objection is about mistake vs conflict theory.
The mistake-theoretic approach is to assume that everyone, masses and elites alike, will react to AI danger as a danger to everyone’s survival. This approach has been tried and the results are disastrous. A bunch of elites heard “AI is dangerous” and decided “gee, let’s build this dangerous thing!” See for example Elon Musk’s early emails, sounding worried about x-risk from AI, leading him to start OpenAI.
The conflict-theoretic approach is to acknowledge that AI’s nature as an amplifier of power will always make it an irresistible temptation to elites. And things could get a lot worse. Right now governments are only dimly aware that AIs could be superweapons. If you convince them of that in earnest, you won’t like the results.
The only way an anti-AI movement can do good is if it acknowledges the conflicting interests of the masses vs the elites in the matter of AI, and places itself firmly on the side of the masses, with a readiness to go against and overrule elites.
All except 7 and 10 are fine ways to make a point; you don't need to puff them up through a straw. I think people should be ok with sharing sentence-length or comment-length ideas without expanding them into post-length (or book-length, ugh, there are so many books that could have been comments or posts).
7 falls to Betteridge's law. You should've known when you wrote that title :-) Anyway, I live in Switzerland; it has many good aspects, but frontier-like freedom isn't one of them, neither in law nor in practice nor in vibes.
10 sounds like you actually have some cool info to share, so go ahead and share it :-)
Raising awareness of AI risk led directly to the founding of OpenAI and Anthropic. Raising awareness of AI risk on the level of governments can easily backfire and lead to an arms race in the same way.
The best and clearest statement of the problem and the only possible solution is Bill Hibbard’s critique of SIAI back in 2003. It’s almost unbelievable how much this text got right, 23 years ago:
1. The SIAI analysis fails to recognize the importance of the political process in creating safe AI.
This is a fundamental error in the SIAI analysis. CFAI 4.2.1 says “If an effort to get Congress to enforce any set of regulations were launched, I would expect the final set of regulations adopted to be completely unworkable.” It further says that government regulation of AI is unnecessary because “The existing force tending to ensure Friendliness is that the most advanced projects will have the brightest AI researchers, who are most likely to be able to handle the problems of Friendly AI.” History vividly teaches the danger of trusting the good intentions of individuals.
The singularity will completely change power relations in human society. People and institutions that currently have great power and wealth will know this, and will try to manipulate the singularity to protect and enhance their positions. The public generally protects its own interests against the narrow interests of such powerful people and institutions via widespread political movements and the actions of democratically elected government. Such political action has never been more important than it will be in the singularity.
The reinforcement learning values of the largest (and hence most dangerous) AIs will be defined by the corporations and governments that build them, not the AI researchers working for those organizations. Those organizations will give their AIs values that reflect the organizations’ values: profits in the case of corporations, and political and military power in the case of governments. Only a strong public movement driving government regulation will be able to coerce these organizations to design AI values to protect the interests of all humans. This government regulation must include an agency to monitor AI development and enforce regulations.
The breakthrough ideas for achieving AI will come from individual researchers, many of whom will want their AI to serve the broad human interest. But their breakthrough ideas will become known to wealthy organizations. Their research will either be in the public domain, done for hire by wealthy organizations, or will be sold to such organizations. Breakthrough research may simply be seized by governments and the researchers prohibited from publishing, as was done for research on effective cryptography during the 1970s. The most powerful AIs won’t exist on the $5,000 computers on researchers’ desktops, but on the $5,000,000,000 computers owned by wealthy organizations. The dangerous AIs will be the ones capable of developing close personal relations with huge numbers of people. Such AIs will be operated by wealthy organizations, not individuals.
Individuals working toward the singularity may resist regulation as interference with their research, as was evident in the SL4 discussion of testimony before Congressman Brad Sherman’s committee. But such regulation will be necessary to coerce the wealthy organizations that will own the most powerful AIs. These will be much like the regulations that restrain powerful organizations from building dangerous products (cars, household chemicals, etc), polluting the environment, and abusing citizens.
To highlight the key part that I’d like people to take away, “only a strong public movement driving government regulation” has a chance of solving the problem. Attempts to influence governments that don’t go through the public risk concentrating the power and will to change in the hands of governments, instead of where it should be: with the public.
Sorry to resurrect the thread, but I found another possible bug: the “see in context” link doesn’t work if the comment is a descendant of a negative-voted one, even if the comment itself is positive-voted.
Yeah, the world treats activists badly. I think the most effective activists are those who are aware of this and can plan around it.
The best activist playbook is this phrase from the Bible: “Behold, I send you out as sheep among wolves, so be wise as serpents and gentle as doves”. Translation: when you go out into the world to defend an ideal that the world doesn’t share, you’re going to be weak. So you have to be sincere about holding the ideal, like a dove, but be ruthlessly strategic about how you promote it, like a serpent. There is a complex balancing act of being naive and being strategic about being naive. This is why, when reading the biographies of Christian saints, you get the impression that they were trying to get killed as publicly as possible. They knew the world would treat them badly, and instead of getting discouraged by that, they used it strategically. This played a big role in how they ended up winning so much.
Of course I’m not saying that one should try to get killed as publicly as possible. What I’m saying is that when you step up a little bit, you sign up for the activist life a little bit. It’s best to go into it with open eyes.
Yeah, I mentioned enslavement in previous comments. Since Oliver is mostly interested in the part about North America, we can just talk about extermination because that’s what happened there.
European colonialism often involved clearing a place of its previous inhabitants and resettling it, which is meaningfully different and worse than the behavior of most empires. Especially given the scale.
For purposes of building a Pax, this was not necessary. The Romans managed fine, they had Africans and Germans in the same empire, and a variety of client states as well. And the French got along fine with Native Americans when they wanted to. The rhetoric about “populations that can’t meaningfully assimilate”, also known as the rhetoric about “savages”, is always a lie. It’s downstream of the desire to remove a population and settle the place yourself.
A popular starting point of European colonialism is 1415, the conquest of Ceuta. If you make a huge change like “no colonialism” from that point onward, do you think after 500 years there would still arise a recognizable Nazi Germany or Soviet Russia? There’s no way. Everything would be different. (And not like South America either, because South America was shaped by colonialism entirely.)
Heck, I might even be ok with Europe being expansionist! Just do it like most empires, conquer and rule. The amount of extermination and enslavement that Europeans did is abnormal even for empires, it’s completely above and beyond. Clearing entire continents would make even Mongols go WTF.
Could a “conquer and rule” empire lead to widespread progress? Go look at a Roman aqueduct, they’re all over Europe and they’re still standing now. Could it create a big alliance? Yes. Could everything be fine? Yes. I’m not saying it would. But I also see no reason why the genocides were necessary.
Yeah, on further thought I can retract the meta point. Feel free to argue for colonialism, I’ll just be here to argue against :-)
Also don’t forget that I am here purely talking about colonization of North America. My current model is that some other colonization efforts were extremely bad.
On the object level I think this is weak. “Yes, the Worldwide Holocaust was overall bad, but the part that happened here was good, because we built something nice on the site afterward.” What happened to building nice things without having a holocaust first? Or is it like, we wanted to build a palace of human rights, but these other people were in the way, so we killed them and built the palace of human rights! Look everybody, how beautiful it is! Hmm. You’re certainly not alone in this position (Bertrand Russell argued for it all his life) but I still find it weak.
And yet Congo is now inhabited by its natives, while Australia after British “soft colonialism” isn’t.
I’m not in the US. Will try to answer your questions though.
Do you see analogies between America’s situation now, and Russia after imperialism or after communism (or even, potentially, after Putin)?
Not really. What analogies do you mean?
Would you want to go all the way to the Bobby Fischer solution—whites back to Europe, blacks back to Africa (and implicitly everyone else back to their homelands), hand the American continent back to the indigenous peoples? Or is the rise of progressivism an appropriate response, perhaps analogous to the rise of communism in 20th-century Russia?
I think the rise of progressivism is a good response, yeah. And it seems much less dangerous than Russian communism.
Also, if American civilization and/or European colonialism had never existed, are there good, important, even crucial features of our present world, that might never have existed as well?
I think if European colonialism had restricted itself to conquering and ruling, as most other empires did, and didn’t go for so much extermination and mass enslavement—then most good things about the present world would still have existed, and many other good things would have existed as well that don’t currently exist.
Believe it or not, I’m not against all conquest or imperialism. The main factor to me is that many (most?) empires in history were content to conquer and rule the natives. But European colonialism, on a huge part of territory it affected, went for extermination or mass enslavement instead. This unusual aspect, combined with the scale, is what makes it the worst atrocity to me.
Yes, you’re right of course. I should’ve restricted to atrocities against humans. What we do to animals is the next level of horror.
Worst in human history, period. The Mongols didn’t come anywhere close to clearing three continents (two Americas and Australia) of almost all native population and resettling them themselves, turning a fourth continent (Africa) into a supplier of slaves for centuries, creating a huge bloody mess on the fifth continent (India with tens of millions dead in famines that stopped instantly upon independence, China with the century of humiliation) and a bunch of other things too. There’s not much room to get worse, the Earth has only so many continents.
Wow, this is bad.
I mean, object level, colonialism was the worst atrocity in human history and nobody should defend it. That’s just my opinion of course. But meta level, in the previous post you describe yourself as holding an important position in the movement (LW / EA / AI-safety), and in the followup you say colonialism was a good thing actually. What a target to paint on the movement; what a signpost for young people deciding whether to join. Are you alright?
The biggest examples to me are RLHF and the idea of the HHH assistant (alignment techniques that ended up accelerating the race a lot). And less direct but still relevant, both OpenAI and Anthropic being founded in the name of alignment.
There are some on LW, but more prominently elsewhere, on the political left. Like this article by Mike Monteiro.
Correct! Elon Musk is an entrepreneur, Sam Altman is an entrepreneur, Dario Amodei is a researcher. Informing them led to accelerating the race. Now you want to make a bunch of politicians more informed about AI. The job of politicians is to get power. How do you think it will go? A race to develop AI for military and social control, that’s how. It’s worth being as cynical as possible about this, because reality will look more cynical still.
I agree with your focus on democratic institutions. But less on “institutions” and more on “democratic”. The people at the top will always face competitive incentives to use AI to get more power, even if they know about the risk. That’s why I keep saying that an anti-AI movement needs to be able to overrule most people at the top, and its power base needs to be at the bottom.