DMs open
Support the movement against AI extinction risk
Most of my documents are living documents, so the version posted on LessWrong may be an older version. The latest version is on my website.
LessWrong core members' unwillingness to engage in conflict is directly leading to the end of the world.
By conflict I mean publicly humiliating developers at these companies (including Anthropic), cutting them out of your social circle, organising protests outside their offices, and running for election with an anti-AI-company message.
I am willing to go further by supporting whistleblowers and cyberattackers against AI companies. But the above is the minimum to become my ally.
In some hypothetical game theory puzzle, sure. In the real world it does necessitate it, with something like >95% probability.
And here we are talking about positive-sum stuff like growing a business.
The Pause AI movement is explicitly a zero-sum political battle.
Positive-sum games still involve a lot of zero-sum moves! Just because the pie is growing doesn’t mean it doesn’t matter who gets more of the pie. If you are a company CEO in a growing industry, you will end up taking adversarial moves against lots of people. You will sue people, you will fire employees, you will take profit away from your competitors if you succeed, and so on.
The situation is fundamentally adversarial. People want different things and are willing to go to extreme lengths to get them.
I think my statement is true of basically every major political or economic change in human history.
It’s kinda complicated; I can’t answer a blanket yes or no. There are hypothetical situations where I might advocate such a plan, yes.
Also, I want more info on how this connects to my comment.
I am fundamentally suspicious of any plan to solve AI risk where everyone is better off at the end. Unless you can pinpoint who is suffering as a result of your plan succeeding, I am unlikely to take your plan too seriously.
Bring on the downvotes!
In practice, the way this problem is often solved nowadays is to find third-party internet forums where people can leave honest reviews that can’t be easily censored, such as Google Maps reviews, Reddit threads, or Glassdoor job reviews.
Google and Reddit can’t be trusted to be censorship-free either, but the instances of censorship there are often various governments (China, the US, Russia, etc.) demanding censorship, as opposed to your ice cream seller demanding censorship.
Mass violence, especially collusion between various parties (governments, religious institutions, families) to apply violence, is what makes information really censored, to the point where entire populations can be repressed and then made to love their repressors.
I think censorship-resistant social media platforms are an important piece of solving this. I think leaking the secrets of powerful actors who use violence to censor others is another important piece.
A non-trivial fraction of my life philosophy is oriented around avoiding environments that force me into paranoia and incentivizing as little paranoia as possible in the people around me.
Makes sense! My personal preference is to openly declare who my enemies are, and openly take actions that will cause them to suffer. I’m much less keen on the cloak-and-dagger strategy that is required to make someone paranoid and then exploit said paranoia. Because I tend to openly declare who my enemies are, people who are not openly declared as my enemies can find it relatively easier to trust or at least tolerate me in their circles.
I think fundamentally the world is held together by threats of mass violence, be it threats of nuclear war at a geopolitical level, or threats of mass revolt by armed citizens at a domestic level. Hence I think trying to avoid all conflict is bad—often conflict theory is the right approach and mistake theory is the wrong approach.
I support more people on LessWrong writing about how best to fight conflicts and win, rather than how to avoid conflicts entirely.
P.S. If you liked this comment, you should check out my website; a lot of my writing focuses explicitly on topics like this one.
Yes, I updated it to “MtG colour wheel applied to politics”
Oh. This is actually useful, I didn’t realise this. I’ll update the title.
Update: The title is now “MtG Colour Wheel applied to Politics”
Oh! I’ve changed the title to “Duncan Sabien’s Colour Wheel applied to Politics”. Does that work? Or is there some reason his name should be removed completely?
Thanks for the reply!
I will have to look into microwaves more; I’m not comfortable recommending them right now.
Regarding the trusted circle part:
Daniel Ellsberg’s psychiatrist’s office was broken into in an operation led by former CIA officer Howard Hunt.
Assange’s lawyers had to communicate via encrypted comms because they realised they were being spied on. There is a long list of cases of lawyers being intimidated and surveilled.
You can just go look at the websites of legal resources on this topic; none of them are willing to even hint at the notion that they might support classified leaks. If you actually schedule calls with them, you’ll realise the same thing.
Regarding psychology, I basically agree, and I don’t think I have solved this problem yet. Hence I didn’t share much about it. I’d love it if you could write more about your personal experience with it (either in public, or we could have a private chat), as it is valuable info.
Regarding Russia, my only consideration is the probability of not going to prison (or worse). Do you propose any other country with a lower probability of going to prison?
Is the implication that it’s an unconfirmed open secret that the AI labs are doing really bad stuff?
Almost certainly yes as of today, and everything builds a larger picture. For instance, if some of the leaders are proven to be sex offenders, as Altman has been accused of being (idk if the accusations are true), or are proven to be successionists, as Rich Sutton is, it strengthens the political movement against building ASI.
I expect my guide to be even more valuable a couple of years from now than it is today.
I’m not a huge fan of the famous whistleblowers personally; they were reckless and put lives at risk.
I can maybe see why you have that opinion, but I think the criterion here should be something closer to: who used less violence, the US govt or the people exposing them? Since I expect ASI to lead to human extinction or a permanent dictatorship, I am highly open to solutions that involve some violence or collateral damage but are not as bad as the ASI outcome itself.
This seems related to Ted Kaczynski’s views on the power process and artificial superintelligence. Once all the material, safety, and social tiers of Maslow’s hierarchy are solved by ASI, there is nothing useful or meaningful left for humans to do.
Bye
If you work for the NSA or something and you have access to a bunch of TS/SCI information and you take all of it to Russia because 3% of it (or even 30% of it!) involved the U.S. government doing bad things, you are a traitor to your country and to the free world and fair-minded people will not praise you as a hero.
I think the word “all” is doing a lot of heavy lifting in this claim.
I also think you’re trying to pass off your own personal opinion of who is a traitor as everyone’s opinion of who is a traitor.
But the idea that the Russian government has a “good track record” of helping whistleblowers is absurd.
I’m pointing out which countries are most likely to actually grant you asylum, as opposed to deporting you back to the US, where you will be imprisoned.
If you become aware of bad things that the U.S. government is doing, and you want to become a whistleblower, talk to a trusted lawyer in the U.S. Do not fly to Russia and then talk to a lawyer.
I actually disagree with this, but I can change my mind if you give evidence. A lawyer will be risking imprisonment themselves if they actively help you at this stage. The most likely outcome is that a lawyer you contact at this stage will neither help nor hurt you. The worst-case outcome, which is also possible IMO, is that they will rat you out because of ideological disagreement, fear, or some other reason.
Thank you for the detailed reply!
I think someone hosting this in the US has a high probability of being sued regardless of First Amendment protections. I would prefer discussing legal risks only in private with someone who is actually interested in hosting.
I made up the category names to point out specifically that leaking classified documents, as opposed to just a summary in your own words, makes a large difference for your probability of being imprisoned.
The additional info you would like me to add to the UI to make it more presentable makes sense!
I’m thinking of moving on to another project right now, but I could get back to this in the future, especially if this project gets more funding or an institutional home.
Can you give an example in the real world? (I’d prefer historical examples if you don’t wanna be too controversial.) Both your comments are abstract, so I’m unclear what you have in mind.