Yes, agreed. But none of these are problems whose solutions require acquiring power of any sort. Acquiring political power often requires outgroups.
acylhalide
Disagreement
Some people in LessWrong and EA circles support AI safety researchers working with or inside AI companies, and policymakers working with the US government.
Example: work done by Paul Christiano.
I support social media channels hostile to AI companies and the US government, protests against AI companies and the US government, and electing politicians who are in favour of pausing AI research.
Example: work done by John Sherman, Holly Elmore
Not a crux
We both agreed that these plans are in conflict with each other: doing protests and running hostile social media channels makes it harder for AI safety researchers or policymakers to collaborate with AI companies and the US government. We had different guesses about which plan has the higher probability of working.
Crux
He said it is possible to use social media to raise public awareness of AI risk without being hostile to AI companies or naming CEOs and leaders specifically.
I said being hostile to AI companies is almost a necessary precondition to doing social media successfully.
My argument
The most popular ideas in society all have outgroups that are the enemy.
Here is empirical evidence.
https://chatgpt.com/s/t_693fc0fbb6548191bc169ad2d0f8511d
We want AI risk to become one of the most popular ideas in society. This means AI companies and the governments supporting them must become an outgroup for significant fractions of society.
See also: I can tolerate anything but the outgroup by Scott Alexander
https://slatestarcodex.com/2014/09/30/i-can-tolerate-anything-except-the-outgroup/
Can AIXItl work here?
Maintain a probability distribution over all Turing machines (with up to s states, run for at most t steps) that the opponent could possibly be, with more probability mass attached to simpler machines.
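The idea above can be sketched in code. This is my own toy illustration, not AIXItl itself: instead of bounded Turing machines, the opponent hypotheses are small deterministic finite-state machines (a proxy for the s-states bound), the prior is 2^(-number of states) as a stand-in for a simplicity prior, and the Bayesian update simply zeroes out machines inconsistent with the observed moves. All function names here are my own inventions.

```python
import itertools

def enumerate_machines(max_states, alphabet=(0, 1)):
    """Enumerate all deterministic FSMs with up to max_states states.

    A machine is (s, table): table has one (output, next_state) cell per
    (state, input) pair, where the input is our own last move. Start
    state is 0. This finite enumeration stands in for "all Turing
    machines with up to s states".
    """
    machines = []
    for s in range(1, max_states + 1):
        cells = list(itertools.product(alphabet, range(s)))
        for table in itertools.product(cells, repeat=s * len(alphabet)):
            machines.append((s, table))
    return machines

def run_machine(machine, inputs):
    """Feed our past moves to the machine; return its outputs."""
    s, table = machine
    state, outputs = 0, []
    for inp in inputs:
        out, state = table[state * 2 + inp]
        outputs.append(out)
    return outputs

def posterior(machines, my_moves, opp_moves):
    """Simplicity prior 2^(-states), zeroed on inconsistent machines,
    then normalized. This is the Bayesian update over hypotheses."""
    weights = []
    for m in machines:
        prior = 2.0 ** (-m[0])
        consistent = run_machine(m, my_moves) == opp_moves
        weights.append(prior if consistent else 0.0)
    total = sum(weights)
    return [w / total for w in weights] if total else weights

def predict(machines, weights, my_moves, next_my_move):
    """Posterior-weighted probability the opponent plays 1 next."""
    p = 0.0
    for m, w in zip(machines, weights):
        if w and run_machine(m, my_moves + [next_my_move])[-1] == 1:
            p += w
    return p
```

With max_states=2 this is only 260 hypotheses, so exact enumeration is feasible; the real AIXItl construction replaces this brute-force sweep with time- and length-bounded program search, which is what makes it (barely) computable but still intractable in practice.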