DMs open but I may be inactive. Contact is on my website.
I am looking for a cofounder to cyberattack the US govt and US AI companies
Thanks this comment is useful!
single aperture of ~500m
maintaining satellite relative positions to within a wavelength of light
There’s no law of physics that prevents humanity from building either of these things. I’m just pessimistic about the engineering advancing to the point that we can build this in the next 10 years. (Without the help of superhuman intelligence, that is.)
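For context, a back-of-the-envelope diffraction check shows why apertures on the scale of hundreds of metres come up at all. This is a sketch under my own illustrative assumptions (visible light at ~550 nm, the Rayleigh criterion, and ~1 cm ground resolution as a rough proxy for facial recognition):

```python
import math

def rayleigh_aperture(resolution_m: float, distance_m: float,
                      wavelength_m: float = 550e-9) -> float:
    """Minimum aperture diameter (m) needed to resolve features of size
    `resolution_m` at `distance_m`, via the Rayleigh criterion
    theta ~= 1.22 * wavelength / D."""
    theta = resolution_m / distance_m        # required angular resolution (rad)
    return 1.22 * wavelength_m / theta

# ~1 cm features from 10,000 km away:
print(f"{rayleigh_aperture(0.01, 1e7):.0f} m")   # prints "671 m"
```

That lands in the same ballpark as the ~500 m aperture figure quoted above, which is why the engineering (a single huge mirror, or satellites phase-locked to within a wavelength) is the hard part rather than the physics.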
I am still not clear where we are disagreeing, sorry.
What do you think is the bottleneck to building a petapixel camera that lets you do facial recognition from outside national borders? I don’t think you can simply stitch a bunch of gigapixel cameras together and achieve this.
Okay. I think what I want is feedback on tactics, not strategy. I don’t want to debate why an AI pause political movement is required, yet again. I don’t want to debate why the US intelligence community will accelerate development of ASI, yet again. I can debate details of how gigapixel cameras work.
And no, I don’t have time for an actual “proof”; I’m just going to make a guess. Which could involve me projecting based on a reference class of similar comments or similar people commenting.
I understand all this. I am assuming a constant angular field of view; let’s just assume 120 degrees for now. I am assuming a single photo from a single camera placed at a single location, covering 10^15 pixels. I am not talking about multiple photos stitched together, or moving the camera around, and so on.
(Yes, the camera will necessarily have multiple sensor arrays and internally stitch the data together anyway)
And yes, a petapixel camera with a 120-degree (or some other large) field of view could cover the United States at roughly 10 cm resolution.
I am not sure if we are actually disagreeing.
I am saying someone should be able to place a camera outside US borders and yet be able to do facial recognition of people inside from thousands of kilometres away.
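As a rough sanity check on what 10^15 pixels buys over a US-sized area (the land-area figure below is my own approximation, and this ignores viewing-angle distortion):

```python
import math

PIXELS = 1e15          # one petapixel
US_AREA_M2 = 9.8e12    # total US land area, ~9.8 million km^2

# Side length of the ground patch each pixel covers, assuming pixels
# are spread evenly over the whole area:
ground_per_pixel = math.sqrt(US_AREA_M2 / PIXELS)
print(f"{ground_per_pixel * 100:.0f} cm per pixel")   # prints "10 cm per pixel"
```

~10 cm per pixel is marginal for reliable facial recognition, but it illustrates the order of magnitude a single petapixel frame gets you.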
I listed the plan above in short. Friendships are the biggest conflict of interest. If you are not willing to distance yourself from people building ASI, you are unlikely to pursue actually effective plans for stopping ASI development.
If you want a longer version, here it is: Support the movement
Can you give an example in the real world? (Prefer historical examples if you don’t want to be too controversial.) Both your comments are abstract, so I’m unclear what you have in mind.
LessWrong core members’ unwillingness to engage in conflict is directly leading to the end of the world.
By conflict I mean publicly humiliating developers at these companies (including Anthropic), cutting them out of your social circle, organising protests outside their offices, and running for election with an anti-AI-company message.
I am willing to go further by supporting whistleblowers and cyberattackers against AI companies. But the above is the minimum to become my ally.
In some hypothetical game-theory puzzle, sure. In the real world it does necessitate it, with >95% probability.
And here we are talking about positive sum stuff like growing a business.
The Pause AI movement is explicitly a zero-sum political battle.
Positive sum games still involve a lot of zero sum moves! Just because the pie is growing doesn’t mean it doesn’t matter who gets more of the pie. If you are a company CEO in a growing industry, you will end up taking adversarial moves against lots of people. You will sue people, you will fire your employees, you will take away profit from your competitors if you succeed, and so on.
The situation is fundamentally adversarial. People want different things and are willing to go to extreme lengths to get it.
I think my statement is true of basically every major political or economic change in human history.
It’s kinda complicated; I can’t answer a blanket yes or no. There are hypothetical situations where I might advocate such a plan, yes.
Also I want more info on how this connects to my comment.
I am fundamentally suspicious of any plan to solve AI risk where everyone is better off at the end. Unless you can pinpoint who is suffering as a result of your plan succeeding, I am unlikely to take your plan too seriously.
Bring on the downvotes!
In practice, the way this problem is often solved nowadays is to find third-party internet forums where people can leave honest reviews that can’t be censored easily—such as Google Maps reviews, Reddit reviews, Glassdoor job reviews, and so on.
Google and Reddit can’t be trusted to be censorship-free either, but the instances of censorship there are often various govts (China, US, Russia etc) demanding censorship, as opposed to your ice cream seller demanding censorship.
Mass violence, especially collusion to apply violence between various parties (govts, religious institutions, families), is what makes information really censored, to the point where entire populations can be repressed and then made to love their repressors.
I think censor-resistant social media platforms are an important piece to solve this. I think leaking the secrets of powerful actors who use violence to censor others, is another important piece to solve this.
A non-trivial fraction of my life philosophy is oriented around avoiding environments that force me into paranoia and incentivizing as little paranoia as possible in the people around me.
Makes sense! My personal preference is to openly declare who my enemies are, and openly take actions that will cause them to suffer. I’m much less keen on the cloak-and-dagger strategy that is required to make someone paranoid and then exploit said paranoia. Because I tend to openly declare who my enemies are, people who are not openly declared as my enemies can find it relatively easier to trust or at least tolerate me in their circles.
I think fundamentally the world is held together by threats of mass violence, be it threats of nuclear war at a geopolitical level, or threats of mass revolt by armed citizens at a domestic level. Hence I think trying to avoid all conflict is bad—often conflict theory is the right approach and mistake theory is the wrong approach.
I support more people on lesswrong writing about how best to fight conflicts and win, rather than on how to avoid conflicts entirely.
P.S. If you liked this comment you should check out my website, a lot of my writing focusses explicitly on topics like this one.
Yes, I updated it to “MtG colour wheel applied to politics”
This is technically true. But yes, if you had the tech to build this, it would also become trivial to build a petapixel camera (for someone who can afford it). The hard part is doing 0.1 metre resolution from 10,000 kilometres away.
Thanks for this exchange btw, I guess in future I could be more precise.
Why?
Assume we had the tech to manufacture petapixel cameras, and individuals worldwide could purchase them (i.e. a govt couldn’t just lock down the supply chain). Why does this not eventually lead to a world with zero privacy for everyone?