Student in fundamental and applied mathematics, interested in theoretical computer science and AI alignment
Technoprogressive, biocosmist, rationalist, defensive accelerationist, longtermist
Émile Torres would be the most well-known person in that camp.
I think @rife is talking either about mutual cooperation between safety advocates and capabilities researchers, or mutual cooperation between humans and AIs.
Pause AI is clearly a central member of Camp B? And Holly signed the superintelligence petition.
If it is a concern that your tool might be symmetric between truth and bullshit, then you should probably not have made the tool in the first place.
I think one can make a stronger claim: by the Curry-Howard isomorphism, a superhuman (constructive?) mathematician would near-definitionally be a superhuman (functional?) programmer as well.
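As a toy illustration of the correspondence (a sketch in Lean 4; the names `proofK` and `mp` are mine), the very same term serves as both a proof of a proposition and a program of the corresponding type:

```lean
-- Under Curry–Howard, a constructive proof *is* a program.
-- A proof of A → B → A is exactly the K combinator (`const`):
def proofK {A B : Prop} : A → B → A :=
  fun a _ => a

-- Modus ponens is just function application:
def mp {A B : Prop} (h : A → B) (a : A) : B :=
  h a
```

Writing the proof and writing the program are literally the same activity here, which is the sense in which constructive-mathematical skill would transfer to functional programming.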
Trying to outline the cruxes:
If you think AI safety requires safety research, differential acceleration, etc., and you trust AI companies to deliver them, your best political bet will be allying with tech-industry-friendly bipartisan centrists.
If you think AI safety requires safety research, differential acceleration, etc., and you don't trust AI companies to deliver them, your best political bet will be allying with tech-friendly progressives.
If you think AI safety requires pausing or stopping all AI research as soon as possible through an international agreement, your best political bet will be allying with anti-tech progressives, as anti-tech conservatives will recoil at the "international agreement" aspect.
If you think AI safety requires pausing or stopping all AI research as soon as possible, and no international agreement is needed because every country should independently realize that AGI would kill them all, your best political bet will be allying with anti-tech people in general, whether progressive or conservative, and probably more with anti-tech conservatives if you expect them to hold more political power within AGI timelines.
He was a commenter on Overcoming Bias as @Shane_Legg, received a monetary prize from SIAI for his work, commented on SIAI's strategy on his blog, and took part in the 2010 Singularity Summit, where he and Hassabis were introduced to Thiel, who became DeepMind's first major VC funder (as recounted both by Altman in the tweet mentioned in OP and in IABIED). I'm not sure this is "being influenced by early LessWrong" so much as originating in the same memetic milieu – Shane Legg was the one who popularized the term "AGI" and wrote papers like this with Hutter, for example.
IIRC Aella and Grimes got copies in advance and AFAIK haven’t written book reviews (at least not in the sense Scott or the press did).
Your scenario above was that most of the 8 billion people in the world would come to believe with high likelihood that ASI would cause human extinction. I think it's very reasonable to believe that this would make it considerably easier to coordinate on making alternatives to MAGMA products more usable, as network effects and economies of scale are largely the bottleneck here.
Is insider trading allowed on Manifold?
As a reality check, "any company which funds research into AGI" here would mean all the big tech companies (MAGMA). Many more people use those products than personally know AGI developers. It is a much easier ask to switch to a different browser/search engine/operating system, install an ad blocker, etc., than to ask for social ostracism. Those companies' revenues collapsing would end the AI race overnight, while AGI developers keeping a social circle of techno-optimists only wouldn't.
Ok, let’s say we get most of the 8 billion people in the world to ‘come to an accurate understanding of the risks associated with AI’, such as the high likelihood that ASI would cause human extinction.
Then, what should those people actually do with that knowledge?
Boycotting any company which funds research into AGI would be (at least in this scenario) both more effective and, for the vast majority of people, more tractable than "ostracizing" people whose social environment is largely dominated by… other AI developers and like-minded SV techno-optimists.
I think it's the combination of a temporal axis and a (for lack of a better term) physical violence axis.
I don't think the point of hunger strikes is to achieve immediate material goals, but rather publicity/symbolic ones.
On the flip side, the OpenAI foundation now has the occasion to do the funniest thing.