(I say this as someone who has already put a lot of their money where their mouth is.)
Greg C
Interesting. I really hope that some of them do something, soon. Time is fast running out. There’s no point being a rich philanthropist (or rich, or a philanthropist) if the world gets destroyed before you deploy your resources.
Thanks, that’s good to hear. What form does the pledge take? Do you have a DAF that contains half your shares? When do you think the next liquidation opportunity might be? (I guess you weren’t eligible for the one in May[1]?)
- ^ I’m disappointed that no one (EA-ish or otherwise) seems to have done anything interesting with that liquidation opportunity.
Have you donated any of your equity yet? If not, why not?
Shared on the EA Forum, with some commentary on the state of the EA Community (I guess the LessWrong rationality community is somewhat similar?)
In practice, bans can be lifted, so “never” is never going to become an unassailable law of the universe. And right now, it seems misguided to quibble over “Pause for 5, 10, 20 years” versus “Stop for good”, given the urgency of the extinction threat we are currently facing. If we’re going to survive the next decade with any degree of certainty, we need an alliance between B1 and B2, and I’m happy for one to exist.
Re “invest in AI and spend the proceeds on AI safety”: another consideration, besides the ethical (/FDT) concerns, is liquidity. Have you managed to pull out any profits from Anthropic yet? If not, how likely do you think it is that you will be able to[1] before the singularity/doom?
- ^ Maybe this would require an IPO?
One problem is that this assumption of the ASI society being mostly structured as well-defined persistent individuals with long-term interests is questionable
Very questionable. Why would it be separate individuals in a society, and not be—or just very rapidly collapse into—a singleton? In fact, the dominant narrative here on LW has always featured a singleton ASI as the main (existential) threat. And my story here reflects that.
being able to discover new laws of nature and to exploit the consequences of that.
Ok, but I think that still basically leads to the end result of all humans (and biological life) dead.
It seems odd to think that it’s more likely such a discovery would lead to the AI disappearing into its own universe (like in Egan’s Crystal Nights) than just obliterating our Solar System with its newfound powers. Nothing analogous has happened in the history of human science and tech development (we have only become more destructive of other species and their habitats).
then it would be better to use an example not directly aimed against “our atoms”
All the atoms are getting repurposed at once, no special focus on those in our bodies (but there is in the story, to get the reader to empathise). Maybe I could’ve included more description of non-alive things getting destroyed.
mucking with quantum gravity too recklessly, or smth in that spirit
I’m trying to focus on plausible science/tech here.
they need to do experiments in forming hybrid consciousness with humans to crack the mystery of human subjectivity, to experience that first-hand for themselves, and to decide whether that is of any value to them based on the first-hand empirical material (losing that option without looking is a huge loss)
Interesting. But even if they do find something valuable in doing that, there’s not much to keep the vast majority of humans around. And as you say, they could just end up as “scans”, with very few being run as oracles.
Where does my writing suggest that it’s a “power play” and “us vs them”? (That was not the intention at all! I’ve always seen indifference, and “collateral damage” as the biggest part of ASI x-risk.)
as we know, compute is not everything, algorithmic improvement is even more important
It should go without saying that it would also be continually improving its algorithms. But maybe I should’ve made that explicit.
the action the ASI is taking in the OP is very suboptimal and deprives it of all kinds of options
What are some examples of these options?
They don’t have a choice in the matter—it’s forced by the government (nationalisation). This kind of thing has happened before in wartime (without the companies or people involved staging a rebellion).
On one hand, it’s not clear if a system needs to be all that super-smart to design a devastating attack of this kind...
Good point, but—and as per your second point too—this isn’t an “attack”, it’s “go[ing] straight for execution on its primary instrumental goal of maximally increasing its compute scaling” (i.e. humanity and biological life dying is just collateral damage).
probably would not want to irreversibly destroy important information without good reasons
Maybe it doesn’t consider the lives of individual organisms as “important information”? But if it did, it might do something like scan as it destroys, to retain the information content.
Are you saying they are suicidal?
LessWrong:
A post about all the reasons AGI will kill us: No. 1 all-time highest karma (827 on 467 votes; +1.77 karma/vote)
A post about containment strategy for AGI: 7th all-time highest karma (609 on 308 votes; +1.98 karma/vote)
A post about us all basically being 100% dead from AGI: 52nd all-time highest karma (334 on 343 votes; +0.97 karma/vote, a bit more controversial)
Also LessWrong:
A post about actually doing something about containing the threat from AGI and not dying [this one]: downvoted to oblivion (-5 karma within an hour; currently 13 karma on 24 votes; +0.54 karma/vote)
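(For anyone checking my arithmetic, here’s a minimal sketch of the karma-per-vote calculation; the labels are just my shorthand for the posts above, and the figures are as quoted at the time of writing:)

```python
# Karma-per-vote for the posts cited above.
posts = [
    ("all the reasons AGI will kill us", 827, 467),
    ("containment strategy for AGI", 609, 308),
    ("basically 100% dead from AGI", 334, 343),
    ("this post", 13, 24),
]

for label, karma, votes in posts:
    print(f"{label}: {karma / votes:+.2f} karma/vote")
```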
My read: y’all are so allergic to anything considered remotely political (even though this should really not be a matter of polarisation—it’s about survival above all else!) that you’d rather just lie down and be paperclipped than actually do anything to prevent it happening. I’m done.
From the Abstract:
Rather than targeting state-of-the-art performance, our objective is to highlight GPT-4’s potential
They weren’t aiming for SOTA! What happens when they do?
The way I see the above post (and its accompaniment) is knocking down all the soldiers that I’ve encountered talking to lots of people about this over the last few weeks. I would appreciate it if you could stand them back up (because I’m really trying not to be so doomy, and not getting any satisfactory rebuttals).
Thanks for writing out your thoughts in some detail here. What I’m trying to say is that things are already really bad. Industry self-regulation has failed. At some point you have to give up on hoping that the fossil fuel industry (AI/ML industry) will do anything more to fix climate change (AGI x-risk) than mere greenwashing (safetywashing). How much worse does it need to get for more people to realise this?
The Alignment community (climate scientists) can keep doing their thing; I’m very much in favour of that. But there is also now an AI Notkilleveryoneism (climate action) movement. We are raising the damn Fire Alarm.
From the post you link: “some authority somewhere will take notice and come to the rescue.”
Who is that authority?
The United Nations Security Council. Anything less and we’re toast.
And we can talk all we like about the unilateralist’s curse, but I don’t think anything a bunch of activists can do will ever top the formation and corruption-to-profit-seeking of OpenAI and Anthropic (the supposedly high status moves).
It’s really not intended as a gish gallop, sorry if you are seeing it as such. I feel like I’m really only making 3 arguments:
1. AGI is near
2. Alignment isn’t ready (and therefore P(doom|AGI) is high)
3. AGI is dangerous
And then drawing the conclusion from all these that we need a global AGI moratorium asap.
Good to hear. Look forward to seeing the results!