What do you think is realistic if alignment is possible? Would the large corporations make a loving machine or a money-and-them-aligned machine?
Did you use EFA to conclude that EFA is the worst common bad argument?
How would this work with European airlines, or with airlines from countries where credit card payments are much less common?
What if you’re wrong?
The effect is hard, if not impossible, to determine, but the Netherlands has one of the lowest unemployment rates in Europe.
https://en.m.wikipedia.org/wiki/List_of_sovereign_states_in_Europe_by_unemployment_rate
Guido has already protested repeatedly and has even been arrested multiple times in front of OA. I don’t know his exact reasons for choosing Anthropic now, but spreading the protests over the different actors makes sense to me.
People also asked the same kind of ‘why not …’ question when he and others repeatedly protested OA. In the end, whatever reasons there may be to go somewhere else, you can only be in one place.
FVelde’s Shortform
There is now one hunger striker in front of Anthropic and two in front of Google DeepMind.
https://x.com/DSheremet_/status/1964749851490406546
Could you give the source(s) of these anonymous surveys of engineers with insider knowledge about the arrival of AGI? I would be interested in seeing them.
Good observations. The more general problem is modeling. Models break, and ‘hope for the best, expect the worst’ generally works better than any model. What matters is how screwed you are when your model fails, not how close to reality the model is. In the case of AI, the models break at a really, really important place. The same was true for the models that predated economic crises. One can go through life without modeling but with preparing for the worst, but not the other way around.
I fail to see how that’s an argument. It doesn’t seem to me a reason not to cull now, only perhaps a reason not to advocate for it, and even with that I would disagree. Can you explain yourself?
This is great.
Since you already anticipate the dangerous takeoff that is coming, and we are unsure whether we will notice it and be able to act in time: why not cull now?
I get that part of the point is slowing down the takeoff, and that culling now does not have that effect.
But what if March 2027 is too late? What if getting proto-AGIs to do AI R&D only requires minor extra training or unhobbling?
I’d trust a plan that relies on massively slowing down AI now far more than one that relies on a later slowdown still being in time.
I can attest that, for me, talking about AI dangers in an ashamed way has rarely if ever prompted a positive response. I’ve noticed, and been told, that it gives ‘intellectual smartass’ vibes rather than ‘concerned person’ vibes.
A lot of this seems to be pointing to ‘love’.
The more sacrifices someone has made, the easier it is to believe that they mean what they say.
Kokotajlo gave up millions to be able to say what he wants, so I trust that he is earnest. People who have gotten arrested at Stop AI protests have spent time in jail for their beliefs, so I trust that they are earnest.
It doesn’t mean these people are the most useful for AI safety, but on the subject of trust I know no better measure than sacrifice.
Stop AI discovered that the bill has been ‘amended’ to be entirely about aircraft liens now. See the following post.
https://x.com/StopAI_Info/status/1908914795308573108
The open letter now includes a postscript expressing their shock at the change.
The title, previously ‘Is taking extinction risk reasonable?’, has been changed to ‘On extinction risk over time and AI’. I appreciate the correction.
I agree that AI changes the likelihood of extinction rather than bringing a risk where there was none before. In that sense, the right question could be ‘Is increasing the probability of extinction reasonable?’.
Assuming that by the last sentence you mean that AI does not bring new extinction paths, I would like to counter that AI could well bring new paths to extinction; that is, there are probably paths to human extinction that open up when a machine intelligence surpasses human intelligence. Just as chess engines can apply strategies that humans have not thought of, some machine intelligence could find ways to wipe out humanity that have not yet been imagined. Furthermore, there might be ways to cause human extinction that can only be executed by a superior intelligence. An example could be a path that starts with hacking into many well-defended servers in short succession, at a speed that even the best group of human hackers could not match, in order to shut down a large part of the internet.
Utopians are on their way to end life on earth because they don’t understand that iterative x-risk leads to x.