I’d love to explain but sadly I was just told not to
FVelde
Estimating the probability of the moonrise problem is impossible, and system failure is world-ending. The US and China recently agreed that humans, not AI, should control nuclear weapons, and I sleep better at night for it. Under what circumstances would you choose a machine over humans for this?
https://www.reuters.com/world/biden-xi-agreed-that-humans-not-ai-should-control-nuclear-weapons-white-house-2024-11-16/
Utopians are on their way to end life on earth because they don’t understand that iterative x-risk leads to x.
What do you think is realistic if alignment is possible? Would the large corporations make a loving machine, or a machine aligned with money and themselves?
Did you use EFA to conclude that EFA is the worst, common bad argument?
How would this work with European airlines, or airlines from countries where credit card payments are much less common?
What if you’re wrong?
The effect is hard, if not impossible, to determine, but the Netherlands has one of the lowest unemployment rates in Europe.
https://en.m.wikipedia.org/wiki/List_of_sovereign_states_in_Europe_by_unemployment_rate
Guido has already protested repeatedly in front of OA and has even been arrested multiple times. I don’t know his exact reasons for choosing Anthropic now, but spreading the protests across the different actors makes sense to me.
People also asked the same kind of ‘why not …’ question when he and others repeatedly protested OA. In the end whatever reasons there may be to go somewhere else, you can only be in one place.
FVelde’s Shortform
There is now one hunger striker in front of Anthropic and two in front of Google DeepMind.
https://x.com/DSheremet_/status/1964749851490406546
Could you give the source(s) of these anonymous surveys of engineers with insider knowledge about the arrival of AGI? I would be interested in seeing them.
Good observations. The more general problem is modeling. Models break, and ‘hope for the best, expect the worst’ generally works better than any model. What matters is how screwed you are when your model fails, not how close to reality the model is. In the case of AI, the models break at a really, really important place. The same was true for the models that preceded economic crises. One can go through life without modeling but prepared for the worst, but not the other way around.
I fail to see how that’s an argument. It doesn’t seem to me a reason not to cull now, only perhaps not to advocate for it, and even with that I would disagree. Can you explain your reasoning?
This is great.
Since you already anticipate the dangerous takeoff that is coming, and we are unsure whether we will notice it and be able to act in time: why not cull now?
I get that part of the point is slowing down the takeoff, and that culling now does not achieve that.
But what if March 2027 is too late? What if getting proto-AGIs to do AI R&D only requires minor extra training or unhobbling?
I’d trust a plan that relies on massively slowing down AI now far more than one that relies on a slowdown still being on time later.
I can attest that, for me, talking about AI dangers in an ashamed way has rarely if ever prompted a positive response. I’ve noticed, and been told, that it gives ‘intellectual smartass’ vibes rather than ‘concerned person’ vibes.
A lot of this seems to be pointing to ‘love’.
The more sacrifices someone has made, the easier it is to believe that they mean what they say.
Kokotajlo gave up millions to be able to say what he wants, so I trust he is earnest. People arrested at Stop AI protests have spent time in jail for their beliefs, so I trust they are earnest.
That doesn’t mean these people are the most useful for AI safety, but on the subject of trust I know no better measure than sacrifice.
Stop AI discovered that the bill has been ‘amended’ so that it is now entirely about aircraft liens. See the following post.
https://x.com/StopAI_Info/status/1908914795308573108
The open letter now includes a postscript expressing their shock at the change.
Could you show some examples and/or say how you come up with a comment that gets a lot of likes?