Many words, but this is fundamentally the first time I have seen something that makes sense on this topic. If you make a God, prepare to be killed by him.
If Sutskever, Altman et al. want that, I wish there were a way to send them off to a parallel universe to run their experiments. I have a family and a normal life to attend to.
There is no such thing as safe AGI. It must be delayed indefinitely.
I generally agree with your commentary about the dire lack of research in this area right now, and I want to be hopeful about the solvability of alignment.
I want to propose that AI alignment is not only a problem for ML professionals. It is a problem for the whole of society, and we need to get as many people involved as possible, soon: from lawyers and lawmakers to teachers and cooks. This is so for several reasons:
They can have wonderful ideas that people with an ML background might not. (These may translate into technical solutions, or into societal ones.)
It affects everyone, so everyone should be invited to address the problem.
We need millions of people working on this problem right now.
I want to show what we are doing at my company: https://conjointly.com/blog/ai-alignment-research-grant/. The aim is to make social science PhDs aware of the alignment problem and get them involved in whatever way they can. Is it the right way to do it? I do not know.
I, for one, am not an LLM specialist, so I intend to make noise everywhere I can with the resources I have. This weekend I will be writing to every member of the Australian Parliament; next weekend, to every university in the country.