> Sometimes, even my sense of normality shatters, and I start to think about things that you shouldn’t think about. It doesn’t help, but sometimes you think about these things anyway.
>
> I stared out the window at the fragile sky and delicate ground and flimsy buildings full of irreplaceable people, and in my imagination, there was a grey curtain sweeping across the world. People saw it coming, and screamed; mothers clutched their children and children clutched at their mothers; and then the grey washed across them and they just weren’t there any more. The grey curtain swept over my house, my mother and my father and my little sister -
>
> Koizumi’s hand rested on my shoulder and I jerked. Sweat had soaked the back of my shirt.
>
> “Kyon,” he said firmly. “Trying to visualize the full reality of the situation is not a good technique when dealing with Suzumiya-san.”
>
> How do you handle it, Koizumi!
>
> “I’m not sure I can put it in words,” said Koizumi. “From the first day I understood my situation, I instinctively knew that to think ‘I am responsible for the whole world’ is only self-indulgence even if it’s true. Trying to self-consciously maintain an air of abnormality will only reduce my mind’s ability to cope.”
Also: I agree that people who want to do alignment research should just go ahead and do alignment research, without worrying about credentials or whether or not they’re smart enough. On a problem as wickedly difficult as alignment, it’s more important to be able to think of even a single actually-promising new approach than to be very intelligent and know lots of math. (Though even people who don’t feel they’re suited for thinking up new approaches can still work on the problem by joining an existing approach.)
The linked post talks about the large value of buying even six months of time, but six months ago it was May 2022. What has been accomplished in AI alignment since then? I think we urgently need to figure out how to make real progress on this problem, and it would be a tragedy if we turned away people who were genuinely enthusiastic about doing that for reasons of “efficiency”. Allocating people to the tasks for which they have the most enthusiasm is efficient.
(The quote at the top of this comment is from Eliezer’s short fanfic Trust in God, or, The Riddle of Kyon; you may find it interesting.)