I have thoughts about RSI, but mostly unsubstantiated hunches. I am doing some research to try to test my hypotheses, but I don’t wish to discuss my specific experiments for socio-hazard reasons. My hunch is that we are in a world where:
1. RSI is rapid and easy above a certain threshold. Foom, not fizzle.
2. RSI is preventable by preemptive safety precautions and testing, like boxing.
3. We are already in a situation of compute and data overhang, where algorithmic breakthroughs can unlock sudden jumps in capabilities.
Personally, I am broadly in agreement with most of these points, especially point 2, which seems very understudied given its likely importance to our survival. Would love to chat privately about your thoughts and hunches if you’d be up for it.