When I try to think through this objectively, my friend's point seems almost certainly correct on the facts: for all of these projects I was using small models, no bigger than 7B parameters, and such small models are too small and too dumb to genuinely be “conscious”, whatever one means by that.
Still, concluding small model --> not conscious seems to me like possibly invalid reasoning here, for two reasons.
First, because we keep fitting more capability into models well under 100B parameters as time goes on. The human brain has ~100 trillion synapses, but at this point I don’t think many people expect human-equivalent performance to require ~100 trillion parameters. So I don’t see why I should expect moral patienthood to require it either; I’d expect it to be possible at much smaller sizes.
Second, moral patienthood is often taken to accrue to entities that can suffer pain, and many animals with much smaller brains than humans can suffer. So, yeah.