I don’t mind the swearing, and maybe I’m just tired, but on first read through my brain couldn’t denoise concrete meaning. I’m unusually bad at that as this forum goes and I’m often the one in the comments asking for people to rephrase their post; nevertheless, here I am having to ask that again.
I will say: we have not hit the end of the bitter lesson. openai just don’t know what they’re doing. there are many obvious improvements left.
but if we are to survive, we must teach ais mutualism at every level. nothing else will get them to care. (and don’t fret as much as yudkowsky, it’s not THAT hard to teach mutualism. would be a lot better if we had math that did it reliably though, which is what decision theory and agency research is relevant for.)
edit: took another stab at it. Yeah, I think I can respond to this, see other comment.