Most of my posts and comments are about AI and alignment. Posts I’m most proud of, which also provide a good introduction to my worldview:
Without a trajectory change, the development of AGI is likely to go badly
Steering systems, and a follow-up on corrigibility.
I also created Forum Karma, and wrote a longer self-introduction here.
PMs and private feedback are always welcome.
NOTE: I am not Max Harms, author of Crystal Society. I’d prefer for now that my LW postings not be attached to my full name when people Google me for other reasons, but you can PM me here or on Discord (m4xed) if you want to know who I am.
Buy it as what? You mentioned data-dependent generalization, I thought, because you were using it as an example of / reason why alien LLMs would be different. I pointed out in response that a lot of the data is actually the same, in some sense: someone (or some LLM) who studies chemistry in English will be able to predict the effects of mixing baking soda and vinegar anywhere. Maybe before you get to full understanding, you can get various Sapir-Whorf-like effects based on what language you’re working in (e.g. perhaps LLMs learn chemistry more quickly and accurately in French, or something), but so what? Eventually, with enough scale, they all saturate your evals, regardless of what language they initially learned in. My point is that the curriculum and format of the data in an alien LLM corpus is at least arguably more similar to a human LLM corpus than either dataset is to the format and curriculum of the respective human and alien natural growth processes.
Not really. The thing I was thinking of, and maybe mis-remembering or mis-applying, was translation between pairs of languages for which there are few or no direct human translations. IIUC, for most of the language pairs on Google Translate for which this is true, Google Translate used to work by translating Language A <-> English <-> Language B, and this didn’t work very well. Nowadays Google Translate uses some kind of LLM, and it apparently works much better. I hypothesize that LLMs’ facility with translation would extend to alien languages as well, given how much LLMs have improved machine translation and how good LLMs are at deciphering codes and patterns in text generally. But I concede that’s not the same as definitely already being “extremely superhuman” at it, which was what I said in the grandparent.