And you just confirmed in your prior comment that “sufficient capabilities are tied to compute and parameters”.
I am having trouble interpreting that in a way that does not approximately mean “alignment will inevitably happen automatically when we scale up”.
Sorry, this is another case where I play with language a bit: I view “parametrisation of an intelligent system” as a broad statement that includes architecting it in different ways. For example, some recent, more capable models use a snapshot with fewer parameters than earlier snapshots; in that case, “parametrisation” for me is a process that includes both summing the literal model parameters across the whole process and engineering novel architecture.
Perhaps if you could give me an idea of the high-level implications of your framework, that might give me a better context for interpreting your intent. What does it entail? What actions does it advocate for?
At a high level, I’m sharing things that I derive from my world-model for humans + superintelligence. I’m advocating for exploration of these topics, and discussing how this is changing my approach to understanding which AI alignment efforts I think hold the most promise.
If this “playing with language” is merely a stylistic choice, I would personally prefer you not intentionally redefine words with known meanings to mean something else. If this was instead due to the challenges of compressing complex ideas into fewer words, I can definitely relate to that challenge. But either way, I think your use of “parameters” in that way is confusing and undermines the reader’s ability to interpret your ideas accurately and efficiently.