That’s interesting, looking forward to hearing about that paper. Does this “new approach” use the CoT, or some other means?
Thanks for the clarification on your intended meaning. For my personal taste, I would prefer that you take more care that your language does not appear to deny real complexities or to assert guaranteed success.
For instance, the conditional you state is:
> IF we give a sufficiently capable intelligent system access to an extensive, comprehensive corpus of knowledge
> THEN two interesting things will happen
And you just confirmed in your prior comment that “sufficient capabilities are tied to compute and parameters”.
I am having trouble interpreting that in a way that does not approximately mean “alignment will inevitably happen automatically when we scale up”.
Perhaps if you could give me an idea of the high-level implications of your framework, that might give me a better context for interpreting your intent. What does it entail? What actions does it advocate for?
> And you just confirmed in your prior comment that “sufficient capabilities are tied to compute and parameters”.
> I am having trouble interpreting that in a way that does not approximately mean “alignment will inevitably happen automatically when we scale up”.
Sorry, this is another case where I play with language a bit: I view “parametrisation of an intelligent system” as a broad statement that includes architecting it in different ways. For example, some recent, more capable models use a snapshot with fewer parameters than earlier snapshots; in that case, “parametrisation” for me is a process that includes both summing the literal model parameters across the whole process and engineering novel architecture.
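To make that usage concrete, here is a minimal sketch, in Python, of “parametrisation as a process”; all snapshot names, parameter counts, and architecture notes below are hypothetical placeholders, not taken from any real model:

```python
# Minimal sketch of "parametrisation as a process": successive model
# snapshots, each with a literal parameter count and an architectural
# change made along the way. All values are hypothetical.

snapshots = [
    # (snapshot name, literal parameter count, architectural change)
    ("v1", 70_000_000_000, "baseline dense transformer"),
    ("v2", 40_000_000_000, "mixture-of-experts routing added"),
    ("v3", 25_000_000_000, "distilled from v2"),
]

# Summing the literal parameters across the whole process, as described
# above; note the later, more capable snapshot has fewer parameters on
# its own than the earlier ones.
total_parameters = sum(count for _, count, _ in snapshots)

print(f"parameters across the whole process: {total_parameters:,}")
for name, count, change in snapshots:
    print(f"{name}: {count:,} parameters ({change})")
```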
> Perhaps if you could give me an idea of the high-level implications of your framework, that might give me a better context for interpreting your intent. What does it entail? What actions does it advocate for?
At a high level, I’m sharing things I derive from my world-model for humans + superintelligence. I’m advocating for exploration of these topics and discussing how this is changing my approach to understanding which AI alignment efforts I think hold the most promise.
If this “playing with language” is merely a stylistic choice, I would personally prefer you not intentionally redefine words with known meanings to mean something else. If this was instead due to the challenges of compressing complex ideas into fewer words, I can definitely relate to that challenge. But either way, I think your use of “parameters” in that way is confusing and undermines the reader’s ability to interpret your ideas accurately and efficiently.