You are correct: the idea of accelerating alignment is worth taking into account, and I consider it the valuable line of thought in her thread. However, I can't find a logical or rational explanation for it in her comment.
Do you think it would be a good idea to edit the text and add, at the end of the motivation section, some robust reasons why responsibly accelerating alignment might enrich the mental model?
Do you mean Grimes? You mention her twice but don’t explain.
After lesson 2 and this week's wave of AI releases:
Alignment should scale linearly with AI.
Maybe AI should not be scaling as exponentially as it currently is.