AFAIK, only Gwern and I have written concrete stories speculating about how a training run will develop cognition within the AGI.
This worries me, if true (if not, please reply with more!). I think it would be awesome to have more concrete stories![1] If Nate, or Evan, or John, or Paul, or—anyone, please, anyone, add more concrete detail to this website!—wrote out one of their guesses of how AGI development goes, I would understand their ideas and viewpoints better. I could go “Oh, that’s where the claimed sharp left turn is supposed to occur.” Or “That’s how Paul imagines IDA being implemented, and that’s the particular way in which he thinks it will help.”
Even if scrubbed of any AGI-capabilities-advancing sociohazardous detail. Although I’m not that convinced this is a big deal for conceptual content written on the AF. Lots of people probably have theories of how AGI will go; implementation is, I have heard, the bottleneck.
Contrast this with beating SOTA on crisply defined datasets in a way that enables ML authors to gain prestige, publications, attention, and funding by building off your work. These seem like different beasts.
Maybe a contest would help?