I mostly agree with the points written here. It's actually (Section A; Point 1) that I'd like more clarification on:
AGI will not be upper-bounded by human ability or human learning speed. Things much smarter than human would be able to learn from less evidence than humans require to have ideas driven into their brains
When we have AGI working on hard research problems, this sounds akin to decades of human-level research being compressed into a few days, or perhaps even less. That may be possible, but often the bottleneck is not the theoretical framework or the proposed hypothesis, but waiting for experimental evidence. If we say that an AGI will be a more rational agent than humans, shouldn't we expect it to accumulate more experimental evidence to test a theory — to estimate, for example, the expected utility of pursuing a novel course of action?
I think there would still be some constraints on this process. Humans often wait until enough experimental evidence has accumulated to validate a theory (the Large Hadron Collider, the photoelectric effect, etc.). We need to observe nature to confirm that the theory holds, or fails, exactly in the scenarios where we expect it to. To accumulate such evidence, we might need to build new instruments that gather new types of data, so the theory can be validated against a now-larger set of observations. Sometimes that process takes years. Just because an AGI will be smarter than humans, can we say that it will make proportionately faster breakthroughs in research?
From what I've seen so far, Imagen is more "straightforward" and does a better job of generating an image that matches the text than DALL·E 2. But DALL·E 2 seems to produce prettier images (which makes sense given it was fine-tuned for aesthetics).
There's a GitHub repo up already, so I hope we'll be able to try an open-source version and test it on the same prompts as DALL·E 2.