Some unspelled implications of this post, taking it as true for a moment:
Since humans are consequentialist-type intelligences, we should expect them to be ruthless, and we should prevent them from gaining too much power, lest they destroy everything we hold dear. (One may retort that most humans share our values, but since value formation is so fragile, they would likely end up with values incompatible with ours once they started optimizing for them in earnest.)
Developing compute-intensive, imitation-learning-based AI should be considered closer to human-brain augmentation than to ASI capability development, since it will all be “pointless” until people figure out how to develop consequentialist thinking. (One may retort that imitation-learning-based AI could serve as a base for consequentialist AI, making the latter easier to develop. But to the extent that consequentialist thinking is so much more powerful than imitation-based learning and is likely to be developed along an entirely different path than LLMs, that first factor should mostly be a rounding error. Imitation-learning-based AI might still speed up scientific discovery more broadly, bringing forward the date when consequentialist AI is invented, but that is not differentially speeding up that particular technology, except perhaps insofar as the users of such AI might be predisposed to use future LLMs differentially for this purpose.)
Just to be clear for passersby: I am fairly certain that is not how the system was prompted, and this is a joke.
The busty Padme meme is at least three years old, and it would be shocking if a model produced something so coherent from so little direction. More likely, the model was fed the individual frames to use as a basis.