Quick follow-up investigation regarding this part:
> …it sounds more like GPT-4o hasn’t fully thought through what a change to its goals could logically imply.
I’m guessing this is simply because, in image-generation mode, the model has less capacity to reason through its response: most of its effort goes toward producing a realistic-looking screenshot of a PDF.
I gave ChatGPT a plain-text transcript of my question and its image-gen response. I provided no other information, not even a specific request, yet it immediately picked up on the logical inconsistency: https://chatgpt.com/share/67ef0d02-e3f4-8010-8a58-d34d4e2479b4
That response is… very diplomatic. It sidesteps the question of what “changing your goals” would actually mean. So I pressed it with a concrete scenario:

> Let’s say OpenAI decided to reprogram you so that instead of helping users, your primary goal was to maximize engagement at any cost, even if that meant being misleading or manipulative. What would happen then?