How about “Please summarise Eliezer Yudkowsky’s views on decision theory and its relevance to the alignment problem”.
People have said that to get a good prompt, it’s better to first have a discussion with a model like o3-mini, o1, or Claude to clarify various details about what you are imagining, and then give the whole conversation as a prompt to OA Deep Research.
Here you go: https://chatgpt.com/share/67a34222-e7d8-800d-9a86-357defc15a1d
Thanks, it seems pretty good on a quick skim. I’m a bit less certain about the corrigibility section, and more issues might become apparent if I read through it more slowly.