President of Wisconsin AI Safety Initiative
ben hayum (Ben Hayum)
Karma: 73
Risks from GPT-4 Byproduct of Recursively Optimizing AIs
I can’t convince you to continue or stop, but maybe reading the edit I made to the start of the post will better clarify the risks for you.
Ultimately I think this leads to the necessity of very strong global monitoring, including breaking all encryption, to prevent hostile AGI behavior.
This may be the case but I think there are other possible solutions and propose some early ideas of what they might look like in: https://www.lesswrong.com/posts/5nfHFRC4RZ6S2zQyb/risks-from-gpt-4-byproduct-of-recursively-optimizing-ais
I agree, it isn’t the worst heuristic ever. But at the end of the day it is still a heuristic.
Yes, you’re right. I tried to phrase things kindly but perhaps I didn’t go far enough. Made some edits :)