Agree. I also suggested ‘philosophical landmines’—secret questions posted on the Internet that may halt any advanced AI that tries to solve them. Solving such a landmine may be required to access resources that a rogue AI would need. Real examples of such landmines should be kept secret, but one might resemble “what is the meaning of life?” or some Pascal’s mugging calculation.
Recently, I asked Sonnet a question whose correct answer was to output an error message.
I read some of your posts and I like your philosophical-landmines idea (and your other ideas too). You’ve definitely done a lot of research! I’m thinking in similar directions—we should talk more sometime.
(By the way, I was writing a reply to your comment, but then turned my reply into this quicktake.)