>AGI discussed in e.g. superintelligence
The fact that this is how you’re clarifying it shows my point. While I’ve heard people talking about “AGI” (meaning weak ASI) having significant impacts, it’s seldom discussed as leading to x-risks. To give maybe the most famous recent example, AI 2027 ascribes the drastic changes and extinction specifically to ASI. Do you have examples in mind of credible people specifically talking about x-risks from AGI? Bostrom’s book refers to superintelligence, not general intelligence, as the superhuman AI.
Also, I don’t find the argument that we should keep using muddled terms in order to avoid changing their meaning very compelling, at least when it’s easy to clarify meanings where necessary.
-Thanks.
-For a bunch of reasons that I’ll explain over DM if you want (among other things, having similar games that certainly haven’t been exposed to the training process and not seeing models perform anomalously worse at those, but also particulars about which parts of the law are easy for models to figure out), I’m not that worried about what I had in the post causing contamination issues. But out of an abundance of caution (and a desire to make the post spoil Starburst less for people), I edited it to remove details about the law.
-The 4- and 2-hour physics Starburst games are not spoiled by this post (other than slightly by hearing about how well the models did). Nor are the nearly finished chemistry Starburst or the fiendish technology Starburst. You’re welcome to try any of those if you want. As for future readers, talking about human and AI performance is a sort of soft spoiler, but it’s much better than it was.
-Those look interesting. Will take a look.