Regarding the “Explicitly Disagrees” section, I’d worry more about people like Nintil than Scholar’s Stage.
Scholar’s Stage clearly took a face-value approach, grounded in the classic Bayesian point that when someone is trying to get your money, it doesn’t matter how complicated or convoluted their strategy seems: you’re more likely to encounter winning strategies than losing ones, because winning strategies are the ones grifters copy. That problem is actually less solvable than it sounds, but it was nonetheless solved by the widespread drawdown of earning to give. Fortunately, those dark days are over.
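To make that selection effect concrete (a sketch of my own, not Scholar’s Stage’s formalism), the odds form of Bayes’ rule says:

$$\frac{P(\text{grift} \mid \text{convincing})}{P(\text{honest} \mid \text{convincing})} = \frac{P(\text{convincing} \mid \text{grift})}{P(\text{convincing} \mid \text{honest})} \cdot \frac{P(\text{grift})}{P(\text{honest})}$$

Because winning pitches get copied across the grifter population while losing ones die out, $P(\text{convincing} \mid \text{grift})$ gets pushed toward 1, so even a convoluted-looking pitch should barely update you toward honesty.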
Nintil, on the other hand, worries me greatly. The AI industry has massive vested interests behind it, along with public support or at least public ambivalence, and those interests are theoretically capable of flicking a switch and stomping on AI risk via counterargument DDoS-ing (or of gradually ratcheting up such systems whenever they need to keep AI-risk concern below some acceptable threshold).
Counterarguments that are refutable, but not quickly or conveniently so, can become a far more prevalent concern out of nowhere; far more prevalent than their persuasiveness would warrant if they were considered on their own merits, rather than being artificially propped up in very sophisticated and deliberate ways.
See here: https://www.lesswrong.com/posts/RsDwRmHGvf6GqaQkE/why-so-little-ai-risk-on-rationalist-adjacent-blogs?commentId=rumGEbYYnHBcRxx6c