Wouldn’t discussions of high-level philosophy benefit from concrete examples, like my attempts to show that mankind shouldn’t actually populate many stellar systems because there are many other lifeforms that would be oppressed?
Another concrete example could be Buck’s Christian homeschoolers or David Matolcsi’s superpersuasive AI girlfriends. These examples imply that the AIs are not to be allowed to do… what exactly? To be persuasive above a certain level? To keep Christian homeschoolers in the dark? And is the latter fixable by demanding that OpenBrain move major parts of the Spec to root level, making it a governance issue?
As for preventing researchers from working on alignment, this simply means that work on aligning the AIs to any targets is either done by agents as trustworthy as Agent-4 or the CCP’s DeepCent, or suppressed by an international ASI ban. Your proposal means that the ASI ban would have to cover alignment work until the illegible troubles are solved, and then capabilities work until alignment is solved. But it is likely easier to add a clause about “alignment work until illegible troubles are solved” to an existing ASI ban, especially if the negative effects of AI girlfriends, slop, pyramid replacement, etc., become obvious.