Coming here for the first time, I immediately noticed how unstructured the platform is. Imagine AI Alignment creating a brand-new platform (or recreating this one, if you prefer) that is structured exactly like Reddit. I mean exactly like Reddit. That way the many thousands of open-source developers working to solve the alignment problem would know exactly where to go to find the open questions in each category of development, and to ask for help where they are stuck. Guys, this is a no-brainer. The way you have it now makes it so much more difficult for people to find questions and answers on the part of AI alignment they are focused on.
This seems… at least superficially worse than status quo? I don’t see how this helps.
LessWrong currently has posts and tags. I do agree it doesn’t make it that easy to find a relevant post and open question. But how does spreading out the questions across a sprawling unstructured group of subforums help?
(FYI EA Forum experimented with building subforums. My impression is it mostly didn’t work out the way they hoped)
I’m interested in hearing more about what problems you ran into, though, and what specific things you’re hoping for. Like, what are some actions you took recently that didn’t work, and what actions are you imagining taking on the new site that would go better?
If you’re not yet familiar with Reddit, I would suggest visiting the platform to better acquaint yourself with why I believe it would better connect researchers working on alignment.
There are probably dozens, if not hundreds, of specific categories related to the alignment problem. If each of them had its own subreddit-like community that researchers could subscribe to, it could greatly facilitate finding collaborators, depending on the specific focus of their research.
I’m not familiar with the EA Forum, but if its subforums did not succeed, it may be because forums are a challenge that few platforms get right. I believe Reddit gets it exceedingly right, as evidenced by its popularity and ease of use.
I’m not specifically an alignment researcher. I try to develop ideas that will advance AI, and understand that the area of alignment is especially important.
I won’t be able to post the following idea here for another five days because I’m new to the forum, but I thought you may be interested in previewing an approach that may advance alignment:
NAMSI: A promising approach to solving the alignment problem
Media-driven fears about AI causing major havoc, up to and including human extinction, are founded on the fear that we will not get the alignment problem right before we reach AGI, and that the threat will grow far more menacing when we reach ASI. What hasn’t yet been sufficiently appreciated by AI developers is that the alignment problem is most fundamentally a morality problem.
This is where the development of narrow AI systems dedicated exclusively to solving alignment by better understanding morality holds great promise. We humans may not have the intelligence to solve alignment, but if we create narrow AI dedicated to understanding and advancing the morality required to solve this challenge, we can more effectively rely on it, rather than on ourselves, to provide the most promising solutions in the shortest span of time.
Since fears of destructive AI center mainly on when we reach ASI, or artificial super-intelligence, perhaps developing narrow ASI dedicated to morality should be the focus of our alignment work. Narrow AI systems are now approaching top-notch legal and medical expertise, and because so much progress has already been made in these two domains at such a rapid pace, we can expect substantial advances over the next few years.
What if we develop a narrow AI system dedicated exclusively not to law or medicine but rather to better understanding the morality that lies at the heart of the alignment problem? Such a system may be dubbed Narrow Artificial Moral Super-intelligence, or NAMSI.
AI developers like Emad Mostaque of Stability AI understand the advantages of pursuing narrow AI applications over the more ambitious but less attainable AGI. In fact Stability’s business model focuses on developing very specific narrow AI applications for its corporate clients.
One of the questions facing us as a global society is: to what should we most apply the AI we are developing? Considering the absolute necessity of getting the alignment problem right, and the understanding that morality is the central challenge of that solution, developing NAMSI may be our best chance of solving alignment before we reach AGI and ASI.
But why go for narrow artificial moral super-intelligence rather than simply artificial moral intelligence? Because the former is within our grasp. While morality has great complexities that challenge humans, our success with narrow legal and medical AI applications, which may in a few years exceed the expertise of top lawyers and doctors in various narrow domains, tells us something. We have reason to be confident that if we train AI systems to better understand the workings of morality, they will, probably sooner rather than later, achieve a level of expertise in this narrow domain that far exceeds that of humans.
Once we arrive there, the likelihood of our solving the alignment problem before we get to AGI and ASI becomes far greater, because we will have relied on AI, rather than on our own weaker intelligence, as our tool of choice.
Thanks for your reply and interest in my platform concerns!