You can also work on things that help with both:
AI pause/stop/slowdown—Gives more time to research both issues and to improve human intelligence/rationality/philosophy, which in turn helps with both.
Metaphilosophy and AI philosophical competence—Higher philosophical competence means AIs can help more with alignment research (otherwise such research will be bottlenecked by reliance on humans to solve the philosophical parts of alignment), and also help humans avoid making catastrophic mistakes with their newfound AI-given powers if no takeover happens.
Human intelligence amplification
BTW, have you seen my recent post Trying to understand my own cognitive edge, especially the last paragraph?
Also, have you written down a list of potential risks of doing/attempting human intelligence amplification? (See Managing risks while trying to do good and this for context.)
I haven’t seen your stuff; I’ll try to check it out nowish (busy with Inkhaven). Briefly (IDK which things you’ve seen):
My most direct comments are here: https://x.com/BerkeleyGenomic/status/1909101431103402245
I’ve written a fair bit about the possible perils of germline engineering (aiming for extreme breadth without depth, i.e. just trying to comprehensively mention everything). Some of them apply generally to HIA. https://berkeleygenomics.org/articles/Potential_perils_of_germline_genomic_engineering.html
My review of HIA discusses some risks (esp. value drift), though not in much depth: https://www.lesswrong.com/posts/jTiSWHKAtnyA723LE/overview-of-strong-human-intelligence-amplification-methods