Props for proposing a new and potentially fruitful framing.
I would like to propose training Wise AI Advisors as something that could potentially meet your two criteria:
• Even if AI turns out to be mostly positive, wise AI advisors would allow us to get closer to maximising these benefits
• We can likely save the world if we make sufficiently wise decisions[1]
There’s also a chance that we’re already past the point of no return, but if that’s the case, we’re screwed no matter what we do. Okay, it’s slightly more complicated: there’s also a chance that we aren’t yet past the point of no return, but that, if we pursue wise AI advisors instead of redirecting those resources to another project, we will be past it by the time we produce such advisors. This is possible, but my intuition is that it’s worth pursuing anyway.
Is that a wise AI, which is an advisor, or somebody who advises about AI who is wise?
By Wise AI Advisors, I mean training an AI to provide wise advice. By the way, I’ve now added a link in my original comment to a short-form post where I detail the argument for wise AI advisors further.