Despite my contention on the associated paper post that focusing on wisdom in this sense is ducking the hard part of the alignment problem, I’ll stress here that it seems thoroughly useful if it’s a supplement, not a substitute, for work on the hard parts of the problem: technical, theoretical, and societal.
I also think it’s going to be easier to create wise advisors than you think, at least in the weak sense that they make their human users effectively wiser.
In short, I think simple prompting schemes and eventually agentic scaffolds can do a lot of the extra work it takes to turn knowledge into wisdom, and that orgs have an incentive to train for “wisdom” in the sense you mean as well. So we’ll get wiser advisors as we go, at little or no extra effort. More effort would of course help more.
I believe Deep Research has already made me wiser. I can get a broader context for any given decision.
And that was primarily achieved by prompting; the o3 model that powers OpenAI’s version does seem to help, but Perplexity introducing a nearly-as-good system just a week or two later indicates that just the right set of prompts was extremely valuable.
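To make “simple prompting schemes” a bit more concrete, here’s a minimal sketch of the kind of scaffold I mean: a preamble that forces the model to lay out broader context, stakeholders, and failure modes before it advises. It assumes the OpenAI Python client; the model name and prompt wording are illustrative placeholders, not a tested recipe.

```python
# A toy "wisdom scaffold": before answering, the model must lay out broader
# context, stakeholders, and second-order effects, then give advice.
# Assumes the OpenAI Python client (openai>=1.0); the model name and the
# preamble wording below are illustrative assumptions, not a tested recipe.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

WISDOM_PREAMBLE = (
    "Before giving advice, first list: (1) the broader context of the decision, "
    "(2) the stakeholders and their interests, (3) plausible second-order effects, "
    "(4) what the asker might be wrong about. Then give your advice, flagging "
    "uncertainty rather than telling the asker what they want to hear."
)

def wise_advice(question: str, model: str = "gpt-4o") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": WISDOM_PREAMBLE},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(wise_advice("Should my lab open-source our new agent framework?"))
```

The point isn’t this particular preamble; it’s that a fixed wrapper like this already buys some of the context-broadening I’m gesturing at, without any new training.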
Current systems aren’t up to helping very much with the hypercomplex problems surrounding alignment. But they can now help a little. And any improvements will be a push in the right direction.
Training specifically for “wisdom” as you define it is a push toward a different type of useful capability, so it may be that frontier labs pursue similar training by default.
(As an aside, I think your “comparisons” are all wildly impractical and highly unlikely to be executed before we hit AGI, even on longer realistic estimates. It’s weird that they’re considered valid points of comparison, as all plans that will never be executed have exactly the same value. But that’s where we’re at in the project right now.)
To return from the tangent: I don’t think building wise advisors actually asks anyone to go far out of their default path toward capabilities. Wise advisors will help with everything, including things with lots of economic value, and with AGI alignment/survival planning.
I’ll throw in the caveat that fake wisdom is the opposite of helpful, and there’s a risk of getting sycophantic confabulations on important topics like alignment if you’re not really careful. Sycophantic AIs and humans collaborating to fuck up alignment in a complementarily-foolish clown show that no one will laugh at is now one of my leading models of doom, since John Wentworth pointed it out.
That’s why I favor AI as a wisdom-aid rather than trying to make it wiser-than-human on its own: if it were, we’d have to trust it, and we probably shouldn’t trust AI more than humans until well past the alignment crunch.
Thanks for sharing your thoughts.
I agree that humans with wise AI advisors is a more promising approach, at least at first, than attempting to directly program wisdom into an autonomously acting agent.
Beyond that, I personally haven’t made up my mind yet about the best way to use wisdom tech.