The best technical solution might just be “use the FAI to find the solution.” Friendly AI is already, at its core, just a formal method for evaluating which actions are good for humans.
It’s plausible we could use AI alignment research to “align” corporations, but only in a weakened sense where some process returns good answers in everyday contexts. But for “real” alignment, where the corporation somehow does what’s best for humans with high generality… well, that would require some process for evaluating which actions are good, which is just the case of using FAI again.