Second, we can match the certification to types of people and institutions; that is, our certifications speak to executives, citizens, or corporations (rather than, e.g., specific algorithms, which may be replaced in the future). Third, the certification system can build in mechanisms for periodically updating the certification criteria.
* I think effective certification is likely to involve expert analysis (including non-technical domain experts) of specific algorithms used in specific contexts. This appears to contradict the “Second” point above somewhat.
* I want people to work on developing the infrastructure for such analyses. This is in keeping with the “Third” point.
* This will likely involve a massive increase in investment of AI talent in the process of certification.
As an example, I think "manipulative" algorithms (those that treat humans as part of the state to be optimized over) should be banned in many applications in the near future, and that we need expert involvement to determine the propensity of different algorithms to actually optimize over humans in various contexts.
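To make that distinction concrete, here is a minimal toy sketch in Python. Everything in it (the recommender setting, the `drift` dynamics, the function names) is my own illustrative assumption, not anything from the thread: a planner that rolls out a model of the user's preference drift is optimizing over the human's state, while a myopic baseline treats preferences as fixed.

```python
import numpy as np

# Hypothetical toy setting: a recommender picks an item for a user whose
# preference vector drifts toward whatever it is shown. Illustrative only.

rng = np.random.default_rng(0)
N_ITEMS, DIM = 5, 3
items = rng.normal(size=(N_ITEMS, DIM))  # fixed item embeddings

def drift(prefs, item, rate=0.1):
    """Assumed user model: preferences move slightly toward shown items."""
    return prefs + rate * (item - prefs)

def myopic_choice(prefs):
    """Non-manipulative baseline: treat preferences as exogenous and
    pick the item the user likes best right now."""
    return int(np.argmax(items @ prefs))

def manipulative_choice(prefs, horizon=10):
    """'Manipulative' in the thread's sense: the human's preference state
    is part of the state being planned over, so the system may show a
    disliked item now to steer preferences toward more future engagement."""
    best_action, best_total = 0, -np.inf
    for first in range(N_ITEMS):
        p, total, a = prefs.copy(), 0.0, first
        for _ in range(horizon):
            total += items[a] @ p       # engagement reward this step
            p = drift(p, items[a])      # the human is part of the dynamics
            a = int(np.argmax(items @ p))  # greedy rollout thereafter
        if total > best_total:
            best_action, best_total = first, total
    return best_action

prefs = rng.normal(size=DIM)
print("myopic pick:      ", myopic_choice(prefs))
print("manipulative pick:", manipulative_choice(prefs))
```

When the two picks differ, it is exactly because the planner found it profitable to reshape the user; detecting that propensity in realistic systems is the kind of analysis I would want experts doing.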
The idea with the "Second" point is that the certification would be something like "we certify that company X has a process Y for analyzing and fixing potential problem Z whenever they build a new algorithm / product", which seems consistent with your belief here? Unless you think that the process isn't enough and that you need to certify the analysis itself.
I think the contradiction may only be apparent, but I thought it was worth mentioning anyway.
My point was just that we might actually want certifications to say things about specific algorithms.