Article 14 seems like a good provision to me! Why would UK-specific regulation want to avoid it?
I realize I linked the summary overview. The specific wording I was referencing is in 14(4)(e), the requirement for humans to be able: “to intervene in the operation of the high-risk AI system or interrupt the system through a ‘stop’ button”.
The Recitals do not provide any further technical insight into how this “stop button” should work...
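To make the gap concrete: the naive software-level reading of that wording is nothing more than a flag check, along the lines of the sketch below (all names and interfaces are hypothetical; nothing here is prescribed by the Act). Whether a system can be designed so that it has no incentive or ability to route around such a check is exactly the open shutdown problem.

```python
import threading

class StopButtonWrapper:
    """A naive reading of Art. 14(4)(e): gate every action of the
    wrapped system behind a human-settable stop flag. Illustrative
    sketch only; the Act does not prescribe any of this."""

    def __init__(self, system):
        self._system = system            # hypothetical high-risk AI system
        self._stop = threading.Event()   # the 'stop button' state

    def press_stop(self):
        # The human-oversight interface: flips the flag.
        self._stop.set()

    def step(self, observation):
        # The check is only as good as the guarantee that execution
        # actually passes through it -- which is the hard part.
        if self._stop.is_set():
            raise SystemExit("interrupted by human operator")
        return self._system.act(observation)
```

Nothing in the provision distinguishes this trivially satisfiable version from an interruption mechanism that is actually robust.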
Ah, I see! I agree it could be more specific.
It could. It’s in their best interest to know how to either 1) make it enforceable, which is hard; or 2) enforce that companies developing and deploying high-risk systems dedicate enough resources and funding to research that effectively addresses this challenge.
Lawyer me says it’s a wonderful consultancy opportunity for people who have spent years on this issue and actually have a methodology worth exploring and funding. The opportunity to make this provision more specific was missed (the AI Act has now entered into force), but there will be future guidance documents and directives, and with them funding opportunities that will hopefully push big tech to direct more resources to research. But that only happens if we can make policymakers understand what works, the current state of the shutdown problem, and how to steer companies in the right direction.
(Thanks for your engagement here and on LinkedIn, much appreciated 🙏🏻).