@jbash What do you think would be a better strategy/more reasonable? Should there be more focus on mitigating risks after potential model theft? Or a much stronger effort to convince key actors to implement unprecedentedly strict security for AI?
Sorry; I’m not in the habit of reading the notifications, so I didn’t see the “@” tag.
I don’t have a good answer (which doesn’t change the underlying bad prospects for securing the data). I think I’d tend to prefer “mitigating risks after potential model theft”, because I believe “convince key actors” is fundamentally futile. The kind of security you’d need, if it’s even possible, would basically shut them down, which is equivalent to abandoning the “key actor” role to whoever does not implement that kind of security.
Unfortunately, “key actors” would also have to be convinced to “mitigate risks”, which they’re unlikely to do because that would require them to accept that their preventative measures are probably going to fail. So even the relatively mild “go ahead and do it, but don’t expect it to work” is probably not going to happen.
Account settings let you set mentions to notify you by email :)