But this only works if those less worried about AI risks who join such a collaboration don’t use the knowledge they gain to cash in on the AI boom in an acceleratory way.
Can you state more specifically what the alleged bad actions are here? Based on some of the discussions under your post about professional norms surrounding information disclosure, I think it is worth distinguishing two cases.
First, consider a norm that limits the disclosure of some relatively specific and circumscribed pieces of information, such as a doctor not being allowed to reveal personal health information of patients outside of what is needed to provide care.
Second, consider a general norm that if you cooperate with someone and they provide you some info, you won’t use that info contrary to their interests. It’s not 100% clear to me, but your post sounds a lot like this second one.
I think the second scenario raises a lot of issues. It seems challenging to enforce, hard to understand and navigate, costly for people to attempt to conform to, and potentially counterproductive for what seems to be your goal. You are considering a specific case at a specific point in time, but I don’t think that gives the full picture of the impact of such a norm. For example, consider ex-OpenAI employees who left due to concerns about AI safety. Should the expectation be that they only use information and experience they gained at OpenAI in a way that OpenAI would approve of?
Now, if Epoch and/or specific individuals made commitments that they violated, that might be more like the first case, but it’s not clear that is what happened here. If it is, more explanation of how this is the case would be helpful, I think.
I agree that this issue is complex and I don’t pretend to have all of the solutions.
I just think it’s really bad if people feel that they can’t speak relatively freely with the forecasting organisations because those organisations will misuse the information. I think this is somewhat similar to how it is important for folks to be able to speak freely to their doctor/lawyer/psychologist, though I admit that the analogy isn’t perfect and that straightforwardly copying these norms over would probably be a mistake.
Nonetheless, I think it is worthwhile discussing whether there should be some kind of norms and what they should be. As you’ve rightly pointed out, there are a lot of issues that would need to be considered. I’m not saying I know exactly what these norms should be. I see myself as more just starting a discussion.
(This is distinct from my separate point about it being a mistake to hire folks who do things like this. It is a mistake to have hired folks who act strongly against your interests even if they don’t break any ethical injunctions.)
I just think it’s really bad if people feel that they can’t speak relatively freely with the forecasting organisations because those organisations will misuse the information.
To “misuse” to me implies taking a bad action. Can you explain what misuse occurred here? If we assume that people at OpenAI now feel less able to speak freely after things that ex-OpenAI employees have said/done, would you likewise characterize those people as having “misused” information or experience they gained at OpenAI? I understand you don’t have fully formed solutions, and that’s completely understandable, but I think my questions go to a much more fundamental issue about what the underlying problem actually is. I agree it is worth discussing, but I think it would clarify the discussion to understand what the intent of such a norm would be (and whether achieving that intent would in fact be desirable).
(This is distinct from my separate point about it being a mistake to hire folks who do things like this. It is a mistake to have hired folks who act strongly against your interests even if they don’t break any ethical injunctions.)
If Coca-Cola hires someone who later leaves and goes to work for Pepsi because Pepsi offered them higher compensation, I’m not sure it would make sense for Coca-Cola to conclude that they should make big changes to their hiring process, other than perhaps increasing their own compensation if they determine that is a systematic issue. Coca-Cola probably needs to accept that “it’s not personal” is sometimes going to be the nature of the situation. Obviously details matter, so maybe this case is different, but I think working in an environment where you need to cooperate with other people/institutions means you also have to sometimes accept that people you work with will make decisions based on their own judgements and interests, and therefore may do things you don’t necessarily agree with.
To “misuse” to me implies taking a bad action. Can you explain what misuse occurred here?
They’re recklessly accelerating AI. Or, at least, that’s how I see it. I’ll leave it to others to debate whether or not this characterisation is accurate.
Obviously details matter
Details matter. It depends on how bad it is and how rare these actions are.
I know I’ve responded to a lot of your comments, and I get the sense you don’t want to keep engaging with me, so I’ll try to keep it brief.
We both agree that details matter, and I think the details of what the actual problem is matter. If, at bottom, the thing that Epoch/these individuals have done wrong is recklessly accelerate AI, I think you should have just said that up top. Why all the “burn the commons”, “sharing information freely”, “damaging to trust” stuff? It seems like you’re saying at the end of the day, those things aren’t really the thing you have a problem with. On the other hand, I think invoking that stuff is leading you to consider approaches that won’t necessarily help with avoiding reckless acceleration, as I hope my OpenAI example demonstrates.
I believe those are useful frames for understanding the impacts.