The request to file the amicus brief was denied because the court (rightly) interpreted the content of the brief as purely factual. Without consent of all parties, amici curiae are normally only granted when they are able to offer a unique legal perspective on the matters in dispute between the parties. Going through the proposed brief, it seems that nearly all of the substantive content is factual, being descriptions of processes, statements, and general history. Plainly speaking, this is just inappropriate and has rightly been dismissed.
If Elon wanted these facts to go in, he could have contacted these employees ahead of time and had them provide evidence directly. The employees don’t get to perform an end-run around the rules of evidence by putting all of their factual assertions into a brief.
With respect to your question about the request for judicial notice, the Order refers to the filing that had requested that relief (document 104), and the documents that they were requesting the Court to take notice of were the following:
1. A true and correct copy of the complaint filed on February 29, 2024 in the case captioned Musk v. Altman, et al., Case No. CGC-24-612746 (S.F. Sup. Ct.), attached as Ex. A to the Wiener Declaration;
2. A true and correct copy of the request for dismissal filed on June 11, 2024 in the case captioned Musk v. Altman, et al., Case No. CGC-24-612746 (S.F. Sup. Ct.), attached as Ex. B to the Wiener Declaration; and
3. A true and correct copy of the opposition to demurrer filed on April 10, 2024 in the case captioned Musk v. Altman, et al., Case No. CGC-24-612746 (S.F. Sup. Ct.), attached as Ex. C to the Wiener Declaration.
These documents are the first lawsuit that Elon filed and then discontinued, and have nothing to do with the charter.
> Going through the proposed brief, it seems that nearly all of the substantive content is factual, being descriptions of processes, statements, and general history. Plainly speaking, this is just inappropriate and has rightly been dismissed.
This definitely seems reasonable, but in that case, why was the Encode brief accepted? Obviously they are different documents, but my read of the Encode one was that it had similar issues, yet (unless I’m missing something?) it appears to have been accepted.
Editing to add: I’m not sure if the difference is that all the parties agreed to the Encode brief? I recall the ex-OpenAI employee brief noting that they didn’t have consent of all parties; I don’t remember whether this was mentioned one way or the other in the Encode one.
There are also some prominent law professors involved in the ex-OpenAI employee brief: Lawrence Lessig appears to be representing them, and Eugene Volokh seems to have served as local counsel. I have no special knowledge of either of them, and while I could see Lessig as someone who might “yolo-sendit” on something like this, Volokh’s involvement would make it a bit surprising if the brief were just straight-up inappropriate. But I don’t really know how the dynamics of being “local counsel” for something like this work.
Part of my reason for writing my previous post on this case is that I think the reaction in the AI safety community has somewhat misread the situation; in fact, a highly upvoted comment on an EA Forum post related to this case explicitly suggests submitting amicus briefs. I had originally drafted a section about amicus briefs but decided to exclude it, because the fact that the Encode brief was accepted made me wonder whether I myself was misreading the judge’s receptivity to such briefs.
Regardless, if what you suggest is true, I think it supports the idea that some AI safety people haven’t correctly read the judge’s disposition. Would you be interested in commenting on that?
> If Elon wanted these facts to go in, he could have contacted these employees ahead of time and had them provide evidence directly. The employees don’t get to perform an end-run around the rules of evidence by putting all of their factual assertions into a brief.
This definitely makes sense to me.
> With respect to your question about the request for judicial notice, the Order refers to the filing that had requested that relief (document 104), and the documents that they were requesting the Court to take notice of were the following
Thanks, I will update the post with this info, appreciate you taking the time to look into it.
It is more than just briefs. Amici curiae means ‘friends of the court’. They are only permitted either by consent of all the actual parties or, upon application, if the court decides that the prospective amicus offers a unique legal perspective. The parties to the litigation are given wide discretion to call witnesses, so you don’t need to be an amicus for that, just subpoenaed and tendered.
Encode successfully navigated this by not offering facts (who did what, and when), since they don’t have any first-hand knowledge of the facts. What they offered, according to their brief (which is attached as a schedule to the “main document” for document 72), was their philosophical and technical perspective, particularly as a public body concerned with AI safety vis-à-vis the change in structure of OpenAI.
If you read through the two proposed briefs, they are night and day. Encode describes the interest that the public might have in OpenAI continuing under its present structure, compared to transitioning to a for-profit enterprise, the risk of AI, and why it should be avoided. The employee brief recounts meetings, memos, and who was making promises.
In a very abstract way, Encode is basically saying that the transition shouldn’t proceed because it would be bad for society and humanity. This is a perspective that isn’t captured by either Musk or OpenAI/Microsoft. The employees tried to say that OpenAI and Altman made promises to them, and those promises should be kept, which is almost entirely factual. What little non-factual element might remain in the employee brief is essentially in line with Musk’s argument, so they offer nothing unique that would assist the court.
> Encode successfully navigated this by not offering facts (who did what, and when), since they don’t have any first-hand knowledge of the facts. What they offered, according to their brief (which is attached as a schedule to the “main document” for document 72), was their philosophical and technical perspective, particularly as a public body concerned with AI safety vis-à-vis the change in structure of OpenAI.
Didn’t the Encode brief do things like quote public opinion polls? Sure, they characterize it as offering a philosophical perspective (how it could be called “technical” I’m not sure), but to me it came across as basically asserting policy rather than legal arguments. There is a consideration of the public interest for the preliminary injunction, but the overall feel to me was very much policy rather than legal argument. I also don’t think you’re necessarily applying this standard evenly to both briefs. The ex-OpenAI brief, I think, can be seen in a similar way; it just brings in additional pieces of evidence to make that case.
> If you read through the two proposed briefs, they are night and day. Encode describes the interest that the public might have in OpenAI continuing under its present structure, compared to transitioning to a for-profit enterprise, the risk of AI, and why it should be avoided. The employee brief recounts meetings, memos, and who was making promises.
In my view the high-level arguments of both briefs are the same, in that both argue that keeping actual control of future AI systems with a non-profit is in the public interest. It’s just that the ex-OpenAI brief brings in more information to suggest that such a belief was not uncommon among OpenAI employees, and that we might reasonably view OpenAI as having committed to such a thing and understood it to be consistent with its charitable purpose. I could see how that might not be relevant to the case, since it doesn’t necessarily go to Musk’s reliance, so perhaps it makes sense not to muddy the waters with it. But I don’t think it’s the case that the ex-OpenAI brief somehow lacked any relevance if we assume the Encode brief was relevant.
> In a very abstract way, Encode is basically saying that the transition shouldn’t proceed because it would be bad for society and humanity. This is a perspective that isn’t captured by either Musk or OpenAI/Microsoft.
This is relevant because it’s a factor for preliminary injunction purposes. I haven’t gone back and read all the documents, but it would be very surprising to me if Musk didn’t argue that the for-profit transition was contrary to the public interest. It also seems to me that the ex-OpenAI brief casts its arguments in these same terms.
> The employees tried to say that OpenAI and Altman made promises to them, and those promises should be kept, which is almost entirely factual.
I think the brief is trying to argue that these facts go to the very point you identify the Encode brief as addressing.
One other thing has to do with the procedural posture. On a motion to dismiss, Courts are not generally supposed to evaluate facts at all. A motion to dismiss is essentially an argument by the defense that, even if all those accusations are true, they don’t amount to a legal claim.
By contrast, a preliminary injunction does look at facts. So one reason for the different treatment of the Encode brief vs. the ex-employees brief is that they were submitted in response to different motions.
I had a similar thought, and that would make sense to me, but I just don’t know enough about the standards to say what the correct interpretation is. To an extent I feel like it’s kind of tea-leaf reading and maybe isn’t a good idea, but at the same time I feel these dynamics could be relevant to how views on AI safety develop among groups that are exposed to those ideas in these formats. I definitely think this won’t be the last court case by far that implicates AI issues, so I feel it’s worth thinking about how different courses of action could play out.