GPTs. Yes, it’s still an initialism like “LLM” but it’s much easier to pronounce (“jee-pee-tee”) and you can call them “jepeats” (rhyming with “repeats”) if you want.
From what I understand, they executed a risky maneuver, lost, tried to salvage what they could from the wreckage (by pinning the blame on someone else), but got pushed out anyway. So I can see where you’re getting “scheming or manipulative” (them trying this scheme), “less competent” than Altman (losing to him), and “very selfish” (blaming other people). But where are you getting “cowardly” from? From their attempt at minimizing their losses after it became clear their initial plan had become exceedingly unlikely to succeed? If so, I’d say it speaks to how poorly you think of valor if you believe it precludes so sensible an action.
I thought the cases in The Man Who Mistook His Wife for a Hat were obviously as fictionalized as an episode of House: the condition described is real and based on an actual case, but the details were made up to make the story engaging. But I didn’t read it in 1985 when it was published. Did people back then take statements like “based on a true story” more seriously?
Lesser companies say “Look, we’ve made a thing you’ll like!” When you’re like Google, you say “Here is our thing you will use.”
I made a sequence of predictions of what the effects of this “legal regulatory capture” would look like. To ignore all but the one farthest out, and ask “Is that an accurate understanding of how you foresee the regulatory capture?” as though it were my only one, seems like poor form.
they definitely don’t know how to do this in downloadable models.
Yes, I expect this would have the effect of chilling open model releases broadly. The “AI Safety” people have been advocating for precisely this for a while now.
Is your goal here to isolate the aspect of my response that’ll let you keep saying “legal regulatory capture isn’t happening” for as long as you can? Because if so, yeah, of all the things I said, the compute screening requirement would indeed be the hardest for them to achieve, and I expect that to take them the longest if they do.
I also don’t believe I said anything about new laws being passed; the threat of decades-old laws being reïnterpreted would suffice for the most part.
So first, the most likely and proximate thing I foresee happening is that major US AI companies – Google, xAI, OpenAI, and Anthropic – “voluntarily” add “guardrails” against their models providing legal advice.
Second, Huggingface, also “voluntarily,” takes down open models considered harmful, but restricts itself to fine-tunes, LoRAs, and the like, since the companies developing the foundation models have enough reach to distribute those themselves, so taking them down would achieve little.
Third, and this I foresee taking longer, is that companies releasing open models (for now, that’s mostly a half-dozen Chinese ones) are deemed liable for “harm” caused by anyone using their models.
Okay, that’s a reasonable thing to clarify. First off, I don’t think whether or not one charges for it is relevant: it’s currently criminal to offer unlicensed legal advice even for free. It’s the activity itself that’s restricted, not merely the fee.
I do not believe it will be made illegal[1] to receive or use for oneself legal advice from any source: unlicensed, disbarred, foreign, underage, non-human, whatever. The restrictions I predict only apply to providing such advice.
the push will be to make it illegal for an LLM to give someone legal advice
Essentially, but as stated, it could be construed as though the crime would be committed by the LLM, which I think is absurdly unlikely. Instead the company (OpenAI, et al) would be considered responsible. And yes, I expect them to be forbidden from providing such a service, and to be as liable for it as they are for, say, copyright infringement.
For any currently accessible open models you’re running locally, yes, you’ll probably continue to be able to use them. But companies[2] could be forbidden from releasing any future models that can’t be proven to be unable to violate the law (on pain of some absurd fine), similar to the currently proposed legislation for governing “CBRN” threats. And plausibly even extant models that haven’t been proven to be sufficiently safe could be taken down from Huggingface etc., and cloud GPU providers could be required to screen for them (like they generally do now for AI-generated “CSAM”).
represent themselves in court, draft their own contracts, file their own patents
just deciding to use LLMs
It looks like you’re not even seeing the difference I’m arguing they will make salient. I agree the former is still widely considered too fundamental a right in America for even lawyers to try to abolish, but I expect them to argue that LLM assistance with it is a service provided illegally.
With occupational licensing in general, and criminalizing the Unauthorized Practice of Law more specifically, they’ve already accomplished plenty of regulatory capture. Do you really find it implausible that they’ll use this well-established framework to deem the AI companies to be “giving legal advice” in violation of these laws?
The primary application of “safety research” is improving refusal calibration, which, at least from a retail client’s perspective, is exactly like a capability improvement: it makes no difference to me whether the model can’t satisfy my request or can but won’t. It’s easy to demonstrate differences in this regard – simply show one model refusing a request another fulfills – so I disagree that this would cause clients to be “dissuaded from AI in general.”
On the contrary, I would expect the amor fati people to get normal prophecies, like, “you will have a grilled cheese sandwich for breakfast tomorrow,” “you will marry Samantha from next door and have three kids together,” or “you will get a B+ on the Chemistry quiz next week,” while the horrible contrived destinies come to those who would take roads far out of their way to avoid them.
I can think of several prominent present-day predictions of similar magnitude.
Every election is proclaimed as the death of American democracy.
Race war precipitated by Whites becoming a racial minority.
The recognition of “same-sex marriages” was to harbinger a collapse of all public morality.
Restrictions on abortion access reducing women to sex-slaves, à la The Handmaid’s Tale.
I think you’re understating the apocalypticism of climate-change activism.
Smartphones/social media/pornography corrupting the youth, leading to … okay, admittedly this one’s vaguer, but the consequences, whatever they might be, are still expected to be dire.
If overpopulation has ceased to be a major concern, that’s a very recent development.
Similarly, running out of oil was forecast to return technology to horse-drawn carriages and beeswax candles. They’ve definitely stopped saying this, but I heard it in the ’00s.
The difference you’re talking about might be simply due to you discounting these as insane (or maybe just disingenuous) while hailing analogous predictions in the past as wise/prescient.
“Death gives life meaning.”
A fun thing you can do is to say this line after events like natural disasters or mass murders. I’m hopeful that if it catches on as an ironic meme, people will come to realize that it, and the deathist sentiment that originally spawned it unironically, ought to be no less obscene in any context.
equipment they’re automating would be capable of producing viruses (saying that this equipment is a normal thing to have in a bio lab
This seems to fall into the same genre as “that word processor can be used to produce disinformation,” “that image editor can be used to produce ‘CSAM’,” and “the pocket calculator is capable of displaying the number 5318008.”
you can engage with journalists while holding to rationalist principles to only say true things.
Suppose there were a relatively simple computer program, say a kind of social media bot, that when you input a statement, posts the opposite of that statement. Would you argue that as long as you only type true statements yourself, using this program doesn’t constitute lying?
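For concreteness, here is a minimal sketch of the kind of bot I have in mind (purely illustrative: the naive prefix negation and the stubbed-out `post` function are my own stand-ins, not any real API):

```python
def negate(statement: str) -> str:
    """Crudely produce the opposite of the input statement by prefixing a negation."""
    return f"It is not the case that {statement}"

def post(text: str) -> None:
    """Stand-in for whatever call would actually publish the text to social media."""
    print(f"[posted] {text}")

# You type something true; the bot posts its negation, which is false.
post(negate("the Earth orbits the Sun"))
```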
This line of reasoning leads to Richelieu’s six lines, where everyone is guilty of something, so you can punish anyone at any time for any reason: process crimes make for a much more plausible pretext to go after a target than any “intrinsically bad” thing.
tout their “holistic” approach to recognizing creativity and intellectual promise
This doesn’t mean what you think it means. It’s code for racial discrimination.
While the examples in section II are good, this whole thing sounds to me like, to use a different sporting metaphor, “Everyone has a plan until your opponent serves to your backhand.” Most people experience politicians, journalists and the media, and Scientists and Experts lying to their faces routinely, and often as established policy. While I’m not allowed to give examples here, if this comes as news to you, first, you aren’t beating the quokka allegations, and second, it’s probably because you’ve fallen for these lies so comprehensively that you don’t even notice anymore.
I edited the screenshot of this Twitter thread.
If you’d put in a link to a deleted tweet, I’d probably have believed it.
This is the core of the dispute between the USPTO and OpenAI over their (failed) attempt to trademark the term in the US, so citing their papers doesn’t help resolve this.