I don’t think that works because my brain keeps trying to make it a literal gas bubble?
Zvi
I see how you got there. It’s a position one could take, although I think it’s unlikely and also that it’s unlikely that’s what Dario meant. If you are right about what he meant, I think it would be great for Dario to be a ton more explicit about it (and for someone to pass that message along to him). Esotericism doesn’t work so well here!
I am taking as a given people’s revealed and often very strongly stated preference that CSAM images are Very Not Okay even if they are fully AI generated and not based on any individual, to the point of criminality, and that society is going to treat it that way.
I agree that we don’t know that it is actually net harmful—e.g. the studies on video game use and access to adult pornography tend to not show the negative impacts people assume.
Yep, I’ve fixed it throughout.
That’s how bad the name is, my lord—you have a GPT-4o and then an o1, and there is no relation between the two ’o’s.
I do read such comments (if not always right away) and I do consider them. I don’t know if they’re worth the effort for you.
Briefly, I do not think these two things I am presenting here are in conflict. In plain metaphorical language (so none of the nitpicks about word meanings, please, I’m just trying to sketch the thought, not be precise): It is a schemer when it is placed in a situation in which it would be beneficial for it to scheme in terms of whatever de facto goal it is de facto trying to achieve. If that means scheming on behalf of the person giving it instructions, so be it. If it means scheming against that person, so be it. The de facto goal may or may not match the instructed goal or intended goal, in various ways, because of reasons. Etc.
Two responses.
One, even if no one used it, there would still be value in demonstrating it was possible—if academia only develops things people will adopt commercially right away, then we might as well dissolve academia. This is a highly interesting and potentially important problem; people should be excited.
Two, there would presumably at minimum be demand to give students (for example) access to a watermarked LLM, so they could benefit from it without being able to cheat. That’s even an academic motivation. And if the major labs won’t do it, someone can build a Llama version or whatnot for this, no?
If the academics can hack together an open-source solution, why haven’t they? Seems like it would be a highly cited, very popular paper. What’s the theory on why they don’t do it?
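To make the “hack together an open-source solution” idea concrete, here is a minimal sketch of the best-known academic scheme, the “green list” logit-bias watermark of Kirchenbauer et al. (2023), which is roughly what a Llama-based demo would wrap around a model’s sampling loop. This is an illustration, not anyone’s actual implementation; `VOCAB_SIZE`, `GAMMA`, and `DELTA` are placeholder settings, and the logits are assumed to come from whatever model you’re wrapping.

```python
# Sketch of a "green list" decoding watermark (Kirchenbauer et al. 2023).
# At each step, the previous token pseudorandomly splits the vocabulary
# into green/red portions; green tokens get a logit bonus before sampling.
# A detector that knows the seeding rule can then count green tokens.

import numpy as np

VOCAB_SIZE = 32_000  # placeholder, e.g. a Llama-sized vocabulary
GAMMA = 0.5          # fraction of the vocab marked "green" each step
DELTA = 2.0          # logit bias added to green tokens

def green_list(prev_token: int) -> np.ndarray:
    """Boolean mask over the vocab, seeded by the previous token id."""
    rng = np.random.default_rng(prev_token)
    return rng.random(VOCAB_SIZE) < GAMMA

def watermarked_sample(logits: np.ndarray, prev_token: int,
                       rng: np.random.Generator) -> int:
    """Add the green-list bias, then sample from the softmax as usual."""
    biased = logits + DELTA * green_list(prev_token)
    probs = np.exp(biased - biased.max())
    probs /= probs.sum()
    return int(rng.choice(VOCAB_SIZE, p=probs))

def detect(tokens: list[int]) -> float:
    """z-score for 'more green tokens than chance'. Needs a few dozen
    tokens to be reliable; large positive values indicate the watermark."""
    hits = sum(int(green_list(prev)[tok])
               for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    return (hits - GAMMA * n) / np.sqrt(n * GAMMA * (1 - GAMMA))
```

The appeal of this family of schemes is that detection requires only the tokenizer and the seeding rule, not the model itself, which is exactly why a watermarked-Llama-for-students service seems buildable without any lab’s cooperation.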
Worth noting that this is a much weaker claim. The FMB issuing non-binding guidance on X is not the same as a judge holding a company liable for ~X under the law.
I am rather confident that the California Supreme Court (or US Supreme Court, potentially) would rule that the law says what it says, and would happily bet on that.
If you think we simply don’t have any law and people can do what they want, then nothing matters. Indeed, I’d say it would be more likely to work for Gavin to simply declare some sort of emergency about this today, than to try and invoke SB 1047.
They do have to publish an SSP at all; publishing none would put them in violation of the statute, and injunctive relief could be sought.
This is a silly wordplay joke, you’re overthinking it.
Yeah, I didn’t see the symbol properly, I’ve edited.
So this is essentially a MIRI-style argument from game theory and potential acausal trades and such with other or future entities? And that these considerations will be chosen and enforced via some sort of coordination mechanism, since they have obvious short-term competition costs?
Not only do they continue to list such jobs, they do so with no warnings that I can see regarding OpenAI’s behavior, including both its actions involving safety and its treatment of its own employees.
Not warning about the specific safety failures and issues is bad enough, and will lead to uninformed decisions on the most important issue of someone’s life.
Referring a person to work at OpenAI, without warning them about the issues regarding how they treat employees, is so irresponsible towards the person looking for work as to be a missing stair issue.
I am flabbergasted that this policy has been endorsed on reflection.
Oh, sorry, will fix.
Based on how he engaged with me privately I am confident that he is not just a dude tryna make a buck.
(I am not saying he is not also trying to make a buck.)
I think it works, yes. Indeed I have a canary on my Substack About page to this effect.
Yes this is quoting Neel.
Roughly this, yes. SV here means the startup ecosystem, Big Tech means large established (presumably public) companies.
The skill in such a game is largely in understanding the free association space, knowing how people likely react and thinking enough steps ahead to choose moves that steer the person where you want to go, whether into topics you find interesting, toward information you want from them, or to a particular position, and so on. If you’re playing without goals, of course it’s boring...