Yes this is quoting Neel.
Roughly this, yes. SV here means the startup ecosystem, Big Tech means large established (presumably public) companies.
Here is my coverage of it. Given this is a ‘day minus one’ interview of someone in a different position, and given everything else we already know about OpenAI, I thought this went about as well as it could have. I don’t want to see false confidence in that kind of spot, and the failure of OpenAI to have a plan for that scenario is not news.
It is better than nothing, I suppose, but if they are keeping the safeties and restrictions on, then it will not teach you whether it is fine to open it up.
My guess is that different people do it differently, and I am super weird.
For me a lot of the trick is consciously asking if I am providing good incentives, and remembering to consider what the alternative world looks like.
I don’t see this response as harsh at all? I see it as engaging in detail with the substance: I note the bill is highly thoughtful overall, offer a bunch of explicit encouragement, defend a bunch of their specific choices, and say I am very happy they offered this bill. It seems good and constructive to note where I think they are asking for too much? While noting that the right amount of ‘any given person reacting thinks you went too far in some places’ is definitely not zero.
Excellent. On the thresholds, got it, sad that I didn’t realize this, and that others didn’t either from what I saw.
I appreciate the ‘long post is long’ problem, but I do think you need the warnings to be in all the places someone might see the 10^X numbers in isolation, if you don’t want this to happen. It probably happens anyway, on the grounds of ‘yes, that was technically not a proposal, but of course it will be treated like one.’ And there’s some truth in that, and in the idea that you want to use examples that are what you would actually pick right now if you had to pick what to actually do (or propose).
I do think the numbers I suggest are about as low as one could realistically get until we get much stronger evidence of impending big problems.
Secrecy is the exception. Mostly no one cares about your startup idea or will remember your hazardous brainstorm, no one is going to cause you trouble, and so on, and honesty is almost always the best policy.
That doesn’t mean always tell everyone everything, but you need to know what you are worried about if you are letting this block you.
On infohazards, I think people were far too worried for far too long. The actual dangerous idea turned out to be that AGI was a dangerous idea, not any specific thing. There are exceptions, but you need a very good reason, and an even better reason if it is an individual you are talking with.
Trust in terms of ‘they won’t steal from me’ or ‘they will do what they promise’ is another question with no easy answers.
If you are planning something radical enough to actually get people’s attention (e.g. breaking laws, using violence, fraud of various kinds, etc) then you would want to be a lot more careful who you tell, but also—don’t do that?
Sounds like your scale being stingier than mine is a lot of it. And it makes sense that the recommendations come apart at the extreme high end, especially for older films. The ‘for the time’ here is telling.
On my scale, if I went 1 for 7 on finding 4.0+ films in a year, then yeah I’d find that a disappointing year.
In other news, I tried out Scaruffi. I figured I’d watch the top pick. Number one was Citizen Kane, which I’d already watched (5.0, so that was a good sign), so I went with the next one, which was Repulsion. And… yeah, that was not a good selection method. Critics and I do NOT see eye to eye.
I also scanned their ratings of various other films, which generally seemed reasonable for films I’d seen, although with a very clear ‘look at me I am a movie critic’ bias, including one towards older films. I don’t know how to correct for that properly.
Real estate can definitely be a special case, because (1) you are also doing consumption, (2) it is non-recourse and you never get a margin call, which provides a lot of protection, and (3) the USG is massively subsidizing you doing that...
There are lead times to a lot of these actions, the costs of taking them are often fixed, and there is no reason to expect the rule changes not to happen. I buy that it is efficient to do so early.
‘Greed’ I consider a non-sequitur here; the manager will maximize profits.
I’m curious how many films you saw—having only one above 3.5 on that scale seems highly disappointing.
Argument from incredulity?
Thanks for the notes!
As I understand that last point, you’re saying that it’s not a good point because it is false (hence my ‘if it turns out to be true’). Weird that I’ve heard the claim from multiple places in these discussions. I assumed there was some sort of ‘order matters between pre-training and fine-tuning, obviously, but there’s a phase shift in what you’re doing between them’ going on. I also did wonder about the whole ‘you can remove Llama-2’s fine-tuning in 100 steps’ thing, since if that is true then presumably order must matter within fine-tuning.
Anyone think there’s any reason to think Pope isn’t simply technically wrong here (including Pope)?
Yep, whoops, fixing.
That seems rather loaded in the other direction. How about “The evidence suggests that if current ML systems were going to deceive us in scenarios that do not appear in our training sets, we wouldn’t be able to detect this or change them not to unless we found the conditions where it would happen.”?
Did you see this (https://thezvi.substack.com/p/balsa-update-and-general-thank-you)? That’s the closest thing available at the moment.
I think it works, yes. Indeed I have a canary on my Substack About page to this effect.