tbf I never realized “sic” was mostly meant to point out errors, specifically. I thought it was used to mean “this might sound extreme—but I am in fact quoting literally”
I mean that in both cases he used literally those words.
It’s not epistemically poor to say these things if they’re actually true.
Invalid.
Compare:
A: “So I had some questions about your finances, it seems your trading desk and exchange operate sort of closely together? There were some things that confused me...”
B: “our team is 20 insanely smart engineers”
A: “right, but i had a concern that i thought perhaps—”
B: “if you join us and succeed you’ll be a multi millionaire”
A: “...okay, but what if there’s a sudden downturn—”
B: “bull market is inevitable right now”
Maybe not false. But epistemically poor form.
(crossposted to the EA Forum)
(😭 there has to be a better way of doing this, lol)
(crossposted to EA forum)
I agree with many of Leopold’s empirical claims, timelines, and analyses. I’m treating them as something like a mainline scenario in my own planning.
Nonetheless, the piece exhibited some patterns that gave me a pretty strong allergic reaction. It made or implied claims like:
a small circle of the smartest people believe this
i will give you a view into this small elite group, who are the only ones who are situationally aware
the inner circle longed tsmc way before you
if you believe me, you can get 100x richer—there’s still alpha, you can still be early
This geopolitical outcome is “inevitable” (sic!)
in the future the coolest and most elite group will work on The Project. “see you in the desert” (sic)
Etc.
Combined with a lot of praising retweets on launch day, clearly coordinated behind the scenes, it gives me the feeling of a piece deliberately written to meme a narrative into existence via self-fulfilling prophecy, rather than to infer a forecast via analysis.
As a sidenote, this felt to me like an indication of how different the AI-safety-adjacent community is now from when I joined it about a decade ago. In the early days of this space, I expect a piece like this would have been something like “epistemically cancelled”: fairly strongly decried as violating important norms around reasoning and cooperation. I actually expect that had someone written this publicly in 2016, they would plausibly have been disinvited as a speaker from any EAGs in 2017.
I don’t particularly want to debate whether these epistemic boundaries were correct—I’d just like to claim that, empirically, I think they de facto would have been enforced. Though, if others who have been around have a different impression of how this would’ve played out, I’d be curious to hear.
[censored_meme.png]
I like review bot and think it’s good
(Sidenote: it seems Sam was kind of explicitly asking to be pressured, so your comment seems legit :)
But I also think that, had Sam not done so, I would still really appreciate him showing up and responding to Oli’s top-level post, and I think it should be fine for folks from companies to show up and engage with the topic at hand (NDAs), without also having to do a general AMA about all kinds of other aspects of their strategy and policies. If Zach’s questions do get very upvoted, though, it might suggest there’s demand for some kind of Anthropic AMA event.)
Poor Review Bot, why do you get so downvoted? :(
I was around a few years ago when there were already debates about whether 80k should recommend OpenAI jobs. And that was before any of the fishy stuff leaked out, back when they were stacking up cool governance commitments like becoming a capped-profit and having a merge-and-assist clause.
And, well, in hindsight it sure seems like a mistake how much advertising they got.
30 kW
typo
Not sure how to interpret the “agree” votes on this comment. If someone is able to share that they agree with the core claim because of object-level evidence, I am interested. (Rather than agreeing with the claim that this state of affairs is “quite sad”.)
Does anyone from Anthropic want to explicitly deny that they are under an agreement like this?
(I know the post talks about some, and not necessarily all, employees, but I’m still interested.)
Note that, by the grapevine, serving inference requests might sometimes lose OpenAI money, due to them subsidising it. Not sure how this relates to boycott incentives.
That metaphor suddenly slid from chess into poker.
If AI ends up intelligent enough, and with enough manufacturing capability, to threaten nuclear deterrence, I’d expect it to also deduce any conclusions I would.
So it seems mostly a question of whether the world acts on those conclusions earlier, rather than whether it gets them at all.
A key exception is if later AGI would be blocked on certain kinds of manufacturing to create its destabilizing tech, and if drawing attention to it now would start that serially-blocking work earlier.
I have thoughts on the impact of AI on nuclear deterrence, and on the claims made about it in the post.
But I’m uncertain whether it’s wise to discuss such things publicly.
Curious if folks have takes on that. (The meta question)
y’know, come to think of it… Training and inference differ massively in how much compute they consume. So after you’ve trained a massive system, you have a lot of compute free to do inference (modulo needing to use it to generate revenue, run your apps, etc.). Meaning that for large-scale, critical applications, it might in fact be feasible to tolerate a big, multiple-OOMs hit to the compute cost of your inference, if that’s all that’s required to get the zero-knowledge benefits, and if those are crucial.
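To make the arithmetic concrete, here’s a minimal back-of-envelope sketch. Every number in it (total training FLOP, run length, per-query inference FLOP, the proof overhead factor) is an illustrative assumption I made up, not a sourced estimate:

```python
# Back-of-envelope: how many proved inference queries per day could an
# idle training cluster serve? All numbers are illustrative assumptions.

TRAIN_FLOP = 1e25              # assumed total training compute
TRAIN_DAYS = 100               # assumed length of the training run
INFER_FLOP_PER_QUERY = 1e12    # assumed plain per-query inference cost
PROOF_OVERHEAD = 1e3           # assumed 3-OOM zero-knowledge overhead

# Compute freed up per day once training ends, if the cluster keeps
# the same sustained throughput it had during the run.
daily_flop = TRAIN_FLOP / TRAIN_DAYS

plain_queries = daily_flop / INFER_FLOP_PER_QUERY
proved_queries = daily_flop / (INFER_FLOP_PER_QUERY * PROOF_OVERHEAD)

print(f"plain inference:     {plain_queries:.1e} queries/day")
print(f"with proof overhead: {proved_queries:.1e} queries/day")
```

On these made-up numbers, a 3-OOM overhead still leaves on the order of 1e8 provable queries per day, which is the sense in which a multi-OOM hit might be tolerable for a small set of critical applications.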
“arguments” is perhaps a bit generous of a term...
(also, lol at this being voted into the negative! Giving karma as encouragement seems like a great thing. It’s the whole point of it. It’s even a venerable LW tradition, and was how people incentivised participation in the annual community surveys in the olden days)
Someone posted these quotes in a Slack I’m in… what Ellsberg said to Kissinger:
[...]
(link)