Email me at assadiguive@gmail.com if you want to discuss anything I posted here or just chat.
Guive
Not the main point here, but Huckleberry Finn is (rather famously) an anti-slavery work and not a good representation of the nineteenth-century racist worldview. A better example would be that a lot of college history classes assign parts of Mein Kampf.
Good question. You should check out Phil Trammell’s writing on patient philanthropy:
* https://80000hours.org/podcast/episodes/phil-trammell-patient-philanthropy/
* https://docs.google.com/document/d/1NcfTgZsqT9k30ngeQbappYyn-UO4vltjkm64n4or5r4/edit?tab=t.0
That seems like quite a reasonable assumption.
This was a fun read.
I’m struggling to come up with an example of a real dispute involving the intermediate value theorem. Can you suggest one?
Then what does it mean, in concrete terms? Can you give some probabilities for what you think will happen to which companies’ valuations, and over what time frame?
Even if the summary is accurate, it’s pretty bad to present a third party’s summary as a quote.
So do you think it’s 2 years now? Any update?
What model did OpenAI delete? Where can I learn more?
> The existential risk argument is suspiciously aligned with the commercial incentives of AI executives. It simultaneously serves to hype up capabilities and coolness while also directing attention away from the real problems that are already emerging. It’s suspicious that the apparent solution to this problem is to do more AI research as opposed to doing anything that would actually hurt AI companies financially.
This claim is bizarre, notwithstanding its popularity. If AI really is likely to destroy the world, that is bad for the industry: if this (putative) fact becomes widely known, the AI industry will probably be shut down. And obviously it would be worth imposing greater costs on AI companies to prevent the end of the world than to prevent the unemployment of translators or racial bias in image-generation models.
I don’t think this kind of relative-length-based analysis provides more than a trivial amount of evidence about their real views.
Yeah, things happened pretty slowly in general back then.
I don’t really believe there is any such thing as “epistemic violence.” In general, words are not violence.
There’s a similar effect with stage actors who are chosen partly for looking good when seen from far away.
I’m no expert on Albanian politics, but I think it’s pretty obvious this is just a gimmick with minimal broader significance.
Agreed.
The system prompt in claude.ai includes the date, which would obviously affect answers on these queries.
> At least, I have yet to find a Twitter user who regularly or irregularly talks about these things, and fails to boost obvious misinformation every once in a while.
Feel free to pass on this, but I would be interested in hearing about what obvious misinformation I’ve boosted if the spirit moves you to look.
Another issue is that these definitions typically don’t distinguish between models that would explicitly think about how to fool humans on most inputs, on a small percentage of inputs, or on such a tiny fraction of possible inputs that it doesn’t matter in practice.