>it appears in Australia the right wing are staunchly antisemitic—which is giving me a bit of conceptual whiplash.
Did you mean pro-?
Feedback: The cover image and choice of font are bizarre and off-putting to me. Bubbly font with a giant HOMO and a weird diseased-looking pink gun give me more vibes of homosexuality than rationality.
In the tradeoff between emphasis on intellectual exploration vs. emphasis on correctness and applicability, LW seems to have moved closer to the latter, and I think you’re mourning the former. I do feel this has been driven largely by AI moving from a speculative idea to very much a reality.
Also re: Said Achmiz—to quote The Big Lebowski, “You’re not wrong, Walter. You’re just an asshole.”
(I agree with Said on the object level more often than not, but his tone can be more abrasive than necessary. Then again, too much agreeableness can make it hard to get at the truth. Sometimes the truth hurts.)
That’s a reasonable suspicion, but as a counterpoint, there might be more low-hanging fruit in biomedicine than in math, precisely because ideas are harder to test in the former. Since math doesn’t require expensive experiments, it has already been driven much deeper than other fields, and therefore requires a deeper understanding to have any hope of making novel progress.
edit: Also, if I recall correctly, the average IQ of mathematicians is higher than that of biologists, which is consistent with it being harder to make progress in math.
>If there were, we would’ve probably heard about massive shifts in how scientists (and entrepreneurs!) are doing their work.
I have been seeing a bit of this, mostly uses of o1-pro and OpenAI Deep Research in chem/bio/medicine, and mostly via Twitter hype so far. But it might be the start of something.
In principle, distressed sales shouldn’t affect the long-term price, since they have nothing to do with fundamentals—so it’s really just a discount for non-distressed buyers. However, crypto is weird and more like a Keynesian beauty contest than most things, so who knows.
I think they meant that as an analogy for how developed/sophisticated it is (i.e., they’re saying it’s still early days for reasoning models and to expect rapid improvement), not that the underlying model size is similar.
That’s a PR-friendly way of saying that it failed to reach PMF (product-market fit).
Thanks for fixing this. The ‘A’ thing in particular caused me multiple times to try to edit comments, thinking that I’d omitted a space.
This sounds like democracy-washing rule by unaccountable “experts”.
>many of the top films by rating are anime
Not sure 4 of the top 100 being anime counts as unexpectedly many.
Not clear to me how to interpret the chart.
FWIW I downvoted this mainly because I thought you were much too quick to dismiss the existing literature on this topic in favour of your personal theories, which is a bit of a bad habit around here.
>It is times like this that it is
The end of this sentence appears to be missing.
This seems mostly fine for anyone who doesn’t engage in political advocacy or activism, but a mild-to-moderate form of defection against society if you do—because if dragons are real, society should probably do something about that, even if you personally can’t.
edit: I guess dragon-agnosticism is tolerable if you avoid advocating for (and ideally voting for) policies that would be disastrous if dragons do in fact exist.
You describe Sam as going “mask off” with his editorial, but it feels more like mask on to me—I’d guess he went with the nationalist angle because he thinks it will sell, not because it’s his personal highest priority.
>they’ve been much more effective at getting their priorities funded than you have been!
Sounds plausible, but do you have any numeric evidence for this?
What leads MIRI to believe that this policy of being very outspoken will work better than the expert-recommended policy of being careful what you say?
(Not saying it won’t work, but this post doesn’t seem to say why you think it will.)
Great post. I wonder how to determine a “reasonable” maximum epsilon to use in adversarial training. Does performance on normal examples get worse as epsilon increases?
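For concreteness, here’s a minimal sketch of the kind of setup I have in mind, assuming FGSM-style adversarial training in PyTorch (the function names and epsilon values are illustrative placeholders of mine, not anything from the post):

```python
# Sketch of FGSM-style adversarial training; epsilon bounds the L-inf
# size of the perturbation. All names here are illustrative placeholders.
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon):
    """Return an adversarial example within an L-inf ball of radius epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss; clamp to valid pixel range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

def adversarial_train_epoch(model, loader, optimizer, epsilon):
    model.train()
    for x, y in loader:
        x_adv = fgsm_perturb(model, x, y, epsilon)
        optimizer.zero_grad()  # discard grads accumulated while crafting x_adv
        F.cross_entropy(model(x_adv), y).backward()
        optimizer.step()

# One way to answer the second question empirically: train a fresh model for
# each epsilon in a sweep like [0.0, 0.01, 0.03, 0.1], then compare accuracy
# on clean (unperturbed) test examples as epsilon grows.
```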
Based on the last paragraph, it doesn’t sound like OpenAI specifically was asked to do this?