This also stood out to me as a truly insane quote. He’s almost but not quite saying “we have raised awareness that this bad thing can happen by doing the bad thing”
MichaelDickens
“we would also expect general support for OpenAI to be likely beneficial on its own” seems to imply that they did think it was good to make OAI go faster/better, unless that statement was a lie to avoid badmouthing a grantee.
Some ideas:
Make Sam Altman look stupid on Twitter, which will marginally persuade more employees to quit and more potential investors not to invest (this is my worst idea but also the easiest, and people seem to pretty much have this one covered already)
Pay a fund to hire a good lawyer to figure out a strategy for nullifying the non-disparagement agreements. Maybe a class-action lawsuit, maybe a lawsuit on behalf of one individual, maybe trying to charge Altman with some sort of crime. I’m not sure of the best way to do this, but that’s the lawyer’s job to figure out.
Have everyone call their representative in support of SB 1047, or maybe even say you want SB 1047 to have stronger whistleblower protections or something similar.
When the NYT article came out, some people discussed the hypothesis that perhaps the article was originally going to be favorable, but the editors at NYT got mad when Scott deleted his blog so they forced Cade to turn it into a hit piece. This interview pretty much demonstrates that it was always going to be a hit piece (and, as a corollary, Cade lied to people saying it was going to be positive to get them to do interviews).
So yes this changed my view from “probably acted unethically but maybe it wasn’t his fault” to “definitely acted unethically”.
people have repeatedly told me that a surprisingly high fraction of applicants for programming jobs can’t do fizzbuzz
I’ve heard it argued that this isn’t representative of the programming population. Rather, people who suck at programming (and thus can’t get jobs) apply to way more positions than people who are good at programming.
I have no idea if it’s true, but it sounds plausible.
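For context, a complete FizzBuzz solution, the kind of thing these applicants reportedly can't write, fits in a few lines of Python:

```python
def fizzbuzz(n):
    """Return the FizzBuzz sequence for 1..n as a list of strings."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(fizzbuzz(15))
```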
Just last week I wrote a post reviewing the evidence on caffeine cycling and caffeine habituation. My conclusion was that the evidence was thin and it’s hard to say anything with confidence.[1]
My weakly held beliefs are:
Taking caffeine daily is better than not taking it at all, but worse than cycling.
Taking caffeine once every 3 days is a reasonable default. A large percentage of people can take it more often than that, and a large percentage will need to take it less often.
I take caffeine 3 days a week and I am currently running a self-experiment (described in my linked post). I’m currently in the experimental phase; I already did a 9-day withdrawal period, and my test results over that period (weakly) suggest that I wasn’t habituated previously, because my performance didn’t improve during the withdrawal period (it actually got worse, p=0.4 on a regression test).
[1] Gavin Leech’s post that you linked cited a paper on brain receptors in mice which I was unaware of; I will edit my post to include it. Based on reading the abstract, it looks like that study suggests a weaker habituation effect than the studies I looked at (receptor density in mice increased by 20–25%, which naively suggests a 20–25% reduction in the benefit of caffeine, whereas other studies suggest a 30–100% reduction; but I’m guessing you can’t just directly extrapolate from receptor counts to efficacy like that). Gavin also cited Rogers et al. (2013), which I previously skipped over because I thought it wasn’t relevant, but on second thought it does look relevant and I will give it a closer look.
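A minimal sketch of the kind of slope test described above. The day/score numbers here are invented for illustration (not my actual data), and I'm using a permutation test as a stand-in for whatever regression software is actually used:

```python
import random

def ols_slope(xs, ys):
    """Least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

def permutation_p_value(xs, ys, trials=10_000, seed=0):
    """Two-sided p-value for the slope, by shuffling ys relative to xs."""
    rng = random.Random(seed)
    observed = abs(ols_slope(xs, ys))
    ys = list(ys)
    hits = 0
    for _ in range(trials):
        rng.shuffle(ys)
        if abs(ols_slope(xs, ys)) >= observed:
            hits += 1
    return hits / trials

# Hypothetical cognitive-test scores over a 9-day withdrawal period.
days = list(range(1, 10))
scores = [52, 50, 53, 49, 51, 48, 50, 47, 49]
slope = ols_slope(days, scores)
p = permutation_p_value(days, scores)
print(f"slope = {slope:.2f} points/day, p = {p:.2f}")
```

A negative slope with a large p-value would read the same way as the result above: performance drifted down during withdrawal, but not decisively.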
What’s going on with /r/AskHistorians?
AFAIK, /r/AskHistorians is the best place to hear from actual historians about historical topics. But I’ve noticed some trends that make it seem like the historians there generally share some bias or agenda, but I can’t exactly tell what that agenda is.
The most obvious thing I noticed is from their FAQ on historians’ views on other [popular] historians. I looked through these and in every single case, the /r/AskHistorians commenters dislike the pop historian. Surely at least one pop historian got it right?
I don’t know about the actual object level, but a lot of /r/AskHistorians’ criticisms strike me as weak:
They criticize Dan Carlin for (1) allegedly downplaying the Rape of Belgium even though by my listening he emphasized pretty strongly how bad it was and (2) doing a bad job answering “could Caesar have won the Battle of Hastings?” even though this is a thought experiment, not a historical question. (Some commenters criticize him for being inaccurate and others criticize him for being unoriginal, which are contradictory criticisms.)
They criticize Guns, Germs, and Steel for... honestly, I’m a little confused about how this person disagrees with GGS.
Lots of criticisms of popular works for being “oversimplified”, which strikes me as a dumb criticism—everything is simplified, the map is always less detailed than the territory.
They criticize The Better Angels of Our Nature for taking implausible figures from ancient historians at face value (fair) and for using per capita deaths instead of total deaths (per capita seems obviously correct to me?).
Seems like they are bending over backwards to talk about how bad popular historical media are, while not providing substantive criticisms. I’ve also noticed they like to criticize media for not citing any sources (or for citing sources that aren’t sufficiently academic), but then they usually don’t cite any sources themselves.
I don’t know enough about history to know whether /r/AskHistorians is reliable, but I see some meta-level issues that make me skeptical. I want to get other people’s takes. Am I being unfair to /r/AskHistorians?
(I don’t expect to find a lot of historians on LessWrong, but I do expect to find people who are good at assessing credibility.)
If you disagree but can’t succinctly explain, I would suggest doing one of these things:
Write a long comment explaining your disagreement
Write a short comment stating your specific points of disagreement, with a disclaimer that you don’t have time to fully justify your beliefs
Your comment is being downvoted (I suspect) because it does neither of these; instead, it indirectly insults the author without providing any information about why you disagree. IMO this sort of comment doesn’t really contribute anything—all I know is that you disagree. I have no idea what’s going on inside your head, so I’m not learning anything from it.
10 million dollars will probably have very small impact on Terry Tao’s decision to work on the problem.
That might be true for him specifically, but I’m sure there are plenty of world-class researchers who would find $10 million (or even $1 million) highly motivating.
I thought it was obviously fiction, but I didn’t know that it was set in Dath Ilan, and the fact that it’s set in Dath Ilan would give away that the red hair thing is fake.
Have there been any great discoveries made by someone who wasn’t particularly smart?
This seems worth knowing if you’re considering pursuing a career with a low chance of high impact. Is there any hope for relatively ordinary people (like the average LW reader) to make great discoveries?
I “know” that nausea can be handled with a pill, but it had never occurred to me to carry around a couple anti-nausea pills.
The contextualizer/decoupler punch is an outstanding joke.
A related pattern I noticed recently:
Alice asks, “What effect does X have on Y?”
Bob, an expert in Y, replies, “There are many variables that impact Y, and you can’t reduce it to simply X.”
Alice asked for a one-variable model with limited but positive predictive power, and Bob replied with a zero-variable model with no predictive power whatsoever.
You see this sort of thing with acquisitions. Say company A is currently priced at $100, and company B announces that it’s acquiring A for $200 per share. A will jump up to something like $170 per share, and then slowly increase to $200 on the acquisition date. The $30 gap is there because there’s some probability that the acquisition will fall through, and that probability decreases over time (unless it actually does fall through, in which case the price drops back down to ~$100).
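The implied probability in that example can be backed out with one line of arithmetic. A sketch (the function name and the simplifying assumptions, no time discounting and a binary deal outcome, are mine):

```python
def implied_deal_probability(current_price, offer_price, standalone_price):
    """Probability of deal completion implied by the current price,
    assuming the stock is worth offer_price if the deal closes and
    falls back to standalone_price if it breaks (ignoring time value)."""
    return (current_price - standalone_price) / (offer_price - standalone_price)

# Numbers from the example above: $100 standalone, $200 offer, trading at $170.
p = implied_deal_probability(170, 200, 100)
print(f"implied probability of completion = {p:.0%}")  # → 70%
```

As the deal date approaches and completion looks more certain, p rises toward 1 and the price rises toward the offer price.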
The high-level explanation I’d give for this is that smart people make better decisions in general, and certain classes of bad decisions are also illegal. So perhaps the reason smart people follow rules more isn’t that they’re more inherently rule-abiding, but that they behave in more reasonable ways, and rules tend to be reasonable (obviously not always, but they’re more reasonable than if they were assigned at random).
I was just thinking, not 10 minutes ago, about how that one LW user who casually brought up Daniel K’s equity (I didn’t remember your username) had a massive impact, and how grateful I am to them.
There’s a plausible chain of events where simeon_c brings up the equity > it comes to more people’s attention > OpenAI goes under scrutiny > OpenAI becomes more transparent > OpenAI can no longer maintain its de facto anti-safety policies > either OpenAI changes policy to become much more safety-conscious, or loses power relative to more safety-conscious companies > we don’t all die from OpenAI’s unsafe AI.
So you may have saved the world.
Update: I finished my self-experiment, results are here: https://mdickens.me/2024/04/11/caffeine_self_experiment/
I find that sort of feedback more palatable when they start with something like “This is not related to your main point but...”
I am more OK with talking about tangents when the commenter understands that it’s a tangent.
I am pretty uncertain about whether this change is good, and I don’t think anyone can confidently say it is or isn’t good. But no other forum with voting does this (AFAIK), so it’s good to try it and see what happens.
Something to think about: What sorts of observations might constitute evidence in favor of or against this system?