The target audience for Soylent is much weirder. Although, TBF, I originally thought the Soylent branding was a bad idea, and I was probably wrong about that.
This also stood out to me as a truly insane quote. He’s almost but not quite saying “we have raised awareness that this bad thing can happen by doing the bad thing”
Some ideas:
Make Sam Altman look stupid on Twitter, which will marginally persuade more employees to quit and more potential investors not to invest (this is my worst idea but also the easiest, and people seem to pretty much have this one covered already)
Pay a fund to hire a good lawyer to figure out a strategy for nullifying the non-disparagement agreements. Maybe a class-action lawsuit, maybe a lawsuit on behalf of one individual, maybe try to charge Altman with some sort of crime. I'm not sure of the best way to do this, but that's the lawyer's job to figure out.
Have everyone call their representative in support of SB 1047, or maybe even say you want SB 1047 to have stronger whistleblower protections or something similar.
“we would also expect general support for OpenAI to be likely beneficial on its own” seems to imply that they did think it was good to make OAI go faster/better, unless that statement was a lie to avoid badmouthing a grantee.
What do you think is the strongest evidence on sunscreen? I’ve read mixed things on its effectiveness.
Update: I finished my self-experiment, results are here: https://mdickens.me/2024/04/11/caffeine_self_experiment/
Have there been any great discoveries made by someone who wasn’t particularly smart?
This seems worth knowing if you’re considering pursuing a career with a low chance of high impact. Is there any hope for relatively ordinary people (like the average LW reader) to make great discoveries?
I find that sort of feedback more palatable when they start with something like “This is not related to your main point but...”
I am more OK with talking about tangents when the commenter understands that it’s a tangent.
I wonder if there’s a good way to call out this sort of feedback? I might start trying something like
That’s a reasonable point, I have some quibbles with it but I think it’s not very relevant to my core thesis so I don’t plan on responding in detail.
(Perhaps that comes across as rude? I’m not sure.)
I realize I got to this thread a bit late but here are two things you can do:
Pull-up negatives. Use your legs to jump up to the top of a pull-up position and then lower yourself as slowly as possible.
Banded pull-ups. This might be tricky to set up in a doorway but if you can, tie a resistance band at a height where you can kneel on it while doing pull-ups and the band will help push you up.
When the NYT article came out, some people discussed the hypothesis that perhaps the article was originally going to be favorable, but the editors at NYT got mad when Scott deleted his blog so they forced Cade to turn it into a hit piece. This interview pretty much demonstrates that it was always going to be a hit piece (and, as a corollary, Cade lied to people saying it was going to be positive to get them to do interviews).
So yes this changed my view from “probably acted unethically but maybe it wasn’t his fault” to “definitely acted unethically”.
people have repeatedly told me that a surprisingly high fraction of applicants for programming jobs can’t do fizzbuzz
I’ve heard it argued that this isn’t representative of the programming population. Rather, people who suck at programming (and thus can’t get jobs) apply to way more positions than people who are good at programming.
I have no idea if it’s true, but it sounds plausible.
On the note of wearing helmets, wearing a helmet while walking is plausibly as beneficial as wearing one while cycling[1]. So if you weren’t so concerned about not looking silly[2], you’d wear a helmet while walking.
[1] I’ve heard people claim that this is true. I haven’t looked into it myself but I find the claim plausible because there’s a clear mechanism—wearing a helmet should reduce head injuries if you get hit by a car, and deaths while walking are approximately as frequent as deaths while cycling.
[2] I’m using the proverbial “you” in the same way as Mark Xu.
Just last week I wrote a post reviewing the evidence on caffeine cycling and caffeine habituation. My conclusion was that the evidence was thin and it’s hard to say anything with confidence.[1]
My weakly held beliefs are:
Taking caffeine daily is better than not taking it at all, but worse than cycling.
Taking caffeine once every 3 days is a reasonable default. A large % of people can take it more often than that, and a large % will need to take it less.
I take caffeine 3 days a week and am currently running a self-experiment (described in my linked post). I'm in the experimental phase; I already completed a 9-day withdrawal period, and my test results over that period (weakly) suggest I wasn't previously habituated, because my performance didn't improve during withdrawal (it actually got worse; p=0.4 on a regression test).
[1] Gavin Leech’s post that you linked cited a paper on brain receptors in mice which I was unaware of; I will edit my post to include it. Based on the abstract, that study suggests a weaker habituation effect than the studies I looked at: receptor density in mice increased by 20–25%, which naively suggests a 20–25% reduction in the benefit of caffeine, whereas other studies suggest a 30–100% reduction (though I'm guessing you can't just directly extrapolate from receptor counts to efficacy like that). Gavin also cited Rogers et al. (2013), which I previously skipped over because I thought it wasn't relevant, but on second thought it does look relevant and I will give it a closer look.
The contextualizer/decoupler punch is an outstanding joke.
Based on your explanation in this comment, it seems to me that St. Petersburg-like prospects don’t actually invalidate utilitarian ethics as it would have been understood by e.g. Bentham, but they do contradict the existence of a real-valued utility function. It can still be true that welfare is the only thing that matters, and that the value of welfare aggregates linearly. It’s not clear how to choose when a decision has multiple options with infinite expected utility (or an option that has infinite positive EV plus infinite negative EV), but I don’t think these theorems imply that there cannot be any decision criterion consistent with the principles of utilitarianism. (At the same time, I don’t know what that decision criterion would actually be.) Perhaps you could have a version of Bentham-esque utilitarianism that uses a real-valued utility function for finite values and some other decision procedure for infinite values.
Ok, fair point, I was going too far in assuming that the sort of engineering necessary was physically impossible.
I think the evidence against (most) miracles is stronger because they violate the laws of physics. Although I think the same could be said for a few UAPs—if a UAP moves in a way that is physically impossible as far as we know, that’s strong evidence against it being aliens, because aliens still have to follow the laws of physics.
How would a tic-tac accelerate at 700g with no visible propulsion, even positing the existence of super-advanced technology? The best I can think of off the top of my head is that it’s using an extremely strong magnet to manipulate its position relative to earth’s magnetic field. But that would require an absurd amount of energy, so it would probably need to be powered by a tiny cold fusion reactor (which may be physically impossible). It would also need to avoid emitting noticeable amounts of heat; even with some sort of hyper-insulating shell, it would need internal parts that don’t evaporate under that much heat, and it would still have to avoid emitting the massive amount of heat generated by friction with the air.
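The "absurd amount of energy" claim can be made concrete with a back-of-the-envelope estimate. All of the inputs below are assumptions I'm supplying for illustration (the craft's mass and speed are not known), so treat the output as an order-of-magnitude sketch, not a measurement:

```python
# Rough power estimate for an object accelerating at 700 g.
# Mass and speed are assumed values, not observations.
G = 9.81              # m/s^2, standard gravity
mass = 1000.0         # kg, assumed (roughly small-aircraft scale)
accel = 700 * G       # ~6,867 m/s^2
speed = 100.0         # m/s, assumed instantaneous speed

force = mass * accel  # N, from F = m * a
power = force * speed # W, from P = F * v
print(f"force ~= {force/1e6:.1f} MN, power ~= {power/1e9:.2f} GW")
```

Under these assumptions the required power is on the order of 0.7 GW, comparable to a power plant's output, packed into a tic-tac-shaped body, which is why the energy (and waste heat) problem looks so severe.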
To add more on “what we don’t see”: if some UAPs are aliens, why have they been on earth for decades, but they haven’t done anything yet other than fly around? Why have they never landed (or, if they’ve landed, why did they only land at secret military bases)? My prior is that if intelligent aliens visited earth, they would do one of two things:
They arrive in force, and their presence quickly becomes undeniable.
Their scouts arrive and fly around for only a short time.
It seems a lot less likely that they’d arrive, fly around for decades, get spotted several times, but only ever in the distance.
I was just thinking, not 10 minutes ago, about how that one LW user who casually brought up Daniel K’s equity (I didn’t remember your username) had a massive impact, and how grateful I am to them.
There’s a plausible chain of events where simeon_c brings up the equity > it comes to more people’s attention > OpenAI goes under scrutiny > OpenAI becomes more transparent > OpenAI can no longer maintain its de facto anti-safety policies > either OpenAI changes policy to become much more safety-conscious, or loses power relative to more safety-conscious companies > we don’t all die from OpenAI’s unsafe AI.
So you may have saved the world.