LessWrong developer, rationalist since the Overcoming Bias days. Jargon connoisseur.
jimrandomh
We (the LW moderation team) have given Roko a one-week site ban and an indefinite post/topic ban for attempted doxing. We have deleted all comments that revealed real names, and ask that everyone respect the privacy of the people involved.
Genetically altering IQ is more or less about flipping a sufficient number of IQ-decreasing variants to their IQ-increasing counterparts. This sounds overly simplified, but it’s surprisingly accurate; most of the genetic variance in a trait like IQ is linear (additive), by which I mean the effect of a variant doesn’t usually depend on which other variants are present.
So modeling a continuous trait like intelligence is actually extremely straightforward: you simply add the effects of the IQ-increasing alleles to those of the IQ-decreasing alleles and then normalize the score relative to some reference group.
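To make that concrete, here’s a minimal sketch of an additive polygenic score; the effect sizes, genotypes, and reference population are all randomly generated placeholders, not real data:

```python
# A minimal sketch of an additive (linear) polygenic score, using made-up numbers.
import numpy as np

rng = np.random.default_rng(0)

n_variants = 10_000
effect_sizes = rng.normal(0, 0.05, n_variants)  # assumed per-allele effects on the trait
genotypes = rng.integers(0, 3, n_variants)      # 0/1/2 copies of each effect allele

# Additive model: the raw score is just the sum of (allele dosage * effect size).
raw_score = genotypes @ effect_sizes

# Normalize against a reference population to put the score on an IQ-like scale.
ref_genotypes = rng.integers(0, 3, (1_000, n_variants))
ref_scores = ref_genotypes @ effect_sizes
iq_estimate = 100 + 15 * (raw_score - ref_scores.mean()) / ref_scores.std()
print(round(iq_estimate, 1))
```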
If the mechanism of most of these genes is that their variants push something analogous to a hyperparameter in one direction or the other, and the number of hyperparameters is much smaller than the number of genes, then this strategy will greatly underperform the simulated prediction. This is because the cumulative effect of flipping all these genes will be to move the hyperparameters toward their optima, and then drastically overshoot them.
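Here’s a toy simulation of that overshoot, assuming the trait peaks when each of a handful of hyperparameters sits at an optimum, each variant nudges one hyperparameter slightly, and the population average sits a bit below the optimum (so the “+” alleles look beneficial in a linear model fit near the mean). All numbers are invented for illustration:

```python
# Toy model: a few hyperparameters with an optimum at 0, many small nudges per variant.
import numpy as np

rng = np.random.default_rng(1)

n_params, n_variants = 10, 5_000
param_of = rng.integers(0, n_params, n_variants)   # which hyperparameter each variant nudges
nudge = rng.choice([-0.01, 0.01], n_variants)      # direction and size of each nudge
baseline = np.full(n_params, -1.0)                 # population sits below the optimum (0)

def trait(genotype):                               # genotype: 0/1 per variant
    params = baseline + np.bincount(param_of, weights=nudge * genotype, minlength=n_params)
    return -np.sum(params ** 2)                    # quadratic peak when every hyperparameter is 0

typical = rng.integers(0, 2, n_variants)           # a typical genotype: about half of each allele
all_plus = (nudge > 0).astype(float)               # flip every variant to its "+" allele

# Near the population mean, each "+" allele has a small positive marginal effect,
# so a linear model extrapolates large gains from flipping everything...
print("typical trait: ", round(trait(typical), 2))
# ...but the actual effect is to push every hyperparameter well past its optimum.
print("all-'+' trait: ", round(trait(all_plus), 2))
```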
I think you’re modeling the audience as knowing a lot less than we do. Someone who didn’t know high school chemistry and biology would be at risk of being misled, sure. But I think that stuff should be treated as common-knowledge background. At which point, obviously, you unpack the claim to: the weakest links in a structure determine its strength; biological structures have weak links in them which are noncovalent bonds; not all of those noncovalent bonds are weak for functional reasons, some are just hard to reinforce while constrained to things made by ribosomes. The fact that most links are not the weakest links does not refute the claim. The fact that some weak links have a functional purpose, like enabling mobility, does not refute it either.
LW gives authors the ability to moderate comments on their own posts (particularly non-frontpage posts) when they reach a karma threshold. It doesn’t automatically remove that power when they fall back under the threshold, because this doesn’t normally come up (the threshold is only 50 karma). In this case, however, I’m taking the can-moderate flag off the account, since they’re well below the threshold and, in my opinion, abusing it. (They deleted this comment by me, which I undid, and this comment which I did not undo.)
We are discussing in moderator-slack and may take other actions.
Yeah, this is definitely a minimally-obfuscated autobiographical account, not hypothetical. It’s also false; there were lots of replies. Albeit mostly after Yarrow had already escalated (by posting about it on Dank EA Memes).
I don’t think this was about pricing, but about keeping occasional bits of literal spam out of the site search. The fact that we use the same search for both users looking for content, and authors adding stuff to Sequences, is a historical accident which makes for a few unfortunate edge cases.
Adam D’Angelo retweeted a tweet implying that hidden information still exists and will come out in the future:
Have known Adam D’Angelo for many years and although I have not spoken to him in a while, the idea that he went crazy or is being vindictive over some feature overlap or any of the other rumors seems just wrong. It’s best to withhold judgement until more information comes out.
While I could imagine someone thinking this way, I haven’t seen any direct evidence of it, and I think someone would need several specific false beliefs in order to wind up thinking this way.
The main thing is, any advantage that AI could give in derivatives trading is small and petty compared to what’s at stake. This is true for AI optimists (who think AI has the potential to solve all problems, including solving aging and making us effectively immortal). This is true for AI pessimists (who think AI will kill literally everyone). The failure mode of “picking up pennies in front of a steamroller” is common enough to have its own aphorism, but in this case it seems implausible.
Trading also has a large zero-sum component, which means that having AI while no one else does would be profitable, but society as a whole gaining AI would not profit traders much except in ways that the rest of society isn’t also profiting from.
Also worth calling out explicitly: There aren’t that many derivatives traders in the world, and the profession favors secrecy. I think the total influence of derivatives-trading on elite culture is pretty small.
Was Sam Altman acting consistently with the OpenAI charter prior to the board firing him?
Short answer: No, and trying this does significant damage to people’s health.
The prototypical bulimic goes through a cycle where they severely undereat overall, then occasionally experience (what feels from the inside like) a willpower failure which causes them to “binge”, eating an enormous amount in a short time. They’re then in a state where, if they let digestion run its course, they’d be sick from the excess; so they make themselves vomit, to prevent that.
I believe the “binge” state is actually hypoglycemia (aka low blood sugar), because (as a T1 diabetic) I’ve experienced it. Most people who talk about blood sugar in relation to appetite have never experienced blood sugar low enough to be actually dangerous; it’s very distinctive, and it includes an overpowering compulsion to eat. It also can’t be resolved faster than 15 minutes, because eating doesn’t raise blood sugar, digesting raises blood sugar; that can lead to consuming thousands of calories of carbs at once (which would be fine if spaced out a little, but is harmful if concentrated into such a narrow time window).
The other important thing about hypoglycemia is that being hypoglycemic is proof that someone’s fat cells aren’t releasing enough stored energy to survive on. The binge-eating behavior is a biological safeguard that prevents people from starving themselves so much that they literally die.
It’s an AWS firewall rule with bad defaults. We’ll fix it soon, but in the meantime, you can scrape if you change your user agent to something other than wget/curl/etc. Please use your name/project in the user-agent so we can identify you in logs if we need to, and rate-limit yourself conservatively.
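For example, something along these lines should work; the user-agent string, URL, and delay here are placeholders I made up, not required values:

```python
# Hypothetical scraping etiquette: identify yourself in the User-Agent and rate-limit conservatively.
import time
import requests

HEADERS = {"User-Agent": "my-research-project (contact: you@example.com)"}

urls = ["https://www.lesswrong.com/allPosts"]  # whatever pages you actually need
for url in urls:
    response = requests.get(url, headers=HEADERS, timeout=30)
    response.raise_for_status()
    # ... process response.text ...
    time.sleep(5)  # conservative self-imposed rate limit
```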
I wrote about this previously here. I think you have to break it down by company; the answer for why they’re not globally available is different for the different companies.
For Waymo, they have self-driving taxis in SF and Phoenix without safety drivers. They use LIDAR, so instead of the cognitive task of driving as a human would solve it, they have substituted the easier task “driving but your eyes are laser rangefinders”. The reason they haven’t scaled to cover every city, or at least more cities, is unclear to me; the obvious possibilities are that the LIDAR sensors and onboard computers are impractically expensive, that they have a surprisingly high manual-override rate and there’s a big unscalable call center somewhere, or that they’re being cowardly and trying to maintain zero fatalities forever (at scales where a comparable fleet of human-driven taxis would definitely have some fatalities). In any case, I don’t think the software/neural nets are likely to be the bottleneck.
For Tesla, until recently, they were using surprisingly-low-resolution cameras. So instead of the cognitive task of driving as a human would solve it, they substituted the harder task “driving with a vision impairment and no glasses”. They did upgrade the cameras within the past year, but it’s hard to tell how much of the customer feedback represents the current hardware version vs. past versions; sites like FSDBeta Community Tracker don’t really distinguish. It also seems likely that their onboard GPUs are underpowered relative to the task.
As for Cruise, Comma.ai, and others—well, distance-to-AGI is measured only from the market leader, and just as GPT-4, Claude and Bard have a long tail of inferior models by other orgs trailing behind them, you also expect a long tail of self-driving systems with worse disengagement rates than the leaders.
It seems likely that all relevant groups are cowards, and none are willing to move forward without a more favorable political context. But there’s another possibility not considered here: perhaps someone has already done a gene-drive mosquito release in secret, but we don’t know about it because it didn’t work. This might happen if local mosquito populations mix too slowly compared to how long it takes a gene-driven population to crash; or if the initial group all died out before they could mate; or if something in the biology of the gene-drive machinery didn’t function as expected.
If that were the situation, then the world would have a different problem than the one we think it has: inability to share information about what the obstacle was and debug the solution.
Unfortunately the ban-users-from-posts feature has a Rube Goldberg machine of rules around it that were never written down, and because there was no documentation to check it against, I’ve never managed to give it a proper QA pass. I’d be interested in reports of people’s experience with it, but I do not have confidence that this feature works without major bugs.
You should think less about PR and more about truth.
Mod note: I count six deleted comments by you on this post. Of these, two had replies (and so were edited to just say “deleted”), one was deleted quickly after posting, and three were deleted after they’d been up for a while. This is disruptive to the conversation. It’s particularly costly when the subject of the top-level post is about conversation dynamics themselves, which the deleted comments are instances (or counterexamples) of.
You do have the right to remove your post/comments from LessWrong. However, doing so frequently, or in the middle of active conversations, is impolite. If you predict that you’re likely to wind up deleting a comment, it would be better to not post it in the first place. LessWrong has a “retract” button which crosses out text (keeping it technically-readable but making it annoying to read so that people won’t); this is the polite and epistemically-virtuous way to handle comments that you no longer stand by.
The thing I was referring to was an exchange on Facebook, particularly the comment where you wrote:
also i felt like there was lots of protein, but maybe folks just didn’t realize it? rice and most grains that are not maize have a lot (though less densely packed) and there was a lot of quinoa and nut products too
That exchange was salient to me because, in the process of replying to Elizabeth, I had just searched my FB posting history and reread what veganism-related discussions I’d had, including that one. But I agree, in retrospect, that calling you a “vegan advocate” was incorrect. I extrapolated too far from remembering that you were vegan at the time and from the stance you took in that conversation. The distinction matters both from the perspective of not generalizing to vegan advocates in general, and because the advocate role carries higher expectations about nutrition-knowledge than participating casually in a Facebook conversation does.
I draw a slightly different conclusion from that example: that vegan advocates in particular are a threat to truth-seeking in AI alignment. Because I recognize the name, and that’s a vegan who’s said some extremely facepalm-worthy things about nutrition to me.
I believe that her summaries are a strong misrepresentation of my views, and explained why in the above comment through object-level references comparing my text to her summaries.
I’m looking at those quote-response pairs, and just not seeing the mismatch you claim there to be. Consider this one:
The charitable explanation here is that my post focuses on naive veganism, and Soto thinks that’s a made-up problem.
Of course, my position is not as hyperbolic as this.
This only asserts that there’s a mismatch; it provides no actual evidence of one. Next up:
his desired policy of suppressing public discussion of nutrition issues with plant-exclusive diets will prevent us from getting the information to know if problems are widespread
In my original answers I address why this is not the case (private communication serves this purpose more naturally).
Pretty straightforwardly, if the pilot study results had only been sent through private communications, then there would have been no public discussion of them (i.e., public discussion would have been suppressed). I myself wouldn’t know about the results. The probability of a larger follow-up study would be greatly reduced. I personally would have less information about how widespread problems are.
There’s a big difference between arguing that someone shouldn’t be able to stay anonymous, and unilaterally posting names. Arguing against allowing anonymity (without posting names) would not have been against the rules. But, we’re definitely not going to re-derive the philosophy of when anonymity should and shouldn’t be allowed, after names are already posted. The time to argue for an exception was beforehand, not after the fact.