LessWrong developer, rationalist since the Overcoming Bias days. Jargon connoisseur. Currently working on auto-applying tags to LW posts with a language model.
jimrandomh (Jim Babcock)
While I could imagine someone thinking this way, I haven’t seen any direct evidence of it, and I think someone would need several specific false beliefs in order to wind up thinking this way.
The main thing is, any advantage that AI could give in derivatives trading is small and petty compared to what’s at stake. This is true for AI optimists (who think AI has the potential to solve all problems, including aging, making us effectively immortal). This is true for AI pessimists (who think AI will kill literally everyone). The failure mode of “picking up pennies in front of a steamroller” is common enough to have its own aphorism, but with stakes this lopsided it seems implausible here.
Trading also has a large zero-sum component, which means that having AI while no one else does would be profitable, but society as a whole gaining AI would not profit traders much, except in ways that the rest of society also profits from.
Also worth calling out explicitly: There aren’t that many derivatives traders in the world, and the profession favors secrecy. I think the total influence of derivatives-trading on elite culture is pretty small.
Was Sam Altman acting consistently with the OpenAI charter prior to the board firing him?
Short answer: No, and trying this does significant damage to people’s health.
The prototypical bulimic goes through a cycle where they severely undereat overall, then occasionally experience (what feels from the inside like) a willpower failure which causes them to “binge”, eating an enormous amount in a short time. They’re then in a state where, if they let digestion run its course, they’d be sick from the excess; so they make themselves vomit, to prevent that.
I believe the “binge” state is actually hypoglycemia (aka low blood sugar), because, as a T1 diabetic, I’ve experienced it. Most people who talk about blood sugar in relation to appetite have never experienced blood sugar low enough to be actually dangerous; it’s very distinctive, and it includes an overpowering compulsion to eat. It also can’t be resolved faster than about 15 minutes, because eating doesn’t raise blood sugar, digesting does; that lag can lead to consuming thousands of calories of carbs at once (which would be fine if spaced out a little, but is harmful when concentrated into such a narrow time window).
The other important thing about hypoglycemia is that being hypoglycemic is proof that someone’s fat cells aren’t releasing enough stored energy to survive on. The binge-eating behavior is a biological safeguard that prevents people from starving themselves so much that they literally die.
It’s an AWS firewall rule with bad defaults. We’ll fix it soon, but in the meantime, you can scrape if you change your user agent to something other than wget/curl/etc. Please use your name/project in the user-agent so we can identify you in the logs if we need to, and rate-limit yourself conservatively.
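A minimal sketch of what a polite scraper might look like in Python (the user-agent string, contact address, URL, and rate limit below are illustrative placeholders, not official requirements):

```python
import time
import requests

# Hypothetical identifier: your name/project plus contact info, so you can be found in the logs.
USER_AGENT = "jane-doe-tag-research/0.1 (contact: jane@example.com)"

def fetch(url: str) -> requests.Response:
    # Any descriptive user-agent avoids the default wget/curl-style block.
    return requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=30)

if __name__ == "__main__":
    urls = ["https://www.lesswrong.com/"]  # replace with the pages you actually need
    for url in urls:
        response = fetch(url)
        print(response.status_code, len(response.text))
        time.sleep(5)  # conservative self-imposed rate limit
```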
I wrote about this previously here. I think you have to break it down by company; the answer for why they’re not globally available is different for the different companies.
For Waymo, they have self-driving taxis in SF and Phoenix without safety drivers. They use LIDAR, so instead of the cognitive task of driving as a human would solve it, they have substituted the easier task “driving, but your eyes are laser rangefinders”. The reason they haven’t scaled to cover every city, or at least more cities, is unclear to me; the obvious possibilities are that the LIDAR sensors and onboard computers are impractically expensive, that they have a surprisingly high manual-override rate and there’s a big unscalable call center somewhere, or that they’re being cowardly and trying to maintain zero fatalities forever (at scales where a comparable fleet of human-driven taxis would definitely have some fatalities). In any case, I don’t think the software/neural nets are likely to be the bottleneck.
For Tesla, until recently, they were using surprisingly-low-resolution cameras. So instead of the cognitive task of driving as a human would solve it, they substituted the harder task “driving with a vision impairment and no glasses”. They did upgrade the cameras within the past year, but it’s hard to tell how much of the customer feedback represents the current hardware version vs. past versions; sites like FSDBeta Community Tracker don’t really distinguish. It also seems likely that their onboard GPUs are underpowered relative to the task.
As for Cruise, Comma.ai, and others—well, distance-to-AGI is measured only from the market leader, and just as GPT-4, Claude and Bard have a long tail of inferior models by other orgs trailing behind them, you also expect a long tail of self-driving systems with worse disengagement rates than the leaders.
It seems likely that all relevant groups are cowards, and none are willing to move forward without a more favorable political context. But there’s another possibility not considered here: perhaps someone has already done a gene-drive mosquito release in secret, but we don’t know about it because it didn’t work. This might happen if local mosquito populations mix too slowly compared to how long it takes a gene-driven population to crash; or if the initially released group all died out before they could mate; or if something in the biology of the gene-drive machinery didn’t function as expected.
If that were the situation, then the world would have a different problem than the one we think it has: inability to share information about what the obstacle was and debug the solution.
Unfortunately the ban-users-from-posts feature has a Rube Goldberg machine of rules around it that were never written down, and because there was no documentation to check it against, I’ve never managed to give it a proper QA pass. I’d be interested in reports of people’s experience with it, but I do not have confidence that this feature works without major bugs.
You should think less about PR and more about truth.
Mod note: I count six deleted comments by you on this post. Of these, two had replies (and so were edited to just say “deleted”), one was deleted quickly after posting, and three were deleted after they’d been up for a while. This is disruptive to the conversation. It’s particularly costly when the top-level post is itself about conversation dynamics, which the deleted comments are instances (or counterexamples) of.
You do have the right to remove your post/comments from LessWrong. However, doing so frequently, or in the middle of active conversations, is impolite. If you predict that you’re likely to wind up deleting a comment, it would be better to not post it in the first place. LessWrong has a “retract” button which crosses out text (keeping it technically-readable but making it annoying to read so that people won’t); this is the polite and epistemically-virtuous way to handle comments that you no longer stand by.
The thing I was referring to was an exchange on Facebook, particularly the comment where you wrote:
also i felt like there was lots of protein, but maybe folks just didn’t realize it? rice and most grains that are not maize have a lot (though less densely packed) and there was a lot of quinoa and nut products too
That exchange was salient to me because, in the process of replying to Elizabeth, I had just searched my FB posting history and reread what veganism-related discussions I’d had, including that one. But I agree, in retrospect, that calling you a “vegan advocate” was incorrect; I extrapolated too far from remembering that you were vegan at the time and from the stance you took in that conversation. The distinction matters both from the perspective of not generalizing to vegan advocates in general, and because the advocate role carries higher expectations about nutrition knowledge than casual participation in a Facebook conversation does.
I draw a slightly different conclusion from that example: that vegan advocates in particular are a threat to truth-seeking in AI alignment. Because I recognize the name, and that’s a vegan who’s said some extremely facepalm-worthy things about nutrition to me.
I believe that her summaries are a strong misrepresentation of my views, and explained why in the above comment through object-level references comparing my text to her summaries.
I’m looking at those quote-response pairs, and just not seeing the mismatch you claim there to be. Consider this one:
The charitable explanation here is that my post focuses on naive veganism, and Soto thinks that’s a made-up problem.
Of course, my position is not as hyperbolic as this.
This only asserts that there’s a mismatch; it provides no actual evidence of one. Next up:
his desired policy of suppressing public discussion of nutrition issues with plant-exclusive diets will prevent us from getting the information to know if problems are widespread
In my original answers I address why this is not the case (private communication serves this purpose more naturally).
Pretty straightforwardly, if the pilot study results had only been sent through private communications, then there would have been no public discussion of them (i.e., public discussion would be suppressed). I myself wouldn’t know about the results. The probability of a larger follow-up study would be greatly reduced. And I personally would have less information about how widespread problems are.
If the information environment prevents people from figuring out the true cause of the obesity epidemic, or making better engineered foods, this affects you no matter what place and what social circles you run in. And if epistemic norms are damaged in ways that lead to misaligned AGI instead of aligned AGI, that could literally kill you.
The stakes here are much larger than the individual meat consumption of people within EA and rationality circles. I think this framing (moralistic vegans vs selfish meat eaters with no externalities) causes people to misunderstand the world in ways that are predictably very harmful.
I think that’s true, but also: When people ask the authors for things (edits to the post, time-consuming engagement), especially if the request is explicit (as in this thread), it’s important for third parties to prevent authors from suffering unreasonable costs by pushing back on requests that shouldn’t be fulfilled.
Disagree. The straightforward reading of this is that claims of harm that route through sharing of true information will nearly always be very small compared to the harms that route through people being less informed. Framed this way, it’s easy to see that, for example, the argument doesn’t apply to things like dangerous medical experiments, because those would have costs that aren’t based in talk.
You say that the quoted bits are misrepresentations, but I checked your writing and they seem like accurate summaries. You should flag that your position has been misrepresented iff that is true. But you haven’t been misrepresented, and I don’t think that you think you’ve been misrepresented.
I think you are muddying the waters on purpose, and making spurious demands on Elizabeth’s time, because you think clarity about what’s going on will make people more likely to eat meat. I believe this because you’ve written things like:
One thing that might be happening here, is that we’re speaking at different simulacra levels
Source comment. I’m not sure how familiar you are with local usage of the simulacrum levels phrase/framework, but in my understanding of the term, all but one of the simulacrum levels are flavors of lying. You go on to say:
Now, I understand the benefits of adopting the general adoption of the policy “state transparently the true facts you know, and that other people seem not to know”. Unfortunately, my impression is this community is not yet in a position in which implementing this policy will be viable or generally beneficial for many topics.
The front-page moderation guidelines on LessWrong say “aim to explain, not persuade”. This is already the norm. The norms of LessWrong can be debated, but not in a subthread on someone else’s post on a different topic.
This comment appears transparently intended to increase the costs associated with having written this post, and to be a continuation of the same strategy of attempting to suppress true information.
Expressing negative judgments of someone’s intellectual output could be an honest report, generated by looking at the output itself and extrapolating a pattern. Epistemically speaking, this is fine. Alternatively, it could be motivated by something more like politics; someone gets offended, or has a conflict of interest, then evaluates things in a biased way. Epistemically speaking, this is not fine.
So, if I were to take a stab at what the true rule of epistemic conduct here is, the primary rule would be that you ought to evaluate the ideas first, before evaluating the person, in your own thinking. There are also reasons why the order of evaluations should be ideas-before-people in the written product: it sets a better example of what thought processes are supposed to look like, and it’s less likely to mislead people into biased evaluations of the ideas. But this is less fundamental and less absolute than the ordering of the thinking.
But.
Having the order-of-evaluations wrong in a piece of writing is evidence, in a Bayesian sense, of having also had the order-of-evaluations wrong in the thinking that generated it. Based on the totality of omnizoid’s post, I think that in this case it was an accurate heuristic. The post is full of overreaches and hyperbolic language. It presents each disagreement as though Eliezer were going against an expert consensus, when in fact each position mentioned is one where he sided with a camp in an extant expert divide.
And...
Over in the legal profession, they have a concept called “appearance of impropriety”, which is that, for some types of misconduct, they consider it not only important to avoid the misconduct itself but also to avoid doing things that look too similar to misconduct.
If I translate that into something that could be the true rule, it would be something like: If an epistemic failure mode looks especially likely, both in the sense of a view-from-nowhere risk analysis and in the sense that your audience will think you’ve fallen into the failure mode, then some things that would normally be epistemically supererogatory become mandatory instead.
Eliezer’s criticism of Stephen Jay Gould does not follow the stated rule of responding to a substantive point before making any general criticism of the author. I lean towards modus tollens over modus ponens, i.e. that this makes the criticism of Gould worse. But how much worse depends on whether that’s a reflection of an inverted generative process, or an artifact of how he wrote it up. I think it was probably the latter.
Credit to Benquo’s writing for giving me the idea.
Adam D’Angelo retweeted a tweet implying that hidden information still exists and will come out in the future: