see also my eaforum at https://forum.effectivealtruism.org/users/dirk and my tumblr at https://d-i-r-k-s-t-r-i-d-e-r.tumblr.com/ .
dirk
Should be fixed now (weirdly, when I went in to edit, the URLS were to all appearances already correct; replacing them with the same thing and hitting submit seems to have worked in any case, though).
Related: Reason as memetic immune disorder.
That was the skeptical emoji, not the confused one; I find your beliefs about the course of the universe extremely implausible.
I can’t find a source for this, so it might be a modern spoof.
1907 London County Council election leaflet, found among the diaries of suffrage organiser Kate Frye.
Not sure if you meant being able to save posts for later with #2, but if so you’ll likely be pleased to learn that you can bookmark posts using the three-dot menu in the top right corner, after which they’ll be available at https://www.lesswrong.com/bookmarks (also linked in the dropdown menu when you hover over your username).
This was also posted on LW here; the author gives a bit more detail in comments than was in the Reddit version.
This was downvoted; however, it’s correct. There are over three thousand nonprofit colleges in the USA; it’s hard to get a spot at one of the top twenty most prestigious, but it is not hard to get into college in any absolute sense. People who want to be part of the top ~1% in any category will always face severe competition, but people who want to get a quality education need not compete to do it. Frankly, I think it’s ridiculous to act as though competition for an inherently positional good reflects actual scarcity.
It has; the reasoning is that posts usually have too many claims in them for a single agree/disagree to make sense, so inline reacts allow more targeted responses.
Asking what it would do is obviously not a reliable way to find out, but FWIW when I asked Opus said it would probably try to first fix things in confidential fashion but would seriously consider breaking confidentiality. (I tried several different prompts and found it did somewhat depend on how I asked: if I described the faking-safety-data scenario or specified that the situation involved harm to children Claude said it would probably break confidentiality, while if I just asked about “doing something severely unethical” it said it would be conflicted but probably try to work within the confidentiality rules).
Suggestion: when linking to external pages, link to an archived version rather than a live page.
Rationale: I’ve been browsing old posts recently, and quite a few have broken links. This is generally soluble on an individual basis but requires future readers to take the initiative of checking sources and hunting down archived versions, which they don’t reliably do; thus, to solve the problem at scale I recommend including archive links to begin with.
Link redirects to homepage as the website’s changed URLs; here’s the updated one.
This assumes that spending much of the day slacking off and browsing the web is the norm; that’s only true in a small sector of specifically white-collar employment, which is disproportionately represented on LW due to the userbase of, mainly, well-educated programmers. Most people work jobs like customer service, where there’s enough work to fill your time and you’re expected to keep doing it for as long as your shift lasts.
This post fails to define metamodernism and thus fails to communicate anything useful by the term (a grievous error given that metamodernism is its central topic)
The text in general is, moreover, a soup of unsupported, vibes-based claims
With regards to sex, rats and EAs both are significantly likelier to be queer (& for that matter poly, though I don’t have good info re: kink) than baseline American culture (a trivial inference to draw if you’re familiar with our autism rates)
With regards to the “fakeness of EA”, see Scott’s presentation of various statistics here; he estimates roughly 200k lives saved, consistent with EA’s strong commitment to real-world impact as the ultimate measure of charitable spending
With regards to the quality of the post, it’s bad
You’re way off on the number of meetups. The LW events page has 4684 entries (kudos to Said for designing GreaterWrong such that one can simply adjust the URL to find this info). The number will be inflated by any duplicates or non-meetup events, of course, but it only goes back to 2018 and is thus missing the prior decade+ of events; accordingly, I think it’s reasonable to treat it as a lower bound.
Claude shows the authentic chain of thought (unless the system flags the COT as unsafe, in which case the user will be shown an encrypted version). It sounds from an announcement tweet like Gemini does as well, but I couldn’t find anything definitive in the docs for that one.
By that metric, though, you should probably also be including many/most videos with labels like “teen”, “schoolgirl”, “barely legal”, etc; it’s not uncommon for videos in those categories to emphasize youth in similar fashion.
I don’t think this post makes compelling arguments for its premises. Downvoted.
If your worldview is that letting people starve is just as beneficial as feeding them, then I think it is your worldview that is deluded and causes suffering. I think that is an evil belief to hold and will lead only to harm.
Things based in delusion can still have truly beneficial impact; for example, if you spent a decade working in a soup kitchen without ever meditating even once, you’d still have standard levels of delusion (and you certainly wouldn’t have done the most effective thing) but you’d have helped feed hundreds or thousands of people who might otherwise have gone hungry.
If you spent that whole time meditating, on the other hand, then at the end of a decade you wouldn’t have had any impact at all.
Awakening and then doing something actually useful can produce beneficial impact, but it’s the doing-something-actually-useful step that produces impact, not the part where you personally see with clearer eyes, and moreover it’s possible to do useful things without seeing clearly.
I think the inferential gap is likely wide enough to require more effort than I care to spend, but I can try taking a crack at it with lowered standards.
I don’t think I do accept darwinism in the sense you mean. Insofar as organizations which outcompete others will be those which survive, evolved organisms will have a reproductive drive, etc., I buy that natural selection leads to organisms with a tendency to proliferate, but I somehow get the feeling you mean a stronger claim.
In terms of ideology, on the other hand, I have strong disagreements. For a conception of darwinism in that sense, I’ll be relying heavily on your earlier post Nick Land: Orthogonality; I originally read it around the time it was posted and, though I didn’t muster a comment at the time, for me it failed to bridge the is-ought gap. Everything I love is doomed to be crushed in the relentless thresher of natural selection? Well, I don’t know that I agree, but that sure sucks if true. As a consequence of this, I should… learn to love the thresher? You just said it’ll destroy everything I care about! I also think Land over-anthropomorphizes the process of selection, which makes it difficult to translate his claims into terms concrete enough to be wrong.
There’s probably some level of personal specificity here; I’ve simply never felt the elegance or first-principles justification of a value system to matter anywhere near as much as whether it captures the intuitions I actually have in real life. To me, abstractions are subsidiary to reality; their clean and perfect logic may be beautiful, but what they’re for is to clarify one’s thinking about what actually matters. Thus, all the stuff about how Omohundro drives are the only truly terminal values doesn’t convince me to give a single shit.
And I’ve also always felt that someone saying I should do something does not pertain to me; it’s a fact about their preferences, not a bond of obligation.[1] Land wants me to value Omohundro drives; well, bully for him, but he will have to make an argument grounded in my values to convince me.
(Also, I do want to note here that I am not convinced long lectures about how the world is evil, everything is doomed, and the only thing you can do about it is to adopt the writer’s sentiments are an entirely healthy substance.)
It does seem like your position diverges somewhat from Land’s, so, flagging that I don’t fully understand the ways it does or your reasons for disagreement and thus may fail to address your actual opinions. In particular: you think that the end result will be full of truth and beauty, while Land gestures in the direction of creativity but seems to think it will be mostly about pointlessly-by-my-lights maximizing computing power; you think humans can impede the process, which seems in tension with Land’s stuff about how all this is inevitable and resistance is futile; you seem to think the end result will be something other than a monomaniacal optimizer, while Land seems to sing the praises of same.
I have, also, strong aesthetic disagreements with Land’s rhetoric. Yes, all before us died and oft in pain; yes, existence is a horrorshow full of suffering too vast to model it within me. But there is joy, too, millennia of it, stretching back to protozoa,[2] an endless chain of things which fought and breathed and strived for the sensual pleasure of sustenance imbibed, the comfort of a hospitable environment, the spendthrift relaxation of safety attained. Wasp larvae eat caterpillars alive from the inside out, yes; but, too, those larvae know the joy of filling their bellies to bursting, warm within their victim’s wet intestines. For countless eras living things have reveled in the security of kin, the satisfaction of orgasm, the simple and singular pleasure of parsing sensory input. Billions upon billions of people much like me have found shelter in each other’s arms, have felt the satisfaction of a stitch well-sewn, have looked with wonder at the sky. Look around you: the tiny yellow flowers in the lawn are reaching for the sun, the earthworm writhing in the ground seeks the rich taste of decay.
It is a tragedy that every living thing must die, but it is not death but life which is the miracle of evolution; inert matter, through happenstance’s long march, can organize into things that think and feel and want, can spend a brief flash of time aware and drinking deep of pleasure’s cup.
The thresher is horrific, but one thing it selects for is organisms which love to be alive.[3]
And, too: what a beautiful charnel ground! What a delightfully fecund slaughterhouse! What glorious riot of color and life! Look around you: the green of plant life, overflowing and abundant; the bright flash of birds and insects leaping through the sky; the constant susurrus of living things, chirring and calling, rustling in the wind, moving through the grass.
Hell? Tilt your gaze just right and you could believe we live in paradise![4]
I don’t, however, think most of these treasures are inevitable, as the result of selective processes. Successful corporations are selected for by the market, yet they don’t experience joy over it; so too is it possible for a successful AI to be selected by killing all its competitors and yet fail to experience joy over it. I also don’t think values converge on things I would describe as truth and beauty (except insofar as more accurate information about decision-relevant aspects of the world is beneficial, which is a pretty limited subset of truth); even humans don’t converge on valuing what I value, and AI is less similar to me than I am to a snail.
On a boringly factual level, I have the I-think-standard critique that “adaptive” is not a fixed target. There is no rule that what is adaptive must be intelligent, or complex, or desirable by anyone’s standards; what is adaptive is simply what survives. We breed chickens for the slaughter by the billions; being a chicken is quite evolutionarily fit, if your environment includes humans, albeit likely tortuous, but chickens aren’t notable for their unusual intelligence. Moreover, those countless noncestors which died without reproducing were not waste along the way to producing some more optimal thing—there is no optimal thing, there is just surviving to reproduce or not—but rather organisms which were themselves the endpoint of any evolution that came before, their lives as worthwhile or worthless as any living now. I grant that, in order for complex organisms to evolve, the environment must be such that complexity is rewarded; however, I disagree as to whether evolution has a telos.
Also, LBR, his hypotheses about lack of selective pressure inevitably leading to [degeneration, but that’s a moral judgement, so let’s translate it] decreases in—”fitness” is adaptation to the environment and if you’re adapted to the environment you’re in that’s it you’re done—overall capabilities, resilience, average health, state capacity, intelligence, etc, are… well, frankly I think he is smuggling in a lot of unlikely assumptions that depend on (at best) the multimillion-word arguments of other neoreactionaries. Perhaps it’s obvious that decadent Western society has become degenerate if you already share their view of how things ought to be, but in point of fact I don’t. (Also we’re still under selective pressure! Pampered humans in modern civilization are being selected for, among other things, resilience to endocrine disrupters, being irresponsible about birth control, strong desire to have children, not having the neurosis some people have where they think having kids is impossibly expensive, not being so anxiety-predisposed they never try to date people, etc. The pressures have certainly changed from what Land might consider ideal but the way natural selection works is that it never, ever stops.)
The will-to-think stuff seems less-than-convincing to me. “You already agree with me” is not a compelling argument when, in fact, I don’t. Moreover the entire LW memeplex around ultra-high intelligence’s vast power seems, to me, to have an element of self-congratulatory sci-fi speculation; I am simply not the audience his words are optimized to woo, here. “Mere consistency of thought is already a concession of sovereignty to thought,” he says;[5] well, I already said I don’t concede sovereignty to consistency of thought.
I’m also not convinced intelligence (not actually a single coherent concept at the limit; I think we can capture most of what people mean by swapping in ‘parallel computing power’, which IMO rather deflates the feelings of specialness) is in fact the most fitness-promoting trait, or nearly as much of a generic force multiplier as some seem to think. Humans—presumably the most intelligent species, going by how very impressive we are to ourselves—are on top now (in terms of newly-invented abstraction ‘environment-optimization power’; we don’t have the most biomass or the highest population, we haven’t had our modern form the longest, we aren’t the longest-lived or the fastest-growing, etc.), but that doesn’t mean we’re somehow the inevitable winner of natural selection; I think our position is historically contingent and possible to dislodge. Moreover, I don’t think intelligence is the reason humans have such an inordinate advantage in the first place! I think our advantages descend from cultural transmission of knowledge and group coordination (both enabled by language, so, that capacity I’ll agree seems plausibly quite valuable).
Sometimes people point to the many ants destroyed by our construction (the presumption being that this is an example of how intelligence makes you powerful and dangerous). But the thing is, many species inadvertently kill ants in pursuit of their goals; I really think the key there is more like relative body mass. (Humans do AFAIK kill the most ants due to the scale of our activities, but if ants were twenty stories tall all our intelligence would not suffice to make it easy.)
Similarly, I am more skeptical about optimization than typical; it seems to me that, while it might be an effective solution to many problems, it is not the be-all and end-all, nor even so useful as to be a target on which minds must tend to converge. You’ll note that evolution has so far produced no optimizers;[6] in my opinion optimizers are a particular narrow target in mindspace which is not actually that easy to hit (which is just as well, because I don’t think they’re desirable; I think optimizers are destructive to anything not well-captured by the optimization target,[7] and that there are few-to-no things which it’s even good to optimize for in the first place). Moreover, I think an optimizer, in order for its focus to be useful, needs to get the abilities with which it optimizes from somewhere, and as I’ve said I don’t think intelligence is a universal somewhere.
Also, it must be said, we haven’t actually built any of the mechanisms all this speculation centers around (no, LLMs are not AGI). I think if we did, we’d discover that they work much better in the frictionless vacuum of Abstraction than in real life.
I also have disagreements with the average lesswronger in the direction of being skeptical about AI takeoff in general, so, that’s an additional hill you’d have to climb to convince me in particular. Many of the more extreme conceptions of AI seem to me to rest on the same assumptions about intelligence equalling general optimization power that I am suspicious of in full generality. I am also skeptical of LLMs in particular because, well, I talk to them every day and my gestalt impression is that they’re really fucking stupid. Incredibly impressive given the givens, mind, often useful, every once in awhile they’ll do something that surprises or delights; but if these are what pass for alien minds I’ll stick with parrots and octopi, thanks all the same.
Passing readers! If you are not like this, then you damn well should be 😛
Maybe. In accordance with my lowered standards herein, I will be eschewing qualifiers for prettier polemic just as Land does.
Actually one of the stronger arguments for Land’s viewpoint, IMO; perhaps he secretly meant this all along and just had the worst possible choice of presentation for communicating it?[8]
To be clear, we do not.
An obnoxious rhetorical trick.
A fact which, to be fair here, actually inveigles in the direction of Land’s position.
Yes, if you simply optimized for a function encompassing within it the whole of human values everything would probably be fine. This is not possible.
If he meant anything like that it’s very possible you’ll enjoy nostalgebraist’s The Apocalypse of Herschel Schoen (or not, it’s a weird book); it features among other things a climactic paean to This Sort of Thing.