No because, like, for example, you can ask follow-up questions of a human, and they’ll give outputs that result from thinking humanly, which includes processes LLMs currently can’t do.
TsviBT
Escalation and perception
As I wrote, if you actually carefully review it, you will end up changing a lot of it.
It’s already a dumpster fire, right? LLMs might be generating burning garbage, but if they do so more cheaply than the burning garbage generated by humans, then maybe it’s still a win??
I mean, ok?
Yeah I think uncritically reading a 2 sentence summary-gloss from a medium or low traffic wiki page and regurgitating it without citing your source is comparably bad to covertly including LLM paragraphs.
As I mentioned elsewhere in the comments, the OP is centrally about good discourse in general, and LLMs are only the most obvious foil (and something I had a rant in me about).
And I could say “I asked Grok and didn’t do any fact checking, but maybe it helps you to know that he said: <copypasta>” and the attribution/plagiarism concerns would be solved.
I mean, ok, but I might want to block you, because I might pretty easily come to believe that you aren’t well-calibrated about when that’s useful. I think it is fairly similar to googling something for me; it definitely COULD be helpful, but could also be annoying. Like, maybe you have that one friend / acquaintance who knows you’ve worked on “something involving AI” and sends you articles about, like, datacenter water usage or [insert thing only the slightest bit related to what you care about] or something, asking “So what about this??” and you might care about them and not be rude or judgemental but it is still them injecting a bit of noise, if you see what I mean.
I discuss something related in the post, and as I said, I agree that if in fact you check the LLM output really hard, in such a manner that you would actually change the text substantively on any of a dozen or a hundred points if the text was wrong, but you don’t change anything because it’s actually correct, then my objection is quantitatively lessened.
I do however think that there’s a bunch of really obvious ways that my argument does go through. People have given some examples in the comments, e.g. the LLM could tell a story that’s plausibly true, and happens to be actually true of some people, and some of those people generate that story with their LLM and post it. But I want to know who would generate that themselves without LLMs. (Also, again, in real life people would just present an LLM’s testimony-lookalike text as though it is their testimony.) The issue with the GLUT is that it’s a huge amount of info, hence immensely improbable to generate randomly. An issue here is that text may have only a few bits of “relevant info”, so it’s not astronomically unlikely to generate a lookalike. Cf. the Monty Hall problem; 1⁄3 or 2⁄3 or something of participants find themselves in a game-state where they actually need to know the algorithm that the host follows!
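(A minimal sketch of that Monty Hall point, under assumed setups rather than anything specific from this thread: simulate a host who always reveals a goat vs. a host who opens a random unpicked door. Conditional on a goat being revealed, switching wins ~2⁄3 of the time under the first rule but only ~1⁄2 under the second, i.e. the contestant’s right move depends on knowing the host’s algorithm.)

```python
# Illustrative sketch (assumed setup, not from the thread): how much switching helps
# in Monty Hall depends on the host's algorithm, so a contestant who only observes
# "a goat was revealed" needs to know which rule the host follows.
import random

def play(host_knows):
    """One game. Returns 'switch' or 'stay' for whichever choice wins,
    or None if the host accidentally reveals the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    unpicked = [d for d in doors if d != pick]
    if host_knows:
        # Standard host: always opens an unpicked door hiding a goat.
        opened = random.choice([d for d in unpicked if d != car])
    else:
        # Ignorant host: opens a random unpicked door, which is sometimes the car.
        opened = random.choice(unpicked)
        if opened == car:
            return None  # discard; we condition on a goat having been revealed
    switch_door = next(d for d in doors if d not in (pick, opened))
    return 'switch' if switch_door == car else 'stay'

def switch_win_rate(host_knows, n=100_000):
    results = [r for r in (play(host_knows) for _ in range(n)) if r is not None]
    return sum(r == 'switch' for r in results) / len(results)

print("host always reveals a goat:", round(switch_win_rate(True), 2))   # ~0.67
print("host opens a random door:  ", round(switch_win_rate(False), 2))  # ~0.5
```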
Meta-agentic Prisoner’s Dilemmas
(I’m not saying “give me the prompt so I can give it to an LLM”, I’m saying “just tell me the shorter raw version that you would have used as a prompt”. Like if you want to prompt “please write three paragraphs explaining why countries have to enforce borders”, you could just send me “countries have to enforce borders”. I don’t need the LLM slop from that. (If you ask the LLM for concrete examples of when countries do and don’t have to enforce borders, and then you curate and verify them and explain how they demonstrate some abstract thing, then that seems fine/good.))
everything you write about LLM text is true also of human-written text posted anonymously.
Certainly not. You could interrogate that person and they might respond if they want, which gets some of the benefits; you can see their life-connected models shining through; etc. But yes there are many overlaps.
Is this LLM-generated? My eyes glazed over in about 3 seconds.
A flip side of this analysis is that the detrimental effects of the aforementioned cognitive distortions might be much higher than is usually supposed or realized, perhaps sometimes causing multi-year/decade delays in important approaches and conclusions, and can’t be overcome by others even with significant IQ advantages over me. This may be a crucial strategic consideration, e.g., implying that the effort to reduce x-risks by genetically enhancing human intelligence may be insufficient without other concomitant efforts to reduce such distortions.
This is non-researched speculation, but my guesses would be:
There are many cognitive dimensions that importantly affect performance in one or another important domain.
Most of these effects are substantively, though far from completely, fungible with more IQ. In other words, to make up a totally fictional example, you could have someone with IQ 130 and a lot of calm-spacious-attuned-nimble-empathy, who is able to follow along with another person as they struggle through conflicting mental elements, and to help that person untangle themselves by inserting relevant tricks, tools, perceptions, etc., while being sensitive to things that might be upsetting, etc. etc. On the other hand you could have someone with IQ 155, and only somewhat of this calm-spacious-attuned-nimble-empathy; and they basically perform as well as the first therapist at the overall task of helping a client come out of the therapy session with more thriving, on a better cognitive trajectory by their own lights, etc. Even though Therapist 2 has somewhat less intuitive following-along with the client than Therapist 1 does, Therapist 2 is able to make up for that by generating more varied hypotheses quicker, “manually” updating quicker, thinking of better tools, and communicating more clearly.
If you get a lot more people with really high IQs, you also get a bunch more people who are [high on other important cognitive traits, and also high IQ]. (How relevant this argument is, depends on what the numbers look like—how quick is the uptake of, say, reprogenetics technology, how high is the threshold of IQ and of other cognitive dimensions for a given performance, etc.)
Anyway, I definitely would want to genomically vector for these other traits, e.g. wisdom, but it’s harder. I do think that argues in favor of working on psychometrics for personality traits as a higher marginal priority than IQ; I think that argument goes through pretty strongly. (Though some people have expressed special worry about personality traits—some of which, e.g. obedience/agreeability, might be targets for oppressive regimes. IDK what to think of that; it feels “far” / unlikely / outlandish, but I don’t want to be dismissive and haven’t thought about it enough.) But, I think:
The hardest part of any of this is the biotech, not the psychometrics. A crash course to get strong reprogenetics would be really hard and expensive and might not work; a crash course on psychometrics would probably somewhat work, well enough to get significant chunks of benefit. (But, not confident of any of that.)
Even if you can just vector for IQ, that’s still very positive in EV (though my belief here has substantial “instability”, i.e. EV>0 has cruxes with high volatility on their probabilities, or something).
This is pretty related to 2–4, especially 3 and 4, but also: you can induce ontological crises in yourself, and this can be pretty fraught. Two subclasses:
You now think of the world in a fundamentally different way. Example: before, you thought of “one real world”; now you think in terms of Everett branches, mathematical multiverse, counterlogicals, simulation, reality fluid, attention juice, etc. Example: before, a conscious being is a flesh-and-blood human; now it is a computational pattern. Example: before, you took for granted a background moral perspective; now, you see that everything that produces your sense of values and morals is some algorithms, put there by evolution and training. This can disconnect previously-functional flows from values through beliefs to actions. E.g. now you think it’s fine to suppress / disengage some moral intuition / worry you have, because it’s just some neurological tic. Or, now that you think of morality as “what successfully exists”, you think it’s fine to harm other people for your own advantage. Or, now that you’ve noticed that some things you thought were deep-seated, truthful beliefs were actually just status-seeking simulacra, you now treat everything as status-seeking simulacra. Or something, idk.
You set off a self-sustaining chain reaction of reevaluating, which degrades your ability to control your decision to continue expanding the scope of reevaluation, which degrades your value judgements and general sanity. See: https://www.lesswrong.com/posts/n299hFwqBxqwJfZyN/adele-lopez-s-shortform?commentId=RZkduRGJAdFgtgZD5 , https://www.lesswrong.com/posts/n299hFwqBxqwJfZyN/adele-lopez-s-shortform?commentId=zWyC9mDQ9FTxKEqnT
These can also spread to other people (even if they don’t happen to the philosopher who comes up with the instigating thoughts).
A prayer for engaging in conflict
I haven’t seen your stuff; I’ll try to check it out nowish (busy with Inkhaven). Briefly (IDK which things you’ve seen):
My most direct comments are here: https://x.com/BerkeleyGenomic/status/1909101431103402245
I’ve written a fair bit about possible perils of germline engineering (aiming for extreme breadth without depth, i.e. just trying to comprehensively mention everything). Some of them apply generally to HIA. https://berkeleygenomics.org/articles/Potential_perils_of_germline_genomic_engineering.html
My review of HIA discusses some risks (esp. value drift), though not in much depth: https://www.lesswrong.com/posts/jTiSWHKAtnyA723LE/overview-of-strong-human-intelligence-amplification-methods
P(no AI takeover) = 75%
I want to say “Debate or update!”, but I’m not necessarily personally offering / demanding to debate. I would want there to be some way to say that though. I don’t think this is a “respectable” position, for the meaning gestured at here: https://www.lesswrong.com/posts/7xCxz36Jx3KxqYrd9/plan-1-and-plan-2?commentId=Pfqxj66S98KByEnTp
(Unless you mean you think P(AGI within 50 years) < 30%, which would be respectable, but I don’t think you mean that.)
Human intelligence amplification
I wonder if people have some sort of ego-type investment in LLMs being good / minds / something?
I sort of agree that LLMs are somewhat incidental to the point of the post ( https://www.lesswrong.com/posts/DDG2Tf2sqc8rTWRk3/llm-generated-text-is-not-testimony?commentId=d5gtpsRzESm4dNxBZ ). I also agree that utterances today are very often failing to be testimony in the way I discuss, and that this fact is very important. A main aim of this essay is to help us think about that phenomenon.
I mean, I’m not the arbiter of anything… I think it’s “fine”, in that for example I wouldn’t suggest a moderator should delete the LLM version for being LLM. I do think that the LLM version is very slightly worse on net. The corrected sentence structure and capitalization are probably an improvement, but for example it replaced
your own personal greed and material accumulation
with
your own gain, chasing wealth, status, or comfort
And I prefer your phrasing; it’s more interesting. The LLM basically replaced
they mixed and matched, they refined and innovated
with
combined
which is worse and less evocative. The LLM replaced
even for one other than you that doesn’t include your kids?
with
And no, that doesn’t mean leaving your assets to your kid. It means showing up while you’re still alive and making a difference that extends beyond your own skin.
which is a totally different and much worse sentiment, unless that is actually what you meant.
I’d rather read something ‘unreadable’ that comes from someone’s currently-fermenting models than read something ‘readable’ that does not. If you write a really detailed prompt, that’s basically the post but with poor / unclear sentence structure, and the LLM fixes the sentence structure without changing the content, then this seems probably mostly fine / good. (I think a bit of subtle info might be lost unless you’re really vigilant, but the tradeoff could be worth it, idk.)
Curious where you are in your PhD, and, if it’s finished, whether you’re aiming at bigger boosts.