(Typo: your post title is cut off, ending in “an”.)
I can’t see a fanfic with that name in the moderation log (scroll down to “Rejected Posts”).
A brief complaint about software: to uninstall my bugged GPU driver before a reinstall, I used the AMD Cleanup Utility, which briefly booted my PC into Windows Safe Mode. MS Edge launched in that mode for no apparent reason. Then, on restarting into normal Windows, I discovered that my MS Edge cookies and settings had been nuked. Apparently that’s a known issue. I had recently switched away from Firefox because Chromium is supposed to be a more mature browser engine, only to experience this joke.
And of course the mode which wrecked everything is called “Safe Mode”. The analogy to AI Safety is left as an exercise to the reader.
Sure, but “technological progress good” isn’t exactly an undersupplied viewpoint, is it? One counterpoint to food preservatives specifically is that the processes that make food go bad are similar to the ones your body uses to digest food, so preserving food this way can make it harder to digest, or even harmful. Other procedures like refrigeration and canning don’t have that particular problem.
Again, I feel like both you and ChristianKl are pattern-matching to a different type of low-vaccination community, one with vaccine hesitancy out of conviction, etc. The claim in the OP is that the main problem for this particular community isn’t that they distrust mainstream medicine; it’s that they outright can’t communicate.
I’m glad you survived a real danger to your life, and major kudos for writing up your experience!
Regarding this essay, I expected to upvote it based on the title alone. But having read it, its particular advice feels weak to me and sounds more like a general exhortation to Be Vigilant (or paranoid) about X, which isn’t at all sustainable in a world full of X’s one could Be Vigilant about. So it seems to me that a stronger version of such an essay almost must be rooted in base rates or something like them.
The kind of structure I’d expect would look more like: brainstorm or LLM-generate a list of “in my environment, what things could kill me?”. Then guesstimate or google likelihoods for those threats, brainstorm or look up or LLM-generate countermeasures for them, etc., and finally land on a list of top threats & suggested efficient countermeasures (a toy sketch of that step follows below). Plus an understanding that one cannot drive all risks down to zero.
Finally, such a list should probably also consider cryonics (as a way to kind-of-survive many otherwise unpreventable causes of death), as well as non-individual risks of death like war, pandemics, or x-risks.
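For concreteness, here’s a toy sketch of the prioritization step in Python. To be clear, every threat, probability, cost, and effectiveness number below is a made-up placeholder for illustration, not a real estimate:

```python
# Toy sketch of the prioritization step: rank threats by annual death
# probability, then rank countermeasures by risk reduction per dollar.
# All numbers below are made-up placeholders, not real estimates.

threats = [
    # (name, guesstimated annual death probability,
    #  countermeasure, annual cost in $, fraction of the risk it removes)
    ("car crash",        3e-4, "drive less / safer car", 500, 0.5),
    ("house fire",       5e-6, "smoke detectors",         30, 0.8),
    ("heart disease",    1e-3, "exercise & diet",        200, 0.3),
    ("lightning strike", 1e-7, "stay indoors in storms",   0, 0.9),
]

def risk_reduced_per_dollar(entry):
    _name, p_death, _measure, cost, effectiveness = entry
    # Guard against division by zero for free countermeasures.
    return p_death * effectiveness / max(cost, 1)

# Most efficient countermeasures first; since one cannot drive all risks
# to zero, spend attention where the ratio is best.
for name, p, measure, cost, eff in sorted(
        threats, key=risk_reduced_per_dollar, reverse=True):
    print(f"{name}: p≈{p:.0e}/yr; '{measure}' removes ~{eff:.0%} for ${cost}/yr")
```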
I wouldn’t trust Perplexity Pro’s percentage numbers one bit. It likes to insert random percentages into my answers, and they have hardly any bearing on reality at all. When I challenged it on this point, it claimed these reflected percentages of search results (e.g. in this scenario, 20 search results with 17 featuring Claude would yield an answer of 85%), but even that wasn’t remotely correct. For now I assume these are entirely hallucinated/made up, unless strongly proven otherwise. It’s certainly not doing any plausible math on any plausible data, from what I can tell.
This is part of a more general pattern wherein Perplexity, for me, tends to be extremely confident and intent on being useful even in situations where it has no way to actually be useful given its capabilities, and so it just makes stuff up.
Maybe the idea is that if a spanner is thrown in the works, you can’t necessarily have someone else unthrow that spanner?
Idle suggestion, probably not useful: have you checked if you can do what you want by using GreaterWrong instead?
I think OP’s perspective is valid, and I’m not at all convinced by your reply. We’re currently racing towards technological extinction with the utmost efficiency, to the point that it’s hard to imagine that any arbitrary alternative system of economics or governance could be worse by that metric, if only because it would produce less economic growth and thus slow the race. I don’t see how nuclear warfare results in extinction, either; to my understanding it’s merely a global catastrophic risk, not an existential one. And regarding your final paragraph, there are a lot of orders of magnitude between a system of governance that self-destructs in <10k years and one that eventually succumbs to the Heat Death of the universe.
Anyway, I made similar points to the OP’s in a doomy comment from last year:
In a world where technological extinction is possible, tons of our virtues become vices:
Freedom: we appreciate freedoms like economic freedom, political freedom, and intellectual freedom. But that also means freedom to (economically, politically, scientifically) contribute to technological extinction. Like, I would not want to live in a global tyranny, but I can at least imagine how a global tyranny could in principle prevent AGI doom, namely by severely and globally restricting many freedoms. (Conversely, without these freedoms, maybe the tyrant wouldn’t learn about technological extinction in the first place.)
Democracy: politicians care about what the voters care about. But to avert extinction you need to make that a top priority, ideally priority number 1, which it can never be: no voter has ever gone extinct, so why should they care?
Egalitarianism: resulted in IQ denialism; if discourse around intelligence were less insane, that would help discussion of superintelligence.
Cosmopolitanism: resulted in pro-immigration and pro-asylum policy, which in turn precipitated both a global anti-immigration and an anti-elite backlash.
Economic growth: the more the better; results in rising living standards and makes people healthier and happier… right until the point of technological extinction.
Technological progress: I’ve used a computer, and played video games, all my life. So I cheered for faster tech, faster CPUs, faster GPUs. Now the GPUs that powered my games instead speed us up towards technological extinction. Oops.
Your comment runs counter to the OP’s claim in the bottom section, “Mennonites Are Susceptible To Facts and Logic, When Presented In Low German”. E.g. the anecdote about the woman who thought the hospital turned her away sounds like it’s not about vaccine hesitancy but about a total inability to communicate.
And sure, human doctors and nurses who know Obscure Language are a much better solution than LLM doctors and nurses, but realistically the former basically don’t exist, so...
I’ll accept time-sensitive stuff as a valid counterargument to my claim, as well as e.g. things moving beyond the observable universe.
But I don’t see how the existence of the moons of Neptune works as a counterargument. The whole point is that you do something laborious to gain/accumulate/generate new knowledge (like send a space probe). And then to verify/confirm said knowledge, you don’t have to send a new space probe because you can use a gazillion other cheaper methods to confirm the knowledge instead (like by pointing telescopes at the moons, or by using your improved knowledge of physical law to predict their positions, etc. etc.).
If the claim is just “producing the exact same kind of evidence (space probe pictures) can require the same cost”, then I don’t exactly disagree, I just don’t see how that’s at all relevant. The AI context here is that we have a superhuman mind that can generate knowledge we can’t (the space probe or its pictures), and the question is whether it can convert that knowledge into a form we’d have a much easier time understanding. In that situation, why would it matter that we can’t build a second space probe?
This is not what you’re asking for, but are you aware of Pantheon (2022) (Wikipedia, LW thread)? It’s a short animated TV series (16 episodes over 2 seasons, canceled / cut short) about mind uploads and related topics. It features several of the things you want, but also some weird stuff like superhero-esque fights between uploads. And while the ending of the final episode is quite bombastically sci-fi, it also makes it very clear that the series was cut short.
Comment half in jest, half serious: if the problem is lack of fluency in Obscure Language, then might this be a case where people today would be better-served by LLM nurses and doctors, rather than human ones?
I asked Claude Code to make a Slitherlink puzzle game, providing it an extensive design doc about stuff like the difficulty curve and UI but none about the basic game rules, and it failed to get puzzle generation to work (IIRC initially it wouldn’t even generate closed loops or something; see the sketch below) and then got stuck trying to fix it, continually giving me new versions that never worked nor even looked like they got any closer to the goal. To be clear, that doesn’t mean that Claude Code can’t complete this particular task, it just means that I couldn’t get it to Just Work™.
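Since “closed loop” is the crux there, here’s a minimal sketch (my own, in Python; not Claude’s actual code) of the validity check its generator kept failing. It assumes solutions are represented as sets of grid edges:

```python
# A Slitherlink solution must form one single closed loop along grid edges:
# every touched vertex has degree exactly 2, and all edges are connected.
from collections import defaultdict

def is_single_closed_loop(edges):
    """edges: a set of frozensets, each pairing two adjacent grid vertices,
    e.g. frozenset({(0, 0), (0, 1)})."""
    if not edges:
        return False
    degree = defaultdict(int)
    for edge in edges:
        for vertex in edge:
            degree[vertex] += 1
    # A closed loop enters and leaves every vertex it touches exactly once.
    if any(d != 2 for d in degree.values()):
        return False
    # Walk from an arbitrary edge; a *single* loop must cover every edge.
    start = next(iter(edges))
    home, cur = tuple(start)
    visited = {start}
    while cur != home:
        edge = next(e for e in edges if cur in e and e not in visited)
        visited.add(edge)
        cur = next(v for v in edge if v != cur)
    return len(visited) == len(edges)

# A unit square is the smallest valid loop:
square = {frozenset(p) for p in [((0, 0), (0, 1)), ((0, 1), (1, 1)),
                                 ((1, 1), (1, 0)), ((1, 0), (0, 0))]}
assert is_single_closed_loop(square)
```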
To play devil’s advocate, I don’t see why preventive war would be “insane”. If you’re the first nuclear power and you can prevent your potential rivals from acquiring their own nukes, that makes you an unassailable hegemon. With the benefit of hindsight, a clever arguer (not meant as a compliment) could even claim that this strategy isn’t evil but actually morally required: if it indeed prevents others from obtaining nukes, it eliminates an entire source of future x-risk from MAD and the Cold War. Not to mention the otherwise unpreventable human rights abuses by future nuclear powers like North Korea.
To be clear, I’m not advocating for this alternate history. Most importantly, from a strategic perspective, it’s not at all clear that the US could’ve kept the technology to itself no matter how aggressively it acted. It also would’ve been evil, and I can’t imagine the post-1945 US public would’ve had enough political will to pursue such a war directly after World War II, so it would’ve eventually failed for that reason anyway.
Have donated $1000.
Sure, but I meant that you still need the votes of those other people, too. And the fewer votes you have, the more compromises make it into the final bill.
Agreed. I can’t entirely appreciate what is lost in this story, not being personally interested in romance, but if given the choice between this future vs. the ones I consider likely, I’d choose this one in a heartbeat.
Feedback: your supposed “LLM content block” is currently utterly visually indistinguishable from the regular content, and thus (to me) entirely fails to achieve what you / Claude say is its intended purpose:
Sentence for sentence:
it’s not visually distinct
therefore it doesn’t clearly attribute anything
and therefore readers certainly don’t always know what they’re looking at
and therefore this is not a valid way to be transparent about AI-assisted writing