That’s just not true; try buying clothes from Shein instead of from some at least half-decent shop. Heck, I once bought a screwdriver at a pound store, figuring they couldn’t really ruin something that simple. The steel was so bad it basically bent and chipped on first contact with a screw.
consider how hard it was for society just to realize that COVID was transmitted via aerosols!
It was only hard because, inexplicably, no one bothered checking for over a year into the pandemic; we just took the whole “fomites and large droplets” story from colds and flu for granted, despite the evidence being, as we see here, pretty scant. There’s a serious coordination problem there IMO: research ended up being chaotic rather than systematically and rapidly exploring all these very obvious things, which we should have had some decent evidence on by April/May 2020.
True, though to be fair they’re a different type of story. The trickster has skills; they’re not conventional skills, but they have them in spades. They are also clever and ambitious enough to use those skills to upend the existing order. Trickster narratives reward cunning, initiative and ambition, whereas traditional warrior narratives reward strength, bravery and honour. Meanwhile the classic Christian narrative is something like “the saint fasted for fifty days and lashed himself for no good reason other than to prove how sinful he thought he was; then the Romans came to martyr him and he let them, the end. But joke’s on them ’cos now he’s in Heaven”. Humility, passivity and guilt.
That said, Christianity hasn’t exactly erased either warrior or trickster narratives. The knights of the Round Table or the paladins of Charlemagne are classic Christian warrior templates. Robin Hood is a classic Christian trickster (and medieval folklore also abounds with stories in which the Devil is foolish and easily tricked by a clever human whom he was trying to ensnare).
That’s not a bad idea. You could link something like “this post is a reply to X” and then people could explore “threads” of posts that are all rebuttals and arguments surrounding a single specific topic. It doesn’t even need to be about things that have gotten this hostile; sometimes you just want to write a full post because it’s more organic than a comment.
To a first approximation, they are as likely as you to be biased, so why do they get to be the judge?
I think the answer to this is, “because the post, specifically, is the author’s private space”. So they get to decide how to conduct discussion there (for reference, I always set moderation to Easy Going on mine, but I can see a point even to Reign of Terror if the topic is spicy enough). The free space for responses and rebuttals isn’t supposed to be the comments of the post, but the ability to write a different post in reply.
I do agree that in general, if it comes to that (authors banning each other from comments and answering just via new posts), then maybe things have already gotten a bit too far into “internet drama” land and everyone could use some cooling down. And it’s generally probably easier to keep discussions on a post in the comments of that post. But I don’t think the principle is inherently unfair; you have the exact same rights as the other person and can always respond symmetrically, and that’s fairness.
Fun Baader-Meinhof effect I experienced: the very evening of the day I read this article, my father-in-law, while we were chatting, mentioned (without any prompting from me) eating and enjoying a sandwich with lard, honey and chestnuts while vacationing in the Alps. Not quite the same, but close enough, with more accessible ingredients. And the mountain setting makes a lot of sense because:
- all the ingredients would be local and traditional
- the cold means people burn more energy and thus favours the development of more energy-dense foods
But I don’t think the right conclusion is “Unpredictable!” so much as “So put in the work if you care to predict it?”.
I still think there’s a bit of post-hoc reasoning here; it’s easy to rationalise why we would like ice cream, specifically, after the fact, and harder to make novel predictions that are that spot-on. Though, as you say, prediction can bring you a bit further than expected.
There’s also the matter of information. How much information are the aliens even given to work from? To predict “chocolate ice cream” you would need data on the chemical composition of our biosphere, the ecological niches occupied by various animals, how mammalian biology and child-rearing work, how parasites work, how our biochemical energy-producing mechanisms work, how DNA bases, insect nervous systems, and human nervous systems work (to guess that caffeine or similar compounds might be produced and enjoyed), and who knows what else. That’s a lot of info, probably much more than we comparably have for hypothetical future ASIs. Absent all that, you get stuck with stupid predictions like “gasoline” or “bear fat with honey and salt”.
As an additional point: “bear fat”, specifically, is impractical for reasons I think even an alien with a modest understanding of Earth’s biosphere could guess (I mean, have you seen a bear, Mr. Alien?). But “pork fat” is an exceedingly common ingredient, and not too far off. So “lard with honey and salt” or “tallow with honey and salt” would be very much possible to mass-produce, and yet it’s ice cream that prevails. There may be something there; I’m sure lard with honey and salt is perfectly viable, and possibly even made in some circumstances. But ice cream feels more “casual”, and I think milk-based fats are more digestible than the ones that come straight from meat. Lard just doesn’t scream “refreshing thing you eat while on a walk”.
It makes sense as an extrapolation: chemical technology was advancing rapidly, so obviously the potential to do such things was already there, or would be shortly, and while actual police investigators had maybe never even really considered involving scientists in their work, Doyle, with his outside perspective, could spot the obvious connection and use it as a plot idea to reinforce just how clever and innovative his genius detective was.
It’s possibly another reason why this happens: fiction can be a really good outlet for laypersons who lack the credentials to put ideas out there through formal channels, while still giving those ideas high visibility. Once the idea is read by someone with the right technical chops, it can then spark actual research, and the prophecy fulfills itself.
Part of the reason why this would be beneficial is also that killing all mosquitoes is really hard and could have side effects for us (like loss of pollination). One could hope that maybe humans would have similar niche usefulness to the ASI despite the difference in power, but it’s not a guarantee.
I think those things can be generally interpreted as “trades” in the broadest sense. Sometimes trades of favour, reputation, or knowledge.
Of course, human-based entities are superintelligent in a different way than ASI probably will be, but I think that difference is irrelevant in many discussions involving ASI.
I think that, while the analogy absolutely does make sense and is worth taking seriously, this is wrong. The main reason the analogy is worth taking seriously is that using partial evidence is still generally better than using no evidence at all. But the evidence is partial: the fact that a corporation is ultimately still made of people means there are tons of values already etched into it from the get-go, characteristic ways it can fail at coordinating itself, and so on and so forth, which makes it a rather different case from an ASI.
If anything, I guess the argument would be “obviously aligning a corporation should be way easier than aligning an ASI, and look at our track record there!”.
He mentions he’s just learned coding, so I guess he had the AI build the scaffolding. But the experiment itself seems like a pretty natural idea; he literally likens it to a King’s council. I’m sure that once you have the concept, having an LLM code it is no big deal.
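For what it’s worth, the scaffolding really is tiny. A rough sketch of the council idea in Python (my own sketch, not the author’s actual code; `query_model` is a hypothetical stand-in for whatever chat-completion API you’d really call):

```python
# Sketch of a "king's council" of LLMs: several advisor models answer
# independently, then a chairman model synthesises a final verdict.

def query_model(model: str, prompt: str) -> str:
    """Placeholder: swap in a real chat-completion API call here."""
    raise NotImplementedError

def council(question: str, advisors: list[str], chairman: str) -> str:
    # Each advisor answers the same question with no knowledge of the others.
    opinions = [f"[{m}]\n{query_model(m, question)}" for m in advisors]
    # The chairman reads every opinion and rules on a final answer.
    synthesis = (
        f"Question: {question}\n\n"
        + "\n\n".join(opinions)
        + "\n\nWeigh these answers against each other and give a final verdict."
    )
    return query_model(chairman, synthesis)
```

Everything beyond that (retries, token limits, logging) is plumbing, which is exactly the part an LLM can write for you.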
I think not passing off LLM text as your own words is common good manners for a number of reasons, including that you are taking responsibility for words you didn’t write and possibly didn’t even read closely enough, so it’s going to be on you if someone reads too much into them. But this doesn’t really require any assumptions about LLMs themselves, their theory of mind, etc. Nearly the same would apply to hiring a human ghostwriter to expand on your rough draft; it’s just that that has never been a problem until now, because ghostwriters cost a lot more than a few LLM tokens.
However, the plausible assumption has begun to tremble since we had a curated post whose author admitted to generating it by using Claude Opus 4.1 and substantially editing the output.
TBF, “being a curated post on LW” doesn’t stop something from also being a mix-and-match of arguments already made by others. One of the most common criticisms of LW I’ve seen is that it’s a community reinventing a lot of well-worn philosophical wheels (which personally I don’t think is a great dunk; exploring and reinventing things for yourself is often the best way to engage with them at a deep level).
Thanks! I guess my original statement came off a bit too strong; what I meant is that while there is a frontier of trade-offs (maybe the GPUs’ greater flexibility is worth the 2x energy cost?), I didn’t expect the gap to be orders of magnitude. That’s good enough for me, with the understanding that any such estimates will never be particularly accurate anyway and just give us a rough idea of how much compute these companies are actually fielding. What they squeeze out of that will depend on a bunch of other details anyway, so scale is the best we can guess at.
I mean, we do this too! Like, if you were doing a very boring, simple task, you would probably seek outlets for your mental energy (e.g. little additional self-imposed challenges, humming, fiddling, etc.).
Well, within reason that can happen; I am not saying the metric is going to be perfect. But it’s probably a decent first-order approximation, because that logic can’t stretch forever. If instead of a factor of 2 it was a factor of 10, the trade-off would probably not be worth it.
This is an argument from absurdity against infinite utility functions, but not quite against unbounded ones.
Can you elaborate on the practical distinction? My impression is that if your utility function is unbounded, then you should always be able to devise paths that lead to infinite utility, even just by stacking infinitely many finite utility gains. So I don’t know if the difference matters that much.
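To make concrete what I mean (my own construction, standard St. Petersburg-style, not something from the quoted argument): if $U$ is unbounded, you can pick outcomes $x_n$ with $U(x_n) \ge 2^n$, and the gamble that pays $x_n$ with probability $2^{-n}$ then has

$$\mathbb{E}[U] \;=\; \sum_{n=1}^{\infty} 2^{-n}\, U(x_n) \;\ge\; \sum_{n=1}^{\infty} 2^{-n} \cdot 2^n \;=\; \infty,$$

so every individual outcome has finite utility, yet unboundedness alone already buys you an infinite-expected-utility path.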
Yeah, it’s not like the point of outreach is to mobilise citizen science on alignment (though that may happen). It’s that in a democracy the public is an important force. You can pick the option of focusing on converting a few powerful people and hope they can get shit done via non-political avenues, but that hasn’t worked spectacularly so far either: such people are still subject to classic race-to-the-bottom dynamics, and then you get cases like Altman and Musk, who all in all may have ended up net negative for the AI safety cause.