It would be slower for sure, at least, being bound to human dynamics. But “same problems but slower” isn’t the same as a solution/alternative. Admittedly better in the limited sense that it’s less likely to end with straight up extinction, but it’s a rather grim world either way.
dr_s
I feel like intelligence amplification is plenty destabilising. Consider how toxic intelligence discourse is or has been right now already:
- some people argue that some ethnic groups (usually their own) have inherently higher intelligence, which makes them better
- other people, wanting to push back against the former, go to the other extreme and claim intelligence doesn’t exist at all as an even partially measurable quantity
And what would you do with your intelligence amplification method? Sell it? So now richer people, and richer countries, are the ones to first reap the benefits, amplifying gaps in inequality which again have destabilising effects.
If you only consider the political side of it, a lot of this ends up in places similar to aligned ASI: similar issues.
This is why, in a much more real and also famous case, President Truman was justifiably angered and told “that son of a bitch”, Oppenheimer, to fuck off after Oppenheimer played the drama queen at him. Oppenheimer was trying to make nuclear weapons be about his own remorse at having helped create them. This feels obviously icky to me; I would not be surprised if Truman felt very nearly the same.
I did sympathise with Truman in the way that scene is portrayed in Nolan’s movie more than most seem to have (or even more than the movie intended). But I am not sure that wasn’t just Truman making the bombs about himself instead—he made the call, after all; it was his burden to bear. Which again sort of shifts the focus away from, you know, the approximately 200k civilians they killed and stuff.
I think they are, because in practice they just didn’t produce the same amount of economic growth. And for most people, the direct impact of these things is entertainment, or using them at work (where sometimes they feel like they make things worse). Meanwhile, I remember hearing a story of a woman (someone’s grandma) who was in awe of the washing machine they had just bought because, well, it had saved her hours of daily gruelling work. And that’s more impactful to one’s life than almost anything computers or the internet have done.
One last thing: I misunderstood the point you were making when you were talking about black holes. The point you were making was ‘“What maximizes entropy” is a bad morality’; what I thought I was reading was ‘dissipative adaptation does not work, because it predicts that we will collapse into a black hole, and that Earth developed complex life because the complex life did some nuclear fission after it was developed’.
My point was a bit more complex. Yes, there’s absolutely the morality argument—obviously something that prescribes “thou shalt make black holes” is a dumb morality and I will not follow it. But there’s also a predictive power argument. At a planetary scale, putting aside all the complexity issues you rightly bring up, it may be possible that life truly maximises entropy production given certain constraints. The Earth would have more entropy as a black hole, but the potential barrier to reaching that state is enormous, and so we’re stuck in the local maximum of a planet teeming with life instead. But Beff and e-acc carry the argument all the way to the universal scale, and that’s where it breaks down, because at the universal scale, black holes absolutely do dominate entropy production, and everything else is a rounding error, so life becomes inconsequential for the ledger.
To make a practical example: suppose future humanity becomes a Kardashev 3 civilization, using up all the energy output of the Milky Way and dissipating it at cosmic background temperature via radiation. That makes for an entropy production of approximately . Now suppose that this powerful civilization predicts that two stellar black holes, each of 3 solar masses, will at some point in the future merge near an inhabited system, and that this will cause trouble. With their immense power, this civilization finds a way to change the trajectory of one of those black holes, avoiding the merger and saving the system. Well, with that single change this civilization has averted the creation of roughly of entropy, that is, over 3 trillion years’ worth of its current entropy production! The civilization that does this will forever be a net negative in entropy creation for its whole existence, regardless of how much it splurges on using energy otherwise.
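For anyone who wants to sanity-check the orders of magnitude, here is a back-of-envelope version (the 3-solar-mass figures come from the scenario above, but the galactic luminosity of ~10^37 W is my own rough assumption, and I’m ignoring the mass radiated away as gravitational waves):

```latex
% Bekenstein–Hawking entropy of a black hole of mass M:
S_{\mathrm{BH}} = \frac{4\pi k_B G M^2}{\hbar c}
  \approx 1.5 \times 10^{54} \left(\tfrac{M}{M_\odot}\right)^{2} \,\mathrm{J/K}

% Entropy created by merging two 3 M_sun black holes into one 6 M_sun hole:
\Delta S = S(6 M_\odot) - 2\, S(3 M_\odot)
         = (36 - 18)\, S(M_\odot)
         \approx 2.6 \times 10^{55} \,\mathrm{J/K}

% Entropy production of a K3 civilization radiating the galactic luminosity
% L ~ 10^37 W at the CMB temperature T_CMB ~ 2.7 K:
\frac{dS}{dt} = \frac{L}{T_{\mathrm{CMB}}}
  \approx 4 \times 10^{36} \,\mathrm{W/K}
  \approx 10^{44} \,\mathrm{J\,K^{-1}\,yr^{-1}}
```

Dividing the two gives something on the order of 10^11–10^12 years of the civilization’s entire output for that single averted merger, depending on the luminosity one assumes: the trillions-of-years scale of the comparison.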
So, entropy production itself does not predict life at universal scales. It can’t. Life is just a tiny rounding error several digits down on that balance sheet. And even if on some local scales it may be possible that life is an avenue to maximizing entropy, overall those goals don’t stay aligned all the way to life taking over the universe.
the only point I disagree on is that I think that a tree is in fact a more efficient dissipator than no tree
I think that genuinely depends on details like the precise colour of the soil and the efficiency of the plant. We know photosynthesis is not very efficient at energy conversion (IIRC the top efficiency belongs to sugar cane, at a meagre 8%). Also, you could probably make a more dissipative surface by putting up a very dark, very efficient solar panel and then using it to power a heater. I suppose there’s an argument that solar panels are created by life, but that seems like a very tortuous way for thermodynamics to work.
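To make the albedo intuition concrete, here’s a toy estimate (the albedo and temperature figures are rough assumptions of mine; the model crudely treats every surface as absorbing sunlight and dumping it all back as heat at ground temperature):

```python
# Toy model: entropy production per square metre for different ground covers.
# Assumption: each surface absorbs a fraction (1 - albedo) of the incoming
# sunlight and re-emits it all as heat at the surface temperature.
# Albedo values are rough ballparks, not measurements.

SOLAR_FLUX = 1000.0  # W/m^2, typical clear-sky insolation at the surface
T_SUN = 5800.0       # K, effective temperature of sunlight
T_SURF = 300.0       # K, rough surface/emission temperature

ALBEDO = {
    "bare light soil":           0.30,  # assumption: light, dry soil
    "forest canopy":             0.12,  # assumption: typical broadleaf forest
    "dark solar panel + heater": 0.05,  # assumption: very low-albedo panel
}

def entropy_production(albedo):
    """Net entropy produced per m^2 per second: entropy of the heat dumped
    at T_SURF minus the entropy carried in by the absorbed sunlight."""
    absorbed = (1 - albedo) * SOLAR_FLUX
    return absorbed / T_SURF - absorbed / T_SUN

for name, a in ALBEDO.items():
    print(f"{name:28s} {entropy_production(a):.2f} W/(K m^2)")
```

On this toy model a canopy out-dissipates light soil but loses to a near-black panel; whether a real tree beats real soil then hinges on exactly the details mentioned above (soil colour, transpiration, conversion efficiency).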
Yeah, it’s not like the point of outreach is to mobilise citizen science on alignment (though that may happen). It’s that in a democracy the public is an important force. You can pick the option of focusing on converting a few powerful people and hoping they can get shit done via non-political avenues, but that hasn’t worked spectacularly so far either: such people are still subject to classic race-to-the-bottom dynamics, and then you get cases like Altman and Musk, who all in all may have ended up net negatives for the AI safety cause.
That’s just not true; try buying clothes from Shein instead of from some at least half-decent shop. Heck, I once bought a screwdriver at a pound store, thinking they couldn’t really ruin something that simple. The steel was so bad it basically bent and chipped upon meeting a screw.
consider how hard it was for society just to realize that COVID was transmitted via aerosols!
It was only hard because, inexplicably, no one bothered checking until over a year into the pandemic; we just took the whole “fomites and large droplets” story from colds and flu for granted, despite the evidence being, as we see here, pretty scant. There’s a serious coordination problem there, IMO, in how chaotic the research ended up being, rather than systematically and rapidly exploring all the very obvious questions we should have had some decent evidence on by April/May 2020.
True, though to be fair they’re a different type of story. The trickster has skills; they’re not conventional skills, but they have them in spades, and they are also clever and ambitious enough to use those skills to upend the existing order. Trickster narratives reward cunning, initiative and ambition, whereas traditional warrior narratives reward strength, bravery and honour. Meanwhile, the classic Christian narrative is something like “the saint fasted for fifty days and lashed himself for no good reason other than to prove how sinful he thought he was; then the Romans came to martyr him and he let them, the end. But joke’s on them ’cos now he’s in Heaven”. Humility, passivity and guilt.
That said, Christianity hasn’t exactly erased either warrior or trickster narratives. The knights of the Round Table or the paladins of Charlemagne are classic Christian warrior templates. Robin Hood is a classic Christian trickster (and medieval folklore also abounds with stories in which the Devil is foolish and easily tricked by a clever human he was trying to ensnare).
That’s not a bad idea. You could link something like “this post is a reply to X” and then people could explore “threads” of posts that are all rebuttals and arguments surrounding a single specific topic. Doesn’t even need to be about things that have gotten this hostile, sometimes you just want to write a full post because it’s more organic than a comment.
To a first approximation, they are as likely as you to be biased, so why do they get to be the judge?
I think the answer to this is, “because the post, specifically, is the author’s private space”. So they get to decide how to conduct discussion there (for reference, I always set moderation to Easy Going on mine, but I can see a point even to Reign of Terror if the topic is spicy enough). The free space for responses and rebuttals isn’t supposed to be the comments of the post, but the ability to write a different post in reply.
I do agree that, in general, if it comes to that—authors banning each other from comments and answering just via new posts—then maybe things have already gotten a bit too far into “internet drama” land and everyone could use some cooling down. And it’s probably generally easier to keep discussions on a post in that post’s comments. But I don’t think the principle is inherently unfair; you have the exact same rights as the other person and can always respond symmetrically, and that’s fairness.
Fun Baader-Meinhof effect I experienced: the very evening of the day I read this article, while chatting with my father-in-law, he mentioned (without any prompting from me) eating and enjoying a sandwich with lard, honey and chestnuts while vacationing in the Alps. Not quite the same, but close enough, with more accessible ingredients. And the mountain setting makes a lot of sense because:
- all the ingredients would be local and traditional
- the cold means people burn more energy and thus favours the development of more energy-dense foods
But I don’t think the right conclusion is “Unpredictable!” so much as “So put in the work if you care to predict it?”.
I still think there’s a bit of post-hoc reasoning here; it’s easy to rationalise why we would like ice cream, specifically, after the fact, and harder to make novel predictions that are that spot-on. Though as you say prediction can bring you a bit further than expected.
There’s also the matter of information. How much information are the aliens even given to work from? To predict “chocolate ice cream” you would need data on the chemical composition of our biosphere, the ecological niches occupied by various animals, how mammalian biology and child-rearing works, how parasites work, how our biochemical energy producing mechanisms work, how DNA bases, insect nervous systems, and human nervous systems work (to guess that caffeine or similar compounds might be produced and enjoyed) and who knows what else. That’s a lot of info, probably much more than we comparably have for hypothetical future ASIs. Absent all that, you get stuck with stupid predictions like “gasoline” or “bear fat with honey and salt”.
As an additional point—“bear fat”, specifically, is impractical for reasons I think even an alien with a modest understanding of Earth’s biosphere could guess (I mean, have you seen a bear, Mr. Alien?). But “pork fat” is an exceedingly common ingredient, and not too far off. So “lard with honey and salt” or “tallow with honey and salt” would be very much possible to mass-produce, and yet it’s the ice cream that prevails. There may be something there; I’m sure lard with honey and salt is perfectly viable, and possibly even made in some circumstances. But ice cream feels more “casual”, and I think milk-based fats are more digestible than the ones straight from the meat. Lard just doesn’t scream “refreshing thing you eat while on a walk”.
It makes sense as an extrapolation—chemical technology was advancing rapidly, so obviously the potential to do such things was there already, or would be shortly, and while maybe actual police investigators had never even really considered involving scientists in their work, Doyle with his outside perspective could spot the obvious connection and use it as a plot device to reinforce just how clever and innovative his genius detective was.
It’s possibly another argument for why this happens: fiction can be a really good outlet for laypeople without the credentials to put ideas out there and give them high visibility. Once the idea is read by someone with the right technical chops, it can then spark actual research, and the prophecy fulfils itself.
Part of the reason why this would be beneficial is also that killing all mosquitoes is really hard and could have side effects for us (like loss of pollination). One could hope that maybe humans would have similar niche usefulness to the ASI despite the difference in power, but it’s not a guarantee.
I think those things can be generally interpreted as “trades” in the broadest sense. Sometimes trades of favour, reputation, or knowledge.
Of course, human-based entities are superintelligent in a different way than ASI probably will be, but I think that difference is irrelevant in many discussions involving ASI.
I think that while the analogy absolutely does make sense and is worth taking seriously, this is wrong. The analogy is worth taking seriously mainly because using partial evidence is still generally better than using no evidence at all. But the evidence is partial: a corporation is ultimately still made of people, so there are tons of values already etched into it from the get-go, characteristic ways it can fail at coordinating itself, and so on and so forth, which makes it a rather different case from an ASI.
If anything, I guess the argument would be “obviously aligning a corporation should be way easier than aligning an ASI, and look at our track record there!”.
He mentions he’s just learned coding, so I guess he had the AI build the scaffolding. But the experiment itself seems like a pretty natural idea; he literally likens it to a King’s council. I’m sure that once you have the concept, having an LLM code it is no big deal.
I think it’s just a matter of what’s more technologically achievable. Building LLMs turned out to be a lot easier than understanding neuroscience to a level even remotely close to what’s necessary to achieve 1 or 2. And both of those also require huge political capital due to needing (likely dangerous) human experimentation that would currently be considered unacceptable.