I can’t find any off the top of my head, but I’m pretty sure the LW/Lightcone salary question has been asked and answered before, so it might help to link to past discussions?
Apologies if I gave the impression that “a selfish person should love all humans equally”; while I’m sympathetic to arguments from e.g. Parfit’s book Reasons and Persons[1], I don’t go anywhere near that far. I was making a weaker and (I think) uncontroversial claim, something closer to Adam Smith’s invisible hand: that when you aggregate every individual’s selfish focus on their own close family ties, moral concern overall ends up relatively more spread out, because the close circles of your close circle aren’t exactly identical to your own.
[1] E.g. the argument that distance in time and distance in space are morally analogous. So if you imagine people in the distant past having had the choice of a better life in their own time, in exchange for there being no people in the far future, then you’d wish they had cared about more than just their own present. A similar logic argues against applying a very high discount rate to your moral concern for beings that are very distant from you in e.g. space, closeness of ties, etc.
Well, if there were no minds to care about things, what would it even mean that something should be terminally cared about?
Re: value falloff: sure, but if you start with your close circle, then aggregate the preferences of that close circle (whose members have close circles of their own), and rinse and repeat, then this falloff for any individual becomes comparatively much less significant for society as a whole.
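To make that concrete, here’s a toy sketch (all numbers made up, nothing rigorous): suppose each person keeps half their effective concern for their own priorities and spreads the other half equally over a close circle of five people, and then you iterate “adopt, at a discount, the concerns of your close circle”:

```python
# Toy sketch with made-up numbers: iterating "adopt, at a discount, the
# concerns of your close circle" spreads a little of everyone's concern
# across most of society, even though each individual step falls off steeply.
import random

random.seed(0)
N = 1000            # people in the toy society
CIRCLE = 5          # size of each person's close circle
SELF_WEIGHT = 0.5   # fraction of concern kept for one's own priorities

circles = {i: random.sample([j for j in range(N) if j != i], CIRCLE) for i in range(N)}

# concern[i][j] = how much of person i's effective concern lands on person j
concern = [{i: 1.0} for i in range(N)]

def step(concern):
    """One round: keep SELF_WEIGHT of your own concern, split the rest over your circle's concerns."""
    new = []
    for i in range(N):
        acc = {j: w * SELF_WEIGHT for j, w in concern[i].items()}
        share = (1 - SELF_WEIGHT) / CIRCLE
        for friend in circles[i]:
            for j, w in concern[friend].items():
                acc[j] = acc.get(j, 0.0) + share * w
        new.append(acc)
    return new

for rounds in range(6):
    reached = sum(1 for w in concern[0].values() if w > 1e-6)
    print(f"after {rounds} rounds, person 0's concern touches {reached} of {N} people")
    concern = step(concern)
```

With these made-up parameters, a handful of rounds already puts a sliver of person 0’s concern on most of the 1,000-person toy society, even though the weight on any individual stranger stays tiny; that’s the sense in which the falloff matters much less in aggregate than it does for any single person.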
Maybe our disagreement is that I’m more skeptical about the legislature proactively suggesting any good legislation? My default assumption is that without leadership, hardly anything of value gets done. Like, it’s an obviously good idea to repeal the Jones Act, and yet it’s persisted for a hundred years.
This discussion felt fine to me, though I’m not sure to what extent anyone got convinced of anything, so it may have been fine but not necessarily worthwhile, or something. Anyway, I’m also in favor of people being able to disengage from conversations without that becoming a meta discussion of its own, so… *shrugs*.
I do agree that it’s easy to have discussions about politics become bad, even on LW.
I know that Trump doesn’t have control of the legislature or anything, but I guess I’m still not quite understanding how all this is supposed to relate to the Jones Act question. Do you think if (big if) Trump wanted the Jones Act repealed, it would not be possible to find a (potentially bipartisan) majority of votes for this in the House and the Senate? (Let’s leave the filibuster aside for a moment.) This is not like e.g. cutting entitlement programs; the interest groups defending the Jones Act are just not that powerful.
I don’t know, I’ve been reading a lot of Slow Boring and Nate Silver, and to me this just really doesn’t seem to remotely describe how the Trump coalition works. Beginning with the idea that there are powerful party elites whose opinion Trump has to care about, rather than the other way around.
Like, the fact that Trump moderated the entire party on abortion and entitlement cuts seems like pretty strong evidence against that idea, as well. Or, Trump’s recent demand that the US Senate should confirm his appointees via recess appointments, similarly really does not strike me as Trump caring about what party elites think.
My model is more like, both Trump and party elites care about what their base thinks, and Trump can mobilize the base better (but not perfectly) than the party elites can, so Trump has a stronger position in that power dynamic. And isn’t that how he won the 2016 primary in the first place? He ran as a populist, so of course party elites did not want him to win, since the whole point of populist candidates is that they’re less beholden to elites. But he won, so now those elites mostly have to acquiesce.
All that said, to get back to the Jones Act thing: if Trump somehow wanted it repealed, that would have to happen via an act of Congress, so at that point he would obviously need votes in the US House and Senate. But that could in principle (though not necessarily in practice) happen on a bipartisan vote, too.
EDIT: And re: the importance of party elites, that idea also kind of runs counter to the thesis that today’s US political parties are very weak. Slow Boring has a couple of articles on this topic, like this one (not paywalled), all based on the book “The Hollow Parties”.
> so, convincing Republicans under his watch of just replacing the Jones Act is hardly possible
Given how the Trump coalition seems to have worked so far, I don’t find this rejoinder plausible. Yes, Trump is not immune to pressure from his constituents. For example, he backed away from what some consider the greatest achievement of his presidency (i.e. Operation Warp Speed) because his base was, or became, increasingly opposed to vaccination.
But in many other regards he’s shown a strong ability to make his constituents follow him (we might call it “leadership”, even if we don’t like where he leads), rather than the other way around. Like, his Supreme Court appointments overturned Roe v. Wade, yet in this year’s presidential election he campaigned against a national abortion ban, because he figured such a ban would harm his election prospects. And IIRC he’s moderated his entire party on making (or at least campaigning on) cuts to entitlement programs, too, again because it’s bad politics.
This is not to say that Trump will advocate for repealing the Jones Act. But rather that if he doesn’t do it, it will be because he doesn’t want to, not because his constituents don’t. The Jones Act is just not politically important enough for a rebellion by his base.
A much bigger problem here would be that Trump seems to have very dubious instincts on foreign policy and positive-sum trade (e.g. IIRC he’s been advocating for tariffs for a long time), and might well interpret repealing the Jones Act as showing weakness towards foreign nations, or some such.
Agreed insofar as shortform posts are conceptually short-lived, which is a bummer for high-karma shortform posts with big comment threads.
Disagreed insofar as by “automatically converted” you mean “the shortform author has no recourse against this”. I do wish there were both nudges to turn particularly high-value shortform posts (and particularly high-value comments, period!) into full posts, and assistance to make this as easy as possible, but I’m against forcing authors and commenters to do things against their wishes.
(Side note: there are also a few practical issues with converting shortform posts to full posts: the latter have titles, the former do not. The former have agreement votes, the latter do not. Do you straightforwardly port over the karma votes from shortform to full post? Full posts get an automatic strong upvote from their author, whereas comments only get an automatic regular upvote. Etc.)
Still, here are a few ideas for such non-coercive nudges and assistance (see the rough sketch after this list):
- An opt-in or opt-out feature to turn high-karma shortform posts into full posts.
- An email reminder or website notification to inform you about high-karma shortform posts or comments you could turn into full posts, ideally with a button you can click which does this for you.
- Since it can be a hassle to think up a title, some general tips or specific AI assistance for choosing one. (Though if there were AI assistance, it should not invent titles out of thin air, but rather make suggestions which closely hew to the shortform content. E.g. for your shortform post, it should be closer to “LessWrong shortform posts above some amount of karma should get automatically converted into personal blog posts”, rather than “a revolutionary suggestion to make LessWrong, the greatest of all websites, even better, with this one simple trick”.)
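As a rough sketch of the first two ideas, here’s roughly what the core logic could look like. All names, thresholds, and behavior below are purely hypothetical (and in Python rather than the actual LessWrong stack); the point is just that a nudge with an opt-out is straightforward to specify:

```python
# Hypothetical sketch of a non-coercive "promote your shortform" nudge.
# All names, thresholds, and behavior here are made up for illustration;
# this is not the actual LessWrong codebase or API.
from dataclasses import dataclass

KARMA_THRESHOLD = 50  # made-up cutoff for "particularly high-value"

@dataclass
class ShortformPost:
    author: str
    body: str
    karma: int
    opted_out_of_nudges: bool = False  # author preference, always respected

def maybe_nudge(post: ShortformPost, notify) -> bool:
    """Send a one-click 'convert to full post?' notification; never convert automatically."""
    if post.opted_out_of_nudges or post.karma < KARMA_THRESHOLD:
        return False
    notify(
        post.author,
        f"Your shortform post reached {post.karma} karma. "
        "Click here to turn it into a full post (you can edit the suggested title first).",
    )
    return True

# Example usage with a stand-in notification function:
if __name__ == "__main__":
    sent = maybe_nudge(
        ShortformPost(author="some_user", body="…", karma=75),
        notify=lambda user, msg: print(f"to {user}: {msg}"),
    )
    print("nudge sent:", sent)
```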
Re: moral patienthood, I understand the Sam Harris position (paraphrased by him here as “Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe.”) as saying that anything else that supposedly matters, only matters because conscious minds care about it. Like, a painting has no more intrinsic value in the universe than any other random arrangement of atoms like a rock; its value stems purely from conscious minds caring about it. Same with concepts like beauty and virtue and biodiversity and anything else that’s not directly about conscious minds.
And re: caring more about one’s close circle: well, everyone in your close circle has their own close circle they care about, and if you repeat that exercise often enough, the vast majority of people in the world are in someone’s close circle.
How would you avoid the data contamination issue where the AI system has been trained on the entire Internet and thus already knows about all of these vulnerabilities?
Yudkowsky has a pinned tweet that states the problem quite well: it’s not so much that alignment is necessarily infinitely difficult, but that it certainly doesn’t seem anywhere near as easy as advancing capabilities, and that’s a problem when what matters is whether the first powerful AI is aligned:
> Safely aligning a powerful AI will be said to be ‘difficult’ if that work takes two years longer or 50% more serial time, whichever is less, compared to the work of building a powerful AI without trying to safely align it.
It seems to me like the “more careful philosophy” part presupposes a) that decision-makers use philosophy to guide their decision-making, b) that decision-makers can distinguish more careful philosophy from less careful philosophy, and c) that doing this successfully would result in the correct (LW-style) philosophy winning out. I’m very skeptical of all three.
Counterexample to a): almost no billionaire philanthropy uses philosophy to guide decision-making.
Counterexample to b): it is a hard problem to identify expertise in domains you’re not an expert in.
Counterexample to c): from what I understand, in 2014, most of academia did not share EY’s and Bostrom’s views.
Presumably it was because Google had just bought DeepMind, back when it was the only game in town?
This NYT article (archive.is link) (reliability and source unknown) corroborates Musk’s perspective:
> As the discussion stretched into the chilly hours, it grew intense, and some of the more than 30 partyers gathered closer to listen. Mr. Page, hampered for more than a decade by an unusual ailment in his vocal cords, described his vision of a digital utopia in a whisper. Humans would eventually merge with artificially intelligent machines, he said. One day there would be many kinds of intelligence competing for resources, and the best would win.
> If that happens, Mr. Musk said, we’re doomed. The machines will destroy humanity.
> With a rasp of frustration, Mr. Page insisted his utopia should be pursued. Finally he called Mr. Musk a “specieist,” a person who favors humans over the digital life-forms of the future.
> That insult, Mr. Musk said later, was “the last straw.”
And this article from Business Insider also contains this context:
> Musk’s biographer, Walter Isaacson, also wrote about the fight but dated it to 2013 in his recent biography of Musk. Isaacson wrote that Musk said to Page at the time, “Well, yes, I am pro-human, I fucking like humanity, dude.”
> Musk’s birthday bash was not the only instance when the two clashed over AI.
> Page was CEO of Google when it acquired the AI lab DeepMind for more than $500 million in 2014. In the lead-up to the deal, though, Musk had approached DeepMind’s founder Demis Hassabis to convince him not to take the offer, according to Isaacson. “The future of AI should not be controlled by Larry,” Musk told Hassabis, according to Isaacson’s book.
Most configurations of matter, most courses of action, and most mind designs are not conducive to flourishing intelligent life. Just like most parts of the universe don’t contain flourishing intelligent life. I’m sure this stuff has been formally stated somewhere, but the underlying intuition seems pretty clear, doesn’t it?
What if whistleblowers and internal documents corroborated that they think what they’re doing could destroy the world?
Ilya is demonstrably not in on that mission, since his immediate next step after leaving OpenAI was to found yet another AGI company and thus increase x-risk.
I don’t understand the reference to assassination. Presumably there are already laws on the books that outlaw trying to destroy the world (?), so it would be enough to apply those to AGI companies.
I could barely see that despite always using a zoom level of 150%. So I’m sometimes baffled at the default zoom levels of sites like LessWrong, wondering if everyone just has way better eyes than me. I can barely read anything at 100% zoom, and certainly not that tiny difference in the formulas!