So far, we have documented cases of Generative AI being used to subvert elections in Romania (actually causing an annulment).
AFAIK that was not because of Gen AI, though the broader point of your comment does stand.
Previously, I said:
People are very worried about a future in which a lot of the Internet is AI-generated. I’m kinda not. So far, AIs are more truth-tracking and kinder than humans. I think the default (conditional on OK alignment) is that an Internet that includes a much higher population of AIs is a much better experience for humans than the current Internet, which is full of bullying and lies.
All such discussions hinge on AI being relatively aligned, though. Of course, an Internet full of misaligned AIs would be bad for humans, but the reason is human disempowerment, not any of the usual reasons people say such an Internet would be terrible.
I feel good about this prediction so far. Instagram and TikTok now have a significant amount of AI-generated videos (though they haven’t overrun these platforms by any means). The categories I’ve seen so far are:
- Low-brow animated stories.
- Fantasy or sci-fi scenarios with music.
- Colorful AI-generated art.
- Cute meme animals.
The greatest sin of this content is that it’s often low quality. But that’s not really that great a sin. I think, all things considered, AI slop is above-average content. Other content often contains bullying, meanness, and lies; AI-generated content rarely does.
Also, so far, this is mostly thanks to humans and to AI guardrails, not really due to the character of AIs as I expected in my initial quick take. It looks like humans are using this tech in mostly good-spirited ways so far.
Hmm but humans are not ruthless consequentialists, despite being consequentialist enough to be able to do all kinds of tasks and build civilization. So I don’t see how the Optimist’s argument is addressed.
We’re still in the part of AI 2027 that was easy to predict. They point this out themselves.
Sure but he hasn’t laid out the argument. “something something simulation acausal trade” isn’t a motivation.
I’d like to know what your motivations are for doing what you’re doing! In the first podcast you hinted at “weird reasons” but in the end you didn’t say them explicitly. I’m thinking about this quote:
Yeah, maybe a general question here is: I engage in recruiting sometimes and sometimes people are like, “So why should I work at Redwood Research, Buck?” And I’m like, “Well, I think it’s good for reducing AI takeover risk and perhaps making some other things go better.” And I feel a little weird about the fact that actually my motivation is in some sense a pretty weird other thing.
We love Claude, Claude is frankly a more responsible, ethical, wise agent than we are at this point, plus we have to worry that a human is secretly scheming whereas with Claude we are pretty sure it isn’t; therefore, we aren’t even trying to hide the fact that Claude is basically telling us all what to do and we are willingly obeying—in fact, we are proud of it.
My best guess is that this would be OK
My own felt sense, as an outsider, is that the pessimists look more ideological/political and fervent than the relatively normal-looking labs. According to the frame of the essay, the “catastrophe brought about with good intent” could easily be preventing AI progress from continuing and the political means to bring that about.
by an ex-lab employee
How do we know this is true?
People seemed confused by my take here, but it’s the same take Davidad expressed in this thread that has been making rounds: https://x.com/davidad/status/2011845180484133071
it fails to produce a readable essay
Do you just dislike the writing style or do you think it’s seriously “unreadable” in some sense?
If more people were egoistic in such a forward-looking way, the world would be better off for it.
It would be really really helpful if the discussion wasn’t so meta. Everyone seems to take for granted that Trump did Something that is really really worrying but no-one says it. What is that something and why does it make you so worried?
Yeah, maybe a general question here is: I engage in recruiting sometimes and sometimes people are like, “So why should I work at Redwood Research, Buck?” And I’m like, “Well, I think it’s good for reducing AI takeover risk and perhaps making some other things go better.” And I feel a little weird about the fact that actually my motivation is in some sense a pretty weird other thing.
I would definitely like to know what this weird other thing is exactly. You only hinted at it in the podcast!
My main guess at why you’re talking past each other is that you think it far more likely than they do that ASI results in human extinction or some nefarious outcome. They think it’s something like 10% to 40% likely. They also probably think this is going to be gradual enough for humans to augment and keep up with AIs cognitively. And, sure, many things could happen, including property rights losing meaning. But under this view it’s not that crazy that property rights continue to be respected and enforced. Human norms will have a clear unbroken lineage.
That account is notorious on X for making things up. It doesn’t even try to make them believable. I would disregard anything coming from it.
That this post is at 47 upvotes and no one has said this is crazy. LessWrong, please get your act together.
Hmm, part of the reason I asked is that the reasoning in your comment is the kind of cognitive process that tends to exhaust me when I have to work through it. It somehow coincides with me being more neurotic overall. So, basically, you think through all that explicit stuff about social life, and you don’t feel at least a little pang of psychological pain/exhaustion? The very opening phrase (“a huge part of...”) reads like my thoughts when I’m ruminating about this stuff.
Sorry if this is a little intrusive; I’m just kind of curious, beyond fishing for insights from people who might have similar thought patterns.
Do you struggle with feelings of isolation? I do sometimes, and I try to fix that by taking more social bids and proactively seeking social life. And then I immediately pull out because I get overwhelmed by social life very easily and it kinda colonizes my thought processes too much. So I’m kind of stuck in that loop of seeking more of it and then pulling out and then seeking more of it...
The Fragility of Value thesis and the Orthogonality thesis both hold, for this type of agent.
...
E.g. its vision for a future utopia would actually be quite bad from our perspective because there’s some important value it lacks (such as diversity, or consent, or whatever)
I think we have enough evidence to say that, in practice, this turns out to be very easy or moot. Values tend to cluster in LLMs (good with good and bad with bad; see the emergent misalignment results), so value fragility isn’t a hard problem.
Rationalists and Pause AI people on X are accusing Davidad of suffering from AI psychosis. I think it’s them who have lost the plot, actually, not Davidad. The move here looks political rather than truth-tracking: “Davidad is now my political opponent, so I’m accusing him of being crazy.” This happened to Emmett Shear too at some point.
I also strongly believe AI psychosis to be a far more limited phenomenon than people here seem to believe. I think you’re treating it as a good soldier in your army of arguments rather than honestly investigating it for what it is.