Okay, I feel it now

I’ve been coming to LessWrong for a while. I’ve read most of the arguments for how and why things might go wrong.

I’ve been keeping up with most developments. I’ve been following alignment efforts. I’ve done some thinking about the challenges involved.

But now I feel it.

Spending time observing ChatGPT – its abilities, its quirks, its flaws – has brought my feelings into step with my beliefs.

I already appreciated why I should be concerned about AI. Like I said, I’d read the arguments, and I’d often agreed.

But my appreciation took a detached, ‘I can’t fault the reasoning so I should accept the conclusion’ kind of form. I was concerned in the abstract, but I was never really worried. At least some of my concern was second-hand; people I respected seemed to care a lot, so I probably should too. It was forced.

I spend a lot of time thinking about catastrophic risks. When I consider something like an engineered pandemic, I’ve always felt the danger. It comes naturally to me. It’s intuitive. The same goes for many other threats. But only in the last few weeks has that become true for AI.

This is all a little embarrassing to admit. I know that ChatGPT isn’t an enormous leap from what existed previously. Yet it has significantly changed how I feel about AI risk. It’s made things click in a way that I can’t fully explain; all the arguments now hit with an added force.

Those who’ve always ‘felt it’ are probably wondering what took me so long, and why now. I’m not sure. But I doubt my experience is unique. Hopefully some of you know exactly what I’m trying to convey here.

Before I believed it. Now I feel it.

And as much as I’d like to think otherwise, that makes a big difference.