Today I’m running a meetup about AI 2027. I plan to end it with a short pitch for If Anyone Builds It. I think the meetup itself is solid: talking about some of Scott’s writing and making predictions about the future are both good ways to spend an afternoon. I wouldn’t run it if I thought it was just going to be bad.
But part of the motive is marginal If Anyone Builds It pre-orders. I’m wary here, because several notable organizations that looked like they were not about AI x-risk eventually turned out to be mostly about AI x-risk. I don’t want to wind up like that, and I super don’t want to be focused on x-risk under the hood while most people think I’m working on something else.
Current solution: say the extra motive out loud.
Personally I’m against AI x-risk.
“In my defense, if I had meant to offer the Senator a bomb removal squad, I would have said bomb removal squad.”
– Unsong
As in, you’re against the AI x-risk movement, or you want to reduce AI x-risk?
I’m in favor of humanity surviving.