Are you passionate about pushing for a global halt to AGI development? An international treaty banning superintelligent AI? Pausing AI? Before it’s too late to prevent human extinction?
Would you like to live with a group of like-minded people pushing for the same?
Do you want to do more, but don’t have the financial means to support yourself while volunteering?
Then apply to stay at Pause House. I (Greg Colbourn) am offering free accommodation, (vegan) food, and a small stipend (£50/week) for those who need it. In exchange I ask for a commitment to spend at least 20 hrs a week on work related to pushing for a Pause. This could be either campaigning/advocacy or raising public awareness.
If you have an income, then I ask that you pay ~cost price (£20/day).
Pause House is located in Blackpool, UK, next door to CEEALAR (which I founded in 2018); 1hr from Manchester, 3.5hrs from London. Downstairs it has a large communal work/social/events room, dining room, well-equipped kitchen, toilet, laundry and small courtyard. On the upper floors there are a meeting room and 12 bedrooms, 4 with en suite bathrooms (first come, first served!). The other 8 bedrooms all have sinks, plus the use of 2 shared showers and 2 shared toilets.
This is not the first time people have tried to stop an economic process by campaigning, and it will not be the last time such a campaign fails.
The development of AI is a race.
Truth-orientedness is critical, but so are momentum and energy. Dropping this kind of conclusion helps a lot less than giving gears-level clarity on the dynamics that make this very hard, which people need to navigate if they’re trying to make a pause work.
My model is that without top-down support from a superpower whose leadership actually gets the reasons why, if anyone builds it, everyone dies, the dynamic system of civilization just pushes onwards toward the most economically and strategically important technology in history, and opposition gets outmaneuvered, outspent, or crushed. But! Attempts to plant seeds and strategically get ideas into the right places have some slim chance of getting someone with the power to halt this race to grasp the actual shape of the situation. That shape is not “we build it first: ~80% dominance; they build it first: ~100% we die, because they won or blew the world up”, but rather “we build it first: we die, because we don’t have the alignment tech; same if they build it first; so we actually have to stop”.
The difference between models that seriously understand the gears of what alignment would take, how far current attempts are from succeeding, and the technical details, versus the vague, washy “well, even 10% risk is too high”, is absolutely critical: it determines whether the relevant people see the EV of the bet they’re taking correctly.
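To make that contrast concrete, here is a toy expected-value comparison using the rough figures above (the ~80%/~100% numbers are illustrative, not actual estimates, and $U(\cdot)$ is a stand-in utility function):

$$
\begin{aligned}
\textbf{Naive race model:}\quad & \mathbb{E}[\text{we build it first}] \approx 0.8\,U(\text{dominance}) \\
& \mathbb{E}[\text{they build it first}] \approx 1.0\,U(\text{extinction}) \\
\textbf{Corrected model:}\quad & \mathbb{E}[\text{we build it}] \approx \mathbb{E}[\text{they build it}] \approx U(\text{extinction}) \\
& \mathbb{E}[\text{coordinated halt}] = U(\text{status quo})
\end{aligned}
$$

Under the naive model racing looks forced; under the corrected one, the only option that avoids $U(\text{extinction})$ is the halt, which is exactly the update the relevant decision-makers would need to make.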