There is a distinction between “pessimistic solutionism” and “optimistic solutionism”.
In optimistic solutionism, you think that yes, it’s possible to make mistakes, but only if you’re really trying to screw up. Basically any attempt at safety is largely going to work. Safety is easy. The consequences of a few mistakes are mild, so we can find out what works by trial and error.
In pessimistic solutionism, you think doing it safely is theoretically possible, but really hard. The whole field is littered with subtle booby traps. The first ten things you think of to make things safe don’t work, for complicated reasons. There is no simple, safe way to test whether something is safe. The consequences of one mistake anywhere could doom all of humanity.
With optimistic solutionism, the attitude is “go right ahead, oh, and keep an eye on safety”.
What about pessimistic solutionism? What should you do when safe AI is in theory possible to make, but really, really hard? Perhaps try to halt AI progress until we have figured out how to do it safely. Take things slow. Organize one institution that will go as slowly and carefully as possible, while banning anyone faster and more reckless.
I think this is the world we are in. Safe AI isn’t impossible, but it is really hard.
Solutionism isn’t opposed to optimism or pessimism. It’s a separate axis.
An accurate model of the future should include solutions we haven’t invented yet, and also problems we haven’t discovered yet. Both of these can be predictable, or can be hard to predict.
The correct large-scale societal reaction is to try to solve the problem. But sometimes you personally have nowhere near the skills, resources, or comparative advantage to solve it, so you leave it to someone else.
In the example of solutionism given, the difficulty of fixing atmospheric nitrogen depended on the details of chemistry. If it had been easier, it would have been a case of “we have basically almost got this working; keep a few people finishing off the details and it will be fine.”
If nitrogen fixation had turned out to be harder, then a period of tightened belts and sharply rationed nitrogen, along with a massive, well-funded program to find the solution as soon as possible, would have been required.
Reality is not required to send you problems within your capability to solve. Reality also sends you problems that are solvable, but not without a lot of difficulty and tradeoffs.