Discussions about possible economic futures should account for the (IMO high) possibility that everyone might have inexpensive access to sufficient intelligence to accomplish basically any task they would need intelligence for. There are some exceptions, like quant trading, where you have a use case for arbitrarily high intelligence, but for most businesses the marginal gains from SOTA intelligence won't be that high. I'd imagine that raw human intelligence just becomes less valuable (as it has been for most of human history). I guess this is worse because many businesses would also not need employees for physical tasks. But the point is that many such non-tech businesses might be fine.
Separately: Is AI safety at all feasible to tackle in the likely scenario that many people will be able to build extremely powerful but non-SOTA AI without safety mechanisms in place? Will the hope be that a strong enough gap exists between aligned AI and everyone else’s non-aligned AI?