I agree that the crux is the difference between public and private models. That’s exactly what I was pointing to in the opener by saying maybe somebody is completing a misaligned Agent-4 in a lab right when this is happening in public. That would make all of this concern almost useless. Still, the concern would be in the air and might make decision-makers a bit more cautious, which could be a nontrivial advantage.
I agree that anything that produces public worry earlier is probably important and useful. The only exceptions would be outright lies that could blow back. But sparking concerns about job losses early wouldn’t be a lie. I’m consistently a bit puzzled as to why other alignment people don’t seem to think we’ll get catastrophic job losses before AGI. Mostly I suspect people just don’t spend time thinking about it, which makes sense since actual misalignment and takeover is so much worse. But I think it’s somewhere between possible and likely that job losses will be very severe, and people should worry about them while there’s still time to slow them down dramatically, which would also slow AGI.
Constantly asking politicians about their plans seems like a good start. Saying you’re an AI researcher when you do would be better.
To your first point:
Yes, I think incompetence will both be mistaken for misalignment when it isn’t, and also create real misalignments (largely harmless ones).
I think this wouldn’t be that helpful if the public really followed the logic closely; ASI wouldn’t be incompetent, so it wouldn’t fail in the same ways. But the two issues are semantically linked, so this will still get the public worried about alignment. And they’ll stay worried even if they do untangle the logic, because they should be.
Because AI before AGI will have effects similar to those of previous productivity-enhancing technologies.
Those aren’t necessarily contradictory: you could have big jumps in unemployment even with increases in average productivity. You already see this happening in software development, where rising productivity for senior employees has coincided with fewer junior hires. While I expect the effect today is pretty small and has more to do with the end of ZIRP and previous tech overhiring, you’ll probably see it play out in a big way as better AI tools take the spots of new grads in the run-up to AGI.
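To make the arithmetic concrete, here’s a toy sketch with made-up numbers (my own illustration, not data from anywhere): a team that stops hiring juniors can raise output per worker while total employment falls.

```python
# Toy numbers only: average productivity can rise while employment falls.
before_workers, before_output = 12, 120  # 10 seniors + 2 juniors, 10 units/worker
after_workers, after_output = 10, 130    # seniors with AI tools, no juniors

print(before_output / before_workers)  # 10.0 units per worker
print(after_output / after_workers)    # 13.0 units per worker: productivity up
print(before_workers - after_workers)  # 2 fewer jobs: employment down
```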