meta:
This seems to be almost exclusively based on the proxies of humans and human institutions. Reasons why this does not necessarily generalize to advanced AIs often become visible when looking from the perspective of other proxies, e.g., programs or insects.
Sandwiching:
So far, progress in ML has often followed this pattern:
1. ML models sort of suck; maybe they help a bit sometimes. Humans are clearly better (“humans better”).
2. ML models become roughly comparable to humans overall, but with different strengths and weaknesses; human+AI teams beat both the best AIs alone and the best humans alone (“age of cyborgs”).
3. Human inputs just interfere with superior AI suggestions (“age of AIs”).
(Chess, Go, image generation, and poetry seem to be at different stages of this sequence.)
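The three stages can be illustrated with a toy numerical sketch. All accuracies below are made-up assumptions chosen only to reproduce the qualitative pattern: tasks come in two types, the human is strong on one and weak on the other, and the fixed “cyborg” policy is a division of labor (human takes type A, AI takes type B).

```python
# Toy model of the three-stage pattern. Every number here is an
# illustrative assumption, not a measurement.

HUMAN = (0.90, 0.60)  # (accuracy on type-A tasks, accuracy on type-B tasks)

def solo(acc):
    """An agent answering both task types alone (types are 50/50)."""
    return sum(acc) / 2

def team(human, ai):
    """Division of labor: human answers type A, AI answers type B."""
    return (human[0] + ai[1]) / 2

# Assumed AI accuracies at each stage of the sequence.
eras = {
    "humans better":  (0.55, 0.55),  # AI weak everywhere
    "age of cyborgs": (0.60, 0.95),  # complementary strengths
    "age of AIs":     (0.97, 0.99),  # AI strong everywhere
}

for era, ai in eras.items():
    scores = {"human": solo(HUMAN), "AI": solo(ai), "team": team(HUMAN, ai)}
    best = max(scores, key=scores.get)
    print(f"{era:14s} human={scores['human']:.3f} AI={scores['AI']:.3f} "
          f"team={scores['team']:.3f} -> best: {best}")
```

With these numbers the human alone wins in stage 1, the team wins in stage 2, and in stage 3 keeping the human in the loop only drags the team below the AI alone.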
This seems to lead to a different intuition than the lawyer-owner case.
Also: the designer-engineer and lawyer-owner problems both seem related to the communication bottleneck between two human brains.