I said a variation of the Peter Principle. Maybe I should have said some relation of the Peter Principle, or not used that term at all. What I’m talking about isn’t promotion but expansion into new types of tasks.
Once somebody makes money deploying agents in one domain, other people will want to try similar agents in new, probably somewhat more difficult domains. This is a very loose analog of promotion.
The bit about not wanting to demote them is totally different. I think they can be bad at a job and make mistakes that damage both their reputation and yours, and still be well worth keeping in that job. There are also momentum effects: nobody wants to re-hire all the people they just fired in favor of AI and admit they made a big mistake. Many decision-makers would be tempted to push through, trying to upgrade the AI and work around its problems, instead of admitting they screwed up.
See the response below for the rest of that logic. There can be more upside than downside even with some disastrous mistakes or near misses that go viral.
I’d be happy to not call it a relation of the Peter Principle at all. Let’s call it the Seth Principle; I’d find it funny to have a principle of incompetence named after me :)