[Question] Ethics and prospects of AI-related jobs?

I’ve been on the lookout for new jobs recently, and one thing I’ve noticed is that the market seems flooded with ads for AI-related roles. By that I don’t mean work on building models (or aligning them, alas), but work on building applications that use generative AI or other recent advances to create new software products.

My impression is, first, that there’s probably something of a bubble here: I doubt many of these ideas can deliver on their promises, especially since they rely so heavily on still fairly unreliable LLMs. And second, while the jobs are well paid and sound fun, I’m not sure how I feel about them. They all essentially aim at automating away other jobs, one way or another. Whether that’s a good thing depends on what else happens, and on the specific job and the quality of the work: a good automated GP for diagnosis would probably do a lot of good, but a rushed one might be net negative, and automating creative work seems to me like the wrong road to go down in general if we want good AI futures.

What are your intuitions about this? Which kinds of AI jobs do you think have the most potential for overall positive or negative value for society?
