After listening to many techbros, I often hear the notion that AI, or more specifically LLMs like ChatGPT or Claude, are "just tools to be used." This notion is usually followed by some pro-AI or AI-accelerationist discussion, but that's beside the point I want to make.
I feel as though AI has exceeded the capabilities of a "tool," and it's probably harmful to call it one. Take for example a hammer, or even a gun, both reasonably classified as tools. This is because, by definition, they are used to complete a task, whether that is hammering a nail or subduing a criminal. An LLM falls under this definition as well, but I've noticed the analogy stops when it comes to distributing responsibility. When someone hammers a nail, the responsibility for the completed task falls wholly on the person hammering rather than on the hammer itself. In the same way, when someone is shot, the responsibility falls on the shooter rather than the gun.
But this changes when it comes to LLMs. For example, if someone vibecodes an app (uses AI to program it), the distribution of responsibility becomes much murkier. The user prompts the LLM, while the LLM does the "heavy lifting" of actually writing the code, so we are more likely to distribute responsibility, or credit for the task's completion, less one-sidedly when dealing with LLMs.
In this scenario the LLM clearly exceeds the classification of "tool," assuming firearms and hammers are also considered tools, and the two should not be treated as equivalent. The problem with this false equivalence is that the LLM gets underestimated, reduced to the level of a hammer, which fuels further anti-AI-safety sentiment at the consumer level. To solve this, either AI should be decoupled from the label of "tool" entirely, or the definition of "tool" must evolve with it.
I think the theme presented, prioritizing status as a product of quality rather than as a product of signaling mechanisms, is a valid one. Still, I feel that, in the example given, presenting casually in an interview rather than in a suit is just another way of signaling to the interviewer: "I have the skills because I look the part of a data scientist," or "I have the skills because I am not doing level 1 signaling; rather, I am doing level 2 signaling." If you were to completely avoid signaling, your fashion choices would be determined entirely by you, in which case whether you wore a suit or not would be inconsequential, because internally you are not signaling at all; you are "legit." In the same vein, I don't think there are hard lines between the "winners and losers brackets" as proposed. There are useful components from both levels 1 and 2 that can accommodate both being "legit" and integrating well into society.