I was thinking about non-obvious incorrect uses for LLMs: cases where the output is usable but not aligned with the target audience. For example, using an LLM to design a return-to-office distribution schedule seems like a good idea because it plays to LLMs' ability to solve “math” problems. In reality, though, this framing masks the actually difficult part of the task: human preferences. The model won't accurately factor in niche cultural imprints (lunch time and duration, bathroom usage, tardiness), nor will it detect chaotic phenomena (traffic fluctuations, weather, sports game results).
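To make the “math” point concrete: the part of the scheduling task an LLM can actually do is trivially solvable without one. Here's a minimal sketch, a round-robin assignment under a per-day desk cap; every name and number in it (DAYS, CAPACITY, EMPLOYEES) is made up for illustration.

```python
# A minimal sketch of the "easy math" half of the problem: distributing
# employees across in-office days under a per-day capacity cap. A plain
# round-robin handles it; no LLM required.
from itertools import cycle
from collections import defaultdict

DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri"]
CAPACITY = 2  # desks available per day (hypothetical)
EMPLOYEES = ["Ana", "Ben", "Chloe", "Dev", "Eli", "Fay"]

def assign(employees, days, capacity):
    """Round-robin employees onto days, skipping days that are full."""
    schedule = defaultdict(list)
    day_iter = cycle(days)
    for person in employees:
        day = next(day_iter)
        # Skip full days; terminates as long as capacity * len(days)
        # is at least len(employees).
        while len(schedule[day]) >= capacity:
            day = next(day_iter)
        schedule[day].append(person)
    return schedule

for day, people in assign(EMPLOYEES, DAYS, CAPACITY).items():
    print(day, people)
```

Notice what's absent: nothing in this code, and nothing an LLM generates in its place, represents who eats lunch together, who commutes through the worst traffic, or who resents a Friday slot. That missing layer is the whole problem.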