I haven’t tried AI writing recently, but last time I did, it was horrible. My kids and I tried prompts like “write a story about two kittens who went to a forest and found a treasure”, and the result was invariably like this:
5 paragraphs of text:
The first two paragraphs described the protagonists and the setting. “There were two kittens named …, they were great friends, and they lived in …”
The third paragraph was the “story”, except it was the very opposite of the “show, don’t tell” rule, often just a restatement of the prompt. “One day, the kittens went to the forest, had many adventures, and found a treasure.”
The remaining two paragraphs spelled out the moral of the story, which was invariably “and so the kittens learned that friendship is more precious than all treasures in the world”, described in the most annoying way.
I tried to nudge the AI to improve the story, with prompts like “expand the part about the adventures, be more specific”, which the AI did in the laziest way possible: “The kittens had to solve many puzzles and fight the dragon”, without saying anything specific about the puzzles or the fight.
I used to work for an SF&F magazine, so I have read my share of bad stories, but the AI stories were the worst of all. It was as if the model had been explicitly trained to make the story as bad as possible.
(I should probably try this again.)
If you can context-switch to a game or puzzle while your AI agent is processing, then you should try instead context-switching to another AI agent instance where you are working on a different branch or codebase.
This is my nightmare. I already find it difficult to work on multiple projects in parallel during the same sprint, but if the future is working on multiple projects literally at the same time, editing in one window while the other one compiles, expected to complete a dozen Jira tasks each hour in parallel… I can only hope that suicide remains legal.