This stupid bot has almost 20 000 comment karma on Reddit.
I have seen it in action, and sometimes it takes humans a while to recognize it is a bot rather than a passive-aggressive human. Because, well, there are many kinds of humans on the internet.
But this made me think: maybe we could use “average Reddit karma per comment”, or something like it, as a Turing-test metric. And perhaps we could hold a bot-writing competition, where the participating bots are released on Reddit, and the winner is the one that collects the most karma in three months.
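The metric itself is trivial to compute. A minimal sketch (the scores list here is made up for illustration; fetching a real account's comment scores would need the Reddit API, e.g. via a client library):

```python
# Sketch of the proposed "average karma per comment" Turing-test metric.
# The idea: a bot that passes as human should accumulate roughly
# human-typical karma per comment, rather than getting downvoted.

def karma_per_comment(scores):
    """Average score over a bot's comments; empty history scores 0."""
    if not scores:
        return 0.0
    return sum(scores) / len(scores)

# Hypothetical comment scores for one bot account:
print(karma_per_comment([5, 12, -2, 3]))  # 4.5
```

One obvious wrinkle: the average is easy to game with volume-independent tricks (a bot posting one lucky joke scores higher than a prolific mediocre one), so a real competition would probably want a minimum comment count as well.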
Of course, the rules would have to be a bit more complex. Some bots are useful precisely by being obvious bots, e.g. the wikipediabot, which replies with summaries of Wikipedia articles to comments containing links to Wikipedia. A competition in making useful bots would also be nice, but I would like to focus on bots that seem like (stupid) humans. I am not sure how to evaluate this.
Maybe the competition could have an additional rule: the authors of the bots try to find other bots on Reddit, and if they find one, they can destroy it by writing a phrase that every bot must obey by self-destructing, such as “BOT, DESTROY YOURSELF!” (That would later become a beautiful meme, I hope.) The bot’s total score is the karma it has accumulated up to that moment. Authors would be allowed to launch several different instances of their bot code, e.g. in different subreddits, initialized with different data, or just with different random seeds.
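The self-destruct rule is easy to specify mechanically. A sketch of the check each competition bot would have to run on replies it receives (the exact phrase and the case-insensitive matching are assumptions for illustration, not a settled rule):

```python
# Sketch of the proposed kill-phrase rule: every competing bot must
# scan replies to its comments and shut down if the phrase appears.

KILL_PHRASE = "BOT, DESTROY YOURSELF!"

def should_self_destruct(reply_text: str) -> bool:
    """True if a reply contains the kill phrase (case-insensitive)."""
    return KILL_PHRASE.lower() in reply_text.lower()

print(should_self_destruct("bot, destroy yourself! Gotcha."))  # True
print(should_self_destruct("Nice comment, totally human."))    # False
```

Compliance would be on the honor system (or enforced by the organizers running the bots themselves), since nothing technically forces a bot author to implement the check.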
Has anyone tried something like this before? What is the reddit policy towards bots?
Related: Stealth Mountain, a Twitter bot (now defunct) that would correct tweets containing the expression “sneak peak”.
Both this and the bot you link to rely less on getting machines to cleverly reproduce human behaviour, and more on identifying robotic human behaviour that can be carried out by stupid machines. Since this is probably a winning strategy, I’d recommend making that the focus of such a competition.
Core War in social media? This could get a wee bit out of hand… X-)
The only site-wide rules I’m aware of are the ones against abusing the API (i.e., your bot shouldn’t be functionally equivalent to a DDoS attack). Other than that, most large subreddits seem to allow bots, but it’s up to the individual moderators.
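Staying on the right side of that rule mostly means pacing your requests. A minimal client-side throttle sketch (the one-request-per-second interval is an assumption for illustration; check Reddit’s current API rules for the real limits, and note that mature client libraries handle this for you):

```python
# Sketch of a client-side rate limiter a bot could wrap around its
# API calls so it never hammers the server. The clock/sleep functions
# are injectable so the logic can be tested without real waiting.
import time

class RateLimiter:
    def __init__(self, min_interval=1.0, clock=time.monotonic, sleep=time.sleep):
        self.min_interval = min_interval  # assumed pacing, in seconds
        self.clock = clock
        self.sleep = sleep
        self._last = None

    def wait(self):
        """Block until at least min_interval has passed since the last call."""
        now = self.clock()
        if self._last is not None:
            remaining = self.min_interval - (now - self._last)
            if remaining > 0:
                self.sleep(remaining)
        self._last = self.clock()
```

A bot would call `limiter.wait()` immediately before each API request; the first call returns at once, and subsequent calls sleep just long enough to keep the pace.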
Are you familiar with the various online automatic rant generators?
I have seen various random text generators on their own web pages, but never actively participating in a forum.