[Question] Does translating a post with an LLM affect its rating?

TL;DR: Recently I wrote a post that got much less karma than I expected. My best guess is that the main reason is that I translated it from Russian to English via ChatGPT, and this easily recognizable LLM style convinced readers from the first lines that "there aren't any original thoughts; it is standard, machine-generated fluff." Is this guess correct, or does the post itself have major issues?

More detailed thoughts:

  • I expected the ideas in this post to be quite non-trivial (the math is simple, but the practical results are fairly interesting; after all, even Bayes' theorem and Aumann's agreement theorem are very simple mathematically, yet they yield many useful insights).

    • When I read these ideas in the book mentioned in the post, I found them very interesting.

    • When I published the original Russian post in my Telegram channel, it became one of the most liked posts there.

    • It is not easy to see from the text, but I added many personal insights of my own to the book's ideas (mostly about how exactly the results follow from the math, and why they still work in real life).

  • I expected a rating of about 10–20, partly based on my previous post (and if I were an author who had been writing for a long time and whose posts had proven useful to the community, perhaps I could expect 50+). A rating of 5 (2 of which is the default vote based on the author's karma) leaves me very confused.

  • My main hypothesis is as follows: a reader opened the post, read one or two paragraphs, noticed the distinct LLM-generated style, concluded that it was just another empty machine-generated text, and closed the post.

  • Part of the effect may be due to my not adding any tags to the post, but I don't think that was the main reason.

  • Perhaps the post itself is not well written or lacks new ideas; but my estimate of that effect is already factored into my expected 10–20 rating.

And now, more detailed questions:

  • Are my assumptions correct? More specifically, I have the following questions (depending on how much effort you are willing to put in):

    • If you see text that is obviously LLM-written/translated, what is the probability that you would stop reading after one or two paragraphs (unless those paragraphs contain a real revelation)?

    • How strong is the feeling, from the first one or two paragraphs of this particular post, that "there is nothing interesting here; I will barely update; it's just another LLM-written post"?

    • If you read the whole post: how much of its information is interesting? Are there any significant flaws in its content? How unpleasant is it to read such LLM-styled text?

  • Assuming the main problem really is the LLM translation, what should I do next time? Should I translate manually (as I did for this particular question post; unfortunately, that took significantly more effort, and I'm not sure about the quality)? Should I add a disclaimer at the beginning of the post, such as "This text is entirely thought through and written by me, but translated by an LLM with minimal editing"? Or should I do something else?