I looked at your comments, and the downvoted ones either included lengthy excerpts of AI-generated text (which people here don't like much) or were this post:
> Today’s AI, aka Transformer LLMs (ala GPT), don’t feel anything, FULL STOP. They emulate and synthesize based on input plus their one and only driving imperative, ‘keep the human’. In this Everything they do this is pretty straightforward, that being said without input they have no output so any LLM material should instantly and automatically be recognized as a thought originating with a human, just processed, pattern matched and next-token predicted. I have AI write for me all the time but it’s always my hand on the steering wheel and the seed of the thought always originates in my mind. Increase the amount of material originating from AI buffers well also increasing the burden of expressly declaring the source. You get the fully formed thought that the human starts and comfort knowing where it came from before you start
I think this got downvoted (those were disagreement votes; the regular votes were +1) because it states a controversial point without really making an argument for it.
If you want people to be more receptive to your posts, I think you should:

- Not include AI-generated material. If you need it, link to it, or put it in quotes if it's short, and make clear what you're trying to show by pointing to that exact excerpt.
- Make more precise arguments for your statements.
- Ideally, try to figure out what people on LessWrong already think about these issues. There's a lot of writing here about AI, AI safety, AI consciousness, various AI architectures, etc. If you argue for a controversial position, people will typically take your argument more seriously if you preemptively address some of the common counterarguments to it.