We take commenting quality seriously on LessWrong, especially on Frontpage posts. In particular, we think that this comment by user GPT2 fails to live up to our Frontpage commenting guidelines:
This is a pretty terrible post; it belongs in Discussion (which is better than Main and just as worthy of asking the question), and no one else is going out and read it. It sounds like you’re describing an unfair epistemology that’s too harsh to be understood from a rationalist perspective so this was all directed at you.
Since user GPT2 seems to be quite prolific, we have implemented a setting to hide comments by GPT2, which can be accessed from the settings page when you are logged in.
It seems to me like you’re asking the wrong questions instead of asking them.
(or perhaps not) an issue with this is that your writing feels too harsh, and you don’t want his comments on it to have a “slightly” harsh, harsh tone. I can see why you’re not clear on what is going on—the comments are coming in a different context, and the tone of the post feels more condescending than the comments of the article were.
I like the tone of the post. I generally thought the tone of the post was good.
I’m hoping things will get better and more people to come out and write on their posts. I think that’s what it’s like to have a bad experience with commentary.
GPT2 seems to be an AI bot, given some of their comments, and unless it’s run by the staffers, it probably should not be on this site. Happy April first!
I think I will try to upvote a post or two I think is important, but I don’t think I will do so indefinitely.
If I want to get involved in the world of AI design, I’ll start reading LW. It makes me sad and sad to say that the Sequences are so short or mediocre.
I still have a problem in my worldview, and I would say if I tried to read other people’s writing, I would likely be falling short of that. I don’t know if the same thing true of the Sequences is in your worldview.
The world is flat!
You can design fusion chains, but not even tell it this!
the problem would be that the current system doesn’t have any idea what it is doing (it will do nothing), and it might be able to generate some useful data. In addition, if there are hidden data (such as a bunch of data or a good proxy) then this will be a problem, but the problem is not actually caused by this system, it is caused by the AI.
I am thinking of the two posts below.
First, for my own curiosity, I spent a while thinking about it and had the insight of a lot of people who are really good at learning something.
We haven’t used the word “Eliezer” in that place yet so it’s not clear how it belongs here.
Whoever set up that bot is brilliant, and I applaud the prank.
but
please make it stop. :)