Awesome! I hadn’t seen Caspar’s idea, and I think it’s a neat point on its own that could also lead in some new directions.
Edit: Also, I’m curious whether I had any role in Alex’s idea about learning the goals of a game-playing agent. I think I was talking about inferring the rules of checkers as a toy value-learning problem about a year and a half ago. It’s just interesting to me to imagine what circuitous route the information could have taken, if it’s not an independent invention.
I don’t think that was where my idea came from. I remember thinking of it during AI Summer Fellows 2017, and fleshing it out a bit later. And IIRC, I thought about learning concepts that an agent has been trained to recognize before I thought of learning rules of a game an agent plays.
Charlie, it’d be great to see an entry from you in this round.
Thanks, that’s very flattering! The thing I’m working on now (looking into prior work on reference, because it seems relevant to what Abram Demski calls model-utility learning) will probably qualify, so I will err on the side of rushing a little (prize working as intended).
Hurry up!
Sometimes you just read a chunk of philosophical literature on reference and it’s not useful to you even as a springboard. *shrugs* So I don’t have an idea’s worth of posts, and it’ll be ready when it’s ready.