oooh upvote! however...
This feels like ChatGPT’s writing in a way that makes it harder for me to understand. I’ve been talking to ChatGPT to try to understand these concepts as well, and I generally find it good for suggesting keywords, but its tendency to repeat itself reads like a bad school essay, not an insightful explanation. It brings up concepts because it needs them to think out loud, but then doesn’t expand on them enough to really teach me about them. So I look up YouTube videos and Wikipedia articles on each subtopic, drop fragments of explanations into Metaphor and read those, and still feel like my confusions haven’t properly been resolved. I am very excited about this approach and I agree it’s probably the true answer; it meshes well with, e.g., MIMI, LOVE in a simbox, etc. But actually resolving all the references well enough that I actually understand them will take some doing, and I’m a just-in-time learner who is missing large chunks of intuition. I look forward to working through this post, but I guess my point with this comment is that I’m a bit pessimistic about my ability to compensate for the entropy introduced by sampling randomly from /mlgroups/openai/iffy_school_essay.py.
If anyone with more experience can recommend a series of high-quality, high-density exercises on information theory that would help me flesh out my intuitions for the concepts this post’s language references, I’d love to see it. I recognize that even matching ChatGPT unaided can be a lot of writing work, so I would hardly call this post awful for it, but maybe the signal-to-noise ratio could be improved or something. idk, I’m not totally sure what I’m asking. Maybe I just need to try to write a post myself in order to understand whatever it is I’m stuck on about this research path.
catch y’all tomorrow!