I partly feel that there is sometimes a missing mood on LW about the ability of models to actually do good coding by themselves? I might be wrong, but when I look around the spaces that I consider to be doing proper computer science, it still very much feels like the models are not that good. For example, here’s an interesting video on AI taking a Cornell CS freshman class: https://www.youtube.com/watch?v=56HJQm5nb0U
The qualitative vibe is more that it’s a nice extension of a single human’s agency. When I look around the programming space and the vibes of more serious programmers, I still can’t really say that I feel the AGI. (I think the core problem is something about software that needs to be highly dependable, and how hard that is to develop with AI, but I don’t know; I mostly just wanted to point out this missing mood.)
I don’t understand what exactly is the mood that you think is missing.
I am happy about Claude’s ability to do various things (including things it couldn’t do a few months ago). That’s why I use it to help me with coding and to answer my questions.
I am also aware that its abilities are limited. That’s why I don’t give it grandiose tasks, such as “make me a new Facebook, or a new Wikipedia, or design an entire computer game based on a few sketches”.
Do you think either of these is wrong? Or insufficiently communicated on LW? Or is it something else?