AI has really become the new polarizing issue. One camp thinks it's the future: "just extrapolate the graphs", "look at the coding", "AGI is near", "the risks are real". The other camp thinks it's pure hype: "it's a bubble", "you're just saying that to make money", "plagiarizing slop machine", "the risks are science fiction". It is literally impossible to tell what's going on with AI based on the wisdom of the crowd.
I watched a video that tried to explain what AI can't yet do. It was extremely bland, featuring areas like "common sense" (I thought "common sense questions" were one of the main things LLMs had solved!). It named not a single specific task AI can't yet do, because nobody can tell right now. I lost track of AI capabilities after the o3 rollout.