Good point about AI possibly being conscious to different degrees depending on its prompts and “current thought processes”. This surely applies to humans too. When engaged in physically complex tasks or dangerous extreme sports, humans often report feeling almost completely unconscious: “flow state”, at one with the elements, etc.
Now compare that to a human sitting and staring at a blank wall. A totally different state of mind is achieved: perhaps dwelling on anxieties, existential dread, life problems, or current events, and generally feeling super-conscious, even uncomfortably so.
Mapping this to AI and different AI prompts isn’t that much of a stretch…
Coming from a very technical field, but without an AI or AI-safety background, I’ll say: so much of this AI-safety work and research seems like self-serving nonsense. It just so happens that all the leading AI companies, and many employees with huge equity stakes, agree that open-source AI == doom and death on a massive scale?
The internet also helps bio-terrorists communicate and learn how to commit bad acts. Imagine if, 50 years ago, the largest internet companies of the time had pushed to make internet protocols closed-source and walled gardens because of terrorism? (Well, they did push for that, for different reasons, and it reads as anticompetitive nonsense in the present day.)

Encryption and encrypted messaging apps also help bad actors massively: you can communicate over long distances with no risk of spying or comms interception. And governments, the US govt in particular, tried really hard to ban encryption algorithms as “export of arms and munitions”. Luckily this failed; the war on encryption mostly continues, but we plebs do have access to Signal and PGP.
Now it just so happens that AI needs to be closed source, walled off, and controlled by a small cartel for our safety. Have we not heard this before, on like every single technological breakthrough? I haven’t fallen for it… yet at least.
Anthropic CEO:
>”AI will lead to the unemployment of 20% of workers and civil unrest/war level poverty for a major portion of our economy”
>”Oh and also, have you seen our new funding round? It’s the biggest yet! Let’s speed this up!”
OpenAI:
>”we can’t release open models of our most powerful models, as it will lead to bioterrorism” (even though the latest uh bio-COVID-event was created by government labs which do/will have access to uncensored AI)
>They don’t even release their GPT-3 model from years past, which barely produces coherent sentences (I wonder why; it surely ain’t terrorism)
I think this is a great point here:
> None of us have ever managed an infinite army of untrained interns before
It’s probable that AIs will force us to totally reformat workflows to stay competitive. Even as the tech progresses, it’s likely there will remain things that humans are good at and where AIs lag. If intelligence can be represented by some sort of n-dimensional object, AIs are already super-human on some subset of those n dimensions, but beating humans on all n seems unlikely in the near-to-mid term.
In this case, we need to segment work: build a good pipeline for tasking humans with the work they excel at and automating the rest with AI (a toy sketch of this below). Young zoomers and kids will likely be intuitively good at this, since they are growing up with this tech.
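To make the segmentation idea concrete, here’s a minimal Python sketch; the skill names and scores are entirely hypothetical, just to illustrate the routing logic:

```python
# Toy sketch: route each task to whoever is stronger at the skill it needs.
# All skills and scores below are made-up illustrations, not real measurements.

AI_SKILL = {"summarize": 0.9, "boilerplate_code": 0.85, "physical_dexterity": 0.05}
HUMAN_SKILL = {"summarize": 0.6, "boilerplate_code": 0.5, "physical_dexterity": 0.95}

def route(skill: str) -> str:
    """Assign work to 'AI' or 'human' based on who scores higher on the needed skill."""
    return "AI" if AI_SKILL.get(skill, 0.0) > HUMAN_SKILL.get(skill, 0.0) else "human"

tasks = [
    ("draft release notes", "summarize"),
    ("write CRUD endpoints", "boilerplate_code"),
    ("repair the lab equipment", "physical_dexterity"),
]

for task, skill in tasks:
    print(f"{task!r} -> {route(skill)}")
```

The hard part in practice is the scoring, not the routing, but the shape of the pipeline is roughly this.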
This is also great in a p(doom) scenario, because even if there are a few pesky things that humans can still do, there’s a good reason to keep us around to do them!
Sure, fair point! But generally, people gossiping online about missed benchmark questions (and likely spoiling the answers in the process) means a question is now ~ruined for all future training runs. How much of the modest benchmark improvement over time can be attributed to this?
The fact that frontier AIs can basically see and regurgitate everything ever written on the entire internet is hard to fathom!
I could be really petty here and spoil these answers for all future training runs (and make all future models look modestly better), but I just joined this site so I’ll resist lmao …
But isn’t this exactly the OP’s point? These models are exceedingly good at self-contained, gimmicky questions that can be digested and answered in a few hundred tokens. No one is denying that!
Secondly, there’s a high chance that these benchmark questions are simply in these models’ datasets already. They have super-human memory of their training data; there’s no denying that. Are we sure these questions aren’t in their datasets? I don’t think we can be. First off, you just posted them online. But in a more conspiratorial light, can we really be sure these companies aren’t training on user data/prompts? DeepSeek is at least honest that they do, but I think it’s likely the other major labs are as well. It would give you gigantic advantages in beating these benchmarks, and being at the top of the benchmarks means vastly more investment, which gives you a larger probability of dominating the future light-cone (as they say…).

The incentives clearly point this way, at the very minimum!
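To make the contamination worry concrete, here’s a minimal sketch of the sort of overlap check you’d want to run before trusting a benchmark number; the corpus and question strings are placeholders I made up:

```python
# Minimal contamination check: does a benchmark question's wording appear
# verbatim in the training corpus? Corpus and question are placeholder strings.

def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Return the set of word-level n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def contaminated(question: str, corpus: str, n: int = 8, threshold: float = 0.5) -> bool:
    """Flag a question if a large fraction of its n-grams appear verbatim in the corpus."""
    q = ngrams(question, n)
    if not q:
        return False
    overlap = len(q & ngrams(corpus, n)) / len(q)
    return overlap >= threshold

# Hypothetical example: a leaked benchmark question sitting in scraped forum text.
corpus = "someone posted: what is the smallest prime greater than one hundred? answer: 101"
question = "What is the smallest prime greater than one hundred?"
print(contaminated(question, corpus, n=5))  # True: the question leaked verbatim
```

Real decontamination pipelines are fancier, but the point stands: once the exact wording is on the public internet, filtering it back out is the lab’s problem, and we have no way to verify they did.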
Really, a curated post? This is the modern rationalism movement in a nutshell… explaining obvious, common-sense topics with 5,000 words, a few spreadsheets, and some infographics.
Yawn. There’s a reason people have stopped taking us seriously…