My point here was that even if the deep learning paradigm is nowhere near as efficient as the brain, it has a reasonable chance of reaching AGI anyway, since the brain does not use all that much energy. The biggest models, like those in the GPT-3 family, can run on a fraction of what a datacenter can supply; hence the original question: how do we know AGI isn't just a question of scale within the current deep learning paradigm?
Here is a link to my forecast
And here are the rough justifications for this distribution:
I don’t have much else to add beyond what others have posted, though my forecast is in part influenced by an AIRCS event I attended in the past. I do remember being laughed at there for suggesting that GPT-2 represented a very big advance toward AGI.
I’ve also never really understood the insistence that current AI models are incapable of AGI. Sure, we don’t have AGI with current models, but how do we know it isn’t a question of scale? Our brains are quite efficient: their total energy consumption is comparable to that of a light bulb. I find it very hard to believe that a server farm in an Amazon, Microsoft, or Google datacenter would be incapable of running the final AGI algorithm. And for all the talk of the brain’s complexity, each neuron is agonizingly slow, firing at a few hundred hertz at most (roughly 200–300 Hz).
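The scale intuition above can be made concrete with a back-of-envelope comparison. The figures below are rounded, illustrative estimates (not measurements from the original comment):

```python
# Rough power and clock-speed comparison between the brain and server
# hardware. All numbers are order-of-magnitude estimates for illustration.
brain_watts = 20            # human brain: roughly a dim light bulb
rack_watts = 10_000         # one loaded datacenter rack (~10 kW)

neuron_hz = 300             # upper end of typical neuron firing rates
cpu_hz = 3_000_000_000      # a commodity 3 GHz processor

print(rack_watts / brain_watts)   # a single rack has ~500x the brain's power budget
print(cpu_hz / neuron_hz)         # silicon clocks ~10 million times faster per unit
```

On these rough numbers, a datacenter's power budget dwarfs the brain's, which is the sense in which "the final AGI algorithm" plausibly fits in available hardware even at much lower efficiency.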
That’s also to say nothing of the fact that a large share of brain matter is devoted to sensory processing. Advances in autonomous vehicles are already showing that sensory processing isn’t an insurmountable challenge.
Current AI models perform very well at pattern recognition. Isn’t that most of what our brains do anyway?
Recurrent, self-attention-based transformer networks, with some improvements to memory (attention-context) access and recall, look very similar to our own brain to me. What am I missing?
Thanks for the recommendation!
I’ve been feeding my parents a steady stream of facts and calmly disputing hypotheses that they couldn’t support with evidence (“there are lots of unreported cases”, “most cases are asymptomatic”, etc.). It’s taken time but my father helped influence a decision to shut down schools for the whole Chicago area, citing statistics I’ve been supplying from the WHO.
I think the best thing you can do if they don’t take it seriously is to just whittle down their resistance with facts. I tend to only pick a few to talk about in depth at a time. A fact that particularly influenced my mother was that preventing one infection today can prevent thousands over the course of just a few weeks.
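The arithmetic behind that "one infection prevents thousands" claim can be sketched with a simple exponential-growth model. The reproduction number and serial interval below are illustrative assumptions, not figures from the original comment:

```python
# Back-of-envelope: one infection prevented today averts the whole
# downstream transmission tree over the following weeks.
# Assumed (illustrative) parameters:
R0 = 2.5              # new infections caused per case, absent intervention
serial_interval = 5   # days between successive generations of infection
weeks = 6

generations = weeks * 7 // serial_interval   # 8 generations in six weeks
downstream = R0 ** generations               # cases in the final generation alone

print(generations, round(downstream))        # 8 generations, ~1526 cases
```

Even counting only the final generation, a single prevented case averts on the order of a thousand-plus infections within six weeks under these assumptions, which is the intuition behind the claim.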
It seems to me that trying to create a tulpa is like trying to take a shortcut with mental discipline. It seems strictly better to focus my effort on a single unified body of knowledge and model of the world than to try to maintain two highly correlated ones at the risk of losing my sanity. I wouldn’t trust that a strong imitation of another mind would somehow be more capable than my own, and having to simulate communication with another mind seems more wasteful than simply integrating what you know into your own.
Thinking about it, it reminds me of when I used to be Christian and would “hear” God’s thoughts. It always felt like I was just projecting what I wanted, or was afraid, to hear about a situation, and it was never really helpful (this thing was supposed to be an omniscient, omnipotent being). This other being is the closest thing to a tulpa I’ve experienced, and it was always silent on things that really mattered. Since killing the damned thing I’ve been so much happier, and I don’t regret it at all.
That isn’t to say it has to be like that; after all, in my experience I really did believe the thing was external to my mind. But I feel you would be better off spending your mental energy on understanding what you don’t yet understand, or on learning how to approach difficult topics, than on creating a shadow of a mind and hoping it outperforms you on some task.