I’m curious how much space is left after learning the MSP in the network. Does representing the MSP take up the full bandwidth of the model (even if it is represented inefficiently)? Could you maintain performance of the model by subtracting out the contributions of anything else that isn’t part of the MSP?
ChosunOne
I observe this behavior a lot when using GPT-4 to assist with code. The moment it starts producing code that has a bug, the likelihood of future snippets having bugs grows very quickly.
I’ve found that using Bing/ChatGPT has been enormously helpful in my own workflows. There’s no need to carefully read documentation and tutorials just to get a starter template up and running. Sure, it breaks here and there, but it seems far more efficient to look things up when they go wrong than to start from scratch. Then, while my program is running, I can go back and try to understand what all the options do.
It’s also been very helpful for finding research on a given topic and answering basic questions about some of the main ideas.
I’m not sure how that makes the problem much easier. If you get the maligned-superintelligence mask, it only needs to get out of the larger model or send instructions to the wrong people once for a game-over scenario. You don’t necessarily get to change it after the fact, and changing it once doesn’t guarantee it doesn’t pop up again.
This could be true, but then you still have the issue of there being “superintelligent malign AI” as one of the masks if your pile of masks is big enough.
At a crude level, the Earth represents a source of resources with which to colonize the universe shard and maximize its compute power (and thus find the optimal path to its other goals). Simply utilizing all the available mass on Earth to do that as quickly as possible hedges against the possibility that other entities in the universe shard might impede progress toward its goals.
The question I’ve been contemplating is: is it actually worth spending resources disassembling all matter on Earth, given the cost of devoting resources to stopping human interference, when weighed against the time and solar energy lost in that effort? I don’t know, but it’s a reasonable way of getting “kill everyone” from an unaligned goal if it makes sense to pursue that strategy.
Thanks for the measured response.
If I understand the following correctly:
Putin made it very clear on the day of the attack that he was threatening nukes against anyone who “interfered” in Ukraine, with his infamous “the consequences will be such as you have never seen in your entire history” speech. NATO had been helping Ukraine by training its forces and supplying materiel for years before the invasion, and vowed to keep doing so. This can be considered “calling his bluff” to an extent, or as a piecemeal maneuver in its own right. Yet NATO withdrew its personnel from the country in the days and weeks leading up to the attack. Some have called Biden weak for doing that, essentially “clearing the way” for Putin by removing the tripwire force, and maybe he is. What is clear is that he didn’t want that bluff to be called.
You interpret that as being specifically a warning against overt deployment of troops to Ukraine? My reading of it was broader, such that NATO had already fallen on the side of violating my understanding of “interfering”. I can see that being a strong reason at the beginning of the war, i.e. “Don’t take my attempt at a quick victory away from me or else I’ll nuke you”, but I don’t know how feasible that remains over time. Putin can’t think that if the war goes on for months without victory, everyone will just sit on the sidelines forever. I suppose clarity from the Russians about their commitment to the war in general would help, especially regarding:
Putin’s unambiguous red line is Russia’s geographical border, and he is trying very successfully and believably to assert a red line over any direct military intervention inside Ukraine, and less successfully over less direct help.
While I agree that Russia’s border is not something NATO tanks should go rolling across, I haven’t seen as strong a message in recent days threatening nuclear retaliation if, say, a THAAD battery near the Polish border with Ukraine engaged a Russian fighter jet. NATO could plausibly claim the fighter had penetrated Polish airspace (even if that wasn’t actually the case).
In fact, the US and USSR engaged in direct aerial combat in the Korean War, in the infamous “MiG Alley”, without escalating into a full-fledged war.
That does not mean he would back down from other more direct help, with the infamous Polish fighter jets toeing the line too close for comfort, so the US backed down on that one.
Agreed, but if Russia starts bombing the supply convoys from NATO, that would almost certainly invite more direct NATO intervention. “Russia is bombing humanitarian aid convoys” etc.
All this to say I think the situation is a lot more nuanced than “If NATO fires a single bullet at a Russian it’s the end of civilization”.
Obviously Putin would be in a much stronger position if he had been able to conquer Ukraine within a few days.
Strangely enough, I think this was the intention. The prospect of this war escalating could, if nothing else, be used to help force the Russians to reevaluate their goals and hasten the end of the war.
Yes I did, and it doesn’t follow that nuclear retaliation is immediate.
Beaufre notes that for piecemeal maneuvers to be effective, they have to be presented as fait accompli – accomplished so quickly that anything but nuclear retaliation would arrive too late to do any good and of course nuclear retaliation would be pointless
Failure to perform the fait accompli means that options other than nuclear retaliation are possible.
When Putin called that obvious bluff, it would have damaged the credibility, and thus the deterrence value, of that same statement as applied to NATO members or Taiwan, weakening US deterrence and potentially encouraging another state (like China) to call an American bluff elsewhere (essentially inviting a piecemeal maneuver).
Take this statement and reverse the positions. If NATO calls Russia’s bluff that any and all military assistance to Ukraine would be met with nuclear retaliation, as they have already done, then Russia by this logic is inviting a piecemeal maneuver on the part of NATO.
Russia is fighting an aggressive war. NATO can clearly signal, by way of action, that it has no intention of threatening the existence of the Russian state.
Ukraine is already in an effectively total war (from their perspective; Russia is not totally mobilized) with Russia. Russian forces are already targeting Ukrainian civilian centers with the apparent aim of inflicting civilian casualties and making the refugee situation worse.
Involvement here doesn’t escalate the situation inside Ukraine beyond its borders. I’d rather see counterpoints to my arguments than blanket assertions that I didn’t read the article or that it “addresses my points”. Please point out exactly where I’m missing something.
Given that Russia’s attempt at a fait accompli in Ukraine has failed, and that the situation is already a total war, I fail to see the logic of Russia’s nuclear deterrence against NATO involvement. In a sense, NATO has already crossed the red lines that Russia said would be considered acts of war, such as economic sanctions and direct military supply. From the Russian perspective, would NATO intervention really invite a total nuclear response the way a Russian attack on Poland would?
NATO intervention and the subsequent obliteration of the Russian army seem extremely in the interest of NATO. This intervention could be covert, limited in scope, and done piecemeal, but with the effect of resolutely destroying the Russian military over time. A slow, steady trickle of first supplies, then limited direct AA support, with a gradual buildup of force in Ukraine, seems to me like it could avoid escalation all the way to nuclear conflict. At some point, the invasion becomes pointless and the Russians have no option but to withdraw.
In Zelenskyy’s latest appeal to Congress, he offered an alternative to a no-fly zone (NFZ): massive support in the form of AA equipment and additional fighters. By creating pressure for an NFZ, he’s bought himself significant boosts in AA equipment.
This is more or less what Kasparov believed back in 2015:
I think one of the things to consider with this hypothesis is what signal indicates that an area is “overpopulated”, how members of the species should respond to that signal, and how that signal can be distinguished from other causes. For instance, an organism whose offspring are unable to reproduce because resources were limited will likely be outcompeted by an organism that produces fertile offspring regardless of resource availability.
If you expose a variable that determines how likely your offspring are to reproduce, it could also become an attack vector for competing species to trigger. Imagine a species that sends your species the “you are overpopulated” signal but ignores that signal itself. I think you would quickly find your species going extinct in such a scenario.
Personally, I think a more compelling explanation for members of a species who sometimes can’t or won’t have biological offspring is that it frees up their time and energy for things other than child rearing in the larger community. If, instead of a variable that adjusts how likely your offspring are to reproduce based on currently available resources, you have a fixed percentage, then the communities that descend from you might outperform communities that focus more on child rearing at the expense of other activities (research, sentries, hunting, protecting relatives, etc.).
Consider that if the fixed percentage hypothesis is correct, then a natural consequence of population growth is a growing number of LGBTQ+ members of the population.
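As a toy illustration of that consequence (the 5% figure below is purely hypothetical, not a claim about real demographics): under the fixed-percentage hypothesis, the absolute count of non-reproducing members scales linearly with total population.

```python
def non_reproducing_count(population: int, fixed_fraction: float = 0.05) -> float:
    """Expected number of members who won't naturally reproduce,
    assuming a fixed fraction independent of available resources."""
    return population * fixed_fraction

# The count grows in lockstep with total population size:
for pop in (1_000, 1_000_000, 1_000_000_000):
    print(pop, int(non_reproducing_count(pop)))
```

Under the resource-dependent hypothesis, by contrast, the fraction itself would vary with conditions, so the count would not track population size this cleanly.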
Are we also presuming that all desired things can be acquired instantaneously? Even when all agents are functionally identical, if it costs 1 unit of time per x units of a resource, wouldn’t trade still be useful for acquiring more than x units of a resource in 1 unit of time? Time seems to me the ultimate currency that still needs to be “traded” in this scenario.
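A minimal sketch of that point, with an invented gathering rate: if acquisition takes real time, an agent can still end a time unit holding more of a resource than it could gather alone, by buying a partner's parallel output.

```python
RATE = 5  # units of a resource one agent can gather per unit of time (hypothetical)

# Alone: at most RATE units of ore after 1 unit of time.
alone = RATE * 1

# With trade: a partner gathers RATE units of ore during the same time unit
# and sells it (say, in exchange for goods gathered earlier). The buyer ends
# the time unit holding more ore than any single agent could have gathered.
bought = RATE * 1
with_trade = alone + bought

print(alone, with_trade)
```

Even among functionally identical agents, trade here is effectively buying someone else's time.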
My point here was that even if the deep learning paradigm is nowhere near as efficient as the brain, it has a reasonable chance of getting to AGI anyway, since the brain does not use all that much energy. The biggest GPT-3 models can run on a fraction of what a datacenter can supply; hence the original question: how do we know AGI isn’t just a question of scale in the current deep learning paradigm?
Here is a link to my forecast
And here are the rough justifications for this distribution:
I don’t have much else to add beyond what others have posted, though my forecast is in part influenced by an AIRCS event I attended in the past. I do remember being laughed at there for suggesting GPT-2 represented a very big advance toward AGI.
I’ve also never really understood the conviction that current AI models are incapable of reaching AGI. Sure, we don’t have AGI with current models, but how do we know it isn’t a question of scale? Our brains are quite efficient; their total energy consumption is comparable to that of a light bulb. I find it very hard to believe that a server farm in an Amazon, Microsoft, or Google datacenter would be incapable of running the final AGI algorithm. And for all the talk of the brain’s complexity, each neuron is agonizingly slow (firing at most around 200–300 Hz).
That’s also to say nothing of the fact that the vast majority of brain matter is devoted to sensory processing. Advances in autonomous vehicles are already proving that it isn’t an insurmountable challenge.
Current AI models are performing very well at pattern recognition. Isn’t that most of what our brains do anyway?
Self-attending recurrent transformer networks, with some improvements to memory (attention context) access and recall, look very similar to our own brains to me. What am I missing?
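The scale argument above can be put as a rough back-of-envelope calculation. These are public ballpark figures (86 billion neurons, ~20 W brain power draw; the GPU numbers are my assumed order of magnitude), and treating spike events and FLOPs as comparable is only a loose analogy, not a real equivalence:

```python
NEURONS = 86e9      # ~86 billion neurons in a human brain
MAX_RATE_HZ = 300   # upper-end firing rate mentioned above
BRAIN_WATTS = 20    # commonly cited estimate of brain power draw

# Loose upper bound on spike events per second the brain can produce.
brain_events_per_s = NEURONS * MAX_RATE_HZ  # ~2.6e13

# One modern datacenter accelerator, order of magnitude (assumed figures).
GPU_FLOPS = 3e14
GPU_WATTS = 400

print(f"brain: ~{brain_events_per_s:.1e} spike events/s at {BRAIN_WATTS} W")
print(f"GPU:   ~{GPU_FLOPS:.0e} FLOP/s at {GPU_WATTS} W")
```

On these crude numbers, a single accelerator's raw operation rate is already within an order of magnitude or more of the brain's spike-event upper bound, which is the intuition behind "why couldn't a datacenter run it?"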
Thanks for the recommendation!
I’ve been feeding my parents a steady stream of facts and calmly disputing hypotheses that they couldn’t support with evidence (“there are lots of unreported cases”, “most cases are asymptomatic”, etc.). It’s taken time but my father helped influence a decision to shut down schools for the whole Chicago area, citing statistics I’ve been supplying from the WHO.
I think the best thing you can do if they don’t take it seriously is to just whittle down their resistance with facts. I tend to only pick a few to talk about in depth at a time. A fact that particularly influenced my mother was that preventing one infection today can prevent thousands over the course of just a few weeks.
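The arithmetic behind that last claim is just compounding generations of transmission. A sketch with illustrative parameters (R0 and the serial interval vary widely between estimates; these are round numbers chosen for the example, not measurements):

```python
R0 = 3                    # new infections caused by each case (illustrative)
SERIAL_INTERVAL_DAYS = 5  # time between successive generations (illustrative)
WEEKS = 6

# Number of transmission generations that fit in the time window.
generations = (WEEKS * 7) // SERIAL_INTERVAL_DAYS

# Total downstream infections traceable to a single case over that window.
downstream = sum(R0 ** g for g in range(1, generations + 1))
print(downstream)  # thousands of infections averted by preventing one case today
```

The exact total is very sensitive to R0, but any value meaningfully above 1 compounds into the thousands within a handful of weeks.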
It seems to me that trying to create a tulpa is like trying to take a shortcut with mental discipline. It seems strictly better to focus my effort on a single unified body of knowledge/model of the world than to try to maintain two highly correlated ones at the risk of losing my sanity. I wouldn’t trust a strong imitation of another mind to somehow be more capable than my own, and having to simulate communication with another mind seems more wasteful than simply integrating what you know into your own.
Thinking about it, it reminds me of when I used to be Christian and would “hear” God’s thoughts. It always felt like I was just projecting what I wanted or was afraid to hear about a situation, and it never really was helpful (this thing was supposedly an omniscient, omnipotent being). That other being is the closest thing to a tulpa I’ve experienced, and it was always silent on the things that really mattered. Since killing the damned thing, I’ve been so much happier and don’t regret it at all.
That isn’t to say it has to be like that; after all, in my experience I really did believe the thing was external to my mind. But I feel you would be better off spending your mental energy on understanding what you don’t yet understand, or on learning how to approach difficult topics, than on creating a shadow of a mind and hoping it outperforms you on some task.
An interesting consequence of your description is that resurrection is possible if you can manage to reconstruct the last brain state of someone who has died. If you go one step further, then I think it is fairly likely that experience is eternal, since you don’t experience any of the intervening time spent dead (akin to your film-reel analogy of adding extra frames in between), and since there is no limit to how much intervening time can pass.