Which category does this story fit into?
losing all the friends it has left with the possible exception of Iran
To be pedantic, they also very likely wouldn’t lose Syria or North Korea.
In any moment, you have literally millions of options.
Has anyone actually made an attempt to calculate the possible degrees of freedom available to a human being at any instant? There are millions of websites that could be brought up in those tabs alone.
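A back-of-envelope lower bound is easy to sketch; the tab count and site count below are assumptions chosen purely for illustration:

```python
# Crude lower bound on "options available at this instant":
# treat each open browser tab as an independent choice among reachable sites,
# so tab choices alone allow N_SITES ** K_TABS distinct configurations.
K_TABS = 10           # assumed number of open tabs
N_SITES = 1_000_000   # assumed (conservative) count of reachable websites

options_from_tabs_alone = N_SITES ** K_TABS
print(f"lower bound from tab choices alone: {options_from_tabs_alone:.2e}")
# prints ~1.00e+60, so "millions of options" is a huge understatement
```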
If you’re into information, then learning to code can help you acquire more information more easily and process it in beautiful ways that would otherwise be laborious or impractical. That’s probably the simplest explanation with the broadest appeal. At the risk of downvotes (maybe there are a lot of professional coders here), I’m not sure why anyone would want a job coding, because then you trade away the fun aspect to someone else’s purposes in exchange for some tokens and quite a lot of your time.
Taking for granted that AGI will kill everybody, and taking for granted that this is bad, it’s confusing why we would want to mount measures that are costly, quite weak, and poor even as symbolism, merely to (possibly) slow down research.
Israel’s efforts against Iran are a state effort and are not accountable to the law. What is proposed is a ragtag amateur effort against a state orders of magnitude more powerful than Iran. And make no mistake, AGI research is a national interest. It’s hard to overstate the width of the chasm.
Even gaining a few hours is pretty questionable, and a few hours for a billion people may or may not be a big win. Is a few seconds for a quadrillion people a big win? What happens during that time and after? It’s not clear that extending the existence of the human race by what is, even in the scope of a single life, mostly a trivial amount of time is a big deal even if it’s guaranteed.
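To put rough numbers on that comparison (the figures below are assumptions chosen only for scale):

```python
# Aggregate person-time for the two scenarios mentioned above.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

hours_gained, people_now = 3, 1e9        # assumed "a few hours" for a billion people
seconds_gained, people_future = 3, 1e15  # assumed "a few seconds" for a quadrillion people

person_years_now = hours_gained * 3600 * people_now / SECONDS_PER_YEAR
person_years_future = seconds_gained * people_future / SECONDS_PER_YEAR

print(f"a few hours x 1e9 people:    ~{person_years_now:,.0f} person-years")
print(f"a few seconds x 1e15 people: ~{person_years_future:,.0f} person-years")
# roughly 3.4e5 vs 9.5e7 person-years: large in aggregate, trivial per life
```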
There is also a pretty good chance that efforts along the lines described may backfire, and spur a doubling-down on AGI research.
Overall this smells like a Pascal’s scam. There is a very, very low chance of success, for a positive payoff of debatable size.
How are we to know that we aren’t making similar errors today?
Based only on priors, the probability we aren’t is very low indeed. A better question is, given an identified issue, how can change happen? One main problem is that contra-orthodox information on moral issues tends not to travel easily.
This isn’t really much different from life outside the club. Social forces are often not aligned with majority personal preference and can even be in conflict with it. For example, people want to make friends or hook up, but seeking those goals explicitly tends to be perceived as low-class and/or strange.
I’m not sure that considering how to restrict interaction with super-AI is an effective way to address its potential risks, even if some restrictions might work (and it is not at all clear that such restrictions are possible). Humans tend not to leave capability on the table where there’s competitive advantage to be had, so it’s predictable that, even in a world that starts with AIs in secure boxes, there will be a race toward less security to extract more value.
If the US knew of a way to locate subs, then it would worry that Russia or China would figure it out, too
There are many conceivable ways to track subs, and this is only part of the problem, because subs still need to be destroyed after being located. Russia and China combined don’t have enough nuclear attack subs to credibly do this to the US. The US does have enough nuclear attack subs to credibly destroy Russia’s deterrent fleet, if it can be tracked, with attack subs left over to defend our own ballistic missile subs. A primary mission for nuclear attack subs is to shadow nuclear ballistic missile subs. That Russia is (allegedly) developing weapons like Status-6 and Burevestnik suggests they are not satisfied with the ongoing deterrent capability they already have.
Also, about 2 thirds of the US’s 1357 “strategic” (capable of incinerating the heart of a major city) nuclear warheads are currently on subs, rather than in missile silos or on bombers
The number of weapons deployed, and where they are deployed, simply isn’t verifiable. Keep in mind that it is widely held, and codified in public law, that use of nuclear weapons by the US, including for retaliatory purposes, must follow the kind of centralized authorization that could be extremely difficult to guarantee under a surprise nuclear attack. This would open us up to a surprise decapitation attack, so the probability it’s true in practice is very low.
at what point would you expect the average (non-rationalist) AI researcher to accept that they’ve created an AGI?
Easy answers first: the average AI researcher will accept it when others do.
at what point would you be convinced that human-level AGI has been achieved?
When the preponderance of evidence is heavily weighted in this direction. In one simple class of scenario, this would involve unprecedented progress in areas limited by things like human attention, memory, I/O bandwidth, etc. Some of these would likely not escape public attention. But there are a lot of directions AGI can go.
To the extent that there are believers, you won’t change their minds with reason, because their beliefs are governed, guarded, and moderated by more basic aspects of the brain (the limbic system is a convenient placeholder for this).
So the problem you are focused on is that a minority (or majority) of individual opinions is prevented from being honestly expressed. Flipping a small number of individual opinions, which is your motivation, does not address this problem.
Because the benefits of quantum computing were so massive
Please elaborate. I’m aware of Grover’s algorithm, Shor’s algorithm, and quantum communication, and it’s not clear that any of these poses a significant threat to even current means of military information security / penetration.
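On the symmetric side in particular, here is a minimal sketch of why the margin looks comfortable (the key sizes are just the common ones, and this ignores the engineering cost of actually running Grover at scale):

```python
# Grover's algorithm gives at best a quadratic speedup on brute-force key
# search, so an n-bit symmetric key retains roughly n/2 bits of security.
def grover_effective_bits(key_bits: int) -> int:
    return key_bits // 2

for key_bits in (128, 256):
    eff = grover_effective_bits(key_bits)
    print(f"AES-{key_bits}: ~{eff}-bit effective security "
          f"(~{2 ** eff:.1e} quantum search operations)")

# Shor's algorithm breaks RSA/ECC outright, but that threatens public-key
# exchange (which has post-quantum replacements), not well-keyed symmetric
# encryption of secrets at rest.
```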
I’m interested in whether there have been any attempts at formal rules for transforming a media feed into a world model, preferably with Bayesian inference and cool math, so I can try to discuss these with my friends and maybe even update my own model.
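(For reference, the formal core of what’s being asked for here is just iterated Bayes updates over source reliability. A minimal sketch, with every prior and reliability number assumed purely for illustration:)

```python
# Minimal Bayesian update of one world-model belief from a media feed.
# Assumes reports are independent given the truth of the claim; all
# numbers (prior, source reliabilities) are made up for illustration.

def update(prior: float, p_report_if_true: float, p_report_if_false: float) -> float:
    """Posterior P(claim is true | this source reported the claim)."""
    numerator = p_report_if_true * prior
    evidence = numerator + p_report_if_false * (1 - prior)
    return numerator / evidence

belief = 0.30                # assumed prior that the claim is true
feed = [(0.9, 0.3),          # fairly reliable outlet reports the claim
        (0.6, 0.5)]          # low-signal outlet reports the same claim
for p_true, p_false in feed:
    belief = update(belief, p_true, p_false)
    print(f"belief after report: {belief:.2f}")
```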
So you are interested in using reason to change other people’s minds on a complicated issue that has more to do with the limbic system than with rational hardware. This distribution of influence is one reason why their intelligence isn’t really important here, and it is also why your strategy won’t work.
More generally, you are in a trap. Be skeptical of your own motivations. The least worst course of action available is probably to disengage.
Realistically, a complexity limit may not be imposed on practical work if the AI is considered reliable enough and producing proofs too complex to verify otherwise is useful, and it’s very difficult to see a complexity limit being imposed on theoretical exploration that may end up in practical use.
Still, in your scenario the same end can be reached through a volume problem, where the rate of new AI-generated proofs with important uses exceeds the external capacity of humans to verify them, even if individual AI proofs are in principle verifiable by humans. This could arise from some combination of enhanced productivity and reduced human skill (there is less incentive to become skilled at proofs if AI seems to do it better).
The chain: AI becomes trusted and eventually makes proofs that can’t otherwise be verified, makes one or more bad proofs that aren’t caught, the results get used for something important, and the important thing breaks unexpectedly.
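A toy model of that volume problem (every rate and error probability below is an assumption, chosen only to show the shape of the dynamic):

```python
import random

# Toy model: AI emits useful proofs faster than humans can verify them,
# and proofs used without verification carry a small chance of being bad.
random.seed(0)

PROOFS_PER_YEAR = 1000   # assumed AI output of practically useful proofs
HUMAN_CAPACITY = 200     # assumed proofs humans can fully verify per year
P_BAD = 0.001            # assumed chance a given AI proof is subtly wrong
YEARS = 10

in_use_unverified = 0
uncaught_bad = 0
for year in range(1, YEARS + 1):
    used_on_trust = PROOFS_PER_YEAR - HUMAN_CAPACITY   # used without verification
    in_use_unverified += used_on_trust
    uncaught_bad += sum(random.random() < P_BAD for _ in range(used_on_trust))
    print(f"year {year}: unverified in use={in_use_unverified}, uncaught bad={uncaught_bad}")
```

Even though any single proof is nominally human-verifiable here, the pile of unverified-but-used results and the expected number of uncaught bad proofs both grow without bound.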
I do not feel entirely comfortable talking the whole thing over with my profs.
If you’re going to take a 3-month internship they will all know about anyway, it can’t hurt to talk about it, right? Cryonics isn’t really that taboo, especially if, as it appears, you will take the position that you don’t expect current methods to work (but you would like to see about creating ones that might).
You can’t account for AGI because nobody has any idea at all what a post-AGI world will look like, except maybe that it could be destroyed to make paperclips. So if starting a business is a real calling, go for it. Or not. Don’t expect the business to survive AGI even if it thrives pre-arrival. Don’t underestimate how much your world may change: scenarios like you (or an agent somewhat associated with the entity formerly known as you, or anyone else at all) running a business might not make sense, because the concept of a business is a reflection of how our world is structured. Even humans can unbalance this without the help of AGI. In short, it’s a good bet that AGI will be such a great disruption that the patent system is more likely to be gone than filled with AGI patent trolls.