Existing property rights get respected by the successor species.
What makes you believe this?
Given this argument hinges on China’s higher IQ, why couldn’t the same be said about Japan, which according to most figures has an average IQ at or above China’s, implying the same higher proportion of +4SD individuals in the population? If it’s 1 in 4k, there would be roughly 30k such people in Japan, about three times as many as in the US. Japan also has a more stable democracy, better overall quality of life, and higher per-capita GDP than China. If outsized technological success in any domain were solely about IQ, one would have expected Japan to be the center of world tech and the likely creator of AGI rather than the USA, yet that does not appear to be the case.
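For concreteness, here is a minimal sketch of the tail arithmetic this kind of argument leans on, assuming a mean-100, SD-15 scale; the population means and sizes below are purely illustrative placeholders (the actual figures are contested), and scipy is assumed to be available:

```python
# Illustrative only: how the share of +4SD individuals shifts with the
# assumed population mean. Means/sizes below are placeholders, not claims.
from scipy.stats import norm

THRESHOLD = 160   # "+4SD" on a mean-100, SD-15 scale
SD = 15

populations = {
    # name: (assumed mean IQ, population size) -- hypothetical inputs
    "USA":   (98, 330e6),
    "Japan": (106, 125e6),
}

for name, (mean, size) in populations.items():
    tail = norm.sf((THRESHOLD - mean) / SD)   # fraction above the threshold
    print(f"{name}: ~1 in {1/tail:,.0f} above IQ {THRESHOLD}, ~{tail*size:,.0f} people")
```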
The wording of the question is ambiguous. It asks for your determination of the likelihood it was heads when you were “first awakened”, but by your perception any awakening is you being first awakened. If it is really asking for your determination given the information that the question is being asked on your first awakening regardless of your perception, then it’s 1/2. If you know the question will be asked on your first or second awakening (though the second one will in the moment feel like the first), then it’s 1/3.
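As a sanity check on those two readings, here is a minimal Monte Carlo sketch of the standard Sleeping Beauty setup (one awakening on heads, two on tails); the “per experiment” vs “per awakening” framing is my gloss on the ambiguity described above:

```python
# Simulate the standard setup: heads -> 1 awakening, tails -> 2 awakenings.
import random

trials = 100_000
heads_trials = 0        # experiments where the coin landed heads
heads_awakenings = 0    # awakenings that occur under heads
total_awakenings = 0

for _ in range(trials):
    heads = random.random() < 0.5
    n_awake = 1 if heads else 2
    total_awakenings += n_awake
    if heads:
        heads_trials += 1
        heads_awakenings += n_awake

print("Per experiment:", heads_trials / trials)               # ~1/2
print("Per awakening:", heads_awakenings / total_awakenings)  # ~1/3
```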
This suggests a general rule/trend by which unreported but frequent phenomena can be extrapolated: if phenomenon X is discovered accidentally via method Y almost all the time, then method Y must be performed far more frequently than people suspect.
Generally it makes no sense for every country to collectively abandon the basic maintenance of law and order and the unobstructed passage of cargo in global trade. He talks about this great US pull-back because the US will be energy independent, but America pulling back and the global waters turning into a lawless hellscape would send the world economy into a dark age. Hinging all his predictions on this big head-turning assumption gets him more attention, but the premise is nonsensical.
Why can’t this be an app? If their LAM is better than competitors’, it would be profitable both on their hardware and as a standalone app.
The easiest way to check whether this would work is to determine whether there is a causal relationship between diminished levels of serotonin in the bloodstream and neural biomarkers similar to those seen in people with malnutrition.
I feel the original post, despite ostensibly being a plea for help, could be read as a coded satire on the worship of “pure cognitive heft” that seems to permeate rationalist/LessWrong culture. It points out the misery of g-factor absolutism.
It would help if you clarified why specifically you feel unintelligent. Given your writing style (the ability to distill concerns, compare abstract concepts, and communicate clearly), I’d wager you are intelligent. Could it be imposter syndrome?
It’s simple: No AGI = guaranteed death within 200 years. AGI = possible life extension beyond millions of years and the end of all human pain. Until we can automate all current human economic tasks we will never reach post-scarcity, and until then current social hierarchies and dehumanizing constructs will persist.
I totally agree with that notion; however, I believe the current levers of progress massively incentivize AGI development over WBE. Current regulations are based on FLOPs, which will restrict progress toward WBE long before they restrict anything with AGI-like capabilities. If we had a perfectly aligned international system of oversight that ensured WBE was possible and maximized in apparent value to those with the means both to develop it and to pull the levers, steering away from any risky AGI analogue before it becomes possible, then yes, but that seems very unlikely to me.
Also, I worry. Humans are not aligned. Humans having WBE at our fingertips could mean infinite tortured simulations of digital brains before they bear any more bountiful fruit for humans on Earth. It seems ominous: a fully replicated human consciousness so exact that a bit off here or there could destroy it.
It really is. My conception of the future is so weighted by the very likely reality of an AI-transformed world that I have basically abandoned any plans with a time scale over 5 years. Even my short-term plans will likely be shifted significantly by AI advances over the next few months or years. It really is crazy to think about, but I’ve gone over every single aspect of AI advances and scaling thousands of times in my head and can think of no near-future reality that isn’t as alien to our current reality as ours is to pre-eukaryotic life.
I separate possible tech advances by one criterion: “Is this easier or harder than AGI?” If it’s easier than AGI, there’s a chance it will be invented before AGI; if not, AGI will invent it, so it’s pointless to worry over any thought about it that our within-six-standard-deviations-of-100-IQ brains can conceive of now. WBE seems like something we should just leave to ASI once we achieve it, rather than worrying over every minutia of its feasibility.
I think most humans agree with this statement in an “I emotionally want this” sort of way. The want has been sublimated via religion or other “immortality projects” (see The Denial of Death). The question is: why is it taboo, and is it taboo in the sense you say (a signal of low status)?
I think these elements are most at play in people’s minds, from laypeople to rationalists:
It’s too weird to think about: considering the possibility of a strange AI-powered world where either complete extinction or immortality is possible feels “unreal”. Our instinct that everything that happens in the world stays within an order of magnitude of “normal” directly opposes being able to believe this. As a result, x/s-risk discussions, whether for reasons of personal imagination or optics, are limited to natural extrapolations of things that have occurred in history (e.g. biological attacks, disinformation, weapons systems). It’s too bizarre to even reckon that there is a non-zero chance immortality via any conduit is possible. This also plays into the low-status factor: weird, outlandish opinions on the future not validated by a high-status figure are almost always met with resistance.
The fear of “missing out” leads to people not even wanting to think about it seriously at all: people don’t want to give higher credence to hypotheticals that increase the scale of their losses. If we think death is the end for everyone, it doesn’t seem so bad to imagine. If we think that we may be the ones to die while others won’t, or that recent or past loved ones are truly gone forever in a way no longer universal to humankind, it feels unfair, an insult from the universe.
Taking it seriously would massively change one’s priorities in life and upset the equilibrium of their current value structures: one would do everything they could to minimize the risk of early death. If they believe immortality could be possible in 20 years or less, their need for long-term planning is reduced; since immortality would also imply post-scarcity, their assiduous saving and sacrifices for their children’s future become worthless. That cognitive dissonance does not sit well in the mind and hinders one’s individual agentic efficiency.
That’s very true, but there are two reasons why a company may not be inclined to release an extremely capable model:
1. Safety risk: if someone uses a model and jailbreaks it in some unexpected way, the risk of misuse is much higher with a more capable model. OpenAI had GPT-4 for 9-10 months before releasing it, spending that time on RLHF and even lobotomizing it to make it safer. The summer 2022 internal version of GPT-4 was, according to Microsoft researchers, more generally capable than the released version (as evidenced by the draw-a-unicorn test). This needed delay and the assumed risks will naturally be much greater with a larger model, both because larger models, so far, seem harder to simply RLHF into unjailbreakability, and because, being more capable, any jailbreak carries more risk; thus the general business-level margin of safety will be higher.
2. Sharing/exposing capabilities: Any business wants to maintain a strategic advantage. Releasing a SOTA model will allow a company’s competitors to use it, test its capabilities and train models on its outputs. This reality has become more apparent in the past 12 months.
The major shift over the next 3 years will be that, as a rule, top-level AI labs will not release their best models. I’m certain this has somewhat been the case for OpenAI, Anthropic, and Google for the past year. At some point, full utilization of a SOTA model will be a strategic advantage for companies to exploit for their own tactical purposes. The moment any $X of value can be netted from an output/inference run of a model for less than $(X-Y) in costs, where Y represents the marginal labor/maintenance/averaged-risk cost of each run’s output, no company would ever be advantaged by releasing the model to be used by anyone other than themselves. I imagine this closed-source event horizon will occur sometime in late 2024.
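A toy restatement of that X/Y condition, with placeholder numbers rather than estimates:

```python
# Keeping a model in-house pays off per run once
# inference_cost < value_per_run - overhead_per_run (the X - Y above).

def net_value_per_run(value_per_run: float,
                      inference_cost: float,
                      overhead_per_run: float) -> float:
    """Net value the lab keeps from one inference run."""
    return value_per_run - inference_cost - overhead_per_run

# Hypothetical numbers: each output is worth $1.00, overhead (labor,
# maintenance, averaged risk) is $0.30/run, inference costs $0.50/run.
net = net_value_per_run(value_per_run=1.00, inference_cost=0.50, overhead_per_run=0.30)
print(net > 0, round(net, 2))   # True 0.2: past this point the lab profits by not releasing
```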
The thing about writing stories that are analogies to AI is: how far removed from the specifics of AI and its implementations can you make the story while still preserving the essential elements that matter with respect to the potential consequences? This speaks perhaps to the persistent doubt and dread we may feel even in a future awash in the bounty of a seemingly perfectly aligned ASI. We are waiting for the other shoe to drop. What could any intelligence do to prove its alignment in any hypothetical world, when not bound to its alignment criteria by tangible factors?
This reminds me of the comment about how effective LLMs will be for mass-scale censorship.
IMO the proportion of effort going into AI alignment research scales with total AI investment. Many AI labs do alignment research themselves and open-source/release research on the matter.
OpenAI at least ostensibly has a mission. If OpenAI hadn’t made the moves they did, Google would have their spot, and Google is closer to the “evil self-serving corporation” archetype than OpenAI.