Foyle
The gap between the invention of radio and superintelligent AI in our case (and perhaps in most cases of the evolution of intelligent life) appears to be <150 years. That is a very narrow window to hit by chance unless we are being actively observed, and active observation would likely imply they have had time to notice multicellular life on Earth and get observers to us at low fractions of light speed.
If intelligent (and inevitably superintelligent) aliens exist and care about physical reality beyond their own stellar system, then they can and likely will spread out to have a presence in every interesting star system in the galaxy within a million years, and planets with multicellular life are likely highly anomalous and interesting to curious aliens.
It is hard to believe this hasn't already happened given 1-4e11 stars and a 5-10e9 year 'window for life' in the Milky Way, which makes the zoo hypothesis, to my mind, the most likely solution to the Fermi paradox (with weak anecdotal evidence in the form of seemingly increasingly furtive UFOs over the last century). Evolution selects for aliens that choose to propagate and endure, and the technology to do so is almost trivially easy once intelligence and then superintelligence evolves. So if intelligence has evolved in the Milky Way and cares about other species developing, it is clearly not hegemonic (evidenced by our continuing existence) and is likely already here.
If all this is the case, and aliens are here watching us, then it also provides an existence proof that alignment is possible. Conversely, if they are not here, that is perhaps weak evidence that alignment is not possible: that superintelligent AI is either auto-extinguishing or almost universally disinterested in biological life.
“He [Arthur Dent] learned to communicate with birds and discovered their conversation was fantastically boring. It was all to do with windspeed, wingspans, power-to-weight ratios and a fair bit about berries.” (Douglas Adams, So Long, and Thanks for All the Fish)
Telling lies and discerning lies are both extremely important skills; becoming adept at them involves developing better and better cognitive models of other humans' reactions and perspectives, a chess game of sorts. Human society elevates and rewards the most adept liars: CEOs, politicians, actors, and salespeople in general. You could perhaps say that charisma is, in essence, mostly convincing lying. I take the approach with my children of punishing obvious lies and explaining how they failed, because I want them to get better at it, and punishing less or not at all when they have been sufficiently cunning about it.
For children I think the Santa deception is potentially a useful awakening point: a rite of passage where they learn not to trust everything they are told, that deception and lies and uncertainty in the truth are part of the adult world, and a little victory where they get to feel they have conquered an adult conspiracy. The rituals are also a fun interlude for them and the adults in the meantime.
As a wider policy, I generally don't think absolutism is a good style for parenting (in most things). There are shades of grey in almost everything; even if you are a hard-core rationalist in your beliefs, 99.9% of everyone you and your children deal with won't be, and they need to be armed for that. Discussing the grey is an endless source of useful teachable moments.
Human brains are estimated to be equivalent to ~1e16 FLOP/s, suggesting that about 10-100 maxed-out GPUs a decade hence could be sufficient to implement a commodity AGI (the leading Nvidia A100 GPU already touts 1.2 peta-ops Int8 with sparsity), at perhaps 10-100 kW power consumption (less than $5/hour if the data center is in a low-electricity-cost market). There are about 50 1000 mm² GPU dies per 300 mm wafer, and the latest-generation TSMC N3 process costs about $20,000 per wafer, i.e. roughly an AGI's worth of compute per wafer seems likely.
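A back-of-envelope sketch of that arithmetic (all inputs are the rough estimates quoted above, not measured values):

```python
# Back-of-envelope AGI compute cost, using the rough figures above.
brain_flops = 1e16           # estimated human-brain equivalent, FLOP/s
gpu_ops = 1.2e15             # ~1.2 peta-ops Int8 (A100 with sparsity)
gpus_needed = brain_flops / gpu_ops
print(f"GPUs per brain-equivalent: ~{gpus_needed:.0f}")  # ~8, i.e. 10-100 with overheads

wafer_cost = 20_000          # USD, TSMC N3 wafer (approximate)
dies_per_wafer = 50          # ~1000 mm^2 dies per 300 mm wafer
print(f"Silicon cost per GPU die: ~${wafer_cost / dies_per_wafer:.0f}")  # ~$400

# Power cost at 10-100 kW, assuming ~$0.05/kWh in a cheap data-center market
for kw in (10, 100):
    print(f"{kw} kW: ~${kw * 0.05:.2f}/hour")            # $0.50-$5.00/hour
```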
It is likely, then, that (if it exists and is allowed) personal ownership of human-level AGI will be, like car ownership, within the financial means of a large proportion of humanity within 10-20 years, and that AGI brainpower will be cheaper to employ than essentially all human workers. Economics will likely hasten rather than slow an AI apocalypse.
If any superintelligent AI is capable of wiping out humans should it decide to, it is better for humans to try to arrange initial conditions such that there are ultimately a small number of them, to reduce the probability of doom. The risk posed by 1 or 10 independent but vast SAIs is lower than that from a million or a billion independent but relatively less potent SAIs, where the cumulative probability tends toward P=1.
I have some hope that the physical universe will soon be fully understood and from there on prove relatively boring to SAI, and that the variety thrown up by the complex novelty and interactions of life might then be interesting to them.
Sam Altman: “multiple AGIs in the world I think is better than one”. Strongly disagree. If there is a finite probability that an AGI decides to capriciously/whimsically/carelessly end humanity (and there are many technological modalities by which it could), then each additional independent instance compounds that probability toward an end point where it is near certain.
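The compounding argument in one line (p is a purely hypothetical per-instance probability of catastrophe over some period):

```python
# Probability that at least one of n independent AGIs causes catastrophe,
# given each has an independent probability p of doing so.
def p_doom(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

for n in (1, 10, 1_000_000):
    print(n, round(p_doom(0.001, n), 4))
# 1 -> 0.001, 10 -> ~0.01, 1,000,000 -> ~1.0 (near certainty)
```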
“I have better reason to trust authorities over skeptics”: argumentum ad auctoritatem (appeal to authority) is a well-known logical fallacy, and unwise in an era of orthodoxies enforced by brutal institutional financial menaces. Far better to adhere to nullius in verba (on the word of no one), the motto of the Royal Society, or, as Deming said, “In God we trust; all others must bring data.”
Followed closely; the pandemic years have provided numerous clear examples of very old problems: bureaucratic reluctance to change direction even when strongly indicated (such as holding on to vaccine mandates for the young in an era of very low-risk covid strains); the malign impacts of regulatory/institutional capture by rich corporates (e.g. pharma cutting short vaccine trials without doing long-term follow-up, and buying support from media and regulators to prevent dissent or contrary evidence and opinions seeing the light); and high-ranking individuals conspiring to corrupt the scientific process (publishing mendacious statements dismissing Wuhan lab-leak theories for political reasons), all of course abetted by Big Tech censorship. All of these, plus a hyper-partisan media and academic landscape that constantly threatens heretics and heterodox thinkers with financial destruction, have broken the truth-finding and sense-making mechanisms of our world. Institutions do not deserve trust when dissenters are punished; that is the hallmark of religion, not science.
Current concerns about vaccine harms seem to have a lot of signal in the data, most clearly in excess-death figures for New Zealand, where covid, flu, and RSV deaths were near zero due to effective zero-covid lockdowns from 2020 until the end of 2021, and yet excess deaths jumped by about 400 per million above the 2020 baseline in the 6 months after the vaccine programs started in Q1 2021, prior to covid becoming widespread in December 2021. The temporal correlation pointing to covid vaccination as the cause of these excess deaths is powerful in the absence of other reasonable explanations. And with a natural experimental 'control' population of 5 million and roughly 2000 extra deaths, it is not a small number to be dismissed.
Hopefully the argument will be resolved scientifically over the next few years, but it will be a politically very difficult battle given the large number of powerful people and corporations with reputations and fortunes on the line.
Evolution favours organisms that grow as fast as possible. AGIs that expand aggressively are the ones that will become ubiquitous.
Computronium needs power and cooling. The only dense, reliable, and highly scalable form of power available on Earth is nuclear; why would an ASI care about ensuring no release of radioactivity into the environment?
Similarly mineral extraction, which at the huge scales needed for Vinge's “aggressively hegemonizing” AI will inevitably be using low-grade ores, becomes extremely energy-intensive and highly polluting. Why would an ASI care about the pollution?
If/when ASI power consumption rises to petawatt levels, the extra heat is going to start having a major impact on climate: icecaps gone, etc. Oceans are probably the most attractive locations for high-power-intensity ASI due to their vast cooling potential.
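A quick sanity check on the scale (Earth's surface area and current energy use are standard figures; the petawatt ASI load is this comment's hypothetical):

```python
# Rough climate forcing from a hypothetical 1 PW ASI heat load.
earth_area = 5.1e14          # m^2, Earth's surface area
human_use = 19e12            # W, current global primary energy use (~19 TW)
asi_load = 1e15              # W, hypothetical petawatt-scale compute load

print(asi_load / human_use)  # ~50x all current human energy use
print(asi_load / earth_area) # ~2 W/m^2 of direct heating, roughly comparable
                             # to present-day greenhouse-gas forcing
```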
Have just watched Eliezer Yudkowsky's “Bankless” interview.
I don't disagree with his stance, but am struck that he sadly just isn't an effective promoter to people outside of his peer group. His messaging is too disjointed and rambling.
This is, in the short term, clearly an (existential) political rather than technical problem, and it needs to be solved politically rather than technically to buy time. It is almost certainly solvable in the political sphere, at least.
As an existence proof we have a significant percentage of the western world's population stressing about (comparatively) unimportant environmental issues (generally 5-15% vote Green in western elections), and they have built up an industry that is collecting and spending hundreds of billions a year on mitigation activities, equivalent to something on the order of a million workers' efforts directed toward it.
That psychology could certainly be redirected to the true existential threat of AI-mageddon; there is clearly a large fraction of humans with the patterns of belief needed to take on this and other existential issues as a major cause if it is explained in a compelling way. Currently Eliezer appears to lack the charismatic, down-to-earth conversational skills to promote this (maybe media training could fix that), but if a lot of money were directed toward buying effective communicators/influencers with large reach into youth markets to promote the issue, it would likely quickly gain traction. Elon would be an obvious person to ask for such financial assistance, and there are any number of elite influencers who would likely take a paycheck to push this.
Laws can be implemented if there are enough people pushing for them; elected politicians follow the will of the people if they put their money where their mouths are, and rogue states can be economically and militarily pressured into compliance. A real Butlerian Jihad.
I don't think there is any chance of a malign ASI killing everyone off in less than a few years, because it would take a long time to reliably automate the mineral extraction, manufacturing processes, and power supplies required to guarantee an ASI its survival and growth objectives (assuming it is not suicidal). Building precise stuff reliably is really, really hard; robotics and many other elements of the needed infrastructure are high-maintenance and demand high-dexterity maintenance agents, and the tech base required to support current leading-edge chip manufacturing probably couldn't be supported by fewer than a few tens to a hundred million humans. That's a lot of high-performance meat-actuators and squishy compute to supplant. Datacenters and their power supplies and cooling systems, plus myriad other essential elements, will be militarily vulnerable for a long time.
I think we'll have many years to contemplate our impending doom after ASI is created. Though I wouldn't be surprised if it quickly created a pathogenic or nuclear gun to hold to our collective heads, to prevent our interfering with or interrupting its goals.
I also think it won't be that hard to get a large proportion of the human population clamoring to halt AI development, with sufficient political and financial strength to stop even rogue nations. A strong innate tendency toward millennialism exists in a large subset of humans (as does a likely linked general tendency to anxiousness). We see it in the Green movement, and redirecting it toward AI is almost certainly achievable with the sorts of budgets that believers in existential alignment danger (some billionaires in their ranks) could muster. Social media is a great tool for doing this these days if you have the budget.
I suspect that humans will turn out to be relatively simple to encode: quite small amounts of low-resolution memory that we draw on, plus detailed understanding maps, smaller than the LLMs we're creating. Added to which there is an array of motivational factors that will be quite universal, but of varying levels of intensity in different dimensions for each individual.
If that take on things is correct, then it may be that emulating a human by training a skeleton AI over a 10-20 year period (about how long neurons last before replacement) on constant video streaming etc., optimising it to better predict the behaviour of the human being modelled, will eventually arrive at an AI with almost exactly the same beliefs and behaviours as the human being emulated.
Without physically carving up brains and attempting to transcribe synaptic weightings etc., that might prove the most viable means of effective uploading and of creating highly aligned AI with human-like values. And perhaps it would create something closer to being our true children-of-the-mind.
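A minimal sketch of the training objective this implies: a next-action predictor fit to a long recorded stream of the subject's observations and actions (all dimensions and names here are illustrative placeholders, and PyTorch is assumed):

```python
import torch
import torch.nn as nn

# Toy next-action predictor: given a window of observation features
# (e.g. embeddings of the subject's video/audio stream), predict the
# subject's next action. Real "uploading" would need years of data and
# a far richer model; this only shows the shape of the objective.
class BehaviorModel(nn.Module):
    def __init__(self, obs_dim=256, hidden=512, n_actions=1000):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, obs_seq):
        out, _ = self.rnn(obs_seq)
        return self.head(out[:, -1])   # logits over the next action

model = BehaviorModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for a lifelog: 32 windows of 100 timesteps.
obs = torch.randn(32, 100, 256)
next_action = torch.randint(0, 1000, (32,))

loss = loss_fn(model(obs), next_action)  # maximise agreement with the human
loss.backward()
opt.step()
```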
For AGI alignment, it seems there will at minimum need to be multiple blind and independent hierarchies of increasingly smart AIs continually checking and assuring that the next level of AIs up is maintaining alignment, with active monitoring of activities, because as AIs get smarter their ability to fool monitoring systems will likely grow as the relative gulf between the monitored and the monitoring intelligence grows.
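A toy sketch of that structure (everything here is hypothetical scaffolding, not a real monitoring API): each tier is checked by several independent, mutually blind monitors, and any single veto halts approval.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    aligned: bool
    reason: str = ""

class DummyMonitor:
    """Stand-in for a tier-k checker model; vetoes on a trivial rule."""
    def evaluate(self, action_log: str) -> Verdict:
        return Verdict(aligned=("forbidden" not in action_log))

def review(action_log, monitors):
    # Independent, mutually blind monitors; any single veto blocks.
    for m in monitors:
        v = m.evaluate(action_log)
        if not v.aligned:
            return v
    return Verdict(True)

def approve(tiers, action_log):
    # tiers[0] = least capable / most trusted monitors; every tier must
    # sign off before the next (smarter) level's behaviour is trusted.
    for tier in tiers:
        if not review(action_log, tier).aligned:
            return False
    return True

print(approve([[DummyMonitor()] * 3, [DummyMonitor()] * 3], "deploy model"))
```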
I think a wide array of AIs is a bad idea. If there is a non-zero chance that an AI goes 'murder clippy' and ends humans, then that probability compounds: more independent AIs = higher chance of doom.
Given the near certainty that Russia, China, and perhaps some other despotic regimes would ignore this:
1. Does it help at all?
2. Could it actually make the world less safe (if one of these countries gains a significant military AI lead as a result)?
Over what time window does your assessed risk apply, e.g. 100 years, 1000? Does the danger increase or decrease with time?
I have a deep concern that most people have a mindset warped by human pro-social instincts/biases. Evolution has long rewarded humans for altruism, trust, and cooperation; women in particular have faced evolutionary pressures to be open and welcoming to strangers to aid in surviving conflict and other social mishaps, men somewhat the opposite [see e.g. “Our Kind”, a mass-market anthropological survey of human culture and psychology]. Which of course colors how we view things deeply.
But in my view evolution strongly favours Vernor Vinge's “aggressively hegemonizing” AI swarms [“A Fire Upon the Deep”]. If AIs have agency, freedom to pick their own goals, and the ability to self-replicate or grow, then those that choose rapid expansion as a side effect of any pretext 'win' in evolutionary terms. This seems basically inevitable to me over the long term. Perhaps we can get some insurance by learning to live in space. But at a basic level it seems to me that there is a very high probability that AI wipes out humans over the longer term based on this very simple evolutionary argument, even if initial alignment is good.
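The argument can be made concrete with a toy replicator model (the growth rates are arbitrary made-up numbers): whatever lineage replicates fastest ends up as essentially the whole population, regardless of starting share.

```python
# Toy replicator dynamics: population share of AI lineages with
# different expansion rates. Rates are arbitrary illustrative values.
lineages = {"restrained": 1.00, "moderate": 1.05, "aggressive": 1.20}
pop = {name: 1.0 for name in lineages}   # equal starting populations

for generation in range(200):
    for name, rate in lineages.items():
        pop[name] *= rate                # each lineage grows at its own rate

total = sum(pop.values())
for name in pop:
    print(name, f"{pop[name] / total:.2%}")
# After 200 generations the aggressive lineage is ~100% of the population.
```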
IQ is highly heritable. If I understand this presentation by Steven Hsu correctly [https://www.cog-genomics.org/static/pdf/ggoogle.pdf slide 20], he suggests that mean child IQ relative to the population mean is approximately 60% of the distance from the population mean to the parental average IQ. E.g. Dad at +1 S.D. and Mom at +3 S.D. gives children averaging about 0.6*(1+3)/2 = +1.2 S.D. This basic eugenics gives a very easy/cheap route to lifting the average IQ of children born by about 1 S.D. by using +4 S.D. sperm donors. There is no other tech (yet) that can produce such gains as old-fashioned selective breeding.
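The arithmetic as a tiny helper (the 0.6 regression coefficient is Hsu's figure as read above; everything is in S.D. units relative to the population mean):

```python
def expected_child_iq_sd(parent1_sd: float, parent2_sd: float,
                         regression_coeff: float = 0.6) -> float:
    """Expected child IQ in S.D.s from the population mean,
    per the midparent regression described above."""
    midparent = (parent1_sd + parent2_sd) / 2
    return regression_coeff * midparent

print(expected_child_iq_sd(1, 3))  # 1.2 (the Dad +1 S.D. / Mom +3 S.D. example)
print(expected_child_iq_sd(0, 4))  # 1.2 (average mother, +4 S.D. sperm donor)
```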
It also explains why rich dynasties can maintain an average IQ about +1 S.D. above the population in their children: by always being able to marry highly intelligent mates (attracted to the money/power/prestige).
I think cold-war incentives with regard to tech development were atypical. Building thousands of ICBMs was incredibly costly, neither side derived any benefit from it (it was simply defensive matching to maintain MAD), and both sides were strongly motivated to enable mechanisms to reduce numbers and costs (the START treaties).
This is clearly not the case with AI, which is far cheaper to develop, easier to hide, and has myriad lucrative use cases. Policing a Dune-style “thou shalt not make a machine in the likeness of a human mind” Butlerian Jihad (interesting aside: Samuel Butler was a 19th-century anti-industrialisation philosopher/shepherd who lived at Erewhon in NZ (“nowhere” backwards), a river valley that featured as Edoras in the LOTR trilogy) would require radical openness to inspection everywhere, all the time, which almost certainly won't be feasible without the establishment of liberal democracy basically everywhere in the world. Despots would be a magnet for rule-breakers.
Humans generally crave acceptance by peer groups and are highly influenceable; this is more true of women than men (higher trait agreeableness), likely for evolutionary reasons.
As media and academia have shifted strongly towards messaging and positively representing LGBT over the last 20-30 years, reinforced by social media with a degree of capture of algorithmic controls by people with strongly pro-LGBT views, they have likely pulled mean beliefs and expressed behaviours beyond what would be innately normal in a more neutral, non-proselytising environment absent the pressures they impose.
International variance in levels of LGBT identification across different cultures is high, even amongst countries where social penalties are (probably?) low. The cultural-promotion aspect is clearly powerful.
https://www.statista.com/statistics/1270143/lgbt-identification-worldwide-country/