I think it’s called signal jamming? An alarm that sounds all the time is just as useless as an alarm that never goes off.
To an individual human, death by AI (or by climate catastrophe) is worse than "natural" death from old age only to the extent that it comes sooner and is perhaps more violent.
I would expect death by AI to be very swift but not violent, e.g. nanites releasing neurotoxin into the bloodstream of every human on the planet like Yudkowsky suggested.
To someone who cares about the species, or who cares about quantity of sentient individuals, AI is likely to reduce total utility by quite a bit.
Like I said above, I expect the human species to be doomed by default due to lots of other existential threats, so in the long term superintelligent AI has only upsides.
(e.g., the increased wavelength of the whip in the figure below)
Don’t you mean increased amplitude?
But it would be surprising if life could only appear on our planet, since Earth doesn't seem to have any unique features.
What does “could appear” mean here? 1 in 10? 1 in a trillion? 1 in 10^50?
Remember we live in a tiny universe with only ~10^23 stars.
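To make the point concrete, here is a toy calculation (the probabilities are purely illustrative placeholders, not estimates): with ~10^23 stars in the observable universe, the expected number of independent origins of life swings from "everywhere" to "effectively nowhere" depending on the assumed per-star probability.

```python
# Toy Fermi-style estimate: expected number of life-bearing star systems
# in the observable universe, for several assumed per-star probabilities.
# N_STARS is a rough order-of-magnitude figure; the probabilities are
# illustrative placeholders, not actual estimates.

N_STARS = 1e23  # approximate number of stars in the observable universe

for p_life in (1e-1, 1e-12, 1e-50):
    expected = N_STARS * p_life
    print(f"p = {p_life:.0e}  ->  expected life-bearing systems ~ {expected:.0e}")
```

At p = 1 in 10 the universe teems with life; at 1 in 10^50 even our "tiny" 10^23 stars leave us overwhelmingly likely to be alone. The question "could appear" hides fifty orders of magnitude of uncertainty.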
It was a rhetorical question, there is nothing strange about not observing aliens. I’m an avid critic of the Fermi paradox. You simply update towards their nonexistence and, to a lesser extent, whatever other hypothesis fits that observation. You don’t start out with the romantic idea that aliens ought to be out there, living their parallel lives, and then call the lack of evidence thereof a “paradox”.
The probability that all sentient life in the observable universe just so happens to invariably reside in the limbo state between nonexistence and total dominance is vanishingly small, to a comical degree. Even on our own Earth, sentient life only occupies a small fragment of our evolutionary history, and intelligent life even more so. Either we’re alone, or we’re in a zoo/simulation.
Either way, Clippy doesn’t kill more than us.
How strange for us to achieve superintelligence where every other life in the universe has failed, don’t you think?
Moloch is to the world what senescence is to a person. It, too, dies by default.
Is death by AI really any more dire than the default outcome, i.e. the slow and agonizing decay of the body until cancer/Alzheimer’s delivers the final blow?
The second definitely doesn’t work because it’s actually an endothermic reaction (reverse neutron decay), but Churchill couldn’t have known that in 1931 before neutron mass was measured accurately.
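The endothermicity follows directly from the particle masses: a neutron outweighs a proton plus an electron, so p + e⁻ → n + ν requires a net energy input of about 0.78 MeV. A quick check with rounded CODATA mass values:

```python
# Why "reverse neutron decay" (p + e- -> n + nu) is endothermic:
# the neutron is heavier than the proton and electron combined.
# Rest masses in MeV/c^2 (CODATA values, rounded to 3 decimals).

m_proton = 938.272
m_electron = 0.511
m_neutron = 939.565

q_value = (m_proton + m_electron) - m_neutron
print(f"Q = {q_value:.3f} MeV")  # negative Q: reaction absorbs ~0.78 MeV
```

In 1931 the neutron hadn't even been discovered (Chadwick's experiment came in 1932), let alone had its mass measured to sub-MeV precision, so Churchill could not have known the sign of this Q-value.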
My takeaway from this:
Beware of laundry lists of future predictions
Update against currently promising (“hot”) technologies turning out to be impactful
Update against the idea that government institutions are becoming less competent
Update against wisdom and coordination as useful tools for defusing x-risks
A more paranoid man than myself would start musing about anthropic shadows and selection effects.
Why paranoid? I don’t quite get the argument here; doesn’t anthropic shadow imply we have nothing to worry about (except for maybe hyperexistential risks) since we’re guaranteed to be living in a timeline where humanity survives in the end?
A pandemic happened that hurt the economy and increased demand for consumer electronics, driving up the cost of computer chips
Intel announced that it was having major manufacturing issues
Bitcoin, Ethereum, and other coins reached an all-time high, driving up the price of GPUs
I don’t see much of a coincidence here. The pandemic and the crypto boom are highly correlated events; it’s hardly surprising that deflationary stores of value do well in times of crisis, and gold also hit an all-time high during the same period. Besides, the last crypto boom in 2017 didn’t seem to slow down investment in deep learning. Intel has never been a big player in the GPU market, and CPU prices, which are reasonable right now, aren’t that relevant for deep learning anyway. And the “AI and Compute” trend line broke down pretty much as soon as the OpenAI article was released, a solid 1.5–2 years before the Covid-19 crisis hit. That’s a long time in ML world.
Unless you’re a fanatical devotee of the God of Straight Lines, there isn’t anything here to be explained. When straight lines run into physical limitations, physics wins. Hardware progress clearly can’t keep up with the 10x per year growth rate of AI compute, and the only way to make up the difference was to increase monetary investment in the field, which is becoming harder to justify given the lack of returns so far.
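The size of that gap is easy to illustrate (the hardware figure below is an assumed Moore's-law-ish placeholder, not a measured rate): if total training compute grows 10x per year while FLOPS-per-dollar improves only ~1.5x per year, the shortfall compounds into spending growth that no budget can sustain for long.

```python
# Illustrative gap between the "AI and Compute" trend (~10x/year total
# compute) and hardware price-performance gains (~1.5x/year assumed,
# a rough Moore's-law-style placeholder). Whatever hardware can't
# deliver must be bought with more money.

compute_growth = 10.0   # total training-compute multiplier per year (trend line)
hardware_growth = 1.5   # assumed FLOPS-per-dollar multiplier per year

for years in (1, 2, 5):
    spend_multiplier = (compute_growth / hardware_growth) ** years
    print(f"after {years} yr: spending must grow ~{spend_multiplier:,.0f}x")
```

After five years of the trend, required spending grows by a factor of roughly ten thousand, which is why the straight line had to break well before any pandemic or GPU shortage.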
But if you disagree and believe that the Straight Line is going to resume any day now, go ahead, buy more Nvidia stock, and win.
I don’t mean to sound overdramatic here, but equating honesty with obedience to authority is quite a sinister sleight of hand. Skipping excessive homework is not only advantageous, it is also righteous.
but if the dean had also been unsympathetic we would have had no recourse
I beg to differ. Maybe your school was particularly strict, but usually there are plenty of ways around homework assignments in high school: copy homework from other classmates at the last minute, turn in fake homework, read the summary instead of the whole book, share the workload with your friends, get solutions from older students, call in sick strategically on days with especially large workloads, etc.
And the advance of technology isn’t all bad; it also provides students with new options: GPT-3 for essays, Wolfram Alpha for math problems, mechanical-turk-like services to outsource homework, handwriting robots, and soon, test-taking AIs. If you’ve got hacking skills, well, let’s just say you’d be surprised what sort of stuff teachers leave on the school server. And for online classes, keep in mind that it also becomes harder for the teacher to verify the authenticity of your homework. Be creative, think positive.
The absolute travel time matters less for disease spread in this case. It doesn’t matter how long it would theoretically take to travel to North Sentinel Island if nobody actually goes there for years on end. Disease won’t spread to those places naturally.
And if an organization is so hell-bent on destroying humanity as to track down every last isolated pocket of human settlements on Earth (a difficult task in itself as they’re obscure almost by definition) and plant the virus there, they’ll most certainly have no trouble bringing it to Mars either.
I strongly believe that nuclear war and climate change are not existential risks, by a large margin.
For engineered pandemics, I don’t see why Mars would be more helpful than any other isolated pockets on Earth—do you expect there to be less exchange of people and goods between Earth and Mars than, say, North Sentinel Island?
Curiously enough, the last scenario you pointed out, dystopias, might just become my new top candidate for an x-risk that Mars colonization could actually mitigate. I need to think more about it, though.
Moving to another planet does not save you from misaligned superintelligence.
Not only that, there are hardly any other existential risks to be avoided by Mars colonization, either.
Neuralink… I just don’t see any scenario where humans have much to contribute to superintelligence, or where “merging” is even a coherent idea
The only way I can see Musk’s position making sense is if it’s actually a 4D chess move to crack the brain’s algorithm and use it to beat everyone else to AGI, rather than the reasoning he usually gives in public for why Neuralink is relevant to AGI. Needless to say, I am very skeptical of this hypothesis.
I would love to hear some longevity-related biotech investment advice from rationalists; I (and presumably many others here) predict longevity to be the second biggest deal in big-picture futurism.
The only investment idea I can come up with myself is for-profit spin-off companies from SENS Research Foundation, but that’s just the obvious option for someone without expertise in the field who trusts the most vocal experts.
Although some growth potential has already been lost due to the pandemic bringing a lot of attention to this field, I think we’re still early enough to capture some of the returns.
If you want to learn more about ongoing research into superheavy elements:
To me the most exciting prospect of this research is the potential discovery of not just an island, but an entire continent of stability that could open up endless engineering potential in the realm of nuclear chemistry.
No that’s not what I meant; these two issues divide different tribes but the level of toxicity and fanaticism is similar. Heated debates around US-China war scenarios are very common in Taiwanese/Chinese overseas communities.
I also have a personal interest in trying to keep LessWrong politics-free, because for me fighting down the urge to engage in political discussions is a burden, like an ex-junkie constantly tempted with easily available drugs. Old habits die hard, so I immediately committed to not participating in any object-level discussions upon seeing the title of this post. I’m not sure whether this applies to anyone else.