I’m not sure what you’re objecting to—the idea of superhuman intelligence? the idea that superhuman intelligence would determine the fate of the world? the idea that “unaligned” superhuman intelligence would produce a world inhospitable to humanity?
How did GPT-3 participate?
Rome gave Europe, and also Russia and America, its religion, its politics, and much of its intellectual culture. It is the ancestor of those modern societies, just as much as ancient China is the ancestor of modern China. The only difference is that in Europe, as in India and the Islamic world, political unification has been the exception rather than the rule.
If I just go by the beginning and the ending of your essay, its tone is: China was always the center of the world, Europe is a bunch of hillbillies who conquered the world by accident. It emphasizes a handful of economic and geographic contingencies, rather than the continuities of European political and cultural history. It’s interesting to look for obscure and ironic turning points, but one shouldn’t forget the big picture.
A few comments:
Designing an organization to overthrow your government is a dangerous form of “fun”.
To explain the motives and solidarity of 9/11 hijackers, one does not need to overemphasize the role of religion, at the complete expense of politics and warfare. They were Arab Muslims aiming to drive the US and Israel out of their world. I do wonder about their counterintelligence though—what they did to throw off detection.
In devising a scenario for Epstein’s murder, you don’t explore how it might have further been facilitated, if the order for it came “from above”, i.e. from persons with authority over the prison and the subsequent investigations.
You suggest that he was a rich and powerful sex offender who was given immunity in exchange for providing intelligence e.g. to the CIA. Well, he may have had a kind of immunity, but it seems unlikely that his status as a brothel-master with the capacity to blackmail VIPs is something that he developed independently of his liaisons with intelligence agencies. Then there’s his relationship with the Maxwells, and his keen interest in bringing science and technology pioneers into his web… Concubines and imperial harems have been a playground for spycraft and political intrigue for as long as they have existed. I suspect Epstein was simply the public face of an operation designed and operated by experienced professionals.
There is much to discuss here, but I’ll just focus on what’s missing: Rome. Unless you agree with Donna D2 from TikTok, Rome existed, it’s a civilizational ancestor of America, Russia, and Western Europe, and it’s an essential part of why Europe conquered the world.
the clay telling the potter what he should do with the clay … It’s His game, His gameboard, His pieces, His rules, His decisions
In other words, might makes right?
The great sin against reason is not belief in a God, it’s belief in a good God. But people cling to scraps of unreason and hope in order to endure this horror show of a world.
Are you related to Tom McCabe, who posted on this page years ago? Is there some tragedy that brings you here?
I don’t know… If I try to think of Anglophone philosophers of mind who I respect, I think of “Australian materialists” like Armstrong and Chalmers. No doubt there are plenty of worthwhile thoughts among the British, Americans, etc too, but you seem to be promoting something I deplore: the attempt to rule out various hard problems and unwelcome possibilities by insisting that words shouldn’t be used that way. Celia Green even suggested that this 1984-like tactic could be the philosophy of a new dark age in which inquiry was stifled, not by belief in religion, but by “belief in society”; but perhaps technology has averted that future. Head-in-the-sand anthropocentrism is hardly tenable in a world where, already, someone could hook up a GPT-3 chatbot to a Boston Dynamics chassis and create an entity from deep within the uncanny valley.
Do you have any thoughts on chess computers, guided missiles, computer viruses, etc, and whether they make a case for worries about AGI, even if you consider them something alien to the human kind of intelligence?
Does that paper actually mention any overall models of the human mind? It has a list of ingredients, but does it say how they should be combined?
arXiv now carries a “response to economics as gauge theory”, by a physicist turned ML researcher, known for co-authoring a critique of Weinstein’s unified theory of physics earlier this year. As with the physics critique, the commentary seems pretty basic and hardly the final word on anything, but it’s notable because the critic works at DeepMind (he’s presenting a paper at NeurIPS this afternoon).
http://www.metaethical.ai is the state of the art as far as I’m concerned…
That’s funny. “100” was a stand-in for sqrt(11009); I didn’t anticipate that all the factors would actually be above 100.
Basically you do long division by every prime up to sqrt(11009) ≈ 104.
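For concreteness, that trial-division procedure can be sketched as follows (a minimal illustration; the function name is my own):

```python
# Trial division: divide n by every candidate up to sqrt(n).
# For n = 11009, sqrt(n) ~ 104.9, and the smallest prime factor
# turns out to be 101 -- i.e., above 100, as noted above.

def smallest_prime_factor(n: int) -> int:
    """Return the smallest prime factor of n (n itself if n is prime)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

n = 11009
p = smallest_prime_factor(n)
print(p, n // p)  # 101 109
```

(Dividing by every integer rather than only primes is wasteful but harmless: any composite divisor would have been preceded by one of its prime factors.)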
Before we speculate on how Omicron may spread, perhaps we should do a postmortem on Delta? In India, after the big spike of April, the daily new cases dropped way down. In pre-Omicron South Africa, daily new cases were also low. Presumably there are other countries where a Delta boom has died away to a trickle of daily cases. Do we have any understanding of why?
About Omicron, there is speculation that it has been selected to escape vaccines. But my understanding was that vaccines don’t do much to stop the spread anyway; they just protect against severe illness. Is this wrong? Or is the idea that the people who do fall ill produce enough extra virus that there is selection pressure for a variant which evades even the protection against illness?
I’ll also remark that natural immunity is likely to be more robust than mRNA-vaccine-derived immunity, since the latter gives you antibodies just against the spike protein, while natural immunity will produce antibodies against other components of the virus too.
There are at least two more clues that became public knowledge since this article was written: the EcoHealth grant application obtained by The Intercept, which included proposals to modify spike proteins and cleavage sites in coronaviruses; and similar news that the Wuhan Institute of Virology was studying bat coronaviruses from Laos, home of the closest known natural relatives to SARS-CoV-2.
Add that the WIV took a crucial viral database offline in September 2019, and it’s easy to suppose that they had some lab accident that month, involving Laotian bat virus, and that they took the database offline to hide the evidence.
Evidently there is some possibility that the virus was modified in spike protein or cleavage sites or both, but I cannot judge the evidence. E.g. maybe the furin cleavage site was CRISPRed into place, or maybe it was produced by a natural recombination.
One may also ask whether WIV was modifying its viruses only because EcoHealth suggested it, or whether that would have been taking place anyway. Also, the biodefense/biowarfare departments of the Chinese and American militaries would, I think, be silent partners in all such collaborations. Both China and America would want to know what these viruses can do, and America would want to know what experiments China may be conducting.
Or, you know, maybe someone in central China was importing frozen food from Laos that happened to contain infected tissues from an unknown intermediate species, and the virus was just on the threshold of mutating to a human-adapted form but hadn’t done so in Laos, and it’s just a big coincidence that this threshold was finally crossed in the vicinity of the world’s main collection of bat viruses… I’m sure more could be done to steelman the theory of a natural origin, but at this point, it does look more like a product of human action.
I don’t think anyone (e.g., at FHI or MIRI) is worried about human extinction via gray goo anymore.
The fate of the concept of nanotechnology has been a curious one. You had the Feynman/Heinlein idea of small machines making smaller machines until you get to atoms. There were multiple pathways towards control over individual atoms, from the usual chemical methods of bulk synthesis, to mechanical systems like atomic force microscopes.
But I think Eric Drexler’s biggest inspiration was simply molecular biology. The cell had been revealed as an extraordinary molecular structure whose parts included a database of designs (the genome) and a place of manufacture (the ribosome). What Drexler did in his books, was to take that concept, and imagine it being realized by something other than the biological chemistry of proteins and membranes and water. In particular, he envisaged rigid mechanical structures, often based on diamond (i.e. a lattice of carbons with a surface of hydrogen), often assembled in hard vacuum by factory-like nano-mechanisms, rather than grown in a fluid medium by redundant, fault-tolerant, stochastic self-assembly (as in the living cell).
Having seen this potential, he then saw this ‘nanotechnology’ as a way to do all kinds of transhuman things: make AI that is human-equivalent, but much smaller and faster (and hotter) than the human brain; grow a starship from a molecularly precise 3d printer in an afternoon; resurrect the cryonically suspended dead. And also, as a way to make replicating artificial life that could render the earth uninhabitable.
For many years, there was an influential futurist subculture around Drexler’s thought and his institute, the Foresight Institute. And nanotechnology made its way into SF pop culture, especially the idea of a ‘nanobot’. Nanobots are still there as an SF trope—and are sometimes cited as an inspiration in real research that involves some kind of controlled nanomechanical process—but I think it’s unquestionable that the hype that surrounded that nano-futurist community has greatly diminished, as the years kept passing without the occurrence of the “assembler breakthrough” (the ability to make nonbiological nano-manufacturing agents).
There is a definite sense in which I think Eliezer eventually took up a place in culture analogous to that once held by Eric Drexler. Drexler had articulated a techno-eschatology in which the entire future revolved around the rise of nanotechnology (and his core idea for how humanity could survive was to spread into space; he had other ideas too, but I’d say that’s the essence of his big-picture strategy), and it was underpinned not just by SF musings but also by nanomachine designs, complete with engineering calculations. With Eliezer, the crucial technology is artificial intelligence, the core idea is alignment versus extinction via (e.g.) paperclip maximizer, and the technical plausibility arguments come from computer science rather than physics.
Those who are suspicious of utopian and dystopian thought in general, including their technologically motivated forms, are happy to say that Drexler’s extreme nano-futurology faded because something about it was never possible, and that the same fate awaits Eliezer’s extreme AI-futurology. But as for me, I find the arguments in both cases quite logical. And that raises the question, even as we live through a rise in AI capabilities that is keeping Eliezer’s concerns very topical: why did Drexler’s nano-futurism fade? Not just in the sense that the assembler breakthrough never became a recurring topic of public concern, the way that climate change did, but also in the sense that you don’t see effective altruists worrying about it—and that is purely an accident of their living in the 2020s; if effective altruism had existed in the 1990s, there’s little doubt that gray goo and nanowar would have been high on the list of existential risks.
Understanding what happened to Drexler’s nano-futurism requires understanding what kind of ‘nano’ or chemical progress has occurred since those days, and whether the failure of certain things to eventuate is because they are impossible, because not enough of the right people were interested, because the relevant research was starved of funds and suppressed (but then, by whom, how, and why), or because it’s hard and we didn’t cross the right threshold yet, the way that artificial neural networks couldn’t really take off until the hardware for deep learning existed.
It seems clear that ‘nanotechnology’ in the form of everything biological, is still developing powerfully and in an uninhibited way. The Covid pandemic has actually given us a glimpse of what a war against a nano-replicator is like, in the era of a global information society with molecular tools. And gene editing, synthetic biology, organoids, all kinds of macabre cyborgian experiments on lab animals, etc, develop unabated in our biotech society.
As for the non-biological side… it was sometimes joked that ‘nanotechnology’ is just a synonym for ‘chemistry’. Obviously, the world of chemical experiment and technique, quantum manipulations of atoms, design of new materials—all that continues to progress too. So it seems that what really hasn’t happened, is that specific vision of assemblers, nanocomputers, and nanorobots made from diamond-like substances.
Again, one may say: it’s possible, it just hasn’t happened yet for some reason. The world of low-dimensional carbon substances—buckyballs, buckytubes, graphene—seems to me the closest that we’ve come so far. All that research is still developing, and perhaps it will eventually bootstrap its way to the Drexlerian level of nanotechnology, once the right critical thresholds are passed… Or, one might say that Eric’s vision (assemblers, nanocomputers, nanorobots) will come to pass, without even requiring “diamondoid” nanotechnology—instead it will happen via synthetic biology and/or other chemical pathways.
My own opinion is that the diamondoid nanotechnology seems like it should be possible, but I wonder about its biocompatibility—a crucial theme in the nanomedical research of Robert Freitas, who was the champion of medical applications as envisaged by Drexler. I am just skeptical about the capacity of such systems to be useful in a biochemical environment. Speaking of astronomically sized intelligences, Stanislaw Lem once wrote that “only a star can survive among stars”, meaning that such intelligences should have superficial similarities to natural celestial bodies, because they are shaped by a common physical regime; and perhaps biomedically useful nanomachines must necessarily resemble and operate like the protein complexes of natural biology, because they have to work in that same regime of soluble biopolymers.
Specifically with respect to ‘gray goo’, i.e. nonbiological replicators that eat the ecosphere (keywords include ‘aerovore’ and ‘ecophagy’), it seems like it ought to be physically possible; the only reason we don’t need to worry so much about diamondoid aerovores smothering the earth is that, for some reason, the diamondoid kind of nanotechnology has received very little research funding.
Does anyone have an informed comment about the use of gauge theory in decision theory?
Eric Weinstein just gave a controversial econophysics talk [edit: link added] at the University of Chicago about “geometric marginalism”, which uses geometric techniques from Yang-Mills theory to model changing preferences.
If this can be used in economics, it can probably be used in decision theory in general, and I see at least one example of a physicist doing this (“decision process theory”), but I don’t know how it compares to conventional approaches.
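For concreteness, here is a minimal sketch of the kind of phenomenon that motivates a gauge-theoretic treatment of economics (this is my own illustration, not taken from the talk; the function name and toy data are invented). A Divisia-style price index integrates d(log P) = Σᵢ sᵢ d(log pᵢ), where sᵢ are expenditure shares; transporting the index along two different paths between the same endpoints generally gives different values, which is the hallmark of a connection with nonzero curvature:

```python
import math

def divisia_log_index(path):
    """Integrate sum_i s_i * d(log p_i) along a discrete path of
    (prices, quantities) points, using shares at the start of each step."""
    total = 0.0
    for (p0, q0), (p1, q1) in zip(path, path[1:]):
        spend = sum(pi * qi for pi, qi in zip(p0, q0))
        shares = [pi * qi / spend for pi, qi in zip(p0, q0)]
        total += sum(s * (math.log(b) - math.log(a))
                     for s, a, b in zip(shares, p0, p1))
    return total

start = ([1.0, 1.0], [1.0, 1.0])
end   = ([2.0, 3.0], [3.0, 1.0])
# Path A: prices move first, then quantities; Path B: the reverse.
path_a = [start, ([2.0, 3.0], [1.0, 1.0]), end]
path_b = [start, ([1.0, 1.0], [3.0, 1.0]), end]
print(divisia_log_index(path_a), divisia_log_index(path_b))  # the two differ
```

The two path integrals disagree even though the endpoints coincide, so the index is path-dependent; a connection (and its curvature) is the natural language for quantifying that, which is roughly the starting point of the Malaney–Weinstein line of work.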
Do you agree that AI could make militaries, terrorists, organized crime, and dictators more effective? If you do agree, do you need any further discussion?
Zuckerberg just went meta. Reminder that in the novel, the architect of the metaverse was a billionaire using a Burroughsian information virus for mind control…
Hegel seems to be saying that one can become an incarnation of a moral ideal. I imagine quite a few people in the rationalist and effective altruist communities have attempted this…