Student in fundamental and applied mathematics, interested in theoretical computer science and AI alignment
Technoprogressive, biocosmist, rationalist, defensive accelerationist, longtermist
The US government acting this early increases the risk of a coup led by high-ranking politicians and military officials. There is a subcategory of scenarios where AI is nominally “aligned” to e.g. the President or the Secretary of Defense and will follow the orders of that specific individual. Once AI grows sufficiently powerful and advanced, that individual will use the AI to seize power and subject the fate of humanity to their whims. This is unlikely to end well.
By assuming direct control over AI development, the US government increases the chance of this bad outcome. Other bad outcomes, such as human extinction or permanent enslavement by ASI, also become significantly more likely, as the government will select for agents that are more willing to commit violence and select for (feigned) obedience. Hence, I think early government intervention of the kind happening right now is net-negative on several counts.
And
Reportedly xAI, OpenAI, and DeepMind are already in discussions with the Pentagon to replace Anthropic. I wonder if Elon’s recent misogynistic outburst against Amanda Askell is related or just serendipitous.
@Rob Bensinger on the EA Forum:
As a side-note, I do want to emphasize that from the MIRI cluster’s perspective, it’s fine for correct reasoning in AGI to arise incidentally or implicitly, as long as it happens somehow (and as long as the system’s alignment-relevant properties aren’t obscured and the system ends up safe and reliable).
The main reason to work on decision theory in AI alignment has never been “What if people don’t make AI ‘decision-theoretic’ enough?” or “What if people mistakenly think CDT is correct and so build CDT into their AI system?” The main reason is that the many forms of weird, inconsistent, and poorly-generalizing behavior prescribed by CDT and EDT suggest that there are big holes in our current understanding of how decision-making works, holes deep enough that we’ve even been misunderstanding basic things at the level of “decision-theoretic criterion of rightness”.
It’s not that I want decision theorists to try to build AI systems (even notional ones). It’s that there are things that currently seem fundamentally confusing about the nature of decision-making, and resolving those confusions seems like it would help clarify a lot of questions about how optimization works. That’s part of why these issues strike me as natural for academic philosophers to take a swing at (while also being continuous with theoretical computer science, game theory, etc.).
Surely if you’re around those parts you should know that billionaire philanthropy is generally ineffective and not focused on effective interventions in global health and development. Epstein was primarily known as a philanthropist focused on academic and nonprofit scientific research, hence the large number of famous/Ivy League scientists in his social circle. According to Eliezer’s account, he didn’t understand SIAI’s beliefs on alignment.
As Eliezer himself notes, Epstein does seem to have personally benefited from improving his image through philanthropy, as social cover for his trafficking ring, so in this case in particular (not necessarily in all cases of felons or even sex offenders) taking money from him seems straightforwardly bad.
I tend to agree with you that it was, in hindsight, an error to even enter discussions with him at all, but I don’t think @Rob Bensinger is being dishonest here. “Decided against pursuing the option” does mean they took some time to make the decision and didn’t just ghost him without exchanging any more information (which I agree would have been preferable in hindsight).
It was new when it was published in 1995! Industrial Society and Its Future was explicitly cited in Kurzweil’s The Age of Spiritual Machines (1999) and then Bill Joy’s “Why the Future Doesn’t Need Us” (2000), the latter of which helped found modern existential risk research.
This is quite confusing to me. It was never my read of your slowdown scenario that the shareholders were supposed to have any relevance by the end of it. My read (which appears to align with what @williawa is saying elsewhere in this thread) was that the “Oversight Committee” emerged as the new ruling class supplanting the shareholders (let alone any random person who got rich trying to “escape the permanent underclass”), just like e.g. the barbarian lords replaced the Roman patricians, the industrial capitalists replaced the aristocracy, guild masters, and landed gentry, etc. Technological transitions are notoriously a common time for newly empowered elites to stage a revolution against old elites!
On the flip side, the OpenAI foundation now has the occasion to do the funniest thing.
Émile Torres would be the most well-known person in that camp.
I think @rife is talking either about mutual cooperation between safety advocates and capabilities researchers, or mutual cooperation between humans and AIs.
Pause AI is clearly a central member of Camp B? And Holly signed the superintelligence petition.
If it is a concern that your tool might be symmetric between truth and bullshit, then you should probably not have made the tool in the first place.
I think one can make a stronger claim: the Curry-Howard isomorphism means a superhuman (constructive?) mathematician would near-definitionally be a superhuman (functional?) programmer as well.
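As a minimal illustration of the correspondence being invoked (the names here are my own, not from the thread): under Curry-Howard, propositions are types and constructive proofs are programs, so writing a well-typed total term literally is proving the corresponding proposition.

```haskell
-- Propositions as types: a total term of a type is a proof of the
-- corresponding proposition (reading (,) as ∧ and (->) as implication).

-- Modus ponens: (A ∧ (A → B)) → B. The proof is just application.
modusPonens :: (a, a -> b) -> b
modusPonens (x, f) = f x

-- Transitivity of implication: (A → B) → (B → C) → (A → C).
-- The proof is function composition.
implTrans :: (a -> b) -> (b -> c) -> (a -> c)
implTrans f g = g . f
```

In this sense a constructive mathematician producing proof terms and a functional programmer producing well-typed programs are doing the same activity, which is the stronger claim above.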
Trying to outline the cruxes:
If you think AI safety requires safety research, differential acceleration, etc. and trust AI companies to deliver them, your best bet for political affiliation will be with tech-industry-friendly bipartisan centrists.
If you think AI safety requires safety research, differential acceleration, etc. and don’t trust AI companies to deliver them, your best bet for political affiliation will be with tech-friendly progressives.
If you think AI safety requires pausing or stopping all AI research as soon as possible through an international agreement, your best bet for political affiliation will be with anti-tech progressives, as anti-tech conservatives will recoil at the “international agreement” aspect.
If you think AI safety requires pausing or stopping all AI research as soon as possible, and no international agreement is needed because every country should independently realize that AGI will kill them all, your best bet for political affiliation will be with anti-tech people in general, whether progressives or conservatives, and probably more with anti-tech conservatives if you expect them to have more political power within AGI timelines.
He was a commenter on Overcoming Bias as @Shane_Legg, received a monetary prize from SIAI for his work, commented on SIAI’s strategy on his blog, and took part in the 2010 Singularity Summit, where he and Hassabis were introduced to Thiel, DeepMind’s first major VC funder (as recounted both by Altman in the tweet mentioned in OP, and in IABIED). I’m not sure this is “being influenced by early Lesswrong” as much as originating in the same memetic milieu – Shane Legg was the one who popularized the term “AGI” and wrote papers like this with Hutter, for example.
IIRC Aella and Grimes got copies in advance and AFAIK haven’t written book reviews (at least not in the sense Scott or the press did).
Not only do I have shorter ASI timelines, I think the AI capabilities required for authoritarianism, which are lower than ASI, are already there to some extent, and will be far more advanced by 2028.
Even if AI capabilities stalled, I would still be at the very least uncertain about whether there will still be free and fair elections in 2028.
In any case I expect substantial organizational continuity to persist in the military-industrial complex in particular, comparable to that in major AI companies.
I expect AI CEOs to be somewhat less likely to be malevolent, and much less likely to be ideological fanatics, than politicians and military officials.