I currently work for Palisade Research as a generalist and strategy director, and for the Survival and Flourishing Fund as a grant round facilitator.
I’ve been personally and professionally involved with the rationality and x-risk mitigation communities since 2015, most notably working at CFAR from 2016 to 2021 as an instructor and curriculum developer. I’ve also done contract work for MIRI, Lightcone, BERI, the Atlas Fellowship, etc.
I’m the single person in the world who has done the most development work on the Double Crux technique, and I have explored other frameworks for epistemically resolving disagreements and bridging ontologies.
Although I’m no longer professionally focused on rationality training, I continue to invest in my personal practice of adaptive rationality, developing and training techniques for learning and absorbing key lessons faster than reality forces me to.
My personal website is elityre.com.
Eli Tyre
Badass. I wish you the best of skill.
Unfortunately, this policy action is not that.
In other cases, econ-brained thinking is harnessed to defend a position, but isn’t the main force behind that position. For example, the culture wars that are currently raging over immigration definitely feature clashes between economic and sociopolitical considerations. However, I suspect that the pro-immigration side is not fundamentally motivated by immigration’s purported economic benefits, which are better understood as fig leaves on a deeper-rooted globalist ideology. Similarly, even though much of the explicit debate about Brexit pitted economic against cultural considerations, the sheer vitriol that elites leveled against Brexiteers suggests that they were primarily motivated by sociopolitical considerations of their own.
Surely different people support positions for different reasons. Many people (even if only a small minority) support these specific policies, in good faith, on the basis of economic arguments.
Which, of course, is not a counterpoint to “econ-brained thinking is harnessed to defend a position, but isn’t the main force behind that position”. That seems obviously true.
But the basic stance of rounding off the “reasons why a majority of people support a position” to “the true motives of that position” seems fraught. The “pro-immigration side” is heterogeneous! It includes lots of different subgroups, which might have very different ideologies or internal mechanisms.
You might take a look at Wei Dai’s writing on metaphilosophy. He has a specific view that isn’t shared by everyone on this site. But a core part of his view is that “a powerful AI (or human-AI civilization) guided by wrong philosophical ideas would likely cause astronomical (or beyond astronomical) waste.”
Why do you say that?
I remember attending an academic cryptography conference when the Internet was just getting started (HTTP was just invented) and there were already hundreds of researchers there, with most top universities having at least a few.
That’s pretty interesting!
I wonder how much of the difference is due to the center of innovation that led to the internet being academia, whereas the center of innovation that’s leading to the singularity is mostly private companies.
Unlike my youthful expectations (upon reading Vernor Vinge), there are no university departments filled with super-geniuses charting a path for humanity to safely navigate the Singularity.
Well, there was FHI for a while, which wasn’t quite an academic department and which only had regular old geniuses, not super-geniuses.
And there are other offices in the world, outside of academia, that house a handful of geniuses attempting to chart a path for humanity to safely navigate the Singularity.
As a creepy prince might say to a fairy-tale princess
Or any creepy man, to any woman?
Or any creepy woman to any man, for that matter.
I think we should be careful about reasoning from “if X is true, the consequences are so much worse than not-X, so we should act as if X is true”, since there are many ideas that are in fact exponentially unlikely (eg Pascal’s wager, and analogous situations). The first line of attack should always be just trying to figure out what’s true on the object level.
But yeah, I think this line of argument is basically valid. This is indeed how I think people should think about the question of animal consciousness. I feel less strongly about the abortion question for various reasons, but I am much less sold on “pro-choice is obviously correct” than most liberal Americans.
I think it’s totally possible to be correctly confident about an issue like abortion even if ~half of people agree with you.
But, I also think your point basically stands: there’s an asymmetry in the costs of being wrong. (Don’t take that too far, though; a lot of harm is caused by forcing people to have children that they don’t want and aren’t prepared to handle.)
Seems falsified?
I thought that there were mechanisms for using the same particle beam to decelerate as to accelerate?
Something like “You put a mirror that can be deployed at the front of your probe. When you want to start slowing down, you aim the beam at the mirror, and it bounces off and hits the probe, now adding thrust away from the direction of motion.”
Isn’t decel just a difference of a factor of 2?
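(A toy sketch of the momentum bookkeeping, treating the beam as photons for simplicity; the numbers and framing here are my own, not from the thread. An elastically reflected beam transfers twice the momentum of an absorbed one, which is one place a factor of 2 can enter.)

```python
c = 3.0e8  # speed of light, m/s

def momentum_transfer(beam_energy_j: float, reflected: bool) -> float:
    """Momentum (kg*m/s) a sail gains from a photon beam of given energy.

    Absorbing the beam transfers p = E/c; reflecting it straight back
    reverses the beam's momentum, transferring 2*E/c instead.
    """
    p = beam_energy_j / c
    return 2 * p if reflected else p

beam_energy = 1e15  # J, an arbitrary illustrative figure
print(momentum_transfer(beam_energy, reflected=False))  # absorbed: ~3.3e6
print(momentum_transfer(beam_energy, reflected=True))   # reflected: ~6.7e6
```

The real staged-mirror scheme is messier (beam divergence over interstellar distances, the mirror’s own mass and recoil), so treat this as the zeroth-order budget only.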
I’m skeptical. My guess is that the cost of probes turns out to be negligible compared to the resources available (and possibly the cost of research will too; it remains unclear how fast an intelligence explosion shoots all the way up to technological maturity).
Checking with a BOTEC (back-of-the-envelope calculation; see the script after the numbers):
Probes will be nanotechnological, and so probably pretty tiny. The lower the mass the less energy it takes to accelerate them to near lightspeed. Let’s say each probe is the mass of a coke can. (This is probably a significant overestimate.)
Claude tells me that it takes 7.1e17 J to accelerate that mass to 99.9% of the speed of light, assuming unrealistically perfect efficiency. Add an extra 3 or so orders of magnitude for thermodynamic inefficiency, and we’re up to 7.1e20 J.
There are ~7 billion galaxies in the reachable universe.
Sending one probe to every galaxy would take ~5e30 J.
Claude also tells me that, in one day, Earth’s sun outputs about 3.3e31 J.
So, after building a Dyson swarm, we could send a probe to every galaxy in the reachable universe using about 15% of a single day’s solar output.
...which is, I guess, not actually negligible, such that the resource-allocation question isn’t literally overdetermined.
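Here’s a minimal script reproducing the numbers above (the coke-can mass, the 3-orders-of-magnitude inefficiency factor, and the 7-billion-galaxy count are all assumptions from the BOTEC, not settled facts):

```python
import math

C = 2.998e8         # speed of light, m/s
M = 0.37            # kg, roughly a full coke can (probably an overestimate)
V = 0.999 * C       # target cruise speed
INEFFICIENCY = 1e3  # ~3 orders of magnitude of thermodynamic losses
GALAXIES = 7e9      # galaxies in the reachable universe
SUN_DAY_J = 3.828e26 * 86_400  # solar luminosity (W) times seconds per day

gamma = 1 / math.sqrt(1 - (V / C) ** 2)
kinetic_j = (gamma - 1) * M * C**2      # ~7.1e17 J per probe, ideal case
per_probe_j = kinetic_j * INEFFICIENCY  # ~7.1e20 J with losses
total_j = per_probe_j * GALAXIES        # ~5e30 J for one probe per galaxy

print(f"per probe: {per_probe_j:.1e} J")
print(f"all galaxies: {total_j:.1e} J")
print(f"fraction of one day of solar output: {total_j / SUN_DAY_J:.2f}")  # ~0.15
```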
The furthest parts of the theoretically reachable universe are 16-18 billion years away, so a 100-year delay is worth it if you can increase the speed by just one 100-millionth of c.
Why is there a tradeoff? Why don’t you launch your early comparatively technologically-unsophisticated probes as soon as you can, and then, if you develop faster probes, also launch those if you calculate that they could catch up to the ones that you already launched?
It’s not like the resources spent on early probes trade off appreciably with technological development.
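(FWIW, the quoted break-even arithmetic itself does check out as an order-of-magnitude claim; here’s a rough sketch using the quote’s round numbers:)

```python
TRIP_YEARS = 17e9  # travel time to the far edge (quote: 16-18 billion years)
SPEED_BUMP = 1e-8  # fractional speed increase, "one 100-millionth of c"

# At v ~ c, trip time is T = D/v, so a small fractional speed increase
# dv/v shortens the trip by roughly T * (dv/v).
years_saved = TRIP_YEARS * SPEED_BUMP
print(years_saved)  # ~170 years saved, so a 100-year launch delay breaks even
```

The dispute above is with the premise that there’s a tradeoff at all, not with this arithmetic.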
Mostly, I just need to decide to spend a block of time writing, instead of doing other work. But aside from that, I have less need of co-writers and more need of audiences that I’m actively writing for and engaging with (who those audiences are will differ across the many things I have in the backlog).
Man, I don’t know if I’m confabulating it, but the part that Opus wrote really has the feel of LLM text to me, in a way that I don’t like, such that I would be sad if all your comments had that style.
I am criticizing OpenAI not just because of the terms of their contract, but because they previously said that they had the same redlines as Anthropic, and then, not two days later, signed a contract abandoning those redlines, while quite transparently lying about whether the redlines were protected.
That is bad behavior, and I’m glad they’re getting pushback about it. When you claim to stand for principles, you’re taking on additional social cost when you abandon those principles.
I wouldn’t care nearly as much if they had accepted the contract that they accepted, but had never made any pretense of standing for their supposed redlines. Which is what xAI (and possibly GDM?) have done.

Further, from other incidents, I believe Sam Altman to be dishonest. This is a very clear and legible instance of his dishonesty. I’m in favor of more people having the (IMO correct) understanding that Sam shouldn’t be trusted. As a matter of political expediency, promoting this incident is an opportunity to inform more people about how honest and trustworthy Sam actually is.
Finally, I think that because this has gotten a lot of media attention, it could turn out to be a leverage point for broader changes. If OpenAI decided to change its contract with the DoD, that might also put pressure on DeepMind, or might lead to changes in legislation that would close the loopholes that make this a problem. (That second one is a low-probability hope, to be clear.)
Like what?
Is this subtweeting something?