Since you marked as a crux the fragment “absent acceleration they are likely to die some time over the next 40ish years” I wanted to share two possibly relevant Metaculus questions. Both of these seem to suggest numbers longer than your estimates (and these are presumably inclusive of the potential impacts of AGI/TAI and ASI, so these don’t have the “absent acceleration” caveat).
OK, agreed that this depends on your views of whether cryonics will work in your lifetime, and of “baseline” AGI/ASI timelines absent your finger on the scale. As you noted, it also depends on the delta between p(doom while accelerating) and baseline p(doom).
I’m guessing there’s a decent number of people who think current (and near future) cryonics don’t work, and that ASI is further away than 3-7 years (to use your range). Certainly the world mostly isn’t behaving as if it believed ASI was 3-7 years away, which might be a total failure of people acting on their beliefs, or it may just reflect that their actual beliefs put ASI further out.
Simple math suggests that anybody who is selfish should be very supportive of acceleration towards ASI even for high values of p(doom).
Suppose somebody over the age of 50 thinks that p(doom) is on the order of 50%, and that they are totally selfish. It seems rational for them to support acceleration, since absent acceleration they are likely to die some time over the next 40ish years (since it’s improbable we’ll have life extension tech in time) but if we successfully accelerate to ASI, there’s a 1-p(doom) shot at an abundant and happy eternity.
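Here’s a back-of-the-envelope version of that simple math in expected years of life (every number below is a placeholder, not a claim):

```python
# Back-of-the-envelope sketch of the selfish-accelerationist argument (all numbers are placeholders).
p_doom = 0.5
years_left_baseline = 40        # rough remaining lifespan for a 50+ year old, absent acceleration
years_if_doom = 5               # say doom arrives within a few years of accelerating
years_if_aligned_asi = 10_000   # finite stand-in for "an abundant and happy eternity"

ev_baseline = years_left_baseline
ev_accelerate = p_doom * years_if_doom + (1 - p_doom) * years_if_aligned_asi

print(ev_baseline, ev_accelerate)  # 40 vs 5002.5: acceleration dominates for this selfish agent
```

Any stand-in for “eternity” that is large relative to 40 years makes acceleration come out ahead, even at high p(doom), which is the point.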
Possibly some form of this extends beyond total selfishness.
So, if your ideas have potential important upside, and no obvious large downside, please share them.
What would be some examples of obviously large downside? Something that comes to mind is anything that tips the current scales in a bad way, like some novel research result that directs researchers toward more rapid capabilities increase without a commensurate increase in alignment. Anything else?
Immorality has negative externalities which are diffuse, and hard to count, but quite possibly worse than its direct effects.
Take the example of Alice lying to Bob about something, to her benefit and his detriment. I will call the effects of the lie on Alice and Bob direct, and the effects on everybody else externalities. Concretely, the negative externalities here are that Bob is, on the margin, going to trust others in the future less for having been lied to by Alice than he would have if Alice had been truthful. So in all of Bob’s future interactions, his truthful counterparties will have to work extra hard to prove that they are truthful, and maybe in some cases there are potentially beneficial deals that simply won’t occur due to Bob’s suspicions and his trying to avoid being betrayed.
This extra work that Bob’s future counterparties have to put in, as well as the lost value from missed deals, add up to a meaningful cost. This may extend beyond Bob, since everyone else who finds out that Bob was lied to by Alice will update their priors in the same direction as Bob, creating second order costs. What’s more, since everyone now thinks their counterparties suspect them of lying (marginally more), the reputational cost of doing so drops (because they already feel like they’re considered to be partially liars, so the cost of confirming that is less than if they felt they were seen as totally truthful) and as a result everyone might actually be more likely to lie.
So there’s a cost of deteriorating social trust, of p*ssing in the pool of social commons.
One consequence that seems to flow from this, and which I personally find morally counter-intuitive, and don’t actually believe, but cannot logically dismiss, is that if you’re going to lie you have a moral obligation to not get found out. This way, the damage of your lie is at least limited to its direct effects.
[Question] Self-censoring on AI x-risk discussions?
Agreed that ultimately everything is reverse-engineered, because we don’t live in a vacuum. However, I feel like there’s a meaningful distinction between:
1. let me reverse engineer the principles that best describe our moral intuition, and let me allow parsimonious principles to make me think twice about the moral contradictions that our actual behavior often implies, and perhaps even allow my behavior to change as a result
2. let me concoct a set of rules and exceptions that will justify the particular outcome I want, which is often the one that best suits me

For example, consider the contrast between “we should always strive to treat others fairly” and “we should treat others fairly when they are more powerful than us, however if they are weaker let us then do to them whatever is in our best interest whether or not it is unfair, while at the same time paying lip service to fairness in hopes that we cajole those more powerful than us into treating us fairly”. I find the former a less corrupted piece of moral logic than the latter, even though the latter arguably describes actual behavior fairly well. The former compresses more neatly, which isn’t a coincidence.
There’s something of a [bias-variance tradeoff](https://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff) here. The smaller the moral model, the less expressive it can be (so the more nuance it misses), but the more helpful it will be on future, out-of-distribution questions.
The more complex the encoding of a system (e.g. of ethics) is, the more likely it is that it’s reverse-engineered in some way. Complexity is a marker of someone working backwards to encapsulate messy object-level judgment into principles. Conversely, a system that flows outward from principles to objects will be neatly packed in its meta-level form.
In linear algebra terms, as long as the space of principles has fewer dimensions than the space of objects, we expect principled systems / rules to have a low-rank representation, with a dimensionality approaching that of the space of principles and far below that of the space of objects.
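As a toy illustration of the low-rank claim (the setup and numbers below are made up): a system that scores situations only by first projecting them onto a small set of principles produces a judgment map whose rank is capped by the number of principles, whereas a system fit directly to object-level outcomes has no such cap.

```python
# Toy illustration (made-up setup): a "principled" system maps high-dimensional
# situations to judgments through a small principle bottleneck, so the map is low-rank.
import numpy as np

rng = np.random.default_rng(0)

n_object_dims = 100   # dimensionality of the "space of objects" (situations)
n_principles = 3      # dimensionality of the "space of principles"
n_judgments = 50      # how many distinct judgments / questions get scored

# Each principle is a direction in object space; judgments are combinations of principles.
object_to_principles = rng.normal(size=(n_object_dims, n_principles))
principles_to_judgments = rng.normal(size=(n_principles, n_judgments))

# The end-to-end map from situations to judgments factors through the principles,
# so its rank is capped by the number of principles.
principled_system = object_to_principles @ principles_to_judgments
print(np.linalg.matrix_rank(principled_system))  # 3

# A "reverse-engineered" system fit directly to object-level outcomes has no such cap.
ad_hoc_system = rng.normal(size=(n_object_dims, n_judgments))
print(np.linalg.matrix_rank(ad_hoc_system))      # 50
```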
As a corollary, perhaps we are justified in being more suspicious of complex systems over simple ones, since they come with a higher risk that the systems are “insincere”, in the sense that they were deliberately created with the purpose of justifying a particular outcome rather than being genuine and principled.
This rhymes with Occam’s razor, and also with some AI safety approaches which planned to explore whether dishonesty is more computationally costly than honesty.
Does this mean that meta-level systems are memetically superior, since their informational payloads are smaller? The success of Abrahamic religions (which mostly compress neatly into 10-12 commandments) might agree with this.
What’s the cost of keeping stuff around vs discarding it and buying it back again?
When you have some infrequently-used items, you have to decide between keeping them around (default, typically) or discarding them and buying them again later when you need them.
If you keep them around, you clearly lose use of some of your space. Suppose you keep these in your house / apartment. The cost of keeping them around is then proportional to the amount of either surface area or volume they take up. Volume is the appropriate measure to use especially if you have dedicated storage space (like closets) and the items permit packing / stacking. Otherwise, surface area is a more appropriate measure, since having some item on a table kind of prevents you from using the space above that table. The motivation for assigning cost like this is simple: you could (in theory) give up the items that take up a certain size, live in a house that is smaller by exactly that amount, and save on the rent differential.
The main levers are:
- cost (per sqft) of real estate in your area
- how expensive the item is
- how long before you expect to need the item again
- whether you count the space taken up in 2d or 3d

The only maybe non-obvious lever is the last one: whether you think 2d or 3d is the fair measure. 3d gives you a lot more space (since items are not cubic, and they typically rest on one of their long sides, so they take up a higher fraction of surface area than volume). In my experience it’s hard to stack too many things while still retaining access to them, so I weigh the 2d cost more.
There’s some nuance here like perhaps having an item laying around has higher cost than just the space it takes up because it contributes to an unpleasant sense of clutter. On the other hand, having the item “at the ready” is perhaps worth an immediacy premium on top of the alternative scenario of having to order and wait for it when the need arises. We are also ignoring that when you discard and rebuy, you end up with a brand new item, and potentially in some cases you can either gift or sell your old item, which yields some value to yourself and/or others. I think on net these nuances nudge in the direction of “discard and rebuy” vs what the math itself suggests.
I made a spreadsheet to do the math for some examples here; so far it seems like for some typical items I checked (such as a ball or balloon pump) you should sell and rebuy. For very expensive items that pack away easily (like a snowboard) you probably want to hang onto them.
The spreadsheet is [here](https://docs.google.com/spreadsheets/d/1oz7FcAKIlbCJJaBo8XAmr3BqSYd_uoNTlgCCSV4y4j0/edit?usp=sharing); feel free to edit it (I saved a copy).
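For anyone who prefers code to a spreadsheet, here’s a minimal sketch of the same comparison using the 2d (footprint) measure; every number and item below is made up for illustration:

```python
# Minimal sketch of the keep-vs-rebuy comparison (made-up example numbers).
# Cost of keeping: the rent attributable to the footprint the item occupies until
# you next need it. Cost of rebuying: the replacement price, minus whatever you
# recover by selling or gifting the old one.

def keep_cost(footprint_sqft, rent_per_sqft_month, months_until_needed):
    """Rent you're implicitly paying for the space the item takes up."""
    return footprint_sqft * rent_per_sqft_month * months_until_needed

def rebuy_cost(replacement_price, resale_value=0.0):
    """Net cost of discarding now and buying a new one when needed."""
    return replacement_price - resale_value

# (name, footprint sqft, replacement price, resale value, months until needed)
items = [
    ("balloon pump", 0.5, 15.0, 0.0, 24),
    ("snowboard",    1.0, 400.0, 150.0, 8),
]

for name, sqft, price, resale, months in items:
    keep = keep_cost(sqft, rent_per_sqft_month=5.0, months_until_needed=months)
    rebuy = rebuy_cost(price, resale)
    verdict = "discard and rebuy" if rebuy < keep else "keep it around"
    print(f"{name}: keep ~${keep:.0f}, rebuy ~${rebuy:.0f} -> {verdict}")
```

With these placeholder numbers the pump comes out as “discard and rebuy” and the snowboard as “keep”, matching the spreadsheet’s pattern.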
This raises the question of what it means to want to do something, and who exactly (or which cognitive system) is doing the wanting.
Of course I do want to keep watching YT, but I also recognize there’s a cost to it. So on some level, weighing the pros and cons, I (or at least an earlier version of me) sincerely do want to go to bed by 10:30pm. But, in the moment, the tradeoffs look different from how they appeared from further away, and I make (or, default into) a different decision.
An interesting hypothetical here is whether I’d stay up longer when play time starts at 11:30pm than when play time starts at, say, 10:15pm (if bedtime is 10:30pm). The wanting to play, and the temptation to ignore the cost, might be similar in both scenarios. But this sunk cost / binary outcome fallacy would suggest that I’ll (marginally) blow further past my deadline in the former situation than in the latter.
Things slow down when Ilya isn’t there to YOLO in the right direction in an otherwise very high-dimensional space.
I often mistakenly behave as if my payoff structure is binary instead of gradual. I think others do too, and this cuts across various areas.
For instance, I might wrap up my day and notice that it’s already 11:30pm, though I’d planned to go to sleep an hour earlier, by 10:30pm. My choice is, do I do a couple of me-things like watch that interesting YouTube video I’d marked as “watch later”, or do I just go to sleep ASAP? I often do the former and then predictably regret it the next day when I’m too tired to function well. I’ve reflected on what’s going on in my mind (with the ultimate goal of changing my behavior) and I think the simplest explanation is that I behave as if the payoff curve, in this case of length of sleep, is binary rather than gradual. Rational decision-making would prescribe that, especially once you’re getting less rest than you need, every additional hour of sleep is worth more rather than less. However, I suspect my instinctive thought process is something like “well, I’ve already missed my sleep target even if I go to sleep ASAP, so might as well watch a couple of videos and enjoy myself a little since my day tomorrow is already shot.”
This is pretty terrible! It’s the opposite of what I should be doing!
Maybe something like this is going on when poor people spend a substantial fraction of their income on the lottery (I’m already poor and losing an extra $20 won’t change that, but if I win I’ll stop being poor, so let me try) or when people who are out of shape choose not to exercise (I’m already pretty unhealthy and one 30-minute workout won’t change that, so why waste my time.) or when people who have a setback in their professional career have trouble picking themselves back up (my story is not going to be picture perfect anyway, so why bother.)
It would be good to have some kind of mental reframing to help me avoid this predictably regrettable behavior.
What if a major contributor to the weakness of LLMs’ planning abilities is that the kind of step-by-step description of what a planning task looks like is content that isn’t widely available in common text training datasets? It’s mostly something we do silently, or we record in non-public places.
Maybe whoever gets the license to train on Jira data is going to get to crack this first.
Right—successful private companies (like nearly all the hot AI labs) are staying private for far longer (indefinitely?) so this bet will not capture any of the value they create for themselves.
It might also be that AGI is broadly deflationary, in that it will mostly melt moats and, with them, corporate margins (in most cases, except maybe the ones of the first company to roll out AGI).
Daniel Gross’ [AGI Trades](https://dcgross.com/agitrades) (in particular the first question under “Markets”) comes to mind.
It just seems far from certain to me that this bet will benefit from the outcome it’s trying to hedge / capture, and given the possible implications here, I’d just urge whoever is considering putting this kind of bet on to get comfortable with that linkage (between real-world outcome and financial outcome) and not just take it for granted.
What gives you confidence that much value will accrue to the equity of the companies in those indices?
It seems like, in the past, technological revolutions mostly increase churn and are anti-incumbent in some way e.g. (this may be false in particular, but just to illustrate my argument with a concrete-sounding example) ORCL has over 150k employees whose jobs might get nuked if AGI can painlessly and securely transfer its clients to OSS instead of expensive enterprise solutions.
If I try to think about what’s the most incumbent-friendly environment, almost by definition it ought to be one where not much is changing, but you’re trying to capture value in the opposite scenario.
(sci-fi take?) If time travel and time loops are possible, would this not be the (general sketch of the) scenario under which it comes into existence:
1. a lab figures out some candidate particles that could be sent back in time, builds a detector for them, and starts scanning for them. suppose the particle has some binary state. if the particle is +1 (-1) the lab buys (shorts) stock futures and exits after 5 minutes
2. the trading strategy will turn out to be very accurate and the profits from the trading strategy will be utilized to fund the research required to build the time machine
3. at some arbitrary point in the future, eventually, the r&d and engineering efforts are successful. once the device is built, the lab starts sending information back in time to tip itself to future moves in stock futures (the very same particles it originally received). this closes the time loop and guarantees temporal consistency
Reasons why this might not happen:
- time doesn’t work like this, or time travel / loops aren’t possible
- civilization doesn’t survive long enough to build the device
- the lab can’t commit to using its newfound riches to build the device, breaking the logic and preventing the whole thing from working in the first place
Thanks for these references! I’m a big fan, but for some reason your writing sits in the silly under-exploited part of my 2-by-2 box of “how much I enjoy reading this” and “how much of this do I actually read”, so I’d missed all of your posts on this topic! I caught up with some of it, and it’s far further along than my thinking. On a basic level, it matches my intuitive model of a sparse-ish network of causality which generates a much much denser network of correlation on top of it. I too would have guessed that the error rate on “good” studies would be lower!
Does belief quantization explain (some amount of) polarization?
Suppose people generally do Bayesian updating on beliefs. It seems plausible that most people (unless trained to do otherwise) subconsciously quantize their beliefs—let’s say, for the sake of argument, by rounding to the nearest 1%. In other words, if someone’s posterior on a statement is 75.2%, it will be rounded to 75%.
Consider questions that exhibit group-level polarization (e.g. on climate change, or the morality of abortion, or whatnot) and imagine that there is a series of “facts” that are floating around that someone uninformed doesn’t know about.
If one is exposed to facts in a randomly chosen order, then one will arrive at some reasonable posterior after all facts have been processed—in fact we can use this as a computational definition of what it would be rational to conclude.
However, suppose that you are exposed to the facts that support the in-group position first (e.g. when coming of age in your own tribe) and the ones that contradict it later (e.g. when you leave the nest.) If your in-group is chronologically your first source of intel, this is plausible. In this case, if you update on sufficiently many supportive facts of the in-group stance, and you quantize, you’ll end up with a 100% belief on the in-group stance (or, conversely, a 0% belief on the out-group stance), after which point you will basically be unmoved by any contradictory facts you may later be exposed to (since you’re locked into full and unshakeable conviction by quantization).
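Here’s a toy simulation of that lock-in (the likelihood ratios and fact counts are made up): the agent sees equal and opposite evidence, but in the “in-group facts first” order the 1% rounding parks it at 100%, and the contradictory facts then bounce off.

```python
# Toy simulation of belief quantization (all parameters made up).
# The agent does Bayesian updating but rounds its belief to the nearest 1% after
# every update. It sees 10 supporting "facts" (likelihood ratio 3) and 10
# contradicting ones (likelihood ratio 1/3), so it "should" end back at its 50% prior.

def update(p, likelihood_ratio, quantize=True):
    """One Bayesian update on probability p, optionally rounded to the nearest 1%."""
    if p in (0.0, 1.0):
        return p  # locked in: no evidence moves a 0% or 100% belief
    odds = p / (1 - p) * likelihood_ratio
    p_new = odds / (1 + odds)
    return round(p_new, 2) if quantize else p_new

def run(facts, quantize=True, prior=0.5):
    p = prior
    for lr in facts:
        p = update(p, lr, quantize)
    return p

supporting = [3.0] * 10
contradicting = [1 / 3] * 10

interleaved = [lr for pair in zip(supporting, contradicting) for lr in pair]  # stand-in for a random order
in_group_first = supporting + contradicting

print(run(interleaved))                      # ends at 0.5, as it should
print(run(in_group_first))                   # hits 1.00 after ~5 facts and never recovers
print(run(in_group_first, quantize=False))   # without rounding: back to ~0.5
```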
One way to resist this is to refuse to ever be fully convinced of anything. However, this comes at a cost, since it’s cognitively expensive to hold onto very small numbers, and to intuitively update them well.
Causality is rare! The usual statement that “correlation does not imply causation” puts them, I think, on deceptively equal footing. It’s really more like correlation is almost always not causation absent something strong like an RCT or a robust study set-up.
Over the past few years I’d gradually become increasingly skeptical of claims of causality just by updating on empirical observations, but it just struck me that there’s a good first principles reason for this.
For each true cause of some outcome we care to influence, there are many other “measurables” that correlate to the true cause but, by default, have no impact on our outcome of interest. Many of these measures will (weakly) correlate to the outcome though, via their correlation to the true cause. So there’s a one-to-many relationship between the true cause and the non-causal correlates. Therefore, if all you know is that something correlates with a particular outcome, you should have a strong prior against that correlation being causal.
My thinking previously was along the lines of p-hacking: if there are many things you can test, some of them will cross a given significance threshold by chance alone. But I’m claiming something more specific than that: any true cause is bound to be correlated to a bunch of stuff, which will therefore probably correlate with our outcome of interest (though more weakly, and not guaranteed since correlation is not necessarily transitive).
The obvious idea of requiring a plausible hypothesis for the causation helps somewhat here, since it rules out some of the non-causal correlates. But it may still leave many of them untouched, especially the more creative our hypothesis formation process is! Another (sensible and obvious, that maybe doesn’t even require agreement with the above) heuristic is to distrust small (magnitude) effects, since the true cause is likely to be more strongly correlated with the outcome of interest than any particular correlate of the true cause.
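A toy simulation of the one-to-many picture above (all numbers made up): one true cause drives the outcome, yet dozens of measurables that merely correlate with the cause also show up as correlates of the outcome, just more weakly.

```python
# Toy simulation (made-up numbers): one true cause, many non-causal correlates.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

true_cause = rng.normal(size=n)
outcome = true_cause + rng.normal(size=n)  # only the true cause affects the outcome

# 50 other measurables that merely correlate with the true cause and have
# no effect of their own on the outcome.
correlates = [0.7 * true_cause + rng.normal(size=n) for _ in range(50)]

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

print(f"true cause vs outcome: r = {corr(true_cause, outcome):.2f}")
rs = [corr(c, outcome) for c in correlates]
print(f"non-causal correlates: mean r = {np.mean(rs):.2f}, "
      f"all 50 positive: {all(r > 0 for r in rs)}")
# If all you observe is "X correlates with the outcome", X is far more likely
# to be one of the 50 non-causal correlates than the 1 true cause.
```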
You’re right, this is not a morality-specific phenomenon. I think there’s a general formulation of this that just has to do with signaling, though I haven’t fully worked out the idea yet.
For example, if in a given interaction it’s important for your interlocutor to believe that you’re a human and not a bot, and you have something to lose if they are skeptical of your humanity, then there are lots of negative externalities that come from the Internet being filled with indistinguishable-from-human chatbots, irrespective of its morality.