The Covid you get will be a reinfection (or at least, every time after the first will be), plus you’ll have been vaccinated. So mostly it will be asymptomatic. And over the long run, how many infections would you be preventing per booster? I have a hard time thinking it’s more than one every three or four boosters.
Every 12 months, if you can update to ‘this year’s’ like the flu and get it in time, might plausibly prevent e.g. 0.5 Covid infections in expectation at equilibrium and be worth it, but every 5-6 months is NGMI.
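To make the arithmetic explicit (all numbers here are illustrative assumptions, not estimates I’m committed to): if baseline risk is $r$ infections per year and a well-timed booster cuts that risk by a fraction $e$ over an effective window of $T$ years, then

$$\mathbb{E}[\text{infections prevented per booster}] \approx r \cdot e \cdot T = 1 \times 0.5 \times 1 = 0.5$$

using $r = 1$, $e = 0.5$, $T = 1$. Boosting every 5-6 months roughly doubles the number of shots without doubling the risk being protected against, so per-booster prevention falls toward 0.25 and the cost-benefit gets correspondingly worse.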
We have made less progress than I expected on that front, to be sure, and far less than Biden expected or promised, or than most people expected or felt they were promised.
Having further parsed the comments at ACX I am now at MU. Questions do seem like they are asked.
Not sure what you mean by progress in context of Biden’s approval rating. Biden’s probably accomplished less of his goals than I’d expected, but not too surprisingly less.
Was definitely supposed to be Schelling, misspelled in original.
Same as Richard, I think this was graded correctly. The question is whether you can do it now, not whether you can do it indefinitely into the future, and right now I presume that you can due to Omicron (or as of 1/1). Your information does make me think my sale was a lot less bad, but I do think I still lost.
Thank you, that is clarifying, together with your note to Scott on ACX about wanting it to ‘lack a motivational system.’ I want to see if I have this right before I give another shot at answering your actual question.
So as I understand your question now, what you’re asking is, will the first AI that can do (ideally pivotal) task X be of Type A (general, planning, motivational, agentic, models world, intelligent, etc) or Type B (basic, pattern matching, narrow, dumb, domain specific, constrained, boxed, etc).
I almost accidentally labeled A/B as G/N there, and I’m not sure if that’s a fair labeling system, so I want to see how close the mapping is (e.g. to narrow AI and general AI as usually understood). If not, is there a key difference?
Consider what would happen if you had to solve your list of problems and didn’t inherently care about human values. To what extent would you do ‘unfriendly’ things via consequentialism? How hard would you need to be constrained to stop doing that? Would it matter if you could also do far trickier things by using consequentialism and general power-seeking actions?
The reason, as I understand it, that a chess-playing AI does things the way we want it to is that we constrain the search space it can use, which we can do because we can fully describe that space, rather than giving it any means of using other approaches, and for now that box is robust.
But if someone gave you or me the same task, we wouldn’t learn chess, we would buy a copy of Stockfish, or if it was a harder task (e.g. be better than AlphaZero) we’d go acquire resources using consequentialism. And it’s reasonable to think that if we gave a fully generic but powerful future AI the task of being the best at chess, at some point it’s going to figure out that the way to do that is to acquire resources via consequentialism, and potentially to kill or destroy all its potential opponents. Winner.
Same with the poem or the hypothesis: I’m not going to be so foolish as to attack the problem directly unless it’s already pretty easy for me. And in order to get an AI to write a poem that good, I find it plausible that the path to doing that is less monkeys on a typewriter and more resource acquisition, so I can understand the world well enough to do that. As a programmer of an AI, right now, the path is exactly that: it’s ‘build an AI that gets me enough more funding to potentially get something good enough to write that kind of poem,’ etc.
Another approach, and more directly a response to your question here, is to ask, which is easier for you/the-AI: Solving the problem head-on using only known-safe tactics and existing resources, or seeking power via consequentialism?
Yes, at some amount of endowment, I already have enough resources relative to the problem at hand and see a path to a solution, so I don’t bother looking elsewhere and just solve it, same as a human. But mostly no for anything really worth doing, which is the issue?
I went through and reviewed all of my posts that didn’t previously have a review, and also the one un-reviewed post I didn’t write that I was actively sad no one had tackled yet. Due to lack of sufficient motivation, I didn’t offer additional input on anything that already had the necessary one review. Noting this in case it is useful.
So the obvious take here is that this is a long post full of Paths Forward and basically none of those paths forward were taken, either by myself or others.
Two years later, many if not most of those paths do still seem like good ideas for how to proceed, and I continue to owe the world Moloch’s Army in particular. And I still really want to write The Journey of the Sensitive One to see what would happen. And so on. When the whole Covid thing is behind us sufficiently and I have time to breathe I hope to tackle some of this.
But the bottom line for now is that this doesn’t make much sense absent the rest of the sequence, and the parts that stand on their own didn’t truly ‘pay off,’ so it probably shouldn’t be included in a collection like this unless one makes the decision to include most or all of the sequence (e.g. maybe you try to skip the Competition posts, and if I do a book version I might change them a lot, but otherwise, yeah, lot of words here).
This post was important to my own thinking because it solidified the concept that there exists the thing Obvious Nonsense, that Very Serious People would be saying such Obvious Nonsense, that the government and mainstream media would take it seriously and plan and talk on such a basis, and that someone like me could usefully point out that this was happening, because when we say Obvious Nonsense oh boy are they putting the Obvious in Nonsense. It’s strange to look back and think about how nervous I was then about making this kind of call, even when it was this, well, obvious. Making that first correct call makes a difference.
But in terms of being part of an overall ‘best of’ or ‘most important’ collection for a community as a whole, it would only count if you think it had the same effect on you/others, and made it clear how nonsensical all the Very Serious People could be, and that you had to think for yourself. If all it did for others was point out that the Obvious Nonsense was obvious nonsense in this particular case, there’s not much point.
Focusing on the Alpha (here ‘English Strain’) parts only and looking back, I’m happy with my reasoning and conclusions here. While the 70% prediction did not come to pass and in hindsight my estimate of 70% was overconfident, the reasons it didn’t happen were that some of the inputs in my projection were wrong, in ways I reasoned out at the time would (if they were wrong in these ways) prevent the projection from coming true. And at the time, people weren’t making the leap to ‘Alpha will take over, and might be a huge issue in some worlds depending on its edge in spreading and how fast we vaccinate’ at all.
We also saw with Omicron how, when the variables turn out differently, we do see the thing I was pointing towards, and how people are slow to recognize that it might happen or is going to happen. I do think this had the virtue of advancing the understanding of what was plausibly going to happen. If it overshot a bit in terms of how likely its core predictions were to come true, that’s something to improve going forward, but this was very much a ‘man in the arena’ situation, and much better than the ‘not be confident so say little or nothing’ approach that I shared with most others in Jan/Feb 2020.
To what extent this justifies inclusion in a timeless list is up for grabs, but I think it’s important that the next time we notice something like this, we speak up fast and loud (while also striving for good calibration on the chance it happens, and its magnitude).
This is a long and good post with a title and early framing advertising a shorter and better post that does not fully exist, but would be great if it did.
The actual post here is something more like “CFAR and the Quest to Change Core Beliefs While Staying Sane.”
The basic problem is that people by default have belief systems that allow them to operate normally in everyday life, and that protect them against weird beliefs and absurd actions, especially ones that would extract a lot of resources in ways that don’t clearly pay off. And they similarly protect those belief systems in order to protect that ability to operate in everyday life, and to protect their social relationships, and their ability to be happy and get out of bed and care about their friends and so on.
A bunch of these defenses are anti-epistemic, or can function that way in many contexts, and stand in the way of big changes in life (change jobs, relationships, religions, friend groups, goals, etc etc).
The hard problem that, in this telling, both CFAR and the sequences are trying to solve is to disable such systems enough to allow good things, without also allowing bad things, or to find ways to cope with the subsequent bad things slash disruptions. When you free people to be shaken out of their default systems, they tend to go to various extremes that are unhealthy for them, like optimizing narrowly for one goal instead of many, or having trouble spending resources (including time) on themselves at all, or having trouble being in the moment and living life, And That’s Terrible because it doesn’t actually lead to better larger outcomes, in addition to making those people worse off themselves.
These are good things that need to be discussed more, but the title and introduction promise something I find even more interesting.
In that taxonomy, the key difference is that there are games one can play, things one can be optimizing for or responding to, incentives one can create, that lead to building more effective tools for modeling and understanding reality, and then changing it. One can cultivate an aesthetic sense that these are good, healthy, virtuous, wholesome, etc. Interacting with these systems is ‘good for you,’ and more people being in such modes leads to more good things, broadly construed (if I were doing a post I’d avoid using such loaded language, it’s not useful, but it’s faster as a way to gesture at the thing).
Then there are reality-masking puzzles, which are where instead of creating better maps of the territory and enabling us to master the world, we instead learn to obscure our maps of the world, obscure the maps of others, fool ourselves first to then fool others, and otherwise learn how to do symbolic actions and social manipulations to get advantage or cause actions.
This is related to simulacra (level 1 puzzles versus level 2-4 puzzles), and it is related to moral mazes (if you start a small business buying and selling things you are reality-revealing, whereas if you are navigating corporate politics you are reality-masking, etc). It matters a lot to know how to tell which is which, and how to chart paths through problem spaces that shift problems of one type into the other (e.g. finding ways to do reality-revealing marketing/sales/public-relations/politics/testing/teaching/etc to the extent possible). In particular, the question is: Are you causing optimization towards learning and figuring out how reality functions, or towards faking that you understand or agree or are smart/agreeable/conscientious/willing-to-falsify? Are you optimizing for making things explicit, or for making things implicit? Etc.
So I’d love to see a post by Anna, or otherwise, that is entitled “Reality-Revealing and Reality-Masking Puzzles, No Really This Time” that takes this out of the CFAR/AI context entirely. But this still has a lot going on that’s good and seems well over the threshold for inclusion in such a collection.
Vaccination will be net positive for a while, but the majority of the benefit is in the past.
Link didn’t work.
China can downplay things for a week or two but this fails quickly in the face of exponentials. If they hit 70k and pretend they didn’t, they then hit 200k and then 400k and then can’t pretend.
Sorry I didn’t reply earlier, been busy. I would be happy to have a call at some point, you can PM me contact info that is best.
I do think we have disagreements beyond a political agenda, but it is always possible communication fell short somehow.
If you don’t have a political agenda I would say your communications seem highly misleading, in the sense that they seem to clearly indicate one.
https://thezvi.wordpress.com/2021/04/27/scott-alexander-2021-predictions-buy-sell-hold/ is the canonical version. Surprised the differences were this big. The struggle of knowing when to update all versions is real, especially now that there are three of them.
Then beyond that your decisions seem fine.
And no need to apologize for doing the exercise, it’s good to check things, as long as it’s clear what’s being done.
When/if I do predictions for 2022 I’ll see what I can do about also including explicit fairs (and ideally, where I’d call BS on a market, and where I wouldn’t).
OK, so I am obviously biased but I’ll look to see if I think this is fair.
First of all, I didn’t look at market prices for a lot of the things (where I did, I mentioned it). If I had done this more I would have done considerably better. Instead, I was saying whether I would trade on Scott’s markets based on my current knowledge level. Does that count as predicting that number when comparing to the market? That’s up to you to decide.
You could of course just say ‘should have done the research.’ You could also say ‘I’m comparing your ability to predict to what a market would do, on arbitrary questions, so tough that you only had Scott’s prediction’ or something. Again, not my call?
Second of all, the procedure for deciding what I meant seems to not match the way I was making predictions. In general, it would be fair to say that ‘buy to X%’ is actually saying ‘it’s at least X%,’ so my ‘fair’ must be higher than that, and the reverse for selling.
But it’s pretty bad to be doing this now, in hindsight; if we want to compute Brier scores we need to specify those numbers at the time.
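For reference, a minimal statement of the Brier score over $N$ binary questions, which is why pinning down a single probability per question matters:

$$\text{Brier} = \frac{1}{N} \sum_{i=1}^{N} (p_i - o_i)^2, \qquad o_i \in \{0, 1\}$$

The score is only well-defined once one probability $p_i$ is fixed for each question $i$, so reconstructing ‘what I meant’ in hindsight changes the score itself.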
So for e.g. Biden’s approval, 80% was a dumb prediction and I should have sold it down somewhat. But Starlink I would strongly push back. Basic summary:
Biden approval: Giving me 80% outright is a tad unfair, but I take the L on this. Dumb.
Court packing: Meh.
Yang: My fault for looking at the prediction market, ironically. Should have been lower.
Newsom: Actually did make money selling this btw.
Tokyo: Not convinced I was right to buy this but I got away with it. Probably went too high.
Russia/Ukraine: I am confused why my ‘maybe buy to 10’ got interpreted as 15 here, whereas my ‘sell down to X’ in other places got interpreted as X and compared to a lower market. I think being lower than market was right here, although 15 was likely a better prediction than 10.
Netanyahu: Getting punished for this one feels wrong—I basically said sell while I’m above market, that’s not exactly a statement that I should be higher than market—except that my explanation was that I was going to be slightly higher than market by default. I could argue 25 is fair. Can’t really judge but I feel like I’d slightly buy this again at the new market fair if we ran another Everett branch?
Prospera: This is a ‘trust Scott because I know nothing and there’s no market’ rather than anything else. In hindsight, people who get excited enough to write posts about X are a little too excited about X so I should have been moderately lower and sold a bit. Whoops. But note that if there had been a market I would have mostly defaulted to it.
GME: Yeah, I still dunno what to think of this, no further comments at this time.
Bitcoin: I noticed there was an easy arbitrage here, I think market was being dumb. Couldn’t hedge the way this was scored, but notice that what I did was “Sell January 1 BTC 100k calls and buy spot BTC” and that trade seems like it does fine depending on the ratio, was definitely good. I think you gotta give me the market price or lower.
Ether: My trade here is “Sell ETH January 1 5k calls and buy ETH at 2300” and that’s… a very very good trade and damn I shoulda done that (see the payoff sketch at the end of this comment). Feels weird to penalize me on that trade. But then again, if you’d told me ETH calls were trading at 11%, yeah, probably would have bought some cause that’s too low, so maybe I lose anyway? Again, it’s weird.
Dow: Yeah, I think this is exactly right, I get market prices here. I’m not challenging the EMH on this one.
Unemployment: Definitely taking the L. Bigger L in terms of my fair at the time but would have come down a bit if I’d seen market. I am surprised.
Starship: I literally said ‘no idea’ and didn’t trade, which is another way of saying market.
Vaccination: 62% of the US population is ‘fully vaccinated,’ which is lower than 66%, and the Metaculus market currently predicts January 22, 2022. I think 77% was clearly too high, and also it’s not clear it happened.
Vitamin D: See my edit on 4/27, I did not predict 50%, that was me adjusting to a false understanding of Scott’s prediction and therefore not selling this as far down as all that. E.g.
EDITED VERSION 4/27: I updated a lot on Scott being at 30% for this (e.g. 70% for this being recognized) in the original, and moved it to 50%. With Scott at 70% instead, we’re much closer, but I think I still want to nudge a little higher and buy this to 75%, instead of moving 30% to 50%. This is a sign of how much I’m reluctant to move a reasonable person’s odds in this type of exercise; if you’d asked me before seeing Scott’s number, I’d have said recognition is very unlikely, and put it at something like 85%-90%, and my true probability is still likely 80% or so.
I think when I say my ‘true probability is 80% for not happening’ you need to give me a 20% for happening.
17. AstraZeneca: Probably was actually slightly lower having only seen Scott, but seeing the market would have undone that. 20 seems fine.
The big adjustment is that I took a big knock for the 50% on Q16, and that’s just a misread, should be 20%.
I’ll let Simon decide what to do with the rest. I also find it super weird to be punished vs. market for when I said “this is the wrong price, do an arbitrage” in the correct direction, and made money even vs. market prices doing the trade, but hey.
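To illustrate the structure of the BTC/ETH trades above, here is a minimal sketch of the covered-call payoff (buy spot, sell a call against it). All the specific numbers below (entry price, strike, premium) are hypothetical illustrations, not the actual prices at the time:

```python
def covered_call_pnl(spot_entry: float, strike: float,
                     premium: float, spot_expiry: float) -> float:
    """P&L per unit for buying spot and selling one call against it.

    The long spot leg gains (spot_expiry - spot_entry); the short call
    keeps the premium but pays out max(spot_expiry - strike, 0).
    """
    spot_pnl = spot_expiry - spot_entry
    call_pnl = premium - max(spot_expiry - strike, 0.0)
    return spot_pnl + call_pnl

# Hypothetical ETH version: buy at 2300, sell a 5000-strike call for an
# assumed premium of 250. Upside is capped at (5000 - 2300 + 250) = 2950
# per unit; below the strike, the trade is long spot plus the premium.
for final_price in (1500, 2300, 5000, 8000):
    print(final_price, covered_call_pnl(2300, 5000, 250, final_price))
```

The point of the sketch: the short call only costs you relative to holding spot in worlds where the price blows well past the strike, which is exactly the scenario the low call price was implicitly discounting.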
I had been trying to format tables on LW for a while, gave up, and started using images.
It’s relative to each group’s vulnerability to Delta; vaccines are still very useful, but less useful than before.
I presume that many, likely most, will do that for a while.