In such a case, you might get many of the benefits without the covid risks by driving very close to the ER, then waiting there without going in (and risking infection) unless worse symptoms develop, while still being able to act very fast if they do.
1) Even if it counts as a DSA, I claim that it is not very interesting in the context of AI. DSAs of something already almost as large as the world are commonplace. For instance, in the extreme, the world minus any particular person could take over the world if they wanted to. The concern with AI is that an initially tiny entity might take over the world.
2) My important point is rather that your ’30 year’ number is specific to the starting size of the thing, and not just a general number for getting a DSA. In particular, it does not apply to smaller things.
3) Agree that income doesn’t equal taking over, though in a modern world where much is accomplished via purchasing, it is closer. Not clear to me that AI companies do better as a fraction of the world in terms of military power than they do in terms of spending.
The time it takes to get a DSA by growing bigger depends on how big you are to begin with. If I understand, you take your 30 years from considering the largest countries, which are not far from being the size of the world, and then use it when talking about AI projects that are much smaller (e.g. a billion dollars a year suggests about 1/100,000 of the world). If you start from a situation of an AI project being, say, three doublings from taking over the world, then most of the question of how it came to have a DSA seems to be the question of how it grew the other seventeen doublings. (Perhaps you are thinking of an initially large country growing fast via AI? Do we then have to imagine that all of the country’s resources are going into AI?)
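The doubling arithmetic here can be sketched quickly (a minimal illustration; the world-economy denominator implied by the 1/100,000 figure is taken from the comment, and the function name is just for this sketch):

```python
import math

def doublings_needed(start_fraction: float, target_fraction: float = 1.0) -> float:
    """Doublings required to grow from start_fraction of the world
    to target_fraction of it, assuming growth by repeated doubling."""
    return math.log2(target_fraction / start_fraction)

# An AI project that is ~1/100,000 of the world needs roughly
# log2(100,000) ~= 16.6, i.e. about seventeen doublings, to reach
# world scale -- which is where the "seventeen doublings" figure
# above comes from.
print(round(doublings_needed(1 / 100_000)))
```

So most of the growth story for a small project lies in those many early doublings, not the last few.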
This sounds great to me, and I think I would be likely to sign up for it if I could, but I haven’t thought about it for more than a few minutes, am particularly unsure about the implications for culture, and am maybe too enthusiastic in general for things being ‘well organized’.
Oh yeah, I think I get something similar when my sleep schedule gets very out of whack, or for some reason when I moved into my new house in January, though it went back to normal with time. (Potentially relevant features there: bedroom didn’t seem very separated from common areas, at first was sleeping on a pile of yoga mats instead of a bed, didn’t get out much.)
I think random objects might work in a similar way. e.g. if talking in a restaurant, you grab the ketchup bottle and the salt to represent your point. I’ve only experimented with this once, with ultimately quite an elaborate set of condiments, tableware and fries involved. It seemed to make things more memorable and followable, but I wasn’t much inclined to do it more for some reason. Possibly at that scale it was a lot of effort beyond the conversation.
Things I see around me sometimes get involved in my thoughts in a way that seems related. For instance, if I’m thinking about the interactions of two orgs while I’m near some trees, two of the trees will come to represent the two orgs, and my thoughts about how they should interact will echo ways that the trees are interacting, without me intending this.
No, never heard of it, that I know of.
I’m pretty unsure how much variation in experience there is—‘not much’ seems plausible to me, but why do you find it so probable?
I also thought that at first, and wanted to focus on why people join groups that are already large. But yeah, lack of very small groups to join would entirely explain that. The way that leaving a group signals not liking the conversation seems like a big factor from my perspective, but I’d guess I’m unusually bothered by that.
Another random friction:
If you just sit alone, you don’t get to choose the second person who joins you. I think a thing people often do rather than sitting alone is wander alone, and grab someone else also wandering, or have plausible deniability that they might be actually walking somewhere, if they want to avoid being grabbed. This means both parties get some choice.
Aw, thanks. However I claim that this was a party with very high interesting people density, and that the most obvious difference between me and others was that I ever sat alone.
I share something like this experience (food desirability varies a lot based on unknown factors and something is desirable for maybe a week and then not desirable for months) but haven’t checked carefully that it is about nutrient levels in particular. If you have, I’d be curious to hear more about how.
(My main alternative hypothesis regarding my own experience is that it is basically imaginary, so you might just have a better sense than me of which things are imaginary.)
A page number or something for the ‘more seasoned’ link might be useful. The document is very long and doesn’t appear to contain ‘season-’.
The ‘blander’ link doesn’t look like it supports the claim much, though I am only looking at the abstract. It says that ‘in many instances’ there have been reductions in crop flavor, but even this appears to be background that the author is assuming, rather than a claim that the paper is about. If the rest of the paper does contain more evidence on this, could you quote it or something, since the paper is expensive to see?
I am somewhat hesitant to share simple intuition pumps about important topics, in case those intuition pumps are misleading.
This sounds wrong to me. Do you expect considering such things freely to be misleading on net? I expect some intuition pumps to be misleading, but for considering all of the intuitions that we can find about a situation to be better than avoiding them.
Thanks for your thoughts!
I don’t quite follow you on the intelligence explosion issue. For instance, why does a strong argument against the intelligence explosion hypothesis need to show that a feedback loop is unlikely? Couldn’t we believe that it is likely, but not likely to be very rapid for a while? For instance, there is probably a feedback loop in intelligence already, where humans with better thoughts and equipment are effectively smarter, and can then devise better thoughts and equipment. But this has been true for a while, and is a fairly slow process (at least for now, relative to our ability to deal with things).
My example for high status/small was an esteemed teacher unexpectedly dropping in to see their student perform, entering silently and at the last minute, then standing quietly at the back of the room by the door.
I also think they are probably wrong, but this kind of argument is a substantial part of why. So I want to see if they can be rescued from it, since that would affect their probability of being right from my perspective.
Do you think there are more compelling arguments that they are wrong, such that we need not consider ones like this? (Also just curious)
>Katja: do people infer that taste and wealth go together?
My weak guess is yes, but not sure.
I don’t follow why you think this dynamic exists because wealth and taste are correlated. I think the dynamic I am describing is independent of that, and caused by it being very hard to find a signal of, say, taste that you cannot buy with other resources at least somewhat. If in fact taste were anticorrelated with wealth in terms of underlying characteristics, a wealthy person could still buy other people’s tasteful guidance, for instance.
Scott’s understanding of the survey is correct. They were asked about four occupations (with three probability-by-year, or year-reaching-probability numbers for each), then for an occupation that they thought would be fully automated especially late, and the timing of that, then all occupations. (In general, survey details can be found at https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/)
“It’s not enough to know about the Way and how to walk it; you need gnosis of walking.”
Could I have a less metaphorical example of what people need gnosis of for rationality? I’m imagining you are thinking of e.g. what it is like to carry out changing your mind in a real situation, or what it looks like to fit knowing why you believe things into your usual sequences of mental motions, but I’m not sure.