Just this guy, you know?
Dagon
Hmm. I don’t doubt that targeted voice-mimicking scams exist (or will soon). I don’t think memorable, reused passwords are likely to work well enough to foil them. Between forgetting (on the sender or receiver end), claimed ignorance (“Mom, I’m in jail and really need money, and I’m freaking out! No, I don’t remember what we said the password would be”), and general social hurdles (“that’s a weird thing to want”), I don’t think it’ll catch on.
Instead, I’d look to context-dependent auth (looking for more confidence when the ask is scammer-adjacent), challenge-response (remember our summer in Fiji?), 2FA (let me call the court to provide the bail), or just much more context (5 minutes of casual conversation with a friend or relative is likely hard to really fake, even if the voice is close).
But really, I recommend security mindset and understanding of authorization levels, even if authentication isn’t the main worry. Most friends, even close ones, shouldn’t be allowed to ask you to mail $500 in gift cards to a random address, even if they prove they are really themselves.
In deep meditation people become disconnected from reality
Only metaphorically, not really disconnected. In truth, in deep meditation, conscious attention is not focused on physical perceptions, but the mind is still contained in, and part of, the same reality.
This may be the primary crux of my disagreement with the post. People are part of reality, not just connected to it. Dualism is false, there is no non-physical part of being. The thing that has experiences, thoughts, and qualia is a bounded segment of the universe, not a thing separate or separable from it.
Is your mind causally disconnected from the actual universe? That’s the only way I can understand the merging of minds that share some similarities (but are absolutely not identical across universes that aren’t themselves identical). Your forgetting may make two possible minds superficially the same, but they’re simply not identical.
I don’t know why you think path-based configuration of brain state is the wrong criterion here. It may not be “identity” for all purposes (there may be purposes for which it doesn’t suffice, or is too restrictive), but it’s probably good for this case.
I expect what the right call is to be very different from person to person and, for some people, from situation to situation.
Definitely. And the balance changes as one ages as well. For me, there are some kinds of work where it’s very hard to get into the zone, and the cost of an interruption is very high. However, I just get less effective over long sessions, and this has gotten much worse in the last few decades. So the point of indifference between “I may not be able to recover this mind-state tomorrow” and “I may not be that useful tonight, and may not be good for ANYTHING tomorrow” has shifted.
I would recommend trying it at least a few times each year, in both directions. Don’t ever make one or the other the only option for yourself—it’s always a choice.
If you have the memories of every single human up to that point, then you don’t know which of them you are.
This depends on the mechanism of attaining all these memories. In that world, it COULD be that you still know which memories are privileged, or at least which ones include meeting God and being in a position to be asked the question.
I mean, I’m with you fundamentally: it’s not obvious that ANYTHING is truly objective. Other people can report experiences, but that’s mediated by your perceptions as well. In most cases, one can avoid the confusion by specifying WHAT experiences you predict will happen to WHICH observer.
My recommended way to resolve (aka disambiguate) definitional questions is “use more words”. Common understandings can be short, but unusual contexts require more signals to communicate.
I actually upvoted, but mostly because it was a hook for comedy, because it’s so common a trope (the surprise value of taking something literally). If it weren’t for that, I’d probably have just passed, rather than downvoting, but I find it pretty low-value overall.
Some mix of “obvious parts are obvious, non-obvious parts are some mix of pretentious and suspect.” I’d actually enjoy a (somewhat) deeper exploration of your agreement or disagreement with the Wittgenstein framing of this phrase, and the value of invoking cultural tropes. Personally, this isn’t one I’m confident enough to use, but there are other hyperbolic ideas I use for emphasis or humor, and I generally agree that communication is multimodal and contextual, much more than objective semantic content.
Where do you even put the 10^100 objects you’re iterating through? What made you pick 10^100 as the scale of difficulty? You’ve ignored parallelism and the sheer number of processing pipelines available to simultaneously handle things, but even those buy only a dozen orders of magnitude, not 100. Exponents go up fast.
So, to answer your title, “no, I cannot”. Fortunately, I CAN abstract and model that many objects, if they’re similar in most of the ways that matter to me. The Earth, for instance, has about 10^50 atoms (note: that’s not half the size of your example, it’s 1/10^50 the size), and I can make a fair number of predictions about it. And there’s a LOT of its behavior I can’t predict.
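(To make “a dozen orders of magnitude” concrete, here’s a back-of-envelope sketch. The per-processor throughput and processor count are my own illustrative assumptions, not numbers from your post.)

```python
import math

# Back-of-envelope: even absurdly generous parallelism doesn't touch 10^100.
ITEMS = 10**100                 # the scale proposed in the post
OPS_PER_SECOND = 10**9          # assumed: ~1 billion iterations/sec per processor
PROCESSORS = 10**12             # assumed: a trillion processors working in parallel
SECONDS_PER_YEAR = 3.15e7

throughput = OPS_PER_SECOND * PROCESSORS        # 10^21 iterations/sec combined
years = ITEMS / throughput / SECONDS_PER_YEAR   # duration of one full pass

print(f"combined throughput: 10^{math.log10(throughput):.0f} iterations/sec")
print(f"one full pass: ~10^{math.log10(years):.1f} years")
```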
[epistemic status: just what I’ve read in popular-ish press, no actual knowledge nor expertise]
Two main mechanisms that I know of:
- Some cancers are caused (or enabled, or activated, or something) by viruses, and there’s been immense progress in tailoring vaccines for specific viruses.
- Some cancers seem to be susceptible to targeted immune response (tailored antibodies). Vaccines for these cancers enable one’s body to reduce or eliminate spread of the cancer.
Note that everything is relative and marginal (“compared to what, for what increment?”). I don’t think “favor” is the right word for surplus from trade, as it goes in both directions, and is unmeasurable. If you buy a car for $66k, the dealer makes $11k profit, but also has effort and employment costs, so that’s not net. And you’re getting more than $66k of value in owning the car (or you wouldn’t have bought it; you’re not intending to do a favor, just making a trade that benefits you and happens to benefit them). So they’re doing you a favor as much as you’re doing them one.
Which is to say that the “favor” framing isn’t very helpful, except in motivational terms: you may purposefully take a worse trade than you otherwise could, in order to benefit some specific person (or even a group, if you’re weirdly altruistic enough). But most economic analysis assumes this is a very small part of trade and work choices.
The key insight in figuring out work and purchase decisions is that most things have different values to different people. A given hour of effort in an endeavor you’re relatively skilled at (“work”) is worth some amount to you, and some amount to an employer. It’s worth more to the employer than to you, and your pay for that hour will fall between those two values. For reasons of simplicity, measurement difficulty, and preference for stability, it’s usually traded in bundles: an agreement to work 40+ hours per week for multiple weeks. That doesn’t change the underlying difference in valuation as the main transactional motivation.
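(A minimal sketch of the two-sided-surplus point, reusing the car numbers from above. The buyer’s valuation, the dealer’s all-in cost, and the labor figures are made-up placeholders, since the text treats those values as unmeasurable.)

```python
def surplus(buyer_value, seller_cost, price):
    """Both sides gain whenever seller_cost < price < buyer_value."""
    return buyer_value - price, price - seller_cost

# Car: $66k price, ~$11k dealer margin (so all-in cost ~$55k); assume the buyer
# values owning the car at $75k -- a hypothetical number, chosen for illustration.
buyer_gain, dealer_gain = surplus(buyer_value=75_000, seller_cost=55_000, price=66_000)
print(buyer_gain, dealer_gain)     # 9000 11000 -- the "favor" runs both ways

# Labor: an hour you value at $30 (alternatives/leisure) and an employer values
# at $80; any wage between those leaves both parties better off.
employer_gain, worker_gain = surplus(buyer_value=80, seller_cost=30, price=50)
print(employer_gain, worker_gain)  # 30 20
```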
You probably need to be a bit more explicit in tying your title to your text. I’d guess you’re just pointing out that these labels (“materialist” and “idealist”) are both ridiculous when taken to the extreme, and that all sane people use different models for different decisions. Oh, and that all cognition is about models and abstractions, which are always wrong (but often useful).
If I’m wrong in that, please use more words :)
As to your questions about the moon, I don’t think “observable” has ever meant only and exactly “directly viewable by the person doing the writing”. It means “inferable from observations and experiences that are causally linked in simple/justifiable ways”.
It only makes sense to two-box if you believe that your decision is causally isolated from history in every way that Omega can discern.
Right. That’s why CDT is broken. I suspect from the “disagree” score that people didn’t realize that I do, in fact, assert that causality is upstream of agent decisions (including Omega, for that matter) and that “free will” is an illusion.
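(For concreteness, a sketch of the expected-value arithmetic, using the conventional $1M/$1k Newcomb payoffs rather than anything from this thread: two-boxing only comes out ahead if Omega’s prediction carries essentially no information about your choice.)

```python
# Conventional Newcomb payoffs: the opaque box holds $1,000,000 iff Omega
# predicted one-boxing; the transparent box always holds $1,000.

def expected_value(one_box: bool, accuracy: float) -> float:
    """EV of a fixed strategy, given how often Omega predicts it correctly."""
    if one_box:
        return accuracy * 1_000_000                        # paid only when predicted correctly
    return accuracy * 1_000 + (1 - accuracy) * 1_001_000   # usually caught, so usually just $1k

for acc in (0.5, 0.9, 0.99):
    print(acc, expected_value(True, acc), expected_value(False, acc))
# One-boxing wins for any accuracy above ~0.5005, i.e. whenever your decision
# is not causally isolated from whatever Omega observed.
```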
For me, these topics seem extremely contextual and variable with the situation and specifics of the tradeoff in the moment. For many of them, I do somewhat frequently explore consciously what it might feel like (and for cheap ones, try out) to make a different tradeoff, but those experiments don’t generalize well.
I suspect that for the impactful ones (heavily repeated or large), your first two bullet points don’t apply—feedback is delayed from the decision, and if harmful, it will be significant.
Still, it’s VERY GOOD to be reminded that these decisions are mostly made by type-1 thinking, out of habit or instinct (aka deep/early learning) that deserves reconsideration from time to time.
If you’re giving one number, that IS your all-inclusive probability. You can’t predict the direction in which new evidence will change your probability (per https://www.lesswrong.com/tag/conservation-of-expected-evidence), but you CAN predict that there will be evidence, with the probability-weighted updates in each direction balancing out to zero.
An example: flipping a fair coin twice. Before any flips, you give 0.25 to each of HH, HT, TH, and TT. But you strongly expect to get evidence (observing the flips) that will first change two of them to 0.5 and two to 0, then a second update that will change one of the 0.5s to 1 and the other to 0.
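(A quick numeric check of that coin example, using only the numbers above: the expected posterior for HH equals the prior at every stage, even though each individual update is large.)

```python
prior = 0.25   # P(HH) before any flips

# Flip 1: heads with probability 0.5 (P(HH) -> 0.5), tails otherwise (P(HH) -> 0).
expected_after_flip_1 = 0.5 * 0.5 + 0.5 * 0.0            # = 0.25

# Flip 2: if HH is still possible, P(HH) goes to 1 or 0, each with probability 0.5.
expected_after_flip_2 = 0.5 * (0.5 * 1.0 + 0.5 * 0.0)    # = 0.25

print(prior, expected_after_flip_1, expected_after_flip_2)  # all equal
```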
Likewise with p(doom) before 2035: you strongly believe your probability will be 1 or 0 in 2036. You currently believe 6%. You may be able to identify intermediate updates, and specify a balance of probability * update that currently sums to zero but will resolve to specific values as the evidence is obtained.
I don’t know any shorthand for that—it’s implied by the probability given. If you want to specify your distribution of probable future probability assignments, you can certainly do so, as long as the mean remains 6%. “There’s a 25% chance I’ll update to 15% and a 75% chance of updating to 3% over the next 5 years” is a consistent prediction.
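(That consistency condition is trivial to check: the probability-weighted mean of the anticipated future assignments has to equal today’s 6%. Numbers are from the sentence above.)

```python
# (chance of making that update, p(doom) you'd then hold)
future_assignments = [(0.25, 0.15), (0.75, 0.03)]

mean = sum(chance * p for chance, p in future_assignments)
print(mean)   # 0.06 -- matches the current 6%, so the prediction is consistent
```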
Yes! No! What does “richer” actually mean to you? For that matter, what does “we” mean to you (since the existing set of humans is changing hour to hour as people are born, come of age, and die, and even in a given set there’s an extremely wide variance in what they have and in what’s considered rich).
If GDP is your measure of a nation’s richness, then it’s tautological that increasing GDP makes the nation richer. The weaker claim that GDP (often) correlates with (but doesn’t necessarily cause) well-being, in some averages and aggregates, is more defensible, but that makes GDP unsuitable for answering your question.
I think my intuition is that GDP is the wrong tool for measuring how “rich” or “overall satisfied” people are, and a simple sum or average is probably the wrong aggregation function. So I fall back on more personal and individual measures of “well-being”. For most people I know, and as far as I can tell for the majority of neurotypical people, this comes down to lack of worry about the near- and medium-term future, access to pleasurable experiences, and social acceptance among accessible sub-groups (family, friends, neighbors, online communities small enough to care about, etc.).
For that kind of “general current human wants”, a usable and cheap shared-but-excludable VR space seems to improve things for a lot of people, regardless of what happens to GDP. In fact, if consumption of difficult-to-manufacture-and-deliver luxuries gets partially replaced by consumption of patterns of bits, that likely reduces GDP while increasing satisfaction.
There will always be needs for non-virtual goods and experiences—it’s not currently possible to virtualize food’s nutrition OR pleasure, and this is true for many things. Which means a mixed economy for a long long time. I don’t think anyone can tell you whether this makes those things cheaper or more expensive, relative to an hour spent working online or in the real world.
Thanks for the conversation and exploration! I have to admit that this doesn’t match my observations and understanding of power and negotiation in the human agents I’ve been able to study, and I can’t see why one would expect non-humans, even (perhaps especially) rational ones, to commit to alliances in this manner.
I can’t tell if you’re describing what you hope will happen, or what you think automatically happens, or what you want readers to strive for, but I’m not convinced. This will likely be my last comment for a while; feel free to rebut or respond, and I’ll read and consider it, but I likely won’t post.
These are probably useful categories in many cases, but I really don’t like the labels. Garbage is mildly annoying, as it implies that there’s no useful signal, not just difficult-to-identify signal. It also puts the attribute on the wrong thing: it’s not garbage data, it’s data that’s useful for purposes other than the one at hand. “Verbose” or “unfiltered” data, or just “irrelevant” data, might be better labels.
Blessed and cursed are much worse as descriptors. In most cases there’s nobody doing the blessing or cursing, and it focuses the mind on the perception/sanctity of the data, not the use of it. “How do I bless this data” is a question that shows a misunderstanding of what is needed. I’d call this “useful” or “relevant” data, and “misleading” or “wrongly-applied” data.
To repeat, though, the categories are useful—actively thinking about what you know, and what you could know, about data in a dataset, and how you could extract value for understanding the system, is a VERY important skill and habit.
I’ve seen links to that video before (even before your previous post today). Is there a text or short argument that justifies “Non-naive cooperation is provably optimal between rational decision makers” ALONG WITH “All or any humans are rational enough for this to apply”?
I’m not sure who the “we” is in your thesis. If something requires full agreement and goodwill, it cannot happen, as there will always be bad actors and incompatibly-aligned agents.
I tend to read most of the high-profile contrarians with a charitable (or perhaps condescending) presumption that they’re exaggerating for effect. They may say something in a forceful tone and imply that it’s completely obvious and irrefutable, but that’s rhetoric rather than truth.
In fact, if they’re saying “the mainstream and common belief should move some amount toward this idea”, I tend to agree with a lot of it (not all—there’s a large streak of “contrarian success on some topics causes very strong pressure toward more contrarianism” involved).