Natália (Natália Mendonça)
Yeah, it would be interesting to investigate how that would work. I think the insights would serve to set a lower bound to mood, the same as what religion does for many people.
I’m fairly certain that, the vast majority of the time, negative emotions are ego-dystonic.
They’re not something actively sought out of a desire for meaning; they’re something essentially inflicted upon the sufferer by parts of their mind that they can’t control.
I think acceptance of negative emotion is often driven by being in that position of helplessness, often by a desire to maintain a good self-image and avoid entering the negativity loop, and not by having control over whether it happens and seeking it out because it brings meaning.
Who are you, and how is it that we don’t know each other yet?
I noticed recently that the tradeoff you have to make to be more dependable in that way is to be less open. Less open to new projects, new information, new people. You have to be less malleable, and more definite. It is largely about being able to knowingly cut the majority of the world from your attention, to ignore what isn’t important. I don’t think that’s a bad thing—there’s much more joy in being focused and determined than in shifting your attention and commitment around. But it is something that comes more naturally to people once they figure out what seems to them to be the right path, once they figure out a task or project that deserves their undivided attention and commitment, once they don’t feel like they’re getting stuck in a local maximum in an avoidable way.
Thank you for the correction. Thinking about it, I think that is true even of humans, in a certain sense. I would guess that the ability to hold several goal-nodes in one’s mind would scale with g and/or working memory capacity. Someone who is very smart and has tolerance for ambiguity would be able to aim for a very complex goal while simultaneously maintaining a great performance in the day-to-day mundane tasks they need to accomplish which might have seemingly no resemblance to the original goal at all.
It seems to be a skill that requires “buckets” https://www.lesswrong.com/posts/EEv9JeuY5xfuDDSgF/flinching-away-from-truth-is-often-about-protecting-the
So, both in humans and computers, I would guess this is an ability that requires certain cognitive or computational resources. So I maintain my original claim granted that those resources are controlled for.
I agree. I used the modifier “sufficiently” in order to avoid making claims about where a hard line between complex goals and simple goals would lie. Should have made that clearer.
I think saying that people hate prophets is like saying that people hate ads. They hate the bad ones, because those are the ones they consciously notice, whereas the best ads/prophets probably exert their influence without people even thinking of associating them with those categories.
Besides, if “low rank in the pecking order but high decision-making power” applies to people who exert substantial influence with their ideas but don’t have a correspondingly impressive amount of wealth or shiny credentials, it’s not difficult to think of examples who are very far from hatable.
We don’t know that all possible worlds are actual. This could be the only one.
Indeed. This entire post assumes all possible worlds are actual and reasons from there; I didn’t mean to argue for their existence.
How were you first informed of the existence of numbers, colors, space, time, or people? It wasn’t by non-contradiction.
Correct. But we are quite bad at actually reasoning from the law of non-contradiction; we often tend to act as if we believed contradictory things (as is shown by how frequently we make math errors). I conjecture that that is the reason why we need observation to figure things out (assuming all possible worlds exist), although I am not completely sure.
Thanks for pointing this out! I fixed it.
I don’t see how that contradicts his claim. Having the data required to figure out X is really not the same as knowing X.
Agreed — I feel like it makes more sense to be proud of changing your mind when that entails acquiring a model that makes better predictions while being of complexity similar to, or lower than, that of the model you used to have, rather than merely making your model more complex.
The third question is
Does X agree that there is at least one concern such that we have not yet solved it and we should not build superintelligent AGI until we do solve it?
Note the word “superintelligent.” This question would not resolve as “never” if the consensus specified in the question is reached after AGI is built (but before superintelligent AGI is built). Rohin Shah notes something similar in his comment:
even if we build human-level reasoning before a majority is reached, the question could still resolve positively after that, since human-level reasoning != AI researchers are out of a job
Unrelatedly, you should probably label your comment “aside.” [edit: I don’t endorse this remark anymore.]
if things get crazy you want your capital to grow rapidly.
Why (if by “crazy” you mean “world output increasing rapidly”)? Isn’t investing to try to have much more money in case world output is very high somewhat like buying insurance to pay the cost of a taxi to the lottery office in case you win the lottery? Your net worth is positively correlated with world GDP, so worlds in which world GDP is higher are worlds in which you have more money, and thus worlds in which money has a lower marginal utility to you. People do tend to value being richer than others in addition to merely being rich, but perhaps not enough to generate the numbers you need to make those investments be the obviously best choice.
(h/t to Avraham Eisenberg for this point)
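The argument above can be made concrete with a toy calculation. This is my own illustrative sketch, not from the original exchange: it assumes log utility of money and invented wealth figures for the two worlds, and shows why a marginal dollar delivered in the high-GDP world is worth less to you at the margin.

```python
# Toy model (all numbers hypothetical): under log utility u(w) = log(w),
# the marginal utility of an extra dollar is u'(w) = 1/w, so a dollar
# that arrives in a world where your (GDP-correlated) wealth is already
# high buys less utility than one in a normal world.

def marginal_utility(wealth):
    """Marginal utility of money under log utility: u'(w) = 1/w."""
    return 1.0 / wealth

# Hypothetical net worths in two possible worlds.
wealth_normal_world = 1e6  # world output grows normally
wealth_boom_world = 1e8    # world output explodes, and your wealth with it

mu_normal = marginal_utility(wealth_normal_world)
mu_boom = marginal_utility(wealth_boom_world)

# Each marginal dollar in the boom world is worth roughly 100x less.
print(mu_normal / mu_boom)
```

This is only the lottery-taxi point in miniature; it ignores the possibility (raised in the reply below the h/t) that your wealth and world GDP decouple during a takeoff.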
Thanks for pointing this out; you’re right that your net worth wouldn’t necessarily be correlated with world GDP in many plausible scenarios of how takeoff could happen. I suppose the viability of things like taxation and redistribution of wealth by governments as well as trade involving humans during and after a takeoff could be the main determinants of whether the correlation between the two would be as strong as it is today or closer to zero. I wonder what I should expect the correlation to be.
ETA: After all, governments don’t redistribute human wealth to either horses or chimpanzees, and humans don’t engage in trade with them.
I still don’t understand what you mean by “causally-disconnected” here. In physics, it’s anything in your future light cone (under some mild technical assumptions).
I think you mean to say “causally-connected,” not “causally-disconnected”?
I’m referring to regions outside of our future light cone.
A causally disconnected part would be caring now about something already beyond the cosmological horizon
Yes, that is what I’m referring to.
Things outside of your future light cone (that is, things you cannot physically affect) can “subjunctively depend” on your decisions. If beings outside of your future light cone simulate your decision-making process (and base their own decisions on yours), you can affect things that happen there. It can be helpful to take into account those effects when you’re determining your decision-making process, and to act as if you were all of your copies at once.
Those were some of my takeaways from reading about functional decision theory (described in the post I linked above) and updateless decision theory.
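The "act as if you were all of your copies at once" idea can be sketched in a few lines. This is my own toy illustration of the point, not anything from the decision-theory posts referenced: two causally disconnected agents run the identical decision procedure, so a single choice of policy determines both actions, and evaluating policies jointly favors cooperation in a one-shot prisoner's dilemma between the copies.

```python
# Toy sketch (hypothetical payoffs): two agents outside each other's
# light cones instantiate the *same* decision procedure. No physical
# signal passes between them, yet one policy choice fixes both actions.

def decide(policy):
    # Both copies run this identical procedure, so whatever it returns
    # is what both of them do.
    return policy

# One-shot prisoner's dilemma payoffs (row player's payoff).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# Evaluating policies as if you control every copy at once:
for policy in ("C", "D"):
    my_action = decide(policy)
    their_action = decide(policy)  # same procedure, same output
    print(policy, PAYOFF[(my_action, their_action)])
# Cooperating as a policy yields 3 per copy; defecting yields only 1.
```

A causal reasoner who treats the other copy's action as fixed would defect; the subjunctive-dependence view notes that both actions are outputs of one function, so only the diagonal outcomes are attainable.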
[F]ew members of [LessWrong] seem to be aware of the current state of the anti-aging field, and how close we are to developing effective anti-aging therapies. As a result, there is a much greater (and in my opinion, irrational) overemphasis on the Plan B of cryonics for life extension, rather than Plan A of solving aging. Both are important, but the latter is under-emphasised despite being a potentially more feasible strategy for life extension given the potentially high probability that cryonics will not work.
I think there is a good reason for there being more focus on cryonics than on solving aging on LessWrong. Cryonics is a service anyone with the means can purchase right now, whereas there is barely anything anyone can do to slow their aging (modulo getting young blood transfusions and perhaps taking a few drugs, neither of which works that well).
If you are a billionaire, or very knowledgeable about biology, you might be able to contribute somewhat to anti-aging research — but only a very small fraction of the population is either of those things, whereas pretty much anyone that can get life insurance in the US can get cryopreserved.
I think panpsychism is outrageously false, and profoundly misguided as an approach to the hard problem.
What do you think of Brian Tomasik’s flavor of panpsychism, which he says is compatible with (and, indeed, follows from) type-A materialism? As he puts it,
It’s unsurprising that a type-A physicalist should attribute nonzero consciousness to all systems. After all, “consciousness” is a concept—a “cluster in thingspace”—and all points in thingspace are less than infinitely far away from the centroid of the “consciousness” cluster. By a similar argument, we might say that any system displays nonzero similarity to any concept (except maybe for strictly partitioned concepts that map onto the universe’s fundamental ontology, like the difference between matter vs. antimatter). Panpsychism on consciousness is just one particular example of that principle.
(Brian Tomasik’s view superficially sounds a lot like what Ben Weinstein-Raun is criticizing in his second paragraph, so I thought I’d add here the comment I wrote in response to Ben’s post:
> Panhousism isn’t exactly wrong, but it’s not actually very enlightening. It doesn’t explain how the houseyness of a tree is increased when you rearrange the tree to be a log cabin. In fact it might naively want to deny that the total houseyness is increased.
I really don’t see how that is what panhousism would say, at least what I have in mind when I think of panhousism (which is analogous to what I have in mind when I think of (type-A materialist[1]) panpsychism). If all that panhousism means is that (1) “house” is a cluster in thingspace and (2) nothing is infinitely far away from the centroid of the “house” cluster, then it seems very obvious to me that the distance of a tree from the “house” centroid decreases if you rearrange the tree into a log cabin. As an example, focus on the “suitability to protect humans from rain” dimension in thingspace. It’s very clear to me that turning a tree into a log cabin moves it closer to the “house” cluster in that dimension. And the same principle applies to all other dimensions. So I don’t see your point here.
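The cluster-in-thingspace reading can be made literal with a small sketch. All the feature dimensions and numbers below are invented for illustration; the point is only that (a) rearranging a tree into a log cabin moves it toward the "house" centroid, and (b) nothing sits infinitely far from that centroid.

```python
import math

# Toy "thingspace" (hypothetical features and values): objects are
# points in feature space, and "houseness" is closeness to the centroid
# of the "house" cluster.

# Feature dimensions: [protects from rain, enclosed interior, human-made]
HOUSE_CENTROID = [1.0, 1.0, 1.0]

def distance_to_house(features):
    """Euclidean distance from a point to the 'house' centroid."""
    return math.sqrt(
        sum((f - c) ** 2 for f, c in zip(features, HOUSE_CENTROID))
    )

tree = [0.2, 0.0, 0.0]
log_cabin = [0.9, 1.0, 1.0]

# Rearranging the tree into a log cabin moves it toward the centroid,
# i.e. total "houseyness" increases rather than staying fixed.
assert distance_to_house(log_cabin) < distance_to_house(tree)

# Every point is a finite distance from the centroid, so everything has
# nonzero (if tiny) "houseness" under this reading of panhousism.
```

Under this framing, panhousism assigns the tree some houseness all along, but has no trouble saying the log cabin has more of it.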
I’m not sure if I should quote Ben’s reply to me, since his post is not public, but he pretty much said that his original post was not addressing type-A physicalist panpsychism, although he finds this view unuseful for other reasons.
)
I get bodily fatigue when I don’t take it for over five days, I haven’t ventured farther than that. No particular reason to.