There’s an Astral Codex Ten meetup this Saturday, so I’m moving the LW meetup to next week. I encourage you all to come to the ACX meetup instead!
Looks like an upgrade. One problem is that it’s not 100% circumvention-proof, especially on a phone, but for some use cases that’s fine. For my phone I use Scalefusion, by the way.
SelfControl, for abstaining from visiting websites without expending your precious willpower (if you’re a Mac user). Sorry for not following the rules, but I think discovering a whole category of software (like terminal multiplexers) is often higher value than discovering the best tool in that category.
If you’re struggling with confidence in the LW crowd, I recommend aiming to do one thing well instead of trying too hard to prevent criticism. You will inevitably get criticism, and it’s better to embrace that.
I am admittedly working off the definition given by its critics, including Wilber’s definition, which includes, in his words:

- constructivism (the world is not just a perception but an interpretation)
- contextualism (all truths are context-dependent, and contexts are boundless)
- integral-aperspectivism (no context is finally privileged, so an integral view should include multiple perspectives; pluralism; multi-culturalism)

Do you think this definition is missing the point? If yes, where do you think I should be looking for a better one?
I’m curious where this complaint comes from. Is their stuff often misrepresented?

Changed it in any case. Is this what you meant?
My answer to your first mischievous question tends to be: “if you (also) identify as selfish, this will make you more predictable and thereby more trustworthy”. I don’t give two shits about one guy’s contribution to the economy; my selfish incentives are purely local.

Besides, it’s always better to cooperate with a rich guy than to exploit a poor one. But of course there’s no need to convince you to be something you already are.
But could you explain why you think wireheading is bad?

Besides, I don’t think the comparison is completely justified. Enlightenment isn’t just claimed to increase happiness, but also intelligence and morality.
I don’t think “if you do this you’ll be super happy (while still alive)” is comparable to “if you do this you’ll be super happy (after you die)”. The former is testable, and I have close friends who have already fully verified it for themselves. I’ve also noticed in myself a superlinear relation between meditation time and the likelihood of being in a state of bliss, and I have no reason to think this relation won’t hold when I meditate even more.

The Buddha also urged people to go and verify his claims themselves. It seems that the mystic (good) part of Buddhism is much more prominent than the organised-religion (bad) part, compared to Christianity.
Cause X candidate: Buddhists claim that they can put brains in a global maximum of happiness, called enlightenment. Assuming that EA aims to maximize happiness plain and simple, this claim should be taken seriously. It currently takes decades for most people to reach an enlightened state. If some sort of medical intervention can reduce this to mere months, this might drive mass adoption and create a huge amount of utility.
Relevant: the non-adversarial principle of AI alignment
Whereas if you’re good at your work and you think that your job is important, there’s an intervening layer or three—I’m doing X because it unblocks Y, and that will lead to Z, and Z is good for the world in ways I care about, and also it earns me $ and I can spend $ on stuff...
Yes, initially there might be a few layers, but there’s also the experience of being really good at what you do, being in flow, at which point Y and Z just kind of dissolve into X, making X feel valuable in itself, like jumping on a trampoline. It seems like this friend wants to be in this state by default.

If X inherits its value from Z through an intellectual link, an S2-level association, the motivation to do X just isn’t as strong as when the value is directly hardcoded into X itself on the S1 level. “Why was I filling in these forms again? Something about solving global coordination problems? Whatever, it’s just my Duty as a Good Citizen.” or “Whatever, I can do it faster than Greg”.

But there is a problem: the more the value is a property of X itself, the harder it will be to detach from it when X suddenly stops being instrumental to Z. Here we find ourselves in the world of dogma and essentialism and lost purposes.

So we’re looking at a fundamental dilemma: do I maintain the most accurate model by always deriving my motivation from first principles, or do I declare the daily activities of my job to be intrinsically valuable?

In practice I think we tend to go back and forth between these extremes. Why do we need breaks, anyway? Maybe it’s to zoom out a bit and rederive our utility function.
A thought experiment: would you A) murder 100 babies or B) murder 100 babies? You have to choose!
Sidestepping the politics here: I’ve personally found that avoiding (super)stimuli for a week or so, either by not using any electronic devices or by going on a meditation retreat, tends to be extremely effective in increasing my ability to regulate my emotions. Semi-permanently.

I have no substitute for it; it’s my panacea against cognitive dissonance and mental issues of any form. This makes me wonder: why aren’t we focusing more on this from an applied rationality point of view?
This seems to be a fully general counterargument against any kind of advice. As in: “Don’t say ‘do X’, because I might want to do not-X, which will give me cognitive dissonance, which is bad.”

You seem to essentially be affirming the Zen idea that any kind of “do X” implies that X is better than not-X, i.e. a dualistic thought pattern, which is the precondition for suffering. But besides that idea I don’t really see how this post adds anything. Not to mention that identity tends to already be an instance of “X is better than not X”. Paul Graham is saying “not (X is better than not X) is better than (X is better than not X)”, and you just seem to be saying “not (not (X is better than not X) is better than (X is better than not X)) is better than (not (X is better than not X) is better than (X is better than not X))”.

At that point you’re running in circles, and the only way out is to say mu and put your attention on something else.
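To see the regress more plainly, here’s a minimal sketch in shorthand I’m introducing myself (neither Graham’s nor the post’s notation): write P for “X is better than not X” and A ≻ B for “holding A is better than holding B”.

```latex
% Abbreviations (mine, for illustration only):
%   P         := "X is better than not X"
%   A \succ B := "holding A is better than holding B"

% Paul Graham's advice ("keep your identity small"):
\neg P \succ P

% The reply to Graham, one level up:
\neg(\neg P \succ P) \succ (\neg P \succ P)

% Each reply has the same "something \succ something" shape it objects to,
% so negating again only adds another level; the regress never terminates.
```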
Since this is the first Google result and seems out of date, how do we get the RSS link nowadays?