So enlightenment is defragmentation, just like we do with hard drives?
That makes a fair bit of sense. And what are your thoughts on work days? I get my work for my job done, but advice on improving productivity on chores and future planning would be appreciated. Also, good point on pica!
Very interesting dichotomy! Definitely seems worth trying. I’m confused about the reading/screen time/video games distinction though. Why would reading seem appealing but being in front of a screen not? Watching TV is essentially identical to reading, right? You’re taking in a preset story either way. Admittedly, you can read faster than TV characters can talk, so maybe that makes reading more rewarding?
Also, while playing more video games while recovering and fewer while resting makes sense (they’re an easy activity while low on energy, and thus will take up much of a recovery day, but less of a rest day), “just following my gut” can still lead to plenty of gaming. Does this mean that I should still play some on a rest day, just less? That I almost never have enough energy to rest instead of recover? That I’m too into gaming and this is skewing my gut such that a good rest day rule would be “follow your gut, except playing fewer/no games today”?
First off, you probably want to figure out whether your nihilism is due to philosophy or depression. Would you normally enjoy and value things, but the idea of a finite life gets in the way? Or would you have difficulty seeing a point to things even if you were suddenly granted immortality and the heat death of the universe were averted?
Either way, it’s difficult to give a definitive solution, as different things work for different people. That said, if the problem seems to be philosophy, it might be worth noting that the satisfaction found in a good moment isn’t a function of anything that comes after it. If you enjoy something, or you help someone you love, or you do anything else that seems valuable to you, the fact of that moment is unchangeable. If the stars die in heaven, that cannot change the fact that you enjoyed something. Another possible solution would be to simply not think about it. I know that sounds horribly dismissive, but it’s not meant to be. In my own life there have been philosophical (and in my case religious) issues that I never managed to think my way out of… but when I stopped focusing on the problem, it went away. I managed this only after getting a job that let my brain say, “okay, worry about God later, we need to get this task done first!” If you think it would work for you, finding an activity that demands attention might help (provided you feel your brain will let you shift your attention; if not, this might just be overly stressful).
If the problem seems to be depression, adrafinil and/or modafinil are extremely helpful for some people. Conventional treatments exist too, of course (therapy and/or antidepressants); I don’t know anyone who has benefited from therapy (at least not that they’ve told me), but one of my friends had a night-and-day improvement with an antidepressant (sadly I don’t remember which one; if you like, I can check with her). Another aspect of overcoming depression is having friends in the moment and a plan for the future: not a plan you feel you should follow, but one you actively want to. I don’t know your circumstances, but insofar as you can prioritize socialization and work for the future, that might help.
As for the actual question of self-improvement, people vary wildly. An old friend of mine found huge improvements in her life due to scheduling; I do markedly better without it. The best advice I can offer (and this very well might not help; drop it if it seems useless or harmful) is three things:
Don’t do what you think you should do; do what you actually want to (if there isn’t anything you want, maybe don’t force yourself to find something too quickly either). People find motivation in pursuing goals they actually find worthwhile, but chasing a goal that sounds good yet doesn’t actually excite you is a recipe for burnout.
Make actionable plans: if there’s something you want to do, try to break it down into steps that are small enough, familiar enough, or straightforward enough that you can execute the plan without feeling out of your depth. Personally, at least, I find there’s a striking “oh, that’s how I do that” feeling when a plan is made sufficiently explicit, a sense that I’m no longer blundering around in a fog.
Finally, and perhaps most importantly, don’t eliminate yourself. That is, don’t abandon a goal because it looks difficult; make someone else eliminate you. This is essential because many tasks look impossible from the outside, especially if you are depressed. It’s almost the mirror image of the planning fallacy: when people commit to doing something, it’s all too easy to envision everything going right and not account for setbacks. But before you actually take the plunge, so to speak, it’s easy to just assume you can’t do anything, which is simply not true.
“To understand anatomy, dissect cadavers.” That’s less a deliberate study of an edge case, and more due to the fact that we can’t ethically dissect living people!
At the risk of appearing defective, isn’t this the sort of action one would only want to take in a coordinated manner? If it turns out that use of such delivery services tends to force restaurants out of business, then certainly one would prefer a world where we don’t use those services and still have the restaurants: you can’t order takeout from a place that doesn’t exist anymore! But deciding unilaterally to boycott delivery imposes a cost without any benefit; whether I choose to use delivery or not will not make the difference. This looks like a classic tragedy of the commons, where it is best to coordinate cooperation, but cooperating without that coordination is a pure loss.
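The incentive structure above can be sketched as a toy payoff calculation. All the numbers here are hypothetical (the benefit `b` of using delivery, the value `r` of the restaurant surviving, and the defection `threshold` are assumptions for illustration, not real data):

```python
# Hypothetical payoff sketch for the delivery boycott commons problem.
b = 1.0         # assumed per-person convenience benefit of using delivery
r = 10.0        # assumed per-person value of the restaurant staying open
threshold = 50  # assumed number of delivery users that kills the restaurant

def payoff(i_use_delivery: bool, others_using_delivery: int) -> float:
    """Payoff to one diner, given everyone else's choices."""
    total_users = others_using_delivery + (1 if i_use_delivery else 0)
    restaurant_survives = total_users < threshold
    return (b if i_use_delivery else 0.0) + (r if restaurant_survives else 0.0)

# Unilateral boycott: with 60 others already using delivery, the restaurant
# dies either way, so abstaining just forfeits b.
print(payoff(True, 60))   # 1.0
print(payoff(False, 60))  # 0.0
# Coordinated cooperation keeps everyone below the threshold:
print(payoff(False, 10))  # 10.0
```

Under these assumed numbers, no single diner’s choice flips the outcome, which is exactly why uncoordinated cooperation is a pure loss while coordinated cooperation pays.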
Interesting article. It argues that the AI learned spam clicking from human replays, then needed its APM cap raised to prevent spam clicking from eating up all of its APM budget and inhibiting learning. Therefore, it was permitted to use inhumanly high burst APM, and with all its clicks potentially effective actions instead of spam, its effective actions per minute (EPM, actions not counting spam clicks) are going to outclass human pros to the point of breaking the game and rendering actual strategy redundant.
Except that if it’s spamming, those clicks aren’t effective actions, and if those clicks are effective actions, it’s not spamming. To the extent AlphaStar spams, its superhuman APM is misleading, and the match is fairer than it might otherwise appear. To the extent that it’s using high burst EPM instead, that can potentially turn the game into a micro match rather than the strategy match that people are more interested in. But that isn’t a question of spam clicking.
Of course, if it started out spam clicking, needed the APM cap raised, converted its spam into genuinely high EPM, and DeepMind didn’t lower the cap afterwards, then the article’s objection holds. But that didn’t sound like what it was arguing (though perhaps I misunderstood it). Indeed, it seems to argue the reverse: that spam clicking was so ingrained that the AI never broke the habit.
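The APM/EPM dichotomy is just arithmetic: under a fixed APM cap, every spam click crowds out an effective action. A minimal sketch with assumed numbers (neither figure is AlphaStar’s real cap or spam rate):

```python
# Toy arithmetic for the spam/EPM dichotomy: effective actions per minute
# are total actions minus spam clicks, so high APM with heavy spam can be
# less effective than modest APM with no spam.
def epm(apm: float, spam_fraction: float) -> float:
    """Effective actions per minute, given total APM and the spam share."""
    return apm * (1.0 - spam_fraction)

print(epm(1000, 0.75))  # 250.0 -> superhuman APM, mostly spam
print(epm(300, 0.0))    # 300.0 -> a lower, spam-free rate is more effective
```

The point the comment makes falls out directly: the same raw APM number can describe either misleading spam or game-breaking micro, depending entirely on the spam fraction.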
It depends on the goal. We can probably defeat aging without needing much more sophisticated AI than AlphaFold (a recent Google AI that partially cracked the protein folding problem). We might be able to prevent the creation of dangerous superintelligences without AI at all, just with sufficient surveillance and regulation. We very well might not need very high-level AI to avoid the worst immediately unacceptable outcomes, such as death or X-risk.
On the other hand, true superintelligence offers both the ability to be far more secure in our endeavors (even if human-level AI can mostly secure us against X-risk, it cannot do so anywhere nearly as reliably as a stronger mind), and the ability to flourish up to our potential. You list high-speed space travel as “neither urgent nor necessary”, and that’s true: a world without near-lightspeed travel can still be a very good world. But eventually we want to maximize our values, not merely avoid the worst ways they can fall apart.
As for truly urgent tasks, those would presumably revolve around avoiding death by various means. So anti-aging research, anti-disease/trauma research, gaining security against hostile actors, ensuring access to food/water/shelter, detecting and avoiding X-risks. The last three may well benefit greatly from superintelligence, as comprehensively dealing with hostiles is extremely complicated and also likely necessary for food distribution, and there may well be X-risks a human-level mind can’t detect.
Most people seem to need something to do to avoid boredom and potentially outright depression. However, it is far from clear that work as we know it (which is optimized for our current production needs, and in no way for the benefit of the workers as such) is the best way to solve this problem. There is likely a need to develop other things for people to do alongside alleviating the need for work, but simply saying “unemployment is bad” would seem to miss that there may be better options than either conventional work or idleness.
Where governance is the barrier to human flourishing, doesn’t that mean that using AI to improve governance is useful? A transhuman mind might well be able to figure out not only better policies but how to get those policies enacted (persuasion, force, mind control, incentives, something else we haven’t thought of yet). After all, if we’re worried about a potentially unfriendly mind with the power to defeat the human race, the flip side is that if it’s friendly, it can defeat harmful parts of the human race, like poorly-run governments.
Safer for the universe, maybe; perhaps not for the old person themselves. Cryonics is highly speculative: it *should* work, given that if your information is preserved it should be possible to reconstruct you, and cooling a system enough should reduce thermal noise and reactivity enough to preserve that information… but we just don’t know. From the perspective of someone near death, counting on cryonics might be as risky as, or riskier than, a quick AI.
This. Also, political factors: ideas that boost the status of your tribe are likely to be very competitive largely independently of their truth, and nearly independently of their complexity (though if they’re too complex, one would expect to see simplified versions propagating as well).
“Emotions have their role in providing meaning.”
Even if true, is meaning actually valuable? I would far rather be happy than meaningful, and a universe of truth, beauty, love and joy seems much more worthwhile than a universe of meaning.
Caveat: I feel much the same disconnect in hearing about meaning that Galton’s non-imagers appeared to feel about mental imagery, so there’s a pretty good chance I simply don’t have the mental circuitry needed to appreciate or care about meaning. You might be genuinely pursuing something very important to you in seeking meaning. On the other hand, even if that’s true, it’s worth noting that there are some people who don’t need it.
It’s a noticed gap in your knowledge.
Link doesn’t seem to work.
My best guess: there’s a difference between reviewing ideas and exploring them. Reviewing ideas allows you to understand concepts, think about them, and talk about them, but you’re looking at material you already have. Consider someone preparing a lecture well: they’ll make sure that they have no confusion about what they’re covering, and write eloquently on the topic at hand.
On the other hand, this is thinking along pre-set pathways. It can be very useful for both learning and teaching, but you aren’t likely to discover something new. Exploring ideas, by contrast, is looking at a part of idea space and then seeing what you can find. It’s thinking about the implications of things you know, and looking to see if an unexpected result shows up, or simply considering a topic and hoping that something new on the subject occurs to you.
“The more liberal policies you pass, the more likely it is any future policy will be fascist.”
Sadly, this one is likely true IRL. When you have a government that passes more and more laws, and does not repeal old laws, then the degree of restriction on people’s lives increases monotonically. This creates a precedent for ever more control, until the end is either a backlash or tyranny.
Not Kaj, but shame and self-concept (damaging or otherwise) are thoughts (or self-concept is a thought and shame is an emotion produced by certain thoughts). It seems obvious that people with a greater tendency to think will be at greater risk of harmful thoughts. Of course, they’ll also have a better chance of coming up with something beneficial as well, but that doesn’t strike me as likely to cancel out the harm. Humans are fairly well adapted for our intellectual and social niche; there are a lot more ways for introspection to break things than to improve them.
Happy Petrov Day!
...? “Winning” isn’t just an abstraction; actually winning means getting something you value. Now, maybe many rationalists are in fact winning, but if so, there are specific values we’re attaining. It shouldn’t be hard to delineate them.
It should look like, “This person got a new job that makes them much happier, that person lost weight on an evidence-based diet after failing to do so on a string of other diets, this other person found a significant other once they started practicing Alicorn’s self-awareness techniques and learned to accept their nervousness on a first date...” It might even look like, “This person developed a new technology and is currently working on a startup to build more prototypes.”
In none of these cases should it be hard to explain how we’re winning, nor should Tim’s “not looking carefully enough” be an issue. Even if the wins are limited to subjective well-being, you should at least be able to explain that! Do you believe that we’re winning, or do you merely believe you believe it?