RomanHauksson
Manifold.love is in alpha, and the MVP should be released in the next week or so. On this platform, people can bet on the odds that other users will enter into a relationship lasting at least six months.
I suspect this was written by ChatGPT. It doesn’t say anything meaningful about applying Bayes’ theorem to memory techniques.
Microsolidarity is a community-building practice. We’re weaving the social fabric that underpins shared infrastructure.
The first objective of microsolidarity is to create structures for belonging. We are stitching new kinship networks to shift us out of isolated individualism into a more connected way of being. Why? Because belonging is a superpower: we’re more courageous & creative when we “find our people”.
The second objective is to support people into meaningful work. This is very broadly defined: you decide what is meaningful to you. It could be about your job, your family, or community volunteering. Generally, life is more meaningful when we are being of benefit to others, when we know how to contribute, when we can match our talents to the needs in the world.
You don’t even necessarily do it on purpose; sometimes entire groups simply drift into it as a result of trying to one-up each other in sounding legitimate and serious (hello, academic writing).
Yeah, I suspect some intellectual groups write like this for that reason: not actively trying to trick people into thinking it’s more profound than it is, but a slow creep into too much jargon. Like a frog in boiling water.
Then, when I look at their writing, it seems needlessly unintelligible to me, even when it’s written for newcomers. How do they not realize this? Maybe the water just feels warm to them.
When the human tendency to detect patterns goes too far
And, apophenia might make you more susceptible to what researchers call ‘pseudo-profound bullshit’: meaningless statements designed to appear profound. Timothy Bainbridge, a postdoc at the University of Melbourne, gives an example: ‘Wholeness quiets infinite phenomena.’ It’s a syntactically correct but vague and ultimately meaningless sentence. Bainbridge considers belief in pseudo-profound bullshit a particular instance of apophenia. To find it significant, one has to perceive a pattern in something that is actually made of fluff, and at the same time lack the ability to notice that it is actually not meaningful.
Np! I actually did read it and thought it was high-quality and useful. Thanks for investigating this question :)
Too long; didn’t read
From Pluriverse:
A viable future requires thinking-feeling beyond a neutral technocratic position, averting the catastrophic metacrisis, avoiding dystopian solutionism, and dreaming acutely into the techno-imaginative dependencies to come.
How do you decide which writings to convert to animations?
I was also disappointed to read Zvi’s take on fruit fly simulations. “Figuring out how to produce a bunch of hedonium” is not an obviously stupid endeavor to me and seems completely neglected. Does anyone know if there are any organizations with this explicit goal? The closest ones I can think of are the Qualia Research Institute and the Sentience Institute, but I only know about them because they’re connected to the EA space, so I’m probably missing some.
You can browse the “Practical” tag to find posts which are directly useful. Here are some of my favorites:
Lukeprog’s The Science of Winning at Life sequence summarizes scientifically backed advice for “winning” at everyday life: in productivity, relationships, emotions, etc. I’m not exaggerating when I say it’s close to the most useful piece of media I have ever consumed. I especially recommend the first post, Scientific Self-Help: The State of Our Knowledge, which transformed my perception of where I should look to learn how to improve my life.
After reading Scientific Self-Help, my suspicion that popular self-help books were epistemically garbage was confirmed, and I learned that many of my questions about how to improve my life could be answered by textbooks. This gave me a strong intrinsic motivation for self-learning, which was made more effective by another of Lukeprog’s posts, Scholarship: How to Do It Efficiently, combined with his thread The Best Textbooks on Every Subject.
Romeo Stevens’ no-bullshit recommendations on how to increase your longevity and improve your exercise routine, with a ten-year update and reflection.
Many posts that are Repositories of advice.
I see. Maybe you could address it to “DAIR, and related, researchers”? I know that’s a clunkier name for the group you’re trying to describe, but I don’t think the more succinct wording is worth encouraging a tribal dynamic between researchers who care about X-risk and S-risk and those who care about less extreme risks.
I don’t think it’s a good idea to frame this as “AI ethicists vs. AI notkilleveryoneists”, as if anyone who cares about issues related to the development of powerful AI has to care only about existential risk or only about other issues. This framing unnecessarily excludes AI ethicists from the alignment field, which is unfortunate and counterproductive, since they’re otherwise aligned with the broader idea that “AI is going to be a massive force for societal change and we should make sure it goes well”.
Suggestion: instead of addressing “AI ethicists” or “AI ethicists of the DAIR / Stochastic Parrots school of thought”, why not address “AI X-risk skeptics”?
Does anyone know whether added sugar is bad for you if you ignore the following points?
It spikes your blood sugar quickly (it has a high glycemic index)
It doesn’t have any nutrients, but it does have calories
It does not make you feel full, so it makes it easier to eat more calories, and
It increases tooth decay.
I’m asking because I’m trying to figure out which carbohydrate-dense foods to eat while I’m bulking. I find it difficult to cram in enough calories per day, so most of my calories come from fat and protein at the moment; I’m not getting enough carbs. But most “carby foods for bulking” (e.g. potatoes, rice) are very filling! For example, a cup of rice has 200 kcal, but a cup of nuts has 800.
I did some stats to figure out which carb-dense foods have a low glycemic index but also a low satiety index, i.e., which ones don’t make you feel full relative to the calories they provide. My analysis suggested that sponge cake is a great choice: it has a glycemic index of only 40 while being the least filling of all the foods I analyzed!
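Here’s a minimal sketch of the kind of filtering I did, in Python. The numbers are illustrative approximations (the satiety figures are loosely based on Holt et al.’s satiety index, where white bread = 100 and lower means less filling), not my actual dataset:

```python
# Toy version of the analysis: among reasonably low-GI foods,
# find the ones that are least filling per calorie.
foods = {
    # name: (glycemic_index, satiety_index)
    "white bread":     (75, 100),
    "white rice":      (73, 138),
    "boiled potatoes": (80, 323),
    "oatmeal":         (55, 209),
    "sponge cake":     (40, 65),
}

# Keep foods with a moderate-or-lower GI, then rank by how *un*filling they are.
candidates = {name: (gi, si) for name, (gi, si) in foods.items() if gi <= 55}
for name, (gi, si) in sorted(candidates.items(), key=lambda kv: kv[1][1]):
    print(f"{name}: GI {gi}, satiety index {si}")
# sponge cake comes out on top: low GI, least filling of the bunch
```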
But common sense says that cake would be classified as a “dirty bulk” food, which I’m trying to avoid. If it’s not dirty for its glycemic index, what makes it dirty? Is it because cake has a “dirty” kind of fat, or is there something bad about sugar besides its glycemic index?
Just going off the points I listed, eating cake to bulk up isn’t “dirty”, except for the tooth decay. That’s because:
Cake has a low glycemic index, I think because it has a lot of fat?
I would be getting enough nutrients from the rest of what I eat; cake would make up the surplus.
The whole point of me eating cake is to get more calories, so this point is moot.
What am I missing?
They meant a physical book (as opposed to an e-book) that is fiction.
I’ve also reflected on “microhabits”. I agree that the epistemics are tricky: you’re maintaining a habit even when you can’t observe causal evidence that it’s beneficial. I’ll implement a habit if I’ve read some of the evidence and think it’s worth the cost, even if I don’t observe any effect in myself. Unfortunately, that’s the same mistake homeopaths make.
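To make “worth the cost” concrete, here’s the sort of toy expected-value check I have in mind, with completely made-up numbers for the vitamin D example:

```python
# Back-of-the-envelope check for one microhabit (all numbers invented for illustration).
p_effect_is_real = 0.3  # credence that the claimed benefit actually exists
benefit_if_real = 50.0  # yearly value if it works (arbitrary units)
cost_per_year = 10.0    # pills plus a few seconds of effort each day

expected_value = p_effect_is_real * benefit_if_real - cost_per_year
print(f"expected value per year: {expected_value:+.1f}")  # +5.0 -> keep the habit
```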
I’m motivated to follow microhabits mostly out of faith that they have some latent effect, but also out of a subconscious desire to uphold my identity, like what James Clear talks about in Atomic Habits.
Like when I take a vitamin D supplement in the morning, I’m not subconsciously thinking “oh man, the subtle effects this might have on my circadian rhythm and mood are totally worth the minimal cost!”. Instead, it’s more like “I’m taking this supplement because that’s what a thoughtful person who cares about their cognitive health does. This isn’t a chore; it’s a part of what it means to live Roman’s life”.
Here’s a list of some of my other microhabits (that weren’t mentioned in your post) in case anyone’s looking for inspiration. Or maybe I’m just trying to affirm my identity? ;P
Putting a grayscale filter on my phone
Paying attention to posture – e.g., not slouching as I walk
Many things to help me sleep better
Taking 0.3 mg of melatonin
Avoiding exercise, food, and caffeine too close to bedtime
Putting aggressive blue light filters on my laptop and phone in the evening and turning the lights down
Taking a warm shower before bed
Sleeping on my back
Turning the temperature down before bed
Wearing headphones to muffle noise and a blindfold
Backing up data and using some internet privacy and security tools
Anything related to being more attractive or likable
Whitening teeth
Following a skincare routine
Smiling more
Active listening
Avoiding giving criticism
Flossing, using toothpaste with NovaMin, and tongue scraping
Shampooing twice a week instead of daily
I haven’t noticed any significant difference from any of these habits individually. But, like you suggested, I’ve found success with throwing many things at the wall: it used to take me a long time to fall asleep, and now it doesn’t. Unfortunately, I don’t know what microhabits did the trick (stuck to the wall).
It seems like there are three types of habits that require some faith:
Those that take a while to show effects, like weightlifting and eating a lot to gain muscle.
Those that only pay off for rare events, like backing up your data or looking both ways before crossing the street.
Those with subtle and/or uncertain effects, like supplementing vitamin D for your cognitive health or whitening your teeth to make a better first impression on people. This is what you’re calling microhabits.
I find it interesting that all but one toy is a transportation device or a model thereof.
I’ve also been thinking a lot about this recently and haven’t seen any explicit discussion of it. It’s the reason I recently began going through BlueDot Impact’s AI Governance course.
A couple questions, if you happen to know:
Is there anywhere else I can find object-level discussion of what the transition to a post-superhuman-AI society might look like? The FLI Worldbuilding Contest is the only example I’ve come across.
What are the implications of this for career choice, for an early-career EA trying to make this transition go well?