AI-powered Anki: before showing any card, we transform it with AI, so it shows up in a new context every time.
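A minimal sketch of how this could work, assuming a hypothetical `llm` client whose `complete` method takes a prompt and returns text (the client, its method, and the prompt are illustrative assumptions, not any real Anki or LLM API):

```python
import random

# Hypothetical: `llm` is any client object with a `complete(prompt) -> str` method.
def transform_card(front: str, back: str, llm) -> tuple[str, str]:
    """Rephrase the card's prompt before each review so the learner
    sees the same fact in a fresh context every time."""
    style = random.choice([
        "as a concrete real-world scenario",
        "as a why-question",
        "starting from the answer's consequences",
    ])
    new_front = llm.complete(
        f"Rewrite this flashcard question {style}, "
        f"without changing which fact it tests:\n{front}"
    )
    # Only the prompt is varied; the answer stays stable so grading still works.
    return new_front, back
```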
Tricular
A cooperative AI is an AI that is willing to cooperate in a prisoner's dilemma. But the prisoner's-dilemma situations an AI can get into are ones where it cooperates with other AIs or with non-owners; there is no cooperation in a master-tool relationship, only commands. So an AI's cooperativeness can only show up as scheming with other AIs. Therefore we don't want cooperative AI.
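As a worked example of the dilemma I have in mind (the payoff numbers are made up for illustration), picture two AIs that can each either collude with the other or stay loyal to its owner, with payoffs from the AIs' point of view:

|                  | AI 2 colludes | AI 2 stays loyal |
|------------------|---------------|------------------|
| AI 1 colludes    | 3, 3          | 0, 5             |
| AI 1 stays loyal | 5, 0          | 1, 1             |

This has standard prisoner's-dilemma structure (mutual collusion beats mutual loyalty for the AIs, but each is tempted to defect, say by reporting the other). A "cooperative" AI is exactly the one that lands in the collude/collude cell, which is the best cell for the AIs and the worst one for their owners.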
This is not based on any statistical data, just on a vague sense of what's going on, formed from memes, conversations with friends, and excerpts of articles read online.
Instant messaging is destroying groups. A friend group is actually different from a group of friends (a set of people where everyone is friends with each other). Friend groups form where people meet, talk to each other, and function as a group.
Instant messaging lets people break out of being a group. Being in the same place as other people is no longer enough to be part of the group; now you are split into silos based on chats in instant messengers. One-on-one relationships can be initiated so easily that people get FOMO.
Okay, I understand why EAs are uninterested in this (my best guess is that it has an SJW vibe, while EA sees itself as beyond that, "rational," focused on measurable things), but maybe I will spend some time reading about it. Where do I start? What mindset would be best for approaching it, given that I'm a hardcore EA who keeps a dashboard of measurable goals like "number of people convinced to become EA" and thinks anything that doesn't contribute to the goals on that dashboard is useless?
I didn't upvote or react in any way because I don't understand how gender inequality is related to those issues, unless you mean something like "if more women were in government, it would surely be better for all of us," which I somewhat agree with. But I also don't think that sentence can be true in the same way GiveWell's cost-effectiveness estimates can be.
I'm just going to say a few things without thinking much about them.
I believe that a natural, healthy reaction to shoulds is to flinch away (a should signals something going wrong: something you think you need to do but don't actually want to do). A lack of that flinch signals either a strong tendency to take things very literally or a strong sense of purpose. And anyway, how long can one keep at it? It's literally painful, so why keep doing it? What's the reason to follow shoulds until you are depressed? Why does one get stuck looking at the world through a stiff binary lens of good and bad? This is only one way to relate to the world. Why keep doing it, if not out of wanting to overwrite your own free will?
What made you stop considering yourself a utilitarian?
People look into universal moral frameworks like utilitarianism and EA because they lack the self-confidence to take a subjective, personal point of view. They need to support themselves with an "objective" system to feel confident that they are doing the correct thing. They look for external validation.
I found Section 6 particularly interesting! Here’s how I understand it:
Most of our worries about AI stem from catastrophic scenarios, like AI killing everyone.
It seems that to prevent these outcomes, we don’t need to do extremely complex things, such as pointing AI towards the extrapolated values of humanity.
Therefore, we don’t need to focus on instilling a perfect copy of human values into AI systems.
From my understanding, this context relates to the “be careful what you wish for” problem with AI, where AI could optimize in dangerous or unexpected ways. There’s a race here: can we control AI well enough to still gain its benefits?
However, I don’t think you’ve provided enough evidence that this level of control is actually possible. Additionally, there’s the issue of deceptive alignment—I’m not convinced we could manage this “race” without receiving some kind of feedback from AI systems.
Finally, the description of the oracle AI in this section seems quite similar to the idea of corrigible AI.
Try taking one level at a time and pausing between levels. You might just be getting frustrated, and a bit of freshness will help.
What do you mean by "the most"? How likely is it that you have no nutritional deficiencies?
It used to be believed that intensity was basically irreplaceable, but more and better studies have shown extremely similar effects from lower intensity, down to approximately 60-65% of your 1-rep max, whereas a 4- or 5-rep scheme sits around 80-85% of your 1-rep max (e.g., with a 100 kg 1-rep max, that's roughly 60-65 kg versus 80-85 kg).
Can you list some of those studies?
I agree with everything you say about how studies of this issue can go wrong, but I can't entirely agree with your conclusion that it seems probably harmless. It depends on what you mean by that. If you mean that the effect of pornography is more or less neutral on average: I'm not sure, but I'm also not sure about the opposite. If you mean that somebody should just go ahead and start consuming this media: I'd guess it would be good to be a little more careful. There is some evidence suggesting that pornography can negatively impact relationships, and it seems quite clear to me that starting to consume pornography is easier than stopping. If there is a chance of developing an addiction that negatively influences your life and relationships, maybe you should just be careful.
I'm a little surprised by your answer. Do you consider fixing nutritional deficiencies part of a healthy diet? There is some good evidence here that iron deficiency is bad for you.
Looks like a pretty good alternative, thanks! But I just realized that goals actually have some properties that I care about that themes don’t have—they really narrow your focus.
Looking for fundamental alternatives to the concept of goals in organizing one's life
I’m obsessed with planning and organizing my life, and I also tend to overthink and analyze things. Goals are a fundamental piece of organization for me. I try to make a substantial part of my life focused on achieving goals: I work out to keep my body healthy, and I work to earn money and feel secure. But I often feel anxious, and I ask myself if there is any other way of organizing life that avoids the concept of goals altogether. I also think it’s useful to imagine living a life without some crucial concept.
It seems quite hard to avoid thinking about goals in general when you define a goal as anything you plan and decide to pursue. It might be possible with some activities, when you just follow your curiosity and don't think about the long-term effects of your actions. Does art need goals? But then it seems that following curiosity just becomes your next goal.
There are a bunch of news stories and articles on the internet describing "Elon Musk's rules for productivity." I don't know if Elon Musk really wrote them, but that's not the point. One of the rules usually goes like this:
6) Use common sense
If a company rule doesn't:
- Make sense
- Contribute to progress
- Apply to your specific situation
Avoid following the rule with your eyes closed.
I really don't agree with this. Rules are usually put in place for some very specific reason that might be hard for us to see, but it is there nevertheless. I'm a software developer, and I think that if I had listened to my colleagues telling me about rules like "don't try to optimize it while you are still figuring out what you actually want to do," I would be a much better developer right now. But I usually didn't listen, and I spent a lot of time figuring out how to optimize things that didn't really need it.
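A toy sketch of what I mean, in hypothetical code (the task and all names are made up): the first version is what I'd write while "optimizing" up front; the second is what the rule would have me write while still figuring out requirements.

```python
from functools import lru_cache

# The version I'd have written while "optimizing" up front: caching
# and a set for O(1) lookups, tuned before the definition of
# "duplicate" was even stable.
@lru_cache(maxsize=4096)
def normalize(email: str) -> str:
    return email.strip().lower()

def dedupe_optimized(emails: list[str]) -> list[str]:
    seen: set[str] = set()
    out: list[str] = []
    for email in emails:
        key = normalize(email)
        if key not in seen:
            seen.add(key)
            out.append(email)
    return out

# The version the rule says to write first: obvious and trivially
# correct, even if it is quadratic; optimize only once it's too slow.
def dedupe_simple(emails: list[str]) -> list[str]:
    out: list[str] = []
    for email in emails:
        if email.lower() not in (e.lower() for e in out):
            out.append(email)
    return out
```

The first version isn't wrong, it's just premature: it commits to details (case-insensitive matching, a cache size) before knowing whether they matter.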
I encourage the people who downvoted to say why.