One thing I’m confused about in this post is whether constructivism and subjectivism count as forms of realism. The cited realists (Enoch and Parfit) are substantive realists.
I agree that substantive realists are a minority in the rationality community, but not that constructivists + subjectivists + substantive realists together are a minority.
Let’s just consider emotions. A really simple model of emotions is that they’re useful because they provide information and because they have motivational power. Neurotic emotions are useful when they provide valuable information or motivate valuable actions.
If you’re wondering whether a negative emotion is useful, check whether it’s providing valuable information or motivating useful action. I think Internal Family Systems might be especially useful for this.
Of course, sometimes you can get the valuable information or motivation without experiencing a negative emotion (see Replacing Guilt).
Many negative emotions are hypersensitive, which is why we see the trend towards limiting them. That is, most of the time anxiety is not providing useful information or motivating useful action. The hypersensitivity would be justified if the costs of being wrong were very high, but for many of the things we experience anxiety about, this is no longer the case. That said, I imagine that for some people, negative emotions can play a useful role in some contexts, but one needs to be concrete here.
I use Google Sheets.
Rate my progress on key goals (1-5). Add notes justifying the score.
Note how much time I spent working; review work cycle sheets for trends, insights, and things I’d like to try next week.
Compare my view of what a successful week looked like with what actually happened.
Determine what a successful week looks like for the next week.
The ontology is key goals and weekly goals. Each goal is grouped under a broad project. I like flexibility, so the ontology is as general as it sounds.
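For concreteness, the ontology could be sketched as a data structure. This is purely illustrative — in practice it’s just rows in a spreadsheet, and names like `Goal` and `WeeklyReview` are hypothetical, not anything I actually run:

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    project: str       # the broad project this goal is grouped under
    rating: int = 0    # weekly progress score, 1-5
    notes: str = ""    # justification for the score

@dataclass
class WeeklyReview:
    key_goals: list = field(default_factory=list)
    weekly_goals: list = field(default_factory=list)

    def rate(self, goal: Goal, score: int, notes: str) -> None:
        # Mirrors the review step: score 1-5 plus a justifying note.
        assert 1 <= score <= 5
        goal.rating = score
        goal.notes = notes

review = WeeklyReview()
g = Goal("finish draft", project="writing")
review.key_goals.append(g)
review.rate(g, 4, "two solid sessions; intro still rough")
```

The flat goal → project grouping is the whole structure; anything richer would cut against the flexibility point above.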
I use work cycles to track my work, planning in 50-minute sessions during the day. Work cycles have a built-in review mechanism, which is useful.
At the beginning of the day, I’ll collect my main todos.
At the end of the day, I’ll listen to this reflection or a similar one.
What I could be better at:
Sometimes my vision of what a successful week looks like slips through the cracks. Not for very important things, but I don’t have the best system for reviewing moderately important things. It’s not clear to me how bad this is, but there’s a class of chore-like things that can take me longer to do than I’d like.
I have compiled all of these pieces rather slowly.
I agree that the return to “learning to navigate moods” varies by person.
It sounds to me, from your report, like you tend to be in moods conducive to learning. My sense is that there are many people who are often in unproductive moods, and many who are aware that they spend too much time in unproductive moods. These people would find learning to navigate moods valuable.
Awesome, thanks for the super clean summary.
I agree that the model doesn’t show that AI will need both asocial and social learning. Moreover, there is a core difference in how the cost of brain size grows between humans and AI (superlinear vs. linear). But in a world where AI development faces hardware constraints, social learning would be much more useful. So AI development could involve significant social learning, as described in the post.
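A toy numerical sketch of that growth-rate difference (the superlinear exponent 1.2 is made up for illustration, not taken from the post’s model):

```python
# Toy comparison: superlinear vs. linear growth of brain-size cost.
def human_cost(size: float, exponent: float = 1.2) -> float:
    # Hypothetical superlinear cost of larger biological brains.
    return size ** exponent

def ai_cost(size: float) -> float:
    # Linear cost: compute/hardware roughly proportional to size.
    return size

# The human/AI cost ratio grows with size, so large "brains" are
# relatively cheaper for AI than for humans under this toy model.
ratios = [human_cost(s) / ai_cost(s) for s in (10, 100, 1000)]
```

The point is only qualitative: any exponent above 1 makes the ratio diverge, which is why a hardware-constrained AI developer would lean more on social learning than the raw model suggests.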
Can you say more about what you mean by “I. metaphilosophy” in relation to AI safety? Thanks.
Have there been explicit requests for web apps that may solve an operations bottleneck at x-risk organisations? Pointers towards potential projects would be appreciated.
Lists of operations problems at x-risk orgs would also be useful.