Hi, I am a physicist, an Effective Altruist, and an AI Safety student/researcher.
Linda Linsefors
Suggested solution to The Naturalized Induction Problem
The Virtue of Numbering ALL your Equations
Call for cognitive science in AI safety
Recent talk by Stuart Armstrong related to this topic:
Yes, that is correct.
I wrote the text and asked people to cosign if they agreed, for signaling value.
Do you have a good idea on how to make this clearer?
Extensive and Reflexive Personhood Definition
Better?
Basically, if I change the title, it can go on the front page?
The Mad Scientist Decision Problem
>it seems that in order to be worthwhile the person would most likely have to be co-located with the team
My conclusion was the opposite. For this to work well, the breadwinner should be in a high-earning location (which typically has a high cost of living) and the rest of the team should be in a low-cost location (which typically has low earning potential).
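To make the arithmetic concrete, here is a minimal sketch; all the salary and cost figures are made up for illustration, not taken from anywhere:

```python
# Purely illustrative numbers: one breadwinner in a high-earning,
# high-cost location supporting teammates in a low-cost location.
earner_salary = 150_000   # yearly salary in the high-earning location
earner_costs = 60_000     # yearly living costs in that location
teammate_costs = 25_000   # yearly living costs per person in the cheap location

surplus = earner_salary - earner_costs
supported = surplus // teammate_costs
print(f"One breadwinner can support {supported} teammates")  # -> 3
```

The point is just that the surplus is largest when earnings and costs are decoupled by geography.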
Being the only one on the team who is in a separate location is not optimal for inclusion. But many teams are spread out anyway. I am pretty sure RAISE is not all in one location. As another example, the organizers of AI Safety Camp are spread out all over Europe.
>Also, if the organisation later receives funding, the amount of prestige/influence of those taking this role will seem to drop or they might even become completely obsolete.
This might actually be a feature, not a bug. When the new organisation has grown up and is receiving all the grants it needs, then it is time for the funder to move on to the next project, bringing with them knowledge and experience from the first one.
Probability is fake, frequency is real
Repeated (and improved) Sleeping Beauty problem
Non-resolve as Resolve
I agree.
An even simpler example: if the agents are reward learners, each of them will optimize for its own reward signal, and those are two different things in the physical world.
I agree that “want” is not exactly the correct word. What I mean by prior is an agent's actual a priori beliefs, so by definition there will be no mismatch there. I am not trying to say that you choose your prior exactly.
What I am gesturing at is that no prior is wrong, as long as it does not assign zero probability to the true outcome. And I think that much of the confusion in anthropic situations comes from trying to solve an under-constrained system.
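A minimal sketch of that claim (the two-hypothesis coin model and all the numbers are my own illustration, not from the comment): any prior that puts nonzero probability on the true hypothesis converges to it under repeated evidence, while a prior of exactly zero can never recover.

```python
# A toy Bayesian update over two coin-bias hypotheses.
import numpy as np

rng = np.random.default_rng(0)

hypotheses = np.array([0.5, 0.8])  # H0: fair coin, H1: bias 0.8 (the truth)

def posterior_after(prior, flips):
    """Update [P(H0), P(H1)] after a sequence of flips (1 = heads)."""
    post = np.array(prior, dtype=float)
    for flip in flips:
        likelihood = hypotheses if flip else 1 - hypotheses
        post *= likelihood
        post /= post.sum()
    return post

flips = (rng.random(200) < 0.8).astype(int)

print(posterior_after([0.99, 0.01], flips))  # nonzero prior on H1: converges to H1
print(posterior_after([1.0, 0.0], flips))    # zero prior on H1: stuck at H0 forever
```

The first posterior ends up concentrated on the true bias even though it started at 0.01; the second stays at [1.0, 0.0] forever, which is the sense in which only zero-probability priors are "wrong".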
Generalized Kelly betting
Optimization Regularization through Time Penalty
Hi, approximately when will it be decided who gets funding this round?