Linda Linsefors
Hi, I am a Physicist, an Effective Altruist and AI Safety student/researcher.
Recent talk by Stuart Armstrong related to this topic:
https://www.youtube.com/watch?v=19N4kjYbZD4
Yes, that is correct.
I wrote the text and asked people to cosign if they agreed, for signaling value.
Do you have a good idea on how to make this clearer?
Better?
Basically, if I change the title, it can go on the front page?
>it seems that in order to be worthwhile the person would most likely have to be co-located with the team
My conclusion was the opposite. For this to work well, the breadwinner should be in a high-earning location (which typically has a high cost of living) and the rest of the team should be in a low-cost location (which typically has low earning potential).
Being the only one on the team who is in a separate location is not optimal for inclusion. But many teams are spread out anyway. I am pretty sure RAISE is not all in one location. As another example, the organizers of AI Safety Camp are spread out all over Europe.
>Also, if the organisation later receives funding, the amount of prestige/influence of those taking this role will seem to drop or they might even become completely obsolete.
This might actually be a feature, not a bug. When the new organisation has grown up and is receiving all the grants it needs, then it is time for the funder to move on to the next project, bringing with them knowledge and experience from the first project.
I agree.
An even simpler example: if the agents are reward learners, each of them will optimize for its own reward signal, and those are two different things in the physical world.
I agree that “want” is not exactly the correct word. What I mean by prior is an agent's actual a priori beliefs, so by definition there will be no mismatch there. I am not trying to say that you literally choose your prior.
What I am gesturing at is that no prior is wrong, as long as it does not assign zero probability to the true outcome. And I think that much of the confusion in anthropic situations comes from trying to solve an under-constrained system.
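A minimal sketch with toy numbers of my own (the hypotheses, likelihoods, and observation count are just for illustration) of why the zero-probability case is special: a hypothesis that the prior assigns probability zero can never recover under Bayes updating, while even a very skewed but strictly positive prior eventually concentrates on the true hypothesis.

```python
def update(prior, likelihoods):
    """One Bayes update over a list of hypotheses."""
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Two hypotheses about a coin: fair vs. heads-biased (the biased one is true).
likelihood_heads = [0.5, 0.9]   # P(heads | hypothesis)

zero_prior = [1.0, 0.0]         # assigns zero to the true hypothesis
open_prior = [0.99, 0.01]       # heavily skewed, but nowhere zero

for _ in range(50):             # observe 50 heads in a row
    zero_prior = update(zero_prior, likelihood_heads)
    open_prior = update(open_prior, likelihood_heads)

print(zero_prior)  # stays [1.0, 0.0] forever
print(open_prior)  # ends up putting almost all mass on the biased coin
```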
Hi, approximately when will it be decided who gets funding this round?
I did not mean to imply that the choices had to be made simultaneously, or in any other particular order, just that this is the type of payoff matrix. But I also think that “simultaneous choice” vs. “sequential game” is a false dichotomy. If both players are UDT, every game is a simultaneous-choice game (where the choices are over complete policies).
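A minimal sketch (the game and its payoffs are toy numbers of my own) of the “choices over complete policies” point: a sequential two-move game rewritten as a simultaneous choice, where the second player picks a complete policy, i.e. a response to every possible first move.

```python
from itertools import product

# payoffs[(p1_move, p2_response)] = (p1_utility, p2_utility)
payoffs = {
    ("A", "L"): (3, 1), ("A", "R"): (0, 0),
    ("B", "L"): (1, 2), ("B", "R"): (2, 3),
}

# A complete policy for player 2 maps every possible first move to a response,
# so there are four policies: {A,B} -> {L,R}.
p2_policies = [dict(zip("AB", resp)) for resp in product("LR", repeat=2)]

# Normal-form matrix of the same game: rows are player 1's moves,
# columns are player 2's complete policies, chosen "simultaneously".
for move in "AB":
    row = [payoffs[(move, policy[move])] for policy in p2_policies]
    print(move, row)
```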
I know that according to what I describe, the blackmailer's threat is not credible in the game-theory sense of the word. So what? It is still possible to make credible threats in the common-use meaning of the word, which is what matters.
I would decompose that into a value trade + a blackmail.
The default for me would be to take the action that gives me 1 utility. But you can offer me a trade where you give me something better in return for me not taking that action. This would be a value trade.
Let's now take me agreeing to your proposition as the default. If I then threaten to call the deal off unless you pay me an even higher amount, then this is blackmail.
I don't think that these parts (the value trade and the blackmail) should be viewed as sequential. I wrote it that way for illustrative purposes. However, I do think that any value trade has a Game of Chicken component, where each player can threaten to call off the trade if they don't get a more favorable deal.
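A toy payoff matrix for that Game of Chicken component (the numbers are my own; only the “default action gives me 1 utility” part comes from the example above): each of us can threaten to hold out for a better split, and if we both hold out, the trade collapses and we fall back to our defaults, which is the worst outcome for both.

```python
# (my_move, your_move) -> (my_utility, your_utility); illustrative numbers only
payoffs = {
    ("accept",   "accept"):   (2,   2),  # the agreed value trade goes through
    ("hold_out", "accept"):   (3,   1),  # my threat works: I capture more of the surplus
    ("accept",   "hold_out"): (1.5, 3),  # your threat works: you capture more of the surplus
    ("hold_out", "hold_out"): (1,   0),  # deal collapses; I take my default action worth 1
}

for (my_move, your_move), (mine, yours) in payoffs.items():
    print(f"{my_move:9} / {your_move:9} -> me: {mine}, you: {yours}")
```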
Good initiative. I will add a question to the application form, asking if the applicant allows me to share that they are coming. I will then share the participant list here (with the names of those who agreed) and update it every few days.
For pledges, just write here as Ryan said.
There is no specific deadline for signing up.
However, I might close the application at some point due to the unconference being full. We have more or less unlimited sleeping space, since the EA Hotel is literally surrounded by other hotels. So the limitation is space for talks, discussions, workshops and such.
If all activities are in the EA Hotel, we should not be much more than 20 people. If it looks like I will get more applications than that, I will see if it is possible to rent some more common spaces at other hotels. I have not looked into this yet, but I will soon.
We currently have 4 accepted applicants.
Comment removed by me.
Are you worried about the unconference not having enough participants (in total), or it not having enough senior participants?
Accepted applicants so far (July 5)
Gavin Leech, University of Bristol (soon)
Michaël Trazzi, FHI
David Lindner, ETH Zürich
Gordon Worley, PAISRI
anonymous
Josh Jacobson, BERI
anonymous
Andrea Luppi, Harvard University / FHI
Dragan Mlakić
Noah Topper
Andrew Schreiber, Ought
Jan Brauner, University of Edinburgh—weekend only
Søren Elverlin, AISafety.com
Victoria Krakovna, DeepMind—weekend only
Janos Kramar, DeepMind—weekend only
Fixed! Thank you for pointing this out.
This workshop is now full, but due to the enthusiasm I have received, I am going to organize a second Learning-by-doing AI Safety workshop sometime in October/November this year. If you want to influence when it will be, you can fill in our doodle: https://doodle.com/poll/haxdy8iup4hes9xy
I am leaving the application form open. You can fill it in to show interest in the second Learning-by-doing AI Safety workshop and future similar events.
There is still room for more participants at TAISU, but sleeping space is starting to fill up. The EA Hotel dorm rooms are almost fully booked. For those who don't fit in the dorm or want more private space, there are lots of nearby hotels. However, since TAISU happens to be on a UK bank holiday, these might fill up too.
The Acceptance stuff was the most useful for me. I don't remember any CFAR technique that focuses on this.