Perhaps this one?
Occasionally I hear a song on the radio that I don’t remember hearing in a while but that I had been thinking about only a day or two before. This surprised me, so I started thinking of possible explanations. My first thought was that I think about a lot of things and hear a lot of songs on the radio, so maybe there was bound to be a connection somewhere via the birthday paradox, and this instance was just more salient than the many times I had heard an unrelated song. I started making a mental note every time it happened, and even after considering the above, it still felt like it was happening too often to be pure coincidence. I currently think this happens because...
The station doesn’t draw songs from the entire library of songs they play but rather selects from a smaller subset that gets updated every week or month. The song was recently added to the rotation, and I had already heard it once recently without consciously registering it, which is what prompted me to think about it.
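For what it’s worth, here’s a rough back-of-the-envelope check of the birthday-paradox idea (a minimal Python sketch; all of the numbers are made up for illustration):

```python
# Chance that at least one of today's radio songs matches something
# I happened to think about in the last day or two (invented numbers).
library = 2000      # distinct songs in the station's full library
thought_about = 10  # songs that crossed my mind recently
heard_per_day = 30  # songs I actually hear on the radio in a day

p_match = 1 - (1 - thought_about / library) ** heard_per_day
print(f"{p_match:.1%} chance of a 'coincidence' per day")  # ~14.0%
```

With these invented numbers a coincidence would turn up every week or so, so the base rate alone can’t settle whether the rotation explanation is needed.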
Would WikiLeaks fit this model?
A potential issue that could arise if the exchange rate of glory to votes is one-to-one is that two accounts could farm glory by trading it back and forth indefinitely.
This part confuses me. If you spend 1 glory to give me 1 glory and I spend 1 to give you 1, then we’re just back where we started. If we both make lots of low-effort posts and upvote each other’s posts (assuming the first upvote is free), then we risk lots of other people downvoting all those posts when they realize what we’re doing (assuming the first downvote is free).
A couple of other potential problems: if the first downvote is free, you could cheaply punish someone by downvoting each of their posts once, and more prolific posters would be more vulnerable to this. If the first downvote is not free, then there is an incentive to avoid downvoting objectionable posts in the hope that someone else will spend the points instead.
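To make the farming concern concrete, here’s a minimal sketch (Python, with invented balances) of the two-account trade at a one-to-one exchange rate, with and without a first-upvote-is-free rule:

```python
# Toy model of the two-account trade (names and numbers invented).
def trade(a_glory, b_glory, rounds, first_upvote_free=False):
    """A and B alternate upvoting each other at a 1:1 exchange rate.

    Each round stands for one new post from each account, so the
    first-upvote-free rule (if enabled) applies every round.
    """
    for _ in range(rounds):
        cost = 0 if first_upvote_free else 1
        a_glory -= cost   # A pays (or not) to upvote B's post...
        b_glory += 1      # ...and B gains 1 glory.
        b_glory -= cost   # B pays (or not) to upvote A's post...
        a_glory += 1      # ...and A gains 1 glory.
    return a_glory, b_glory

print(trade(10, 10, 100))                          # (10, 10): zero-sum
print(trade(10, 10, 100, first_upvote_free=True))  # (110, 110): glory minted
```

At a strict one-to-one rate the trade is zero-sum, as noted above; glory only gets minted if some votes are free, which is exactly the low-effort-post scheme (and its downvote counter) described in the previous comment.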
The naive approximation (summing the probabilities) gives a 100% chance of death for both options, but we know it’s less accurate for larger per-event probabilities, so that should mean the option with two 50% risks is the safer one. In fact, 1 − (1 − 0.5)^2 = 75% is actually larger than 1 − (1 − 0.05)^20 ≈ 64%, so the two-50% option is the more dangerous one. This means that the naive approximation is also bad across numerous iterations (large exponents).
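A quick sanity check of the arithmetic (a small Python sketch using the probabilities from the example above):

```python
# Exact probability of at least one death in n independent exposures,
# versus the naive sum-of-probabilities estimate.
def at_least_one(p, n):
    return 1 - (1 - p) ** n

print(at_least_one(0.5, 2))    # 0.75    (naive: 2 * 0.5  = 1.0)
print(at_least_one(0.05, 20))  # ~0.6415 (naive: 20 * 0.05 = 1.0)
```

The naive estimate is off by 25 percentage points in the first case and 36 in the second, so the error really does grow with the number of iterations even though each per-event probability is small.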
Robert must behave like somebody assigning some consistent dollar value to saving a human life.
Note that this number provides only a lower bound on Robert’s revealed preference regarding the trade-off and that it will vary with the size of the budget.
One could imagine an alternative scenario where there is a fluctuating bankroll (perhaps with a fixed rate of increase — maybe even a rate proportional to its current size) and possible interventions are drawn sequentially from some unknown distribution. In this scenario Robert can’t just use the greedy algorithm until he runs out of budget (modulo possible knapsack considerations), but would have to model the distribution of interventions and consider strategies such as “save no lives now, invest the money, and save many more lives later”.
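For the simple fixed-budget case, the greedy rule might look something like this (a Python sketch; the budget and intervention list are invented for illustration):

```python
# Greedy allocation: fund interventions in order of lives saved per dollar
# until the budget runs out.
def greedy(budget, interventions):
    """interventions: list of (cost, lives_saved) pairs (hypothetical data)."""
    funded = []
    for cost, lives in sorted(interventions, key=lambda x: x[1] / x[0], reverse=True):
        if cost <= budget:
            budget -= cost
            funded.append((cost, lives))
    return funded

print(greedy(100, [(40, 10), (60, 12), (30, 5), (50, 11)]))
# [(40, 10), (50, 11)]: 21 lives saved for $90
```

Note that with these numbers the exact knapsack solution ($40 + $60 = $100 for 22 lives) beats the greedy pick, which is the “knapsack considerations” caveat above, and none of this handles the sequential fluctuating-bankroll version, which really would require modeling the distribution of interventions.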
For a takeoff scenario, though, I would think you need something like human AI researchers using the recommendation algorithm to help them design a more general AI, and I’m not sure how much content would be relevant to that.
How significant of a limitation would it be that a YouTube AI can’t recursively improve its own architecture?
How significant of a limitation would it be that it can only recommend existing videos rather than create its own?
From a status/signaling perspective, I want to imitate the behavior and fashion of those with higher status (without copying them exactly since that signals low creativity), and I want to differentiate myself from those with lower status (without deviating so far that I lose the in-group affiliation entirely).
Maybe not all seats are up for (re)election in any given year?
I’d be worried about integer overflow with that protocol. If it can understand BB and division, you can probably just ask for the remainder directly and observe the change.
As someone who cares a lot about what I allow to influence my beliefs and opinions, I found this post fascinating and informative, and I’m glad I read it. Strong upvote.
I guess that makes sense. Thanks for clarifying!
Computational complexity only makes sense in terms of varying sizes of inputs. Are some Y events “bigger” than others in some way so that you can look at how the program runtime depends on that “size”?
What do X and Y represent in this construction? What is the scaling parameter used to define the complexity class?
At a minimum, there should be a good argument that the new equilibrium is stable in that no one can benefit by unilaterally defecting.
I thought the ability to deploy mixed strategies was a pretty standard part of CDT. Is this not the case, or are you considering a non-standard formulation of CDT?
“Relinquish” might be a good alternative. To me, “grieving” is more about emotions and is an ongoing process, whereas “letting go” or “relinquishing” is about goals and is a one-time decision to stop striving for an outcome.
Also “weighs a kilo [and] is called a gram”
As I understand it, the main ethical problem with rewarding trial participation is that people may sign up for the reward without adequately considering the risk. However, this becomes a moot point when the “reward” is just the trial vaccine itself along with any associated risks.