Better to use an archive link to not give them traffic
Yoav Ravid
Building Blocks of Politics: An Overview of Selectorate Theory
Is the forum format important, with the separation between posts and the profile pic and info on the side of each post? Cause if not I would love to have an epub version of this so I can read it on my kindle, might even find a way to create it myself if it doesn’t exist yet.
A sketch of ‘Simulacra Levels and their Interactions’
Self-Resolving Prediction Markets for Unverifiable Outcomes by Siddarth Srinivasan, Ezra Karger, Yiling Chen:
Prediction markets elicit and aggregate beliefs by paying agents based on how close their predictions are to a verifiable future outcome. However, outcomes of many important questions are difficult to verify or unverifiable, in that the ground truth may be hard or impossible to access. Examples include questions about causal effects where it is infeasible or unethical to run randomized trials; crowdsourcing and content moderation tasks where it is prohibitively expensive to verify ground truth; and questions asked over long time horizons, where the delay until the realization of the outcome skews agents’ incentives to report their true beliefs. We present a novel and unintuitive result showing that it is possible to run an incentive compatible prediction market to elicit and efficiently aggregate information from a pool of agents without observing the outcome by paying agents the negative cross-entropy between their prediction and that of a carefully chosen reference agent. Our key insight is that a reference agent with access to more information can serve as a reasonable proxy for the ground truth. We use this insight to propose self-resolving prediction markets that terminate with some probability after every report and pay all but a few agents based on the final prediction. We show that it is a Perfect Bayesian Equilibrium for all agents to report truthfully in our mechanism and to believe that all other agents report truthfully. Although primarily of interest for unverifiable outcomes, this design is also applicable for verifiable outcomes.
This is a fascinating result.
Our work builds on three connected literatures: Aumann’s agreement theorem, prediction markets, and peer prediction. Specifically: (1) unlike the standard framework of Aumann’s agreement theorem, our mechanism provides incentives for truthful information revelation and aggregation under the standard Aumannian protocol with many agents when agents’ signals are conditionally independent given the ground truth (a kind of ‘informational substitutes’ condition); (2) unlike prediction markets, our mechanism works even without access to the ground truth; and (3) unlike peer prediction mechanisms, our mechanism also efficiently aggregates information into a consensus prediction in the single-task setting while ensuring that it elicits minimal information, accommodates heterogeneous agents with non-binary signals, and pays zero in uninformative equilibria, as long as we have access to a sufficiently large pool of informed agents who share a common prior.
I wonder how important that last part about the common prior is. Here’s how it works:
Each node represents an agent reporting a prediction to the mechanism, and the mechanism terminates with probability 1 − α after each report. Payouts for the first T − k agents are determined using a negative cross-entropy market scoring rule with respect to the terminal agent T, while the last k agents receive a flat payout R. k can be chosen to be large enough so the incentive to deviate from truthful reporting is no more than some desired ε.
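To make the payout scheme concrete, here is a minimal Python sketch of the mechanism for a binary question. This is my own illustration, not the paper's implementation: the function names, the uniform 0.5 market prior, and the binary-outcome simplification are all assumptions, and the actual mechanism handles more general signal structures.

```python
import math
import random

def neg_cross_entropy(p, q):
    """Negative cross-entropy of prediction p against reference prediction q
    (binary outcome). Higher is better; maximized when p == q."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
    return q * math.log(p) + (1 - q) * math.log(1 - p)

def run_market(reports, alpha, k, R):
    """Pay out a self-resolving market over a sequence of reports.

    After each report the market terminates with probability 1 - alpha.
    The first T - k agents are scored with a negative cross-entropy
    market scoring rule against the terminal agent T's prediction
    (the proxy for ground truth); the last k agents get a flat payout R.
    """
    # Simulate the stopping rule to find the terminal agent T.
    T = len(reports)
    for t in range(1, len(reports) + 1):
        if random.random() > alpha:  # terminate with probability 1 - alpha
            T = t
            break
    q = reports[T - 1]  # terminal agent's prediction serves as the outcome
    payouts = []
    prev = 0.5  # assumed uniform market prior
    for t in range(T):
        if t < T - k:
            # Market scoring rule: paid for improvement over the
            # previous report, judged against the reference prediction.
            payouts.append(neg_cross_entropy(reports[t], q)
                           - neg_cross_entropy(prev, q))
        else:
            payouts.append(R)
        prev = reports[t]
    return T, payouts
```

Note how the market-scoring-rule structure pays each early agent only for the information they add relative to the previous report, while the flat payout R for the last k agents is what blunts the terminal agents' incentive to misreport.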
They conclude:
Our analysis of self-resolving prediction markets (or equivalently, sequential peer prediction) opens up rich directions for future work. One important direction is to consider situations where our informational substitutes assumption does not hold, e.g., when agents have a cost to exerting effort or acquiring information. It may be that agent signals are not conditionally independent given the outcome Y, and are perhaps only conditionally independent given the outcome, agent effort, and agent expertise. Thus, studying how agents can signal their effort or expertise in a simple way is an important direction for future work. It would also be interesting to explore incentives in the presence of risk aversion. Lastly and most importantly, empirical tests of our proposed mechanism would shed further light on the viability of our mechanism in aggregating agents’ beliefs on both resolvable and unresolvable questions ranging from causal effects, to long-term forecasts about economic indicators, to data-collection tasks widely used in machine learning.

I really hope someone makes an empirical study of this idea. It could be extremely useful if it works.
I think deleting it was a fair response (though perhaps banning is a little over the top). Assuming the moderator has no way of checking for himself whether this makes sense, and he knows he doesn’t, he’s left with a bet about whether this is the real thing or just bullshit. He expects more bullshit than real things, and he expects the bullshit to be dangerous. So he removes everything that fits this class of things, knowing he might end up also removing something real.
I think the main prediction/expectation error many rationalists (including me) made was expecting countries to either do practically nothing and let the virus run through the population like a wildfire, or respond heavily in a way that stomps it out in a few months. In both cases life goes back to normal in a few weeks/months, and if you know it will only be a few months, then taking extreme measures in that time frame makes sense.
Alas, what actually happened was this weird middle ground where we never quite eradicate the virus nor let it run wild, which drew out the problem for a year+.
I wasn’t prepared for that, and my thinking was too short-term, so I also ended up sacrificing too much.
Interestingly, I think the current coordination around the GameStop short squeeze can be explained with this framework.

If WallStreetBets tried to do this coordination with the goal of making money, they would probably fail. But that is not the stated goal. For many, the goal is entertainment, sticking it to the man, or other goals unrelated to making money. Many even say they don’t care if they lose money on this because it’s not the point.
So in a way, they’re pretending to value the stock, and they’re coordinating to pretend.
I think the benefit in this case of having a non-monetary goal is that it changes the incentive structure. If the goal were to make money, there would be much more incentive to defect, and not everyone would be able to win. If the point is to bankrupt those who shorted these companies (or some other non-monetary goal), then everyone can “win” (even if they lose money) and it’s easier to coordinate.

I don’t know if it goes all the way to level 3, but I still find this an interesting connection.
The covers are beautiful!
I notice I’m confused
Books added since the list was last updated:
On applied Bayesian statistics, Dr_Manhattan recommends Lambert’s A Student’s Guide to Bayesian Statistics over McElreath’s Statistical Rethinking, Kruschke’s Doing Bayesian Data Analysis, and Gelman’s Bayesian Data Analysis.
On Functional Analysis, krnsll recommends Brezis’s Functional Analysis, Sobolev Spaces and Partial Differential Equations over Kreyszig’s and Lax’s.
On Probability Theory, crab recommends Feller’s An Introduction to Probability Theory over Jaynes’ Probability Theory: The Logic of Science and MIT OpenCourseWare’s Introduction to Probability and Statistics.
On History of Economics, Pablo_Stafforini recommends Sandmo’s Economics Evolving over Robbins’ A History of Economic Thought and Schumpeter’s History of Economic Analysis.
On Relativity, PeterDonis recommends Carroll’s Spacetime and Geometry over Taylor & Wheeler’s Spacetime Physics, Misner, Thorne, & Wheeler’s Gravitation, Wald’s General Relativity, and Hawking & Ellis’s The Large Scale Structure of Spacetime.
On Category Theory, adamShimi recommends Awodey’s Category Theory over Mac Lane’s Categories for the Working Mathematician.
On General Psychology, Jurij Fedorov recommends Larsen’s and Buss’s Personality Psychology: Domains of Knowledge about Human Nature.
On Econometrics, Niklas Lehmann recommends Josh Angrist’s and J.S. Pischke’s Mastering ’Metrics over the same authors’ Mostly Harmless Econometrics, Wooldridge’s Introductory Econometrics: A Modern Approach, and Ökonometrie: Eine Einführung (German).
(If you add another book, you can reply here with a link to your comment and I’ll add it.)
Thoughts on Crowdfunding with Refund Bonuses
Meta comment about the dialogue feature:
This was the first time I used the dialogue feature, and it was a blast (a much better experience than comment threads). Being able to see what the other person is writing as they write it, suggest edits, and swap things around is such a great user experience, and is so much closer to talking than any other form of written communication I’ve used thus far. I kinda wish I had the option to use this format in all of my chats (WhatsApp, Discord, etc.).
I loved how this allowed the conversation to be free-flowing and took us on interesting tangents we probably wouldn’t have gone on otherwise. OTOH, this might make it worse to read. I personally haven’t found any dialogue great to read yet, and it might be related to this quality, but they do seem great to have. So perhaps what’s needed is just to go the extra step and distill the dialogue afterward.
Two other points:
One thing I noticed is that we very often wrote meta notes that we later deleted. It may be nice to have a box on the side for meta discussion, so you can keep the main thread clean.
I think it would also be nice if we could do inline reacts while editing, to easily mark agreement on something (like you would nod your head or go “aha” in the middle of a sentence to show that you agree).
A Socratic Dialogue about Socratic Dialogues
“the damage to people who implement veganism badly is less important to me than the damage to animals caused by eating them”
I agree with Soto on this, but think that suppressing truth-seeking causes far more damage than just making people implement veganism worse, including, importantly, making some people not go vegan at all.
If you believe that marginal health benefits don’t justify killing animals, I think that’s a far more effective line of argument. And it remains truthful/honest.
I do not think this post serves some greater goal (if it does, like many others in this comment section, I am confused)
(I’ll try to explain as best I understand, but some of it may not be exactly right)
The goal of this post is to tell the story of Zack’s project (which also serves the project). The goal of Zack’s project is best described by the title of his previous post—he’s creating a Hill of Validity in Defense of Meaning.
Rationalists strive to be consistent, take ideas seriously, and propagate our beliefs, which means a fundamental belief about the meaning of words will affect everything we think about, and if it’s wrong, it will eventually make us wrong about many things.
Zack saw Scott and Eliezer, the two highest-status people in this group/community, plus many others, make such a mistake. With Eliezer it was “you’re not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning.” With Scott it was “I ought to accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if it’ll save someone’s life.”
This was relevant to questions about trans issues, which Zack cares a lot about, so he made a bunch of posts arguing against these propositions. The reason it didn’t remain a mere philosophy-of-language debate is that it bumped into the politics of the trans debate. Seeing the political influence made Zack lose faith in the rationalist community, and warranted a post about people instead of just about ideas.
I don’t have a lot to say, but I feel like mentioning that I read the whole thing, enjoyed it, and agreed with you, including on the point that if rationalists can’t agree with your philosophy of language because of instrumental motivations, then it’s a problem for us as a group of people who try to reason clearly without such influences.
No, I don’t think it’s a good assumption that most people past 100 karma have figured out how to write publicly with decent quality (though it depends on what you consider decent).
I’m well past 100 and I expect this to be very useful to me when I write posts.
And if we’re talking in general, then even the best writers usually have proofreaders/beta readers (take Paul Graham, for example: every essay he releases credits at least a few beta readers).
I do agree it might be especially important to new people who don’t have karma, though. It’ll be interesting to hear more from the team about why they decided on that specific limit. My guess is that they want to mostly review posts that are going to be good posts, and don’t want to get spammed with low-quality requests. The 100+ karma filter does that pretty nicely.
One middle ground I can think of: you can get a limited number of posts reviewed under 100 karma (even just one), and at 100 that limit goes away.
Daniel Kahneman, you were a milestone in humanity’s road to rationality. Thank you for that.