I think I am implicitly just saying that prediction markets are not perfectly efficient. I think that is normal, and they still have great utility; I don’t think efficient markets are something to worry about, because perfectly efficient markets only exist mathematically. All these behaviors that only make sense when the market is not efficient make sense exactly because the market is not efficient. But there are different degrees of inefficiency. It is a scale, and I think prediction markets are pretty accurate already even if they are not mathematically efficient. Have you seen the Brier scores of different prediction markets? They are obviously not perfect, but they are pretty close. Close enough that it sometimes makes sense to act as if market probability = true probability.
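For reference, the Brier score is just the mean squared error between forecast probabilities and binary outcomes; lower is better, and always guessing 50% scores 0.25. A minimal sketch (the prices and resolutions below are made up, not data from any real market):

```python
def brier_score(forecasts, outcomes):
    # Mean squared error between forecast probabilities and 0/1 outcomes.
    # 0 is perfect; always guessing 0.5 scores 0.25.
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical closing prices on five resolved yes/no markets (not real data).
prices = [0.9, 0.2, 0.7, 0.4, 0.95]
resolutions = [1, 0, 1, 0, 1]
print(brier_score(prices, resolutions))  # 0.0605, far better than the 0.25 of coin-flip guessing
```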
Also, prediction markets are not negative-sum for society as a whole; the fee doesn’t just evaporate, it simply goes to the platform. Taking money from dumb people in a zero-sum game is fine.
Again, I am assuming an imperfect world. There is a lot of public information, but you don’t want to spend time aggregating it; instead you create a market and put $10k into the automated market maker to incentivize people to make their best guess.
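To make the mechanics concrete, this is roughly how a subsidized LMSR (logarithmic market scoring rule) market maker works; a minimal sketch assuming a binary market, with the liquidity parameter chosen so the worst-case loss is the $10k subsidy (all numbers illustrative):

```python
import math

def lmsr_cost(q, b):
    # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b)).
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, b, i):
    # Instantaneous price of outcome i, which doubles as a probability estimate.
    exps = [math.exp(qi / b) for qi in q]
    return exps[i] / sum(exps)

# The sponsor's worst-case loss in an n-outcome LMSR market is b * ln(n),
# so a $10k subsidy on a binary market supports b = 10_000 / ln(2) ~= 14,427.
b = 10_000 / math.log(2)

q = [0.0, 0.0]              # net YES and NO shares sold so far
print(lmsr_price(q, b, 0))  # 0.5: the market opens at even odds

# A trader who buys 5,000 YES shares pays the change in the cost function.
new_q = [5_000.0, 0.0]
paid = lmsr_cost(new_q, b) - lmsr_cost(q, b)
print(paid, lmsr_price(new_q, b, 0))  # trade cost, and the new implied probability (~0.59)
```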
Isn’t a natural conclusion then that in reality prediction markets are not completely efficient? There are subsidies, irrational traders/gamblers, and risk hedging (yes, I know you mentioned it).
Subsidy can come in another form: a player can spend money trying to elicit information so that they can use that information somewhere else.
Also, efficiency can be relative to a trader even if everyone only trades on public information, because in reality no one has the time to discover and aggregate all the public information. This is just wrong.
Quick thoughts. Note that I’m not that into AI safety problems and am speaking in general about LW.
I think this is the case because much of the discussion here is happening “on the cutting edge” so to speak; most users are actively exploring different lines of thought with new ideas built on complex world models, and are not necessarily revisiting or referencing the years of prior knowledge and belief that underpin their thinking.
I agree. This seems downstream of LW’s poor search and wikitags. Better search is good, but absent that we should tell people to Google with the query site:lesswrong.com. The UI could also link posts to related posts automatically; this is somewhat hard but definitely doable.
I’m interested in seeing how this process could be made more efficient; I think a more streamlined “onboarding” process for newcomers could save a lot of valuable time and help people become impactful more quickly.
(This is not a confident statement; I’m thinking out loud.) I’m not sure a significant portion of the time from newcomer to contributor is spent on things that would benefit from better onboarding; even with perfect onboarding, someone would still need to motivate themselves to read quite a large chunk of text to reach the Pareto frontier.
Encourage and facilitate creation of “current beliefs” pages for each user, where they can detail their present-day positions on certain subjects (AI timelines are a relatively simple example), recount their intellectual journey / past updates, reference particularly impactful posts and threads, etc.
This seems like it would just create a bunch of stale pages where it is hard to find what you want, because each person’s belief page would cover too many different topics. Also, although I’m sure more people would do this with official site-level support, the fact that I have basically never seen someone do it makes me think people just don’t like spending so much time organizing their intellectual journey to the point that it is legible to other people; it seems genuinely time-consuming to put in this effort.
Other organizational tools: content tags are nice, but they are a little too broad to be very helpful IMO. Maybe a more sophisticated tagging scheme, or some way for users to individually tag/curate articles (and maybe comments) in a publicly visible way?
Similar to the “current beliefs” pages, I think the main bottleneck isn’t tooling; it is how much time people spend making things legible to other people/newcomers. I don’t think a more sophisticated tagging scheme is a good idea; I have never seen extremely fine-grained tags work out. Also, with fine-grained tags the problem shifts to the discoverability of the tags themselves: how would you know you have found all the tags that are somewhat related? Tags are also time-consuming to maintain; the LW tagging AI works well currently, but I think accuracy would fall with too many tags.
It would be nice to accurately situate individual posts within the larger discussion they are a part of, and I don’t think the current “mentioned in” section does a good enough job at that.
Someone can just make a post organizing the past discussions! But again ~nobody does that.
When you travel or move into a new environment, you get to be the current version of yourself, and have everyone interact with that version of you. When you are at home or in an older environment, you have people who are simultaneously interacting with many different versions of you at once (i.e. people who knew the high school you, college you, early career you, etc.). This is challenging because it pressures you to also carry around multiple historical iterations of yourself, rather than being truly the present version of yourself you want to be in that moment.
I am encountering a lot of formatting issues when editing a comment. The state of a comment should be exactly the same when you first submit it and after you click “edit”, but the Lexical editor is adding a couple of random newlines, changing the image caption to a link, detaching the caption from the image, etc.
Link sharing only. I just read this, but I didn’t think about whether the statistics presented here make sense or not.
When the WISC IQ test is administered, it’s done by an examiner who listens to the verbal remarks of the child. In this dataset, the examiner also provides their subjective impressions of each child’s cognitive function, personality, etc. The latent factor from these impressions is strongly correlated with latent g (r ≈ 0.9) and, as it turns out, these impressions are also entirely unbiased by race, and they’re 90% as large as the tested Black-White IQ gap.
This might seem a bit boring, after all, the examiner generating the impressions just gave the child an IQ test. However, these impressions are not just tautological. They also correlate highly with a different, shorter set of tests that have objective scoring criteria independent of the examiner’s impression, and they correlate highly with test scores from a few years prior, when the kids were examined by someone totally different as four-year-olds.
Insufficient Quality for AI Content. There’ve been a lot of new users coming to LessWrong recently interested in AI. To keep the site’s quality high and ensure stuff posted is interesting to the site’s users, we’re currently only accepting posts that meet a pretty high bar.
If you want to try again, I recommend writing something short and to the point, focusing on your strongest argument, rather than a long, comprehensive essay. (This is fairly different from common academic norms.) We get lots of AI essays/papers every day and sadly most of them don’t make very clear arguments, and we don’t have time to review them all thoroughly.
We look for good reasoning, making a new and interesting point, bringing new evidence, and/or building upon prior discussion. If you were rejected for this reason, possibly a good thing to do is read more existing material. The AI Intro Material wiki-tag is a good place, for example.
Sometimes scrolling gets blocked when hovering over the widgets; it is not a big deal, but the inconsistency feels weird. https://streamable.com/7da79g
Actually, some people intentionally use an extra paragraph break as a section break/horizontal divider because, imo, the CKEditor one is way too tall and the Lexical one is still a bit too tall.
like this is probably 2 paragraphs tall
...
...
Uhh ok, it is taller when editing, what? The Lexical divider is only 1.5 paragraphs or so after saving.
Why is the font size of the LLM content block slightly larger than normal? 19.3px vs 18.2px. It was subtle enough that I didn’t notice it before using inspect element, but it feels off now that I’ve noticed it.
I think you are intuiting the question of “which DT is better” too heavily through the real world, in a “I think a world where people all do this is better” → “this DT is better” way. You can’t just hope things work out this way.
This seems like a good thing
I don’t think being able to impose any contract on a disadvantaged party is actually a good outcome.
Yes, that’s why you use laws/precommitments to prevent it. I guess I used “good” and that misled you a bit; I think it is game-theoretically good, not morally ideal.
But I do not think that is a realistic state of affairs, and I think on the flip side you can get asymmetric information causing FDT agents to behave suboptimally when presented with misanthropic actors.
As I said, this is very close to the no free lunch theorem where any DT benefits you in some universes and hurts you in others. I fully expect you can construct a situation including a hostile telepath where DT A outperforms DT B for any A/B.
What his prior is, however, is irrelevant; he is not offered that price and doesn’t get to proposition Derek.
We are assuming Derek knows everything about Will right? So if Will changes his strategy based on his prior then Derek knows that too.
I am again speaking from intuition only and don’t want to put more time into thinking about this for now. I may not even endorse what I say here if I put 5 minutes into thinking.
when we assume non-telepaths we get FDT losing by amounts dependent on the degree of information asymmetry
This seems like a good thing
For CDT, lacking retro-causality, they will only be willing to pay up to whatever their honesty value and signaling value is (i.e. less than the $200 for Will). For the FDT agent, they will be willing to pay up to whatever they value the totality of the outcomes (live and pay vs. die and don’t).
This means CDT-Will will die if Derek has a different utility function and is only willing to drive them home for $201+? These are the “other” universes I’m talking about.
In an even more realistic scenario, Will should have a prior over the minimum amount Derek is willing to accept to drive them home. I expect this would let FDT-Will make somewhat better calculations.
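As a toy version of that calculation (the uniform prior and the value Will places on living are entirely made up), FDT-Will can commit to whatever offer maximizes expected utility under his prior, instead of offering his full willingness to pay:

```python
def expected_utility(offer, value_of_living=10_000, lo=100, hi=1_000):
    # Expected utility for FDT-Will of committing to pay `offer`, assuming
    # Derek's minimum acceptable price is uniform on [lo, hi]. If Derek
    # accepts, Will lives and pays; otherwise Will dies (utility 0 here).
    p_accept = min(max((offer - lo) / (hi - lo), 0.0), 1.0)
    return p_accept * (value_of_living - offer)

best_offer = max(range(0, 10_001), key=expected_utility)
print(best_offer, expected_utility(best_offer))
# 1000 9000.0: commit to the top of the prior, well below the full $10k he values living at
```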
Huh, the current sans-serif font is super in my face, but I am very sensitive to formatting issues (like the redundant-paragraph thing we talked about last time). I would prefer something like this on a wide desktop, with a vertical line running down the whole LLM block and the model name on the left, which should be sticky. I acknowledge this leaves open the issue of what to do on smaller viewports.
I feel like in general, when unbeknownst to you a hostile telepath is inspecting you, you are just fucked in arbitrary ways that are decision-theory-agnostic. Completely speaking from intuition, this is very close to (but definitely not identical to) the no free lunch theorem, where any DT benefits you in some universes and hurts you in others, in a roughly but probably not exactly symmetric way.