When you travel or move into a new environment, you get to be the current version of yourself, and everyone interacts with that version of you. When you are at home or in an older environment, people are simultaneously interacting with many different versions of you at once (i.e., people who knew the high-school you, the college you, the early-career you, etc.). This is challenging because it pressures you to carry along multiple historical iterations of yourself, rather than truly being the present version of yourself you want to be in that moment.
I am encountering a lot of formatting issues when editing a comment. The state of a comment should be exactly the same when you first submit it and after you click “edit”, but the Lexical editor is adding a couple of random newlines, changing the image caption to a link, detaching the caption from the image, etc...
Link sharing only. I just read this, but I didn’t think about whether the statistics presented here make sense or not.
The WISC IQ test is administered by an examiner who listens to the child’s verbal remarks. In this dataset, the examiner also provides their subjective impressions of each child’s cognitive function, personality, etc. The latent factor extracted from these impressions is strongly correlated with latent g (r ≈ 0.9) and, as it turns out, these impressions are entirely unbiased by race; the racial gap in them is 90% as large as the tested Black-White IQ gap.
This might seem a bit boring: after all, the examiner generating the impressions just gave the child an IQ test. However, these impressions are not merely tautological. They also correlate highly with a different, shorter set of tests that have objective scoring criteria independent of the examiner’s impression, and they correlate highly with test scores from a few years prior, when the kids were examined as four-year-olds by someone totally different:
Insufficient Quality for AI Content. There have been a lot of new users interested in AI coming to LessWrong recently. To keep the site’s quality high and ensure stuff posted is interesting to the site’s users, we’re currently only accepting posts that meet a pretty high bar.
If you want to try again, I recommend writing something short and to the point, focusing on your strongest argument, rather than a long, comprehensive essay. (This is fairly different from common academic norms.) We get lots of AI essays/papers every day and sadly most of them don’t make very clear arguments, and we don’t have time to review them all thoroughly.
We look for good reasoning, making a new and interesting point, bringing new evidence, and/or building upon prior discussion. If you were rejected for this reason, a good next step is to read more of the existing material. The AI Intro Material wiki-tag is a good place to start, for example.
Sometimes scrolling gets blocked when hovering over the widgets. It’s not a big deal, but the inconsistency feels weird. https://streamable.com/7da79g
Actually, some people intentionally use an extra paragraph break as a section break/horizontal divider because, imo, the CKEditor one is way too tall and the Lexical one is still a bit too tall.
like this is probably 2 paragraphs tall
...
...
Uhh, ok, it is taller when editing, what. The Lexical divider is only 1.5 paragraphs or so tall after saving.
Why is the font size of the LLM content block slightly larger than normal? 19.3px vs 18.2px. It was subtle enough that I didn’t notice it before using inspect element, but now that I’ve noticed it, it feels off.
I think you are intuiting the question of “which DT is better” too heavily from the real world, in a “I think a world where everyone does this is better” → “this DT is better” way. You can’t just hope things work out this way.
This seems like a good thing
I don’t think being able to impose any contract on a disadvantaged party is actually a good outcome.
Yes, that’s why you use laws/precommitments to prevent it. I guess my use of “good” misled you a bit: I think it is game-theoretically good, not morally ideal.
But I do not think that is a realistic state of affairs, and I think on the flip side you can get asymmetric information causing FDT agents to behave suboptimally when presented with misanthropic actors.
As I said, this is very close to the no-free-lunch theorem, where any DT benefits you in some universes and hurts you in others. For any A/B, I fully expect you can construct a situation, including a hostile telepath, where DT A outperforms DT B.
What his prior is, however, is irrelevant: he is not offered that price and doesn’t get to proposition Derek.
We are assuming Derek knows everything about Will, right? So if Will changes his strategy based on his prior, then Derek knows that too.
I am again speaking from intuition only and don’t want to put more time into thinking about this for now. I may not even endorse what I say here if I put five minutes into thinking about it.
when we assume non-telepaths we get FDT losing by amounts dependent on the degree of information asymmetry
This seems like a good thing
For CDT, lacking retro-causality, the agent will only be willing to pay up to their honesty value plus signaling value (i.e., less than the $200 for Will). An FDT agent will be willing to pay up to however much they value the totality of the outcomes (live and pay vs. die and don’t pay).
This means CDT-Will will die if Derek has a different utility function and is only willing to drive him home for $201+? These are the “other” universes I’m talking about.
In an even more realistic scenario, Will should have a prior over the minimum amount Derek is willing to accept to drive him home. I expect this would let FDT-Will do somewhat better in his calculations.
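To make the arithmetic concrete, here is a toy sketch of the thread’s point. Only the $200 cap and the $201+ asking price come from the discussion above; the value Will places on his life and the specific asking prices are hypothetical numbers I’ve chosen for illustration.

```python
# Toy Parfit's-Hitchhiker-style comparison (hypothetical numbers).
# Assumption: Will values being driven home (i.e., surviving) at $1,000,000.
# CDT-Will will only pay up to his honesty/signaling value ($200, per the
# thread); FDT-Will will pay up to his value for the whole outcome.

VALUE_OF_LIFE = 1_000_000
CDT_MAX_PAYMENT = 200            # honesty + signaling value only
FDT_MAX_PAYMENT = VALUE_OF_LIFE  # values "live and pay" over "die and don't"

def outcome(max_payment: int, dereks_price: int) -> str:
    """Derek (a reliable predictor) drives only if Will would pay his price."""
    if max_payment >= dereks_price:
        return f"driven home, pays ${dereks_price}"
    return "left in the desert"

for price in (150, 201, 500):
    print(f"Derek asks ${price}: "
          f"CDT -> {outcome(CDT_MAX_PAYMENT, price)}, "
          f"FDT -> {outcome(FDT_MAX_PAYMENT, price)}")
```

The point of the sketch: whenever Derek’s minimum price exceeds $200, CDT-Will is predicted not to pay and gets left behind, while FDT-Will still gets home.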
Huh, the current sans-serif font is super in-my-face, but I am very sensitive to formatting issues (like the redundant-paragraph thing we talked about last time). On a wide desktop I would prefer something like this: a vertical line spanning the whole LLM block, with the model name on the left, which should be sticky. I acknowledge this leaves open the question of what to do on smaller viewports, though.
I feel like, in general, when unbeknownst to you a hostile telepath is inspecting you, you are just fucked in arbitrary ways that are decision-theory-agnostic. Speaking completely from intuition, this is very close to (but definitely not identical to) the no-free-lunch theorem, where any DT benefits you in some universes and hurts you in others, in a roughly but probably not exactly symmetric way.
Yeah, I realized this afterwards. I think I missed (or read and forgot) “high agency” in the original text. Whether it will actually be a good idea depends on how many people with long COVID it reaches.
But… why not just share the group publicly so it is easier for people to join? If you want, you can put the verification inside a channel in the Discord server so unverified people cannot read the other channels. This seems like unnecessary friction.
https://danfrank.ca/daniel-isms-50-ideas-for-life-i-repeatedly-share/ (#20)