In case of rain, we are going to hold the meetup in the California Coffee Company nearby.
Richard Horvath
ACX/LW Meetup: Saturday May 16, 2pm, Margit Sziget
The posted timestamp shows local time, so this showed up for me as if it had been posted on the 2nd of April. I actually thought it was genuine until I got to the “A joint statement was issued by five nations...” section, lol. I might learn about such an event on LW for the first time, but by the time a joint statement could be coordinated I would have been flooded with this from other sources.
Old school book on a wooden table, inkwell, pen, candle.
Dark and red, imperial geometry, lightsaber glows.
A comparable password-pattern I noticed: options that are clearly the longest or the most difficult to spell are more likely to be the correct answer on multiple-choice questions (and conversely: options that are short and easy to spell compared to the others are less likely to be correct).
My reasoning is that people tend to spend less energy on less important things, hence won’t put as much effort (time and focus on spelling) into creating the bad options.
I think this fits in well with, and potentially explains, John’s Courtship Confusions Post-Slutcon article. One of my hypotheses there also proposed banter being a selection/discovery process. This game-theory perspective seems to point in the same direction, but with a far more detailed and deeper explanation.
ACX/LW Meetup: Saturday March 21, 1 pm at Tim’s
Yes, I think they are. For example:
Senator Rand Paul and Representative Don Bacon directly opposed Trump on tariffs, even though both are Republicans
The Economist magazine did the same, and even more generally (being openly classical liberal)
Bloggers/intellectuals such as Noah Smith, Matt Yglesias, Richard Hanania, and Nassim Taleb are openly against them as well
The econ department of George Mason University has been consistently anti-tariff before and after Trump
I think you might be over-updating from your original post. You had a lot of somewhat unrelated and potentially politically sensitive statements (ethnonationalism, IQ, managerial class, ethics, government debt, taboos, egalitarianism, AI stuff). Even if one agrees with the majority of your points, it is tempting to agreement-downvote due to the minority, especially as those have high valence due to their sensitive nature.
The relationship of “Christianity → Christians” is entirely different from the relationship of “Liberal capitalism → Big companies” or even “Liberal capitalism → Capitalists (meaning rich people who own a lot of capital)”.
The first is the connection between an idea and the people who (claim to) share that idea. The second is the connection between an idea and the entities or people who visibly benefit from the system (*supposedly) based on that idea. However, even if they benefit from it, there is no strict necessity for them to share the idea itself. In fact, as their wealth is generally concentrated in (a) particular sector(s), they are better off lobbying for special measures (e.g. subsidies, tariffs, preventing entry of competitors) that benefit their market position. This is actually what we see, e.g. Nvidia lobbying to be able to export their chips to China, Apple getting tariff exemptions, and so on.
The primary economic idea of liberal capitalism is that competition creates the most economic value, and thus elements decreasing that competition should be as few as possible, which is often the opposite of what a particular capitalist or corporation would want for itself. This is also what we see if we look at champions of the idea, who historically tend to be academics and intellectuals rather than capitalists. In addition, if a capitalist or corporation were to fight openly against all tariffs, success would benefit everyone equally, but failing and becoming the target of a vindictive administration would hurt only them in particular.
So we should expect academics and other public intellectuals to be the champions of liberalism, as it is very difficult to create legislation where they are the primary focus of benefits or harms without a bunch of unrelated people being equally affected.
*”Supposedly” as they themselves might not share the idea that their success is due to the system but may think it comes from some other factor (e.g. their own skills) independent from it
Whether it’s possible to remodel the code from 1. to 2. without the “engine stopping running” is an empirical question about how slippery this particular slope is. Your proclamation that it can’t be done isn’t actually an argument.
Following through to the logical conclusion of the general sentiment would stop the “engine”. Although one could probably come up with some economic/econometric model of an optimal way of taxing that effectively redistributes higher wealth concentrations while still keeping wealth generation mostly intact, that is not what people usually ask for. “Billionaire” is not a specific value; it is just the current stand-in word for the outgroup. The actual pointer is to “people who have so much money I consider them to be different from my kind”. If we went back just 50 years, when median household income was below 10,000 USD a year and property values were depreciated even more, redistributing millionaires’ fortunes would seem as reasonable as redistributing billionaires’ does today.
ACX/LW Meetup: Sunday February 22, 1 pm at Tim’s
I wonder if this may have been true a decade or two ago, when ordering food was less common and there were fewer pizza alternatives. It is possible that the Pentagon guys do indeed order more food at such events, but nowadays the baseline is so high that it does not bump the stats meaningfully.
“However, a Pentagon spokesperson has denied this, telling Newsweek, “There are many pizza options available inside the Pentagon, also sushi, sandwiches, donuts, coffee, etc.””
In my experience, food vendors within office buildings close by the end of official work hours, and if you work late you have to order from outside.
ACX/LW Meetup: Sunday January 18th 2 pm at Sirius Teahouse
“...isn’t the experience of me or women I know. Asking men out leads to boyfriends who are generally passive and offload a bunch of work onto you (even when they’re BDSM tops).”
This is very interesting and a perspective I haven’t considered. Now that I think about it, the women I know who ask men out have a mixture of outcomes, and while they tend to move towards high-quality partners long term (especially if they are polyamorous), they do indeed complain about having had very passive exes. I suspect asking men out removes the filter for proactivity, so they fall back to the base rate, with a higher chance of getting passive partners due to their prevalence in the population. It is actually even worse if we assume proactive males are sorting themselves out of the available population. (There may be some additional factor contributing to passivity, but I haven’t thought it through yet.)
Another observation I have is that they tend to be tops, or switches with a top preference. Assuming John is correct that a nonconsent preference is the prevalent attribute in the general population, I would say they are the inverse, with that being the minority here.
My sample size is single digit though, so YMMV.
“Usually people who do this much model building in this way, and say these things about it, turn out to be concerning, but sometimes they don’t.”
By this do you mean that:
John asserting that nonconsent is the baseline cis female preference in dating resembles what is stated in openly misogynistic areas of the internet (redpilled/incel/alt-right), hence you feel he might be in the same category?
“John, I worry you’re going to take bad models too seriously because you’re systematically unable to see some kind of disconfirming evidence.”
Would you be able to give some more specific examples of the kind of disconfirming evidence you reckon John is missing? I think that would be the quickest way to show the weaknesses of his model.
I suppose one important difference is that people usually don’t read assembly/compiled binaries, but they do proofread AI-generated code (at least most claim to). I think it would be easier to couple manual code with LLM-generated code, marking it via some inline comment that forces the assistant to ignore it or ask for permission before changing anything there, than it was to insert assembly into compiled code (plus non-assembly code should be mostly hardware independent). This suggests human-level enhancements are going to stay feasible, and coding assistants have a larger gap to close than compilers did before removing 99.99% of lower-level coding.
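As a minimal sketch of what such an inline-comment convention could look like: the `HUMAN-LOCKED` markers and the stripping function below are my own invention for illustration, not an existing tool or standard. Hand-written regions are fenced with paired comments and replaced by a placeholder before the file is handed to an assistant, so the assistant knows something is there but cannot rewrite it.

```python
# Hypothetical convention for protecting hand-written code from an AI
# assistant. The HUMAN-LOCKED markers are invented for illustration.

BEGIN = "# HUMAN-LOCKED: BEGIN"
END = "# HUMAN-LOCKED: END"
PLACEHOLDER = "# [hand-written region omitted - do not modify]"


def strip_locked_regions(source: str) -> str:
    """Replace each HUMAN-LOCKED region with a placeholder comment."""
    out, locked = [], False
    for line in source.splitlines():
        marker = line.strip()
        if marker == BEGIN:
            locked = True
            out.append(PLACEHOLDER)  # leave a visible stub for the assistant
        elif marker == END:
            locked = False
        elif not locked:
            out.append(line)
    return "\n".join(out)
```

A tooling wrapper could apply this filter before every assistant request, which is roughly the inverse of inline assembly: instead of smuggling low-level code past the compiler, you hide trusted code from the generator.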
If the piece of knowledge is not actionable, bemoaning it is probably not a good use of time either.
I think this might slightly break down at the higher echelons of larger organizations with deeper hierarchies. The general logic of career progression in such places follows this pattern:
1. You start by doing the grunt work (e.g., junior developer or analyst).
2. You gain enough expertise to be able to juggle mental models of the workflow easily.
3. You can now jump up to the next hierarchy level (e.g., senior dev, manager, project manager), where you coordinate the work of people doing the grunt work.
4. Repeat #2 for this level.
5. Repeat #3 for this level.
6. And so on until the top.
A single 1->3 cycle is probably what you can teach in your proposed timeframe of a couple of months, but each level in the hierarchy might take the same (in line with the “do not delegate something you cannot perform” principle). So training someone to be the head of a large organization, such as an Army general or a CEO, might take multiple years.
This theory is probably incomplete and might be a special case of a broader, better theory, just like the mathematician case mentioned earlier by Jay Baily.