Max Niederman
It seems to me like buying an investment property is almost always a bad decision, because 1) single properties are very volatile, 2) you generally have to put a very large chunk of your net worth (sometimes even >100%!) into a property that’s completely undiversified, and 3) renting out a property is work, and you could likely get a better hourly rate elsewhere.
The only advantages I see are that there’s far more cheap leverage available to retail investors in real estate than other sectors, and mortgages can act as a savings commitment device. Are there other reasons I’m missing that explain the apparent popularity of these investments?
One thing you could do is give users relatively more voting power if they vote without seeing the author of the post. I.e., you can enable a mode which hides post authors until you give a vote on the anonymized content. After that, you can still vote like normal.
Obviously there are ways author identity can leak through this, but it seems better than nothing.
It has been updated now—I didn’t get around to it that afternoon and then forgot until now.
On board the flight and while you’re buying the ticket, the airline will try to sell you its branded credit card. If you take the deal and use the card, the airline gives you a bunch of miles, and it gets a cut of every purchase you make.
This is correct and a big oversight on my part, thanks! I’ll update the post later today.
EDIT: It’s been updated.
No, That’s Not What the Flight Costs
This seems plausible given that virtually every knowledge worker I know fantasizes to some extent about working with their hands.
I suspect that this method will only work well on tasks where the model needs to reason explicitly in order to cheat. So, e.g., if the model needs to reason out some trait of the user in order to flatter them, the prompt will likely kick in and get it to self-report its cheating; but if the model can learn to flatter the user on the fly, without reasoning, the prompt probably won’t do anything. By analogy, if I instruct a human to tell me whenever they use hand gestures to communicate something, they will have difficulty, because their hand gestures are automatic and not normally promoted to conscious attention.
There’s the atom transformer in AlphaFold-like architectures, although the embeddings it operates on do encode 3D positioning from earlier parts of the model so maybe that doesn’t count.
Transformers do not natively operate on sequences.
This was a big misconception I had, because so much of the discussion around transformers is oriented around predicting sequences. However, it’s more accurate to think of a general transformer as operating on an unordered set of tokens. A notion of sequence arises only if you add a positional embedding to tell the transformer how the tokens are ordered, and possibly a causal mask to force attention to flow in only one direction.
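To make this concrete, here’s a minimal sketch (plain NumPy, random made-up weights) showing that a single self-attention layer with no positional embedding is permutation-equivariant: shuffling the input tokens just shuffles the output rows, so the layer genuinely has no notion of order.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Standard scaled dot-product attention, no positional info anywhere.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
d = 4
X = rng.normal(size=(5, d))                      # 5 "tokens"
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

perm = rng.permutation(5)
out = self_attention(X, Wq, Wk, Wv)
out_perm = self_attention(X[perm], Wq, Wk, Wv)

# Permuting the input tokens merely permutes the output rows:
assert np.allclose(out[perm], out_perm)
```

Adding a positional embedding to `X` before the layer breaks this symmetry, which is exactly where the "sequence" interpretation comes from.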
The Money Stuff column mentioned AI alignment, rationality, and the UK AISI today:
> Here is a post from the UK AI Security Institute looking for economists to “find incentives and mechanisms to direct strategic AI agents to desirable equilibria.” One model that you can have is that superhuman AI will be terrifying in various ways, but extremely rational. Scary AI will not be an unpredictable lunatic; it will be a sort of psychotic pursuing its own aims with crushing instrumental rationality. And arguably that’s where you need economists! The complaint people have about economics is that it tries to model human behavior based on oversimplified assumptions of rationality. But if super AI is super-rational, economists will be perfectly suited to model it. Anyway if you want to design incentives for AI here’s your chance.
@samuelshadrach (currently ratelimited) sent me the following document on the difference between elite and knowledge class social norms. This is not per se about economic class, which is what I’m primarily interested in, and it’s more about different social norms than subtle markers, but it’s somewhat relevant so I’m linking it here:
Skiing is an interesting one. I never thought about it in those terms since I grew up in Alaska where downhill skiing was relatively accessible (like CO/UT). I also wouldn’t be surprised if outdoor activities in general are correlated with class, even when they’re not necessarily expensive (e.g. hiking).
[Question] What are non-obvious class markers?
I don’t think it’s correct to describe the optimization social media companies do as Goodharting. They’re optimizing for exactly what they want: money. It’s not that they want what’s truly best for their users and are mistaking engagement for that—I think it’s pretty clear at this point social media companies don’t care at all about their users’ wellbeing.
[Question] Where are the AI safety replications?
FWIW, I don’t think the site looks significantly worse on dark mode, although I can understand the desire not to have to optimize for two colorschemes.
Is there a reason that LessWrong defaults to light mode rather than automatically following the browser’s setting? I personally find it a bit annoying to have to select auto every time I have a new `localStorage`, and it’s not clear to me what the upside is.
It’s worth noting that, though it’s true that for a sufficiently large cluster most pairs of people are not strongly connected, they are significantly more likely to be connected than in a random graph. This is the high clustering coefficient property of small-world graphs like the social graph.
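A rough stdlib-only sketch of the contrast (all parameters illustrative, not from the post above): a ring lattice, the clustered backbone of a Watts–Strogatz small-world graph, has a far higher average clustering coefficient than a random graph with the same edge density.

```python
import random

def avg_clustering(adj):
    """Fraction of each node's neighbor pairs that are themselves
    connected, averaged over all nodes."""
    total = 0.0
    for v, nbrs in adj.items():
        nbrs = list(nbrs)
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nbrs[j] in adj[nbrs[i]])
        total += 2 * links / (k * (k - 1))
    return total / len(adj)

def ring_lattice(n, k):
    """Each node connects to its k nearest neighbors on a ring."""
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for d in range(1, k // 2 + 1):
            adj[v].add((v + d) % n)
            adj[(v + d) % n].add(v)
    return adj

def random_graph(n, p, seed=0):
    """Erdős–Rényi graph: each edge present independently with prob. p."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

n, k = 500, 10
clustered = ring_lattice(n, k)
rand = random_graph(n, k / (n - 1))  # same expected degree
print(avg_clustering(clustered))  # ≈ 0.67 for a ring lattice
print(avg_clustering(rand))       # ≈ k/n ≈ 0.02
```

In a random graph the chance that two of your friends know each other is roughly the background edge density; in clustered graphs like the social graph it’s orders of magnitude higher.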
One reason is that quality is not one-dimensional. Some consumers prefer different things to others, and so in order to meet differing preferences brands will make different trade-offs at the same price point.