But San Francisco is also pretty unusual, and only a small fraction of the world lives there. The amount of new construction in the United States is not flat over time. It responds to prices, as in most other markets. And in fact, on the whole, the majority of Americans likely have more and higher-quality housing than their grandparents did at the same age, including most poor people. This is significant material progress despite the supply restrictions (which I fully concede are real), and it’s similar to, although smaller in size than, what happened with clothing and smartphones.
This part seems to be there just to prevent an LLM translation from getting the problem slightly wrong and messing up the score as a result.
It would be a shame for your once-a-year attempt to have even a 2% chance of being ruined by an LLM hallucination.
I think something like this is true:
For humans, quality of life depends on various inputs.
Material wealth is one input among many, alongside e.g., genetic predisposition to depression, or other mental health issues.
Being relatively poor is correlated with having lots of bad inputs, not merely low material wealth.
Having more money doesn’t necessarily let you raise your other inputs to quality of life besides material wealth.
Therefore, giving poor people money won’t necessarily make their quality of life excellent, since they’ll often still be deficient in other things that provide value to life.
However, I think this is a different and narrower thesis than the one posited in this essay. In contrast to the essay, I think the “poverty equilibrium” is likely not very important in explaining the basic story here. It is sufficient to say that being poor is correlated with having bad luck across other axes. One does not need to posit a story in which certain socially entrenched forces keep poor people down, and I find that theory pretty dubious in any case.
Here, you can go hunt down the people who used that react on this comment instead.
Domain: Various: Startups, Events, Project Management, etc.
Link: Manifund, Manifold, and Manifest {2023, 2024: meeting notes, docs, budget}
Person: Various: generally, the Manifold, Manifund, and Manifest teams
Why: This isn’t a video, but it’s probably relevantly close. All Manifold-sphere things are public — all meeting notes, budgets/finances, strategy docs, etc. I think that someone could learn a lot of tacit knowledge based on how the Manifold-sphere teams work by skimming e.g. our meeting notes docs, which are fairly comprehensive/extensive.
This is my first article, so I’d appreciate any feedback!
Here’s the letter: https://s3.documentcloud.org/documents/25003075/sia-sb-1047-anthropic.pdf
I’m not super familiar with SB 1047, but one safety person who is thinks the letter is fine.
The difference you are interested in—short vs long—is explicitly studied by the GiveDirectly UBI study in Kenya.
Indeed. Clearly, we can’t run this experiment except in very small, poor countries. The closest thing I’m aware of is the GiveDirectly UBI study in Kenya. Also, note that Germany has a minimum unconditional basic income of more than 700€/month (not universal, though).
So, we cannot run the experiment, and we also can’t model the effect sufficiently well yet. But with studies like this one, somebody will hopefully be able to come up with an economic theory eventually.
Now it makes sense, thank you!
Anthropic issues questionable letter on SB 1047 (Axios). I can’t find a copy of the original letter online.
We never got around to writing more, unfortunately.
I recommend this paper for a good overview of computational mechanics: https://arxiv.org/abs/cond-mat/9907176
Thank you very much for your reply—I appreciate the commentary and direction.
This interview between Alexey Guzey and Dwarkesh Patel gets into it a bit!
I want to get more experience with adversarial truth-seeking processes, and maybe build more features for them on LessWrong. To get started, I’d like to have a little debate-club-style debate, where we pick a question and each take opposing sides to present evidence and arguments for. Is anyone up for having such a debate with me in a LW dialogue for a few hours? (No particular intention to publish it.)
I have a suggested debate topic in mind, but I’m open to debating any well-operationalized claim (e.g. the sort of thing you could have a Manifold market on). The point isn’t that we’re experts in it, the point is to test our skills for finding relevant evidence and arguments on our feet (along with internet access). We flip a coin to decide which of us searches for evidence and arguments for each position.
If you might be up for doing this with me sometime in the next few days, let me know with a comment / private message / thumbs-up react :-)
It’s actually a lot worse than that.
Q: Which numbers are above a yellow region in the image?
GPT4o: [2,4]
GPT4: [4,5]
Claude 3.5 Sonnet: [2,5]
Claude 3.5 Sonnet describes the image as follows:
User: List the numbers in the above image. How are they placed in the image? What’s below them?

Claude: The numbers shown in the image are:
1, 2, 3, 4, 5
These numbers are placed in a single horizontal row, appearing to be in separate cells or boxes arranged from left to right in ascending numerical order. Each number occupies its own cell, creating a simple table-like structure with 5 columns.
Below the numbers, there are cells or boxes that correspond to each number. Some of these cells are colored, while others are not. Specifically:

Below number 1: The cell is white or uncolored.
Below number 2: The cell is colored yellow.
Below number 3: The cell is white or uncolored.
Below number 4: The cell is white or uncolored.
Below number 5: The cell is colored yellow.
So, out of the 5 cells below the numbers, two are colored yellow (under 2 and 5), while the rest are white or uncolored.
So these models have very very bad visuospatial abilities.
This seems like a problem that will be easy to solve with some synthetic training data.
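As a rough sketch of what generating such synthetic data might look like (all names here are illustrative, not from any existing pipeline): one could randomly generate grid specifications together with ground-truth answers, and later render each spec to an image with a library like Pillow. The label-generation part is simple enough to show:

```python
import random

def make_example(n_cols=5, seed=None):
    """Generate one synthetic example: a spec for a two-row grid
    (numbers 1..n_cols on top, some cells below colored yellow),
    plus the ground-truth answer to "which numbers are above a
    yellow region?".
    """
    rng = random.Random(seed)
    # Each bottom cell is independently yellow with probability 0.4.
    yellow = [rng.random() < 0.4 for _ in range(n_cols)]
    # Ground truth: the 1-indexed column numbers whose cell is yellow.
    answer = [i + 1 for i, y in enumerate(yellow) if y]
    return {"yellow_cells": yellow, "answer": answer}

# A dataset is then just many (rendered image, answer) pairs;
# rendering the spec to pixels (e.g. with Pillow's ImageDraw)
# is omitted here.
examples = [make_example(seed=s) for s in range(1000)]
```

The point is that the correct answer comes for free from the generating spec, so arbitrarily many perfectly labeled image/question pairs can be produced for fine-tuning.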
I have wondered something very similar to this myself. I think (at least in most cases) it is easier, on evolutionary timescales, to adapt to local climate conditions, rather than develop the machinery (and spend the metabolic energy) fighting against those conditions.
As far as I know, there are also no organisms that directly extract metabolic energy from wind, wave, tidal, or other mechanical motion. Chemosynthesis based on thermal gradients, AFAIK, only happens in bacteria near hydrothermal vents. I assume any biological heat pumps that could exist would need to be macroscopic to be useful, but really, insulation, coloring, and evaporation are just simpler.
People generally expected math AI to progress pretty fast already. I was angry about machine-assisted math being a neglected area before this and my anger levels aren’t substantially increased by the news.
Yes. I believe that is consistent with what I said.
“not((necessarily, for each thing) : has [x] → those [x] are such that P_1([x]))”
is equivalent to, ” (it is possible that something) has [x], but those [x] are not such that P_1([x])”
not((necessarily, for each thing) : has [x] such that P_2([x]) → those [x] are such that P_1([x]))
is equivalent to “(it is possible that something) has [x] such that P_2([x]), but those [x] are not such that P_1([x])”.
The latter implies the former, as (A and B and C) implies (A and C), and so the latter is stronger, not weaker, than the former.
Right?
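The argument above can be written compactly in modal notation (a sketch; $H$ abbreviates “has $[x]$”):

```latex
% Negating a necessary conditional yields a possible counterexample:
\neg\,\Box\bigl(H(x) \to P_1(x)\bigr)
  \;\equiv\; \Diamond\bigl(H(x) \land \neg P_1(x)\bigr)

% Likewise with the extra condition P_2 in the antecedent:
\neg\,\Box\bigl(H(x) \land P_2(x) \to P_1(x)\bigr)
  \;\equiv\; \Diamond\bigl(H(x) \land P_2(x) \land \neg P_1(x)\bigr)

% Since (H \land P_2 \land \neg P_1) entails (H \land \neg P_1),
% and \Diamond is monotone, the second claim implies the first:
\Diamond\bigl(H \land P_2 \land \neg P_1\bigr)
  \;\Rightarrow\; \Diamond\bigl(H \land \neg P_1\bigr)
```

So, as stated, the second negated claim is indeed the logically stronger one.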