I agree, but I suspect the causal relationship lines up the other way—we are very good at behaving a particular way in response to particular situations, regardless of our subjective experiences.
I can’t speak for Raemon, but I point out that how low the fruit hangs is not a variable upon which we can act. We can act on the coordination question, regardless of anything else.
Strong upvote for reporting on measured self-experimentation.
It doesn’t speak directly to your results, but: I was reading a comment elsewhere about the phenomenon of people having a meditative experience (like kensho), feeling very different subjectively, but then when they describe it to their friends/families/colleagues those people don’t notice anything different.
I noticed that I would be shocked if a few months of doing something for an hour a day were to outweigh one or more decades of socialization, under the same stimuli as usual, enough that it would be casually obvious.
As a result, my estimation of how much meditation would be required to even make a good test got pushed much higher. Alternatively, and in my estimation more likely, casual observation is a very wrong thing to be looking at for evidence of the effectiveness of meditation.
The way they addressed this question was by comparing how much time was spent monitoring the looms versus actively performing tasks. The quote in the article is as follows:
Bessen shows that in the early 19th century, a New England weaver operating a single power loom spent 70-75% of the time watching the loom. By 1900, monitoring without active intervention was reduced to ~20% of the weaver’s time, and actively performing tasks took up 80% of the time. This is because the weaver in 1900 was made to operate 8 power looms.
I don’t have access to the Bessen paper currently, though I’ll probably go ahead and read it anyway.
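As a rough back-of-envelope check on those figures (my own arithmetic, not anything from Bessen), here is the per-loom intervention implied by the 1900 numbers:

```python
# Back-of-envelope check on the quoted figures (my arithmetic, not Bessen's).
# Early 19th century: one loom, 70-75% watching => ~25-30% active work per loom.
active_per_loom_1800s = 0.275        # midpoint of the 25-30% range

# 1900: eight looms, with 80% of the weaver's day spent on active tasks in total.
looms_1900 = 8
total_active_1900 = 0.80
active_per_loom_1900 = total_active_1900 / looms_1900   # share of the day per loom

# Implied reduction in how much intervention each loom demanded.
reduction = active_per_loom_1800s / active_per_loom_1900
print(f"active time per loom in 1900: {active_per_loom_1900:.0%}")
print(f"implied per-loom reduction: {reduction:.2f}x")
```

So if the quoted numbers are right, each 1900 loom demanded only about a tenth of the weaver’s day, roughly a threefold drop in intervention per loom, which is what made running eight at once feasible.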
Written with a view to meeting the following criteria:
Short, so as to be digestible at a glance
Get people who will increase the quality of the meetup
Broad enough to capture new and interesting people
“You: enjoy careful thinking and value good communication.”
They enable this sole reliance on truth, without imposing virtual taxes via long lock-up periods.
I am not sure why exactly, but this sentence prompted me to imagine prediction markets differently along a particular dimension. Mostly I imagined a prediction market would wind up organizing its expertise in a way that mirrors the stock market; there are experts in particular types of commodities, in particular industries, and in particular types of transaction, etc.
The “long lock-up period” got me wondering about how to predict longer term outcomes, and the obvious answer was to break up the outcome you are really concerned with into sub-outcomes, enabling faster payouts and communicating information in a more fine-grained way in the bargain. This suggests to me that long-term and high importance outcomes will each have a family of sub-outcomes and therefore each develop into their own areas of expertise.
This looks like it doesn’t have an equivalent in current markets, which strikes me as interesting and possibly important.
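A minimal sketch of the decomposition idea, where every name, question, and date is hypothetical and just for illustration:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Market:
    """One sub-outcome with its own, earlier resolution date."""
    question: str
    resolves: date

@dataclass
class LongTermOutcome:
    """A long-horizon question decomposed into a family of sub-markets."""
    question: str
    sub_markets: list[Market] = field(default_factory=list)

    def next_payout(self) -> date:
        # Traders get paid at the earliest sub-market resolution,
        # rather than locking capital until the headline question resolves.
        return min(m.resolves for m in self.sub_markets)

# Hypothetical example: a 2040 question broken into nearer-term sub-outcomes.
fusion = LongTermOutcome("Commercial fusion power by 2040?")
fusion.sub_markets += [
    Market("Net-positive ignition demonstrated by 2028?", date(2028, 12, 31)),
    Market("Grid-connected pilot plant operating by 2034?", date(2034, 12, 31)),
]
print(fusion.next_payout())  # -> 2028-12-31
```

The point of the sketch is just the payout structure: the sub-markets shorten the lock-up period while also reporting finer-grained information along the way.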
Some things that this idea doesn’t explain:
1. If a shortage of cognitive resources is the problem, why do unemployed people do even less of the things Putnam measures than employed people do?
a. Does the fact of unemployment deplete cognitive resources to a similar or greater degree, perhaps because of loss of status?
b. Am I perhaps misled by the unemployed-people metric, and not comparing like with like? For example, if someone is not working because of disability, I expect that same disability to interfere with volunteer work.
2. No accounting for change in the environment. The omnipresence of advertising could have an effect; the extreme ease of communication could have an effect; both work and community are situated in the physical environment so if just being there demands more resources, that could partially explain it.
a. Although if the mechanism is real, we should expect both work and community to be adversely affected. I note productivity growth has been slowing down, but I have the impression that can be satisfactorily explained by productivity gains from computers achieving saturation and nothing else driving growth. I don’t know of any case where we see previously stable productivity actually declining, which the idea predicts.
More precise, maybe. I don’t think it is a better term.
doesn’t seem like an action that anybody who actually wants to meet should find offensive
You might be right, but whenever I have a thought like this it turns out badly for me.
This is an excellent description of the phenomenon. I have found that a lot of these sorts of problems dissolve if I view my contribution to the group as reducing the information load.
I am tempted to declare that this is the whole of leadership.
Nothing you are saying comes as a surprise, but my confidence in the process remains reduced. The problem here is that once there is a procedure for establishing whether a patient is informed, all of the weight rests on the procedure, and virtually none on the practitioner. This is the same for all fields of expertise.
I have read many of these kinds of forms as a patient. What we want them to be for is informing the patient; what they are actually for is defending against the accusation that the patient was not informed.
The audience was very much preoccupied with how the procedure for informing the patient was conducted, and seemed to consider this the biggest red flag. I find the fact that the lead scientist kept referring to a form to be the biggest red flag, because it suggests he didn’t engage the ethical issues directly.
Suppose for a moment that he did a much better job informing the patients—proper training, third-party-verified composition, etc. I don’t think this would have any implications at all for how He Jiankui engaged with the question of whether it was right to do this, but I do expect the audience to have been largely mollified. I see this as a problem.
I did not know what to expect, but I am not surprised.
I am now interested in how this plays out from an alignment perspective. It seems to me that the ethics of genetic editing have been taken pretty seriously by practitioners, and I’m tempted to make an analogy between the ethics here and safety in AI.
I really hope those kids are okay.
That’s an interesting transcript. It managed to decrease my confidence in the ethical frameworks we have set up around medicine; most of the questions were about forms, training for forms, number of institutions to whom the forms were submitted, etc. Only a few of those questions went right to the heart of the ethical problems. Those questions were:
How do you see your obligation to these children?
Are you sure the parents understood what they were doing?
Would you do this to your own child?
There doesn’t seem to be any articulation of the risks during the session, or plans for dealing with them.
I have heard the same claim, but I don’t find it credible. Even if it were, in order to make a credible attempt the Marine Corps would need the cooperation of the Navy, who don’t have the same level of admiration.
Assuming the trendline cannot continue seems like the Gambler’s Fallacy. Saying we can resume the efficiency of the 1930s research establishment seems like a kind of institution-level Fundamental Attribution Error.
I find the low-hanging-fruit explanation the most intuitive because I assume everything has a fundamental limit and gets harder as we approach that limit as a matter of natural law.
I’m tempted to go one step further and try to look at the value added by each additional discovery; I suspect economic intuitions would be helpful both in comparing like with like and with considering causal factors. I have a nagging suspicion that ‘benefit per discovery’ is largely the same concept as ‘discoveries per researcher’, but I am not able to articulate why.
The United States military is extremely unlikely to launch a coup. In the event any element of it tries, other elements can be relied on to fight them. There are a couple of reasons for this:
1) Our oaths are to the Constitution, which is to say we are formally loyal to the system, not to an office or its occupant. Nominally the Marine Corps has more specific loyalty to the office of the President, but even then sitting Presidents clearly trump aspiring ones.
2) Enlisted hold no special affection for senior military leadership. Partially this is because the organizations are huge and bureaucratic so there is no real contact, and partially this is because they aren’t particularly competent. We’re in a low ebb of military success, so even the famous recent generals you have heard of are famous because they failed-to-fail rather than because they did outstanding work. There are no generals popular enough to move a lot of soldiers to break the law or betray their oaths.
3) At least among the Army infantry, we talked about this kind of thing pretty frequently. I expect that if the military is to have a bad effect during a coup, it is much more likely because of excessive enthusiasm in putting one down.
I am extremely pleased to see surreal numbers put to more practical use (for liberal interpretations of ‘practical’). It’s of no particular relevance to the paper, but when I read for the first time that every number has a game, but not all games have numbers, and thus game-space is larger than number-space, my head exploded.
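For readers who haven’t seen that fact spelled out, the standard minimal example (Conway’s notation, not anything from the paper) is the game star:

```latex
% A game \{L \mid R\} qualifies as a number only when no Left option
% is greater than or equal to any Right option. Star has 0 on both sides:
{*} = \{\, 0 \mid 0 \,\}
% Since 0 \geq 0, star is a game but not a number; it is confused with zero:
{*} \parallel 0 \qquad \text{(neither } {*} < 0 \text{ nor } {*} = 0 \text{ nor } {*} > 0 \text{)}
```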
Contra the other responders, I like Neuromancer.
Of course, I felt largely the same way vis-a-vis the damage and emphasis on style, but that’s the whole pitch of the genre: the world is damaged and the punk aesthetic is the only non-corporate culture remaining, and that largely out of spite.