I am skeptical of the framing “this episode” and the relevance of the Venezuela raid. The Pentagon and the White House have been complaining about Anthropic since September. There is a Pentagon memo dated 1/9 that talks about rewriting contracts to use the phrase “any lawful use.”
Douglas_Knight
It is not clear about OpenAI, but it was never clear about Anthropic. The news coverage has never mentioned any details about enforcement, only the words in the contract. The closest we get is the claim that the Pentagon was upset that Anthropic was asking questions of Palantir, which suggests that Anthropic doesn’t have any direct channel to learn about lines being crossed.
In 2020, the DPA was used to order car companies to manufacture ventilators, a new product. Not for the military, but is that really the relevant detail? The car companies wanted to make ventilators anyhow, so it probably had no effect, but, in principle, this is the most extreme version of DPA.
I think your output number is correct, but all your inputs are incorrect (all in the same direction, not canceling!).
A few thousand nuclear weapons is too low even today. The peak number was about 60k. Maybe most of the bombs are aimed at rural areas, but how many are aimed at cities? 40%? First strike bombs should be aimed at military targets, but aren’t second strike missiles aimed at populations in revenge? Doesn’t RAND famously talk about cities as hostages? I believe Ellsberg’s book claims that an awful lot of first strike planning was aimed at cities, probably most.
But when Ellsberg asked the Pentagon in 1960 what would be the death toll, they estimated 600 million, evenly divided between the two sides, about 80% immediate and 20% from fallout.
How can we know that these examples are real?
Roman Malov links to a Hank Green video with much more mundane examples, like rumble strips on highways. The falling automobile death rate is evidence that car interventions are doing something right, if not proof of a particular example like rumble strips. But how do I know that the Y2K problem was not overblown? If a few systems had had big disasters, I could estimate how much all the other systems had accomplished by avoiding them. But if no one had disasters, I have to consider the possibility that the problem was overblown and the effort expended on the fix wasted.
You assert that fighting disease has increasing marginal costs. It is probably true that physical acts like distributing mosquito nets have increasing marginal costs. But we should test this and find out how fast they grow. Moreover, the sign of the marginal change in deaths averted per net is really not obvious and has to be measured. It might go down as you run out of people who would use them. Or it might go up, as you achieve herd immunity; or by keeping transmission out of houses, you might cause malaria to evolve to be less debilitating.
PEPFAR is a counterexample to your claim. Its original implementation had poor effectiveness per dose, in line with predictions. In 2006 it switched to generics, expecting to increase effectiveness per dollar by a factor of 10, leaving it still ineffective per dollar. But increasing doses by 10x turned out to be much more than 10x as effective. It is quite mysterious why its effectiveness improved, the best candidate being herd immunity.
The idea of coating the bottom of the pan with tin oxide reminds me of attaching iron to the bottom of a pan to make it work with induction stoves. Microwaves are more flexible in that you could put the tin oxide in an arbitrary shape, such as a wok, while induction requires the ferrous component to be close to the “burner,” but induction stoves seem like a pretty good compromise, covering the advantages you mention.
Does Smith talk about any device with a complicated distribution of receivers? I recently encountered a sandwich press that goes in the microwave. This could be unfolded to lie flat, but I think it was intended to be microwaved folded, heating both sides of the sandwich. What else exists? When I search “microwave accessories” I just find things that don’t absorb microwaves. Is there some other search term? Related products on Amazon lead to what seem to be modern microwave skillets, largely flat.
Yes, if you’re going to freeze eggs or embryos, the earlier the better. But what are the tradeoffs between those two choices? Eggs postpone the future choice of sperm, while embryos freeze better. You can put these in the same units: at what age does the yield from freezing embryos at that age match the yield from freezing eggs at age 20?
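To make that comparison concrete, here is a toy break-even calculation. Every number in it (the yield curve, the embryo advantage factor) is a made-up placeholder, not clinical data; the point is only the shape of the question.

```python
import math

# Hypothetical live births per oocyte frozen at `age` (toy decay curve;
# the 0.08 baseline and 0.06 decay rate are illustrative placeholders).
def egg_yield(age):
    return 0.08 * math.exp(-0.06 * (age - 20))

# Hypothetical: embryos survive freeze/thaw better by this constant factor.
EMBRYO_ADVANTAGE = 1.5

def embryo_yield(age):
    return EMBRYO_ADVANTAGE * egg_yield(age)

# At what age does freezing embryos match freezing eggs at age 20?
# Solve EMBRYO_ADVANTAGE * egg_yield(a) = egg_yield(20) for a.
breakeven = 20 + math.log(EMBRYO_ADVANTAGE) / 0.06
print(round(breakeven, 1))  # ~26.8 under these toy parameters
```

Under these placeholder numbers, freezing embryos at about 27 matches freezing eggs at 20; with real yield curves the answer would of course differ, but this is the computation the question asks for.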
Most of the time people say “Schelling point” they mean this. Maybe it would be better to call it a Schelling fence, but even that post claims that it is a Schelling point. I suspect that you can reframe it to make it a true Schelling point, such as the participants coordinating to approximate the real game by a smaller tractable game, but I’m not sure.
I wanted to take measure theory in college, but my advisor talked me out of it, saying that it is an old, ossified field where writers play games of streamlining their proofs. They seek too much generality and defer applications to later courses. That complaint could apply more generally, that introductory graduate classes are bad because they have captive audiences, but it seems to me much worse in analysis than other fields of mathematics. What is the point of measure theory? Archimedes gave a rigorous delta-epsilon proof that if there is a coherent notion of measure, then the area of a circle is πr². But how do you know that you don’t encounter inconsistencies?
Applications are related to constructibility. If you know what your goal is, you can see whether you can skip the axiom of choice. Indeed, as I phrased it above, the goal is to show that measure is defined on some sigma algebra, not just the maximal one. Measurability is also related to constructibility. Why do we want measurable functions? What is a function? If a function is something you can apply at a point, then from a constructive viewpoint it must be continuous. But you can constructively describe things like infinite Fourier series. You can’t evaluate them at points; you can only do other things, like compute an average over a small interval. You want a theorem that the Hilbert space of square-integrable functions on the circle is isometric to the Hilbert space of square-summable sequences, L²(S¹)=ℓ². Usually you define L² as measurable functions up to the equivalence relation of equality away from a measure-zero set. But you could instead define it as the metric completion of the infinitely differentiable functions under the appropriate norm. This is a much better definition for many reasons, including constructibility, but it requires you to open up your definition of function.
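Spelled out, the isometry is Parseval’s theorem: with the normalized measure on the circle, the map sending a function to its Fourier coefficients preserves the norm,

```latex
\hat f(n) \;=\; \frac{1}{2\pi}\int_0^{2\pi} f(\theta)\, e^{-in\theta}\, d\theta,
\qquad
\|f\|_{L^2(S^1)}^2 \;=\; \frac{1}{2\pi}\int_0^{2\pi} |f(\theta)|^2\, d\theta
\;=\; \sum_{n\in\mathbb{Z}} |\hat f(n)|^2 ,
```

and this map is an isometric isomorphism L²(S¹) ≅ ℓ²(ℤ). On the completion-of-smooth-functions definition, the identity is first checked for smooth f and then extends by continuity.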
Here are two alternate books. Measure and Probability by Adams and Guillemin is a book about measure theory that tries to justify it by the context of things like 0-1 laws of probability. I’m not sure it succeeds in the justification, but it gives something more serious to think about if you want to drop the axiom of choice or the law of the excluded middle. Also, see this MO question.
The second book is more advanced, outside of the scope of this post. After measure theory, one has functional analysis, the study of infinite dimensional topological vector spaces of functions. I once heard it described as “degenerate topology.” For this, I recommend Essential Results of Functional Analysis by Robert Zimmer. It gives a bunch of applications to differential equations with a geometric flavor. It minimizes the amount of theory to get to the applications, in particular, by only using Hilbert spaces, not general Banach spaces.
Could you say more about taste? How fast can you evaluate the taste of a book? If it’s fast, could you check whether there was a trajectory over the 12 editions?
showcasing the fearsome technicality of the topic in excruciatingly detailed estimates (proofs involving chains of inequalities, typically ending on “< ε”).
That sounds bad. The ultimate proof is a chain of inequalities, but just presenting it is bad compared to deriving it.
“I wish we had the education system they have in Doorways in the Sand,” I said… “Did you know, there’s a new Heinlein? The Number of the Beast. And he’s borrowed the idea of that education system, where you study all those different things and sign up and graduate when you have enough credits in everything, and you can keep taking courses forever if you want, but he doesn’t acknowledge Zelazny anywhere.”
Wim laughed. “That’s what they really do in America,” he said.
— Jo Walton, Among Others
I found this Terry Tao blog post helpful. In particular, this paragraph,
It is difficult to prove that no conspiracy between the primes exist. However, it is not entirely impossible, because we have been able to exploit two important phenomena. The first is that there is often a “all or nothing dichotomy” (somewhat resembling the zero-one laws in probability) regarding conspiracies: in the asymptotic limit, the primes can either conspire totally (or more precisely, anti-conspire totally) with a multiplicative function, or fail to conspire at all, but there is no middle ground. (In the language of Dirichlet series, this is reflected in the fact that zeroes of a meromorphic function can have order 1, or order 0 (i.e. are not zeroes after all), but cannot have an intermediate order between 0 and 1.) As a corollary of this fact, the prime numbers cannot conspire with two distinct multiplicative functions at once (by having a partial correlation with one and another partial correlation with another); thus one can use the existence of one conspiracy to exclude all the others. In other words, there is at most one conspiracy that can significantly distort the distribution of the primes. Unfortunately, this argument is ineffective, because it doesn’t give any control at all on what that conspiracy is, or even if it exists in the first place!
But I’m not sure how much this is just restating the problem.
Yes, if we accept your ifs, we conclude that the new business is net negative. This really happens and some new businesses really are net negative (though I think this is negligible compared to negative externalities). But why think your assumptions are normal? Why think that the fixed cost of the business is larger than the time savings of the closer customers? Why expect no price competition, no price sensitivity?
There is a standard analysis of competition. If you reject it, it would be good to address it, rather than ignoring it. The standard analysis is that competition reduces prices. The first order effect of reducing prices is a transfer from producer surplus to consumer surplus, taken as morally neutral. But the lower price induces more sales, creating increased surplus. The expectation is that the first order neutral effect swamps the second order positive effect, which swamps the fixed costs.
The producer surplus is a rent. It induces rent-seeking. The second company to enter the market is mainly driven by rent-seeking. But by lowering the price they probably produce much more aggregate surplus than they capture. The more competitive the market, the lower the rents and the less new entrants are driven by rent-seeking. Late entrants are driven by the belief that they are more efficient.
The producer surplus is a rent. It induces rent-seeking. One form of that rent-seeking is new entrants, but another form is parasites within the organization, which seem much worse to me. Competition applies discipline which discourages these parasites. If the producers are innovative, you might think that they will make better use of the surplus than the consumers. If you do not expect parasites, maybe it would be better for innovators to capture more wealth. Maybe this was true a century ago, but it seems to me very far from true today. So I think the dispersal of wealth by transferring from producer surplus to consumer wealth is morally good by discouraging parasites within larger firms.
Putting lamps in ducts is not very different from putting filters in ducts; but with the downside that I’m a lot more worried about fraudulent lamps than filters. I guess it’s easy to retrofit a lamp into a duct, whereas a filter slows the air; but you probably already have a system designed with a filter.
The point of lamps is to use them in an open room where they cover the whole volume continuously.
This is standard today, but how recent is it? It looks like the industrial age to me.
How much of institutions is about solving akrasia and how much is about breaking the ability to act autonomously?
We get the word akrasia from Plato, but was he really talking about the same thing?
There is always the question of whether to study things bottom-up or top-down. These are bottom-up studies of what to do if you have a single infected patient. If you had an individual infected with a novel cold, that would be important, but we are generally interested in epidemics. In particular, why do colds go epidemic in the winter? We know there must be some environmental change. Maybe it’s a small change, since it only takes a small change in reproduction number to cause an epidemic. Then these controlled experiments might identify the main method of transmission. But maybe the change from summer to winter is a big change that swamps the effects we can measure in these bottom-up experiments.
I think Benquo’s claim is that most institutions do want financial fraud. Most people don’t, but a big constituency arranges to profit from it and to corrupt institutions. So his advice is aimed at the individual, not to blindly trust institutions.
It’s one thing to say “I know X is controversial, but I want to assume it and talk about the consequences.” But saying “this article is for people who know X, not the ignorant” leaves X fair game to attack.
I have a different complaint. Saying “Epstein stuff” and not being specific could create an alliance between people who believe contradictory things. This is a common pattern. For a crisp example, consider anti-carb as an alliance between people who think glucose is fine and fructose is poison with people who think the opposite.