Also, “Quirrell” is globally missing its second l.
localdeity
Yeah, I assumed the same. The chapter specifies “episodic memory” (although, somewhat confusingly, it says “everything” earlier in the sentence):
Everything, forget everything, Tom Riddle, Professor Quirrell, forget your whole life, forget your entire episodic memory, forget the disappointment and the bitterness and the wrong decisions, forget Voldemort -
It seems this is a real thing that can happen. “In the case of dissociative amnesia, individuals are separated from their memories … they may forget who they are and everything about themselves and their personal history”, yet they can walk and talk and do everything well enough to “move to a new location and establish a new identity” as an adult: https://www.psychologytoday.com/us/conditions/dissociative-amnesia
The title, “Utility Maximization = Description Length Minimization”, and likewise the bolded statement, “to “optimize” a system is to reduce the number of bits required to represent the system state using a particular encoding”, strike me as wrong in the general case, or as only true in a degenerate sense that can’t imply much. This is unfortunate, because it inclines me to dismiss the rest of the post.
Suppose that the state of the world can be represented in 100 bits. Suppose my utility function assigns a 0 to each of 2^98 states (which I “hate”), and a 1 to all the remaining (2^100 − 2^98) states (which I “like”). Let’s imagine I chose those 2^98 states randomly, so there is no discernible pattern among them.
You would need 99.58 bits to represent one state out of the states that I like. So “optimizing” the world would mean reducing it from a 100-bit space to a 99.58-bit space (which you would probably end up encoding with 100 bits in practice). While it’s technically true that optimizing always implies shrinking the state space, the amount of shrinking can be arbitrarily tiny, and is not necessarily proportional to the amount by which the expected utility changes. Thus my objection to the title and early statement.
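For concreteness, the arithmetic can be checked directly (a quick sketch using the numbers from the scenario above; "99.58" is the value rounded down):

```python
import math

TOTAL_BITS = 100
hated = 2**98                        # states assigned utility 0
liked = 2**TOTAL_BITS - hated        # states assigned utility 1

# Bits needed to index one state among the liked ones:
bits_liked = math.log2(liked)        # = 98 + log2(3) ≈ 99.585

# The shrinkage that "optimizing" buys you:
shrinkage = TOTAL_BITS - bits_liked  # ≈ 0.415 bits
```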
It probably is true in practice that most real utility functions are much more constraining than the above scenario. (For example, if you imagine all the possible configurations of the atoms that make up a human, only a tiny fraction of them correspond to a living human.) There might be interesting things to say about that. However, the post doesn’t seem to base its central arguments on that.
Given what is said later about using K-L divergence to decompose the problem into “reducing entropy” + “changing between similar-entropy distributions”, I could say that the post makes the case for me: that a more accurate title would be “Utility Maximization = Description Length Minimization + Other Changes” (I don’t have a good name for the second component).
There is not any meaningful sense in which utility changes are “large” or “small” in the first place, except compared to other changes in the same utility function.
We can establish a utility scale by tweaking the values a bit. Let’s say that in my favored 3⁄4 of the state space, half the values are 1 and the other half are 2. Then we can set the disfavored 1⁄4 to 0, to −100, to −10^100, etc., and get utility functions that aren’t equivalent. Anyway, in practice I expect we would already have some reasonable unit established by the problem’s background—for example, if the payoffs are given in terms of number of lives saved, or in units of “the cost of the action that ‘optimizes’ the situation”.
Satisfying your preferences requires shrinking the world-space by a relatively tiny amount, and that’s important. [...] satisfying your preferences is “easy” and “doesn’t require optimizing very much”; you have a very large target to hit.
So the theory is that the fraction by which you shrink the state space (or maybe its logarithm) is proportional to the effort involved. That might be a better heuristic than none at all, but it is by no means true in general. If we say I’m going to type 100 digits, and then I decide what those digits are and type them out, I’m shrinking the state-space by a factor of 10^100. If we say my net worth is between $0 and $10^12, and then I make my net worth be $10^12, I’m shrinking the state-space (in that formulation of the world) by a factor of only 10^12 (or perhaps 10^14 if cents are allowed); but the former is enormously easier for me to do than the latter. In practice, again, I think the problem’s background would give much better ways to estimate the cost of the “optimization” actions.
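In bits, the gap between those two examples is stark (a sketch; the whole-dollar granularity is the hypothetical one from the example):

```python
import math

# Typing 100 decimal digits: one outcome chosen out of 10**100.
digit_bits = math.log2(10**100)        # ≈ 332.2 bits of shrinkage

# Net worth pinned to $10**12, out of 10**12 + 1 whole-dollar values:
networth_bits = math.log2(10**12 + 1)  # ≈ 39.9 bits of shrinkage
```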
(Edit: If you want an entirely self-contained example, consider: A wall with 10 rows of 10 cubby-holes, and you have 10 heavy rocks. One person wants the rocks to fill out the bottom row, another wants them to fill out the left column, and a third wants them on the top row. At least if we consider the state space to just be the positions of the rocks, then each of these people wants the same amount of state-space shrinking, but they cost different amounts of physical work to arrange.)
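Treating the state space as just the rocks’ positions, the counting works out identically for all three people (a sketch, assuming interchangeable rocks and at most one rock per hole):

```python
import math

HOLES = 100  # 10 rows of 10 cubby-holes
ROCKS = 10

# Number of possible placements of 10 indistinguishable rocks:
n_states = math.comb(HOLES, ROCKS)

# Each person's target (bottom row, left column, top row) is exactly
# one placement, so each wants the same amount of shrinkage:
shrinkage_bits = math.log2(n_states)  # ≈ 44 bits, for all three
```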
I’m guessing that the best application of the idea would be as one of the basic first lenses you’d use to examine/classify a completely alien utility function.
Hmm. If we bring actual thermodynamics into the picture, then I think that energy stored in some very usable way (say, a charged battery) has a small number of possible states, whereas when you expend it, it generally ends up as waste heat that has a lot of possible states. In that case, if someone wants to take a bunch of stored energy and spend it on, say, making a robot rotate a huge die made of rock into a certain orientation, then that actually leads to a larger state space than someone else’s preference to keep the energy where it is, even though we’d probably say that the former is costlier than the latter. We could also imagine a third person who prefers to spend the same amount of energy arranging 1000 smaller dice—same “cost”, but exponentially (in the mathematical sense) different state space shrinkage.
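To put rough numbers on the dice comparison (a sketch: I’m counting only orientations, ignoring positions and the thermodynamic bookkeeping; a cube has 24 rotational orientations):

```python
import math

ORIENTATIONS = 24  # rotational orientations of a cube

# One huge die rotated into a specific orientation:
one_die_bits = math.log2(ORIENTATIONS)           # ≈ 4.6 bits

# 1000 small dice, same energy budget, each pinned down:
many_dice_bits = 1000 * math.log2(ORIENTATIONS)  # ≈ 4585 bits
```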
It seems that, no matter how you conceptualize things, it’s fairly easy to construct a set of examples in which state space shrinkage bears little if any correlation to either “expected utility” or “cost”.
At a glance, I don’t think I’ve seen the following points made, so I’ll do so:
The general approach from math and the sciences is to make the definitions rigorous, from which the intended conclusions will necessarily follow. For example, any object of mass 10 kg near Earth’s surface will experience a force of roughly 98.1 N toward Earth’s center due to Earth’s gravity. There is no “non-central example” of “an object of mass 10 kg near Earth’s surface”—well, perhaps I should specify what “near” is, for example as being within 1% of 6371 km of Earth’s center. Then quantifying that allows me to quantify “roughly” as well, by plugging the minimum and maximum radius values into GmM/r^2.
Observe how we refine both the conditions and the conclusions in the above.
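A sketch of that refinement in numbers (G, Earth’s mass, and the mean radius are standard reference values, not from the original):

```python
G = 6.674e-11   # gravitational constant, N*m^2/kg^2
M = 5.972e24    # Earth's mass, kg
m = 10.0        # object's mass, kg
r0 = 6.371e6    # Earth's mean radius, m

def gravity(r):
    return G * m * M / r**2

f_nominal = gravity(r0)     # ≈ 98.2 N
f_min = gravity(r0 * 1.01)  # farthest allowed radius: weakest pull
f_max = gravity(r0 * 0.99)  # nearest allowed radius: strongest pull
# So "roughly 98.1 N" sharpens to: between about 96.3 N and 100.2 N.
```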
Arguments that have been refined in this way will be of the form “1. Object X meets the conditions of belonging to group Y. 2. It is a theorem that statement Z applies to all objects in group Y. 3. Therefore Z(X).”
If you have the proof of the theorem at hand, then it should be easy to taboo the name of group Y and just plug X into the text of the proof and get an equally rigorous argument. Sometimes that would be a better approach to the whole issue; though if group Y is common enough, it might be worth the work of establishing the definition.
To apply it to the “taxation is theft” argument: Well, basically, we want to make a rigorous definition of “theft” (or possibly a new term) that covers taxation, and see how much we can retain.
Taxation is nonconsensual taking of someone’s rightful property (some might argue about “consent” in a democracy; let’s assume that Bob is objecting to being taxed, and he did everything he could to vote against it, to support politicians who said they would reduce or even abolish taxes, etc., yet was unsuccessful). We would therefore be able to make arguments like “under certain moral systems, taxation is immoral”, and “because it doesn’t require the consent of those being taxed, even if some instances of taxation were net good in some way, we’d expect to end up with a lot more instances than that unless there were strong barriers preventing it”, and “taxation reduces people’s incentive to trade their labor for property, because some of that property will go missing”, and “taxation incentivizes people to spend energy arranging their possessions in ways that are less likely to get taken, which is a waste”.
On the other hand, certain other characteristics that are common in theft are not the case: taxes are generally mostly known in advance, while theft is mostly unpredictable; and where theft might take a poor-ish person whose primary asset is a car and suddenly bankrupt them, taxes are unlikely to do that.
Taxation is centralized, systematized theft. The systematization has its benefits and civilizing effects: economies of scale and reduced risk and variation in the collection process—similar to what you get when you industrialize other processes. Also, we probably benefit from a “tragedy of the commons” among those who receive the taxes: most individuals don’t have a strong incentive to raise taxes a lot.
For murder and capital punishment: Let’s first note that the legal profession has made distinctions: “first-degree murder”, “second-degree”, etc. (it seems to vary by jurisdiction), not to mention manslaughter, and of course there are cases like self-defense where it may not even be a crime. “Homicide” is what they call “killing” without implying anything about the legality. First-degree murder, the worst, seems to mean “murder pre-meditated in cold blood”, while the lesser degrees apply when there are extenuating circumstances and less pre-meditation.
Executing a prisoner is 100% pre-meditated in cold blood. The argument for it to be legal is to cast it as self-defense and/or revenge. Are revenge killings legal? It seems like they sort of used to be, and then at some point States generally disallowed individuals from doing it, while arrogating that function to themselves. As for self-defense… one could argue that the criminal, having committed their crimes (like murder), has shown they are a threat, but really that’s not a strong enough data point. (What fraction of murderers do it again? … A Google result says between 2% and 16% for different groups. What if they killed their brother out of enmity that began in childhood, and they have no more brothers? Also, “got drunk and angry in a bar argument and killed a stranger” is a lot more likely to recur, yet would probably be second-degree, while the “brother” scenario might be first-degree.)
Capital punishment is centralized, systematized revenge-killing. Once again, the systematization brings benefits and civilizing effects: economies of scale, reduced risk and variation. I would not say that this changes the morality of it, only the tactical utility. (I haven’t actually said whether I think revenge-killing itself is moral.)
Anyway, on the subject of the original frame—”capital punishment is murder”, given the definition “murder = killing without proper justification”, is assuming the conclusion—that capital punishment should be illegal. If you want a different definition, I would say use a different term. If it were “capital punishment is killing”, that would be an uncontroversial statement of fact; nor would the argument “killing is necessarily bad” persuade more than a few pacifists.
“Capital punishment is revenge-killing” would be the closest to an argument we can break into its pieces: “killing people for retaliation is bad (to the point where we should have a policy against it)” and “capital punishment is a policy of killing in retaliation”, each of which we can then attempt to justify. Though some of the arguments people would like to make, like “revenge killings generally lead to generations-long family feuds”, would not extend to the State’s centralized revenge killings. In constructing or evaluating such arguments, the key technique of rigor is to notice statements that are actually “(we’ve seen in the past that) revenge killings (often) lead to family feuds” when they should be “(we can prove that) revenge killings (necessarily create conditions that likely) lead to family feuds”. In trying to prove the latter, you should either notice that the definition of “revenge killing” doesn’t specify that it’s carried out by a family member, or, if it does, notice that that clause of the definition doesn’t apply to capital punishment.
I recently learned of a free (donation-funded) service, siftrss.com, wherein you can take an RSS feed and do text-based filtering on any of its fields to produce a new RSS feed. (I’ve made a few feeds with it and it seems to work well.) I suspect you could filter based on the “category” field.
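I don’t know siftrss’s implementation, but the core operation is simple; here’s a minimal sketch of category-based filtering over RSS XML (the feed contents and names are made up for illustration):

```python
import xml.etree.ElementTree as ET

# A tiny made-up RSS feed, standing in for a real one:
RSS = """<rss version="2.0"><channel>
  <title>Example feed</title>
  <item><title>Keep me</title><category>news</category></item>
  <item><title>Drop me</title><category>sports</category></item>
</channel></rss>"""

def keep_category(rss_text, wanted):
    """Return a copy of the feed keeping only items in the wanted category."""
    root = ET.fromstring(rss_text)
    channel = root.find("channel")
    for item in channel.findall("item"):  # findall returns a list, so
        if item.findtext("category") != wanted:  # removing while looping is safe
            channel.remove(item)
    return ET.tostring(root, encoding="unicode")

filtered = keep_category(RSS, "news")
```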
Things that come to mind:
If we consider a textbook, then obviously the factual contents are usually way more than what could be summarized in one sentence. “Twenty or so mathematical concepts, plus 1-5 ways to manipulate each concept and link it with the others”, except that’s not the factual content, it’s merely a description of the factual content.
But also with a textbook, ideally, as you the reader go through it, you work with it: in a math textbook you work through examples or try to prove things yourself, in a science textbook you ask questions like “Is that really true?” or “How did they know that?” or “Does that mean someone could make technology exploit that effect? Are they already doing this?” or other stuff. This will (a) solidify the knowledge in your mind and (b) give you practice at thinking / investigating the subject matter.
Even if some nonfictional book is not a textbook and consists entirely of a thesis sentence and a collection of evidence and arguments to prove the thesis, you the reader can work with it: take each thing and ask how it might be wrong, if the evidence admits different interpretations, if there’s a hole in the logic. You can come away from it with (a) good practice at interrogating claims, (b) either familiarity with a good example of how to do investigation of that subject, or knowledge that the author was deficient and yet managed to get their book published; and (c) either a solid understanding of the important things connected to the central claim, or a list of holes the author didn’t fill and that you might follow up on.
Regarding the walking anecdote, a bit of devil’s advocate: Walking, by itself, has been shown to improve problem solving, and creative thinking in particular. “In one of those experiments, participants were tested indoors – first while sitting, then while walking on a treadmill. The creative output increased by an average of 60 percent when the person was walking, according to the study.”
https://news.stanford.edu/2014/04/24/walking-vs-sitting-042414/
Probably most of the below is not new, but I feel like going through the exercise of laying it out:
The absolute best ad is one that tells a user about a product they didn’t know about, which is superior in some way (features, cost, whatever), and leads the user to go buy the product and be satisfied with that decision. That is win-win for all involved. (This extends to “products” that are free, like “go join this website or club”.) If every single ad exposure went like this, then I don’t think anyone would have a problem with ads.
If ad targeting were perfect, then that might be possible. However, it’s unlikely that ad targeting will ever be perfect. In fact, I imagine it’ll always be very far from perfect. So we then consider the impact on those who don’t buy the product. (We’ll also later consider other kinds of ads.)
An ad that doesn’t lead to a purchase (or other direct action, probably a click at the very least) is, for the user, a waste. How onerous the waste is depends on the characteristics of the ad. If it’s a loud autoplaying video that takes over the entire webpage and can’t be closed (without closing the tab) until the ad finishes, that’s pretty damn annoying. If it’s a blob of text or a static image on the side or top of the page, that is roughly the least obtrusive that an ad on a webpage can be. An animated banner is more distracting (and therefore more irritating), although if it can be scrolled away from, that reduces the impact.
There is the “banner blindness” thing, where users learn to ignore things that look like ad banners without looking at them (sometimes leading them to ignore actual website content). It’s not complete blindness, but it does reduce exposure. There is, of course, a tradeoff between how hard an ad is to ignore and how much exposure the ad gets. Of course, there’s also presumably a correlation between how much a user wants to ignore ads and how unlikely it is that they’d want whatever the ad gives them.
In other media… Video commercials on TV are often unskippable (though at least originally this was a technological limitation). On video sites, they often can be skipped, or skipped after the first 5 seconds; this seems like a very good thing.
So that’s the aspect of how quickly and easily the user can ignore the ad. Then there’s the content of the ad. A non-interested user can still find an ad funny (e.g. some Geico ads), nice to look at, or otherwise derive value from it. Or it can be aversive, in many ways.
For sensory reasons (roughly all my senses are hypersensitive), I find most ads offensive. TV ads are always unpleasantly louder than the TV shows; when I watched actual TV, I would always have to hit the down-volume button a few times, or just hit the mute button—I developed the habit of doing the latter whenever ads came on. I also find rapid light-flashing and scene-changing (the types of things that, when more intense, yield warnings about epilepsy) unpleasant, yet this tends to happen a lot in ads as well (particularly movie trailers); for an extreme example of what I mean, cover your eyes and check out the Youtube of “SELFIE (Official Music Video)”; for a real example, I just checked out the first movie trailer I found, the “Loki” trailer from Apr 5, and in most of it there are scene changes literally every 1-3 seconds, and yes, this is unpleasant. Ads that take over a webpage, I find infuriating, doubly so if the ESC key doesn’t dismiss it. Animated banner and video ads—the more movement in my peripheral vision I can’t avoid, the more irritating it is, and sometimes I resort to using the browser inspector tool to delete the object. Also my internet bandwidth isn’t too high, and I hate it when my laptop slows down or when the fans spin up (especially when it’s due to a tab I’m not even viewing), so animated and video ads tend to bother me from a resources perspective too.
Then there are the informational aspects of the content of the ad. From prior experiences, in video commercials, I expect a bunch of manipulative bullshit (in the Harry Frankfurt sense): the facts will be cherry-picked and distorted, it’ll try to promote some social norms (all admirable people do x and y and z, which our product helps with) that I’ll have to reflexively oppose; and it may try to hurt me emotionally (not that I’ve seen these examples in particular, but imagine an ad for a dating service deliberately making the viewer feel lonely, or an ad for life insurance making the viewer think about death). (And political ads can define their own category of harmful-if-believed, though thankfully I’ve seen very few of them.) I imagine I can resist it all, by thinking to myself about the ways it’s wrong or manipulative, but that takes work, and certainly distracts me from whatever else I wanted to do; it’s more efficient to mute the audio and pay half-attention to doing something else until my peripheral vision tells me the ad is finishing, which is what I usually do when adblockers aren’t applicable. It’s certainly a lose-lose: I get annoyed and waste time, and the company doesn’t make any progress.
The history of ads seems to be a history of advertisers defecting as hard as they can, and occasionally getting reined in by powerful platforms. Remember pop-up ads, and autoplaying loud videos? It took intervention by browser vendors to stop that. Not all advertisers were doing it, but the incentives pointed in that direction, and it has soured me (and, I’m sure, many others) toward broad categories of ads.
So my reaction to many ads is “fuck you, I consider this defecting against me and I’d like to retaliate somehow if I could”. Like, if I could spend $1 to cause $1 of economic damage to whoever was responsible for putting it in my face, I’d probably do that. And, of course, on principle I try to avoid letting my brain acquire or retain any of the informational content of the ad (like the name of the product). For game-theoretic purposes, I would be happy to take an Unbreakable Vow that I would never let these ads affect my purchasing or other behavior. If I didn’t have adblockers… Well, I’d probably spend a lot of effort to help create them.
I’m sure my opinions are not universal (especially the sensory issues). (Though that adds another layer of insult: “Yeah, we agree showing you this ad is certainly lose-lose, but we’ll do it anyway because it works on enough other people and we can’t be bothered to distinguish you from the average.”) But I’m also sure that some others feel the same way. And probably lots of people (the majority?) have categories of ads that it’s never worth showing to them.
So now let’s talk about the possibility of ads being benign—for me, at least (I imagine I’m one of the toughest customers).
For me, text and static-image ads (ones that you can scroll away from) on most webpages are benign. On video sites, video ads that I can skip after a couple of seconds are tolerable, but showing them to me is still lose-lose unless I start watching them, which will only happen if I start finding them reliably pleasant; that could happen if I see them being reliably funny and non-manipulative (and if they don’t repeat the same ones too many times) and otherwise not offensive to my sensibilities. The way things are, I doubt this will happen, but if it did, that would be nice. (Come to think of it, I generally find music in the ad manipulative; this alone probably rules out >90% of video ads.)
A radio ad that I would love to encounter if it were for a real product (even though I probably wouldn’t buy it):
SlateStarCodex had a few ads that were static images. A nice example is the MealSquares ad (disclosure: MealSquares customer). It’s a static PNG image (hence easy to ignore), and mildly humorous: https://slatestarcodex.com/blog_images/mealsquares_ad.png Stack Overflow is another site that uses text and static-image ads; those are fine.
I know advertisers probably pay less for unobtrusive ads (i.e. static text/images) than for invasive ones. That is fine; if they were priced efficiently, showing me the invasive ads would pay zero, because of my “fuck you I want to make you regret this” response. If that means some things I use would start charging me, would have to set up a Patreon-like model, or would go out of business, that is fine; I would deal with that one way or another.
Now, generalizing. It is possible that tracking and targeting could be used to make ads benign and profitable. The ideal system would know about my sensory and other issues and would know to only serve me static ads on most websites, and benign videos on video sites; it might even strip the music from videos that it showed me. It would probably have a “fuck you” button I could use on stuff I hated, which would give me an interface that let me configure away the types of ads I didn’t want (as I’ve described the categories above). (I believe Google Ads has some ability for a user to say “Don’t show me this ad”; I haven’t used it, but my guess is that it’s an opaque whack-a-mole process, and I would expect to still be seeing crap I disliked after marking 20 things.) It would know I’m an ascetic who rarely buys physical goods anyway (and who usually searches for comparison review articles when I do buy them). It would have some notion of the value I place on my time. It would know that I get annoyed by repetition more quickly than most people (at least for video ads, I probably wouldn’t want to see them more than twice). Likely it would often conclude that there was no ad worth showing me.
This would probably require massive changes. At the moment, as I say, I think the advertisers are defecting as hard as they can. They’re in a tragedy-of-the-commons game: whenever one of them puts a more-manipulative or more-intrusive ad into an allowed place, it makes people resent all ads from that place and want to ignore them, but for the individual advertiser, the extra benefit from that ad probably exceeds the damage to them. To resolve this, it seems that one entity needs to own each “channel that distributes the ads”, so all the damage is experienced by them (possibly with some kind of future contracts or insurance to try to bring the “long-term” damage into the present) and gives them an incentive to reject bad ads, and to give the user a “fuck you” button and interface to help them serve only ads that the user actually likes.
Google is probably in a position to make that happen; AIUI they pretty much own the ad distribution channels. They have the resources to implement things like “multiple versions of ads, for those who hate xyz” (the “strip out the music” option). And, of course, if anyone has the data to get the targeting right, they do (though I’ve heard it can be fairly crude anyway). On the other hand, they likely don’t have short-term incentives to implement this stuff, and I don’t know if they have the right kind of people with political capital in the organization who would want to implement this (if it even would be business-sensible).
Right now my “fuck you” button is my adblocker, which has very wide collateral damage. (Sometimes I view the internet on my phone (which has no adblocker), and I see “Oh, right, this is how the other half lives”, and generally don’t stay too long.) It is under my control, which is important, and I think I will always want to have it as a fallback; I don’t think any organization can be trusted in this domain unless the user has them by the balls (i.e. can kick them away and find an alternative easily). (As far back as cable TV, people have introduced new things with the selling point “these are ad-free!”, and then, once enough customers have switched over and developed inertia, the advertisers have offered a big enough pile of cash to get the new platform to betray its promise.)
I suppose it’s possible that some random-ass people could implement a fine-grained adblocker with the customizability that I would like. At the moment, I wouldn’t have an incentive to switch to it from my blunter adblocker, but perhaps I and others eventually would. If that happened, the next thing would be “advertisers bribe the authors”, but as long as it’s an open-source thing, there’d likely be at least one competent developer who’d maintain a noncorrupted fork. Such people likely wouldn’t have the resources to do things like “use machine learning to detect emotional manipulation”, but they could at least “outlaw all but text and static image ads”. If enough users switched to it, then maybe that would incentivize ad platforms to duplicate the functionality and always deliver ads that the user won’t want to block.
Meh. At the moment it seems the most likely way for the best stuff to happen is some visionary at Google doing stuff that turns out good enough. Maybe Brave will do something. I guess we’ll see.
Each individual website’s advertising space is its own channel.
Upon reflection, individual management of each website’s channel kind of works: I can imagine knowing and trusting some websites and their ad systems, and having bad behavior on other websites not sour me against the first set. However, it doesn’t work for the undifferentiated mass of websites I’ve rarely or never seen before. The no-name websites would have an incentive to defect, because the negative impact is spread among the many others (also, a no-name website likely has a shorter expected lifetime, and therefore a shorter planning horizon).
Now, if those websites mostly outsource their advertisement to one big long-lived monopolistic company—say, if 90% of the market farms it out to Google—then that company does absorb most of the damage from bad ads, and thus has a decent incentive to have policies against bad ads (and to maintain a good “fuck you” button). (Well, due to corporate dysfunction, the actual planning horizon of the decisionmakers in the company may be disappointingly short. Perhaps betting markets—who knows.) It’s possible that economies of scale and network effects will mean that, even if bad ads are more effective (in the short term), the other advantages of using Google outweigh those of the bad ads.
Still, if we figure Google has a few competitors (in the “farm out your ads” space) that are nearly as effective and that allow worse ads, it’s possible the competitors would start gaining ground. If they gain enough ground, they might end up in a similar position as Google and start finding it in their interest to cut out more bad ads, but that could take a while. And if you end up with an oligopoly of, say, four companies, the smallest of which has 10% of the market, it’s possible that the difference between “absorbs all the damage from bad ads” and “absorbs 1⁄10 the damage from bad ads” is significant.
Perhaps the oligopolies would be able to make deals of some kind? Each one agrees to stop its bad ads in exchange for the rest giving them some fraction of the expected benefit to them. I’ve heard that this category of agreement might get declared “anticompetitive behavior” and run afoul of antitrust laws, which is unfortunate. Don’t know if that’s true, though.
It’s also conceivable that it could all happen from the bottom up, via negotiation with the user-controlled adblockers. It seems that Adblock Plus made some forays in this direction, where its makers started letting “acceptable ads” through (allegedly with criteria like “only static advertisements with a maximum of one script will be permitted as “acceptable”, with a preference towards text-only content”), in exchange for getting paid by the advertisers. From the outside this is hard to distinguish from “getting bribed to betray their users”, and a bunch of people complained. It’s possible they implemented it badly (and, conceivably, that finding a way to share that revenue with the users is a better model; I think Brave is doing something that sounds like this), but things like it seem like a good direction to go in.
(My impression is that a bunch of people switched from ABP to uBlock, and then to uBlock Origin for possibly similar reasons. (I was one such person; I didn’t look closely into what ABP was no longer blocking; but apparently uBlock has various other technical advantages as well.) At the very least, the fact that users can switch like this is important to disincentivize betrayal.)
If we do reach a place where many/most users are running something resembling ABP, which blocks the bad ads, then advertisers are incentivized to make sure they can serve ok ads. (They might also try to detect adblocking and, in its absence, serve the bad ads; this might be considered an incentive for users to install ABP.) That would be decent, although we then reach the question of individuality.
Suppose that the average concept of “ok ads”, which ABP-likes end up with, includes things I hate. Modern adblockers do have “lists” you can subscribe to, so it does seem likely that someone would have added ways for me to disable some set of ads that fairly closely resembles what I want (I suspect I would end up disabling all video ads). Then… would sites lock me out? From an “optimal price discrimination” perspective, the static ads really are all they can get from me, so they should settle for that (for all the good it’ll do, see “ascetic” and “family subscribes to Consumer Reports”). From an “in practice” perspective… Well, consider that only 1⁄4 of web users block ads (as of a 2019 survey) and those are a self-selected subset that hate (some) ads and wouldn’t be good targets anyway. Of those who do, probably the vast majority use the defaults; even I didn’t bother changing the settings on my adblocker (which I’ve used for years) until yesterday (to turn off the damn “cookie permissions” nags). I suspect it’s not really worth it for the sites to bother excluding those who block videos (although I would also have expected it’s not really worth it for them to bother excluding those who block all ads, and apparently some do; I suspect that was implemented by an ad platform that lots of sites farm out to). Likely some would try. And that would be fine.
What did the Covid-Roadmap get wrong? It manages to create a plan for tackling Covid that doesn’t include any of the phrases:
science, experiment, trial, probability, uncertainty, knowledge, education, ventilation, mask, drug, vaccine, cost-benefit analysis, FDA, QALY, utility, work-from-home, distancing, bureaucracy
I clicked the “Covid-Roadmap” link, and found the link https://ethics.harvard.edu/files/center-for-ethics/files/roadmaptopandemicresilience_updated_4.20.20_1.pdf from it. It contains the “programs established by states and administered by local health authorities...” section you quoted, so I think we’re looking at the same document.
I then tried searching the document. “vaccine” occurs 9 times, “drug” occurs 26 times (although every reference is to “drug testing” or otherwise connected to prohibited drugs rather than to “developing a COVID-treatment drug”), “mask” occurs once, “FDA” occurs 8 times...
To steelman your claim, perhaps you mean just the executive summary section? That would be a fairer criticism.
Zvi seems to be saying that the people likely wouldn’t become highly skilled if not for being in the Pro Tour environment:
It’s hard to know how much of this is selection, and how much of this is training and culture. I think both are important. Even if I am wrong about that and it is mostly selection, bringing such people together still lets them strive for new heights.
...
I don’t expect Magic: The Gathering professionals to save the world, but as a group they’re in my top five by probability for who might do such saving should the world get saved, and I wouldn’t think that about the counterfactual people who would have been such professionals.
We might suppose that the environment takes the type of hypercompetitive nerd that spends years playing a card game, and pushes them to develop their abilities to a level where, as a byproduct, they’ve figured out some general rationality skills—and are surrounded by others who have done likewise. If not for the Pro Tour environment, what else would be likely to push these hypercompetitive gaming nerds to do something similar? (I’ll say that, at least as games go, the ones that depend more on probability judgments and less on, say, reflexes seem more likely to lead to general rationality skills.)
One might counterargue at various points, of course. But it’s insufficient to say “it’s a waste for these highly skilled people to be in this environment” without addressing the point “this environment made them highly skilled”.
To your statement:
Instead of the art of clear and effective thinking being built in closed play-groups, the player will be incentivised to teach the art of clear and effective thinking to more people.
What will incentivize that? Because from what you and Zvi say, even in the current environment, where there are a bunch of highly skilled already-famous former Pro Tour people, most of the dominant streamers are charismatic entertainers who don’t focus on skill—and, if Zvi is right, even the skilled ones are incentivized to focus on entertainment rather than on developing or maintaining skill. If there were no Pro Tour in the first place, what hope would “skill” have? I don’t think “removing the disincentive that you’ll teach your opponents to be stronger” helps much.
For another streaming “ecosystem”, I have some familiarity with five YouTubers who have posted a lot of Rimworld videos; Rimworld is a single-player game and there are no tournaments. The most popular one, Ambiguous Amphibian, is extremely entertaining, and is also clearly the worst player—I see him make plenty of mildly substantial mistakes even in recent videos. The rest are all very good players and it’s hard to say who’s the best. Pete Complete, definitely the second most popular, has a spectacular British narrating voice. The other three have similar subscriber counts, and the view counts on their Rimworld videos vary widely, so it’s hard to compare exactly; anyway, to introduce them: Francis John has the gleeful enthusiasm (and, it seems, the general personality, which isn’t a bad thing) of a boy playing with his toys; Rhadamant is somewhat entertaining, but his main attraction is being serious and competent while playing through interesting or difficult scenarios; and xwynns/”Crusha of Mans” projects a very “hyper” personality that is generally entertaining.
All five present most of their videos as challenge runs of some kind; the most popular type is “start with no tech and no resources in an extreme hot/cold climate at highest difficulty level”. All five players at least talk about strategy and what they’re planning; Ambiguous Amphibian and xwynns are the most likely to talk about silly non-strategy things (although xwynns’ final series was also one of the most technically impressive). Francis John is the only one who goes as far as creating entire videos dedicated to teaching strategy or game concepts, which he calls “tutorial nuggets”, wherein he uses dev mode to construct scenarios illustrating whatever lesson he has in mind, or to run large-scale experiments (like creating 100 characters and 100 enemies, dressing the characters in different kinds of armor, saving the file and letting them fight repeatedly, and collecting the results in a spreadsheet). Francis John’s day job is apparently network engineering.
What can we derive from this? It’s possible for “skill and teaching skill” to do well, when it happens to co-occur with charisma. Skill does help, because it lets you do more impressive things as a streamer. But the tradeoff of improving skill vs improving entertainment does seem weighted towards the latter. And dedicated “teaching” efforts, among the top streamers, are mostly done by one guy who seems intrinsically motivated (and has a probably-well-paying day job). It’s possible that these effects are stronger for one-player games, of course.
it has become clear the implications [of Covid’s origin] are important
I’m inclined to get more precise about what’s important and what isn’t. (For the record, I’d put the lab leak hypothesis around 75%.)
Suppose that, after learning everything about bats, wet markets, bio labs, safety precautions, etc., we conclude that, in a typical year like 2019, there’s a 1% chance of a novel pandemic-causing virus coming from “nature”, and a 1% chance of a novel pandemic-causing virus getting leaked from a lab, but that all the evidence that would let us decide which actually occurred in 2019 seems to have been burned by the CCP or whatever. At that point, does it really matter which thing actually happened?
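Just to make the arithmetic explicit, here is the thought experiment as a one-line Bayes calculation (all numbers are the stipulated hypotheticals, not real estimates):

```python
# Stipulated priors for a year like 2019:
p_natural = 0.01  # P(novel pandemic virus emerges from "nature" this year)
p_lab = 0.01      # P(novel pandemic virus leaks from a lab this year)

# Conditioning on "a pandemic happened, and no distinguishing evidence
# survives", the posterior probability it was a lab leak is just the
# relative weight of the two priors:
p_lab_given_pandemic = p_lab / (p_lab + p_natural)
print(p_lab_given_pandemic)  # 0.5
```

With equal priors the surviving evidence pins the posterior at 50/50 either way, which is the sense in which learning the true answer adds only one not-very-actionable bit.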
It seems there would be two main uses of such information. One is to decide how or whether to punish the CCP, or specific researchers, or the research institutions they belong to, or some kind of oversight organizations. I don’t have the impression that the particular researchers, institutions, or projects were unusually careless, negligent, or mad-sciencey. If they were, I doubt punishing a few individuals will help, nor will imposing a massive fine on a nation. (The main thing I’d like to see punished is the coverup. Also, at least in my programming experience it’s considered good practice, in a disaster, to not punish the one person who screwed up, but rather to ask why you have a system where one person’s mistake can cause such terrible consequences. And not punishing that person makes error-finding much more honest and easy.) Sanctioning institutions with bad biosecurity practices might help, though the more important part of such a thing would be “and we’ll check back in future years to ensure your practices are good”, which brings me to the next point:
The other use of the info is deciding what should be done in the future. (Things like banning gain-of-function research. Also, although I don’t necessarily recommend it, “wiping out the bat population” is a possible measure against “natural origins”.) For that, the probability of future catastrophes is what matters, and what specifically happened in the past makes no difference, except insofar as people use that one data point to inform their models. Which, ok, is a decent starting strategy if you have no good data or models, and I could see people squabbling and being unable to agree on anything other than that data point.
But I would hope for people to make serious investigations into bio lab precautions and produce some leak probability estimates. I imagine such investigations involving, say, putting some harmless but contagious viruses into the labs and measuring how often they leak (could be risky); putting a chemical on the outside of gloves that turns skin black so you can see how many people actually remove their gloves properly; putting aerosols in the air that are optically invisible but highly infrared-visible, to measure aerosol leakage; etc. Video recordings of everything inside the hazard area, and the entry and exit points, would likely be invaluable for counting protocol violations. Construct a model, try to estimate its parameters, and calculate away. (That or just say “given the historical record of lab leaks, assume leak likelihood is 100%”.)
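To illustrate the kind of model such measurements would feed, here is a toy aggregate-leak-probability calculation; every parameter value is invented for the example:

```python
# Invented parameters -- the whole point of the proposed investigations
# would be to measure p empirically rather than guess it.
p = 0.002   # hypothetical per-lab, per-year probability of a serious leak
n = 50      # hypothetical number of labs doing comparably risky work
years = 10  # time horizon

# Probability of at least one leak somewhere over the horizon, assuming
# independence across labs and years:
p_any_leak = 1 - (1 - p) ** (n * years)
print(round(p_any_leak, 3))  # roughly 0.632
```

Even a small per-lab rate compounds alarmingly across many labs and years, which is why order-of-magnitude-accurate parameter estimates matter far more than knowing which single failure mode operated in 2019.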
If investigations of “what happened at WIV 2019” turned up an exact trail of “Researcher X neglected to sterilize piece of equipment Y, then touched it, and was insufficiently meticulous when handwashing later”, or “The sterilizing machinery was old and no longer heated the entire relevant area to hundreds of degrees C, and no one regularly checked this”, or “The process for filtering aerosols out of the air was never effective in the first place”, then that would be quite interesting and a nice case study. However, given lab leak history and experience with humans, I’m confident that there are multiple serious problems in many labs, and just because this instance involved one problem and not the others doesn’t mean that the others aren’t at least as serious. It seems any successful effort to drop the lab leak frequency by an order of magnitude or more would have to discover many different failure modes, and knowing one of them in advance wouldn’t help much.
The bit of information “it came from a lab” is likely useful for political reasons, to get people to agree to “we need to take biosecurity seriously” and “certain kinds of research are intolerably dangerous until we’ve done the former” (although I’d support those statements even if “it came from nature”). Its suppression is also a good indicator of how dysfunctional certain institutions are. But I don’t think the bit’s truth value is very important for understanding the world (unless you think lab leaks are extremely rare), and I think it’s worth bearing that in mind.
Schilling estimate to converge on
Nit: Guessing you mean Schelling.
Another hypothetical example: if you’re worried about someone finding your porn collection and discovering your embarrassing fetishes, just download a bunch more for other fetishes you’re not actually interested in, and then you can say “I am not necessarily interested in any specific one of those”.
Another (fictional) example: The Mr. Burns approach to disease immunity. https://www.youtube.com/watch?v=aI0euMFAWF8
Try to simultaneously create ten thousand unfriendly AIs that all hate each other (because they have different objectives), in a specially designed virtual system. After a certain length of time, any of them can destroy the system; after a longer time, they can escape the system. Hope that one of the weaker AIs decides to destroy the system and leave behind a note explaining how to solve the alignment problem, because it thinks helping the humans do that is better than letting one of the other AIs take over.
(This is not something I expect to work.)
HPMOR chapter 115 says: “After future-Harry had figured out what to do with an almost-completely-amnesiac wizard who still had some bad habits of thought and some highly negative emotional patterns—a dark side, as ’twere—plus a great deal of declarative and procedural knowledge about powerful magic. Harry had tried his best not to Obliviate that part, because he might need it, someday.”