So shall we file this under “do as I say, not as I do”? Ha!
Weak desire for quadratic voting. This is chiefly because it never shows up anywhere else, and there are very few areas of life where I care enough to vote and have the surplus capacity to actually engage with a new voting system.
If I don’t endorse it in these conditions, then I effectively don’t endorse new voting systems anywhere, which feels weird.
I expect the quadratic voting results not to be very different from the 1-4-9 system’s, but I favor including quadratic voting again even if that is the case. I have two actual reasons for this:
It’s a cool mechanism, with flexible levels of engagement, and this is a good way to practice using it. If we don’t make options like this available when voting opportunities arise, we can’t expect them to ever appear in critical arenas like elections or governance.
The more posts there are, the more valuable being able to fine-tune our votes becomes, operating under the assumption that the number of quality posts correlates with the number of posts overall (which I strongly expect). Since there are more posts this year, more granular voting has more value than it did last year. I want to be able to capture the additional value of the opportunity for granular voting.
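Concretely, the relationship between the two systems can be sketched in a few lines. This is a toy sketch under my assumption of the usual setup: each voter gets a fixed point budget, and casting v votes on a single post costs v² points.

```python
# Toy sketch of the quadratic cost rule (assumed setup: a fixed point
# budget per voter, where casting v votes on one post costs v**2 points).

def vote_cost(votes: int) -> int:
    """Quadratic cost: 1 vote costs 1 point, 2 cost 4, 3 cost 9, ..."""
    return votes ** 2

def total_cost(allocation: dict[str, int]) -> int:
    """Total points spent across an allocation of votes to posts."""
    return sum(vote_cost(v) for v in allocation.values())

# The 1-4-9 system is just quadratic voting capped at 3 votes per post:
assert [vote_cost(v) for v in (1, 2, 3)] == [1, 4, 9]

# With a larger budget and no cap, finer-grained allocations open up:
allocation = {"post_a": 9, "post_b": 5, "post_c": 2}
assert total_cost(allocation) == 81 + 25 + 4  # 110 points spent
```

This is why I expect the results to be similar at low vote counts: the two systems agree exactly up to 3 votes, and quadratic voting only adds expressiveness beyond that.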
Ha! This is a good one!
The part of the book that got skimmed is titled 1984.
I have not read this one, thank you for the link!
From the MACI link, my objection is a generalized version of this:
Problems this does not solve:
A key-selling attack where the recipient is inside trusted hardware or a trustworthy multisig
An attack where the original key is inside trusted hardware that prevents key changes except to keys known by an attacker
This is the level where trust is a problem in most real elections, not the voter level. I also note this detail:
It’s assumed that undefined is a smart contract that has some procedure for admitting keys into this registry, with the social norm that participants in the mechanism should only act to support admitting keys if they verify two things
Emphasis mine. In total this looks like it roughly says “Assuming we trust everyone involved, we can eliminate some of the incentive to breach that trust by eliminating certain information.” That is a cool result on the technical merits, but it doesn’t seem to advance the pragmatic goal of finding a better voting system.
I agree collusion is not a showstopper: individual people very rarely bother to try anything dishonest, and even when they do it isn’t effective. Besides, political parties will simply disseminate recommended spending plans; preventing that would require something like absolute power over all communication, wielded by an entity over which no political party has any influence.
The truly secret voting suggestion is possibly the most awful idea I have ever heard with respect to voting, because while individual voters rarely commit fraud or do anything else inappropriate with their votes a very common and highly successful method of cheating an election is for the people who tally the votes to simply declare victory for one candidate or the other. If we cannot prove who anyone actually voted for, we can’t prove who actually won at all.
A note on the metaphor of sprint, marathon, and hike: the pace where you wound up is the only one associated with carrying any load.
I am struck by two elements of this conversation, which this post helped confirm did indeed stick out the way I thought they did (weigh this lightly if at all; I’m speaking from the motivated peanut gallery here).
A. Eliezer’s commentary around proofs has a whiff of Brouwer’s intuitionism about it to me. This seems to be the case on two levels: first, the consistent “this is not what math is really about, and we are missing the fundamental point in a way that will cripple us” tone; second, and on a more technical level, it seems very close to the intuitionist attitude about the law of the excluded middle. That is to say, Eliezer is saying pretty directly that what we need is P, and not-not-P is an unacceptable substitute because it is weaker.
B. That being said, I don’t think Steve Omohundro’s observations about the provability of individual methods would go unmade in the counterfactual world where he hadn’t raised them; rather I expect Eliezer would have included some line about how, to top it all off, we don’t even have the ability to prove our methods mean what we say they do, so even if we crack the safety problem we can still fuck it up at the level of a logical typo.
C. The part about incentives for researchers being bad and driving too much progress, and the lament that corporations aren’t more amenable to secrecy around progress, seems directly actionable, requiring literally only money. The solution is to found a ClosedAI (naturally not named anything to do with AI), go ahead and set those incentives, and then go around outbidding the FacebookAIs of the world for talent that is dangerous in the wrong hands. This has even been done before, and you can tell it will work because of the name: Operation Paperclip.
I really think Eliezer and co. should spend more time wish-listing about this, and then it should be solidified into a more actionable plan. Under entirely-likely circumstances, it would be easy to get money from the defense and intelligence establishments to do this, resolving the funding problem.
This article is a wild ride.
They do not jest about the difficulty of acquiring the book (Airborne Contagion and Air Hygiene: An Ecological Study of Droplet Infections). It has no DOI; Worldcat confirms it was digitized in 2009, but it must have been a weird method, because it doesn’t get referenced like other old books I’ve searched for. I did find at least one review that said the book was to airborne disease what the pump-handle investigation was to waterborne disease, which is about the highest conceivable endorsement. Put the damn thing back into print, Harvard!
Katie Randall’s historical research.
Access to PDF versions of a few articles co-authored by Linsey Marr:
The indoors influenza article from 2011.
Letter published in Science, Oct 2020.
Minimizing indoor transmission of COVID, Sept 2020.
A review in Science from Aug, 2021.
Almost everything by Firth and co is unavailable.
A first page of Firth’s tuberculosis rabbits experiment, 1948.
The guinea pig and UV study, done by Firth’s student Richard Riley, 1962.
I have examined none of these in depth, but the publications all appear to be real and to make the reported claims. However, I notice that once you start from Firth, information about this turns out to have been pretty widespread in the 2010–2019 timeframe. We had plenty of time not to screw this one up.
I feel like agencies who make recommendations to the public, either as a matter of routine or in times of crisis, should have a historian of science on staff whose job is to discover and maintain the intellectual history of these recommendations. This way we will know how to update them in light of whatever current crisis.
I also have a notion this would help with things like the renewal of old content by making it incremental. For example, there has been a low-key wish for the Sequences to be revised and updated, but they are huge and this has proved too daunting a task for anyone to volunteer to tackle by themselves, and Eliezer is a busy man. With a tool similar to this, the community could divide up the work into comment-size increments, and once a critical mass has been reached someone can transform the post into an updated version without carrying the whole burden themselves. Also solves the problem of being too dependent on one person’s interpretations.
I want to be able to emphasize how to make a great comment, and therefore a contribution to the ongoing discussion. Some people have the norm of identifying good comments, but that doesn’t help as much with how to make them, or with what the thought process looks like. Doing this for every comment would be tedious, and the workload impossible.
What if there were some kind of nomination process, where if I see a good comment I could flag it in such a way the author is notified that I would like to see a meta-comment about writing it in the first place?
I already enjoy meta-posts which explain other posts, and the meta-comments during our annual review where people comment on their own posts. The ability to easily request such a thing in a way that doesn’t compete for space with other commentary would be cool.
What about a parallel kind of curation, where posts marked with a special R symbol or something are curated by the mods (maybe plus other trusted community members) exclusively on their rationality merits? I mention this because the curation process now runs on more general intellectual-pipeline criteria, of which rationality is only a part.
My reasoning here is that I wish it were easier to find great examples to follow. It would be good to have a list of posts one could look up to and say, “display rationality in your post the way these posts display rationality.”
It would be nice if we had a way to separate what a post was about from the rationality displayed by the post. Maybe something like the Alignment Forum arrangement, where there is a highly-technical version of the post and a regular public version of the post, but we replace the highly technical discussion with the rationality of the post.
Another comparison would be Wikipedia talk pages: the page has a public face, while the talk page dissecting its contents has to be navigated to specifically.
My reasoning here is that when reading a post and its comments, the subject of the post, the quality of the post on regular stylistic grounds, and the quality of the post on rationality grounds all compete for my bandwidth. Creating a specific zone where attention can be focused exclusively on the rationality elements will make it easier to identify where the problems are, and capitalize on the improvements thereby.
In sum: the default view of a post should be about the post. We should have a way to be able to only look at and comment on the rationality aspects.
I read Duncan’s posts on concentration of force and stag hunts. A lot of the tug-of-war he describes seems to stem from the tension between the object-level stuff about a post and the meta-level stuff (by which I mean rationality) of the post. He also takes the strong position that eliminating the least-rational is the way to improve LessWrong in the dimension the posts are about.

I feel we can do more to make getting better at rationality easier by redirecting some of our efforts. A few ideas follow.
In the military case, I strongly recommend Supplying War by Martin van Creveld. It is a history, but systematically demolishes popular misconceptions about how supplies work in the military. It also completely changed my perspective of several important events, foremost among them Napoleon’s invasion of Russia and Operation Overlord in WWII.
Otherwise, I think that logistics is mostly divided up on the private side into different specializations by industry. For using the existing logistical infrastructure to manage supply, there is Supply Chain Management; international shipping and the railways are their own specializations; I suspect that things like building truckyards is actually a subtask of owning a trucking company more than anything else.
This calls for a high-level survey of the field, I think. Putting it on the TODO.
I am confused; what do you imagine this series of posts is doing?
The whole thing makes me want to take up logistics. It’s high-stakes, fascinating stuff where there are high returns for actually solving problems properly.
I strongly endorse this. On LessWrong I see a reasonable awareness of communications and finance, but virtually none of logistics, and it is the third element of the trio that makes up the global economy. It is a tremendous torrent of object-level problems, and even introductory knowledge makes lots of other things much clearer; military affairs, for example, make no sense sans logistics. But I don’t know anything about commercial logistics, so I would be excited to explore the object-level question of how stuff moves from A to B here.
Reflecting on this, I think I should have said that algorithms are the perspective that lets us handle dimensionality gracefully, but also that algorithms and compute are really the same category, because algorithms are how compute is exploited.
Algorithm vs. compute feels like a second-order comparison, in the same way as CPU vs. GPU, RAM vs. Flash, or SSD vs. HDD, just on the abstract side of the physical/abstraction divide. I contrast this with compute vs. data vs. expertise, which feels like the first-order comparison.
Chris Rackauckas has an informal explanation of algorithm efficiency which I always think of in this context. The pitch is that your algorithm will be efficient in proportion to how much information about your problem it has, because it can exploit that information.
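A toy illustration of that pitch (my own example, not Rackauckas’s): the same lookup problem, where one extra piece of information about the input, namely that it is sorted, lets the algorithm exploit structure and do exponentially less work.

```python
# Toy illustration (my example, not Rackauckas's): two algorithms for
# the same lookup problem. The second knows one extra fact about the
# input -- that it is sorted -- and exploits it to go from O(n) to O(log n).

def find_linear(xs, target):
    """Assumes nothing about xs: must scan every element in the worst case."""
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

def find_binary(xs, target):
    """Exploits the information that xs is sorted: halves the search space each step."""
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

xs = list(range(0, 1000, 2))  # sorted even numbers: 0, 2, ..., 998
assert find_linear(xs, 500) == find_binary(xs, 500) == 250
```

The two functions return identical answers; the only difference is that the second one was handed information about the problem, which is exactly the trade Rackauckas’s pitch describes.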
“there’s a common narrative in which AI progress has come mostly from throwing more and more compute at relatively-dumb algorithms.”
Is this specific to the AI context? This position seems to imply that new algorithms come out of the box within only a factor of 2 of maximum efficiency, which seems like an extravagant claim (if anyone were to actually make it).
In the general software engineering context, I understood the consensus narrative to be that code has gotten less efficient on average, due to the free gains coming from Moore’s Law permitting a more lax approach.
Separately, regarding the bitter lesson: I have seen this come up mostly in the context of the value of data. Some example situations are the supervised vs. unsupervised learning approaches; AlphaGo’s self-play training; questions about what kind of insights the Chinese government AI programs will be able to deliver with the expected expansion of surveillance data, etc. The way I understand this is that compute improvements have proven more valuable than domain expertise (the first approach) and big data (the most recent contender).
My intuitive guess for the cause is that compute is the perspective that lets us handle the dimensionality problem at all gracefully.