Physicist and dabbler in writing fantasy/science fiction.
Ben
Enjoyable dialogues. I am convinced of the “all in one place” argument from a strictly efficiency standpoint. Even more so if we have far less money than in the example, so that diminishing returns (some beds that need bednets are out in the wilderness, far from roads; you do those last) never come into play.
A (fairly unoriginal) utilitarianism problem that I find myself often toying with is something like the following:
What if we could spend the $1 billion on a brain in a jar connected to the matrix? We can feed that brain the sensation of a day at the beach, then wipe its memory (to avoid diminishing returns) and feed it that exact sensation again. Then repeat. Even better (using science!), we can overclock this process like crazy and lead this brain through its day at the beach millions of times per second. In some sense this is a staggeringly good return on investment. It also does something weird to the veil of ignorance: “you could be any one of the instances of that brain! They will soon outnumber the population of real humans, so most likely you either get a nice day at the beach or you don’t exist at all”.
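The “staggeringly good return on investment” claim can be made concrete with a toy back-of-envelope calculation. Every number below (upkeep cost, replay rate, cost per conventional happy day) is made up purely for illustration:

```python
# Toy back-of-envelope for the overclocked brain-in-a-jar thought experiment.
# All numbers are hypothetical placeholders, not real estimates.

BUDGET = 1_000_000_000            # the $1 billion from the example
UPKEEP_PER_YEAR = 10_000_000      # assumed running cost of jar + hardware
REPLAYS_PER_SECOND = 1_000_000    # "millions of times per second"

years_funded = BUDGET / UPKEEP_PER_YEAR
seconds_funded = years_funded * 365.25 * 24 * 3600
beach_days = seconds_funded * REPLAYS_PER_SECOND

# Compare with conventional philanthropy at an assumed $100 per happy day.
conventional_days = BUDGET / 100

print(f"Simulated beach days:    {beach_days:.2e}")
print(f"Conventional happy days: {conventional_days:.2e}")
```

Even with these rough placeholders the simulated option wins by many orders of magnitude, which is the intuition-breaking point of the thought experiment.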
I would be interested in what you think.
Nice ideas. It sounds like (without mentioning it directly) you are thinking about publishing pressure in academia. Academics applying for a new job, or for funding that lets them keep their current job, will typically provide a list of the publications they have produced. It goes without saying that a list containing entries {A, B, C} must be strictly better than a list containing merely {A, B}. So long lists are good. People whose funding requests or promotion applications fail often claim (to themselves and others) to be 1,000-day monks by your metric. Many of them are right; others have simply been unsuccessful 100-day monks (due to various factors that may include ability, but also luck, resources, coworkers, etc.).
Sometimes two textbooks on the exact same topic serve completely different purposes. There are “I want to learn this thing I don’t know” textbooks, and there are “I am an expert but I still want this book on hand as a tool to help me with my experting”. Below I describe the most extreme examples I am aware of, which are unfortunately on different topics:
Nonlinear Dynamics and Chaos by Strogatz is incredibly readable. You can sit with it on a train and read non-stop without needing to look anything up, and you will keep wanting to relate the next amazing thing you have learned to the poor stranger sitting next to you. You emerge from it with a new conceptual framework, and then you never look at it again, because if there is ever a detail or issue you can’t remember, this book is the wrong place to check it.
Statistical Methods in Quantum Optics by Carmichael is a fantastic book. But if you are new to the topic of quantum optics you should NOT touch it at all: it is an almost useless teaching aid. If you can’t remember whether the quantum regression theorem depends on the Markov, Born or secular approximations, then this is the book for you. If you don’t know what any of those things are and want to find out, stay away! Find something else.
The same book cannot be ideal at both; there is a trade-off in optimising towards these two objectives.
Very interesting. I can’t help feeling that “trying to be a better rationalist” is somehow a paradoxical aim.
Roughly speaking, I would say that we have preferences, and there is no rational way of picking preferences. If you prefer pizza to ice cream, or pleasure to pain, or living to dying, then that is that. Rationality is a mechanism for effectively pursuing your preferences: ordering pizza, not putting your hand in a fire, etc. You can’t pick rational preferences (goals); you can pick a rational route towards those goals.
If you adopt “I want to be more rational” as a preference/goal in-itself it feels like the snake is eating its own tail.
Maybe “meta goals” like this do arise elsewhere, e.g. “I don’t currently have any interest in being strong/rich/powerful/skilled for its own sake, and nor are these things worth pursuing based on my current preferences (which are more efficiently achieved by other means). However, these are things that might be generically useful for achieving preferences I may or may not have in the future, so I should acquire them as tools for later”.
But if we take rationality to mean “taking the best actions with the available information to meet your goals”, then, at least by this definition, pursuing the meta-goals appears to be definitionally irrational. This extends to the meta-goal of “being a better rationalist”.
A very nice read. Something to note: typically in the arms race that can precede a war the side that is behind will invest in unconventional weapons. For example Germany’s U-boat investments. They didn’t know for sure if submersibles were a good tactic or not, but they did know for sure that their ordinary battleships would be outnumbered even if they committed the full navy budget to them. So they took a punt on submarines.
This adds more ambiguity. If the underdog has invested much of their budget into a new-fangled weapon that has never seen real action, they introduce additional uncertainty into the two sides’ likelihoods of victory.
I think this is very much true. Only if you are both aiming for the same things (eg. economy growth) will different constraints be the primary drivers of different choices.
But I think that the two samples were, in some aggregate sense, not aiming for similar objectives. In Medieval Europe, presumably, people believed that increasing prosperity and economic growth were good things.
This is not a universal standard. According to Karl Popper’s “The Open Society and Its Enemies”, the ancient Spartan government held stability as its greatest objective. All forms of change were seen as dangerous, including economic growth and population growth, and government policy was introduced with the aim of preventing both. At one point in Chinese history the government was so concerned that foreign trade was destabilising the country that all ships over a certain size were burned, and the shipwrights had their tongues cut out to stop them teaching people how to build new big ships. They then (I think a bit later) doubled down and made it illegal to live within a few miles of the coast, to ensure you were not tempted to trade with foreigners and thereby risk new ideas entering and destabilising the realm.
I think the article has a good point, but only when objectives do in fact align. The extreme conservatism described in the examples above would, I think, have stopped the Industrial Revolution, regardless of capital/labour balances.
It’s interesting. A relatively recent example is people investigating a conspiracy involving English kings and royal succession using DNA evidence from the bones: https://www.theguardian.com/uk-news/2015/mar/25/richard-iii-dna-tests-uncover-evidence-of-further-royal-scandal#:~:text=When%20scientists%20revealed%20last%20year,they%20vowed%20to%20investigate%20further.
One aspect that presents something of a problem is not just whether the window has closed on the evidence itself, but on the medium used to transmit the evidence. If someone on the street tells me all about their epigenetic studies then their level of evidence is epigenetic testing, but my level of evidence is “some guy on the street said...”. This is, I think, why photographs, videos and physical artefacts are so convincing. They are transmissible.
I recognise that problem as well. Unfortunately it has a really large number of advantages. Not only might you flatter your reviewers and build camaraderie with the people you want to cite your papers, but you also trigger citation alerts for more people. Google Scholar (or presumably alternatives) tells people “hey, this cited you”, and then more of the relevant audience sees your paper. Often these “padding references” are not papers you have actually read in full detail. You know the abstract, the conclusion, the figures, and maybe saw a familiar equation, then joined the dots. “Oh, it’s like their paper from last year, but they applied the method to...”
My ideal solution (although I have never actually tried this with a journal) would be to split the reference list into two sections: “critical references” (up to maybe four things that really set up what you are doing), then “other references”, where you cite the most recent paper from every other researcher working on the topic.
But yes, I have so many times gone down pointless rabbit holes when a paper says “we used the method of [4,5]”. I (naturally) look at [4] first, and none of it makes any sense to me. Then I look at [5], and it uses exactly the same notation as the paper I was reading and explains the method well. [4] was the slightly-rubbish version of the method that came first. The paper could have just cited [5] (the improved version they actually used), but the incentives were wrong.
I have a couple of pet theories for “why physics not chemistry” on arXiv use.
(1) arXiv’s structure really wants you to be using LaTeX to produce your paper. My experience is that LaTeX has conquered physics, but not other fields to the same extent. This is supported by my impression that the more theoretical the physics, the more computational it tends to be, and the more likely the author is to use LaTeX and arXiv.
(2) The American Physical Society journals are formatted in quite a minimalist manner, which tries to look quite formal. A typical arXiv preprint using a standard LaTeX template will look like a less clean version of an APS paper. This means that to physicists (who read a lot of APS papers) a journal-published paper doesn’t look drastically different from an arXiv preprint. If the popular journals for chemistry and biology format papers to look like Science or Nature articles, then those will (at a glance) look quite distinct from a typical arXiv preprint. I think the “does it look drastically unlike a paper at first glance” test has a very strong bearing on the seriousness people attach to arXiv.
Reading this post with the power of hindsight one downside for a more aggressive strategy is obvious: Trust.
The Oxford AstraZeneca vaccine is, by any reasonable standard, safe. A tiny fraction of the people it is administered to develop blood clots, but the only reason this is even known is that it has been given to millions of people. If the sample weren’t so large, we would lack the statistics to distinguish the effect from coincidence. Yet people are still quite worried about this vaccine, and some governments have dismissed it.
So a faster, more aggressive vaccine testing process, with the actual rollout starting at the point where your information is “it’s safe, and probably works” instead of waiting until you know “it’s safe and does work”, definitely wins on trolley-problem logic: until all the anti-vaxxers point to the one thing you rolled out that did not work. Or they point to the few people who were on the wrong side of that trolley arithmetic and died from a side-effect.
This isn’t to say you are wrong, just that there is an additional non-obvious cost to that kind of approach.
PS: I love the title. Reminds me of the website DakkaDakka, where, if I recall, one person lays out the units they have selected for their army in a wargame, then 20 people respond in all caps “NEEDS MORE DAKKA!”.
I agree that the italics would be nicer, it would make that paragraph more obviously skip-able. And it is a paragraph that I (and possibly others) chose to skip.
Brian Josephson is an interesting example: his discovery of the Josephson effect at the age of 22 won him the Nobel Prize in Physics. At about the same time that the accolades started to pour in for his discovery, he switched hard into studying telekinesis, meditation and psychic powers (https://en.wikipedia.org/wiki/Brian_Josephson).
Now, specifically in his case it is tempting to reason as follows: If he spent the rest of his career working in physics he would always be primarily known for that thing he did right at the beginning. If he wanted his new work to match the success of the old he had to go for the high-risk high-reward stuff. Telekinesis probably doesn’t exist, but if it does… This is my interpretation of his strange change in direction. Other people might instead suppose that the early success gave him the space to work on more-or-less whatever he wanted at a university, and what he wanted was the new age stuff.
Splitting it by internal/external is a nice system.
I think people do this instinctively in real life. Exhibit A: people buy lottery tickets. My theory is that they know the odds of winning are too low to justify buying a ticket if the draw is actually fully random. However, most people are willing to put some nonzero probability on karma, divine justice, God’s plan or their lucky ritual swinging the lottery in their direction. If they believe in one of these things with even 1% credence, then the ticket is a good deal for them.
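The argument above reduces to a toy expected-value calculation. The ticket price, jackpot and odds below are hypothetical round numbers chosen only to illustrate the shape of the argument:

```python
# Toy expected-value model of a lottery ticket, with hypothetical numbers.
TICKET_PRICE = 2.0
JACKPOT = 10_000_000.0
P_WIN_RANDOM = 1 / 300_000_000   # odds of winning if the draw is truly random

# Expected value under pure chance: clearly a losing bet.
ev_random = P_WIN_RANDOM * JACKPOT - TICKET_PRICE

# Now give the buyer 1% credence that karma / luck / divine plan will
# swing the draw in their favour (modelled as winning with certainty
# in that case), and 99% credence that the draw is random.
P_KARMA = 0.01
p_win_believer = P_KARMA * 1.0 + (1 - P_KARMA) * P_WIN_RANDOM
ev_believer = p_win_believer * JACKPOT - TICKET_PRICE

print(ev_random)    # negative under pure chance
print(ev_believer)  # large and positive under even 1% credence in karma
```

Even a tiny credence in a non-random mechanism dominates the calculation, because it multiplies the jackpot directly rather than the astronomical odds.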
While these ideas are interesting, I think there are many reasons not to worry about SETI. The first is that I find the “malicious signal” attack very implausible to begin with. Even if the simple plain-text message “There is no God” would be enough to wipe out a typical civilization, I still think the aliens don’t stand much of a chance. How could they create a radio signal that carries that exact meaning to a majority of all possible civilizations that could find the broadcast? And this is a scenario where the cards are stacked in the aliens’ favor by assuming such a low-data packet can wipe us out. A powerful AI would be a much larger piece of data, which multiplies all the difficulties of sending it to an unknown civilization.
My second reason is that I think singling out SETI specifically is unfair. We are looking at all kinds of space data all the time: radio telescopes, optical telescopes and now even gravitational-wave detectors. Almost all of these devices are aimed at understanding natural processes. If you were aliens who DID have the ability to send a malicious death-message, then your message might be detected by SETI, but it’s just as likely to be detected by someone else first. Someone notices something odd, maybe “gamma ray bursts” from the galactic center. They investigate what (presumably natural) mechanism might cause them, and then, oh no! Someone put the spectrum of a gamma ray burst into the computer, but its Fourier series contained the source code of an AI that started spontaneously running on the office computer before escaping onto the internet to start WW3.
The fact that the mutual information cannot be zero is a good and interesting point. But, as I understand it, this is not fundamentally a barrier to it being a good “true name”. It’s the right target; the impossibility of hitting it exactly doesn’t change that.
One aspect that complicates the situation with the sports and music fans is an unwillingness to kick people when they are down.
The grieving student, or a student who was sick (even slightly sick, could have still got the report in but at a utility cost) are both people for whom we feel bad.
In contrast, your example of the music fan is different. Or the example of a student who says, “Last night a billionaire’s experimental utility AI calculated that giving me a surprise trip to space in a rocket would be worth 10^7 utility points. So I missed the deadline while I was in orbit.” [You can add extra awesome to the example however you like.] In this case we are not at all surprised they missed the deadline, but we might still be happy to punish, on the basis of “meh, the 10^7 utility points you got yesterday aren’t going to be scratched by the −100 for failing this course”.
Your “select 1,000 voters at random, give them time to do research; only they vote” system made me think of how Venice used to elect a doge. It was madness! Basically, they introduced N layers of “this group of people appoints a larger group, which is stripped back by lot; they then appoint the next group...” [1]. The fact that this kind of system did exist, but doesn’t any longer, perhaps reflects badly on it.
As a separate point, the official system, the “constitution”, is always beholden to whatever the “real” system is, which can drift quite far from it. In a charity I helped out at, the directors in theory told the CEO what to do, but in practice it was just a lot of directors saying “oh great CEO, you have been here longer than me and therefore you are more wise. What would you recommend?”. Director turnover was high, and it takes time for someone to learn how to use the reins of power even when they have them in hand. The TV programme “Yes Minister” made a similar claim about the UK government. It’s not hard to imagine a situation where a jury is officially in charge but is actually commanded by a judge.
[1] To quote wiki: Thirty members of the Great Council, chosen by lot, were reduced by lot to nine; the nine chose forty and the forty were reduced by lot to twelve, who chose twenty-five. The twenty-five were reduced by lot to nine, and the nine elected forty-five. These forty-five were once more reduced by lot to eleven, and the eleven finally chose the forty-one who elected the doge. (https://en.wikipedia.org/wiki/Doge_of_Venice#:~:text=Doges%20of%20Venice%20were%20elected,were%20republics%20and%20elected%20doges.)
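The chain of appointments and sortitions quoted above can be sketched as a little simulation. The “choose” steps are modelled here as random draws from the whole Great Council, which glosses over the actual politicking; the step sizes are taken directly from the quoted passage:

```python
import random

def venice_doge_electors(great_council, rng=random.Random(0)):
    """Follow the Venetian chain of lots and appointments.

    Each ("lot", n) step draws n members by lot from the current group;
    each ("choose", n) step has the current group appoint n new members,
    modelled here (a simplifying assumption) as a random draw from the
    whole Great Council.
    """
    steps = [("lot", 30), ("lot", 9), ("choose", 40), ("lot", 12),
             ("choose", 25), ("lot", 9), ("choose", 45), ("lot", 11),
             ("choose", 41)]
    group = great_council
    for action, n in steps:
        if action == "lot":
            group = rng.sample(group, n)
        else:
            group = rng.sample(great_council, n)
    return group  # the forty-one who finally elect the doge

electors = venice_doge_electors(list(range(500)))  # a 500-member council
print(len(electors))  # 41
```

Nine alternating layers to produce 41 electors: the sketch mostly serves to show how baroque the procedure was compared with a single modern sortition.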
You describe three systems (Switzerland, Singapore and China) as “very successful”. I was wondering if you could elaborate a little on what metrics you believe are marks of success. You offer six categories, but I have trouble seeing how you might connect them to outcomes.
Specifically, for me at least, the PRC feels like an odd inclusion on a list of “very successful, democratic designs”, as (1) it is not widely described as democratic, and (2, more relevantly) by most of the metrics I would reach for (e.g. GDP per capita, global happiness index, press freedom index, corruption index) the PRC is not really successful either. For example, taking corruption (maybe this goes in your procedural category), then according to https://www.transparency.org/en/cpi/2021, Singapore is the 4th least corrupt country on Earth, Switzerland 7th, and China 66th.
So what features do you associate with success?
I agree 66th out of 200 is pretty good. My general point is that to talk about “success” you need to already know what winning looks like. Low corruption is certainly not the #1 thing, and probably not in the top 10 for most people, but it probably makes it into the top 100. Maybe GDP per capita is in the top 10. These discussions (what is good) are needed to ground any discussion about whether a particular system produces good outcomes. I singled out China simply because the other two on the list would (by the kinds of metrics I would reach for) be world-leading (A/A+), while China would not.
For example, when you say that China has improved quickly since 1945, you are presumably using an economic metric (GDP)? The problem with going all the way back to 1945 is that systems change. In my weird and unscientific “how efficient do I feel different governments are” ranking, I can give the 2022 Chinese government a fair score, but I would score the Chinese governments of the 1950s and 60s very, very low.
This is a wonderful point made well. I know people who have fallen into the trap of taking the path of most resistance for reasons like those you lay out.
My own (less clear) version of a similar concept was a strong dislike of the saying “work ethic”. What is ethical about working?
Some comments:
(1) I have noticed some cultural correlations in this area. In my limited experience, Danes approach this issue in a healthier way than the English. A colleague who used to work in Japan complained bitterly about his Japanese colleagues making themselves (and him) miserable with mountains of needless and unproductive busywork, which sounds like a strong manifestation of this effect.
(2) Early PhD students seem to be very susceptible to this thought-trap. 18 months into a typical physics PhD you will have nothing tangible to show for your efforts. Your code/maths/experiment will either not work or you will have discovered that the result it gives is trivial. At this point many PhD students start trying to latch on to anything that is a demonstrable time-sink. I suspect that the less tangible the output the stronger the effect you describe is likely to be.