Physicist and dabbler in writing fantasy/science fiction.
Ben
Tax codes change in small ways all the time without much warning. So the idea of a giant taxation shift taking place over 30 or 40 years doesn’t seem ridiculous to me. For historical comparison I found this graph of the UK’s top rate of income tax. The top rate was around 0% up until 1909, then rose to over 90% within the next 32 years. https://commons.wikimedia.org/wiki/File:UK_top_income_tax_and_inequality.png
Yes, the world wars obviously had a lot to do with it. But if it had instead gone from 0% to 25% that would still have been a huge shift. The world can change fast, and all changes have winners and losers, so I am not sure this is the right place to attack Georgism.
Very interesting. Thank you for sharing this theory.
I had two thoughts. The first is: “Doesn’t radiation cause cancer? Isn’t this effect well established with evidence?”. Because if radiation does cause cancer then that is strong evidence that the DNA theory is true of at least some cancers. (Because radiation cannot spread a fungus).
My second thought (which I agree is in some tension with the radiation one) is that, even if I subscribe to a DNA theory of cancer, I don’t have to imagine that every tumour has a mutation (relative to the rest of the organism), or that tumour cells are unable to produce healthy, cancer-free embryos. To use a software analogy, let’s imagine we have a piece of software with a bug. We have all played a computer game where things are basically fine, but there is that one time you ended up halfway inside a wall because of some collision error. You never worked out quite what did it that time, but the game was usually fine.
When a piece of software shows that collision bug, we don’t need to assume that a cosmic ray has flipped a bit in the software. We can check that the code is the same before and after we saw the collision error. This doesn’t mean we are going to see collision errors every single time we play that computer game, it just means that the game has a bug that can appear in some limited situations. Similarly, I can imagine that many organisms contain “bugs” in their DNA (the DNA they were born with, undamaged) and that some of these bugs only express themselves rarely, when specific circumstances arise, and sometimes the result of the “glitch” is cancer. In this model the tumours are not mutated. This model is consistent with the idea that radiation and so on can make cancer more likely, as flipping a bunch of bits in a piece of software is much more likely to introduce more bugs than to reduce the number. But it also seems consistent with some tumours being genetically identical to the rest of the organism. The main prediction of this sort of model would be a strong inherited tendency for certain cancers, especially for identical twins.
As a final thought. If the fungus theory is correct, it doesn’t seem like it would be impossibly hard for someone to find some of the fungus cells in mouse A. Look at them under a microscope, “Yes, fungus all right.” and then use them to give cancer to a bunch of other mice. So the fungus theory (unlike my speculations above) has the great advantage of seeming to be very testable.
Yes, I like the “add more doors” way of explaining it.
If you add enough doors you don’t even need to finish the first run to make people see. Say there are 100 doors. Player: “I pick door 99.” Judge: “OK, well I can reveal that door 1 is empty. And door 2. Door 3. 4, … 24, 25, 26, 28 (cough), 29, 30, 31…”
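For anyone who wants to check the intuition numerically, here is a minimal simulation of the N-door game (my own sketch, not part of the original discussion). Switching wins exactly when the first pick was wrong, because the judge removes every other losing door.

```python
import random

def monty_hall_trial(n_doors: int, switch: bool) -> bool:
    """One round of the n-door Monty Hall game; returns True on a win."""
    prize = random.randrange(n_doors)
    pick = random.randrange(n_doors)
    # The judge opens n_doors - 2 empty doors, leaving only the player's
    # pick and one other door. So switching wins iff the pick was wrong.
    if switch:
        return pick != prize
    return pick == prize

def win_rate(n_doors: int, switch: bool, trials: int = 100_000) -> float:
    return sum(monty_hall_trial(n_doors, switch) for _ in range(trials)) / trials

random.seed(0)
print(win_rate(100, switch=True))   # close to 0.99
print(win_rate(100, switch=False))  # close to 0.01
```

With 100 doors, switching wins about 99% of the time, which is why the many-door version makes the original 3-door answer (where switching wins 2/3 of the time) feel obvious.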
A very clever answer. Although I worry it might not actually carry through. My understanding is that chiral molecules react differently with other chiral molecules, so that if molecules A and B react to give C, then the mirror of A reacts with the mirror of B to give the mirror of C.
So the clone might be immune to snake venom (yay!), but all kinds of everyday foods might affect them as if they were snake venom (boo!). But if the clone has (behind them) a whole mirror-world ecosystem then I think they are OK.
There is some particle physics stuff that is believed to break this symmetry intrinsically, without needing another chiral thing to react with. [I find this really hard to believe, but apparently it is so.] So I suppose that over a very long timescale some of those obscure particle interactions might break the symmetry.
Even within universities there is a constant battle between providing students with good chances to learn, and providing them with fair assessment criteria.
The lab experiments in physics courses are a good example of this. Everyone knows the students learn the most, and have the most fun, when you tell them what they need to measure and leave it up to them how they are going to measure it. Then leave it to them to decide which follow-up questions they want to ask and how they want to pursue those. However, a minority of students sometimes complain, especially if they feel their assessments were not fair. The university hates student complaints, and likes having ammunition to shoot them down by showing that everything was equal for everyone: which usually means a lot of structure in the marking, which in turn imposes structure on what the students can do (reducing freedom).
Obviously I prefer to lean towards more freeform than my institution, but I wouldn’t go “the whole way”. The bits of paper students get at the end are actually important for their future lives (unfortunately they may be more important than the stuff they learn). Having the genius who does an experiment in a very novel and weird way get a failing grade because the assessor doesn’t understand what they have done is the cost you pay for going too free-form. The reward is that they did the experiment in a very novel and weird way, so probably had fun and learned a lot.
Confusion slain!
I forgot that there were leftover chips awarded to the player with the most goal-suit cards (I now remember seeing that in the rules, but wrote it off as a way of fixing the fact that the number of goal-suit cards and players could both vary, so there would be rounding errors, and didn’t keep it in mind). That achieves the same kind of thing I was gesturing at (most of a suit), but much more elegantly.
Thank you for clarifying that.
Something that confuses me a bit about Figgie is that not only is it a zero-sum game (which is fine), but every individual exchange is also zero-sum (which seems not fine). If I imagine a group of 4 people playing it, and two of them just say “I won’t do any trading at all, just take my dealt hand (without looking at it) to the end of the round”, while the other two players engage in trade, then (on average) the scores of the two trading players will be the same as those of the two players who don’t trade. This seems like a problem: if your assessment is that the other players are more skilled than you, then it is optimal to just not engage.
I haven’t played it, so this idea might be very silly, but it feels like the scoring should reward players who have made their hand strongly concentrated in one particular suit (even if it’s not the goal suit). Then in the example above the two players engaging in trade can help one another end up with lopsided hands (e.g. one has lots of hearts, the other lots of spades), so that the group that trades has a relative advantage over a group that doesn’t.
As a candidate rule it would be something like: at round end, every spade you hold makes you pay 1 chip to the person with the most spades (and similarly for each suit except the goal suit).
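To make the “every exchange is zero-sum” point concrete, here is a toy simulation (my own sketch, with made-up card values and trade prices, not the actual Figgie rules). However the two traders trade, their combined final score is exactly their combined dealt score, so as a pair they cannot pull ahead of the two non-traders.

```python
import random

# Toy model of a round: each card has a fixed payout revealed at round
# end; a trade moves one card from seller to buyer in exchange for chips.
# Since the chips and the card's payout both just change hands, every
# trade is zero-sum between its two participants.

random.seed(1)
N_CARDS = 40
values = [random.choice([0, 1]) for _ in range(N_CARDS)]  # card payouts

# Deal 10 cards to each of 4 players.
deck = list(range(N_CARDS))
random.shuffle(deck)
hands = [set(deck[i * 10:(i + 1) * 10]) for i in range(4)]
chips = [0, 0, 0, 0]

# Score each player would get if nobody traded at all.
baseline = [sum(values[c] for c in hands[p]) for p in range(4)]

# Players 0 and 1 trade randomly; players 2 and 3 never trade.
for _ in range(100):
    seller, buyer = random.sample([0, 1], 2)
    if hands[seller]:
        card = random.choice(sorted(hands[seller]))
        price = random.randint(0, 2)
        hands[seller].remove(card)
        hands[buyer].add(card)
        chips[seller] += price
        chips[buyer] -= price

final = [chips[p] + sum(values[c] for c in hands[p]) for p in range(4)]

# The trading pair's combined score is unchanged by any amount of trading.
print(final[0] + final[1] == baseline[0] + baseline[1])  # True
print(final[2] + final[3] == baseline[2] + baseline[3])  # True
```

Skilled traders can only gain from the other trader, never from the pair of abstainers, which is the worry about opting out being optimal against stronger players.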
As an insight into the power of Wikipedia: when I first found LessWrong while googling something, I read a couple of articles, thought “this seems good, if a bit weird”, and then read its Wikipedia page before going any further.
At the time my view (which I remember saying to a friend who was looking up the website after I recommended an article) was “It’s mostly right, but fixates a weird amount on the basilisk thing”.
It certainly works. Another friend of mine recently opined “everyone on LW is an idiot, because basilisk thing”, which was interesting because I didn’t know that friend knew about LW at all, and from the basilisk thing mentioned it seemed likely they had just read its Wikipedia page. (To them, the argument is not “All users of this website are idiots because I think one topic discussed on it once was dumb”, but instead “All users of this website are idiots because the one thing the website apparently discusses seems dumb”. It’s important that the basilisk was not one thing out of 10 or 20 on Wikipedia’s LW article, but the single thing.)
I am here because the Wikipedia page didn’t put me off reading LW.
We expect heat to flow from hot to cold; devices that deviate from this are thermodynamically unlikely, which is another way of saying that they require a low-entropy source. (As you said.) Low entropy = thermodynamically unlikely. This means that heat pumps are extremely non-random. So any system that looks like it’s random (a hot cup of tea) is going to be a very bad candidate. Similarly, I think that things like weather phenomena are a bad place to look.
Living creatures can do thermodynamically unlikely things. As an example, lots of (all?) individual cells move various chemicals (like salt) against their concentration gradients, i.e. from a place of low concentration to a place of high concentration. This is active transport. It is just as thermodynamically unlikely as a heat pump, but it’s a “salt pump”, not a “heat pump”, so it’s not exactly right.
My feeling is that an actual “heat pump” (with heat, not salt) must occur in some organisms, and I think I have found a borderline example at this link (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3962001/#:~:text=After%20getting%20hot%20enough%20the,the%20nest%20surface%2015%2C%2046.) :
“In the spring ants are observed to create clusters on the mound surface as they bask in the sun. Their bodies contain a substantial amount of water which has high thermal capacity making ant bodies an ideal medium for heat transfer. After getting hot enough the ants move inside the nest where the accumulated heat is released.”
If we suppose that, as a result of this, the inside of the ants’ nest ends up warmer than the air outside, then I think this possibly counts. It’s a heat pump where the working fluid is living ants: the cold ones leave to bask in the sun, then return hot.
It’s borderline because there is cheating going on, in that the sun is much hotter than the inside of the ants’ nest (I assume), and they are using the sun to heat themselves up. Ideally we need ants that carry around little compressible air sacs they can inflate inside and deflate outside, so that they can unambiguously take heat from the cool air outside and deposit it in the hot air inside their nest.
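The “thermodynamically unlikely” claim can be made quantitative with textbook entropy bookkeeping (my own illustration; the temperatures below are made up, standing in for outside air vs. a warm nest interior). Moving heat Q from a cold body to a hot one on its own would lower total entropy, which is why it cannot happen without work being paid in.

```python
# Entropy bookkeeping for heat Q moving from a cold body to a hot one.
# Temperatures are illustrative, not measured values.
Q = 100.0        # joules of heat moved
T_cold = 283.0   # K, cool outside air
T_hot = 303.0    # K, warm nest interior

# Total entropy change if the heat flowed cold -> hot with no work input.
delta_S = Q / T_hot - Q / T_cold   # J/K
print(delta_S < 0)  # True: unaided cold-to-hot flow would lower entropy

# A heat pump must therefore consume work; the Carnot limit for
# extracting Q from the cold side is W = Q * (T_hot - T_cold) / T_cold.
W_min = Q * (T_hot - T_cold) / T_cold
print(round(W_min, 2))  # minimum joules of work to move the 100 J
```

The ants basking in the sun sidestep this accounting precisely because the sun, at a much higher temperature, supplies the low-entropy source, which is why the example feels like cheating.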
I think the best model for “why ban fresh bread” is something like what lexande said, but modified like this:
-People were buying fresh bread every day and (if wealthy) throwing out the bread from the day before (or throwing it out on some horizon). The idea was maybe that by preventing this use of flour (optimised for niceness) the economic forces would then optimise better for calories.
-A solidarity thing? Upping the price would disproportionately hurt the poor. Pushing the market by lowering the quality is more egalitarian in some sense. It pushes the rich into buying something more expensive, and the poor into just having worse bread.
-Bakers could optimise bread for deliciousness on the day of baking, or (somehow) make bread that was likely to last longer. Longer-lasting bread would improve efficiency by seeing less of it go bad.
-It was just really dumb. E.g. people were arriving at the baker’s to find the bread sold out, were furious, and petitioned the government to do something. The government (for some bizarre reason) believed that if sales were delayed a day then the bakeries wouldn’t be out of bread when you visited (although they might be out of bread they were allowed to sell you).
This is quite similar to the “swampman” thought experiment (https://en.wikipedia.org/wiki/Donald_Davidson_(philosopher)).
My thoughts: Assuming there is no subjective experience after death (no afterlife or anything), then it is sort of trivial that subjective experience ends at death, so you don’t ever experience it.
Now, my read on your argument is that in a sufficiently big universe or multiverse, there will be many “mes” with exactly the same subjective experiences so far, and that whenever one (or a large number) of “mes” die there will be some others who are narrowly saved at the last moment, just as they wheeze their last breath an alien turns up and heals them or whatever. Or they were in a simulation the whole time or similar.
However, it remains the case that before the death there were N copies, and afterwards there were N-1. It’s not like you “merged with” or “snapped into” the surviving ones. You are not causally propagating yourself into them. It’s just that you have accepted a world view where it is possible, even likely, that there are people arbitrarily similar to you.
My feeling is that it’s like this analogy. Imagine that in the near future all records of the works of Shakespeare (all of them, including all quotes) are lost forever. But it just so happens that, by complete coincidence, there are pebbles on a beach in another galaxy that can be read in binary (dark/pale pebbles as 1/0) to spell out the full works of Shakespeare to the letter. Does that make it any less of a loss that the works were lost here on Earth?
If you think consciousness is a real existent thing then it has to either be in:
The Software, or
The Hardware.
Assuming it to be in the hardware causes some weird problems, like “which atom in my brain is the conscious one?”. Or “No, Jimmy can’t be conscious because he had a hip replacement, so his hardware now contains non-biological components”.
Most people therefore assume it is in the software. Hence a simulation of you, even one done by thousands of apes working it out on paper, is imagined to be as conscious as you are. If it helps, that simulation (assuming it works) will say the same things you would. So, if you think it’s not conscious, then you also think that everything you do and say, in some sense, does not depend on your being conscious, because a simulation can do and say the same without the consciousness.
There is an important technicality here. If I am simulating a projectile then the real projectile has mass, and my simulation software doesn’t have any mass. But that doesn’t imply that the projectile’s mass does not matter. My simulation software has a parameter for the mass, which has some kind of mapping onto the real mass. A really detailed simulation of every neuron in your brain will have some kind of emergent combination of parameters that has some kind of mapping onto the real consciousness I assume you possess. If the consciousness is assumed to be software, then you have two programs that do the same thing. I don’t think there is any super solid argument that forces you to accept that this thing that maps 1:1 onto your consciousness is itself conscious. But there also isn’t any super solid argument that forces you to accept that other people are conscious. So at some point I think it’s best to shrug and say “if it quacks like consciousness…”.
I agree with the post generally. However, the chef example is (I think) somewhat flawed, as with all TV the footage is edited before you see it. So you have no idea how many pieces of advice the chef mentor gave that were edited out. In the UK version of the Apprentice the contestants would have a 3 hour planning session that would be edited down to 5 minutes, so you knew that whatever it was they were talking about in that 5 minutes of footage was the decision that would dominate their performance, meaning (as a viewer) it was very easy to see what was going to go wrong ahead of time.
Exactly this. Your client is charged with 9 murders. You, followed by all other lawyers, refuse to defend them because they are so obviously guilty. They go to prison. But, they only killed 8 people. The real culprit in the 9th case goes free.
Very possible. I am not fully convinced. The dog had to identify the people who had food in their bags, and tell them apart from all the people who used to have food in those same bags, or were eating on the flight and have food on their breath or hands. A dog trying to identify (for example) cannabis would probably have an easier time.
My stance is not “I know 100% that sniffer dogs are a silver bullet”, but the weaker position “The majority of the value of a sniffer dog comes from it actually smelling things, rather than giving the officer controlling it a plausible way of profiling based on other (possibly protected) characteristics.”
The idea that the sniffer dog picks up on what the handler is thinking and plays it out for them is very interesting, and maybe does indeed happen sometimes.
But I think you are probably overcorrecting somewhat. Sniffer dogs do actually smell things. In a much more low-stakes situation I have seen one in New Zealand successfully identify several people getting off a flight who had forgotten about food in their backpacks (they have strict laws against bringing food in, in case you import a new blight or pest or whatever). So my read is that sniffer dogs are at least good enough at actual sniffing to demand some kind of response from would-be smugglers (e.g. extra plastic wrapping).
That Verizon call is terrifying. The caller made a critical mistake a couple of times though: he asked for his bill in dollars. He should have asked them to, starting from a blank slate, calculate how many cents he owed them. I think that might have clarified it for them. (I still don’t understand how they could continue to fail at this for so long though.)
A big contributor is the fact that $ signs (or £ or whatever) go at the beginning of numbers, when every other unit goes at the end. If we were used to prices like 100$, or 0.99$, then they would have immediately seen that 0.002c was different from 0.002$. (But in their head it was $0.002c.) So the real culprit is inconsistent notation.
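The factor-of-100 confusion is easy to show in a couple of lines (my own illustration; the usage figure below is made up, not taken from the call):

```python
# Quoting a rate as "0.002 cents per KB" but billing "$0.002 per KB"
# differ by exactly a factor of 100.
kb_used = 35_893                 # illustrative data usage, in KB

quoted_cents_per_kb = 0.002      # "point zero zero two cents"
billed_dollars_per_kb = 0.002    # what actually gets charged

quoted_total_dollars = kb_used * quoted_cents_per_kb / 100
billed_total_dollars = kb_used * billed_dollars_per_kb

print(round(quoted_total_dollars, 2))  # 0.72  -- under a dollar
print(round(billed_total_dollars, 2))  # 71.79 -- a hundred times more
```

Asking for the bill in cents, computed from scratch, forces the unit conversion out into the open instead of letting “0.002” float free of its unit.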
A housemate of mine at university had a project to build Rubik’s cube solving machine with a group as part of his course.
The “human hands are actually really good and hard to replace mechanically” would be a sentiment he could sympathise with. All other aspects of the project (the solving code, the camera data interpretation) were negligible in comparison to making hands that could turn the faces they wanted to turn, and turn them in increments of 90 degrees (more or less causes the next turn to “jam up” the cube). I think in the end they got it to the point it would usually work with a brand new cube that had never been used before, but after a few hundred moves had been made on any given cube the stiffness of the turns would have changed in an inconsistent way and jams would occur.
Some recommendations of mine:
Inverted World (Christopher Priest) - A surreal world with a very strong mystery that pulls you in. The ending is unfortunately not fully satisfying.
The Fifth Head of Cerberus (Gene Wolfe) - A very engrossing story that pulls you in and keeps you interested. Not much in the way of cool sci-fi technology or fancy concepts, but still very good.
The Stars My Destination (Alfred Bester; also published as “Tiger! Tiger!”) - This is a very fun book; the author seems to be possessed by some kind of madness, and I imagine him typing the book at the same furious pace it demands to be read.
Lord of Light (Roger Zelazny) - Very science-fantasy. But if you enjoyed the first Dune book and are looking for more in that vein, then I would recommend reading this in preference to the Dune sequels.
Raft (Stephen Baxter) - A really interesting story about the politics of oppression, where it’s all free trade, but one side needs to buy food. The backdrop is a weird-looking and strange world that makes enough surface-level sense to be fun. (Warning: this is the first story in a “sequence”; I would recommend against reading the others, which I thought were bad.)
I can’t speak to what the OP meant by that. But scientific publishing does require spin, at least if you are aiming for a good journal. There is not some magic axis by which people care about some things and not about others, so it’s your job as an author to persuade people to care about your results. This shifts the dial in all sorts of little ways.
“Well, in the end it seems like we learned nothing.” If that is the conclusion, you don’t get to publish the paper, which is not good for your career. Whereas “In conclusion, we have shown {really important result} beyond any shadow of a doubt” is good. But real results are in the middle. You have something, but there are caveats, assumptions, details that you don’t think are important but who knows, maybe they are? For any particular weakness, how much emphasis does it get? A paragraph? A sentence? A footnote? In the supplementary information? Entirely missing? How much emphasis would you give that weakness in your methodology if the publishing process was not incentivising you to put it as far down that list as possible?
The manufacturing process was not reliable, but the 32nd device tested worked fairly well, as plotted in fig.3.
”Why mention the 31 devices that the paper is not about?”
Assumption 1, assumption 2....
”It’s nice that you understand and explain these assumptions so well. But they are all pretty standard in the field. I think we can drop these paragraphs and just say “using standard approximations”… actually, “standard methods” sounds better.”
The end result:
PhD student’s draft: “It is possible that quantum information technology might be important at some point in the future. One aspect of that is 2-bit operations, but they need to be robust. One particular 2-bit operation is the CNOT gate. In this paper we demonstrate a CNOT quantum gate that only worked on the 32nd tested device. It’s nowhere near good enough for a useful quantum computer, but it worked OK on Tuesday, which is something (when I returned to get more data on Wednesday it had permanently stopped working for inexplicable reasons).”
Final (published) paper after the professor has had an edit: “Quantum technology will soon revolutionise all aspects of human society, bringing vast social and economic benefits. The key obstacle to realising these enormous gains is a reliable 2-bit quantum gate. In this paper we propose a novel design for such a gate, and find that a high level of reliability can be achieved simultaneously with improving the device’s speed.”