Here is the procrastination equation (since the image is currently broken in the main text). In math, it reads:

$$\text{Motivation} = \frac{\text{Expectancy} \times \text{Value}}{\text{Impulsiveness} \times \text{Delay}}$$
The Science article mentions some ways it may understate the benefit of increasing production, but it misses an important benefit altogether. The chance of creating vaccine-resistant strains increases with the number of unvaccinated people, and vaccine-resistant strains make current capacity less valuable. So all capacity becomes more valuable the faster you vaccinate people, because there is less chance of having to start over.
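A toy model can make this concrete. Every parameter below is invented for illustration, not epidemiology: assume each unvaccinated person-day carries a tiny independent chance of spawning a vaccine-resistant strain, and compare a slow rollout to a fast one.

```python
import math

# Toy model: each unvaccinated person-day has a tiny independent
# probability P_PER_DAY of producing a vaccine-resistant strain.
# P_PER_DAY and POP are invented numbers for illustration only.
P_PER_DAY = 1e-12
POP = 8_000_000_000

def p_resistant_strain(days_to_full_vaccination: float) -> float:
    # Linear rollout: the average unvaccinated population is POP / 2,
    # so total unvaccinated person-days = POP * days / 2.
    person_days = POP * days_to_full_vaccination / 2
    # P(at least one escape event) = 1 - (1 - p)^person_days
    return 1 - math.exp(person_days * math.log1p(-P_PER_DAY))

slow = p_resistant_strain(720)   # two-year rollout
fast = p_resistant_strain(180)   # six-month rollout
print(f"slow rollout: {slow:.2f}, fast rollout: {fast:.2f}")
```

Because the escape probability compounds over person-days, halving or quartering the rollout time cuts it sharply, which is the extra benefit the article misses.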
Re: dominant assurance contracts/crowdfunding
The article makes the bad assumption that the distribution of individuals' valuations of the public good is common knowledge. A good entrepreneur will do market research to try to estimate this distribution, but better approximations cost more. Entrepreneurs will also be biased toward thinking their idea is good, so many entrepreneurs will likely have bad models. Most individuals will not know the distribution either. So there is another way to profit for the small fraction of individuals who do have decent approximations of it: buy contracts that are likely to fail.
I'm not sure how this affects the whole scheme, but I'm pretty sure it limits the failure payoffs to significantly less than their value in the case where the distribution really is common knowledge.
The assumption that each individual knows their own valuation of the good is also false. An individual has less variance in their estimate than others do, but would still need to invest resources to learn their valuation precisely. I'm not sure what this error does to the contracts; I doubt it has as much effect as the common-knowledge assumption.
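A minimal Monte Carlo sketch of that speculation mode (every number here is invented, and the real analysis depends on the actual contract terms): an entrepreneur with a bad model of the valuation distribution posts a contract that cannot plausibly reach its funding target, and an agent who knows the true distribution pledges purely to collect the failure payoff.

```python
import random

random.seed(0)

# All parameters invented for illustration.
N = 1_000          # population size
COST = 60_000      # funding target -- note N * PRICE = 50_000 < COST,
                   # i.e. the entrepreneur's model of demand is badly wrong
PRICE = 50         # pledge price per contributor
PAYOFF = 5         # failure payoff per pledger (the "dominant" sweetener)

def draw_value() -> float:
    """True distribution of individual valuations; the entrepreneur
    only has a biased, expensive-to-improve estimate of this."""
    return random.lognormvariate(3.0, 1.0)   # median valuation ~ $20

def contract_succeeds() -> bool:
    pledgers = sum(1 for _ in range(N) if draw_value() >= PRICE)
    return pledgers * PRICE >= COST

TRIALS = 2_000
p_success = sum(contract_succeeds() for _ in range(TRIALS)) / TRIALS

# A speculator who approximates the true distribution well: pledges are
# refunded on failure, so for someone who values the good at ~0 the
# expected profit per contract is roughly
#   P(fail) * PAYOFF - P(success) * PRICE.
speculator_ev = (1 - p_success) * PAYOFF - p_success * PRICE
print(f"P(success) = {p_success:.3f}, speculator EV = ${speculator_ev:.2f}")
```

With these numbers the contract essentially cannot succeed, so the failure payoff is nearly free money for anyone who can recognize a doomed contract, which is why large failure payoffs seem unsustainable when the distribution is not common knowledge.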
The earliest citation in Wikipedia is from 1883, and it is a question and answer: “If a tree were to fall on an island where there were no human beings would there be any sound?” [The asker] then went on to answer the query with, “No. Sound is the sensation excited in the ear when the air or other medium is set in motion.”
So, if this is truly the origin, they knew the nature of sound when the question was first asked.
Take all the metaphysical models of the universe that any human ever considers, and call their number N.
This N is huge. Approximate it by the number of strings that can be generated in a certain formal language over the lifetime of the human race. We're probably talking about billions even if the human race ceases to exist tomorrow. (Suppose 1 in 7 people has had a novel metaphysical idea; that gives about a billion with just the people on earth today. If you think that's a high estimate, remember that people get into weird states of consciousness through fever, drugs, exertion, meditation, and other triggers, so random strings in that language are likely.)
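The Fermi estimate in that parenthetical can be written out (the population figure and the 1-in-7 fraction are the comment's assumptions, not data):

```python
# Fermi estimate: if 1 in 7 people alive today has ever entertained a
# novel metaphysical idea, the count N of distinct metaphysical models
# already exceeds a billion, before counting anyone who ever lived.
WORLD_POPULATION = 8_000_000_000   # rough 2020s figure
NOVEL_IDEA_FRACTION = 1 / 7        # assumption from the comment above

n_lower_bound = WORLD_POPULATION * NOVEL_IDEA_FRACTION
print(f"N >= {n_lower_bound:.2e}")   # on the order of 1e9
```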
You may want to define “metaphysical idea” (and thus that language) better. Some examples of what I mean by “metaphysical idea:”
“Is my blood made of spiders?”;
“Things get hot because they contain phlogiston.”
What experimental tests has clash theory survived?
If recoupments occur sparingly, as I’d expect, where should the remaining funds go?
Keep them for “times of national emergency” etc. to hedge against correlated risk.
How big is the risk that the fund will be used in illicit ways, such as tax evasion, despite the fact that donors cannot claim more than they spent?
Modern society strongly incentivizes misusing anything that touches money, so without further evidence, I’d say that the risk is very high (near certainty). If we haven’t found a way to misuse it, it is more likely that we are not clever enough than that the way does not exist.
First thought: I put $100 into an 80% fund. I wait a year and claim the tax break on the 80% donation, netting, say, 30% × $80 = $24 in reduced taxes. Then I take out $95. I've made $19 on the trade. Of course, a government would see this right away and not allow tax breaks for such contributions, but this sort of thing seems ripe for abuse.
Another: I put in $100, $80 goes to a “charity” that gives me a 10% kickback. Then I take out $95 and I’ve made $3.
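The arithmetic in both schemes checks out (the 30% marginal tax rate is the figure assumed above):

```python
TAX_RATE = 0.30   # assumed marginal tax rate, as in the example above

# Scheme 1: deposit $100 into an 80% fund, deduct the $80 "donation",
# then withdraw $95 a year later.
deposit, withdrawal = 100, 95
tax_break = TAX_RATE * 80                      # 30% of the $80 donation
profit_1 = -deposit + tax_break + withdrawal   # -100 + 24 + 95

# Scheme 2: deposit $100, $80 goes to a "charity" that pays a 10%
# kickback, then withdraw $95.
kickback = 0.10 * 80
profit_2 = -deposit + kickback + withdrawal    # -100 + 8 + 95

print(profit_1, profit_2)   # 19.0 3.0
```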
You might be able to fix this by requiring that contributors maintain almost all of their assets as property of the fund. Then if I make a withdrawal for “an emergency” I can’t keep any profit or buy anything that doesn’t go right back to the fund. But that sounds a lot like the “everything in common” schemes that have failed so often in the past. So, we’d need to modify it to make it viable.
I found a review on Amazon (quoted at the bottom, since I cannot link to it) that says Ecker is injecting significant personal opinion and slanting his report of the science. I don’t know if this is true, but the gushing praise from readers and psychology’s history of jumping on things rather than evaluating evidence make it seem more likely than not. For me, this means that reading this book will involve getting familiar with the associated papers.
The Review
by “scholar”
Previously I posted a very positive review of this book. On further reflection and study of the relevant research papers, I have a very different view. The science of memory reconsolidation is complex and subtle. Its application to clinical work with real patients remains predominantly hypothetical. Ecker creates the impression that the conditions for memory reconsolidation and updating are now known and clear. They are not. His claims for their application to clinical practice (in my view) go rather beyond the evidence. Moreover, when I read his clinical examples later in the book, I completely fail to see how they relate specifically to the science he earlier quotes—they just seem to be examples of his therapeutic approach called Coherence Therapy (which predates his interest in memory reconsolidation) - and although these are certainly interesting, I cannot grasp how they illustrate the principles of memory reconsolidation. The positive outcome is that this book, which I eventually found confusing and infuriating, prompted me to study further this fascinating field of enquiry. There are undoubtedly potential clinical applications, but I feel Ecker’s enthusiasm is a little premature.
I had a similar issue. I could not do the exercise because I could not figure out how to evaluate confidence and competence separately. I always end up on the x=y line. Reading this thread did not help. “Anticipated okayness of failure” doesn’t change much with time for the same task, so that is a vertical line. “Confidence” = “Self-related ability to improve” is an interesting interpretation (working on “confidence” would be working on learning skills). Still, intuitively it feels off from what the graphs say (though I haven’t been able to put the disconnect into words). Thinking about the improv/parachute graph, maybe “confidence” is “willingness to attempt a task despite being incompetent.” I’m giving up for now.
If one must choose between a permanent loss of human life and some temporary discomfort, it doesn’t make sense to prefer the permanent loss of life, regardless of the intensity of the discomfort.
This choice doesn’t exist; permanent death is inevitable under known physics. All lifespans are finite because the time the universe will support consciousness is most likely finite, whether through heat death or the Big Rip. This finiteness makes your “you save one life, and 7 billion humans suffer for 100 billion years” question not at all obvious. Saving a life is not avoiding death; it is postponing it. Thus, you could rewrite your scenario as: “should I give someone 80 extra years of normal life before they die if, in exchange, instead of dying at their normal time, 7 billion humans are tortured for 100 billion years and then die?” Under a Rawlsian veil of ignorance, I would not choose to “save the life” in this scenario. Even if that person survived until the end of the black-hole-farming era, I probably still wouldn’t choose it. There is too much chance that I would end up being one of the tortured. (Though a small chance at an astronomically long life, weighed against the years of torture, is pretty tempting on an expected-value basis, so I’m not sure.)
As others have commented, I also think the reversibility of suffering is a weak point. We do not know how hard it is. It may have the same difficulty level as resurrection. But, if you specify that the finite span of torture happens instead of normal death, you avoid this.
Like pjeby, I think you missed his point. He was not arguing from authority, he was presenting himself as evidence that someone tech-savvy could still see it as a trap. His actual reason for believing it is a trap is in his reply to GWS.
Rationalism requires stacktraces terminating in irrefutable observation
Like the previous two commenters, I find this statement odd. I don’t fully trust my senses. I could be dreaming/hallucinating. I don’t fully trust my knowledge of my thoughts. By this definition of a rationalist, I could never be one (and maybe I’m not) because I don’t think there is such a thing as an irrefutable observation. I think there was a joke in that statement, but, unobserved by me, it took flight and now soars somewhere else.
Hey Siri, “Is there a God?” “There is now.”
- Adapted from Fredric Brown, “The Answer”—for policymakers.
Orwell’s boot stamped on a human face forever. The AI’s boot will crush it first try.
Humanity’s incompetence has kept us from destroying ourselves. With AI, we will finally break that shackle.
Someone who likes machines more than people creates a machine to improve the world. Will the “improved” world have more people?
COVID and AI grow exponentially. In December 2019, COVID was a few people at a fish market. In January, it was just one city. In March, it was the world. In 2010, computers could beat humans at Chess. In 2016, at Go. In 2022, at art, writing, and truck driving. Are we ready for 2028?
The concept of a translator between conceptual frameworks reminds me of the narrator of Blindsight—a man who had special cybernetic enhancements to allow him to do this type of translation.