The prediction that the sun and stars we perceive will go out is absurd only if you exclude the possibility that you are dreaming. In what we label as dreams, we frequently perceive things that quickly pop out of existence.
I’m confused since, as a buyer, if I believed the seller could predict with probability .75, I would flip a fair coin to decide which box to take, meaning the seller couldn’t in fact predict with probability .75. If I can’t randomize to pick a box, I’m not sure how to fit what you are doing into standard game theory (which I teach).
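Here is a quick toy simulation of that randomization point (my own illustration; the function names and setup are just placeholders, not part of the original problem): against a buyer who picks a box by fair coin flip, no predictor strategy can be right more than about half the time.

```python
import random

def predictor_accuracy(predict, trials=100_000):
    """Estimate how often `predict` guesses the buyer's box when the
    buyer chooses by a fair coin flip. Because the coin is independent
    of the prediction, accuracy is capped at 1/2 for any strategy."""
    correct = 0
    for _ in range(trials):
        guess = predict()                   # predictor commits first
        choice = random.choice(["A", "B"])  # buyer's fair coin flip
        correct += (guess == choice)
    return correct / trials

# Any strategy -- constant, random, or otherwise -- lands near 0.5:
print(predictor_accuracy(lambda: "A"))
print(predictor_accuracy(lambda: random.choice(["A", "B"])))
```

Both estimates come out near .5, not the claimed .75, which is the tension with standard game theory I was pointing at.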
“Over the past few years, some people have updated toward pretty short AGI timelines. If your timelines are really short, then maybe you shouldn’t sign up for cryonics, because the singularity – good or bad – is overwhelmingly likely to happen before you biologically die”
But such a scenario means there is less value in saving for retirement, which should make it financially easier for you to sign up for cryonics. Also, the sooner we get friendly AGI, the sooner people in cryonics will be revived, meaning there is a lower risk that your cryonics provider will fail before you can be revived.
Strongly agree. I would be happy to help. Here are three academic AI alignment articles I have co-authored. https://arxiv.org/abs/2010.02911 https://arxiv.org/abs/1906.10536 https://arxiv.org/abs/2003.00812
While not captured by the outside view, I think the massive recent progress in machine learning should give us much hope of achieving LEV in 30 years.
Yes: the more people infected with the virus, and the longer the virus persists in them, the more time there is for a successful mutation to arise.
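As a toy illustration of that scaling (the numbers below are made up purely for the arithmetic, not estimates of anything): the expected count of mutation events grows with the number of infections times how long each lasts.

```python
# Toy numbers only: expected mutation events scale with
# (people infected) x (days infectious per case) x (per-person-day rate).
infected = 1_000_000        # hypothetical concurrent infections
days_infectious = 10        # hypothetical average duration per case
p_mutation_per_day = 1e-6   # hypothetical per-person-day probability

expected_events = infected * days_infectious * p_mutation_per_day
print(expected_events)  # 10.0 expected mutation events in this toy case
```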
I did a series of podcasts on COVID with Greg Cochran, and Greg was right early on. Greg has said from the beginning that the risk of a harmful mutation is reasonably high: because the virus is new, there are likely lots of potential beneficial mutations (from the virus’s viewpoint) that have not yet been found.
From an AI safety viewpoint, this might greatly increase AI funding and drive talent into the field, and so bring forward the date at which we get a general artificial superintelligence.
Yes, for a high concentration of observers. If high-tech civilizations have strong incentives to grab galactic resources as quickly as they can, thus preventing the emergence of other high-tech civilizations, then most civilizations such as ours will exist in universes that have some kind of late great filter to knock down civilizations before they can become spacefaring.
Thanks, that’s a very clear explanation.
At the end of Section 5.3 the authors write “So far, we have assumed that we can derive no information on the probability of intelligent life from our own existence, since any intelligent observer will inevitably find themself in a location where intelligent life successfully emerged regardless of the probability. Another line of reasoning, known as the “Self-Indication Assumption” (SIA), suggests that if there are different possible worlds with differing numbers of observers, we should weigh those possibilities in proportion to the number of observers (Bostrom, 2013). For example, if we posit only two possible universes, one with 10 human-like civilizations and one with 10 billion, SIA implies that all else being equal we should be 1 billion times more likely to live in the universe with 10 billion civilizations. If SIA is correct, this could greatly undermine the premises argued here, and under our simple model it would produce high probability of fast rates that reliably lead to intelligent life (Fig. 4, bottom)...Adopting SIA thus will undermine our results, but also undermine any other scientific result that would suggest a lower number of observers in the Universe. The plausibility and implications of SIA remain poorly understood and outside the scope of our present work.”
I’m confused, probably because anthropic effects confuse me and not because the authors made a mistake. But don’t the observer selection effects the paper uses derive information from our own existence? And if we make use of these effects, shouldn’t we also accept the implications of SIA? Should rejecting SIA because it yields some bizarre theories also cause us to trust observer selection effects less?
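For concreteness, here is the Bayesian bookkeeping behind the SIA weighting in the quoted passage, using the paper’s own two-universe example and assuming equal priors (as the quote’s “all else being equal” does):

\[
\frac{P(U_{\text{big}} \mid \text{I exist})}{P(U_{\text{small}} \mid \text{I exist})}
= \frac{N_{\text{big}}}{N_{\text{small}}} \cdot \frac{P(U_{\text{big}})}{P(U_{\text{small}})}
= \frac{10^{10}}{10} \cdot 1 = 10^{9}.
\]

With equal priors, the posterior odds favor the universe with 10 billion civilizations by a factor of one billion, which is exactly the “1 billion times more likely” figure in the quote.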
Not that I recall.
In 2007 I wrote an article for Inside Higher Ed advocating that “institutions should empower graduating seniors to reward teaching excellence. Colleges should do this by giving each graduating senior $1,000 to distribute among their faculty. Colleges should have graduates use a computer program to distribute their allocations anonymously.”
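A minimal sketch of the kind of program I had in mind (my illustration, not something from the article; the professor names are hypothetical): each graduate submits an allocation summing to $1,000, and the system keeps only the per-professor totals, preserving anonymity.

```python
from collections import defaultdict

def aggregate(allocations):
    """Sum each graduate's $1,000 allocation into per-professor totals,
    keeping no record of which graduate gave what (anonymity)."""
    totals = defaultdict(int)
    for allocation in allocations:  # one dict per graduate
        assert sum(allocation.values()) == 1000, "must allocate exactly $1,000"
        for professor, dollars in allocation.items():
            totals[professor] += dollars
    return dict(totals)

# Hypothetical example: two graduates rewarding three professors.
print(aggregate([
    {"Prof. Smith": 700, "Prof. Jones": 300},
    {"Prof. Jones": 500, "Prof. Lee": 500},
]))  # {'Prof. Smith': 700, 'Prof. Jones': 800, 'Prof. Lee': 500}
```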
In an accident, something from your car could hit you in the head even if you have an airbag. For example, the collision could cause your head to hit a side window.
The helmet I linked to is light and doesn’t block your vision, so I don’t see how it could do any harm. It would do a lot of good if you were wearing it when your head collided with something.
Do you wear a helmet when in a car? I do.
Think of mutational load as errors. Reducing errors in the immune system’s genetic code should decrease the risk of pandemics. Reducing errors in people’s brains should greatly increase the quality of intellectual output. Hitting everyone in the head with a hammer a few times could, I suppose, through an extraordinarily lucky hit, cause someone to produce something good that they otherwise wouldn’t, but most likely the hammer blows (analogous to mutational load) just give us bad stuff.
The best way to radically increase the intelligence of humans would be to use Greg Cochran’s idea of replacing rare genetic variations with common ones, thereby greatly reducing mutational load. Because of copying errors, new mutations keep getting introduced into populations, while evolutionary selection keeps working to reduce the spread of harmful mutations. Consequently, if an embryo has a mutation that few other people have, that mutation is far more likely to be harmful than beneficial. Replacing all rare genetic variations in an embryo with common variations would likely result in the eventual creation of a person much smarter and healthier than has ever existed. The primary advantage of Cochran’s genetic engineering approach is that we can implement it before we learn the genetic basis of human intelligence. The main technical obstacle to implementing it, from what I understand, is our inability to edit genes with sufficient accuracy, at sufficiently low cost, and with sufficiently few side effects.
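A minimal sketch of the editing rule itself, assuming we already have population allele frequencies for each site (the genotype encoding, site names, and the 1% rarity cutoff are my own illustrative choices, not Cochran’s):

```python
def remove_rare_variants(genome, allele_freq, threshold=0.01):
    """For each site, replace any allele rarer than `threshold` in the
    population with that site's most common allele. This targets likely
    copying errors without knowing which genes affect intelligence."""
    edited = {}
    for site, allele in genome.items():
        freqs = allele_freq[site]  # e.g. {"A": 0.98, "G": 0.02}
        if freqs.get(allele, 0.0) < threshold:
            # Swap the rare (probably harmful) allele for the common one.
            edited[site] = max(freqs, key=freqs.get)
        else:
            edited[site] = allele
    return edited

# Hypothetical example: site rs2 carries a rare variant and gets replaced.
genome = {"rs1": "A", "rs2": "T"}
allele_freq = {"rs1": {"A": 0.98, "G": 0.02},
               "rs2": {"C": 0.995, "T": 0.005}}
print(remove_rare_variants(genome, allele_freq))  # {'rs1': 'A', 'rs2': 'C'}
```

The point of the sketch is that the rule needs only frequencies, not any causal model of which variants matter, which is why the approach could work before we learn the genetic basis of intelligence.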
Much of the harm of aging comes from the increased likelihood of getting many diseases, such as cancer, heart disease, Alzheimer’s, and strokes, as you age. From my limited understanding, Metformin reduces the age-adjusted chance of getting many of these diseases, and thus it’s reasonable, I believe, to say that Metformin has anti-aging effects.