That’s very interesting. I’d be interested to see if this actually leads to an uptick of interest in Southeast Asia.
JoshuaZ
I think gjm responded pretty effectively, so I’m just going to note that if you want to have a dialogue with other humans, it really isn’t helpful to spend your time insulting them. It makes them less likely to listen, it makes one less likely to listen oneself (since one sets up a mental block where it is cognitively unpleasant to admit one was wrong when one was), and it makes bystanders who are reading less likely to take your ideas seriously.
By the way Eray, you claimed back last November here that 2018 was a reasonable target for “trans-sapient” entities. Do you still stand by that?
Yes, but at that point this becomes a completely unfalsifiable and unevaluatable claim, and even less relevant to Filtration concerns.
I don’t think this is what is going on here at all. The pattern match that is going on is between cryonics and fringe-science or pseudoscientific ideas that sound like they are promising things they cannot deliver. This is much more about PZ thinking of himself as a skeptic and having just enough biology background to think he can comment on any biology-related issue.
I’d say inconsistent rather than incoherent moral standards, or different moral standards at tension.
Honestly, this seems like a “well, duh” sort of thing. One just needs to read the rhetoric from say both sides of the US immigration debate, or both sides of the discussions in Europe about refugees from North Africa to see this pretty clearly.
He’s not saying in that quote that they shouldn’t feel an obligation; he’s making a point focused on doubting whether they’d want to resurrect them. I think they very likely would, and PZ is ignoring the entire first-in/last-out approach that cryonics organizations plan to use to further encourage future people to carry out resurrections, but it helps to actually focus on what his criticism is.
Make travel easier. Thanks for catching that!
In that case, I’m confused as to what the error is. Can you expand?
I may not have explained this very well. The essential idea being examined is the proposal that one doesn’t have a single low-probability event but a series of low-probability events that must happen in sequence after life arises (say, multicellular life, the development of neurons or some equivalent, the development of writing, etc.), even as many star systems have chances at each step in parallel. In that case, most civilizations should show up around the long-lived stars as long as the other steps are only marginally unlikely.
Thus, the tentative conclusion is that life itself isn’t that unlikely and we find ourselves around a star like the sun because they have much bigger habitable zones than red dwarfs (or for some similar reason) so the anthropics go through as your last bit expects.
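The serial-steps reasoning can be made concrete with a toy Monte Carlo (my own illustrative sketch, not anyone’s published model: the step count, lifetimes, and mean step times below are made up for illustration):

```python
import random

# Toy "hard steps" model: life must pass k improbable steps in series
# before a star's habitable lifetime runs out. Each step's waiting time
# is exponential with a mean far longer than a sun-like lifetime, so
# P(all k steps finish in time) scales roughly like (lifetime/mean)^k,
# which overwhelmingly favors long-lived stars. All numbers are made up.
def p_success(k, lifetime, mean_step, trials=200_000):
    wins = 0
    for _ in range(trials):
        total = sum(random.expovariate(1.0 / mean_step) for _ in range(k))
        if total <= lifetime:
            wins += 1
    return wins / trials

random.seed(0)
# Here the red-dwarf-like star lives 100x longer than the sun-like one.
p_sun = p_success(k=3, lifetime=1.0, mean_step=50.0)
p_dwarf = p_success(k=3, lifetime=100.0, mean_step=50.0)
print(p_sun, p_dwarf)  # the long-lived star wins by a huge factor
```

Under these toy numbers essentially no civilizations arise around the short-lived star, which is why finding ourselves around a sun-like star is evidence that something (e.g. habitable-zone size) offsets the longevity advantage of red dwarfs.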
People have also suggested that civilizations move outside galaxies to the cold of space where they can do efficient reversible computing using cold dark matter. Jacob Cannell has been one of the most vocal proponents of this idea.
Do you have other examples of this idea? I’m just curious where you may have encountered it.
Hanson mentions it in the original Great Filter piece, and I’ve seen it discussed in various internet fora (for example, r/futurology on Reddit).
You’re correct that I should do a better job distinguishing the various versions of this hypothesis. I do think they run into essentially the same problems; reversible computing doesn’t mean one has unlimited computational power. If the material in question is conventional baryonic matter then it cannot be a large fraction of actual dark matter, so it loses the appeal of being an explanation for that (and yes, the other explanations for the Filter don’t explain dark matter, but versions of this at one point had this as their main selling point). Moreover, it isn’t at all clear how you would have multiple such objects communicate with each other.
A few months ago, I asked you here what you thought the form of these dark matter entities were and you didn’t reply. It seems that since then you’ve thought a lot more about this. I’m going to have to think carefully about what you have said above and get back to you later.
First, everything in any practical simulation is always and everywhere an approximation. An exact method is an enormously stupid idea: a huge waste of resources.
We haven’t seen anything like evidence that our laws of physics are only approximations at all. If we’re in a simulation, this implies that with high probability either a) the laws of physics in the parent universe are not our own laws of physics (in which case the entire idea of ancestor simulations fails) or b) they are engaging in an extremely detailed simulation.
The optimal techniques only simulate down to the quantum level when a simulated scientist/observer actually does a quantum experiment. In an optimal simulated world, stuff literally only exists to the extent observers are observing or thinking about it.
And our simulating entities would be able to tell that someone was doing a deliberate experiment how?
The limits of optimal approximation appear to be linear in observer complexity—using output sensitive algorithms.
I’m not sure what you mean by this. Can you expand?
The upshot of these results is that one cannot make a detailed simulation of an object without using at least as many resources as the object itself.
Ultra-detailed accurate simulations are only high value for quantum-level phenomena. Once you have a good model of the quantum scale, you can percolate those results up to improve your nano-scale models, and then your micro-scale models, and then your millimeter-scale models, and so on.
Only up to a point. It is, for example, going to be very difficult to percolate simulations up from the micro scale to the millimeter scale for many issues, and the less detail in a simulation, the more likely it is that someone notices a statistical artifact in weakly simulated data.
We already can simulate entire planets using the tiny resources of today’s machines. I myself have created several SOTA real-time planetary renderers back in the day.
Again, the statistical artifact problem comes up, especially when there are extremely subtle issues going on, such as the different (potential) behavior of neutrinos.
Your basic point that I may be overestimating the difficulty of simulations may be valid. Since simulations don’t explain the Great Filter for the other reasons I discussed, this causes an update in the direction of us being in a simulation but doesn’t really help explain the Great Filter much at all.
This explanation does not work. Say exactly one species gets to emerge and do this. Then why would that one species show up so late?
Also, a quick note. CellBioGuy provided the estimate that supernovas are not a substantial part of the Filter by noting the volume of the galactic disc (~2.6*10^13 cubic light years), a sterilization radius of around 10 light years (about 3 parsecs), and one supernova roughly every 50 years. Each supernova then sterilizes about 4,200 cubic light years, or around 100 cubic light years per year, giving odds of any given star system being sterilized of roughly one in 300 billion per year.
To expand on their reasoning: one can argue that the supernova sterilization distance is too small, and if one increases it by a factor of 5 as an upper bound, that increases the sterilized volume by a factor of 5^3 = 125, giving a sterilization event per star system of roughly one in a few billion years, which is still very rare. One can also argue that much of the galactic center is too violent to host habitable life, which might cut the total volume available by about 20%; even if one assumes (contrary to fact) that none of the supernovas are in that region, this still gives only about one event every couple of billion years per star system, so this seems strongly not to be the Filter.
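The back-of-the-envelope numbers above are easy to check directly (treating all inputs as order-of-magnitude estimates, not precise values):

```python
import math

# Inputs from CellBioGuy's estimate (order-of-magnitude only).
disc_volume_ly3 = 2.6e13      # galactic disc volume, cubic light years
radius_ly = 10.0              # assumed sterilization radius per supernova
years_per_supernova = 50.0    # roughly one galactic supernova per 50 years

# Volume sterilized per event and per year.
per_event = (4.0 / 3.0) * math.pi * radius_ly ** 3   # ~4,200 ly^3
per_year = per_event / years_per_supernova           # ~84 ly^3/yr

# Chance a given star system sits inside a sterilization volume each year.
p_per_year = per_year / disc_volume_ly3
print(f"base case: ~1 in {1 / p_per_year:.1e} per star system per year")

# Upper-bound check: 5x larger radius scales the volume by 5**3 = 125.
p_upper = p_per_year * 125
print(f"5x radius: ~1 in {1 / p_upper:.1e} per star system per year")
```

Even the deliberately pessimistic upper bound leaves sterilization events rarer than about one per star system per billion years, which is why this can’t plausibly be the Filter.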
I’m not sure if this piece should go here or in Main (opinions welcome).
Thanks to Mass_Driver, CellBioGuy, and Sniffnoy for looking at drafts, as well as Josh Mascoop, and J. Vinson. Any mistakes or errors are my own fault.
Astronomy, space exploration and the Great Filter
You can do URLs with parentheses; you just need to put a \ before the closing paren inside the URL. So:
This is your link00266-3?cc=y)
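For instance, with a made-up URL containing a parenthesized fragment (the domain and path here are hypothetical), the escaped closing paren keeps the markdown parser from ending the link early:

```
[This is your link](http://example.com/article(14\)00266-3?cc=y)
```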
Excellent. Thank you.
A new study suggests that people are overly optimistic about how technologies can succeed, in ways that substantially impact decision making. A summary article of the work is here, while the article itself is behind a paywall here. This seems very interesting, and if anyone has a non-paywalled copy I’d be very interested in reading it. It looks like this may be to some extent a culturally driven rather than innate bias, but for most purposes the effects will probably be similar.
Echoing Hairyfigment here, what is wrong with Bourbaki?
That’s an interesting hypothesis. Is there any way to test it? Also is there any way to take advantage of it? That suggests that the window for cryonics there may not be very long, possibly on the order of 20 years or so.