Looking at this from a utilitarian perspective, why don't you consider the pleasure of eating meat here at all?
Coacher
If there IS an alien superintelligence in our own galaxy, then what could it be like?
Many Worlds against Simulation?
Compared to what alternative?
With this in mind, could it be possible to construct a roulette betting system which has positive expected utility?
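A quick sanity check on the monetary side (a sketch, using standard European roulette odds): every individual roulette bet loses the same fraction of the stake in expectation, so no combination or sequence of bets can have positive expected monetary value. Any positive expected *utility* would therefore have to come from a nonlinear utility function, not from the betting system itself.

```python
from fractions import Fraction

POCKETS = 37  # European roulette: numbers 0-36

def expected_value(win_pockets: int, payout: int) -> Fraction:
    """EV of a 1-unit bet covering `win_pockets` pockets at `payout`-to-1."""
    p_win = Fraction(win_pockets, POCKETS)
    return p_win * payout - (1 - p_win) * 1

ev_red = expected_value(18, 1)        # even-money bet (red), pays 1:1
ev_straight = expected_value(1, 35)   # straight-up bet, pays 35:1
print(ev_red, ev_straight)  # both -1/37: same house edge per unit staked
```

Since the EV per unit staked is identically -1/37, the EV of any strategy is just -1/37 times the total amount it expects to stake.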
Another hypothesis: the smarter you sound, the fewer friends you tend to have.
Could it also be that being rational consumes a portion of the human brain's CPU/RAM that would otherwise be used for something better?
On the other hand, I don't see why an AI that does spread cannot be a great filter. Let's assume:
Every advanced civilization creates AI soon after creating radio.
Every AI spreads immediately (hard takeoff) and does so at near light speed.
Every AI that reaches us, immediately kills us.
We have not seen any AI, and we are still alive. That can only be explained by the anthropic principle: every advanced civilization that has even slightly more advanced neighbors is already dead. Every advanced civilization that has slightly less advanced neighbors has not seen them, as they have not yet invented radio. This solves the Fermi paradox, and we can still hope to see some primitive life forms on other planets. (Also, an AI may be approaching us at the speed of light and will wipe us out any moment now.)
For the hypothesis to hold, the AI needs to:
Kill their creators efficiently.
Not spread.
Do both of these things, every time any AI is created, with a near-100% success rate.
That seems like a lot of assumptions, with no good argument for any of them?
Or the way we try to keep isolated people isolated (https://en.wikipedia.org/wiki/Uncontacted_peoples)
Now somebody will steal the idea about bike shops.
How do you solve interpersonal problems when neither side can see itself as the one at fault?
Is there any other kind?
I wanted to recommend she apply for graphic design and video editing work, which she is talented at, since she isn't sure what she can do career-wise, but now it's too late. I wanted to watch I, Origins with her, since it reflects our story, but now it's too late. I wanted to watch her favourite movie, 'One Day', with her, which also reflects our story, but now it's too late. I wanted to hand-write her a letter, but now it's too late. I wanted to surprise her after or during work, but now it's too late. I wanted to share with her 6 of my 7 major secrets (the 7th being my passwords...), to show her how I looked as a kid, and to 'dine at the y', but now it's too late. She pushed me away when I sent her a good-morning text with a love heart at the end.
Well, at least your accounts are safe now.
I'll be scared when they do Counter-Strike.
The problem here is that we are talking about two different concepts: experienced moments (as in anthropics) and Everett branches (as in Many Worlds). There is a way to think of them as the same, but they are not necessarily so. For example, if there is one Bob before measuring a spin, and two Bobs, Bob-up and Bob-down, after measuring, which is more probable to experience: being Bob before the measurement, or being Bob after it? (TBH I have no idea)
By “measures as” I mean the probability of experiencing exactly this moment, out of the set of all possible moments that “exist” (or can be experienced). And by “measures as 1” I mean that if several physical “carriers” produce the exact same experience, that counts as 1 experience in the grand total set of experiences, and the probability of feeling exactly that is 1/(count of all distinct experiences). Now, I know this is controversial and counterintuitive. But it is still quite plausible, given what we know about consciousness. If consciousness emerges at the level of algorithms and logic, why would it care how many physical things produce it? If one were asked how many movies about a human pretending to be a blue alien on planet Pandora he knows, the answer would probably be 1, and not the number of digital copies of Avatar ever made.
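As a toy model of this counting rule (my own illustration, not established theory): identify experiences by their content, so duplicate physical carriers collapse into one entry, and each distinct experience gets probability 1/(number of distinct experiences).

```python
from fractions import Fraction

# Hypothetical labels: three carriers play back the identical "Avatar"
# experience, plus two genuinely different Bob-experiences.
carriers = [
    "avatar-playback",  # cinema copy 1
    "avatar-playback",  # cinema copy 2 -- same content, same experience
    "avatar-playback",  # streaming copy -- still the same experience
    "bob-up",
    "bob-down",
]

distinct = set(carriers)  # duplicates count as 1, per the rule above
prob = {e: Fraction(1, len(distinct)) for e in distinct}
print(prob)  # each of the 3 distinct experiences measures as 1/3
```

The point of the `set` is exactly the Avatar example: five carriers, but only three experiences, so each measures as 1/3 rather than being weighted by carrier count.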
I agree with the criticism of assumption 2. Although I have an intuition (based on possibly very wrong intuitions I have about QM) that the argument still works even without it: imagine the same human runs the simulation. Then he goes to another table, where he runs a spin-measuring experiment with a 50/50 probability of getting either up or down. After seeing the result, there are now two different consciousnesses of him, but there is still just one copy of the simulated brains, as they did not see the result.
Also, what if intelligent life is just a rare event? Not rare enough to explain the Fermi paradox by itself, but rare enough that we could be considered among the earliest, and therefore surprisingly early in the history of the universe? Given how long the universe will last, we actually are quite early: https://en.wikipedia.org/wiki/Timeline_of_the_far_future
Can it predict something real/measurable?
Buying insurance is rational for low-chance, high-cost risks (i.e., costs bigger than what you have in your bank account at the moment). It is not rational for low-cost risks, like losing your phone, unless you tend to lose your phone more often than insurance companies account for.
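The arithmetic behind this can be sketched with a concave (risk-averse) utility function; the numbers below are hypothetical, chosen so both premiums include an insurer's margin and are therefore EV-negative in pure money terms:

```python
import math

WEALTH = 10_000.0  # hypothetical current wealth
P_LOSS = 0.01      # chance of the loss occurring

def expected_log_utility(loss: float, premium: float, insured: bool) -> float:
    """Expected log-utility of final wealth (concave => risk-averse)."""
    if insured:
        return math.log(WEALTH - premium)
    return (1 - P_LOSS) * math.log(WEALTH) + P_LOSS * math.log(WEALTH - loss)

# Ruinous loss: premium 120 vs. expected loss of 90 -> still worth insuring.
insure_house = expected_log_utility(9_000, 120, True) > expected_log_utility(9_000, 120, False)
# Cheap loss (phone): premium 2 vs. expected loss of 1.5 -> not worth it.
insure_phone = expected_log_utility(150, 2, True) > expected_log_utility(150, 2, False)
print(insure_house, insure_phone)  # True False
```

Both policies lose money in expectation, yet only the ruinous one is worth buying: the concavity of the utility function penalizes the near-total loss far more than it penalizes the small premium.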