How has LessWrong changed my life? I would say that I have learned a lot about Bayesianism and epistemology. I had already become a transhumanist and developed an interest in cryonics before I knew LessWrong existed.
ImmortalRationalist
Humans underestimating the chance of being caught raises the question of why they underestimate it in the first place. Why have humans evolved ethical inhibitions, as opposed to a better sense of the likelihood of being caught? Still, evolution isn't perfect.
Using Bayesian reasoning, what is the probability that the sun will rise tomorrow? If we assume that induction works, and that something happening previously (e.g. the sun rising before) increases the posterior probability that it will happen again, wouldn't we ultimately need some kind of "first hyperprior" to base our Bayesian updates on, for when we originally lack any data to conclude that the sun will rise tomorrow?
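One classical (if not fully satisfying) answer to the sunrise question is Laplace's rule of succession, which takes a uniform prior over the sun's "rise rate" as its starting hyperprior. A minimal sketch, with the 10,000-day count chosen purely for illustration:

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule of succession: P(next success) = (s + 1) / (n + 2).

    This is what a uniform (Beta(1, 1)) prior over the unknown
    success rate gives after updating on n trials with s successes.
    """
    return Fraction(successes + 1, trials + 2)

# After observing the sun rise on 10,000 consecutive days:
p = rule_of_succession(10_000, 10_000)
print(p)         # 10001/10002
print(float(p))  # ~0.9999
```

Note that the uniform prior here is itself an unargued starting point, which is exactly the regress the comment is pointing at.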
The money you would have given to a beggar might be better spent on something that decreases existential risk or advances transhumanist goals, such as a donation to MIRI or the Methuselah Foundation.
Plastination is one technology you might be interested in.
How do we determine our “hyper-hyper-hyper-hyper-hyperpriors”? Before updating our priors however many times, is there any way to calculate the probability of something before we have any data to support any conclusion?
Also, how do we know when the probability surpasses 50%? Couldn't the prior probability of the sun rising tomorrow be astronomically small, so that Bayesian updates on the evidence that the sun has risen every day so far merely make the probability slightly less astronomically small?
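In odds form, Bayesian updating multiplies the prior odds by a likelihood ratio for each observation, so even an astronomically small prior is overwhelmed after surprisingly few updates. A sketch with made-up numbers (the 10⁻¹² prior and the likelihood ratio of 2 are illustrative assumptions, not derived from anything):

```python
import math

# Hypothetical numbers: prior odds of 10**-12 for "the sun rises each
# day", and each observed sunrise assumed twice as likely under that
# hypothesis as under its negation (likelihood ratio 2).
prior_odds = 1e-12
likelihood_ratio = 2.0

# Bayes in odds form: posterior_odds = prior_odds * LR**n.
# The posterior probability passes 50% exactly when the odds pass 1.
n_needed = math.ceil(math.log(1 / prior_odds, likelihood_ratio))
print(n_needed)  # 40 observations suffice

posterior_odds = prior_odds * likelihood_ratio ** n_needed
posterior_prob = posterior_odds / (1 + posterior_odds)
print(posterior_prob > 0.5)  # True
```

The point of the sketch: evidence compounds multiplicatively while the prior is a fixed constant, so "astronomically small" loses to a few dozen doublings.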
With transhumanist technology, what is the probability that any human alive today will live forever, and not just thousands or millions of years? I assume the probability is extremely small, but non-zero.
How is Solomonoff induction, and by extension Occam's Razor, justified in the first place? Why are hypotheses with higher Kolmogorov complexity less likely to be true than those with lower Kolmogorov complexity? If it is justified by the fact that it has "worked" in the past, doesn't that require Solomonoff induction to justify the claim that it has worked (since you need to verify that your memories are true), and thus involve circular reasoning?
But in the infinite series of possibilities summing to 1, why should the hypotheses with the highest probability be the ones with the lowest complexity, rather than each consecutive hypothesis having an arbitrary complexity level?
But why should the probability for higher-complexity hypotheses be any lower?
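Part of the standard answer is structural: any proper prior over countably many hypotheses must assign probabilities that vanish along some enumeration, or the total would exceed 1; the complexity prior simply picks "shorter programs first" as that enumeration. A sketch, using the illustrative weights 2^-(k+1) per complexity level k:

```python
from fractions import Fraction

# Sketch: a prior that halves with each additional bit of description
# length. Hypothetical weights 2**-(k+1) for complexity k = 0, 1, 2, ...
# form a geometric series whose total is exactly 1.
weights = [Fraction(1, 2 ** (k + 1)) for k in range(50)]
partial = sum(weights)
print(partial)  # 1 - 2**-50, converging to 1

# By contrast, assigning any fixed probability p > 0 to infinitely
# many hypotheses of "arbitrary" complexity would sum to infinity,
# so no such prior can be normalized.
```

This shows why probabilities must decrease *somehow*; the separate, harder question the comment raises, of why they should decrease with Kolmogorov complexity specifically rather than along some other ordering, is not settled by the arithmetic.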
How do you even define free will? It seems like a poorly defined concept in general, and is more or less meaningless. The notion of free will that people talk about seems to be little more than a glorified form of determinism and randomness.
This, and find better ways to optimize power efficiency.
What is the general consensus on LessWrong regarding Race Realism?
Has anyone here read Industrial Society And Its Future (the Unabomber manifesto), and if so, what are your thoughts on it?
I remember a while ago Eliezer wrote an article titled Bayesians vs. Barbarians. In it, he describes how, in a conflict between rationalists and barbarians (or, to use your analogy, Athenians and Spartans), the barbarians/Spartans will likely win. In the world today, low-IQ individuals are reproducing at far higher rates than high-IQ individuals, so they are "winning" in an evolutionary sense. Having universalist, open, trusting values is not necessarily a bad thing in itself, but it should not be taken to such an extent that this altruism becomes pathological and leads to the protracted suicide of the rationalist community.
Ted Kaczynski wrote something similar to this in Industrial Society And Its Future, albeit with different motivations.
Revolutionaries should have as many children as they can. There is strong scientific evidence that social attitudes are to a significant extent inherited. No one suggests that a social attitude is a direct outcome of a person’s genetic constitution, but it appears that personality traits are partly inherited and that certain personality traits tend, within the context of our society, to make a person more likely to hold this or that social attitude. Objections to these findings have been raised, but the objections are feeble and seem to be ideologically motivated. In any event, no one denies that children tend on the average to hold social attitudes similar to those of their parents. From our point of view it doesn’t matter all that much whether the attitudes are passed on genetically or through childhood training. In either case they ARE passed on.
The trouble is that many of the people who are inclined to rebel against the industrial system are also concerned about the population problems, hence they are apt to have few or no children. In this way they may be handing the world over to the sort of people who support or at least accept the industrial system. To insure the strength of the next generation of revolutionaries the present generation should reproduce itself abundantly. In doing so they will be worsening the population problem only slightly. And the important problem is to get rid of the industrial system, because once the industrial system is gone the world’s population necessarily will decrease (see paragraph 167); whereas, if the industrial system survives, it will continue developing new techniques of food production that may enable the world’s population to keep increasing almost indefinitely.
Avoiding cryonics because of possible worse than death outcomes sounds like a textbook case of loss aversion.
I’m surprised that there aren’t any active YouTube channels with LessWrong-esque content, or at least none that I am aware of.
How complicated would reversing entropy via nanorobots be, should such a process become possible in the future? Would each nanorobot essentially need to contain a tiny computer in order to manipulate molecules such as ATP?