Whenever I’ve seen people invoking Inference to the Best Explanation to justify a conclusion (as opposed to philosophising about the logic of argument), they have given no reason why their preferred explanation is the Best, they have just pronounced it so. A Bayesian reasoner can (or should be able to) show their work, but the ItoBE reasoner has no work to show.
Richard_Kennaway
Everyone works for money. Only one person, Mr. Purchaser, spends his money and everyone else just saves theirs forever. Suddenly money got a lot more powerful. Mr. Purchaser has literally all the money in the world and has pretty much infinite power, even if he only has $100k. He can make anyone do whatever he wants by paying them a penny, which is now worth about a million dollars[4].
I don’t understand this. What use is money that is never spent? Why would Mr. Purchaser’s penny induce me to do anything for him?
[4] Not that they’ll ever spend the penny on anything, since they’re one of the people who never spends any money, but let’s pretend they still have motivation to earn it.
This is a pretence too far. The imaginary world you are describing is incoherent.
Money is the slack in the system of trade that saves us from having to exchange only by barter or informal systems of credit — doing each other good turns, in your terminology. In Adam Smith’s words, “It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own self-interest.”
If I imagine a world without any money, but where everyone is somehow able to coordinate and act rationally for the good of all...
If I imagine that, my thoughts run to hive minds in which there are no people as we know them today.
Adam Smith continues: “We address ourselves, not to their humanity but to their self-love, and never talk to them of our own necessities but of their advantages.” A sentence that could have been penned by Dale Carnegie.
I took the word “Metaverse” to mean virtual worlds, but perhaps this is narrower than the OP intended. A dating app where the users are there to find people to physically meet is not what I would call a virtual world. Broaden it that far and you might as well call LessWrong part of “the Metaverse”.
But I am curious about these dating apps. What manner of virtual goods are these? Can you do anything with them other than showing that you bought them? That hasn’t turned out too well for NFTs, “a complicated way of buying nothing” as Penny Arcade put it.
There is a metaverse already. It’s called Second Life and has been around for more than 20 years. Never huge, but never going away. It has a marketplace of virtual goods that residents of Second Life have created. The market deals in “Linden dollars”, which can be both bought with real dollars and sold for real dollars.
But look at a few random prices at that Marketplace link. The exchange rate is stable at about L$250 = $1. A skirt for L$399 = $1.60. A massage table (with built-in animations) for L$1698 = $7. (Three times that for the version with built-in sex animations.) A tattoo for L$299 = $1.20. The most expensive car currently on the marketplace is L$50,000 = $200, but there are also plenty selling for under $1.
There are only a very few people who have made a living from selling things in Second Life. The number of spectacular successes might be countable on the fingers of one finger.
While I love Second Life, I do not see an economy of this sort growing to become a substantial part of the total economy. What, after all, is the value of these digital goods? They are decoration for an immersive social space, and game assets for recreational use within that space. They do have value, but the marketplace shows what that value is: $200 for a top-end virtual car.
So, is your conclusion (“the place where one stops writing”) that there is an unsolved hard problem, there is a solved hard problem, or there is no hard problem?
I happen to have been looking at some ETFs based on AI-related companies, and all of them showed the same pattern: a doubling of value from inception (2018 or 2019) to early 2022, then losing a lot of that over the next year, and from then to date recovering to about their former peak. Investing in any of them two years ago would have been literally a waste of time. I did not see this pattern in a few non-AI-related indexes. Are there any events between then and now to account for this, or is it just random fluctuation?
Is this what is happening?
1. The moderators invent a rule that sounds reasonable, based on how much karma over what period of time from whom.
2. The rule turns out to produce too many bans.
3. The moderators review the bans, but are anchored by the fact that the rule banned them.
4. Go to 1.
I want to push back a little bit on this simulation being not valuable—taking simple linear models is a good first step, and I’ve often been surprised by how linear things in the real world often are. That said, I chose linear models because they were fairly easy to implement, and wanted to find an answer quickly.
I was thinking more of the random graphs. It’s a bit like asking the question, what proportion of yes/no questions have the answer “yes”?
And, just to check: Your second and third example are both examples of correlation without causation, right?
Yes, I broadened the topic slightly.
I don’t believe that the generating process for your simulation resembles that in the real world. If it doesn’t, I don’t see the value in such a simulation.
For an analysis of some situations where unmeasurably small correlations are associated with strong causal influences and high correlations (±0.99) are associated with the absence of direct causal links, see my paper “When causation does not imply correlation: robust violations of the Faithfulness axiom” (arXiv, in book). The situations where this happens are whenever control systems are present, and they are always present in biological and social systems.
Here are three further examples of how to get non-causal correlations and causal non-correlations. They all result from taking correlations between time series. People who work with time series data generally know about these pitfalls, but people who don’t may not be aware of how easy it is to see mirages.
The first is the case of a bounded function and its integral. These have zero correlation with each other in any interval in which either of the two takes the same value at the beginning and the end. (The proof is simple and can be found in the paper of mine I cited.) For example, this is the relation between the current through a capacitor and the voltage across it. Set up a circuit in which you can turn a knob to change the voltage, and you will see the current vary according to how you twiddle the knob. Voltage is causing current. Set up a different circuit where a knob sets the current and you can use the current to cause the voltage. Over any interval in which the operating knob begins and ends in the same position, the correlation will be zero. People who deal with time series have techniques for detecting and removing integrations from the data.
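A quick numerical illustration of the zero-correlation claim (a sketch of my own, not from the paper): take sin(t) as the bounded function and 1 − cos(t) as its integral. Over the full period the integral returns to its starting value and the correlation vanishes; over a quarter period it does not, and the correlation is large.

```python
import numpy as np

# Sketch: a bounded signal f and its integral F are uncorrelated over any
# interval on which F returns to its starting value. Here f(t) = sin(t)
# on [0, 2*pi]; its integral F(t) = 1 - cos(t) starts and ends at 0 over
# the full period, but not over the first quarter period.
t = np.linspace(0, 2 * np.pi, 100_001)
f = np.sin(t)
F = 1 - np.cos(t)  # integral of f, with F(0) = F(2*pi) = 0

r_full = np.corrcoef(f, F)[0, 1]                       # near zero
r_quarter = np.corrcoef(f[:25_001], F[:25_001])[0, 1]  # clearly nonzero
print(round(r_full, 4), round(r_quarter, 4))
```

The same twiddling of the knob gives zero or high correlation depending only on where the interval of observation is cut.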
The second is the correlation between two time series that both show a trend over time. This can produce arbitrarily high correlations between things that have nothing to do with each other, and therefore such a trend is not evidence of causation, even if you have a story to tell about how the two things are related. You always have to detrend the data first.
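A small demonstration (the series and coefficients here are my own arbitrary choices): two independent noisy series that share nothing but an upward trend correlate strongly until the trend is removed, here by first-differencing.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
t = np.arange(n)

# Two independent noisy series, each with its own linear upward trend.
a = 0.05 * t + rng.normal(0, 1, n)
b = 0.03 * t + rng.normal(0, 1, n)

r_raw = np.corrcoef(a, b)[0, 1]                          # spuriously high
r_detrended = np.corrcoef(np.diff(a), np.diff(b))[0, 1]  # near zero
print(round(r_raw, 2), round(r_detrended, 2))
```

Differencing is one way to detrend; regressing each series on time and correlating the residuals is another.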
The third is the curious fact that if you take two independent paths of a Wiener process (one-dimensional Brownian motion), then no matter how frequently you sample them over however long a period of time, the distribution of the correlation coefficient remains very broad. Its expected value is zero, because the processes are independent and trend-free, but the autocorrelation of Brownian motion drastically reduces the effective sample size to about 5.5. Yes, even if you take a million samples from the two paths, it doesn’t help. The paths themselves, never mind sampling from them, can have high correlation, easily as extreme as ±0.8. The phenomenon was noted in 1926, and a mathematical treatment given in “Yule’s ‘Nonsense Correlation’ Solved!” (arXiv, journal). The figure of 5.5 comes from my own simulation of the process.
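The effect is easy to reproduce (a sketch; the seed, path length, and number of repetitions are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)

def nonsense_corr(steps=2000):
    # Correlation between two independent Brownian (random-walk) paths.
    x = np.cumsum(rng.normal(size=steps))
    y = np.cumsum(rng.normal(size=steps))
    return np.corrcoef(x, y)[0, 1]

rs = np.array([nonsense_corr() for _ in range(1000)])
# The mean is near zero, but the spread is enormous and does not shrink
# as you sample each path more finely.
print(round(rs.mean(), 2), round(rs.std(), 2))
print(round(np.mean(np.abs(rs) > 0.5), 2))  # |r| > 0.5 is common
```

Increasing `steps` makes no difference to the spread, which is the point: the paths themselves are the sample of effective size ~5.5, not the points on them.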
I kept expecting a punchline that never came. Is this an April Fool?
I don’t know, I seem to have misread it as “Four-fifths know sun revolves around earth”.
If the subtitle of the report is as quoted, the report writers are even wronger than that.
Exploring this on the web, I turned up a couple of related Substacks: Chris Langan’s Ultimate Reality and TELEOLOGIC: CTMU Teleologic Living. The latter isn’t just Chris Langan, a Dr Gina Langan is also involved. A lot of it requires a paid subscription, which for me would come lower in priority than all the definitely worthwhile blogs I also don’t feel like paying for.
Warning: there’s a lot of conspiracy stuff there as well (Covid, “Global Occupation Government”, etc.).
Perhaps this 4-hour interview on “IQ, Free Will, Psychedelics, CTMU, & God” may give some further sense of his thinking.
Googling “CTMU Core Affirmations” turns up a rich vein of … something, including the CTMU Radio YouTube channel.
Here’s the general calculation.
Take any probability distribution defined on the set of all values $(x, y)$ where $x$ and $y$ are non-negative reals and $y = 2x$. It can be discrete, continuous, or a mixture.

Let $f$ be the marginal distribution over $x$. This method of defining the distribution avoids the distinction between choosing $x$ and then doubling it, or choosing $y$ and then halving it, or any other method of choosing such that $y = 2x$.

Assume that $f$ has an expected value, denoted by $\mu$.

The expected value of switching when the amount in the first envelope is in the range $[x, x+h]$ consists of two parts:

(i) The first envelope contains the smaller amount. This has probability $\frac{1}{2}f(x)h$. The division by 2 comes from the 50% chance of choosing the envelope with the smaller amount.

(ii) The first envelope contains the larger amount. This has probability $\frac{1}{4}f(x/2)h$. The extra factor of 2 comes from the fact that when the contents are in an interval of length $h$, half of that (the amount chosen by the envelope-filler) is in an interval of length $h/2$.

In the two cases the gain from switching is respectively $x$ or $-x/2$.

The expected gain given the contents is therefore $\dfrac{\frac{1}{2}xf(x)h - \frac{1}{8}xf(x/2)h}{\frac{1}{2}f(x)h + \frac{1}{4}f(x/2)h}$.

Multiply this by the probability $\frac{1}{2}f(x)h + \frac{1}{4}f(x/2)h$ that the contents lie in that range, divide by $h$, let $h$ tend to 0 (eliminating the higher-order terms in $h$), and integrate over the real line:

$$\int_0^\infty \frac{1}{2}xf(x)\,dx - \int_0^\infty \frac{1}{8}xf(x/2)\,dx$$

The first integral is $\frac{1}{2}\mu$. In the second, substitute $u = x/2$ (therefore $x = 2u$ and $dx = 2\,du$), giving $\int_0^\infty \frac{1}{2}uf(u)\,du = \frac{1}{2}\mu$. The two integrals cancel.
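The cancellation can also be checked by simulation (a sketch; the exponential distribution is my arbitrary choice of an $f$ with finite mean):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# The smaller amount x is drawn from an exponential f with mean 10, so
# the envelopes hold (x, 2x), and we pick one of the two at random.
x = rng.exponential(scale=10.0, size=n)
first_is_smaller = rng.random(n) < 0.5
first = np.where(first_is_smaller, x, 2 * x)
other = np.where(first_is_smaller, 2 * x, x)
gain = other - first  # +x when we held the smaller, -x when the larger

print(round(first.mean(), 2), round(other.mean(), 2))  # both near 15
print(round(gain.mean(), 3))                           # near 0
```

Both envelopes have the same expected contents ($1.5\mu$), and the expected gain from always switching is zero.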
And causally-connected panpsychism is just materialism where we haven’t discovered all the laws of physics yet.
Materialism, specifically applied to consciousness, is also just materialism where we haven’t discovered all the laws of physics yet — specifically, those that constitute the sought-for materialist explanation of consciousness.
It is the same as how “atoms!” is not an explanation of everyday phenomena such as fire. Knowing what specific atoms are involved, what they are doing and why, and how that gives rise to our observations of fire, that is an explanation.
Without that real explanation, “atoms!” or “materialism!”, is just a label plastered over our ignorance.
To follow a maxim of Edwin Jaynes, when a paradox arises in matters of probability, one must consider the generating process from which the probabilities were derived.
How does the envelope-filler choose the amount to put in either envelope? He cannot pick an “arbitrary” real number. Almost all real numbers are so gigantic as to be beyond human comprehension. Let us suppose that he has a probability distribution over the non-negative reals from which he draws a single value $x$, and puts $x$ into one envelope and $2x$ into the other. (One could also imagine that he puts $x/2$ into the other, or tosses a coin to decide between $2x$ and $x/2$, but I’ll stick with this method.)

Any such probability distribution must tail off to zero as $x$ becomes large. Suppose the envelope-chooser is allowed to open the first envelope, and then is allowed to switch to the other one if they think it’s worth switching. The larger the value they find in the first envelope, the less likely it is that the other envelope has twice as much. Similarly, if they find a very small value in the first envelope (i.e. well into the lower tail of the distribution), then they can expect to profit by switching.
In the original version, of course, they do not see what is in the envelope before deciding whether to switch. So we must consider the expected value of switching conditional on the value in the first envelope, summed or integrated over the probability distribution of what is in that envelope.
I shall work this through with an example probability distribution. Suppose that the probability of the chosen value $x$ being $2^n$ is $2/3^n$ for all positive integers $n$, and no other value of $x$ is possible. (Taking $P(x = 2^n) = 2^{-n}$ would be simpler, but that distribution has infinite expected value, which introduces its own paradoxes.)
I shall list all the possible ways the game can play out.
1. $2 in the envelope in your hand, $4 in the other. Probability 2/3 for selecting the value x = 2, and 1/2 for picking up the envelope containing $2, so 1/3 in all. Value of switching is +$2, so the contribution of this possibility to the expected value of switching is +2/3.

2. $4 in your hand, $2 in the other. Probability 1/3, value of switching −$2, expectation −2/3.

3. $4 in your hand, $8 in the other. Probability 1/9, value of switching +$4, expectation +4/9.

4. $8 in your hand, $4 in the other. Probability 1/9, value of switching −$4, expectation −4/9.

And so on. Now, we can pair these up as 1 with 2, 3 with 4, 5 with 6, etc. and see that the expected value of switching without knowledge of the first envelope’s contents is zero. But that is just the symmetry argument against switching. To dissolve the paradoxical argument that says that you should always switch, we pair up the outcomes according to the value in the first envelope.

If it has $2, the value of switching is 2 × 1/3 = +2/3.

If it has $4, the value is −2 × 1/3 + 4 × 1/9 = −2/9.

If it has $8, the value is −4 × 1/9 + 8 × 1/27 = −4/27.

The sum of all of the negative terms is −2/3, cancelling out the positive one. The expected value is zero.

The general term in this sum is, for n ≥ 1,

−2^n/3^n + 2^(n+1)/3^(n+1) = −(1/3)(2/3)^n, which is negative. The value conditional on the first envelope containing 2^(n+1) is just this divided by the probability of that amount, 4/3^(n+1), which leaves it still negative. If we write p for the conditional probability that the first envelope holds the smaller amount and q for the probability that it holds the larger, this works out to p = 1/4 and q = 3/4. The expected value given contents C = 2^(n+1) is then pC − qC/2 = C/4 − 3C/8 = −C/8. Observe how this weights the negative value three times as heavily as the positive value, but the positive value is only twice as large.

Compare with the argument for switching, which instead computes the expected value as (1/2)(2C) + (1/2)(C/2) − C = C/4, which is positive. It is neglect of the distribution from which the amounts were drawn that leads to this wrong calculation.
I worked this through for just one distribution, but I expect that a general proof can be given, at least for all distributions for which the expected value of $x$ is finite.
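For what it’s worth, the bookkeeping above can be checked in exact rational arithmetic (a sketch; the function names are mine):

```python
from fractions import Fraction

def f(n):
    # P(the drawn value x equals 2^n) = 2/3^n, for n = 1, 2, 3, ...
    return Fraction(2, 3 ** n)

def contrib(m):
    # Probability-weighted gain from switching when the first envelope
    # contains C = 2^m dollars.
    c = Fraction(0)
    if m >= 2:
        # C is the larger amount: the drawn value was 2^(m-1), gain -2^(m-1).
        c += Fraction(1, 2) * f(m - 1) * (-(2 ** (m - 1)))
    # C is the smaller amount: the drawn value was 2^m, gain +2^m.
    c += Fraction(1, 2) * f(m) * (2 ** m)
    return c

terms = [contrib(m) for m in range(1, 80)]
print(terms[0], terms[1], terms[2])  # 2/3, -2/9, -4/27
print(float(sum(terms)))             # tends to 0 as terms are added
```

The partial sums converge to zero, with the single positive term for $2 cancelled by the infinite tail of negative ones.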
Thanks.
Wow! I never came across that before.
Brace yourself. Whatever they say will likely be painful to hear.
But will it be true?
Even given the earnest request, are they going to take it all at face value and give you the information you are asking for as best they can (and how good is their best, anyway?), or are they going to either be nicey-nicey, or use it as an open goal to offload their own “stuff” into? I am seeing a lot of ways this could go wrong. It would not be good to acknowledge and accept ugly falsehoods.
Something something Ask vs. Guess culture.
OP quoting Bostrom:
I have some sympathy with that technologically advanced civilisation. I mean, what would you rather they do? Intervene to remould humans into their preferred form? Or only if their preferred form just happened to agree with yours?