“AI Watch.”
Richard_Kennaway
AI Corporation Watch | AI Mega-Corp Watch | AI Company Watch | AI Industry Watch | AI Firm Watch | AI Behemoth Watch | AI Colossus Watch | AI Juggernaut Watch | AI Future Watch
These are either tendentious (“Juggernaut”) or unnecessarily specific to the present moment (“Mega-Corp”).
How about simply “AI Watch”?
I can only agree, since I’ve been saying for a long time that the current rationalist movement is only the latest iteration of many.
I’d agree with that, except for the word “only”. It is no criticism of the present, to observe that it has a history.
Is it possible that ethics-motivated laws will strange generative AI
“Strangle”?
This means C2 should be 8.4µF, but I didn’t have one so I used a 4.7µF and 3.3µF in series for a total of 8µF.
You want those in parallel for them to add. The series combination (which I see in the breadboard pic, not just the text) is only 2µF, making your high-pass frequency a little over 10kHz.
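A quick check of the arithmetic. The load resistance is not stated in the thread; 8Ω (a typical speaker load) is assumed here because it reproduces the “little over 10kHz” figure:

```python
import math

# The two capacitors from the thread, in microfarads.
c1, c2 = 4.7, 3.3

parallel = c1 + c2           # capacitances add in parallel: 8.0 uF
series = 1 / (1/c1 + 1/c2)   # series combination: ~1.94 uF, not 8 uF

# High-pass corner frequency f = 1 / (2*pi*R*C).
# R = 8 ohms is an assumption, not from the original post.
R = 8.0
f_series = 1 / (2 * math.pi * R * series * 1e-6)
print(parallel, round(series, 2), round(f_series))  # 8.0 1.94 10262
```

With the intended 8µF in the same formula the corner would sit near 2.5kHz, a factor of four lower.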
Using a discrete hypothesis space avoids big parts of the problem.
Only if there is a “natural” discretisation of the hypothesis space. It’s fine for coin tosses and die rolls, but if the problem itself is continuous, different discretisations will give the same problems that different continuous parameterisations do.
In general, when infinities naturally arise but cause problems, decreeing that everything must be finite does not solve those problems, and introduces problems of its own.
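A minimal sketch of the parameterisation problem: put a “uniform” prior over discrete bins that are uniform in p, versus bins uniform in p², and the same coin-toss data yields different posteriors. The grid sizes and data here are illustrative, not from the original discussion:

```python
import numpy as np

# Two discretisations of a coin's bias p in [0, 1]:
# one with bins uniform in p, one with bins uniform in p**2.
n_bins = 1000
u = (np.arange(n_bins) + 0.5) / n_bins
p_uniform = u            # grid points evenly spaced in p
p_skewed = np.sqrt(u)    # grid points evenly spaced in p**2

heads, tails = 3, 7

def posterior_below_half(grid):
    # Flat prior over bins; the bin placement silently becomes the prior.
    lik = grid**heads * (1 - grid)**tails
    post = lik / lik.sum()
    return post[grid < 0.5].sum()

# Same data, same "uniform prior over bins", different answers.
print(posterior_below_half(p_uniform), posterior_below_half(p_skewed))
```

The first grid approximates a Beta(4,8) posterior, the second a Beta(5,8); the probabilities that the coin is tails-biased differ by several percentage points.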
“Processed” is a political category, not a nutritional one. I suspect that “ultra-processed” was invented because the literal meaning of “processed” was too blatantly at variance with the political job required of it.
What is the measure of goodness? How does one judge what is the “better” explanation? Without an account of that, what is IBE?
OP quoting Bostrom:
Imagine that some technologically advanced civilization arrived on Earth … Imagine they said: “The most important thing is to preserve the ecosystem in its natural splendor. In particular, the predator populations must be preserved: the psychopath killers, the fascist goons, the despotic death squads … What a tragedy if this rich natural diversity were replaced with a monoculture of healthy, happy, well-fed people living in peace and harmony.” … this would be appallingly callous.
I have some sympathy with that technologically advanced civilisation. I mean, what would you rather they do? Intervene to remould humans into their preferred form? Or only if their preferred form just happened to agree with yours?
Whenever I’ve seen people invoking Inference to the Best Explanation to justify a conclusion (as opposed to philosophising about the logic of argument), they have given no reason why their preferred explanation is the Best, they have just pronounced it so. A Bayesian reasoner can (or should be able to) show their work, but the ItoBE reasoner has no work to show.
Everyone works for money. Only one person, Mr. Purchaser, spends his money and everyone else just saves theirs forever. Suddenly money got a lot more powerful. Mr. Purchaser has literally all the money in the world and has pretty much infinite power, even if he only has $100k. He can make anyone do whatever he wants by paying them a penny, which is now worth about a million dollars[4].
I don’t understand this. What use is money that is never spent? Why would Mr. Purchaser’s penny induce me to do anything for him?
[4] Not that they’ll ever spend the penny on anything, since they’re one of the people who never spends any money, but let’s pretend they still have motivation to earn it.
This is a pretence too far. The imaginary world you are describing is incoherent.
Money is the slack in the system of trade that saves us from having to exchange only by barter or informal systems of credit — doing each other good turns, in your terminology. In Adam Smith’s words, “It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own self-interest.”
If I imagine a world without any money, but where everyone is somehow able to coordinate and act rationally for the good of all...
If I imagine that, my thoughts run to hive minds in which there are no people as we know them today.
Adam Smith continues: “We address ourselves, not to their humanity but to their self-love, and never talk to them of our own necessities but of their advantages.” A sentence that could have been penned by Dale Carnegie.
I took the word “Metaverse” to mean virtual worlds, but perhaps this is narrower than the OP intended. A dating app where the users are there to find people to physically meet is not what I would call a virtual world. Broaden it that far and you might as well call LessWrong part of “the Metaverse”.
But I am curious about these dating apps. What manner of virtual goods are these? Can you do anything with them other than showing that you bought them? That hasn’t turned out too well for NFTs, “a complicated way of buying nothing” as Penny Arcade put it.
There is a metaverse already. It’s called Second Life and has been around for more than 20 years. Never huge, but never going away. It has a marketplace of virtual goods that residents of Second Life have created. The market deals in “Linden dollars”, which can be both bought with real dollars and sold for real dollars.
But look at a few random prices at that Marketplace link. The exchange rate is stable at about L$250 = $1. A skirt for L$399 = $1.60. A massage table (with built-in animations) for L$1698 = $7. (Three times that for the version with built-in sex animations.) A tattoo for L$299 = $1.20. The most expensive car currently on the marketplace is L$50,000 = $200, but there are also plenty selling for under $1.
There are only a very few people who have made a living from selling things in Second Life. The number of spectacular successes might be countable on the fingers of one finger.
While I love Second Life, I do not see an economy of this sort growing to become a substantial part of the total economy. What, after all, is the value of these digital goods? They are decoration for an immersive social space, and game assets for recreational use within that space. They do have value, but the marketplace shows what that value is: $200 for a top-end virtual car.
So, is your conclusion (“the place where one stops writing”) that there is an unsolved hard problem, there is a solved hard problem, or there is no hard problem?
I happen to have been looking at some ETFs based on AI-related companies, and all of them showed the same pattern: a doubling of value from inception (2018 or 2019) to early 2022, then losing a lot of that over the next year, and from then to date recovering to about their former peak. Investing in any of them two years ago would have been literally a waste of time. I did not see this pattern in a few non-AI-related indexes. Are there any events between then and now to account for this, or is it just random fluctuation?
Is this what is happening?
1. The moderators invent a rule that sounds reasonable, based on how much karma over what period of time from whom.
2. The rule turns out to produce too many bans.
3. The moderators review the bans, but are anchored by the fact that the rule banned them.
4. Go to 1.
I want to push back a little bit on this simulation not being valuable — taking simple linear models is a good first step, and I’ve often been surprised by how linear things in the real world are. That said, I chose linear models because they were fairly easy to implement and I wanted to find an answer quickly.
I was thinking more of the random graphs. It’s a bit like asking the question, what proportion of yes/no questions have the answer “yes”?
And, just to check: Your second and third example are both examples of correlation without causation, right?
Yes, I broadened the topic slightly.
I don’t believe that the generating process for your simulation resembles that in the real world. If it doesn’t, I don’t see the value in such a simulation.
For an analysis of some situations where unmeasurably small correlations are associated with strong causal influences and high correlations (±0.99) are associated with the absence of direct causal links, see my paper “When causation does not imply correlation: robust violations of the Faithfulness axiom” (arXiv, in book). The situations where this happens are whenever control systems are present, and they are always present in biological and social systems.
Here are three further examples of how to get non-causal correlations and causal non-correlations. They all result from taking correlations between time series. People who work with time series data generally know about these pitfalls, but people who don’t may not be aware of how easy it is to see mirages.
The first is the case of a bounded function and its integral. These have zero correlation with each other over any interval in which either of the two takes the same value at the beginning and the end. (The proof is simple and can be found in the paper of mine I cited.) For example, this is the relation between the current through a capacitor and the voltage across it. Set up a circuit in which you can turn a knob to change the voltage, and you will see the current vary according to how you twiddle the knob: voltage is causing current. Set up a different circuit in which the knob sets the current, and the current causes the voltage. Over any interval in which the knob begins and ends in the same position, the correlation will be zero. People who deal with time series have techniques for detecting and removing integrations from the data.
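The claim is easy to check numerically. Take f(t) = cos t, whose integral is sin t; over one full period the integral returns to its starting value, so the sample correlation vanishes even though f completely drives its integral:

```python
import numpy as np

# f(t) = cos(t) and its integral sin(t) over one full period [0, 2*pi].
# The integral starts and ends at 0, so the correlation should be ~0.
t = np.linspace(0, 2 * np.pi, 10001)
f = np.cos(t)
F = np.sin(t)  # the integral of f
r = np.corrcoef(f, F)[0, 1]
print(r)  # very close to 0
```

Truncate the interval so the integral does not return to its starting value and the correlation reappears.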
The second is the correlation between two time series that both show a trend over time. This can produce arbitrarily high correlations between things that have nothing to do with each other, and therefore such a trend is not evidence of causation, even if you have a story to tell about how the two things are related. You always have to detrend the data first.
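An illustrative simulation of the trend effect, with made-up slopes and noise levels: two series that share nothing but an upward drift correlate strongly, and removing a least-squares linear fit from each makes the correlation collapse.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
t = np.arange(n)

# Two unrelated series that both happen to trend upward.
x = 0.05 * t + rng.normal(0, 1, n)
y = 0.03 * t + rng.normal(0, 1, n)
r_raw = np.corrcoef(x, y)[0, 1]  # spuriously high, driven by the shared trend

# Detrend: subtract the least-squares linear fit from each series.
detrend = lambda s: s - np.polyval(np.polyfit(t, s, 1), t)
r_detrended = np.corrcoef(detrend(x), detrend(y))[0, 1]  # near zero

print(round(r_raw, 2), round(r_detrended, 2))
```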
The third is the curious fact that if you take two independent paths of a Wiener process (one-dimensional Brownian motion), then no matter how frequently you sample them over however long a period of time, the distribution of the correlation coefficient remains very broad. Its expected value is zero, because the processes are independent and trend-free, but the autocorrelation of Brownian motion drastically reduces the effective sample size to about 5.5. Yes, even if you take a million samples from the two paths, it doesn’t help. The paths themselves, never mind sampling from them, can have high correlation, easily as extreme as ±0.8. The phenomenon was noted in 1926, and a mathematical treatment given in “Yule’s ‘Nonsense Correlation’ Solved!” (arXiv, journal). The figure of 5.5 comes from my own simulation of the process.
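A quick simulation of the effect (path length, trial count, and the ±0.5 threshold are arbitrary choices, not from the cited papers): a large fraction of independent random-walk pairs show correlations that would look wildly significant if the samples were independent.

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, n_trials = 1000, 2000

extreme = 0
for _ in range(n_trials):
    # Two independent discrete approximations to Wiener paths.
    a = np.cumsum(rng.normal(size=n_steps))
    b = np.cumsum(rng.normal(size=n_steps))
    r = np.corrcoef(a, b)[0, 1]
    extreme += abs(r) > 0.5

frac = extreme / n_trials
print(frac)  # a sizeable fraction, despite the paths being independent
```

Increasing `n_steps` does not narrow the distribution: the autocorrelation of the paths keeps the effective sample size small no matter how finely you sample.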
But does the probability decrease fast enough?