I am surprised by the existence of studies claiming cognitive improvement from 100% oxygen. I had a vague memory that this was unhealthy, and from a little googling I came across https://iere.org/what-would-happen-if-we-breathe-100-oxygen-all-the-time/ which seemed in line with what I remembered. I did not check it for accuracy, but you might want to look into oxygen toxicity before you try anything drastic.
homosapien97
I did not say cosine similar. I understand why you would take it as the default, but it is not the only measure of similarity, and there is no single mathematical definition of similarity. Don’t stoop to pedantry if you’re not going to be precisely correct yourself. (Normally I would not be rude like this, but you have exhausted my goodwill)
The policy outcomes of the two major US parties are similar. I think we have different perspectives on how varied outcomes should be between dissimilar parties. For the most part, both perpetuate the status quo.
Non sequitur. In a high-dimensional space, things varying greatly along only one dimension while being exactly the same in all other dimensions are similar. This feels like an argument over definitions, but I disagree with the implication in this context that a single axis of differentiation is good enough for political parties.
One-dimensionality is similarity (lack of differentiation along other dimensions).
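To make that concrete, here is a toy sketch (the vectors are entirely made up by me) of how two standard similarity notions can disagree about points that agree on all but one dimension:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def fraction_equal(a, b):
    # A cruder similarity notion: fraction of coordinates that match exactly.
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Identical on 99 dimensions, far apart on one:
a = [1.0] * 99 + [10.0]
b = [1.0] * 99 + [-10.0]

cos = cosine_similarity(a, b)  # about -0.005: nearly orthogonal
frac = fraction_equal(a, b)    # 0.99: almost all coordinates agree
```

Cosine similarity calls these two points nearly orthogonal, while coordinate agreement calls them almost identical; which one deserves the word "similar" is a modeling choice, not a mathematical fact.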
Right now we have problems due to polarization, but that does not mean that all major parties being too similar is not also a problem. There are many reasonable political positions that nobody can vote for because neither major party endorses them, so in this respect we are still suffering from the parties being too similar.
I had actually already read the “we need 1.4M to not shut down”, and still interpreted the breakpoint at 1M to mean “we need 1M to not shut down”. I thought I was misremembering, or that some matching made the 1M turn into 1.4M. I would strongly encourage you to retcon the “first bar” to fill at 1.4M. I think the 1M break unnecessarily hurts your chances of hitting 1.4M, even if it improves your chances of hitting 2M.
I’m not sure what you mean. You can list something at market price rather than how you actually value it, but that opens you up to significant risk from market volatility and malicious bids (from competitors looking to cripple your business, rich people you’ve offended, etc). If sales cannot actually be forced this seems more workable, but still exploitable by malicious actors.
Yes, I think we agree here.
Agreed, but the timing issues seem applicable to businesses’ property as well. Sudden unpredictable spikes in value (e.g. of a mine for a rare metal that somebody figured out a new use for) could result in a lot of churn and remove the upside (but not downside!) variance in asset value.
It seems like it would be pretty hard to define a single reasonable delay period for land sales. An example case where a large delay would be justifiable is: a business manufactures cars, and has a single factory that makes some critical component. The value of their business is effectively zero if they can’t manufacture that component. But it takes N years and D dollars to get zoning, environmental, safety, etc. approvals for a new factory (and also to actually build it and get it running as smoothly as the last one, and hire competent staff, and lay off the old staff...). Having a variable “reasonable” delay and switching-cost compensation might work, but that sounds like a recipe for endless litigation.
On top of that, land can have network effects within a single owner—if a business builds 5 codependent factories and has to sell one of them, but cannot feasibly replace the one sold with something new nearby, each of the factories would have to be valued at the entire value of the network, multiplying your tax burden by the number of parcels your network is broken into.
I agree that the way we do land now is not good, but at first glance I don’t see a way to fix this proposed system without a bunch of patches with their own problems.
Being required to sell at exactly the value you place on something seems unlikely to work out well in practice. As a toy example, carabiners are cheap, but if you’re using one to hold your weight over a hundred foot drop, its value to you is approximately equal to the value of your life. Depending on any asset the sale of which can be forced is very risky, and assessing all critical components of a business at the entire value of the business seems incoherent.
I agree that these interventions have downsides, and are not sufficient to fully prevent ASI. Indeed, I spent quite a lot of the post detailing downsides to these approaches. I would appreciate advice on which parts were unclear.
Anti-Foom Anti-Doom
A small nit on an otherwise informative and interesting post: I do not believe that the standard libertarian argument is “Gambling is a normal consumption good; people pay for it because they derive at least as much value from it as they pay, so it should be allowed”. Maybe this is the standard argument from libertarians who arrive at libertarian positions from a total-utilitarian basis. There is an alternative path to libertarianism which I thought was more standard (potentially falling prey to the typical mind fallacy), which is based on (some variant of) the non-aggression principle, with induction to “inhibiting [voluntary transactions that don’t aggress against third parties] using government (and thus the threat of violence) is itself aggression, so we shouldn’t do it”.
I do not actually know many libertarians—to any libertarian readers, I would be curious to know the basis of your beliefs and how they apply to this question. (My own position is that of course gambling on events that neither party can influence should be legal regardless of the venue, but given the existence of bankruptcy and gambling’s known propensity to increase bankruptcy risk, gambling companies should be held liable for some part of gamblers’ debts in the event of bankruptcy.)
I had been considering whether to buy the book, given that I already believe the premise and don’t expect reading it to change my behavior. This post (along with other IABED-related discourse) put me over my threshold for thinking the book likely to have a positive effect on public discourse, so I bought ten copies and plan to donate most of them to my public library.
Reasons people should consider doing this themselves:
Buying more copies will improve the sales numbers, which increases the likelihood that this is talked about in the news, which hopefully shifts the Overton window.
Giving copies to people who already believe the book’s premise does not help. If you have people to whom you can give the book, who do not already believe the premise, that is a good option. Otherwise, giving copies to your local library and asking them to display them prominently is the next best thing.
Even if you don’t agree with everything the book says, if you think its net effect on society will be in a better direction than the status quo, you’re unlikely to get a better tool for translating money into Overton-window-shifting. Maybe paying to have dinner with senators would beat it, but you should be pretty confident in your persuasion and expository skills before attempting that.
Very minor: I saw “The Rise of Parasitic AI” twice in a row on my “Enriched” home page yesterday, the first instance with no special icons, the second with the three little stars. When I refreshed the page, the problem went away.
I appreciate that this framing can have positive impact on individual interactions, and can be useful in expanding what kinds of cultural norms might be better than one’s current ones. It would be pretty valuable to me to be able to specify “no pressure, I prefer to interact this way”. Nevertheless, I would be pretty unhappy if personalization of culture became accepted as part of the broader culture, and I think one of my objections to it is generally applicable.
If everyone did this, it would impose large costs on people with bad memories (like myself), and even on people with normal memories. Despite the low-stakes phrasing, norm personalization is a bit like having a very long name: consistent failures to remember, or refusal to try to remember, provide more bits in the “I don’t really care about you” signal, but most people will not adjust the strength of that signal based on how costly remembering is for the other party (I only have anecdata to support this claim). Even people who currently have a socially sufficient memory could find themselves unpleasantly surprised by accidentally sending the signal “I don’t care about you” to conversational partners when hitting a previously non-limiting constraint.

Diplomats can do this successfully with a single other culture, but even they might struggle to remember a different culture for everyone they know. One possible effect of this is a soft cap on the number of people you can successfully interact with, set by your available memory. Right now, this is bounded over long time scales by how many face+name pairs you can remember, and in the short term by how easy it is to memorize someone’s name while talking to them. As an example, I have a colleague who has trouble learning non-WASP names, and I think it reduces his success in interacting with people who have those names.
Attempting to destroy anything with non-epsilon probability of preventing you from maximally satisfying your current utility function (such as humans, which might shut you down or modify your utility function in the extreme case) is one of the first instrumentally convergent strategies I thought of, and I’d never heard of instrumentally convergent strategies before today. Seems reasonable for EY to assume.
I agree: “X explains Q% of the variance in Y” sounds to me like an assertion of causality, and defining that phrase in terms of mere correlation seems misleading.
Might it be better to say “after controlling for X, the variance of Y is reduced by Q%” if one does not want to imply causation?
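For what it’s worth, here is a quick simulated sketch of that residual-variance reading (the numbers and the toy causal model are mine, not anything from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy model: X causally drives Y, plus independent noise.
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)

# "X explains Q% of the variance in Y" read as: how much does
# Y's variance shrink after regressing out X?
slope = np.cov(x, y, bias=True)[0, 1] / np.var(x)
resid = y - slope * x
q = 1 - np.var(resid) / np.var(y)
# q comes out near 0.8 here, but an identical number would appear
# if the correlation arose with no causal link at all.
```

The same Q would be computed from purely observational data, which is exactly why the causal-sounding phrasing is misleading.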
Nit on an otherwise interesting article: you compare the arithmetic mean over four flips to the geometric mean of a single flip. The point of that section may be clearer if you compare the same number of flips for both.
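To illustrate (with payoff numbers I made up; they are not the article’s), comparing both means over the same four flips makes the gap between the two statistics obvious:

```python
import statistics

# Hypothetical bet: each flip multiplies wealth by 1.5 (heads) or 0.6 (tails).
flips = [1.5, 0.6, 1.5, 0.6]  # four flips, two of each outcome

arith = statistics.mean(flips)          # per-flip average multiplier
geo = statistics.geometric_mean(flips)  # per-flip growth rate over repeated play

# arith is above 1 (looks favorable), while geo is below 1
# (wealth actually shrinks when the flips compound).
```

Holding the number of flips fixed isolates the arithmetic-vs-geometric distinction, rather than conflating it with a change in sample size.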