In my opinion, the risk analysis here is fundamentally flawed. Here’s my take on the two main SETI scenarios proposed in the OP:

Automatic disclosure SETI—all potential messages are disclosed to the public pre-analysis. This is dangerous if it is possible to send EDM (Extremely Dangerous Messages—world-exploding/world-hacking), and plausible to expect they would be sent.
Committee vetting SETI—all potential messages are reviewed by a committee of experts, who have the option of unilaterally concealing information they deem to be dangerous.
The argument in the OP hinges on portraying the first scenario as risky, with the second scenario motivated by avoiding that risk. But the risk to be avoided there is purely theoretical; there’s no concrete basis for EDM (obviously, if smart people think there can be/should be a concrete basis for them, I’d love to see it fleshed out).
But the second scenario has a much more plausible risk! Conditional on both scenarios eventually receiving alien messages, the second scenario could still be dangerous even if EDM aren’t real. By handling alien messages with unilateral secrecy, you’re creating a situation where normal human incentives for wealth, personal aggrandizement, or even altruistic principles could lead a small, insular group to try to seize power using alien technology. The main assumption required for this risk to be a factor is that aliens sending us messages could have significantly superior technology. This seems more plausible than the existence of EDM, which is, after all, a far stronger version of essentially the same claim.
Some people might even see the ability to seize power with alien tech as a feature. But I think this is an underdiscussed and essential aspect of the analysis of public-disclosure SETI vs. secret-committee SETI. To my mind, it dominates the risk of EDM until there’s a basis for claiming that EDM are real.
Strongly upvoted, I think that the point about emotionally charged memeplexes distorting your view of the world is very valuable.
That does clarify where you’re coming from. I made my comment because it seems to me that it would be a shame for people to fall into one of the more obvious attractors for reasoning within EA about the SBF situation, e.g., an attractor labelled something like “SBF’s actions were not part of EA because EA doesn’t do those Bad Things”.
Which is basically on the greatest hits list for how (not necessarily centrally unified) groups of humans have defended themselves from losing cohesion over the actions of a subset anytime in recorded history. Some portion of the reasoning on SBF in the past week looks motivated in service of the above.
The following isn’t really pointed at you; it’s just my thoughts on the situation.

I think there’s nearly unavoidable tension in trying to float arguments that deal with the optics of SBF’s connection to EA from within EA, which is a thing that is explicitly happening in this thread. Standards of epistemic honesty are in conflict with the group’s need to hold together. While the truth of the matter is and may remain uncertain, if SBF’s fraud was motivated wholly or in part by EA principles, that connection should be taken seriously.
My personal opinion is that, the more I think about it, the more obvious it seems that several cultural features of LW-adjacent EA are really ideal for generating extremist behavior. People are forming consensus thought groups around moral calculations that explicitly marginalize the value of all living people, to say nothing of the extreme side of negative consequentialism. This is all in an overall environment of iconoclasm and of disregarding established norms in favor of taking new ideas to their logical conclusion. These tendencies are held in equilibrium by stabilizing norms. At the risk of stating the obvious, insofar as the group in question is a group at all, it is heterogeneous; the cultural features I’m talking about are also some of the unique positive values of EA. But these memes have sharp edges.
From what I’ve heard, SBF was controlling, and fucked over his initial (EA) investors as best he could without sabotaging his company, and fucked over parts of the Alameda founding team that wouldn’t submit to him. This isn’t very “EA” by the usual lights.
It’s not immediately clear to me that this isn’t a No True Scotsman fallacy.
I’d be interested in someone with legal expertise weighing in on whether the farm example is in violation of child labor laws. There are special regulations and exemptions for farms, especially those run by a parent or a person standing in for the parent, but a nine-year-old driving that tractor seems very likely to be illegal to me. I broadly agree with all the stuff about letting children roam, and it comports well with my own experience, but tractors in particular can be very dangerous, and nine seems very young to be doing genuinely independent ag work like this. Would be interested in other people’s thoughts.
It seems like you might be reading into the post what you want to see to some extent (after reading what I wrote, it looked like I was trying to be saucy by paralleling your first sentence; just want to be clear that to me this is a non-valenced discussion). The OP returns to referring to K-type and T-type individual people after discussing their formal framework. That’s what makes me think that classifying people into the binary categories is meant to be the main takeaway.

I’m not going to pretend to be more knowledgeable than I am about this kind of framework, but I would not have commented anything if the post had been something like “Tradeoffs between K-type and T-type theory valuation” or anything along those lines. Like I said, I don’t think the case has remotely been made for being able to identify well-defined camps of people, and I think it’s inconsistent to say that there are K-type and T-type people, which is a “real classification”, and then talk about the spectrum between K-type and T-type people. This implies that K-type and T-type people really aren’t exclusive camps, and that there are people with a mix of K-type and T-type decision making.
I’m not persuaded at all by the attempt to classify people into the two types. See: in your table of examples, you specify that you tried to include views you endorse in both columns. But if you were effectively classified by your own system, shouldn’t your views fit mainly or completely in one column?
The binary individual classification aspect of this doesn’t even seem to be consistent in your own mind, since you later talk about it as a spectrum.
Maybe you meant it as a spectrum the whole time but that seems antithetical to putting people into two well defined camps.
Setting those objections aside for a moment, there is an amusing meta level of observing which type would produce this framework.
One would expect a Prime Minister to be Prime over Ministers. I don’t see the need to rename everything Ministry of This or That, so Prime Minister doesn’t really seem appropriate.
Would you be willing to summarize the point you’re making at the object level? Is it something like “the Soviets had to make the Molotov–Ribbentrop Pact, and that doesn’t say anything meaningful about their cultural approach to the interaction of world religions”? I don’t want to put words in your mouth or anything; I just want to understand the “extremely low-epistemics” bit.
It seems like you’ve retreated fully from your bailey:

“at the risk of being the Captain Obvious, I must remind the readers that mountain climbing is stupid”

to your motte:

“There is no greatness in being the 5001th man who climbed Everest”

I suspect most people responding take greater issue with the former position, so maybe if you still stand by it you could defend that one. To me, it seems like the standard of “if it increases your chances of dying, it’s a stupid recreational activity” is one that is unlikely to be applied evenly by just about anyone. E.g., if you want to apply that consistently, you should probably have a very restrictive diet, and definitely not play video games for moderate to long periods of time (risk of death from blood clots, sedentariness, etc.).
Conceptually I like the framing of “playing to your outs” taken from card games. In a nutshell, you look for your victory conditions and backchain your strategy from there, accepting any necessary but improbable actions that keep you on the path to possible success. This is exactly what you describe, I think, so the transposition works and might appeal intuitively to those familiar with card games. Personally, I think avoiding the “miracle” label has a significant amount of upside.
Not every occupation is the same, but nations occupied by military force are often denied the ability to run their own affairs with regard to legal proceedings, defence, etc. In particular not being allowed to have final authority over legal matters on their own soil seems to historically be a great sticking point: see the Austro-Hungarian demands of Serbia leading to WW1.
This is one of the key domains which defines the authority of a sovereign nation, whereas it doesn’t seem that uncommon in history for there to be foreign military assets in a nation as a non-occupying force that does not damage the sovereignty of that nation. Auxiliary troops, mercenaries, allied soldiers.
From this perspective, U.S. bases look like occupation insofar as they damage the sovereignty of the host nation, and look like anything but occupation to the degree that they protect or abide by that sovereignty. Russian propaganda would, of course, claim that the former dramatically outweighs the latter.
I think it’s useful to point out that training muscles for strength/size results in a well documented phenomenon called supercompensation. However, training for other qualities like speed doesn’t really work the same way. There’s lots of irrational training done because people make an inferential leap from the supercompensation they see in strength training and apply it to cases which intuitively seem like they might be analogues (e.g., weighted sprints don’t make you faster).
I think counterexamples are relevant because sometimes intuition points out real analogues, and sometimes fake ones, so we should value evidence and mechanistic explanation over analogies and cultural beliefs.
Sorry if this is a little incoherent, I wrote it when I was really sleepy.
You imply that you understand it’s a metaphor, but your other sentences seem to insist on taking the word “wrestling” literally as referring to the sport. The sentence in bold
“This was no passive measure to confirm a hypothesis, but a wrestling with nature to make her reveal her secrets.”
Makes it pretty clear I think. Do you simply not like the metaphor?
I suspect that massive destabilization following the precipitous fall of most of the great powers (NATO + Russia at the least) would result in war on every continent (sans Antarctica). If Asian countries don’t get nuked in this scenario like you suppose, I think it’s quite plausible general war in Asia would follow shortly as the surviving greatest powers jockey for dominance. If we posit the complete collapse of U.S. power projection in the Pacific, surely China is best positioned to fill the void, and I don’t think it’s clear where they’d draw the new lines.
In practice, leading thinkers in EA seem to interpret AGI as a special class of existential threat (i.e., something that could effectively ‘cancel’ the future)
This doesn’t seem right to me. “Can effectively ’cancel’ the future” seems like a pretty good approximation of the definition of an existential threat. My understanding of why A.I. risk is treated differently is because of a cultural commonality between said leading thinkers such that A.I. risk is considered to be a more likely and imminent threat than other X-risks. Along with a less widespread (I think) subset of concerns that A.I. can also involve S-risks that other threats don’t have an analogue to.
These are just one native speaker’s impressions, so take them with a grain of salt.
Your first two examples, to me, scan as being about abstract concepts; respectively: the emotion/quality of curiosity and the property of being in context.
This Quora result suggests that it’s a quality of “definiteness” that determines when articles get dropped (maybe as a second-language learner you’re likely to already have this as explicit knowledge, but find it difficult to intuit).
In those examples, the meaning doesn’t rely on pointing at two specific “curiosity” and “context” objects that have to be precisely designated, it relies on set phrases “out of curiosity” and “in context” that respectively describe an unmentioned action or object.
I think the article in the last example is dropped for a completely different reason. The “definiteness” argument doesn’t apply, but my instinct is that this is simple terseness in the communication from UI to user. Describing every UI element with precise language would result in web pages that resemble legal documents.
It’s possible you’re in Ease Hell. It has been a while since I got into the weeds with my settings, but there are pretty good reasons to change the default ease settings and reset the ease on old cards, as I recall. I’m also in the camp of only using the “again” and “good” buttons, since the other ones affect ease, iirc. Anyway, you’ve been at it longer than I have, but maybe the ease hell thing is new info for you or other Anki users.
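For anyone unfamiliar with the mechanics, here is a minimal sketch of why ease hell happens, assuming Anki’s default scheduler behavior as I understand it (starting ease 250%, a 130% floor, and per-button ease adjustments; the exact numbers are the documented defaults, not code pulled from Anki itself):

```python
# Sketch of Anki-style ease-factor bookkeeping (assumed defaults,
# not Anki's actual source code).
STARTING_EASE = 2.50   # new cards start at 250%
MIN_EASE = 1.30        # the floor; "ease hell" is being stuck here

def update_ease(ease: float, answer: str) -> float:
    """Return the new ease factor after one review."""
    delta = {
        "again": -0.20,  # lapses cut ease sharply
        "hard":  -0.15,
        "good":   0.00,  # "good" never raises ease back up
        "easy":  +0.15,
    }[answer]
    return max(MIN_EASE, ease + delta)

# A card that lapses repeatedly sinks to the floor, and because
# "good" leaves ease unchanged, it can never climb back out --
# the card keeps coming back at short intervals indefinitely.
ease = STARTING_EASE
for answer in ["again"] * 6 + ["good", "good"]:
    ease = update_ease(ease, answer)
```

The point of the sketch: once ease hits the floor, only “easy” raises it again, which is why resetting ease on old cards (or using an add-on/FSRS) is the usual escape route.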
I wish the cuteness made a difference. Interesting reading though, thanks.
Is that link safe to click for someone with arachnophobia?