~[agent foundations]
Mateusz Bagiński
https://en.wikipedia.org/wiki/Priming_(psychology)
“Although semantic, associative, and form priming are well established,[70] some longer-term priming effects were not replicated in further studies, casting doubt on their effectiveness or even existence.[71] Nobel laureate and psychologist Daniel Kahneman has called on priming researchers to check the robustness of their findings in an open letter to the community, claiming that priming has become a “poster child for doubts about the integrity of psychological research.”[72] Other critics have asserted that priming studies suffer from major publication bias,[73] experimenter effect[66] and that criticism of the field is not dealt with constructively.[74]”
As far as I understand, they claim that since spike proteins in actual virus particles are embedded in the particle (as opposed to “free”/“detached” when produced after vaccination), they do not accumulate in the tissues, at least not to the same extent. Possibly, after a virus particle has been destroyed, some of its spike proteins circulate freely (or attached to smaller fragments of the virion) and can then get into tissues and accumulate.
According to Karl Friston, we all do it
The current Polish government is very much conservative, right-wing, and populist but they clearly voice support for Ukraine and criticize Putin’s actions (which does not necessarily mean they’re going to do anything substantial about it).
You might want to edit this for clarity: in English, Estland is Estonia and Letland is Latvia. This was not immediately obvious to me at first glance.
I’m subscribed to So8res’s posts and over the last ~2 days have been getting messages that “So8res has created a new post: X” where X was a 7-year old post.
Different cultures have different antimemes. The more different two cultures are from each other the less their antimemes overlap. You can sweep up a mountain of antimemes just by reading a Chinese or Arabic history of civilization and comparing it to Western world history. You can snag a different set by learning what it was like to live in a hunter-gatherer or pastoralist society.
Anti-memes as shibboleths? If I see that you share the same weird belief/cultural practice/whatever that provokes such a strong self-suppressing response that nobody would acquire it unless they (a) are already immersed in that anti-meme’s culture or (b) make some substantial deliberate effort to become a part of that culture, then I can be fairly confident that you are indeed a member of that culture.
Typo: the Seven Years’ war ended in 1763, not 1753.
My superficial understanding is that Cyc has two crucial advantages over all current knowledge bases / knowledge graphs:
It is much, much bigger
Predicates can be of any arity (properties of one entity, relations between two entities, more complex, structured relationships between N entities for any N), whereas knowledge graphs can only represent binary relationships R(X,Y), like “X loves Y”.
If I understand it correctly, Cyc’s knowledge base is a knowledge hypergraph. Maybe it doesn’t matter in the end, and you can squeeze any knowledge encoded in Cyc’s KB into ordinary knowledge graphs without creating some edge-spaghetti hell.
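For what it’s worth, the usual trick for squeezing n-ary facts into a binary graph is reification: promote each fact to a node of its own and attach its arguments via role-labeled binary edges. A toy sketch (all names here are hypothetical, not Cyc’s actual representation):

```python
# Toy sketch (hypothetical names, not Cyc's actual representation):
# an n-ary fact can be "squeezed" into binary edges by reifying the
# fact itself as a node and attaching its arguments via role edges.

def reify(fact, fact_id):
    """Turn one n-ary fact into a list of binary triples (s, p, o)."""
    predicate, args = fact
    triples = [(fact_id, "is_a", predicate)]
    for role, entity in args.items():
        triples.append((fact_id, role, entity))
    return triples

# A ternary relationship: transfer(giver=Alice, receiver=Bob, object=Book)
nary_fact = ("transfer", {"giver": "Alice", "receiver": "Bob", "object": "Book"})
triples = reify(nary_fact, "fact_1")
# triples now contains only binary relations, e.g.
# ("fact_1", "is_a", "transfer") and ("fact_1", "giver", "Alice")
```

Whether this avoids the edge-spaghetti problem in practice is exactly the open question: the encoding works, but every n-ary fact costs an extra node plus n edges.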
If I remember correctly, GJP superforecasters were similarly successful, although they were a bit slower.
Actually, GJP forecasters updated a bit more quickly than Metaculus. [EDIT: probably not, see the reply below]
You’re probably right. I was myopically looking only at the rightmost portion where GJP updated to ~99% a bit quicker. It also seems like GJP had a more erratic trajectory than Metaculus.
I’m not sure what the right decision process for whether to do salvage epistemology on any given subject should look like. Also, if you see or suspect that this woo-ish thingy X “is a mix of figurative stuff and dumb stuff” but decide that it’s not worth salvaging because of infohazard risk, how do you communicate that? “There’s a 10% probability that the ancient master Changacthulhuthustra discovered something instrumentally useful about the human condition, but reading his philosophy may mess you up, so you shouldn’t.” How many novices do you expect to follow a general consensus on that? My hunch is that if one is likely to fall into the crazy, they are also unlikely to let their outside view override the inside view; instead they’ll assert “I calculated the expected value and it’s positive” and rush into it. Also2, how does one know whether they are “experienced enough” to try salvaging anything for themselves? Also3, I don’t think protecting new rationalists in this way would be helpful for their development.
To reduce the risks the OP points out, I would rather aim at being more explicit about when we’re using salvage epistemology (just having this label can be helpful) and at poking around people’s belief systems more when they start displaying tentative signs of going crazy.
Another related post that feels missing from the last section:
The great challenge is to figure out which variables are directly relevant—i.e. which variables mediate the influence of everything else.
Is this equivalent to identifying the Markov blanket of the phenomenon being studied?
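To make the proposed equivalence concrete: in a Bayesian-network picture, the variables that mediate the influence of everything else on a node are its Markov blanket (its parents, children, and children’s other parents). A toy sketch (the example DAG and all names are mine, not from the post):

```python
# Toy sketch (example DAG and names are mine, not from the post):
# the Markov blanket of a node in a Bayesian network is its parents,
# its children, and its children's other parents; conditioned on the
# blanket, the node is independent of every other variable.

def markov_blanket(node, parents):
    """`parents` maps each node to the set of its parents in a DAG."""
    children = {c for c, ps in parents.items() if node in ps}
    coparents = {p for c in children for p in parents[c]} - {node}
    return parents.get(node, set()) | children | coparents

# DAG: X -> Z -> Y <- W
dag = {"X": set(), "W": set(), "Z": {"X"}, "Y": {"Z", "W"}}
blanket = markov_blanket("Z", dag)
# blanket == {"X", "Y", "W"}: parent X, child Y, co-parent W
```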
If you assume that the overseer could robustly detect that the AI wants to kill humans, they could probably just as robustly detect that it is not aiming to operate under the constraint of keeping humans alive, happy, etc., while optimizing for whatever it is trying to optimize.
This might occur in the kind of misalignment where the AI is genuinely optimizing for human values only because it is too dumb to realize that this is not the best way to achieve its learned objective. If extracting that objective is harder than reading its genuine instrumental intentions, then the moment it discovers a better way may look to the overseer like a sudden change of values.
It’s Monte Carlo, not Monty
this would actually happen in practice—i.e. why humans would learn high-level models which are related to the universe’s lower-level structure in this way.
My intuition is that this makes it possible to model long-range dependencies at minimal cognitive cost.
Also, do you think there’s some utility in modelling abstraction as a forgetful functor?
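In case a concrete picture helps (a standard textbook example, not from the post): a forgetful functor keeps the underlying carrier while discarding structure, much as an abstraction discards low-level detail:

```latex
% Standard example: forget a monoid's operation and unit, keep the set.
U : \mathbf{Mon} \to \mathbf{Set}, \qquad U(M, \cdot, e) = M
```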
It may be the same kind of bias that disproportionately incentivizes publishing shiny new research papers, finding new hypotheses, etc., over trying to replicate what has already been published.
Thanks, that’s a very valuable insight for me!
[WARNING: The paragraphs below are evolutionary-psychological speculation, so take them with a grain of salt.]
When you present it this way, it makes much more sense why we are “wired” (in some circumstances) to expect the other person not to offer any help or advice but just to pay attention and listen. Something important happened. So important, actually, that I want (or maybe “I’m adapted”) to tell another person about it, and I want them to understand as precisely as possible both the content (what I’m saying) and the intention (why I want them to know about it).
Only once we have minimized the inferential distance between us, and each of us can assume that the other has all the relevant knowledge, can we start thinking: “OK, so what do we do now?”
Another (maybe more obvious) explanation of this phenomenon is that we just want to reassure ourselves that we are not alone with our problems and/or to strengthen our relationships in times when we may especially need them. This, however, leaves out the many situations when we want to “just talk” while nothing we value is in any particular danger – these can be explained by your interpretation.
And… yes, this sounds a lot like Hold Off On Proposing Solutions. Maybe we already discovered it at some point in our evolutionary past.
[/WARNING]