There’s a piece of knowledge available in certain altered states (meditative and entheogenic, possibly some manic states) that behaves a bit like SCP-055, right down to the fact that people find it easier to describe in terms of negatives. Repeated exposure allows you to bring a bit more back with you, but it seems to make people susceptible to bad epistemics (i.e., most such people wind up in woo). I think this negative payload isn’t directly bad epistemics but something that collides with people’s badly grounded ontology/metaphysics.
It’s not quite moral realism, but it does relate to things actually being important/precious in a way that, while in the state, we are concerned that our normal self doesn’t seem to understand.
One phenomenological signature of the thing is that it feels ‘too big’ for normal cognition, like you need a higher-than-normal branching factor in your thought process to be able to hold its disparate parts at once.
It *isn’t* unity consciousness, though that’s nearby in mind-space or has some overlap.
It *isn’t* ‘no-self’ (itself a bad translation of not-self, and endlessly confusing for spiritual seekers).
It *does* seem related to our problems, both object-level and meta-level, with Moloch and Azathoth. At least by my read.
I think subsuming several dimensions of coordination failure under transaction costs is the same mistake the externalities people were making.
meta: I’ve really enjoyed these posts where you’ve been willing to publicly boggle at things, and this is my favorite so far. I wanted to do more than upvote because I know how hard these sorts of posts can be to write. Sometimes they’re easy to write when they’re generated by rant-mode, but then hard to post, knowing that our rant-mode might wind up feeling attacked, and rant-mode is good and valuable and to be encouraged.
It does help ground things but isn’t a full accounting on the philosophy of science side since your decision model has ontological commitments.
As someone who sometimes has a hard time coming up with concrete examples, I really like the list of mental motions.
+1. A frame that might be helpful is figuring out when adversarial training happens anyway, and why. I think this happens with human management and might be illustrative.
I think it would be both fun and useful to attempt rationalist CEV with original seeing (not just bringing on existing rationalist experts, though perhaps consulting with them to some degree).
‘Doing it well’ seems to be very load-bearing there. I think you’re sneaking in an ‘all’ in the background? Like, in order to be defined as superintelligent, it must do better than X at all domains, or something?
My current answer is something hand-wavy about the process just trying to un-Goodhart itself (assuming that the self and world model as given start off Goodharted), and the chips fall where they may.
Reframing suffering can alleviate it, but this is a temporary, partial solution. Learning better mental heuristics can lead to longer-lasting, more complete solutions. I guess I’d say that reframes are a within-narrative solution when the real solution is an extra-narrative one: disassembling the upstream components of suffering.
I’ve referred to this as super-cooperators and super-defectors, but those terms aren’t used in the literature. I remember Joshua Greene citing something interesting in this space in his EAG talk, circa 2015, but I can’t find much; the video of his talk doesn’t have slides. The vivid memory I have is the surprising discovery of the super-defectors, i.e., people who enforce against cooperators, and the tidbit that it only takes a small number of supers in either direction to flip the whole network over to the other equilibrium.
“We need to use our diplomatic and more traditional intelligence assets to bring pressure on the governments of Qatar and Saudi Arabia, which are providing clandestine financial and logistic support to Isil and other radical Sunni groups in the region,” -Hillary Clinton
The Ericsson Question (in the tradition of the Hamming question): what are the most important skills in your problem domain? What would a deliberate practice system look like for those skills?
Why aren’t you doing something like that system?
And the infuriating response where people act like this obvious thing isn’t real, or is specific to the examples, which then get henpecked into even more specific irrelevance (“well, technically...”), or motte-and-bailey’d, etc.
One part of this is that informational rentiers/gatekeepers have to maintain a facade that they’re not doing something that goes against common-sense norms.
I’d put it in a way that is pretty explicit: most people are trained so completely to uphold obvious lies, stupidities, and failures of those deemed higher status that they start actually hallucinating clothing on the emperor.
Skill training is just one downstream consequence of this: the correct procedure flows not from the territory, but via some other person’s map that has been officially sanctioned.
The sky is blue <removed for lack of citation, see talk page>
Why don’t the high status talk about technique? Because it’s embarrassing how bad they are at it.
There is a way out.
Sample size is related to how big an effect size you should be surprised by, i.e., power. Big effect sizes in smaller populations are less surprising. Why is there no overall rule of thumb? Because it gets modified a bunch by the base rate of what you’re looking at, and some other stuff I’m not remembering off the top of my head.
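A toy sketch of that sample-size/effect-size relationship (my own illustration, not from the comment; it uses the normal approximation to a two-sample test, and the effect sizes and group sizes are made-up numbers):

```python
import math
from statistics import NormalDist

ND = NormalDist()  # standard normal

def approx_power(effect_size: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sample z-test for standardized effect size d."""
    z_crit = ND.inv_cdf(1 - alpha / 2)                 # two-sided critical value
    noncentrality = effect_size * math.sqrt(n_per_group / 2)
    return ND.cdf(noncentrality - z_crit)

# A big effect is detectable even in a small sample; a small effect is not:
print(round(approx_power(1.0, 20), 2))  # d = 1.0, n = 20/group -> ~0.89
print(round(approx_power(0.2, 20), 2))  # d = 0.2, n = 20/group -> ~0.09
```

So a study of 20 per group that reports a large effect isn’t automatically suspect on power grounds, while the same study reporting a reliably detected tiny effect should raise an eyebrow.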
In general I’d say there’s enough methodological diversity that there’s a lot of stuff I’m looking for as flags that a study wasn’t designed well. For examples of such you can look at the inclusion criteria for meta-analyses.
There are also more qualitative things, like how much I’m extrapolating based on the study authors’ discussion section. In the longevity posts, for example, I laud a study for having a discussion section where the authors explicitly spend a great deal of time talking about what sorts of things are *not* reasonable to conclude from the study, even though they might be suggestive of further research directions.
Handling confounds is kinda like building a keyword map: I look at the most well-regarded studies in a domain, note down what they’re controlling for, then discount, to varying degrees, studies that aren’t controlling for those things. This is another place where qualitative judgements creep in, even in Cochrane reviews, where they’re forced to develop ad hoc ‘tiers’ of evidence (A, B, C, etc.) and give some guidelines for doing so.
I have higher skepticism in general than I did years ago, having learned about the number of ways effects can sneak into the data despite the honest intentions of moderately competent scientists. I’m also much more aware of a fundamental selection-effect problem: anyone running a study has some vested interest in framing hypotheses in particular ways, because nobody devotes themselves to something to which they’re completely indifferent. This shows up as a problem in your own evaluation too, in that it’s almost impossible not to sneak in isolated demands for rigor based on your priors.
I’m also generally reading over the shoulder of whichever other study reviewers seem to be doing a good job in a domain; epistemics is a team sport. An example of this is when Scott did a roundup of the evidence on low-carb diets, mentioning lots of other people doing meta-reviews and speculating about why different conclusions were reached. E.g., Luke Muehlhauser and I came down on the side that the VLC evidence seemed weak, while Will Eden came down on the side that it seemed more robust, seemingly because we differed on how much weight to place on inside-view metabolic models vs. outside-view long-term studies.
That’s a hot take. It can be hard to just dump top-level heuristics, vs. seeing what comes up from more specific questions/discussion.
I’ve been surprised in both directions with different behaviors: 1) that a behavior was part of a surprisingly agentic, self-healing pattern with structure that wasn’t immediately obvious, and 2) that a behavior was a surprisingly structureless spandrel that dissolved on even a modicum of contact with awareness.
I see it partially as a question of volume/consistency rather than just raw talent, though the raw-talent pipeline is obviously still an obstacle. I think there are quite a few people who can put out relevant work occasionally, and very few who can do it often enough that it makes sense for them to be full time.
Intuition pump: think about this in the context of consumption curves in one’s own life, i.e., is any utility gained by moving consumption forward or backward in time between selves?
Psycho-Cybernetics is an early text in this realm.
time for a new instance of this?
We have fewer decision points than we naively model, and this has concrete consequences. I don’t have ‘all evening’ to get that thing done; I have the specific number of moments in which I think about doing it before it gets late enough that I put it off. This is often only once or twice.