Software developer and mindhacking instructor. Interested in intelligent feedback (especially of the empirical testing variety) on my new (temporarily free) ebook, A Minute To Unlimit You.
pjeby
forcing yourself to do what you know you ought to instead of what is fun & easy.
I had difficulty engaging with most of your article from this point on, because your premise seems to be that Work is hard and problematic and we must be forced to do it.
This premise is not just epistemically false: believing it has bad instrumental effects as well.
Ask anybody who’s actually productive (especially those who make a lot of money by being productive), and nearly all of them will tell you that they love their work. (The rest will probably say they love money, or prestige, or whatever other result their work gets for them.)
IOW, instrumental observation shows that the driving factor of high productivity is loving something more, not forcing yourself to do something you love less.
I’ve read all of HPMOR and some of the sequences, attended a couple of meetups, am signed up for cryonics, and post here occasionally. But, that’s as far as I go.
That’s further than I go. Heck, what else is there, and why worry about whether you’re going there or not?
Other people (that I have talked to) seem to be divided on whether it was a good thing to do or not.
[Note: this is going to sound at first like PUA advice, but is actually about general differences between the socially-typical and atypical in the sending and receiving of “status play” signals, using the current situation as an example.]
I don’t know about “good”, but for it to be “useful” you would’ve needed to do it first. (E.g. Her: “Buy me a drink” You: “Sure, now bend over.” Her: “What?” “I said bend over, I’m going to spank your spoiled [add playful invective to taste].”)
Of course, that won’t work if you are actually offended. You have to be genuinely amused, and clearly speaking so as to amuse yourself, rather than being argumentative, judgmental, condescending, critical, or any other such thing.
This is a common failure mode for those of us with low-powered or faulty social coprocessors—we take offense to things that more-normal individuals interpret as playful status competition, and resist taking similar actions because we interpret them as things that we would only do if we were angry.
In a way, it’s like cats and dogs—the dog wags its tail to signal “I’m not really attacking you, I’m just playing”, while the cat waves its tail to mean, “you are about to die if you come any closer”. Normal people are dogs, geeks are cats, and if you want to play with the dogs, you have to learn to bark, wag, and play-bite. Otherwise, they think you’re a touchy psycho who needs to loosen up and not take everything so seriously. (Not unlike the way dogs may end up learning to avoid the cats in a shared household, if they interpret the cats as weirdly anti-social pack members.)
Genuine creeps and assholes are a third breed altogether: they’re the ones who verbally say they’re just playing, while in fact they are not playing or joking at all, and are often downright scary.
And their existence kept me from understanding how things worked more quickly, because normal people learn not to play-bite you if you bare your claws or hide under the couch in response! So, it didn’t occur to me that all the normal people had just learned to leave me out of their status play, like a bunch of dogs learning to steer clear of the psycho family cat.
The jerks, on the other hand, like to bait cats, because we’re easy to provoke a reaction from. (Most of the “dogs” just frown at the asshole and get on with their day, so the jerk doesn’t get any fun.)
So now, if you’re a “cat”, you learn that only jerks do these things.
And of course, you’re utterly and completely wrong, but have little opportunity to discover and correct the problem on your own. And even if you learn how to fake polite socialization, you won’t be entirely comfortable running with the dogs, nor they you, since the moment they actually try to “play” with you, you act all weird (for a dog, anyway).
That’s why, IMO, some PUA conversation is actually a good thing on LW; it’s a nice example of a shared bias to get over. The LWers who insist that people aren’t really like that, that only low [self-esteem, intelligence] girls fall for that stuff, that even if it does work it’s “wrong”, etc., are in need of some more understanding of how their fellow humans [of either gender] actually operate. Even if their objective isn’t to attract dating partners, there are a lot of things in this world that are much harder to get if you can’t speak “dog”.
tl;dr: Normal people engage in playful dog-like status games with their actual friends and think you’re weird when you respond like a cat, figuratively hissing and spitting, or running away to hide under the bed. Yes, even your cool NT friends who tolerate your idiosyncrasies—you’re not actually as close to them as you think, because they’re always more careful around you than they are around other NTs.
I could have got that child sponsored. I could have kept my job, and Mary could have stopped crying that evening. She’d have thanked me for coming by, and after I left she would have cuddled on the couch with her new Sponsor Child, tears drying as she found hope in the world.
But I didn’t do it. Instead I apologized for interrupting her grief, and left.
Because I am not a Meat Fucker.
Translation: you fell prey to a different Dark meme, one that linked your behavior to a perception of status, which you then pursued at the expense of real utility for at least two real people besides yourself.
In other words, your vaunted “honesty” actually equals nothing more than selfish egoism: you prefer to pride yourself on it more than you prefer actually helping people.
That is, the rhetoric around honesty you’re using is nothing more than the creed of the Confessor (“the ultimate sin is the exercise of command”), which is equally countered and mirrored by that of the Kiritsugu (who would see you as flawed for being “one who refuses to help”).
It’s not obvious that either of these is a truly correct stance, vs. just being different from each other. Neither posture absolves you of responsibility for the consequences of your actions or inactions—or even limits the possibility of bad consequences occurring.
(btw, on an entirely unrelated note, it’s Watto, not Waddo.)
the CFAR techniques as a whole never went meta enough to catch “meta-issues,” not in any really systematic way.
There is no level of meta systemization that can overrule a person’s meta issues, because their meta issues are always “one level higher than you”. ;-) To put it more precisely, no passive information-passing process can bypass a person’s meta issues, any more than you can turn a shredder into a fax machine by feeding it a copier manual. The incoming information gets processed through an existing filter that deletes any information that doesn’t fit the paradigm, or mangles the information until it does fit.
As I was joking above, self-help really is governed by the Interdict of Merlin: you can discover powerful “spells”, or you can pass them mind-to-mind, but you can’t learn them from a book, except insofar as the book might give you enough clues to rediscover the spell for yourself.
There are towers of meta-issues, meta-issues that prevent themselves from being looked at… what a mess.
Actually, meta-issues don’t have meta-levels, they only simulate recursion via iteration. A meta-issue might be something like, “when you are learning something from a teacher, then try to suck up by being super-successful really fast”. This is only one meta-level, and while it can look recursive in effect, it’s just an illusion.
Let’s say I tell the person with this meta-issue they need to inquire about some emotional state. If we’re at a part of the process where there’s a possibility for them to have succeeded in fixing the problem we’re working on, they will under-report issues and problems (like a lingering negative feeling) or describe hyper-optimistic scenarios that don’t match what they’re really feeling.
Then, if I point this out, they may now apply the pattern again, by trying to prove even harder that they’ve already learned what I’m telling them, even though they haven’t.
(This isn’t really recursion or a new meta-level, it’s just the same pattern being applied to a new stimulus or situation that only incidentally happens to be at a different meta-level, if that makes sense.)
So the only escape from this iterated pseudo-recursion is for them to catch themselves in the act of this automatic response, which requires either a lot of iteration (like it did for me when I was learning my own meta-issues), or an outside party who can spot it and say, “stop that! you’re doing it again...” until the person can recognize themselves doing it.
One example was the self-help blog of Phillip Eby (pjeby), where each new post seemed to bring new amazing insights, and after a while you became jaded.
Er, you do realize I stopped most of my blogging for more or less that reason, right?
Around that time, I started pushing for a (partly LW-inspired) greater focus on empirical improvement in my work, because there was just too much randomness in how long the effects of my then-current techniques would last. Some things were permanent or nearly-so, and others might only last a few days or weeks… and I had no reliable way to predict what the outcome of a particular instance of application would be.
It was a tough shift, because at the time I also had no way to know for sure that anything more reliable or predictable in fact existed, but unlike the more “faith-based” self-help folks, I couldn’t just keep ignoring the anomalies in my results.
The good news is I got over that hump and developed more reliable methods. The bad news is that it didn’t involve brilliant simple epiphanies, but lots and lots of little hard-won insights and the correlation of tons of practical knowledge.
(And one of those bits of practical knowledge is how to avoid stopping at the “epiphany” phase of a given insight.)
Anyway, I quit blogging about it (at least to the general public) because once you’re no longer dealing in simple epiphanies, there starts to be too much inferential distance to be able to talk about anything meaningful, short of creating my own Sequences to reconstruct the inferential chains… one mini-epiphany at a time.
if all they have to say that’s nice about the post is a stock phrase that could be equally well applied to any original text, I’d prefer they skip it.
What I find interesting about this is that you’re basically saying that their signal isn’t costly enough to make you feel good. I wonder if that’s the essence of the conflict under normal circumstances, i.e., by being direct (and thus not paying the additional costs of being polite) you are signaling that you do not value your audience as alliance partners very much, or that you are so far above them as to not need to make an investment in pleasing them.
Perhaps us geeky types simply prefer our costly signaling to be in the form of someone actually having thought about what we said. ;-)
Minerva McGonagall, having submissively accepted her character assassination at the hands of Harry Potter, now submits herself for public humiliation and complete self-abnegation
I don’t see it like that at all—I saw McGonagall:
Trying bravely to take blame away from Harry because, in her words, if she didn’t, he would have no one to say those horrible things to, and
Bravely taking a public stand for her principles, trying to turn over a new leaf (or as she put it, “trying to do better”)
At least, those are pretty clearly how she sees herself in those situations, not as submitting to Harry.
(I interpret the discussion about House points as simply meaning she 1. doesn’t care about the points to anybody but the Weasley twins, and 2. is trying to be more inclusive and trusting of her students.)
Scientology is based on a bunch of low-level hacks on human perceptual routines and cognitive biases. (The staring one works on others by intimidation, as you look confident in an odd therefore unpredictable manner; the routine itself trains you to uncritically accept what’s in the later, sillier material.) Hubbard did rather well for someone with no theory and only an aim (money and fame) in mind. I would, however, caution that there are few arts of mind-hacking that are darker.
The other major hack going on in all of those routines is people paying attention to you. Being paid attention to is an extremely powerful behavior modifier, and it’s a major recruitment tool used by cults of all kinds.
(Not only is staring paying attention, but in the other exercises, the instructor is clearly paying attention to the slightest detail of everything you say or do. This type of attention from parents and teachers tends to stimulate a desire to please the person giving the attention.)
I’m not sure if it was your intent to point this out by contrast, but I would like to point out that a reasonable art of “kicking” would not rely on you making conscious decisions, let alone explicitly rational ones. Rather, it would rely on you ensuring that your subconscious has been freed from sources of bias ahead of time, and is therefore able to safely leap to conclusions in its usual fashion. An art that requires you to think at the time things are actually happening is not much of an art.
Case in point: when reading “Stuck In The Middle With Bruce”, I became aware of a subconsciously self-sabotaging behavior I’d done recently. So I “kicked” it out by crosslinking the behavior with its goal-satisfaction state. It would be crazy to wait until the next occasion for that behavior to strike, and then try to reason my way around it, when I can just fix the bloody thing in the first place. (Interestingly, I mentioned the story to my wife, and described how it related to my own behavior… and she thought of a different sort of self-sabotage she was doing, and applied the same mindhack. So, as of now, I’d say that story was one of the top 5 most valuable things I’ve gotten from LW.)
Now, in the case of extinguishing a behavior, there’s no way you can absolutely prove you’ve fixed something permanently; the best you can do is show that the thought process that produced an automatic response before applying a technique no longer produces that response afterward. Also, sometimes you catch a break: you find yourself in a situation, expecting yourself to do the same old stupid thing you’ve been doing before, and then you find you don’t need to, or notice a few seconds later that you already did something completely different, and a much better choice.
Truth is, our brains really aren’t that bad at making decisions, once you take out the “priority overrides” that mess things up.
Anyway, I’m rambling a bit now. The point is, “kicking” is generally not something you do at the time—you do it in advance of the next time....
Because your brain is faster than you are.
It doesn’t really much matter whether this is true or not.
I think it matters from the perspective that if subagents are simulated at query time, then a non-subagent model should be able to produce similar results to IFS, with fewer complications.
In my own experience comparing subagent-oriented approaches (e.g. IFS, Core Transformation) with non-subagent ones, the non-subagent ones generally require less work to figure out what is going on, because simulating parts that want to hide or deflect stuff is more energy-intensive and frustrating than just helping someone notice that they are hiding or deflecting things.
For example, when I segregate my own desires into parts, it increases the odds of an argument or of parts withholding information or motives, vs. presupposing that all my desires are mine and that I have good reasons even for doing apparently self-destructive things.
That being said, I can think of all kinds of situations where IFS as a metaphor would be superior to more direct approaches… but they all involve people for whom the subagent metaphor is an easier introduction to metacognition, and/or the stuff being dealt with is traumatic enough that you really want to keep it mostly out of consciousness until necessary. I mostly don’t work with either group, so for me it’s vastly more efficient to simply point to the hiding or deflecting and say, “that thing you just did there: don’t do that,” than to make up new parts for each thing being avoided.
I can take that stance, though, because I work with people who are highly motivated to change and have put me in a position to give them that type of feedback. But IFS therapists need to be able to work with people who don’t always have that degree of trust and compliance with them, or who aren’t as willing to accept the existence of their less-acceptable thoughts and desires. In that situation, you’re going to have complications no matter what, so you might as well let people pretend they have subagents.
I’m actually kind of surprised that IFS seems so popular in rationalist-space, as I would’ve thought rationalists more likely to bite the bullet and accept the existence of their unendorsed desires as a simple matter of fact. In retrospect, I suppose that the ability to project (at least temporarily) those desires onto imaginary subagents would be helpful, at least as “training wheels” of a sort… and that the kind of people drawn to rationalism might be extra-likely to want to disavow all their “irrational”-seeming desires!
The reason I call it “training wheels” and dissociation is because imagined subagents allow you to disavow those desires from really being “you”. If it’s “5-year-old you” who has that desire (for example), then that can be more acceptable than admitting that you are the one who had—and still has—that desire. (My approach to this kind of thing is to first target whatever judgmental beliefs say the desire is only acceptable if you’re a five year old (if at all), because if you don’t reject the desire then disavowing it is no longer required, and you don’t need to project it onto an imaginary agent.)
To put it another way, conscious negotiation between subagents is a kludge. The brain already has systems for mediating between desires, and they function normally except when people also have judgmental beliefs that reject those desires’ validity and try to squash them. The underlying decision system still tries to run them, but runs into problems because the conscious mind has arranged its life in such a way as to not leave any opportunity for them to manifest, and rejects explicitly pursuing them. So they end up coming out in dysfunctional ways that allow for continued deniability.
Or to put it another way, if you desire something, and you also desire that you not desire it, your brain resolves the conflict by making the desire’s fulfillment appear to be outside your control. Believing that there is a “subagent” or “part” in play, allows you to maintain this facade… which is useful if you’re a therapist and don’t want to piss off your unsophisticated clients who will think you’re insulting them if you tell them all their desires are theirs, period.
But if you’re working on yourself, there is IMO little use for maintaining this facade, since if you’re going to negotiate successfully, you’re ultimately going to have to accept the validity of the desires… so you might as well bite the bullet and start from there in the first place!
On the other hand, if I reflect further, there is actually one area where I do find subagent metaphors of a sort to be kind of useful, and that is when I feel “taken over” by some past experience that I’m flashing back to. In such cases it’s helpful to see it as being temporarily possessed by a ghost of my past self, in order to detach from it, and step back into the “everyday self” that can weigh things rationally without being consumed by a past emotion. But these don’t involve any negotiations; they’re more like, “is this thing I’m feeling actually happening now?” or “is this the most useful mindset to be in right now?”
So it’s less subparts and more, “who am I acting as right now, and who would I like to be acting as?” I don’t assume that these aspects of me have agency of their own, because they are roles that I can play or not play, hats I can put on or take off at will. I think that’s the closest thing to anything subagenty that I’ve actually found useful, personally, and IMO it’s a more empowering metaphor than seeing oneself as a collection of squabbling not-really-you parts.
Most people simply do not expect reality to make sense
More precisely, different people are probably using different definitions of “make sense”… and you might find it easier to make sense of if you had a more detailed understanding of the ways in which people “make sense”. (Certainly, it’s what helped me become aware of the issue in the first place.)
So, here are some short snippets from the book “Using Your Brain For A Change”, wherein the author comments on various cognitive strategies he’s observed people using in order to decide whether they “understand” something:
There are several kinds of understanding, and some of them are a lot more useful than others. One kind of understanding allows you to justify things, and gives you reasons for not being able to do anything different.…
A second kind of understanding simply allows you to have a good feeling: “Ahhhh.” It’s sort of like salivating to a bell: it’s a conditioned response, and all you get is that good feeling. That’s the kind of thing that can lead to saying, “Oh, yes, ‘ego’ is that one up there on the chart. I’ve seen that before; yes, I understand.” That kind of understanding also doesn’t teach you to be able to do anything.
A third kind of understanding allows you to talk about things with important sounding concepts, and sometimes even equations.… Concepts can be useful, but only if they have an experiential basis [i.e. “near” beliefs that “pay rent”], and only if they allow you to do something different.
Obviously, we are talking mostly about “clicking” being something more like this latter category of sense-making, but the author actually did mention how certain kinds of “fuzzy” understanding would actually be more helpful in social interaction:
However, a fuzzy, bright understanding will be good for some things. For example, this is probably someone who would be lots of fun at a party. She’ll be a very responsive person, because all she needs to do to feel like she understands what someone says is to fuzz up her [mental] pictures. It doesn’t take a lot of information to be able to make a bright, fuzzy movie. She can do that really quickly, and then have a lot of feelings watching that bright movie. Her kind of understanding is the kind I talked about earlier, that doesn’t have much to do with the outside world. It helps her feel better, but it won’t be much help in coping with actual problems.
Most of the chapter concerned itself with various cognitive strategies of detailed understanding used by a scientist, a pilot, an engineer, and so on, but it also pointed out:
What I want you all to realize is that all of you are in the same position as that … woman who fuzzes images. No matter how good you think your process of understanding is, there will always be times and places where another process would work much better for you. Earlier someone gave us the process a scientist used—economical little pictures with diagrams. That will work marvelously well for [understanding] the physical world, but I’ll predict that person has difficulties understanding people—a common problem for scientists. (Man: Yes, that’s true.)
Anyway, that chapter was a big clue for me towards “clicking” on the idea that the first two obstacles to be overcome in communicating a new concept are 1) getting people to realize that there’s something to “get”, and 2) getting them to get that they don’t already “get” it. (And both of these can be quite difficult, especially if the other person thinks they have a higher social status than you.)
My search grids don’t look much like anything from NLP, least of all metaprograms. Instead, they’re patterns of common bugs in the brain that I believe are evolutionarily defined, and likely to be universal.
For example, working with people on self-image problems, I’ve found that there appear to be only three critical “flavors” of self-judgment that create life-long low self-esteem in some area, and associated compulsive or avoidant behaviors:
Belief that one is bad, defective, or malicious (i.e. lacking in care/altruism for friends or family)
Belief that one is foolish, incapable, incompetent, unworthy, etc. (i.e. lacking in ability to learn/improve/perform)
Belief that one is selfish, irresponsible, careless, etc. (i.e. not respecting what the family or community values or believes important)
(Notice that these are things that, if you were bad enough at them in the ancestral environment, or if people only thought you were, you would lose reproductive opportunities and/or your life due to ostracism. So it’s reasonable to assume that we have wiring biased to treat these as high-priority long-term drivers of compensatory signaling behavior.)
Anyway, when somebody gets taught that some behavior (e.g. showing off, not working hard, forgetting things) equates to one of these morality-like judgments as a persistent quality of themselves, they often develop a compulsive need to prove otherwise, which makes them choose their goals, not based on the goal’s actual utility to themself or others, but rather based on the goal’s perceived value as a means of virtue-signalling. (Which then leads to a pattern of continually trying to achieve similar goals and either failing, or feeling as though the goal was unsatisfactory despite succeeding at it.)
Simply knowing this fact is hugely helpful in narrowing down the search space for the memories needing reconsolidation. All you have to do is look for emotionally salient instances or patterns of learning that problematic behavior X equated to judgment flavor Y. If you’ve ever done something like Method of Levels or similarly undirected “theories of everything”, you’ll know you can wander through somebody’s conscious understanding of the problem for ages without getting anywhere or even being sure you are getting somewhere.
In contrast, if somebody tells me that they’ve been pursuing X goal for years and keep failing at it, or even when they do achieve it, it feels awful, or if they have impostor syndrome, I can go after it immediately by looking at what virtue they’re trying to signal (or flipping it and asking what bad judgment they would have of themselves if they had to give up on the goal or it were impossible for them), and then we’re off to the races of tracking down examples of where they learned that judgment from, the various implicit learnings that went with it, and the underlying social values and assumptions they absorbed in the process.
Along the way, this also helps pinpoint self-destructive and self-undermining behavior and self-talk (and gets rid of them), without having to first dig around in someone’s self-talk to get at the beliefs. (Which is a big win, because people are rarely aware of the ways they treat themselves badly, and often think they are helping or “motivating” themselves by being pessimistic or self-critical or having overly high expectations. So if you ask people what they think is their problem, they will often insist they need more of precisely the thing that is causing the problem in the first place!)
What I found interesting about this article was that it highlights why I’ve needed to pinpoint which “flavor” of low self-esteem was involved in a memory in order to fix the relevant behavior or belief: without that key piece of information, you can’t generate a correct contradiction for it!
In my early use and development of the implicit beliefs framework (SAMMSA) that I now use this grid in, I was just asking the client what positive quality the other person in the memory thought the client lacked, and then helping them discover how they in fact possessed that quality at the time. But this process was still a bit hit or miss until I worked out that there were really only those three kinds or flavors of quality, regardless of how the thing was named or presented, which made things easier because I could point to the three things and ask “which of these is it more like?”
Sometimes people hesitate a bit, and think it’s between two of them, and we might have to try it both ways in such a case. But that’s still a lot faster than wandering around with no idea where to start or if you’re getting anywhere. Discovering the three “EPIC failures of trust” (flavors of moral judgment) made the process reliable because it adds a well-formedness condition that can be checked before completing the technique, thus allowing earlier error correction and early trimming of dead-ends from the search tree, so to speak.
And now, thanks to this article, I have a better understanding of why this was needed, in a higher-order sense, which might lead to the development of new techniques, or at least a filter for better understanding or improving other techniques that appear to work via memory reconsolidation.
Um, isn’t that basically a wiki? I looked at the website and don’t see anything right off that indicates how it’s different from any other personal wiki tool. It even seems to be using the same double-square-bracket link syntax used by many wiki tools.
On a closer look at the one available screenshot, I think I see that the difference might be that instead of just a list of “pages that link here”, the tool provides a list of “paragraphs or bullet points that link here”, and that perhaps the wiki pages themselves are outlines?
Actually, that makes a lot of sense… and probably is better than what I’m doing with DynaList right now. Signing up… and, ok, so it’s interesting. The outliner UX is kind of basic and really lacking in features I’m used to with other outliners. For example, I can’t paste anything into it from my other outliners—pasting multiline text results in a single outline item with indentation, instead of separate bullet points.
Worse, I can’t copy out either, or at least haven’t figured out how to yet. That seems to make this an information silo that doesn’t play well with other tools.
After some experimenting with “Export” I find I can copy and paste that into a markdown editor and get a bullet list, but not something I can paste into actual outlining tools using e.g. tab indentation or OPML. The export is also lossy, losing any line breaks or indentation in code blocks. And using it is awkward, as hitting ^A to “select all” in the export ends up selecting the rest of the page, not just the export bit. I was hoping “view as document” plus “export” would let me at least extract a markdown page, but it goes back to bullet points in the export. In order to get a non-lossy export, you have to “Export All” (meaning your entire database!), and it uses a weird asterisk-indented format that is compatible with exactly nothing.
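As a stopgap for that last problem, an asterisk-indented export can be mechanically converted to tab-indented plain text, which outliners like Dynalist and Workflowy do accept on paste. Here’s a minimal sketch; the function name is mine, and the assumption that each outline level adds a fixed number of leading spaces before a `*` or `-` bullet is a guess about the export format that you’d need to check against an actual export.

```python
import re

def asterisks_to_tabs(text, indent_width=4):
    """Convert an asterisk-indented outline to tab-indented plain text.

    Assumes (hypothetically) that each outline level adds `indent_width`
    leading spaces before a '*' or '-' bullet marker; adjust to match
    whatever the real export produces.
    """
    out = []
    for line in text.splitlines():
        m = re.match(r'^(\s*)[*-]\s+(.*)$', line)
        if m:
            # Depth = leading spaces divided by the per-level indent width.
            level = len(m.group(1)) // indent_width
            out.append('\t' * level + m.group(2))
        else:
            # Pass non-bullet lines (e.g. blank lines) through unchanged.
            out.append(line)
    return '\n'.join(out)
```

The result can then be pasted into a tab-aware outliner, recovering the hierarchy that a straight copy flattens into one item.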
Overall this is an intriguing idea for a tool, but the execution isn’t something I’d trust with important data, with the lack of interop being a killer lack-of-feature. The fact that the “markdown” isn’t actually markdown, either, isn’t helping. There’s really no reason for a text markup syntax like this to not just follow the CommonMark standard, even if you’re only going to support a subset. The fenced code block syntax is especially whack, as you either end up with blank lines at the top and bottom, or with something you can’t copy as-is to another program. Also, the editor seems to be applying syntax highlighting for some guessed language, as it didn’t understand shell script and indented it according to rules for some other language, fighting me all the way.
Last, but not least, I find the outlines really hard to read. This is especially visible on the “Writing Tips in Roam” page, where the vertical indent lines are too high contrast, making them distracting; the default font is too small, with no way to change it; the indentation width appears erratic when numbers are in use (because it’s actually based on indentation from the still-there-yet-invisible bullet points); and the little avatar heads (at irregular indents due to the aforementioned) are distracting and repetitive.
In short: I can’t effectively paste information into it, I can’t read it or edit it while it’s there, and I can’t effectively copy it back out. I don’t know what else I can do with it. ;-)
To be fair, these are problems one might expect with alpha software. But until they’re resolved I can’t see why I would do anything except play with it as a thought experiment in how useful and cool it might someday be if these issues were resolved. Certainly at minimum, it should be able to cleanly copy/paste to and from Dynalist and Workflowy, since it’s presented as an alternative to those tools. And if you have something that’s a “document”, you ought to be able to copy it as a markdown document and paste it into a markdown editor, so that you can take your writing and do something like putting it up on the web or making an ebook out of it.
So, if you have trouble reading tiny text, if weird alignments drive you nuts, or if you need to be able to use your writing outside the note tool itself, I wouldn’t recommend signing up for this thing right now. If you intend to use it as a standalone tool and the above-mentioned quirks wouldn’t bother you, then go for it.
In other words, in practical terms, you might be better off with a personal wiki, because even plain text copy and paste is more interoperable than this. A personal wiki’s backlinks don’t quite do what Roam’s do, but it might be a better choice if you’re livin’ la vida markdown as I do.
But hey, I’m sure the author(s) will fix some of these issues with time. After all, you know what they say...
Roam wasn’t built in a day. [ba dum tiss!]
No. It’s really complex, and nobody in-the-know had time to really spell it out like that.
Actually, you can spell out the argument very briefly. Most people, however, will immediately reject one or more of the premises due to cognitive biases that are hard to overcome.
A brief summary:
Any AI that’s at least as smart as a human and is capable of self-improving will improve itself, if doing so will help its goals
The preceding statement applies recursively: the newly-improved AI, if it can improve itself, and it expects that such improvement will help its goals, will continue to do so.
At minimum, this means any AI as smart as a human can be expected to become MUCH smarter than human beings—probably smarter than all of the smartest minds the entire human race has ever produced, combined, without even breaking a sweat.
INTERLUDE: This point, by the way, is where people’s intuition usually begins rebelling, either due to our brains’ excessive confidence in themselves, or because we’ve seen too many stories in which some indefinable “human” characteristic is still somehow superior to the cold, unfeeling, uncreative Machine… i.e., we don’t understand just how our intuition and creativity are actually cheap hacks to work around our relatively low processing power—dumb brute force is already “smarter” than human beings in any narrow domain (see Deep Blue, evolutionary algorithms for antenna design, Emily Howell, etc.), and a human-level AGI can reasonably be assumed capable of programming up narrow-domain brute forcers for any given narrow domain.
And it doesn’t even have to be that narrow or brute: it could build specialized Eurisko-like solvers, and manage them at least as intelligently as Lenat did to win the Traveller tournaments.
In short, human beings have a vastly inflated opinion of themselves, relative to AI. An AI only has to be as smart as a good human programmer (while running at a higher clock speed than a human) and have access to lots of raw computing resources, in order to be capable of out-thinking the best human beings.
And that’s only one possible way to get to ridiculously superhuman intelligence levels… and it doesn’t require superhuman insights for an AI to achieve, just human-level intelligence and lots of processing power.
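For concreteness, the compounding in the first two premises can be caricatured in a few lines of code. Every number here is invented; the only point is the loop structure, not a realistic growth model.

```python
# Toy caricature of recursive self-improvement: "improve yourself, then
# use the improved self to improve again" compounds multiplicatively.
# All numbers are made up; only the loop structure matters.

def foom(capability: float, gain_per_cycle: float, cycles: int) -> float:
    for _ in range(cycles):
        if gain_per_cycle > 1.0:  # improving would help its goals, so it does
            capability *= gain_per_cycle
    return capability

# Even a modest 10% gain per cycle, repeated at machine speeds,
# compounds to roughly a 13,780x capability increase after 100 cycles.
```

The moral: you don't need any single spectacular leap, just a repeatable gain and enough cycles.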
The people who reject the FAI argument are the people who, for whatever reason, can’t get themselves to believe that a machine can go from being as smart as a human, to massively smarter in a short amount of time, or who can’t accept the logical consequences of combining that idea with a few additional premises, like:
It’s hard to predict the behavior of something smarter than you
Actually, it’s hard to predict the behavior of something different than you: human beings do very badly at guessing what other people are thinking, intending, or are capable of doing, despite the fact that we’re incredibly similar to each other.
AIs, however, will be much smarter than humans, and therefore very “different”, even if they are otherwise exact replicas of humans (e.g. “ems”).
Greater intelligence can be translated into greater power to manipulate the physical world, through a variety of possible means. Manipulating humans to do your bidding, coming up with new technologies, or just being more efficient at resource exploitation… or something we haven’t thought of. (Note that pointing out weaknesses in individual pathways here doesn’t kill the argument: there is more than one pathway, so you’d need a general reason why more intelligence doesn’t ever equal more power. Humans seem like a counterexample to any such general reason, though.)
You can’t control what you can’t predict, and what you can’t control is potentially dangerous. If there’s something you can’t control, and it’s vastly more powerful than you, you’d better make sure it gives a damn about you. Ants get stepped on, because most of us don’t care very much about ants.
Note, by the way, that this means that indifference alone is deadly. An AI doesn’t have to want to kill us, it just has to be too busy thinking about something else to notice when it tramples us underfoot.
This is another inferential step that is dreadfully counterintuitive: it seems to our brains that of course an AI would notice, of course it would care… what’s more important than human beings, after all?
But that happens only because our brains are projecting themselves onto the AI—seeing the AI thought process as though it were a human. Yet, the AI only cares about what it’s programmed to care about, explicitly or implicitly. Humans, OTOH, care about a ton of individual different things (the LW “a thousand shards of desire” concept), which we like to think can be summarized in a few grand principles.
But being able to summarize the principles is not the same thing as making the individual cares (“shards”) be derivable from the general principle. That would be like saying that you could take Aristotle’s list of what great drama should be, and then throw it into a computer and have the computer write a bunch of plays that people would like!
To put it another way, the sort of principles we like to use to summarize our thousand shards are just placeholders and organizers for our mental categories—they are not the actual things we care about… and unless we put those actual things into an AI, we will end up with an alien superbeing that may inadvertently wipe out things we care about, while it’s busy trying to do whatever else we told it to do… as indifferently as we step on bugs when we’re busy with something more important to us.
So, to summarize: the arguments are not that complex. What’s complex is getting people past the part where their intuition reflexively rejects both the premises and the conclusions, and tells their logical brains to make up reasons to justify the rejection, post hoc, or to look for details to poke holes in, so that they can avoid looking at the overall thrust of the argument.
While my summation here of the anti-Foom position is somewhat unkindly phrased, I have to assume that it is the truth, because none of the anti-Foomers ever seem to actually address any of the pro-Foomer arguments or premises. AFAICT (and I am not associated with SIAI in any way, btw, I just wandered in here off the internet, and was around for the earliest Foom debates on OvercomingBias.com), the anti-Foom arguments always seem to consist of finding ways to never really look too closely at the pro-Foom arguments at all, and instead making up alternative arguments that can be dismissed or made fun of, or arguing that things shouldn’t be that way, and therefore the premises should be changed.
That was a pretty big convincer for me that the pro-Foom argument was worth looking more into, as the anti-Foom arguments seem to generally boil down to “la la la I can’t hear you”.
Of course, this means that there does need to be some contradictory information available which could be used to disprove the original schema. One might have a schema for which no disconfirmation is available because it is correct, or a schema which might or might not be correct but which is making things worse and cannot easily be disconfirmed.
This view is ignoring the distinction between denotation and connotation, or as I like to think of it, between prediction and evaluation. Our memories don’t just create factual prediction, they are also tagged with evaluations: meaning, feelings, etc.
So, it’s quite possible to reconsolidate different evaluations for the same factual predictions. For example:
UtEB mentions the example of a man, “Tómas”, who had a desire to be understood and validated by someone important in his life. Tómas remarked that a professional therapist who was being paid for his empathy could never fulfill that role. The update contradicting the schema that nobody in his life really understood him would have to come from someone actually in his life.
The evaluation Tómas is making is itself based in some other memory that can be reconsolidated, so that it is no longer required for somebody else to understand him. The experience of “feeling understood” is not something that actually comes from outside, it is something your brain generates according to learned rules. In this case, Tómas has learned that only certain specific people’s understanding counts or is meaningful… and this learning is just as subject to reconsolidation as anything else!
Another issue that may pop up with the erasure sequence is that there is another schema which predicts that, for whatever reason, running this transformation may produce adverse effects. In that case, one needs to address the objecting schema first, essentially carrying out the entire process on it before returning to the original steps. (This is similar to the phenomenon in e.g. Internal Family Systems, where objecting parts may show up and must have their concerns addressed before work on the original part can proceed.)
Yes, checking for objections is of critical importance, because if you don’t, the thing you think you fixed can come back in a few days or weeks. But this isn’t because there’s an agent that “objects”, it’s just that the thing you were working on is reinforced by another prediction/evaluation.
For example, let’s say that Joe is having trouble promoting himself or his work, because he’s learned never to brag and that bragging is bad. He learned this because his mother always punished him for bragging and said “Pride goeth before a fall”. We do some work and get rid of that immediate response, but don’t check for objections, so we miss the part where the implicit, unspoken part of the interaction was, “If I don’t punish you for bragging, you’ll grow up to be an obnoxious selfish person who nobody will like”.
So, because of that, we’ve removed Joe’s semi-explicit belief that bragging is prideful and will lead to a disastrous “fall”, but not his more-implicit belief that he needs to punish himself for bragging. In the high of having changed the first belief, Joe will go out and start promoting himself, but feeling weirdly bad about it, until he stops again.
IOW, the “objecting” schema isn’t really objecting per se. The schema is rather reinforcing the previous schema, with a need or desire to punish himself for violating it, leading to a return of the old behavior and extinguishing the new behavior we tried to establish.
These reinforcing schemas don’t always show up with an obvious objection at the time you’re making a change, and people who are eager to get the change done will often report over-optimistic predictions when they’re doing the reconsolidation part. Sometimes, the “objection” is nothing more than a vague feeling that the new scenario being projected isn’t realistic in some way, or “isn’t quite right”. When that is the case, I always dig deeper immediately to uncover what other predictions are being made.
(Of course, for this specific pattern of “if I don’t punish X in way Y, I/they will become bad type of person Z”, I have a standard format for finding it even before getting to the reconsolidation part, as it’s super-common in issues of self-sabotage, procrastination, perfectionism, etc.)
There are no generic solutions to bridging the gap between G and G*, but the body of knowledge of the Theory of Constraints is a very good starting point for formulating better measures for corporations.
A good example from my own history of doing this is when I worked for an ISP and persuaded them to eliminate “cases closed” as a performance measurement for customer service and tech support people, because it was causing email-based cases to be closed without any actual investigation. People would email back and create a new case, and then a rep would get credit for closing that one without investigation either.
The replacement metric, which I derived via the Theory of Constraints and which was inspired by Goldratt’s “throughput-dollar-days” measurement, was “customer-satisfaction-waiting-hours”—a measurement of collective work-in-progress inventory at the team level, and of priority at the ticket level.
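As a sketch of how such a metric works (the function names and the exact accumulation rule here are my reconstruction for illustration, not the production system):

```python
# Hypothetical sketch of "customer-satisfaction-waiting-hours", modeled
# on Goldratt's throughput-dollar-days. Each open ticket accumulates the
# hours the customer has been waiting; a ticket's priority is its own
# total, and the team-level WIP inventory is the sum over open tickets.

from datetime import datetime

def waiting_hours(opened_at: datetime, now: datetime) -> float:
    """Hours a single ticket has been waiting so far (its priority)."""
    return (now - opened_at).total_seconds() / 3600.0

def team_waiting_hours(open_tickets: list[datetime], now: datetime) -> float:
    """Collective work-in-progress inventory for the whole team."""
    return sum(waiting_hours(opened, now) for opened in open_tickets)
```

Note that in this simplified model a reopened ticket keeps its original opened-at time, so it re-enters the queue with all its accumulated hours intact rather than starting over at zero.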
I also made it impossible to truly “close” a case—you could say, “I think this is done”, but the customer could still email into it and it would jump right back to its old place in the queue, due to the accumulated “satisfaction waiting hours” on the ticket.
Of course, the toughest part in some ways was educating new service managers that, no, you can’t have a measurement of cases closed on a per-rep basis. Instead, you’re going to have to actually pay attention to a rep’s work in order to know if they’re doing the job. (Of course, the system I developed also had ways to make it easy to see what people are working on, not only at the managerial but the team level—peer pressure is a useful co-ordination tool, if done right.)
I have no idea how well the system has fared since I left the company, since it’s entirely possible they found programmers since then to give them new metrics that would f**k it up, although I did design the database in such a way as to make it as close to impossible as I could manage. ;-)
Anyway, the Theory of Constraints positively rocks for business performance optimization, and its Thinking Processes are generally useful tools for any rationalist. They were also a big inspiration for me developing other thinking processes and ultimately mindhacking techniques, in that they showed that it’s possible to think systematically even about some of the vaguest and most ill-defined problems imaginable, rigorously home in on key leverage points, resolve conflicts between goals, and generally overcome our brains’ processing limitations for analysis and planning.
[Edit to add: the Wikipedia page on thinking processes doesn’t really show why a rationalist would be interested in the processes; it’s useful to know that a key element of the processes is something called the “categories of legitimate reservation”, which have to do with logical proof and well-formedness of argument. They are a key part of constructing and critiquing the semantic maps that are created by the thinking processes.
For example, ToC’s conflict resolution method effectively maps out certain implicit assumptions in a conflict, and then invites you to logically disprove these assumptions in order to break the conflict. (That is, if you can find a circumstance where one of those assumptions is false, then the conflict will no longer exist under that circumstance—and you have a potential way out of your dilemma.)
So, in short, ToC thinking processes are mostly about constructing past, present, or future semantic maps of a situation, and applying systematic logic to validating (or invalidating) the maps’ well-formedness, as a way of solving problems, creating plans, etc. Very core rationalist stuff, from an instrumental-rationality POV.]
a problem with cramming an attempt at powerful introspection into expensive 1-hour blocks.
The hard part of implementing this isn’t the reconsolidation part. That part is like 10-20 minutes, especially with practice. The hard part is identifying the things that need to be reconsolidated in the first place, and because that is basically a debugging process, it can take a fair amount more time.
I’ve seen a lot of “one theory fits all” therapeutic methods in the past (like Method Of Levels), but in practice for the type of work I do with people, none of them are very good at quickly identifying things because they’re too good at describing everything, and the brain isn’t just one thing.
So now I work with what I call “search grids”—common patterns of bugs that I can go through and check, is it this? is it that? is it more like X or Y?—and it saves boatloads of time.
(To be fair, though, I work with only a select audience with selected problems, so it’s quite possible that the applicability of my grids isn’t that good outside that audience, even though I think I’ve a fair shot at an evolutionary justification for drawing the lines on my grids where they are.)
Still, I can’t even imagine trying to really solve somebody’s problems in an hour a week, unless they were already trained and had been coached through the methods once or thrice. And if they’re at that point, I’d work with them via email anyway. The only part of these processes that actually requires real-time interaction is getting people over what I call their “meta-issue”—the schema they have that gets in the way of being able to reflect on their issues.
For example, I’ve had clients who had what you might call a “be a good student” schema that keeps them from accurately reporting their emotions, responses, or progress in applying a reconsolidation technique. Others who would deflect and deny ever having any negative experiences or even any problems, despite having just asked me for help with same. These kinds of meta-issues are the hardest and most time-consuming part of getting someone ready to change.
Ten or twelve years ago, when I thought I’d unlocked the secrets of the universe and that memory reconsolidation was going to change everything and everyone, I didn’t yet realize that hard part 1 (needing to identify the things to change) and hard part 2 (needing to get past meta-issues) meant that it is impossible to mass-produce change techniques.
That is, you can’t write a single document, record a single video, etc. that will convey to all its consumers what they need in order to actually implement effective change.
I don’t mean that you can’t successfully communicate the ideas or the steps. I just mean that implementing those steps is not a simple matter of following procedure, because of the aforementioned Hard Parts. It’s like expecting someone to learn to bike, drive, or debug programs from a manual.
This doesn’t mean it’s impossible for someone to teach themselves from such material, but it’s not trivial, and my dreams of revolutionizing personal development by mass-producing books or workshops died an ignoble death almost a decade ago. (Incidentally, this limitation is also why for almost any given school of therapy, you will find that people who advertise doing therapy in that school aren’t always capable of doing more than giving it lip service or cargo-cultery.)
It’s a bit like the Interdict of Merlin in HPMOR: successful techniques can only be passed from one living mind to another, or independently discovered. You can write down your notes and share the story of your discovery, and then people either discover it again for themselves, learn it from interacting with someone who knows, or go through the motions and cargo-cult it.
RMI. Now that would be a fascinating follow up post!
The irony is that RMI is absolutely the simplest, most natural thing in the world, and it’s utterly fucking insane that it needs a three-letter acronym at all.
In fact, I only gave it a name in order to be able to tell people that they’re doing it wrong.
Or more precisely, that they’re not doing it at all. Until I recently got to the improved metaphor of “mental muscles”, I didn’t know how to say, “you’re using the analysis muscle, you need to use the curiosity muscle instead”. So I coined RMI—relaxed mental inquiry—as a name for the state of mind of genuine curiosity.
You know, that same kind of genuine curiosity that Eliezer likes to rant about, where you need to genuinely not know the answer, and instead sincerely ask the question.
Except that Eliezer would also have more luck at teaching it if he gave it a funny technical name, too. You call it “curiosity”, and everybody thinks they already know what it means.
And then they don’t learn.
To learn, you have to be ignorant. To discover something new, you have to be surprised.
I could continue going on in pseudo-Zen about it, but the point is that knowing things doesn’t help you change, only doing things does. And you have to be able to “do” curiosity in order to get your brain to go “near”.
The bare minimum requirement for any form of mindhacking is to be able to attend to the present moment. With most gurus and coaching (and even therapy), this usually happens when the teacher asks a question and the student has to think about it.
RMI is my attempt to teach people to be both the teacher asking the question, and the student answering it… without becoming a show-off student or a hectoring teacher.
Heck, often people don’t manage it with a teacher asking them things, if they’re too busy confabulating. But at least if they’re in front of a teacher, the teacher can stop them, and re-ask the question.
One of the things that I’ve noticed about this is that most people do not expect to understand things. For most people, the universe is a mysterious place filled with random events beyond their ability to comprehend or control. Think “guessing the teacher’s password”, but not just in school or knowledge, but about everything.
Such people have no problem with the idea of magic, because everything is magic to them, even science.
An anecdote: once, when I still worked as a software developer/department manager in a corporation, my boss was congratulating me on a million-dollar project (revenue, not cost) that my team had just turned in precisely on time with no crises.
Well, not congratulating me, exactly. He was saying, “wow, that turned out really well”, and I felt oddly uncomfortable. After getting off the phone, I realized a day or so later that he was talking about it like it was luck, like, “wow, what nice weather we had.”
So I called him back and had a little chat about it. The idea that the project had succeeded because I designed it that way had not occurred to him, and the idea that I had done it by the way I negotiated the requirements in the first place—as opposed to heroic efforts during the project—was quite an eye opener for him.
Fortunately, he (and his boss) were “clicky” enough in other areas (i.e., they didn’t believe computers were magic, for example) that I was able to make the math of what I was doing click for them at that “teachable moment”.
Unfortunately, most people, in most areas of their lives, treat everything as magic. They’re not used to being able to understand or control anything but the simplest of things, so it doesn’t occur to them to even try. Instead, they just go along with whatever everybody else is thinking or doing.
For such (most) people, reality is social, rather than something you understand/control.
(Side note: I find myself often trying to find a way to express grasp/control as a pair, because really the two are the same. If you really grasp something, you should be able to control it, at least in principle.)