I think it’s somewhat a matter of personal taste, but like you I’ve found such attempts to quantify my life dissatisfying, although I know others who get a lot out of such attempts. I generally fall in the direction of not bothering to measure hard-to-measure things if I don’t have to, and when I’m reluctantly forced to do it I try to use very gross measurements to match the poor precision possible in such cases. Having the precision of the measurement match the level of precision you can achieve helps avoid getting confused by the numbers and thinking you have more information than you do.
This is giving me a new appreciation for why multiagent/subagent models of mind are so appealing. I used to think of them as people reifying many tiny phenomena into gross models that are easier to work with because the mental complexity of models with all the fine details is too hard to work with, and while I still think that’s true, this gives me a deeper appreciation for what’s going on, because it seems it’s not just that it’s a convenient model to abstract away from the details, but that the structure of our brains is set up in such a way that makes subagents feel natural so long as you model only so far down. I only ever really had a subjective experience of having two parts in my mind before blowing it all apart, so this gives an interesting look to me of how it is that people can feel like they are made up of many parts. Thanks!
Oh, here’s another one: Lisp Machines. These were computers with alternative chip designs focused on executing Lisp (or really any functional programming language) rather than on executing procedural code. Had the direction been pursued further, they might have resulted in dramatically different computer architectures than what we use today. Some were built and used, but only in very limited contexts, so I’d say this meets the criteria of “never saw the light of day” in that less than 10k Lisp machines were ever built.
One that comes to my mind is OpenDoc, a cool and exciting proposal for a way to make editable generic computer documents that were not application-constrained. The idea was to make documents a cross-platform, operating-system-level responsibility, and what we today think of as applications would instead be embedded viewers/editors that could be used when putting different types of “objects” in documents.
We did eventually get something like it: Google Docs, Word, and even web pages generally have the ability to embed all kinds of different other documents, and sometimes there is viewing/editing support within the document (you can see images, embed editable spreadsheets, embed editable diagrams, etc.), but with more vendor lock-in and missing the spirit of vendor openness that OpenDoc intended.
Well, I guess I sort of do this, although not with email and only for certain very narrow purposes: I use a few tools (calendars, emails to myself that describe an action item in the subject, a notes app that displays on my phone’s home screen) to make sure I don’t forget things I want to do something with later. This is basically just part of my personalization of the Getting Things Done method, although what I do these days doesn’t look much like GTD as described in the books, but it’s carried out in the same spirit: make a decision about what to do now, and then either do it now, put it in a system you trust so you are sure to do it later, or drop it.
Certainly an email to myself every day could achieve the same thing, since I could edit it every day to contain the current state of the information I want to store in these systems, although I imagine more could be done with such a generalized mechanism that I can’t do with my narrow way of using the tools I currently use.
I do disagree with C (compelling only from a certain stage of development) in that I think even once you have much deeper understanding, the higher levels of abstraction remain crucially important. Just because you understand electromagnetism really well and know the limits of conventional circuit theory (e.g. designing super duper tiny transistors), doesn’t mean you want to throw out circuit theory and just solve Maxwell’s equations everywhere—even if eventually sometimes you have to.
So maybe it would help if I was a little more specific about this point. When I say “compelling” here I mean to point to something like both intellectually interesting and useful because it feels new and like it’s engaging with the edge of development. Stuff like this becomes uncompelling as one gains mastery, so I think I was trying to pass on the wisdom of my accumulated experience in this area from building, learning, using, and presenting models like this one and then, upon reconsidering, finding them limiting but having been useful at one point because I didn’t have access to any deeper details to help me along.
My objective in pointing this out is tied in with the next bit, so we’ll just go ahead and segue to that.
To be honest, I did bristle at some of the way things were phrased, but that’s on me. It felt like there was some kind of implication that I personally didn’t have any deeper understanding, and that felt bad.
To be honest, there is an implication like that, based on what I’ve read here. I could maybe believe you intentionally didn’t address some of the deeper points you might understand about the details that I think are relevant, but if that were the case I would expect your footnotes and asides to address topics more about beliefs, preferences, and especially perception and less about those things munged together and rounded off to “motivation”. Instead I read this as your honest best effort to explain what’s going on with motivation, and I’m telling you I think there’s much more going on in directions much more fine-grained than those you seem to have explored, even in the references.
“Motivation” and “intention” are huge, confounded concepts that I believe can be broken apart; thinking of yourself as having a “motivation system” is another confusion, but unfortunately I’ve not worked out all the details well enough for myself that I’m happy to share my current state of partial knowledge in this area. Unfair, I admit, but it’s where I stand. All I can point to is that there’s a bunch of stuff going on that can be reified into the concept of “motivation”, and working with motivation as a concept will be helpful for a while, but ultimately “motivation” doesn’t cut reality at the joints, so thinking in those terms has to be largely abandoned to go further.
Should I have publicly passed judgement on you in the comments section? Probably not, but for some reason I already did so we’ll just have to deal with it now. Sorry about that.
My goal here is to be encouraging, however it might come across, and to make clear there is a way forward. As I said to another person recently when I responded in a similar way to something they said, I’ve been realizing a lot recently the ways in which I limited myself by thinking I understood things. I see in this work clues that you have an understanding similar to how I thought about motivation maybe 3 years ago, and maybe I would already have a ready-at-hand alternative if I hadn’t spent so much time thinking I had it right. So I want you to explain what you’ve figured out, I think your way of explaining what you have is going to be useful for others, I don’t want to say anything that might put you off either of those goals, and I also want to push you along so you don’t suffer the worst of all calamities: thinking you understand something!
I also think D (unlikely to help many people) is somewhat false, depending on what counts as “many people”. Another commenter felt this post was quite useful, someone else on FB found it rather revelatory, and I’d infer from those who I know of that several more benefited even if I don’t know of it directly. That’s beyond the inside view that the abstraction/model presented can already be applied. mr-hire also states simpler ideas worked well for a really long time (though I’m not sure which simpler ideas or what counts as “brute force”).
Sure, I guess I was hoping to set expectations appropriately, since I know I’ve been let down many times broaching these topics with folks. Yes, there will always be some people who you manage to connect with in part because of what you write and in part because of where they are, i.e. they are ready to listen to what you have to say and have it click. They are the cherished folks with little enough dust in their eyes that you write for. But for every person you help, there are probably 20 more who will read this and for one reason or another it won’t connect the way you’d hope it would. They might not hate it, and might say they get it, but then they’ll just keep on doing what they were doing, not changing anything really, not really having gained any understanding. I was demoralized a lot by this, thinking it must have been me, until I figured out the base rate of success for this kind of thing is pretty low unless you’re tackling stuff way down at the bottom of the developmental ladder. I suspect, based on the quality of your explanation, that this post will perform better than average, but that to me probably means something like connecting with 7% of the people who read it instead of 5%.
If you don’t know that going in, and depending on what your expectations are, that can be pretty brutal when you realize it (especially if, unlike it sounds like for you, you focus more on the people it doesn’t work for than the people it does), and I feel like you did well enough on this post that you might do more, and you deserve to know this in case it will affect your self-esteem and your likelihood of writing more things like this. Again, this is in the category of “things I wish someone had told me 5 years ago because then I wouldn’t have had to figure it out the hard way for myself”.
No, because there’s generally not an option for that via insurance, since doing that would effectively be bribery under the way payment is handled. I haven’t tried private practices that don’t take insurance.
I have to say I do really wish there were some kind of reliable, N=1 medical service out there for when something is wrong and it’s not easy to diagnose, let alone solve. I have a lot of personal experience in this area on the patient side, where a person close to me was (and still is!) suffering from some kind of medical problem and they keep getting bounced around because whatever is wrong is rare enough that it doesn’t show up on anyone’s flowchart. The experience is incredibly frustrating, because I can see that there’s something pretty specific wrong, but every time I or the patient talked to a doctor we’d go through the diagnostic process and, at best, end with “yep, idk what’s wrong, let’s just treat some symptoms then”. I’d think that we’d be able to do better than this, but in the end most doctors just seem to throw up their hands and say “well, too hard for me, good luck”. I get why it happens: it’s a lot of work, they’re not going to get paid extra for doing it, and no one is going to sue them as long as they made a best effort. But it doesn’t make it any less frustrating, or any less interesting (to me) a problem to try to solve, both for N=1 and for all the N=1s.
And if your abstractions are tight (not leaky) enough, you actually don’t need to understand the underlying complexity for them to be useful.
This sounds like the crux of the disagreement: I think no abstraction is sufficiently non-leaky that you don’t (eventually) need to understand more of the underlying complexity within the context I see this post sitting in, which is the context of what we might call cognitive, personal, or psychological development (or to put it in non-standard terms, the skill of being human). Unless your purpose is only to unlock a little of what you can potentially do as a human and not all of it, every abstraction is eventually a hindrance to progress, even if it is a skillful hindrance during certain phases along the path that helps you progress until it doesn’t.
For what it’s worth, I also suspect the biggest hurdle we have to overcome to make progress on being better at being humans is gaining enough cognitive capacity to handle more complex, multi-layered abstractions at once, i.e. to see both the machine and the gears at the same time. Put another way, it’s gaining the ability to not simply abstract “away” details but to see the details and the abstraction all at once, and then do this again and again with more layers of abstractions and more fine-grained details.
Oh, thanks! From the post and the comments I thought such a feature didn’t exist on purpose!
I appreciate what you’re doing here trying to protect us, but I’d also really like a way to get the data more frequently. I understand there’s probably a lot of reasons you want to make this hard, but if it’s easy to tweak per user it’d be nice if I could do something like send a support request to get my frequency cranked up to once every 5 minutes or something reasonably real-time that doesn’t put a bunch of strain on the system.
Basically, I know I can trust myself with this and would like it, understand why you would want to make it very hard for almost everyone to get access to it, and so just want to put out a feeler to see if super-hidden options are a possibility, even if it means I have to add the code myself and get you to flip the config for my user in the database.
I think this is the best, most clearly and accurately written explanation of this insight to appear within the rationalsphere so far. Most of us, myself definitely included, have focused our explanations largely on narrow ways to approach this point without doing justice to the breadth of it, and I’m not really sure why we’ve all done that, though my guess is we focus too much on our own entry points, and possibly you’ve done the same but your way into the insight happened to be one that naturally admits a general explanation. Either way, kudos.
That said, this wouldn’t be a very LessWrongy comment if I didn’t have a few, possibly antithetical, things to say about it.
First, I agree that you get the model right, but it’s a model that is only very compelling from a certain stage of development, my strongest evidence being that it was once very compelling to me and now it’s more like the kind of understanding I would have if I was asked to manifest my understanding without explaining below a certain level of detail, and the other being that I think I’ve seen a similar pattern of discovering this and then focusing on other things in the writing of others. That doesn’t make any of it wrong or not useful, but it does suggest it’s rather limited, as I think fellow commenter Romeo also points out. That is, what’s going on here is much deeper than it appears to you, and if you keep pushing to explain the opaque parts of this model (like, “where do the beliefs that power motivations come from?” and “why do you even prefer one thing to another?“) you’ll see it explode apart in a way that will make you go “oh, I had it right, but I didn’t really understand it before” the same way you might think you understand how any complex system like a watch or a computer program works until you start literally looking at the gears or electrical currents and then say “oh, I’m amazed I even had such good understanding before given how little I really understood”.
I say this not because I want to show off how great I am, even if it seems that way, but because I think you’re on the path and want to make it absolutely clear to you that you made progress and that there’s much, much deeper to go, whether you pursue that now or later. I say this too because I wish someone had said it to me sooner, as I might have wasted less time being complacent.
Second, just to set expectations, it’s unfortunately unlikely that having this model will actually help many people. Yes, it will definitely help some who are ready to see it, but years of trying to explain my insights has taught me that one of the great frustrations is that fundamental insights come in a particular order, they build on each other, and the deeper you go the smaller the audience of people that explaining your insights will help. This doesn’t mean we shouldn’t do it, as I think anyone who figures these things out can attest, because we’ve all had both the experience of reading or hearing something of someone else’s insight that helped us along and of figuring something out and then helping others see it through our explanations, but it also means we’re going to spend a lot of time writing things that people just won’t be ready to appreciate yet when they read it. Again, this is a pattern it took me a long while to accept, and once I understood what was going on I overcame much of my previous feelings that I was misunderstanding things despite clear evidence to the contrary, because when I tried to explain my understanding it often was met with confusion, misunderstanding, or hostility (my Hegelian writing style notwithstanding).
I very much look forward to seeing part 2, and hope it ends up helping many people towards gaining better understanding of how motivations work!
You know, at first when I saw this post I was like “ugh, right, lots of people make gross mistakes in this area” but then didn’t think much of it, but by coincidence today I was prompted to read something I wrote a while ago, and it seems relevant to this topic. Here’s a quote from the article that was on a somewhat different topic (hermeneutics):
One methodology I’ve found especially helpful has been what I, for a long time, thought of as literary criticism but for interpreting what people said as evidence about what they knew about reality. I first started doing this when reading self-help books. Many books in that genre contain plainly incorrect reasoning based on outdated psychology that has either been disproved or replaced by better models (cf. Jeffers, Branden, Carnegie, and even Covey). Despite this, self-help still helps people. To pick on Jeffers, she goes in hard for daily self-affirmation, but even ignoring concerns with this line of research raised by the replication crisis, evidence suggests it’s unlikely to help much toward her instrumental goal of habit formation. Yet she makes this error in the service of giving half of the best advice I know: feel the fear and do it anyway. The thesis that she is wrong because her methods are flawed contradicts the antithesis that she is right because her advice helps people, so the synthesis must lie in some perspective that permits her both to be wrong about the how and right about the what simultaneously.
My approach was to read her and other self-help more from the perspective of the author and the expected world-view of their readers than from my own. This led me to realize that, lacking better information about how the human mind works but wanting to give reasons for the useful patterns they had found, self-help authors often engage in rationalization to fit current science to their conclusions. This doesn’t make their conclusions wrong, but it does hide their true reasoning, which is often based more on capta than data and thus phenomenological rather than strictly scientific reasoning. But given that they and we live in an age of scientism, we demand scientific reasons of our thinkers, even if they are poorly founded and later turn out to be wrong, or else reject their conclusions for lack of evidence. Thus the contradiction is sublimated by understanding the fuller context of the writing.
In the case of humans, it seems self-evident that suffering is a consciously experienced, mental or psychological phenomenon. This makes it difficult to quantify, given our lack of access to other beings’ qualia.
I think we can actually say something about minds in general here. Suffering is tied to how we relate to certain qualia, that is, suffering is qualia about qualia, and to put a fine point on it I’d say suffering is a kind of confusion (an incorrect prediction, in the predictive processing model) we experience as aversive (negative feedback). This suggests that anything we think shows signs of capacity to perceive its own experience is likely to be capable of suffering, though whether or not it actually suffers is harder to suss out, because humans can, through extensive training, learn to not suffer under conditions that would normally cause suffering by changing how they relate to pain.
I realize this doesn’t really address the broader point of your post, but since you’re thinking about these topics I thought you might find a more precise explanation of suffering of interest, since it took me a while to pick this apart myself.
I posted this as a question because, although I have my own thoughts on this, I’m not very confident in them because I’m too deep inside LW to be able to see this well, so I want to find out what others think about this. That said, it’d be dishonest if I didn’t at least give what I think is the answer, since if nothing else it motivated me to ask the question!
Here’s the evidence that jumps to mind when I think about this:
certain authors seem to get disproportionately high post scores relative to what I perceive to be the value of what they’re saying
those authors tend to be authors who I perceive to have high status on LW, often drawn from several sources, such as in person connections, other work they are doing, and previous posts they have published
yes, i’m somewhat among this set of people, although it’s (thankfully?) often offset by my being so weird that a few people downvote me
contra: maybe these people always have something valuable to the LW community to say
those authors sometimes write posts they say they think are low quality within the post, yet they still receive higher post scores than average
contra: maybe they misassess the quality of their posts
contra: maybe they say that for other reasons, like to feel like they’ve protected themselves from potential rejection if the post is poorly received
new authors sometimes write very insightful things that are largely ignored (receive very few votes and have low post scores)
most often i notice this when they fail to conform in some way to LW style expectations
contra: maybe writing style is a secretly cherished LW value that isn’t held up as explicitly as others like insightfulness and accuracy
contra: new people join all the time and some of them gain high status
None of this is perfectly explained by LW being a “classic style intellectual world”, but it seems quite suggestive to me that it has tended in that direction, arguably right from the beginning. Maybe the answer will offer other explanations of this evidence that fit it better, or offer other evidence to suggest I have a skewed view of what LW is like (that’s why I say I’m too deep inside LW to see it clearly).
A few years ago I wrote a short series of posts on my old blog about what I had learned in this direction. Glancing over it I don’t think it’s 100% what you’re looking for, but might point you in some useful, interesting directions. The posts, in order:
I wrote these when I was at a different stage of cognitive development than I’m in now, so they don’t totally match the way I would address these topics today, but hopefully they will be of some use nonetheless.
Nice. I think one of the largest challenges I see people facing when it comes to motivation is that they don’t have access to the ability to manipulate their motivation, in part because they don’t have (or have but don’t believe in) models that suggest ways in which motivation can be manipulated. This doesn’t mean having those models will make working with motivation easy, but it does at least make it possible so that you don’t flail around going “I have no idea why I’m doing what I’m doing” (and not because you deeply “don’t know”, but because you just can’t even look to see enough of what’s going on to really “not know” in any meaningful way other than simple unawareness).
I’d say the deep insight here is seeing that both what you do and what makes you do what you do is not part of the self, and in being not part of the self (i.e. not a thing you’re identified with) you are unattached to it and free to work with it as needed. Easier said than done, though, as always.
AI in most cases doesn’t need emotions, but if needed, it could be perfect at simulating a smile of hate; nothing difficult in that.
I agree with you that this post seems to be suffering from some kind of confusion because it doesn’t clearly distinguish emotions from the qualia of emotions (or, for anyone allergic to talk of “qualia”, maybe we could just say “experience of emotions” or “thought about emotions”).
I do, however, suspect any sufficiently complex mind may benefit from having something like emotions, assuming it runs in finite hardware, because one of the functions of emotions seems to be to get the mind to operate in a different “mode” than what it otherwise does.
Consider the case of anger in humans as an example. Let’s ignore the qualia of anger for now and just focus on the anger itself. What does it do? I’d summarize the effect as putting the brain in “angry mode” so that whatever comes up we respond in ways that are better optimized for protecting what we value through aggression. Anger, then, conceptualized as emotion, is a slightly altered way of being that puts the brain in a state that is better suited to this purpose of “protect with aggression” than what it normally does.
This is necessary because the brain is only so big, and can only be so many ways at once, thus it seems necessary to have a shortcut to putting the brain in a state that favors a particular set of behaviors to make sure those happen.
Thus if we build an AI and it is operating near the limits of its capabilities, then it would benefit from something emotion-like (although we can debate all day whether or not to call it emotion), i.e. a system for putting itself in altered states of cognition that better serve some tasks than others, thus trading off temporarily better performance in one domain/context for worse performance in others. I’m happy to call such a thing emotion, although whether the qualia of noticing such “emotion” from the inside would resemble the human experience of emotion is hard to know at this time.
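To make the "modes as capacity-saving shortcuts" idea concrete, here is a minimal toy sketch of my own (all names and weight values are invented for illustration, not from any real system): an agent with a fixed set of candidate actions uses an emotion-like mode to re-weight its priorities, so the same inputs yield different behavior depending on the mode it is in.

```python
# Toy illustration of emotion-like "modes": each mode re-weights action
# priorities, trading better performance in one domain for worse in others.
# Mode names and weights are hypothetical, chosen only to show the mechanism.

MODES = {
    "neutral": {"explore": 1.0, "defend": 1.0, "flee": 1.0},
    "anger":   {"explore": 0.2, "defend": 3.0, "flee": 0.5},  # favors "protect with aggression"
    "fear":    {"explore": 0.1, "defend": 0.5, "flee": 3.0},  # favors escape
}

def choose_action(base_scores, mode):
    """Pick the action with the highest mode-weighted score."""
    weights = MODES[mode]
    return max(base_scores, key=lambda a: base_scores[a] * weights[a])

# Same situation (same base scores), different mode, different behavior:
base = {"explore": 1.0, "defend": 0.9, "flee": 0.8}
assert choose_action(base, "neutral") == "explore"
assert choose_action(base, "anger") == "defend"
assert choose_action(base, "fear") == "flee"
```

The point of the sketch is only that a single scalar switch (the mode) cheaply repurposes a fixed-capacity decision system, which is the trade-off described above.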
Yeah, I do sometimes make an inside/outside distinction as a metaphor for talking about the subject/object distinction, because things that are object can in a certain sense be said to be outside the self and thus available for manipulation and consideration by the self, while those things that are subject are inside and cannot as easily be manipulated and seen, just as it’s easier for me to see and manipulate the cup on my desk than to see and manipulate the stomach inside my body. Most progress with insight meditation consists of gradually (or suddenly!) moving what was subject/inside to object/outside, and a way to do that is by engaging with it in this way through a deliberative introspective process as part of meditation.
Do you mean here that as you progress, you will introspect on the nature of your previous introspections, rather than more ‘object-level’ thoughts and feelings?
Yes, and also more broadly that what was once skillful inspection of, say, observable behavior, can later become unskillful excess attention on behavior when you should now be paying more attention to the precursors of behavior because those are more readily accessible to you.
So one of the distinctions folks make within meditation (a special kind of introspection, we might say) to help people focus on the right things for insight is to distinguish between content and structure/process. That is, you’re often given the instruction to just be with whatever comes up, but this doesn’t mean to spend your time being trapped in it; instead the idea is to see something about what’s going on that’s elucidated by what came up.
To make this more concrete, say you sit down to meditate and you start thinking about how you felt awkward about something that you said to your friend earlier. Maybe that’s not the object of concentration, so maybe you should just drop it, but if you stay with it that could be fine too, especially if it’s intrusive and no matter how hard you try to focus on your breath or whatever you keep drifting back to thoughts of the awkward conversation. Whether or not what you are doing is skillful meditation will depend on whether you spend your time focused on the content of thought (e.g. “oh no why did I say that, I bet they hate me now, how was I so dumb, I always do this, what if I did something different instead, if only I had a different childhood I would be different, …“) or whether you spend your time noticing what’s going on (e.g. “okay, I feel awkward and keep thinking about it, what does it feel like to feel awkward? where do i feel it in my body? okay, now why do I relate to it the way I do? what comes up when I ask “why is this awkward”? am I sure this is true? …“). This seems to be something like the rumination/introspection distinction you mention.
Now of course today’s structure/process is tomorrow’s content, so this distinction is a relative one to how you relate to your own thoughts. Further, introspection/meditation is a place where it’s important to be epistemically humble because our brains often don’t know themselves very well and may feed us confused misinformation that we will later see as confused but right now can’t separate out from reality, so the conclusions we can draw from introspectively accumulated evidence are necessarily weaker due to the lower confidence we should have in the capta. Thus it makes sense to be cautious about our claims based on introspection, even as they are often extremely helpful for some purposes, like understanding ourselves better and becoming less confused about our relationship to the world.