It seems like one of the most useful features of having agreement separate from karma is that it lets you vote up the joke and vote down the meaning :)
Thanks for clarifying! And for the excellent post :)
Finally, when steam flows out to the world, and the task passes out of our attention, the consequences (the things we were trying to achieve) become background assumptions.
To the extent that Steam-in-use is a kind of useful certainty about the future, I’d expect “background assumptions” to become an important primitive that interacts in this arena as well, given that it’s a useful certainty about the present. I realize that’s possibly already implicit in your writing when you say figure/ground.
I think some equivalent of Steam pops out as an important concept in enabling-agency-via-determinism (or requiredism, as Eliezer calls it), when you have in your universe both:
iron causal laws coming from deterministic physics and
almost iron “telic laws” coming from regulation by intelligent agents with something to protect.
The latter is something that can also become a very solid (full of Steam) thing to lean on for your choice-making, and that’s an especially useful model to apply to your selves across time or to a community trying to self-organize. It seems very neglected, formally speaking. Economically-minded thinking tends to somewhat respect it as a static assumption, but not so much the dynamics of formation AFAIK (and so dynamic Steam is a pretty good metaphor).

However, shouldn’t “things that have faded into the background” be the other kind of trivial, i.e. have “maximal Steam” rather than “no Steam”? It’s like an action that will definitely take place. Something that will be in full force. Trivially common knowledge. You yourself seem to point at it with “Something with a ton of steam feels inevitable”, but I suppose that’s more like the converse.
(EDIT: Or at least something like that. If a post on the forum has become internalized by the community, a new comment on it won’t get a lot of engagement, which fits with “losing steam” after it becomes “solid”. But even if we want to distinguish where the action currently is, it makes sense to have a separate notion of what’s finished and can easily re-enter attention, as opposed to what was never started.)

Also, when you say “no steam to spend time thinking” in your sunk costs example, I’d say a better interpretation than “time thinking” would be “not enough self-trust to re-pledge solidity in a new direction”. Time to think sounds to me more like Slack, but maybe I’m confused.
I’m unsure whether open sets (or whatever generalization) are a good formal underpinning of what we call concepts, but I agree that there is at least a need to carefully reconsider the intuitions one takes for granted when working with a concept, once one is actually working with a negation-of-concept. And “believing in” might be one of those things that you can’t really do with negation-of-concepts.
Also, I think there’s a typo: you said “logical complement”, but I imagine you meant “set-theoretic complement”. (This seems important to point out, since in the topological semantics for intuitionistic logic, the “logical complement” is in fact defined to be the interior of the set-theoretic complement, which guarantees an open set.)
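To make the distinction concrete (this is the standard textbook definition, not something from the post): interpreting propositions as open subsets of a topological space X, intuitionistic negation is defined as

```latex
\neg A \;:=\; \operatorname{int}\bigl(X \setminus A\bigr)
```

so ¬A is guaranteed open, whereas the bare set-theoretic complement X ∖ A generally is not. One consequence: A ⊆ ¬¬A can be a strict inclusion, which is exactly where double-negation elimination fails.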
I began reading this charitably (unaware of whatever inside baseball is potentially going on, and seems to be alluded to), but to be honest I struggled after “X” seemed to really want someone (Eliezer) to admit they’re “not smart”? I’m not sure why that would be relevant. I think I found these lines especially confusing, if you want to explain:
“I just hope that people can generalize from “alignment is hard” to “generalized AI capabilities are hard”.”

Is capability supposed to be hard for reasons similar to alignment? Can you expand/link? The only argument I can think of relating the two (which I think is a bad one) is “machines will have to solve their own alignment problem to become capable.”
“Eliezer is invalidating the second part of this but not the first.”

This would be a pretty useless Machiavellian strategy, so I’m assuming you’re saying it’s happening for other reasons? Maybe self-deception? Can you explain?
“Eliezer thinks that OpenAI will try to make things go faster rather than slower, but this is plainly inconsistent with things like the state of vitamin D research”

This just made me go “wha” at first, but my guess now is that this and the bits above it around speech recognition are pointing at some AI-winter-esque (or even tech-stagnation) beliefs? Is that right?
There’s probably a radical constructivist argument for not really believing in open/noncompact categories like ¬A. I don’t know how to make that argument, but this post too updates me slightly towards such a Tao of conceptualization.
(To not commit this same error at the meta level: Specifically, I update away from thinking of general negations as “real” concepts, disallowing statements like “Consider a non-chair, …”).
But this is maybe a tangent, since just adopting this rule doesn’t resolve the care required in aggregation with even compact categories.
(A suggestion for the forum)

You know that old post on r/ShowerThoughts which went something like “People who speak somewhat broken English as their second language sound stupid, but they’re actually smarter than average, because they know at least one other language”?
I was thinking about this. I don’t struggle with my grasp of English the language so much, but I certainly do with what might be called an American/Western cadence. I’m sure it’s noticeable occasionally, inducing just the slightest bit of microcringe in the typical person that hangs around here. Things like strange sentence structure, or weird use of italics, or overuse of a word, or over/under hedging… all the writing skills you already mastered in grade school. And you probably grew up seeing that the ones who continued to struggle with it often didn’t get other things quickly either.
Maybe you notice some of these already in what you’re reading right now (despite my painstaking efforts otherwise). It’s likely to look “wannabe” or “amateurish” because it is: one learns language and rhythm by imitating. But this imitation game is confined to language and rhythm, and it would be a mistake to infer from it that the ideas behind them are unoriginal or amateurish.

I’d like to think it wouldn’t bother anyone on LW, because people here believe that linguistic faux pas, as much as social ones, ought to be screened off by the content.
But it probably still happens. You might believe it but not alieve it. Imagine someone saying profound things but using “u” and “ur” everywhere, even for “you’re”. You could actually try this (even though it would be a somewhat shallow experiment, because what I’m pointing at with “cadence” is deeper than spelling mistakes) to get a flavor for it.
A solution I can think of: make a [Non-Native Speaker] tag and allow people to self-tag. Readers could see it and shoot for a little bit more charity across anything linguistically-aesthetically displeasing. The other option is to take advantage of the customizable display names here, but I wonder if that’d be distracting if mass-adopted, like Twitter handles that say “[Name] …is in New York”.

I would (maybe, at some point) even generalize it to [English Writing Beginner] or some such, which you could self-assign even if you speak natively but are working on your writing skills. This one is more likely to be diluted, though.
I like this question. I imagine the deeper motivation is to think harder about credit assignment.
I wrote about something similar a few years ago, but with the question of “who gets moral patienthood” rather than “who gets fined for violating copyright law”. In the language of that comment, “you publishing random data” is just being an insignificant Seed.
Yeah, this can be really difficult to bring out. The word “just” is a good noticer for this creeping in.
It’s like a deliberate fallacy of compression: sure you can tilt your view so they look the same and call it “abstraction”, but maybe that view is too lossy for what we’re trying to do! You’re not distilling, you’re corrupting!
I don’t think the usual corrections for fallacies of compression can help either (e.g. Taboo), because we’re operating at the subverbal layer here. It’s much harder to taboo cleverness at that layer. Better off meditating on the virtue of The Void instead.
But it is indeed a good habit to try to unify things, for efficiency reasons. Just don’t get caught up on those gains.
The “shut up”s and “please stop”s are jarring.
Definitely not, for example, norms to espouse in argumentation (and tbf nowhere does this post claim to be a model for argument, except maybe implicitly under some circumstances).
Yet there’s something to it.
There’s a game of Chicken arising out of the shared responsibility to generate (counter)arguments. If Eliezer commits to Straight, i.e. refuses to instantiate the core argument over and over again (either explicitly, by saying “you need to come up with the generator”, or implicitly, by refusing to engage with a “please stop”), then the other player is incentivized to Swerve, i.e. put some effort into coming up with their own arguments and thereby stumble upon the generator.
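As a toy sketch of that incentive structure (the payoff numbers are made up purely for illustration; nothing here comes from the post):

```python
# A minimal model of the argument-generation game of Chicken.
# "Swerve" = do the work of generating (counter)arguments yourself;
# "Straight" = refuse, hoping the other side does the work.
# Payoffs are (row player, column player) and purely illustrative.
payoffs = {
    ("Swerve", "Swerve"):     (2, 2),  # effort is shared
    ("Swerve", "Straight"):   (1, 3),  # you do all the work
    ("Straight", "Swerve"):   (3, 1),  # they do all the work
    ("Straight", "Straight"): (0, 0),  # nobody engages; the discussion dies
}

def best_response(opponent_move: str) -> str:
    """Row player's best reply, given the opponent's committed move."""
    return max(("Swerve", "Straight"),
               key=lambda move: payoffs[(move, opponent_move)][0])

# If one side credibly commits to Straight, the other's best response
# is to Swerve, i.e. to put in the effort of finding the generator:
print(best_response("Straight"))  # -> Swerve
```

The “violence” of the strategy shows up in the asymmetric (1, 3) payoff: a credible commitment forces the other player into the worst of the non-crash outcomes.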
This isn’t my preferred way of coordinating on games of Chicken, since it is somewhat violent and not really coordination. My preferred way is to proportionately share the price of anarchy, which can be loosely estimated with some honest explicitness. But that’s what (part of) this post is, a very explicit presentation of the consequences!
So I recoil less. It feels inviting instead, about a real human issue in reasoning. And bold, given all the possible ways to mischaracterize it as “Eliezer says ‘shut up’ to quantitative models because he has a pet theory about AGI doom”.
But is this an important caveat to the fifth virtue, at least in simulated dialogue? That remains open for me.
It occurred to me while reading your comment that I could respond entirely with excerpts from Minding Our Way. Here’s a go (it’s just fun; if you also find it useful, great!):
You will spend your entire life pulling people out from underneath machinery, and every time you do so there will be another person right next to them who needs the same kind of help, and it goes on and on forever
This is a grave error, in a world where the work is never finished, where the tasks are neverending.
Rest isn’t something you do when everything else is finished. Everything else doesn’t get finished. Rather, there are lots of activities that you do, some which are more fun than others, and rest is an important one to do in appropriate proportions.
Rest isn’t a reward for good behavior! It’s not something you get to do when all the work is finished! That’s finite task thinking. Rather, rest and health are just two of the unending streams that you move through. [...]
the scope of the problem, at least relative to your contribution, is infinite
This behavior won’t do, for someone living in a dark world. If you’re going to live in a dark world, then it’s very important to learn how to choose the best action available to you without any concern for how good it is in an absolute sense. [...]
You will beg for a day in which you go outside and don’t find another idiot stuck under his fucking car
I surely don’t lack the capacity to feel frustration with fools, but I also have a quiet sense of aesthetics and fairness which does not approve of this frustration. There is a tension there.
I choose to resolve the tension in favor of the people rather than the feelings. [...]
somebody else is going to die, you monster
We aren’t yet gods. We’re still fragile. If you have something urgent to do, then work as hard as you can — but work as hard as you can over a long period of time, not in the moment. [...]
You can look at the bad things in this world, and let cold resolve fill you — and then go on a picnic, and have a very pleasant afternoon. That would be a little weird, but you could do it! [...]
So eventually you either give up, or you put earplugs in your ears and go enjoy some time in the woods, completely unable to hear the people yelling for help.
many people seem to think that there is a privileged “don’t do anything” action, that consists of something like curling up into a ball, staying in bed, and refusing to answer emails. It’s much easier to adopt the “buckle down” demeanor when, instead, curling up in a ball and staying in bed feels like just another action. It’s just another way to respond to the situation, which has some merits and some flaws.
(That’s not to say that it’s bad to curl up in a ball on your bed and ignore the world for a while. Sometimes this is exactly what you need to recover. Sometimes it’s what the monkey is going to do regardless of what you decide. [...])
So see the dark world. See everything intolerable. Let the urge to tolerify it build, but don’t relent. Just live there in the intolerable world, refusing to tolerate it. See whether you feel that growing, burning desire to make the world be different. Let parts of yourself harden. Let your resolve grow. It is here, in the face of the intolerable, that you will be able to tap into intrinsic motivation. [...]
You draw boundaries towards questions.
As the links I’ve posted above indicate, no, lists don’t necessarily require questions to begin noticing joints and carving around them.
Questions are helpful however, to convey the guess I might already have and to point at the intension that others might build on/refute. And so...
Your list doesn’t have any questions like that
...I have had some candidate questions in the post since the beginning, and later even added some indication of the goal at the end.
EDIT: You also haven’t acknowledged/objected to my response to your “any attempt to analyse the meaning independent of the goals is confused”, so I’m not sure if that’s still an undercurrent here.
In Where to Draw the Boundaries, Zack points out (emphasis mine):
The one replies:
But reality doesn’t come with its joints pre-labeled. Questions about how to draw category boundaries are best understood as questions about values or priorities rather than about the actual content of the actual world. I can call dolphins “fish” and go on to make just as accurate predictions about dolphins as you can. Everything we identify as a joint is only a joint because we care about it.
No. Everything we identify as a joint is a joint not “because we care about it”, but because it helps us think about the things we care about.
There are more relevant things in there, which I don’t know if you have disagreements with. So maybe it’s more useful to crux with Zack’s main source. In Where to Draw the Boundary, Eliezer gives an example:
And you say to me: “It feels intuitive to me to draw this boundary, but I don’t know why—can you find me an intension that matches this extension? Can you give me a simple description of this boundary?”
I take it this game does not work for you without a goal more explicit than the one I have in the postscript to the question?
(Notice that inferring some aspects of the goal is part of the game; in the specific example Eliezer gave, they’re trying to define Art−which is as nebulous an example as it could be. Self-deception is surely less nebulous than Art.)
I was looking for this kind of engagement, which asserts/challenges either intension or extension:
You come up with a list of things that feel similar, and take a guess at why this is so. But when you finally discover what they really have in common, it may turn out that your guess was wrong. It may even turn out that your list was wrong.
It seemed to me that avoiding fallacies of compression is always a useful thing (independent of your goal, so long as you have the time for computation), even if only negligibly so. Yet these questions seem to offer a bit of a counterexample, namely that I have to be careful when what looks like decoupling might actually be decontextualizing.
Importantly, I can’t seem to figure out a sharp line between the two. The examples were a useful meditation for me, so I shared them. Maybe I should rename the title to reflect this?
(I’m quite confused by my failure to convey the point of the meditation; I might try redoing the whole post.)
Yes, this is the interpretation.
If I’m doing X wrong (in some way), it’s helpful for me to notice it. But then I notice I’m confused about when decoupling context is the “correct” thing to do, as exemplified in the post.
Rationalists tend to take great pride in decoupling and seeing through narratives (myself included), but I sense there might be some times when you “shouldn’t”, and they seem strangely caught up with embeddedness in a way.
I think I might have made a mistake in putting in too many of these at once. The whole point is to figure out which forms of accusations are useful feedback (for whatever), and which ones are not, by putting them very close to questions we think we’ve dissolved.
Take three of these, for example. I think it might be helpful to figure out whether I’m “actually” enjoying the wine, or if it’s a sort of crony belief. Disentangling those is useful for making better decisions for myself: in, say, deciding whether to go to a wine-tasting if the status boost with those people wouldn’t help.
Perhaps similarly, I’m better off knowing if my knowledge of whether this food item is organic is interfering with my taste experience.
But then in the movie example, no one would dispute the knowledge is relevant to the experience! Going back to our earlier ones, maybe just the knowledge there was relevant, and “genuinely” making it a better experience?
Maybe my degree of liking is a function of both “knowledge of organic origin” and “chemical interactions with tongue receptors” just like my degree of liking of a movie is a function of both “contextual buildup from the narrative” and “the currently unfolding scene”?
How about when you apply this to “you only upvoted that because of who wrote it”? Maybe that’s a little closer to home.
[ETA: posted a Question instead]

Question: What’s the difference, conceptually, between each of the following, if any?
“You’re only enjoying that food because you believe it’s organic”
“You’re only enjoying that movie scene because you know what happened before it”
“You’re only enjoying that wine because of what it signals”
“You only care about your son because of how it makes you feel”
“You only had a moving experience because of the alcohol and hormones in your bloodstream”
“You only moved your hand because you moved your fingers”
“You’re only showing courage because you’ve convinced yourself you’ll scare away your opponent”
Do some of these point legitimately or illegitimately at self-deception?
Are some of these a confusion of levels and others less so?
Are some of these instances of working wishful thinking?
Are some of these better seen as actions rather than rationalizations?
So… it looks like the second AI-Box experiment was technically a loss.
Not sure what to make of it, since it certainly imparts the intended lesson anyway. Was it a little misleading that this detail wasn’t mentioned? Possibly. Although the bet was likely conceded, a little disclaimer of “overtime” would have been nice when Eliezer discussed it.
I was also surprised. Having spoken to a few people with crippling impostor syndrome, the summary seemed to be “people think I’m smart/skilled, but it’s not Actually True.” I think the claim in the article is that they’re still in the game when saying that, just playing another round of downplaying themselves? This becomes really hard to falsify (like internalized misogyny) even if true, so I appreciate the predictions at the end.