For the future, in the case of multiple choice questions it might be nice to have an “unselect” option. (Some of the questions say “if you don’t know leave blank” or similar and then if you accidentally click an option you are forced to choose something)
Ishaan
Anecdote time:
I’m currently dispassionate about racial issues, and can (and have) openly discussed topics such as the possibility that racial discrimination is not a real thing, the possibility that genetically mediated behavioral differences between races exist, and other conservative-to-reactionary viewpoints. Some of those discussions have been on lesswrong, under this account and under an alt, some have been on other sites, and some have been in “real life”.
Prior to the age of ~19, I would have been unable to be dispassionate about issues of race and culture. I would understand the value of being dispassionate and I would try, but the emotions would have come anyway. Due to my racial and cultural differences, I’ve fended off physical attacks from bullies in middle school and been on the receiving end of condescending statements in high school and college, sometimes from strangers and people whom I do not care about, and sometimes from peers whom I liked and authority figures whom I respected. When it came from someone I liked or respected, it hurt more.
The way human brains work is that when a neutral stimulus (here, racist viewpoints) is repeatedly paired with a negative stimulus (here, physical harm and/or loss of social status), the neutral stimulus can involuntarily trigger pre-emptive anger and defensiveness all on its own. If your experience of people who posited Opinion X was that they proceeded to physically attack you / steal your things / taunt you openly in a social setting, you too would probably develop aversive reactions to Opinion X.
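This pairing process can be sketched with a standard associative-learning model (Rescorla-Wagner); the learning rate and outcome values below are purely illustrative assumptions, not anything measured:

```python
# Toy Rescorla-Wagner update: repeatedly pairing an initially neutral
# stimulus with a negative outcome drives the learned association
# toward the outcome's full (negative) value.
alpha = 0.1        # learning rate (illustrative assumption)
outcome = -1.0     # value of the negative stimulus (harm / status loss)
v = 0.0            # association strength of the neutral stimulus

for trial in range(50):
    v += alpha * (outcome - v)   # prediction-error update

# After many pairings, v approaches -1.0: the once-neutral stimulus
# now predicts harm on its own, and triggers the response by itself.
print(round(v, 3))
```

The same update run with the negative outcome removed (extinction) slowly decays v back toward zero, which is the mechanism behind the "unpairing" I describe further down.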
--
EDIT: just read the linked post. It independently echoes my account:
This is because respect for said arguments and/or the idea behind them is a warning sign for either 1) passively not respecting my personhood or 2) actively disregarding my personhood, both of which are, to use some vernacular, hella fucking dangerous to me personally.
--
The above is an explanation as to why it happens and how it is. I’m not saying it’s justified, or that it ought to be that way. I made a conscious effort to fight down the anger and not direct it at people who were clearly not trying to physically harm me or lower my social status in a group. I think others should do the same.
For an extreme example, in the past an authority figure made a racial joke at my expense in the presence of other students who had previously physically taunted me, thereby validating their behavior—and I took care to not direct the anger at the authority figure (who was simply ignorant of the social status lowering effect of the joke, not maliciously trying to harm me). For a tamer example, I’ve never actually ended a friendship with someone for espousing certain views—I’ve only been angry and forced myself not to say anything until after calming down.
Currently, I don’t feel emotionally angry at all when faced with those views, and I think everyone else should strive for that. However, that doesn’t mean that people who haven’t faced this sort of thing are allowed to simply expect that people who have faced it will have that sort of emotional control. I’m pretty sure I’m an outlier with respect to unusually good emotional control (globally, if not on LessWrong)—most people can’t do it. It also really helps that my current social bubble has less of that sort of thing.
That said (and this is where I disagree with the linked poster) I don’t think it’s a good idea to censor views for the sake of not triggering anyone’s emotions. Dispassionate discussion of a topic unpairs the neutral stimulus from the negative stimulus—in fact, I would go so far as to recommend that people who are psychologically similar to myself (intellectually curious, emotionally stable) who have been hurt by racism should spend time talking on the internet to white nationalists and reactionaries, and that people who have been hurt by sexism should spend time talking to PUAs / redpill / the “manosphere”. Talking about charged topics in settings where people are powerless to actually hurt you is a great way to remove emotional triggers.
That said, the small but vocal contingent of meta-contrarian, reactionary ideology on LW has probably driven away a lot of smart people. There are even dirty tactics at play here—such as the downvoting of every single comment of anyone who explicitly expresses progressive views or challenges reactionary views. I myself am on the receiving end of this nonsense—every post has been systematically downvoted by exactly −1 ever since I mentioned some biological evidence about sexual orientation that could be construed as liberal. I think our kind is so partial to contrarians that we actually give people a pass from the downvote simply because they went against the grain, even when the actual ideas aren’t especially insightful. Remember, well-kept gardens die by pacifism—reactionary ideas are fine if they are supported by real evidence and logic, held to the same standard you would apply to someone espousing a viewpoint which is fairly obvious and popular. If it reads like pseudo-intellectual fluff, it probably is. Don’t go easy on it just because it’s contrarian.
Now imagine that one day, you’re talking with someone who you strongly suspect is a Blue, and they remark on how irrational it is for so many people to believe the moon is made of cheese.
I’m a big fan of “Agree Denotationally But Object Connotationally” when this is the case
Or, when talking to your fellow Greens about the moon, you would “agree connotationally but object denotationally”. I find that for me this is actually even more common than the reverse.
think of the “skeptic” groups that freely mock ufologists or psychics or whatever, but which are reluctant to say anything bad about religion, even though in truth the group is dominated by atheists.
Okay, let’s run with that example. If someone says something like “Theists are stupid”...I agree denotatively, in that I think theism is foolish and I’m aware that holding theistic beliefs is negatively correlated with intelligence. I disagree connotationally with the disdain and patronizing attitude which is implicit in the statement, and I dislike the motivations which the person probably had for making it. If the same person had said “religiosity is negatively correlated with intelligence”, then I would have no objections: it’s the exact same information, but the tone indicates that they are simply stating a fact. For particularly charged topics, explicit disclaimers voiding the connotations which normally occur are helpful.
I’m not sure it’s practical, as a reader, to read writing and extract purely the denotative information, simply because of the sheer volume of useful information which is embedded within the connotations. If language is about communicating mental states and inferring the mental states of others, you can’t communicate nearly as effectively if you toss out connotation.
TL;DR for Yvain’s post: “Your statement is technically true, but I disagree with the connotations. If you state them explicitly, I will explain why I think they are wrong.”
If she had died in Azkaban or from a Kiss or from a Malfoy-funded assassination, that would have perhaps felt better. But the lamest warmup boss of the canon? Offscreen?
Isn’t that the point, though? Hasn’t that been the theme? That reality doesn’t care about the narrative arcs that you make in your head? That at any time, the universe is allowed to kill you, your notion of the plot be damned? What you are feeling seems to be more or less the author’s intention—the sign of a good story.
Mind you, if this were the real world, Harry would have found out about Hermione’s death 2-3 days later, not dramatically just in time. From a realism perspective, there was far more closure than anyone ever actually gets when it comes to violent death.
Why did Quirrell allow the unicorn corpses to be found? Why didn’t he dispose of the corpse by making it disappear, instead of trying to pass the death off as a predator attack? Would anyone notice if a unicorn vanished without leaving a corpse? (I suppose they might, since unicorns are medically valuable, but since unicorns are known not to have predators, the predated corpse is hardly a good cover, as we saw. Vanishing the corpse would have made it take longer to notice.)
Anyway, this is one of the few times we see Quirrell’s plot clearly failing without anyone actually acting to thwart him. Is it plausible that he was actually unable to kill and drink a unicorn without anyone immediately noticing?
One of the big variations I see between people is the amount of energy they habitually put into thinking, and I haven’t seen this discussed anywhere.
If you wish to study this, here are two words that link what you are talking about to the literature:
It has nothing to do with white people—it has to do with cross cultural misunderstandings in general. People just use the word “white” frequently because of certain implicit assumptions about the racial / cultural background of the audience.
Anyway, let me give you an example of when this sort of thing actually happens: In India, there used to be religious figures called Devadasis. They are analogous to nuns in one sense—they get “married” to divinity, and never take a human spouse. Unlike nuns, they are trained in music and dancing. In medieval India, music, dancing, and sexual services were all lumped under the same general category...as in, there was a large overlap between dancers, musicians, and sex workers, and this was widely recognized. (This is not really true today, but if you watch really old Indian movies you can see remnants of this association). We can presume that many of the Devadasis engaged in sex work. It should be noted that they also had a high social status, which allows us to further infer that the sex work probably didn’t involve intense coercion and probably wasn’t driven by extremely harsh economic pressures.
You can guess where this is going. The actual closest Western analogue to this phenomenon is the “courtesan”. However, the West had left courtesans behind in the Renaissance era; at the time of the occupation the British were in the Victorian era, and the closest cultural analogue that came to mind was “prostitution”, which implies exploitation of women, low social status, etc.
To quote one of Eliezer’s stories, “it wasn’t prudery. It was a memory of disaster”… well, actually in this case it probably was prudery too… but I’m sure the humanitarian concerns were more salient. The British experience of sex work was negative, and the fact that the devadasi “marriages” were child marriages must have made it even more horrifying.
Of course, despite all the social reforms and laws that would-be humanitarians enacted, Devadasis continued to exist...except now they were primarily prostitutes, low status, criminal, and exploitable...and the whole thing continues to be a horrid affair to this day.
So I’d say the real problem is not the imposition of a Western conception of “good” onto others...in fact, I think humans share the “humanist” values of good and evil across cultures. (Although as far as I can tell, what constitutes conservative / traditional morality does seem to be culturally variable.)
The problem is that without cultural knowledge, you might easily misjudge good and evil because of incomplete information, even when both cultures are using the same basic metrics of good and evil...or you might just pick the wrong way of going about making improvements.
While reading primary science literature, I’ve had the following experiences happen to me on multiple occasions.
1) Read a paper with a surprising result. Later discover it has critical flaws or didn’t pass replication. I’ve learned to increase skepticism with increasingly surprising results. “This study is just wrong because of statistical issues or bad reporting” is now always one of the hypotheses in my mental arsenal, and I’ve found myself getting a bit better at predicting which results are just wrong using largely the heuristic of “this is too surprising to believe”
2) Form a hypothesis while reading. It gets verified (or falsified) via something you read later. Also, since one typically reads the methods before the results, one gets a lot of practice predicting results. (I don’t formally make predictions but I find myself making them automatically as I read.)
Based on these experiences, I suggest that reading primary scientific literature is a good exercise in “alive” epistemic rationality training. The only drawback is that it takes a long time to get sufficiently acquainted with a field.
Tyrion is frequently put into situations where he relies on his family’s reputation for paying debts.
It’s a real-life Newcomb-like problem—specifically a case of Parfit’s Hitchhiker—illustrating the practical benefits of being seen as the sort of agent who keeps promises. It’s not an ordinary quid-pro-quo because there is, in fact, no incentive for Tyrion to keep his end of the bargain once he gets what he wants other than to be seen as the sort of person who keeps his bargain.
Think it’s a stretch?
Causal Decision Theory / consequentialism:
“If your actions have results, you can use actions to choose your favorite result.”
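The Parfit's Hitchhiker structure can be made concrete with a toy model (the utility numbers and type names below are my own illustrative assumptions, not anything from the books):

```python
# Illustrative utilities: being rescued = +100, paying the rescuer = -10,
# being left in the desert = 0.
RESCUED, PAY_COST, DESERT = 100, -10, 0

def outcome(agent_type):
    # The rescuer accurately predicts the agent's type, and only helps
    # agents who will actually pay once they are safe.
    will_pay_later = (agent_type == "promise_keeper")
    if will_pay_later:
        return RESCUED + PAY_COST   # rescued, then pays as promised
    return DESERT                   # predicted to renege: never rescued

print(outcome("promise_keeper"))    # 90
print(outcome("causal_defector"))   # 0
```

The pure causal reasoner is right that, once rescued, not paying dominates; but being (and being known as) the kind of agent who pays is what gets you rescued in the first place, which is exactly the reputational asset the Lannisters trade on.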
When you want someone to do something for you, do you prefer to ask them directly or do you prefer to mention something related and expect that they infer what you want?
You’re gonna lose at least 20% of the OKC population and a much larger chunk of the general population with the complexity of your sentence structure and the use of words like “infer”.
When you want something do you
[pollid:614]
And there’s another problem—the real answer will usually be “it depends on the situation”. So an even better question would be
How often do you drop hints about what you want, instead of asking directly?
[pollid:615]
(Even now, my real answer is “it depends on what system I think the person I am talking to uses”. I’m not sure ask/tell is actually a property attributable to individual people...it’s more a mode of group interaction)
I learned PHP, JavaScript, and HTML, and wrote my first program since AP comp sci back in high school. It’s also the first program which I wrote for an actual reason, rather than for the sake of learning to program in a class setting.
Well, step one would be to convince a team of rationalists to come up with ideas to maximize misery… :P
I think that when choosing examples of misery-causing things, your mind went first to examples which you come across in daily life. The reason I reach this conclusion is that your “devil” isn’t nearly evil enough. There have been actual, real-world devils who have done worse than your imaginary one. If you had lived in a society more plagued by real-world devils, you might have imagined something more sinister.
It is an interesting game though.
it occurred to me that the best way to discourage education might be to make it a chore.
Even a dreary educational system generally results in literacy. And how will you crush the next education reformer who comes along?
In the real world, the most effective ways to discourage education have been to replace education with propaganda, to direct randomized violence against schools and schoolchildren, and to create economic conditions which make education impossible. The first two have been employed on purpose by various groups. Although the first method does require that the “devil” hold some sort of position of political power, all the second method requires is a small band of decentralized thugs. As for the third method, a reasonably wealthy person could probably economically unbalance a small third-world country if they were smart about it.
These methods aren’t even targeted for misery maximization—they are just means to an end. If your goal is specifically to cause misery, there are even more effective ways. If anyone comes up with anything that seems truly effective, I suggest not sharing it online on the off-chance that someone happens upon it and tries!
You’re speaking from anecdotal life experience. My anecdotal life experience tends to disagree with several of the points you have made.
I’m writing solely from the perspective of and about the top ~5% of a high school class. I assume that students taking the time to weigh the human capital growth prospects vs. the signaling benefits of an opportunity belong to this class
Bad assumption, even when only taking into account schools and demographics similar to the one you attended. I did make exactly this sort of cost-benefit calculation in high school, and I was only above average, not close to the top 5%, when it came to GPA. To avoid having you attribute this to low intelligence, I’ll also mention that I was in the top 1% when it came to standardized test scores. I attribute this to the fact that GPA primarily measures organization, conscientiousness, working memory, and signalling, while test scores primarily measure English verbal and quantitative proficiency.
What I’m against is a student using signaling benefits as a deciding factor in any amount when selecting extracurriculars, classes, etc
I paid zero attention to signalling in high school. I’d engage in autodidactic activities such as browsing Google Scholar and reading stuff which interested me in favor of working on my assignments. I signed up for all of the most challenging classes because I knew that in the competition between regular course X and AP course X, the AP course would be more fun, more interesting, teach me more, and suck up roughly equal time. For the same amount of studying, a high-level course will give you a lower grade but more knowledge. I didn’t even think about my GPA—I didn’t even bother to keep track of how many points I had in my classes. I had a terribly single-minded focus on learning...because signalling was just too damn boring to bother with. To this day I still struggle to force myself into putting up adequately strong signalling, even when it’s dull. Not signalling adequately is akrasia; it is bad; don’t do it.
I was one of very few among the top 1% of my class who didn’t force themselves into AP US History, AP Chemistry
Right, but that means you took non-AP History and non-AP Chemistry courses. You could have chosen to take the AP classes, simply spent the same amount of time studying for them as you would have for the non-AP equivalents, and been content with a lower grade because you learned more about the material. If you weren’t signalling at all, why would you ever take a lower-level course if you have the prerequisite knowledge to understand the higher one?
If it takes 5 hrs/wk to get an A in AP english and 2 hrs/wk to get an A in regular English, you can (but shouldn’t!) still just spend 2 hrs/wk on AP english, get a B-, and still learn more than you would have with an A in regular English.
Forget about how “significant” or “impressive” this issue might look on paper to an admissions officer.
So when I started college (due to my GPA, it was not a high-flying Ivy League school) and started taking courses, when it came to the fields of neuroscience, psychology, and sociology I had already read much of the original research on which the material was based. Some of the professors I encountered felt like celebrities because I had happened to read one of their papers once. Of course, this didn’t help me at all when it came to introductory courses where most of the material was memorization, but now that I’m taking upper-level courses dealing with primary source material it’s finally beginning to pay off. Even now, though, those introductory courses are leeching my GPA, and are going to affect my graduate admissions process.
So let’s return to high school: how exactly do I put “For the past three years I’ve spent hours every day reading primary literature, so I am intimately familiar with how science is done, but you’ll just have to take my word on that” on a transcript without sounding completely lame? That might fly in grad school interviews, but your average undergraduate admissions officer probably doesn’t understand why this is a really valuable and rare thing for a high school student to do. They’re thinking, “Yeah, my kid reads Scientific American too, big deal.” The fact that I won several science fairs, which should actually be the much less impressive accomplishment, probably helped my resume more than all my reading combined.
Evidence of a true passion can show through in application essays, and will mean more than 10 club presidencies or 50 letters of recommendation. Rip out of the chains of high school and do something that you’re passionate about!
So here’s what I see: You are someone who followed their passions and excelled in school, and you think doing well in school is about passion. I am someone who followed his passions and did above average in school (an underachiever relative to what IQ-proxy standardized tests expected), and I thought doing well in school was about signalling.
Here’s my updated conclusion after reading your story: some people have natural passions which take them in a direction that happens to cause good signalling—passions which cause them to succeed in school and get admitted into high level colleges. Such people like the system and tend to feel that its indicators are good and honest. Others have natural passions which take time away from signalling activities, and these people perceive a constant strain between signalling and passion. Such people tend to think that the system has perverse incentive structures.
I didn’t often connect this work with college admissions, however.
And that’s the crux of it. You did signal optimally, and the fact that you didn’t signal optimally on purpose doesn’t change that.
See, when your passions just happen to lead to a high GPA and a good resume, you don’t need conscientiously cultivated signalling behavior. This is why you are advising others to simply do what comes naturally.
For the rest of us - if you want to get into a high level university, I’d advise you to keep signalling, or switch into a different set of incentive structures (homeschooling, alternative schools, etc).
Yes, extremely strong—it’s among the very few statements which are uncontroversial in nutrition.
google scholar → “vegetables health” →
http://ajcn.nutrition.org/content/70/3/475s.full
→ assorted references
Edit: I think it’s more accurate to say that vegetable deprivation is extremely harmful. It’s not like eating additional vegetables leads to additional health!
Psychologically speaking, it is helpful to think of eating vegetables as the default state which we actively deviate from, not a thing which we actively do to stay healthy. That’s why I like words such as “sedentism”—it makes you feel like you are actively harming your body rather than passively allowing it to be harmed, similar to “alcoholism”.
1) The idea of constructing things out of axioms. This is probably old hat to everyone here, but I was clumsily groping towards how to describe a bunch of philosophical intuitions I had, and then I was learning math proofs and understood that any “universe” can be described in terms of a set of statements. Suddenly I understood what finally lay at the end of every chain of why?s and had the words to talk about a bunch of philosophical ideas...not to mention finally understanding what math is, why it’s not mysterious if physics is counterintuitive, and so on. (Previously I had thought of “axioms” as “assumptions”, rather than as building blocks.) Afterwards, I felt a little cheated, because it is a concept much simpler than algebra and it ought to have been taught in grade school.
2) Something more specialized: I managed to get a B.S. in neuroscience without knowing about the thalamus. I mean, I knew the word, and I knew approximately where it was and what it did, but I did not know that it was the hub for everything. (By which I mean, nearly every connection is either cortico-cortical or cortico-thalamic.) After graduation, I was involved in a project where I had to map out the circuitry of the hippocampus, and suddenly… Oh! This is clearly one of the single most important organizational principles of the brain, and I had no idea. After that, a whole bunch of other previously arbitrary facts gradually began to make sense...Why did no one simply show us a picture of a connectome before and point out that big spot right in the middle where it all converges?
3) We learned all this minutia of history, but no one really talked about the hunter-gatherer <--> agriculture transition and its causes. Suddenly, historical trends in religion, the demographic transition, nutrition, exercise, cultural differences, and a bunch of other things start clicking together.
I think what all three of these things have in common is that they really ought to have been among the very first lessons in their respective subjects...but somehow they were not.
I was playing a card game with about 6 people in an AP calc class. One component of the game involved guessing: some of the cards were “good” and some were “evil”. You had the option to either pick up a card or pass it on to the next player, and the objective was to pick up the “good” cards and pass on the “evil” ones.
Prior to guessing, I would look in my opponents eyes, and ask them: “Is it good or is it evil?”. If it was good, I’d get this mischievous, friendly vibe from them. If it was evil, I’d get a sort of adversarial or guilty vibe.
I must have guessed between 60-120 times throughout the game. I got every single guess correct. It was creeping me out.
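For scale, assuming each guess was an independent 50/50 call (an assumption on my part; I don't actually know the deck's good/evil ratio), the chance of a run like that is astronomically small:

```python
# Probability of getting every guess right by luck alone, if each
# guess is an independent coin flip (assumed base rate of 0.5).
p_chance = 0.5
for n_guesses in (60, 120):
    p_all_correct = p_chance ** n_guesses
    print(f"{n_guesses} in a row by chance: {p_all_correct:.2e}")
```

Even at the low end of 60 guesses, luck alone is on the order of one in a quintillion, which is why "pure chance" never felt like a satisfying explanation; imperfect memory of the streak, a lopsided deck, or leakage from the game mechanics seem more plausible.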
After the game was over, we tried having the professor draw some cards and pass them to me, and I was supposed to guess whether each was good or evil. My professor’s face was like a stone, and I was guessing at chance. (Note, however, that this wasn’t a real game, so there was no winning or losing at stake—that might have made it easier to avoid micro-expressions.)
This sort of thing had never happened to me before and has never happened to me since. I attributed it to luck and temporarily heightened sensitivity to face reading (it certainly felt like reading faces)...but the sheer accuracy of my intuitions and my inability to replicate it still spooked me. And, of course, part of me was screaming you managed to find psychic powers and you lost them, you idiot!
Assuming it wasn’t sheer luck, I’d very much like to successfully replicate it one day and master the skill. I scored 33/36 my first time taking the RMET, and the mean is ~25, so my face-reading skills are probably above average, but it’s not like I hit the ceiling.
I think a large part of it is learning to listen to gut feeling, not second-guessing, not letting your imagination interfere with your perception...but I really don’t know. It’s hard to introspect on a phenomenon that I can’t replicate.
Assuming none of this is fabricated or exaggerated, every time I read these I feel like something is really wrong with my imagination. I can sort of imagine someone agreeing to let the AI out of the box, but I fully admit that I can’t really imagine anything that would elicit these sorts of emotions between two mentally healthy parties communicating by text-only terminals, especially with the prohibition on real-world consequences. I also can’t imagine what sort of unethical actions could be committed within these bounds, given the explicitly worded consent form. Even if you knew a lot of things about me personally, as long as you weren’t allowed to actually, real-world, blackmail me...I just can’t see these intense emotional exchanges happening.
Am I the only one here? Am I just not imagining hard enough? I’m actually at the point where I’m leaning towards the whole thing being fabricated—fiction is more confusing than truth, etc. If it isn’t fabricated, I hope that statement is taken not as an accusation, but as an expression of how strange this whole thing seems to me, that my incredulity is straining through despite the incredible extent to which the people making claims seem trustworthy.
It’s almost as if the internet’s nutrition websites weren’t designed for munchkining your diet!
This is because while the field of nutrition is currently at the point where it can prevent serious deficiency (a relatively simple matter of making sure all the important nutrients pass through your guts in sufficient quantity), it’s not at the point where it can confidently point to the optimal diet for the average human.
Everyone agrees that fruits and vegetables are generally positive. Everyone agrees that heavily processed foods are generally bad. Calorie counting is a reasonable path to weight loss and weight gain (though there are other methods), and everyone agrees that being over- or underweight is generally bad. That’s about where the agreement ends.
Tackling the harder problems of nutrition would require us to understand more about human metabolism, nutrient absorption, non-nutrient factors like anti-oxidants and anti-inflammatory compounds, natural toxins (never forget that being eaten is not in the genetic interests of most plants), gut flora, immunological function, and things of that sort.
Just to give you a sense of the chaos here: there are nutritionists who make a case that you shouldn’t eat beans or lentils at all. These same folks say that, while you’re at it, you should stop eating grains in general and make up the calories with animal fat. At the opposite end of the spectrum, there are nutritionists who say that the optimal diet contains almost zero meat (see the China Study). All this confusion is before you add in ethical complications about sustainable food and animal rights.
If you follow both these strands of advice simultaneously and cut out grains, legumes, and animal fat … at that point you’ll have to start getting rather creative in order to get sufficient calories, and you’re probably pretty far from optimal.
I could take your request and give you a professional nutritionist’s dietary recommendations, but the nutritionist I recommend will necessarily conform to my own stance, and you’d be foolish to trust anyone on expert opinion when expert opinion is so diverse. From your perspective, my opinion that the optimal strategy is to model your diet off of what humans ate during the Paleolithic would constitute a random shot in a space of common schools of thought. I think Paleolithic diets have a relatively high likelihood of being better than almost all diets which became possible post-agriculture, but as far as you’re concerned, who the hell am I? Nutrition isn’t even my primary area of study, and even if it were, taking the recommendation of a random expert is probably worse than taking the recommendation of an expert who was recommended by a random non-expert.
Anyway, aside from general purpose tools like google scholar, cochrane reviews, etc… http://examine.com is one of the most user-friendly primary source databases I’ve come across geared specifically to nutrition. Unfortunately, it’s mostly about supplements and single nutrients rather than whole foods, and that’s largely because we can be more confident when talking about single molecules than we can about entire foods. On a less empirical note, I’ve got a generally favorable opinion of the blog posts from http://www.marksdailyapple.com/ which clued me in to several things I hadn’t considered before (offal & bones, Vitamin K2, etc) and lean on the practical side. If you prefer to listen to people who are prominent in academia, Loren Cordain has a blog http://thepaleodiet.com/ and several influential papers.
I think what confuses people is that he
1) claims that morality isn’t arbitrary and we can make definitive statements about it
2) claims that there are no universally compelling arguments.
The confusion is resolved by realizing that he defines the words “moral” and “good” as roughly equivalent to human CEV.
So according to Eliezer, it’s not that humans think love, pleasure, and equality are Good while paperclippers think paperclips are Good. It’s that love, pleasure, and equality are part of the definition of good, while paperclips are just part of the definition of paperclippy. The Paperclipper doesn’t think paperclips are good...it simply doesn’t care about good, instead pursuing paperclippy.
Thus, moral relativism can be decried while “no universally compelling arguments” can be defended. Under this semantic structure, Paperclipper will just say “okay, sure...killing is immoral, but I don’t really care as long as it’s paperclippy.”
Thus, arguments about morality among humans are analogous to Pebblesorter arguments about which piles are correct. In both cases, there is a correct answer.
It’s an entirely semantic confusion.
I suggest that ethicists ought to have different words for the various rigorized definitions of Good, to avoid this sort of confusion. Since Eliezer-Good is roughly synonymous with CEV, maybe we can just call it CEV from now on?
Edit: At the very least, CEV is one rigorization of Eliezer-Good, even if it doesn’t articulate everything about it. There are multiple levels of rigor and naivety that may be involved here. Eliezer-good is more rigorous than “good” but might not capture all the subtleties of the naive conception. CEV is more rigorous than Eliezer-good, but it might not capture the full range of subtleties within Eliezer-good (and it’s only one of multiple ways to rigorize Eliezer-good...consider Coherent Aggregate Volition, for example, as an alternative rigorization of Eliezer-good).