I’m not happy about justifying the high payouts to EY as “that’s what a programmer might make”. Instead, put him (and any other SIAI full-time employees, possibly just Michael Vassar) on half pay (and half time), and suggest that he work in the “real world” (something not SIAI/futurism related) the rest of the time. This means that his presumed skills are tested and exercised with actual short-term tasks, and it also gives an approximate market price for his skills.
Currently, his market-equivalence to a programmer is decoupled from reality.
This is a great idea, if SIAI put signalling what moral people they are over actually bringing about the best outcome.
Can you elaborate? I don’t understand my proposal as related to signaling at all; it’s about measuring EY’s (and others’) effectiveness, rather than taking it for granted. Yes, it’s costly in the event it’s unnecessary, but corruption/ineffectiveness/selfishness (where EY and Vassar are primarily building a career and a niche for themselves, consciously or unconsciously) is also costly.
Perhaps other employers should also employ everyone half-time so that they get more information about their employees’ market value?
If SIAI were paying Eliezer to be a “generic” programmer, then I suppose they could get a reasonable idea of whether he’s a good one in the way you describe. Or they could just fire him and hire some other guy for the same salary: that’s not a bad way of getting (where SIAI is) a middling-competent programmer for hire.
But it doesn’t seem to me even slightly credible that that’s what they’re paying Eliezer for. They might want him writing AI software—or not, since he’s well known to think that writing an AI system is immensely dangerous—in which case sending him out to work half-time for some random software company isn’t going to give much idea of how good he is at that. Or they might want him Thinking Deep Thoughts about rationality and friendly AI and machine ethics and so forth, in which case (1) his “market value” would need to be assessed by comparing with professional philosophers and (2) presumably SIAI sees the value of his work in terms of things like reducing existential risk, which the philosophy-professor market is likely to be … not very responsive to.
What sending Eliezer out to work half-time commercially demonstrably won’t do is to measure his “effectiveness” at anything that seems at all likely to be what SIAI thinks it’s worth paying him $100k/year for.
The most likely effects seem to me some combination of: (1) Eliezer spends less time on SIAI stuff and is less useful to SIAI. (2) Eliezer spends all his time on SIAI stuff and gets fired from his other job. (3) Eliezer finds that he can make a lot more money outside SIAI and jumps ship or demands a big pay rise from SIAI. (4) Eliezer decides that an organization that would do something so obviously silly is not fit to (as he sees it) try to decide the fate of the universe, quits SIAI, and goes to do his AI-related work elsewhere.
No combination of these seems like a very good outcome. What’s the possible benefit for SIAI here? That with some (not very large) probability Eliezer turns out not to be a very good programmer, doesn’t get paid very well by the commercial half-time gig, accepts a lower salary from SIAI on the grounds that he obviously isn’t so good at what he does after all, but doesn’t simultaneously get so demoralized as to reduce his effectiveness at what he does for SIAI? Well, I suppose it’s barely possible, but it doesn’t seem like something worth aiming for.
What am I missing here? What halfway plausible way is there for this to work out well?
I think it’s entirely possible for people within corporations to build cozy empires and argue that they should be paid well, and for those same people to in fact be incompetent at value creation—that is, they could be zero-sum internal-politics specialists. The corporation would benefit from enforcing a policy against this sort of “employee lock-in”, just like corporations now have policies against “supplier lock-in”.
This would entail, among other things, everyone within the corporation having a job description that is sufficiently generic that other people also fit the same job description, and for outside auditors to regularly evaluate whether the salaries being paid for a given job description are comparable to industry standards.
I haven’t heard of anyone striving to prevent “employee lock-in” (though that might just be the wrong term), but people certainly do strive for those related policies.
There are lots of potential upsides: 1. At the prospect of potentially being tested, EY shapes up and starts producing. 2. Due to real-world experience, EY’s ideas are pushed along faster and more accurately. 3. SIAI discovers that EY is “just a guy” and reorganizes, in the process jumping out of its recurrent circling of the cult attractor. 4. Due to EY’s stellar performance in the real world, other people start following the “work half time and do rationality and existential risk reduction half time” lifestyle.
In general, my understanding of SIAI’s proposed financial model is “other people work in the real world, and send money without strings to SIAI, in exchange for infrequent documentation regarding SIAI’s existential risk reduction efforts”. I think that model is unsustainable, because the organization could switch to becoming simply about sustaining and growing itself.
SIAI firing Eliezer would be like Nirvana firing Kurt Cobain. Most of the money and public attention will follow Eliezer, not stay with SIAI.
You’re not alone in wanting Eliezer to start publishing new results already. But there’s also the problem that he likes secrecy way too much. Alexandros Marinos once compared his attitude to staying childless: every childless person came from an unbroken line of people who reproduced (=published their research), and couldn’t exist otherwise.
For example, our decision-theory-workshop group is pretty much doing its own thing now. I believe it diverged from Eliezer’s ideas a while ago, when we started thinking about UDT-ish theorem provers instead of TDT-ish causal graph thingies. I don’t miss Eliezer’s guidance, but I sure miss his input—it could be very valuable for the topics that interest us. But our discussions are open, so I guess it’s a no go.
This is something I’ve never really understood. I can understand wanting to keep any moves directly towards creating an AI quiet—if you create 99% of an AI and someone else does the other 1%, goodbye world. It may not be optimal, but it’s a comprehensible position.
But the work on decision theory is presumably geared towards codifying Friendliness in such a way that an AI could be ‘guaranteed Friendly’. That seems like the kind of thing that would be aided by having many eyeballs looking at it, while being useless for anyone who wanted to put together a cobbled-together quick-results AI.
Eliezer stated his reasons here:
...a constructive theory of the world’s second most important math problem, reflective decision systems, is necessarily a constructive theory of seed AI; and constitutes, in itself, a weapon of math destruction, which can be used for destruction more quickly than to any good purpose. Any Singularity-value I attach to publicizing Friendly AI would go into explaining the problem. Solutions are far harder than this and will be specialized on particular constructive architectures.
So in a nutshell, he thinks solving decision theory will make building unfriendly AIs much easier. This doesn’t sound right to me because we already have idealized models like Solomonoff induction or AIXI, and they don’t help much with building real-world approximations to these ideals, so an idealized perfect solution to decision theory isn’t likely to help much either. But maybe he has some insight that I don’t.
I think Eliezer must have changed his mind after writing those words, because his TDT book was written for public consumption all along. (He gave two reasons for not publishing it sooner: he wanted to see if a university would offer him a PhD based on it, and he was using DT as a problem to test potential FAI researchers.) I guess his current lack of participation in our DT mailing list is probably due to some combination of being busy with his books and lack of significant new insights.
I think TDT is different from the “reflective decision systems” he was talking about, which sounds like it refers to a theory specifically of self-modifying agents.
That’s the first time I noticed the pun. Good one. I want a t-shirt.
Ah. I see what he means, if a) you’re talking about just the ‘invariant under reflection’ part and not Friendliness, and b) you’re treating it as a strictly pragmatic tool. That makes sense.
1. Starts producing what? 2. What real-world experience, and how will it be relevant to his SIAI work? 3. Yup, that’s possible. See below. 4. Just like they do for all the other people who do stellar work as software developers, you mean?
I think #3 merits a closer look, since indeed it’s one of the few ways that your proposal could have a positive outcome. So let’s postulate, for the sake of argument, that indeed Eliezer’s skills in software development are not particularly impressive and he doesn’t do terribly well in his other half-time job. So … now they fire him? Because he hasn’t performed very well in another job doing different kinds of work from what he’s doing for SIAI? Yeah, that’s a good way to do things.
It would probably be good for SIAI to fire Eliezer if he’s no good at what he’s supposed to be doing for them. But, if indeed he’s no good at that, they won’t find it out by telling him to get a job as a software engineer and seeing what salary he can make.
Yes, it’s bad that SIAI can’t easily document how much progress it’s making with existential risk reduction so that potential donors can decide whether it’s worth supporting. But Eliezer’s market-salary-as-a-generic-programmer is—obviously—not a good measure of how much progress it’s making. Thought experiment: Consider some random big-company CEO who’s being paid millions. Suppose they get bored of CEOing and take a fancy to AI, and suppose they agree to replace Eliezer at SIAI, and even to work for half his salary. In this scenario, should SIAI tell their donors: “Great news, everyone! We’ve made a huge stride towards avoiding AI-related existential risk. We just employed someone whose market salary is measured in the millions of dollars!”?
Yes, it’s bad if SIAI can’t tell whether Eliezer is actually doing work worth the salary they pay him. (My guess, incidentally, is that he is even if his actual AI-related work is of zero value, on PR grounds. But that’s a separate issue.) But measuring something to do with Eliezer that has nothing whatever to do with the value of the work he does for SIAI is not going to solve that problem.
You seem to be optimizing this entire problem for avoiding the mental pain of worrying about whether you’re being cheated. This is the wrong optimization criterion.
I’m working from the “organizations are superhumanly intelligent (in some ways), so we should strive for Friendly organizations, including structural protections against corruption” standpoint.
I hardly think the SIAI, a tiny organisation heavily reliant on a tiny pool of donors, is the most likely organisation to become corrupt. Even when I thought Eliezer was being paid significantly more than he was (see above threads) I wouldn’t call that corruption.
Eliezer is doing a job. His salary is largely paid for by a very small number of individuals. As the primary public face of SIAI he is under more scrutiny than anyone else in the organisation. As such, if those people donating don’t think he’s worth the money, he’ll be gone very quickly—and so long as they do, it’s their money to spend.
I don’t understand my proposal as related to signaling at all
What’s the good reason to care about whether EY’s salary is calibrated to the market rate, rather than (or independently of) whether it’s too low or too high for this particular situation?
it’s about measuring EY’s (and others’) effectiveness, rather than taking it for granted.
I don’t understand why SI (i.e., its board) shouldn’t employ EY and MV full-time and continually evaluate the effectiveness of their work for it, like any other organization in the world would do.
The fact that both are costly is irrelevant; the point is that one has the potential to be vastly more costly than the other.
Downvoted.
“high payouts”? Good programmers are worth their weight in gold. (As for AI researchers, bad ones are worthless, good-but-not-good-enough ones will simply kill us all, and good-enough ones are literally beyond value...) NYT:
Then there are salaries. Google is paying computer science majors just out of college $90,000 to $105,000, as much as $20,000 more than it was paying a few months ago. That is so far above the industry average of $80,000 that start-ups cannot match Google salaries. Google declined to comment.
“half pay (and half time)”? I’m just a programmer, not an AI researcher, but I’m confident that this applies equally: it is ridiculously hard to apply concentrated thought to solving a problem when you have to split your focus. As Paul Graham said:
One valuable thing you tend to get only in startups is uninterruptability. Different kinds of work have different time quanta. Someone proofreading a manuscript could probably be interrupted every fifteen minutes with little loss of productivity. But the time quantum for hacking is very long: it might take an hour just to load a problem into your head. So the cost of having someone from personnel call you about a form you forgot to fill out can be huge.
This is why hackers give you such a baleful stare as they turn from their screen to answer your question. Inside their heads a giant house of cards is tottering.
A policy of downvoting posts that you disagree with will, over time, generate a “Unison” culture, driving away / evaporatively cooling dissent.
Though you’re correct about interruptions and sub-day splitting, in my experience it is entirely feasible to split your time X days vs. Y days without suffering context-switch overhead: since we’re sleeping anyway, we’re already forced to “boot up” each morning. I agree it’s harder to coordinate a team some of whom are full time, some are half time, and some are the other half time, but you’d have 40k to make up the lost team productivity.
A policy of downvoting posts that you disagree with will, over time, generate a “Unison” culture, driving away / evaporatively cooling dissent.
What do you think downvotes are for? It’s just a number, it’s not an insult.
(Now, if you want to suggest that perhaps I shouldn’t announce a downvote when replying with objections, perhaps I could be convinced of that. I think I’d appreciate a downvote-with-explanation more than a silent downvote.)
but you’d have 40k to make up the lost team productivity.
The man-month is mythical.
What do you think downvotes are for? It’s just a number, it’s not an insult.
Downvotes are for maintaining the quality of the conversations, not expressing agreement or disagreement. No matter what someone’s opinion is, as long as its incorrectness would not be made evident by reading the sequences, downvotes should only express disapproval of the quality of the argument, not the conclusion. In a case like this, no argument for the opinion that you disapprove of was made. Unless he refused to acknowledge the substance of your disagreement, which was not the case here, no downvote was warranted.
It’s not just that I disagreed with you, it’s that you are wrong in a more objective sense.
How can you tell the two apart?
A policy of downvoting posts that you disagree with will, over time, generate a “Unison” culture, driving away / evaporatively cooling dissent.
STL’s downvote was appropriate and he gave far more justification than was needed. I similarly downvoted both your comments here because they both gave prescriptions of behavior to others that was bad advice based on ignorance.
More appropriate reference classes: philosophers, writers, teachers, fundraisers.
I’m not happy with how big Eliezer’s salary is either, but having him work half-time as a programmer to verify the market value of his skills is probably not the best thing to do about it.
I’m not happy with how big Eliezer’s salary is either
What rational reasons do you have?
I can imagine two rational reasons for feeling that someone is overpaid. First, and most commonly, someone may be overpaid relative to their productivity. For example, a programmer who writes buggy, poorly designed code and makes $130k for it is clearly overpaid, as is a CEO who makes zillions while driving their company into the ground. This objection could be bluntly phrased as “Eliezer is a hack”: if you think so, say so. I suspect that very few people on LW hold this opinion, especially if, as I said above, they agree that good-enough AI researchers are literally beyond value. (That is, if you subscribe to the basic logic that AI holds the potential to unleash a technological singularity that can either destroy the world or remake it according to our wishes, then EY’s approach is the way to go about doing the latter. Even if you disagree with the particulars, he is obviously onto something, and such insights have value.)
Second, your objection may be that someone who works for a nonprofit shouldn’t be richly compensated. For example, you could probably go through Newsweek’s Fifteen Highest-Paid Charity CEOs and pick one where you could say “yeah, that’s a well-run organization, but that CEO is paid way too much—why don’t they voluntarily accept a smaller, but still generous, salary, like a few hundred K?” I don’t believe this second objection applies to EY, because he works in an expensive area. More importantly, the fundamental root of this objection would be “if X accepted less money, the nonprofit would have more resources to spend elsewhere”. That’s pretty obvious when you’re talking about mega-zillion CEO salaries. What about Eliezer’s case? What if he handed back, say, $10k of his salary to SIAI? That’s a significant hit in income for someone whose income matches expenses and whose expenses aren’t unreasonable, and it would be much less significant to SIAI. Finally, EY is already working 60 hours a week for SIAI, and you would want him to donate a chunk of his current salary on top of that? Really?
On the other hand, I can think of an irrational reason to be unhappy with Eliezer’s salary, which I think I’ll be too polite to mention here.