I think you should get Eliezer to say the accurate but arrogant-sounding things, because everyone already knows he’s like that. You yourself, Luke, should be more careful to maintain a humble tone.
If you need people to say arrogant things, make them ghost-write for Eliezer.
Personally, I think that a lot of Eliezer’s arrogance is deserved. He’s explained most of the big questions in philosophy, either by personally solving them or by brilliantly summarizing other people’s solutions. CFAI was way ahead of its time, as TDT still is. So he can feel smug. He’s got a reputation as an arrogant eccentric genius anyway.
But the rest of the organisation should try to be more careful. You should imitate Carl Shulman rather than Eliezer.
I think having people ghost-write for Eliezer is a deeply suboptimal solution in the long run. It removes integrity from the process. SI would become insufficiently distinguishable from Scientology or a political party if it did this.
Eliezer is a real person. He is not “big brother” or some other fictional figurehead used to manipulate the followers. The kind of people you want, and have, following SI or lesswrong will discount Eliezer too much when (not if) they find out he has become a fiction employed to manipulate them.
Yeah, I kinda agree. I was slightly exaggerating my position for clarity.
Maybe not full-on ghost-writing. But occasionally, having someone around who can say what he wants without further offending anybody can be useful. Like, part of the reason the Sequences are awesome is that he personally claims that they are. Also, Eliezer says:
I should note that if I’m teaching deep things, then I view it as important to make people feel like they’re learning deep things, because otherwise, they will still have a hole in their mind for “deep truths” that needs filling, and they will go off and fill their heads with complete nonsense that has been written in a more satisfying style.
So occasionally SingInst needs to say something that sounds arrogant.
I just think that when possible, Eliezer should say those things.
He’s explained most of the big questions in philosophy, either by personally solving them or by brilliantly summarizing other people’s solutions.
As a curiosity, what would the world look like if this were not the case? I mean, I’m not even sure what it means for such a sentence to be true or false.
Addendum: Sorry, that was way too hostile. I accidentally pattern-matched your post to something that an Objectivist would say. It’s just that, in professional philosophy, there does not seem to be a consensus on what a “problem of philosophy” is. Likewise, there does not seem to be a consensus on what a solution to one would look like. It seems that most “problems” of philosophy are dismissed, rather than ever solved.
Here are examples of these philosophical solutions. I don’t know which of these he solved personally, and which he simply summarized others’ answers to:
What is free will? Oops, wrong question. Free will is what a decision-making algorithm feels like from the inside.
What is intelligence? The ability to optimize things.
What is knowledge? The ability to constrain your expectations.
What should I do in Newcomb’s problem? TDT answers this.
...other examples include inventing Fun theory, using CEV to make a better version of utilitarianism, and arguing for ethical injunctions using TDT.
And so on. I know he didn’t come up with these on his own, but at the least he brought them all together and argued convincingly for his answers in the Sequences.
I’ve been trying to figure out these problems for years. So have lots of philosophers. I have read these various philosophers’ proposed solutions, and disagreed with them all. Then I read Eliezer, and agreed with him. I feel that this is strong evidence that Eliezer has actually created something of value.
What is free will? Oops, wrong question. Free will is what a decision-making algorithm feels like from the inside.
I admire the phrase “what an algorithm feels like from the inside”. This is certainly one of Yudkowsky’s better ideas, if it is one of his. I think that one can see the roots of it in G.E.B. Still, this may well count as something novel.
Nonetheless, Yudkowsky is not the first compatibilist.
What is intelligence? The ability to optimize things.
One could define the term in such a way. I tend to take an instrumentalist view of intelligence. However, “the ability to optimize things” may well be a thing. You may as well call it intelligence, if you are so inclined.
This, nonetheless, may not be a solution to the question “what is intelligence?”. It seems as though most competent naturalists have moved past the question.
What is knowledge? The ability to constrain your expectations.
I apologize, but that does not look like a solution to the Gettier Problem. Could you elaborate?
What should I do in Newcomb’s problem? TDT answers this.
I have absolutely no knowledge of the history of Newcomb’s problem. I apologize.
Further apologies for the following terse statements:
I don’t think Fun theory is known in academia. Also, it looks like, at best, a contemporary version of eudaimonia.
The concept of CEV is neat. However, I think if one were to create an ethical version of the pragmatic definition of truth, “The good is the end of inquiry” would essentially encapsulate CEV. Well, as far as one can encapsulate a complex theory with a brief statement.
TDT is awesome. Anticipated by Hofstadter’s superrationality, but so what?
I don’t mean to discount the intelligence of Yudkowsky. Further, it is extremely unkind of me to be so critical of him, considering how much he has influenced my own thoughts and beliefs. However, he has never written a “Two Dogmas of Empiricism” or a Naming and Necessity. Philosophical influence is something that probably can only be seen, if at all, in retrospect.
Of course, none of this really matters. He’s not trying to be a good philosopher. He’s trying to save the world.
I apologize, but that does not look like a solution to the Gettier Problem. Could you elaborate?
Okay, the Gettier problem. I can explain it, but it’s just my explanation, not Eliezer’s.
The Gettier problem points out problems with the definition of knowledge as justified true belief. “Justified true belief” (JTB) is an attempt at defining knowledge. However, it falls into the classic philosophical trap of misusing intuition, and has a variety of other issues. Lukeprog discusses the weaknesses of conceptual analysis here.
Also, it’s only for irrational beings like humans that there is a distinction between “justified” and “belief.” An AI would simply have degrees of belief in something according to the strength of the justification, using Bayesian rules. So JTB is clearly a human-centered definition, which doesn’t usefully define knowledge anyway.
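To make that concrete, here is a minimal sketch of a Bayesian degree-of-belief update. The hypothesis, likelihoods, and numbers are all toy assumptions for illustration, not anything from this thread:

```python
# Minimal sketch: a degree of belief updated by Bayes' rule.
# All probabilities below are toy values chosen for illustration.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) given P(H) and the two likelihoods of evidence E."""
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    return prior * p_e_given_h / p_e

belief = 0.5                              # prior degree of belief in H
belief = bayes_update(belief, 0.9, 0.2)   # strong evidence: belief rises a lot
belief = bayes_update(belief, 0.6, 0.5)   # weak evidence: belief rises a little
print(round(belief, 3))                   # one graded number, no separate "justified" vs. "belief"
```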
Incidentally, I just re-read this post, which says:
Yudkowsky once wrote, “If there’s any centralized repository of reductionist-grade naturalistic cognitive philosophy, I’ve never heard mention of it.” When I read that I thought: What? That’s Quinean naturalism! That’s Kornblith and Stich and Bickle and the Churchlands and Thagard and Metzinger and Northoff! There are hundreds of philosophers who do that!
So perhaps Eliezer didn’t create original solutions to many of the problems I credited him with solving. But he certainly created them on his own. Like Leibniz and calculus, really.
Also, it’s only for irrational beings like humans that there is a distinction between “justified” and “belief.” An AI would simply have degrees of belief in something according to the strength of the justification, using Bayesian rules. So JTB is clearly a human-centered definition, which doesn’t usefully define knowledge anyway.
I am skeptical that AIs will do pure Bayesian updates—it’s computationally intractable. An AI is very likely to have beliefs or behaviors that are irrational, to have rational beliefs that cannot be effectively proved to be such, and to have no reliable way to distinguish the two.
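A quick back-of-the-envelope way to see the intractability worry (my numbers, purely illustrative): a full joint distribution over n binary propositions has roughly 2^n free parameters, and exact updating must in the worst case touch all of them.

```python
# A full joint distribution over n binary propositions has 2**n - 1
# free parameters; exact Bayesian updating over it is worst-case
# exponential. The blow-up is visible even at small n:
for n in (10, 50, 100):
    print(n, 2**n - 1)
# n = 100 already needs ~1.3e30 parameters, far beyond any physical memory.
```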
I am skeptical that AIs will do pure Bayesian updates—it’s computationally intractable.
Isn’t this also true for expected-utility maximization? Is a definition of utility that is precise enough to be usable even possible? Honest question.
An AI is very likely to have beliefs or behaviors that are irrational...
Yes, I wonder why there is almost no talk about biases in AI systems. Ideal AIs might be perfectly rational but computationally limited; actual artificial systems will have completely new sets of biases. As a simple example, take my digicam, which can detect faces. It sometimes recognizes faces where there are none, just as humans do, yet on very different occasions. Or take the answers of IBM Watson: some were wrong, but in completely new ways. That’s a real danger, in my opinion.
As a simple example, take my digicam, which can detect faces. It sometimes recognizes faces where there are none, just as humans do, yet on very different occasions.
I appreciate the example. It will serve me well. Upvoted.
I am aware of the Gettier Problem. I just do not see the phrase “the ability to constrain one’s expectations” as a proper conceptual analysis of “knowledge.” If it were a conceptual analysis of “knowledge,” it would probably be vulnerable to Gettierization. I love Bayesian epistemology. However, most Bayesian accounts I have encountered either do away with knowledge-terms or redefine them in such a way that they entirely fail to match the folk term “knowledge.” Attempting to define “knowledge” is probably attempting to solve the wrong problem. This is a significant weakness of traditional epistemology.
So perhaps Eliezer didn’t create original solutions to many of the problems I credited him with solving. But he certainly created them on his own. Like Hooke and calculus, really.
I am not entirely familiar with Eliezer’s history. However, he is clearly influenced by Hofstadter, Dennett, and Jaynes. From just the first two, one could probably assemble a working account which is weaker than, but bears surface resemblance to, Eliezer’s espoused beliefs.
Also, I have never heard of Hooke independently inventing calculus. It sounds interesting, however. Still, are you certain you are not thinking of Leibniz?
To quickly sum up Newcomb’s problem: it is a decision problem in which choosing the seemingly more “rational” option leaves a traditional, probability-based decision theory with a great deal less money. TDT takes steps to avoid getting stuck two-boxing (the seemingly more “rational” of the two choices) while still applying in the vast majority of other situations.
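For concreteness, here is the naive expected-value arithmetic under the standard payoffs ($1,000 in the transparent box; $1,000,000 in the opaque box iff the predictor foresaw one-boxing), with predictor accuracy p as a free parameter. This is the simple calculation that motivates one-boxing, not TDT itself:

```python
# Newcomb's problem: naive expected value as a function of predictor accuracy p.
# Standard payoffs assumed: $1,000 transparent box, $1,000,000 opaque box.

def ev_one_box(p):
    # Predictor foresaw one-boxing with probability p: opaque box is full.
    return p * 1_000_000

def ev_two_box(p):
    # Foreseen (prob p): only the $1,000. Missed (prob 1 - p): $1,001,000.
    return p * 1_000 + (1 - p) * 1_001_000

p = 0.9
print(ev_one_box(p))  # 900000.0
print(ev_two_box(p))  # 101000.0 -- one-boxing wins for any reasonably accurate predictor
```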
I’ve recommended this before, I think.
Isn’t this also true for expected-utility maximization? Is a definition of utility that is precise enough to be usable even possible? Honest question.
Honest answer: Yes. For example, 1 utilon per paperclip.
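As a minimal sketch of what such a usable utility function might look like (a toy paperclip counter over a made-up world representation; purely illustrative):

```python
# Toy utility function: 1 utilon per paperclip in the world state.
# "world" is just a list of object labels -- a stand-in representation.

def utility(world):
    return sum(1 for obj in world if obj == "paperclip")

world = ["paperclip", "stapler", "paperclip"]
print(utility(world))  # 2 -- precise, usable, and indifferent to everything else
```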
Also, I have never heard of Hooke independently inventing calculus. It sounds interesting, however. Still, are you certain you are not thinking of Leibniz?
Oops, fixed.
I’ll respond to the rest of what you said later.
To quickly sum up Newcomb’s problem: it is a decision problem in which choosing the seemingly more “rational” option leaves a traditional, probability-based decision theory with a great deal less money. TDT takes steps to avoid getting stuck two-boxing (the seemingly more “rational” of the two choices) while still applying in the vast majority of other situations.
Apologies, I know what Newcomb’s problem is. I simply do not know anything about its history and the history of its attempted solutions.
...efficiently.
Most readers will misinterpret that.
The question for most was/is instead “Formally, why should I one-box on Newcomb’s problem?”