What is free will? Oops, wrong question. Free will is what a decision-making algorithm feels like from the inside.
I admire the phrase “what an algorithm feels like from the inside”. This is certainly one of Yudkowsky’s better ideas, if it is one of his. I think that one can see the roots of it in G.E.B. Still, this may well count as something novel.
Nonetheless, Yudkowsky is not the first compatibilist.
What is intelligence? The ability to optimize things.
One could define the term in such a way. I tend to take an instrumentalist view on intelligence. However, “the ability to optimize things” may well be a thing. You may as well call it intelligence, if you are so inclined.
This, nonetheless, may not be a solution to the question “what is intelligence?”. It seems as though most competent naturalists have moved past the question.
What is knowledge? The ability to constrain your expectations.
I apologize, but that does not look like a solution to the Gettier Problem. Could you elaborate?
What should I do about Newcomb’s problem? TDT answers this.
I have absolutely no knowledge of the history of Newcomb’s problem. I apologize.
Further apologies for the following terse statements:
I don’t think Fun theory is known to academia. Also, it looks like, at best, a contemporary version of eudaimonia.
The concept of CEV is neat. However, I think if one were to create an ethical version of the pragmatic definition of truth, “The good is the end of inquiry” would essentially encapsulate CEV. Well, as far as one can encapsulate a complex theory with a brief statement.
TDT is awesome. It was anticipated by Hofstadter’s superrationality, but so what?
I don’t mean to discount the intelligence of Yudkowsky. Further, it is extremely unkind of me to be so critical of him, considering how much he has influenced my own thoughts and beliefs. However, he has never written a “Two Dogmas of Empiricism” or a Naming and Necessity. Philosophical influence is something that probably can only be seen, if at all, in retrospect.
Of course, none of this really matters. He’s not trying to be a good philosopher. He’s trying to save the world.
I apologize, but that does not look like a solution to the Gettier Problem. Could you elaborate?
Okay, the Gettier problem. I can explain the Gettier problem, but it’s just my explanation, not Eliezer’s.
The Gettier problem points out difficulties with the definition of knowledge as justified true belief. “Justified true belief” (JTB) is an attempt at defining knowledge. However, it falls into the classic philosophical problem of misusing intuition, and it has a variety of other issues. Lukeprog discusses the weakness of conceptual analysis here.
Also, it’s only for irrational beings like humans that there is a distinction between “justified” and “belief.” An AI would simply have degrees of belief in something according to the strength of the justification, using Bayesian rules. So JTB is clearly a human-centered definition, which doesn’t usefully define knowledge anyway.
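That picture of graded belief can be sketched concretely. The prior and likelihoods below are made-up numbers, purely for illustration:

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E).
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior degree of belief in H after observing evidence E."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# A modest prior plus strong evidence yields a high but uncertain degree
# of belief -- there is no separate "justified" box to tick.
print(round(bayes_update(0.2, 0.9, 0.1), 3))  # 0.692
```

The justification does not sit alongside the belief; it simply determines how strong the belief is.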
Incidentally, I just re-read this post, which says:
Yudkowsky once wrote, “If there’s any centralized repository of reductionist-grade naturalistic cognitive philosophy, I’ve never heard mention of it.” When I read that I thought: What? That’s Quinean naturalism! That’s Kornblith and Stich and Bickle and the Churchlands and Thagard and Metzinger and Northoff! There are hundreds of philosophers who do that!
So perhaps Eliezer didn’t create original solutions to many of the problems I credited him with solving. But he certainly created them on his own. Like Leibniz and calculus, really.
Also, it’s only for irrational beings like humans that there is a distinction between “justified” and “belief.” An AI would simply have degrees of belief in something according to the strength of the justification, using Bayesian rules. So JTB is clearly a human-centered definition, which doesn’t usefully define knowledge anyway.
I am skeptical that AIs will do pure Bayesian updates—it’s computationally intractable. An AI is very likely to have beliefs or behaviors that are irrational, to have rational beliefs that cannot be effectively proved to be such, and no reliable way to distinguish the two.
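A rough way to see the intractability: a full joint distribution over n binary propositions has 2**n entries, so exact updating over all of them blows up fast. A toy illustration:

```python
# A full joint distribution over n binary propositions has 2**n entries,
# so exact Bayesian updating over all of them is hopeless for modest n.
for n in (10, 50, 100):
    print(f"{n} propositions -> {2 ** n} joint entries")
```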
I am skeptical that AIs will do pure Bayesian updates—it’s computationally intractable.
Isn’t this also true for expected utility-maximization? Is a definition of utility that is precise enough to be usable even possible? Honest question.
An AI is very likely to have beliefs or behaviors that are irrational...
Yes, I wonder why there is almost no talk about biases in AI systems. Ideal AIs might be perfectly rational but computationally limited, whereas real artificial systems will have completely new sets of biases. As a simple example, take my digicam, which can detect faces. It sometimes recognizes faces where there are none, just as humans do, yet on very different occasions. Or take the answers of IBM Watson: some were wrong, but in completely new ways. That’s a real danger, in my opinion.
As a simple example, take my digicam, which can detect faces. It sometimes recognizes faces where there are none, just as humans do, yet on very different occasions.
I appreciate the example. It will serve me well. Upvoted.
I am aware of the Gettier Problem. I just do not see the phrase “the ability to constrain one’s expectations” as being a proper conceptual analysis of “knowledge.” If it were a conceptual analysis of “knowledge”, it probably would be vulnerable to Gettierization. I love Bayesian epistemology. However, most Bayesian accounts which I have encountered either do away with knowledge-terms or redefine them in such a way that they entirely fail to match the folk term “knowledge”. Attempting to define “knowledge” is probably attempting to solve the wrong problem. This is a significant weakness of traditional epistemology.
So perhaps Eliezer didn’t create original solutions to many of the problems I credited him with solving. But he certainly created them on his own. Like Hooke and calculus, really.
I am not entirely familiar with Eliezer’s history. However, he is clearly influenced by Hofstadter, Dennett, and Jaynes. From just the first two, one could probably assemble a working account which is weaker than, but has surface resemblances to, Eliezer’s espoused beliefs.
Also, I have never heard of Hooke independently inventing calculus. It sounds interesting however. Still, are you certain you are not thinking of Leibniz?
To quickly sum up Newcomb’s problem: it is a decision problem in which the choice that looks more “rational” to a traditional probability-based decision theory, taking both boxes, leaves you with a great deal less money. TDT takes steps to avoid getting stuck two-boxing, while still applying in the vast majority of other situations.
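For concreteness, here is the standard expected-value arithmetic. The payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box) are the usual stipulations, and the 99% predictor accuracy is an arbitrary illustrative choice:

```python
# Expected payoffs in Newcomb's problem for a predictor with accuracy p.
# The opaque box contains $1,000,000 iff the predictor foresaw one-boxing;
# the transparent box always contains $1,000.
def expected_value(one_box: bool, p: float = 0.99) -> float:
    if one_box:
        return p * 1_000_000              # get the million iff predicted correctly
    return 1_000 + (1 - p) * 1_000_000    # both boxes; million only on a miss

print(round(expected_value(True)))   # one-boxing: 990000
print(round(expected_value(False)))  # two-boxing: 11000
```

For any reasonably accurate predictor, the one-boxer walks away with far more, which is exactly the tension with decision theories that recommend two-boxing.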
Isn’t this also true for expected utility-maximization? Is a definition of utility that is precise enough to be usable even possible? Honest question.
Honest answer: yes. For example, 1 utilon per paperclip.
Also, I have never heard of Hooke independently inventing calculus. It sounds interesting however. Still, are you certain you are not thinking of Leibniz?
Oops, fixed.
I’ll respond to the rest of what you said later.
Apologies, I know what Newcomb’s problem is. I simply do not know anything about its history and the history of its attempted solutions.