I apologize, but that does not look like a solution to the Gettier Problem. Could you elaborate?
Okay, the Gettier problem. I can explain the Gettier problem, but it’s just my explanation, not Eliezer’s.
The Gettier problem points out problems with the definition of knowledge as justified true belief. “Justified true belief” (JTB) is an attempt at defining knowledge. However, it falls into the classic philosophical trap of misusing intuition, and has a variety of other issues. Lukeprog discusses the weakness of conceptual analysis here.
Also, it’s only for irrational beings like humans that there is a distinction between “justified” and “belief.” An AI would simply have degrees of belief in something according to the strength of the justification, updating by Bayes’ rule. So JTB is clearly a human-centered definition, which doesn’t usefully define knowledge anyway.
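To make the contrast concrete, here is a minimal sketch (my own illustration with made-up numbers, not anything from this thread) of an agent that holds graded degrees of belief and updates them by Bayes’ rule, with no separate “justified” flag:

```python
# Sketch: a Bayesian agent holds a degree of belief and updates it
# with Bayes' rule; "justification" is just the accumulated evidence.
# All probabilities here are illustrative assumptions.

def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H|E) from prior P(H) and the two likelihoods."""
    p_evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_evidence

belief = 0.5                              # prior degree of belief in H
belief = bayes_update(belief, 0.9, 0.2)   # observe evidence favoring H
print(belief)                             # ~0.818: stronger justification, stronger belief
```

The point is that “belief” and “justification” collapse into one number: the strength of the evidence directly determines the degree of belief, so the binary JTB question never arises.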
Incidentally, I just re-read this post, which says:
Yudkowsky once wrote, “If there’s any centralized repository of reductionist-grade naturalistic cognitive philosophy, I’ve never heard mention of it.” When I read that I thought: What? That’s Quinean naturalism! That’s Kornblith and Stich and Bickle and the Churchlands and Thagard and Metzinger and Northoff! There are hundreds of philosophers who do that!
So perhaps Eliezer didn’t create original solutions to many of the problems I credited him with solving. But he certainly created them on his own. Like Leibniz and calculus, really.
Also, it’s only for irrational beings like humans that there is a distinction between “justified” and “belief.” An AI would simply have degrees of belief in something according to the strength of the justification, updating by Bayes’ rule. So JTB is clearly a human-centered definition, which doesn’t usefully define knowledge anyway.
I am skeptical that AIs will do pure Bayesian updates—it’s computationally intractable. An AI is very likely to have beliefs or behaviors that are irrational, to have rational beliefs that cannot be effectively proved to be such, and to have no reliable way to distinguish the two.
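A rough illustration of the intractability point (my own toy sketch, not from the thread): exact Bayesian updating over a full joint distribution needs one probability per possible world state, and the number of states is exponential in the number of propositions tracked.

```python
# Toy illustration: a full joint distribution over n binary
# propositions has 2^n entries, so exact updating over even a
# modest world model blows up exponentially.

def joint_table_size(n_binary_vars: int) -> int:
    """Number of entries in a full joint table over n binary variables."""
    return 2 ** n_binary_vars

for n in (10, 50, 300):
    print(n, joint_table_size(n))
# 300 binary variables already need more table entries (2^300)
# than there are atoms in the observable universe (~10^80).
```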
I am skeptical that AIs will do pure Bayesian updates—it’s computationally intractable.
Isn’t this also true for expected utility-maximization? Is a definition of utility that is precise enough to be usable even possible? Honest question.
An AI is very likely to have beliefs or behaviors that are irrational...
Yes, I wonder why there is almost no talk about biases in AI systems. Ideal AIs might be perfectly rational and merely computationally limited, but real artificial systems will have completely new sets of biases. As a simple example, take my digicam, which can detect faces. It sometimes recognizes faces where there are none, just as humans do, yet on very different occasions. Or take the answers of IBM Watson: some were wrong, but in completely new ways. That’s a real danger, in my opinion.
As a simple example, take my digicam, which can detect faces. It sometimes recognizes faces where there are none, just as humans do, yet on very different occasions.
I appreciate the example. It will serve me well. Upvoted.
I am aware of the Gettier Problem. I just do not see the phrase “the ability to constrain one’s expectations” as a proper conceptual analysis of “knowledge.” If it were a conceptual analysis of “knowledge”, it probably would be vulnerable to Gettierization. I love Bayesian epistemology. However, most Bayesian accounts which I have encountered either do away with knowledge-terms or redefine them in such a way that they entirely fail to match the folk term “knowledge”. Attempting to define “knowledge” is probably attempting to solve the wrong problem. This is a significant weakness of traditional epistemology.
So perhaps Eliezer didn’t create original solutions to many of the problems I credited him with solving. But he certainly created them on his own. Like Hooke and calculus, really.
I am not entirely familiar with Eliezer’s history. However, he is clearly influenced by Hofstadter, Dennett, and Jaynes. From just the first two, one could probably assemble a working account which is weaker than, but bears surface resemblance to, Eliezer’s espoused beliefs.
Also, I have never heard of Hooke independently inventing calculus. It sounds interesting however. Still, are you certain you are not thinking of Leibniz?
Isn’t this also true for expected utility-maximization? Is a definition of utility that is precise enough to be usable even possible? Honest question.
Honest answer: Yes. For example, 1 utilon per paperclip.
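Taking this literally for a moment (my own sketch; the function and the lotteries are invented for illustration), “1 utilon per paperclip” really is a precise, usable utility function, and expected utility maximization over it is straightforward:

```python
# Sketch: "1 utilon per paperclip" as a literal utility function,
# plus expected utility over two hypothetical lotteries.

def utility(paperclips: int) -> float:
    return float(paperclips)  # one utilon per paperclip

def expected_utility(lottery) -> float:
    """lottery: list of (probability, paperclip_count) pairs."""
    return sum(p * utility(n) for p, n in lottery)

safe = [(1.0, 100)]               # 100 clips for sure
risky = [(0.5, 250), (0.5, 0)]    # coin flip for 250 or nothing
print(expected_utility(safe), expected_utility(risky))  # 100.0 125.0
```

An expected utility maximizer with this utility function would pick the risky lottery, since 125 expected utilons beat 100.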
Also, I have never heard of Hooke independently inventing calculus. It sounds interesting however. Still, are you certain you are not thinking of Leibniz?
Oops, fixed.
I’ll respond to the rest of what you said later.