I think the null hypothesis is “the neutrino detector is lying” because the question we are most interested in is whether it is correctly telling us the sun has gone nova. If H0 is the null hypothesis, µ1 is the probability of a genuine neutrino event and µ2 is the probability of double sixes, then H0 is that µ1 - µ2 = 0. Since the probability of two dice coming up sixes is vastly larger than the probability of the sun going nova in our lifetime, the test is not fair.
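The arithmetic behind that intuition can be sketched with a quick Bayesian calculation. The prior for a nova here (1e-12) is an assumed, deliberately generous number just for illustration; the only number taken from the setup is the 1/36 chance the detector lies:

```python
# Bayesian take on the XKCD nova-detector setup.
# Assumption: prior probability the sun went nova tonight is ~1e-12
# (an invented, generously large number for illustration).
p_nova = 1e-12        # assumed prior that the sun actually went nova
p_lie = 1.0 / 36.0    # detector lies only on double sixes

# P(detector says "yes") = P(nova)*P(truthful) + P(no nova)*P(lie)
p_yes = p_nova * (1 - p_lie) + (1 - p_nova) * p_lie

# Posterior probability of a real nova, given the detector said "yes"
p_nova_given_yes = p_nova * (1 - p_lie) / p_yes
print(p_nova_given_yes)  # ~3.5e-11: almost certainly the detector is lying
```

Even with a prior that flatters the nova hypothesis, the posterior stays vanishingly small, which is the point: the 1/36 “lie” rate swamps it.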
noen
Ok, I do wonder how one would distinguish between perceived effects and real effects. The real effects of, say, civil rights legislation were greater freedom and opportunity for minorities. We are a better, more productive society when we, at least in theory, give everyone an equal chance to succeed. That’s the real material result of the ’60s civil rights movement.
The psychological effect on those who benefited was maybe “I am a valued member of society.” I’m not sure how one teases that apart from the positive effect of simply being able to get a job or a loan without being discriminated against. I am just wondering out loud. I really wonder how much of a difference perception or attitude makes over and above real material changes.
I suspect that my perceptions, positive or negative, of the results of an election are determined by whether or not I experience real benefit or harm. I also suspect that we sort of backtrack and revise our memories to convince ourselves that we are masters of our domain when the opposite may be true.
But I don’t know. I could be all wrong.
This is the wrong way to think about it. One’s vote matters not because in rare circumstances it might be decisive in selecting a winner. One’s vote matters because by voting you reaffirm the collective intentionality that voting is how we settle our differences. All states exist only through the consent of their people. By voting you are asserting your consent to the process and its results. Democracy is strengthened through the participation of the members of society. If people fail to participate, society itself suffers.
“On the contrary, most people don’t care whether it is conscious in some deep philosophical sense.”
Do you mean that people don’t care if they are philosophical zombies or not? I think they care very much. I also think that you’re eliding the point a bit by using “deep” as a way to hand-wave the problem away. The problem of consciousness is not some arcane issue that only matters to philosophers in their ivory towers. It is difficult. It is unsolved. And, and this is important, it is a very large problem, so large that we should not spend decades exploring false leads. I believe strong AI proponents have wasted 40 years of time and energy pursuing an ill-advised research program, resources that could have been better spent in more productive ways.
That’s why I think this is so important. You have to get things right, get your basic “vector” right, otherwise you’ll get lost. The problem is so large that once you make a mistake about what it is you are doing, you’re done for. The “brain stabbers” are, in my opinion, headed in the right direction. The “let’s throw more parallel processors connected in novel topologies at it” crowd are not.
“Moreover, the primary worry discussed on LW as far as AI is concerned is that the AI will bootstrap itself in a way that results in a very unpleasant bad singularity.”
Sounds like more magical thinking if you ask me. Is bootstrapping a real phenomenon? In the real world is there any physical process that arises out of nothing?
“And yes, I am familiar with behaviorism in the sense that is discussed in that section. But it still isn’t an attempt to explain consciousness.”
Yes it is. In every lecture I have heard recounting the history of the philosophy of mind, the behaviorism of the ’50s and early ’60s is covered, along with the main arguments for and against it as an explanation of consciousness. This is just part of the standard literature. I know that cognitive/behavioral therapeutic models are in wide use and very successful, but that is simply beside the point here.
“So I don’t follow you at all here, and it doesn’t even look like there’s any argument you’ve made here other than just some sort of conclusion.”
Are you kidding?! It was nothing BUT argument. Here, let me make it more explicit.
Premise 1 “If it is raining, Mr. Smith will use his umbrella.” Premise 2 “It is raining” Conclusion “therefore Mr. Smith will use his umbrella.”
That is a behaviorist explanation of consciousness. It is logically valid but still fails, because we all know that Mr. Smith just might decide not to use his umbrella. Maybe that day he decides he likes getting wet. You cannot deduce intent from behavior, and if you cannot deduce intent from behavior, then behavior cannot constitute intentionality.
“So, on LW there’s a general expectation of civility, and I suspect that that general expectation doesn’t go away when one punctuates with a winky-emoticon.”
It’s a joke, hon. I thought you would get the reference to Ned Block’s counterargument to behaviorism, which shows how an unconscious machine could pass the Turing test. I’m pretty sure that Steven Moffat must have been aware of it when he created the Teselecta.
Suppose we build a robot, and instead of a robot brain we put in a radio receiver. The robot can look and move just like any human. Suppose then that we take the nation of China and give everyone a transceiver and a rule they must follow: if an individual receives input state S1, they output state S2. They are all connected in a functional flowchart that perfectly replicates a human brain. The robot then looks, moves, and above all talks just like any human being. It passes the Turing test.
Is “Blockhead” (the name affectionately given to this robot) conscious?
No, it is not. A non-intelligent machine passes the behaviorist Turing test for an intelligent AI. Therefore behaviorism cannot explain consciousness, and an intelligent AI could never be constructed from a database of behaviors (which is essentially what all attempts at computer AI consist of: a database and a set of rules for accessing it).
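A minimal sketch of what “a database and a set of rules for accessing it” amounts to: a hypothetical lookup-table responder in the spirit of Blockhead. The table entries here are invented for illustration; a “real” Blockhead would need an entry for every possible conversational input, yet there would still be no understanding anywhere in the system:

```python
# A toy Blockhead: conversation as pure table lookup.
# The table below is an invented example, not a real chatbot.
RESPONSES = {
    "hello": "Hello! How are you today?",
    "how are you?": "I'm fine, thanks for asking.",
    "are you conscious?": "Of course I am. Aren't you?",
}

def blockhead(utterance: str) -> str:
    # Normalize and look up; no state, no intent, no semantics.
    key = utterance.strip().lower()
    return RESPONSES.get(key, "Interesting. Tell me more.")

print(blockhead("Hello"))              # Hello! How are you today?
print(blockhead("Are you conscious?")) # Of course I am. Aren't you?
```

The point of the thought experiment is that behavior alone cannot distinguish this table from a mind: the outputs can be arbitrarily convincing while the mechanism remains a dictionary lookup.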
Meaning.
The words on this page mean things. They are intended to refer to other things.
Is there something that it is like to be Siri? Still, Siri is a tool, and potentially a powerful one. But I feel no need to be afraid of Siri as Siri, any more than I am afraid of nuclear weapons in themselves. What frightens me is how people might misuse them, not the tools themselves. Focusing on the tools does not address the root issue, which is human nature and what social structures we have in place to make sure some clown doesn’t build a nuke in his basement.
Did ELIZA present the “dangers and promises” of AI? Weizenbaum’s secretary thought so. She thought it passed the Turing test. Did it? Will future AI tools really be indistinguishable from living beings? I doubt it. I think it will always be apparent to people that they are dealing with a software tool that makes it easier for them to do something.
If behaviorism has been rejected as an explanation for consciousness how can one appeal to behaviorism as a model for future AI?
--
“so what evidence for this claimed proportion is there?”
Oh, I was just being flippant. It is a law of the universe that if there is a joke to be made I must at least try for it. ;)
“I don’t see how this is a corollary. ”
Yeah, also not serious. I meant only to mock the eternal claim of fusion proponents that it is always “just around the corner”. I remember as a child in the ’70s reading breathless articles in Popular Science about the imminent breakthroughs in nuclear fusion “any day now”. Just like the AI researchers of that day. And 40 years later little has changed.
I do not mistake Google translate for a conscious entity. Neither does anyone else. I can see no reason to believe that will change in the next 40 years.
“Examples include tabletop designs that can be made by hobbyists.”
Well now, that was cool. But yeah, no net increase in energy. Still, good for him.
Plants do not count and have no awareness of time, or of anything at all. The exact method by which Venus flytraps activate is unknown, but it seems hard to me to attribute to them the ability to count. That kind of teleological explanation is something we are cognitively biased to give, but it fails to be explanatory.
Sunflowers do not turn their heads to face the sun because they want to catch more sunlight. They turn towards light because those cells that are in shadow receive more auxin which in turn stimulates the elongation of the cell walls causing the plant to grow in the opposite direction and towards the light. Natural selection will tend to favor those individuals that can gather more light than those which do not. There is no teleology involved.
I generally agree with point (1) but the point is irrelevant. Counting isn’t what makes 2 + 2 = 4 true. Although that is how we all learn to do math, by counting and memorizing addition and multiplication tables. I owe it all to my 3rd grade teacher. ;)
On point (2): “on our macro scale of reality, on the scale of things we perceive with our senses, discrete, separate objects are a feature of the map, not the territory; they exist in your mind, not the reality. In the reality, there’s just a lot of atoms everywhere”
There are no atoms at the macro scale. Or, if you like, atoms are everywhere. A chair is an “atom” of my dining room furniture set, and I can choose to count five items (four chairs and a table) or one item (one dining room set). How I choose to cut up the world will determine which answer I get. But I am very confident that rocks and trees and universities and constitutions do not exist in my mind. They have an objective ontology that is independent of my personal subjective needs, interests and desires, which is what it means for something to be real.
“Was 2+2=4 before humans were around to invent that equation?”
The statement: “2 + 2 = 4” is absolutely true because it is true in all possible worlds. Humans did not invent the equation, we invented the symbols and means of expressing it but the relation that is expressed in the words is an objective feature of the world that is true regardless of our opinions about it. Scientific facts have the world to word direction of fit. That is, they are true only to the extent they correspond to the world.
“we can certainly speak of single photons”
Only if we choose to observe them as particles. Photons have been observed experimentally to be both particles and waves. “The measurement apparatus detected strong nonlocality, which certified that the photon behaved simultaneously as a wave and a particle in our experiment. This represents a strong refutation of models in which the photon is either a wave or a particle.” This presents a significant challenge to models that insist the photon must be one or the other.
The confidence fairy has been shown not to exist. (The confidence fairy is the theory that the reason banks are not lending right now is a lack of confidence in the market.) So why should we believe that feelings of hopelessness or empowerment will affect the economy? (Productivity is an economic feature.) What seems to me more likely to affect productivity is whether or not one got a good night’s sleep the night before and ate a decent breakfast.
If folk psychology (hope, despair) is epiphenomenal, then there is no reason to believe it has causal effects in the world.
“Is this a variant of what it is like to be a bat?”
Is there something that it is like to be you? There are also decent arguments that qualia do matter. It is hardly a settled question. If anything, the philosophical consensus is that qualia are important.
“Whether some AI has qualia or not doesn’t change any of the external behavior,”
Yes, behaviorism is a very attractive solution. But presumably what people want is a living conscious artificial mind and not a useful house maid in robot form. I can get that functionality right now.
If I write a program that allows my PC to speak in perfect English and in a perfectly human voice, can my computer talk to me? Can it say hello? Yes, it can. Can it greet me? No, it cannot, because it cannot intend to say hello.
“Behaviorism as that word is classically defined isn’t an attempt to explain consciousness.”
Wikipedia? Really? Did you even bother to read the page, or are you just pointing to something on Wikipedia and believing that constitutes an argument? Look at section 5, “Behaviorism in philosophy”. Read that and follow the link to the Philosophy of Mind article. Read that. You will discover that behaviorism was at one time thought to be a valid theory of mind: that all we needed to do to explain the human mind was to describe human behavior.
“If it is raining, Mr. Smith will use his umbrella. It is raining, therefore Mr. Smith will use his umbrella.” Is this an adequate account of Mr. Smith’s mind? No, it isn’t, because consciousness is not behavior only.
If you are a fan of Doctor Who, is the Teselecta conscious? Is there something that it is like to be the Teselecta? My answer is no, there is nothing it is like to be a robot piloted by miniature people emulating the behavior of a real conscious person.
Don’t be a blockhead. ;)
I cannot imagine why this:
“Here at UR, “economics” is not the study of how real economies work. It is the study of how economies should work ”
should not bring to mind this:
“Here at Fantasy University, “physics” is not the study of how real physical principles work. It is the study of how physics should work.”
or should not raise giant red flags that you are about to be fed a steaming pile of horse shit. I don’t know about everyone else, but for me, the moment anyone purports to dictate how the world ought to be over and above how it actually is, they are engaged in creative fiction, not science.
When someone begins from such massive thought errors, what follows, however rigorous, cannot help but be equally flawed and is therefore not worth my time.
I predict that the search for AI will continue to live up to its proud tradition of failing to produce a viable AI for the indefinite future. Since the Chinese Room argument does refute the strong AI hypothesis, no AI will be possible on current hardware. An artificial brain that duplicates the causal functioning of an organic brain is necessary before an AI can be constructed.
I further predict that AI researchers will continue to predict imminent AI in direct proportion to the research grant dollars they are able to attract. Corollary: a stable nuclear fusion reactor will be built before a truly conscious artificial mind is. Neither will happen in the lifetime of anyone reading this.
How about “the probability of our sun going nova is zero and 36 times zero is still zero”?
Although… continuing with the XKCD theme if you divide by zero perhaps that would increase the odds. ;)
That is correct, you don’t know what semantic content is.
“I still don’t know what makes you so sure conciousness is impossible on an emulator.”
For the same reason that I know simulated fire will not burn anything. In order for us to create an artificial mind, which certainly must be possible, we must duplicate the causal relations that exist in real consciousnesses.
Let us imagine that you go to your doctor and he says, “Your heart is shot. We need to replace it. Lucky for you, we have a miniature supercomputer we can stick into your chest that can simulate the pumping action of a real heart down to the atomic level. Every atom, every material, every gasket of a real pump is precisely emulated to an arbitrary degree of accuracy.”
“Sign here.”
Do you sign the consent form?
Simulation is not duplication. In order to duplicate the causal effects of real-world processes it is not enough to represent them in symbolic notation, which is all a program is. To duplicate the action of a lever on a mass it is not enough to represent that action to yourself on paper or in a computer. You have to actually build a physical lever in the physical world.
In order to duplicate conscious minds, which certainly must be due to the activity of real brains, you must duplicate the causal relations that allow real brains to give rise to the real-world physical phenomenon we call consciousness. A representation of a brain is no more a real brain than a representation of a pump will ever pump a single drop of fluid.
None of this means we might not someday build an artificial brain that gives rise to an artificial conscious mind. But it won’t be done on a von Neumann machine. It will be done by creating real-world objects that have the same causal functions that real-world neurons or other structures in real brains do.
How could it be any other way?
“Because the telegraph analogy is actually a pretty decent analogy.”
No, it isn’t. Constructing analogies is for poets and fiction writers. Science does not construct analogies. The force on an accelerating mass isn’t analogous to F=ma, it IS F=ma. If what you said were true, that neurons are like telegraph stations and their dendrites the wires, then it could not be true that neurons can communicate without a direct connection or “wire” between them. But neurons can communicate without any synaptic connection between them (see “Neurons Talk Without Synapses”). Therefore the analogy is false.
“What makes you think a sufficiently large number of organized telegraph lines won’t act like a brain?”
Because that is an example of magical thinking. It is not based on a functional understanding of the phenomenon. “If I just pour more of chemical A into solution B I will get a bigger and better reaction.” We are strongly attracted to thinking like that. It’s probably why it took us thousands of years to really get how to do science properly.
“What do you mean by “strong AI is refuted”″
The strong AI hypothesis is that consciousness is software running on the hardware of the brain. Therefore one does not need to know or understand how brains actually work to construct a living conscious mind: any system that implements the right computer program with the right inputs and outputs has cognition in exactly the same literal sense that human beings have understanding, thought and memory. It was the belief of strong AI proponents such as Marvin Minsky at MIT and others that they were literally creating minds when writing their programs. They felt no need to stoop so low as to poke around in actual brains and get their hands dirty.
Computers are syntactical machines. The programs they execute are pure syntax and have no semantic content. Meaning is assigned; it is not intrinsic to symbolic logic. That is its strength. Since (1) programs are pure syntax and have no semantic content, (2) minds do have semantic content, and (3) syntax is neither sufficient for nor constitutive of semantics, it must follow that programs are not by themselves constitutive of, nor sufficient for, minds. The strong AI hypothesis is false.
Which means that IBM is wasting time, energy and money. But… perhaps their efforts will result in spin-off technology, so not all is lost.
Vacuums exist. Nearly frictionless planes and more or less perfectly rigid bodies actually exist. There is nothing wrong with abstraction based on objective reality. Claiming that one is about to declare how economies ought to work is not an abstraction based on a preexisting reality. It is an attempt to impose one’s own subjective needs, wants and desires on reality.
That is not science, that is pseudoscience.
The spherical cow is not “how science is done”; it is a joke. Jokes rely on reversing expectations, going counter to reality, for the surprise element. How science is actually done is that you begin with the intent to describe the real world, and from there you use whatever tools, intellectual or actual, are at your disposal in order to accomplish your goal.
If one’s goal is not to describe how economies actually work, one is not doing science. Declaring what one’s ideal economy ought to be is not the same as describing how a real economy would behave under ideal conditions. If I declare how photosynthesis ought to work I am not doing the same thing as describing how photosynthesis actually works under ideal conditions. It seems like a subtle distinction, but it is not, and failing to understand this difference has led to a lot of bad science by lesser minds.
Suppose a man goes to the supermarket with a shopping list given him by his wife on which are written the words “beans, butter, bacon and bread”. Suppose as he goes around with his shopping cart selecting these items, he is followed by a detective who writes down everything he takes. As they emerge from the store both the shopper and the detective will have identical lists. But the function of the lists is quite different. In the case of the shopper’s list the purpose of the list is, so to speak, to get the world to match the words; the man is supposed to make his actions fit the list. In the case of the detective, the purpose of the list is to make the words match the world; the man is supposed to make the list fit the actions of the shopper. This can be further demonstrated by observing the role of “mistake” in the two cases. If the detective gets home and suddenly realizes that the man bought pork chops instead of bacon, he can simply erase the word “bacon” and write “pork chops”. But if the shopper gets home and his wife points out that he has bought pork chops when he should have bought bacon he cannot correct the mistake by erasing “bacon” from the list and writing “pork chops”.
Scientists are detectives attempting to describe how the world behaves. If the world behaves differently than we expect we erase bacon and write down pork chops even if we really would prefer bacon. Idealists, fantasists and Austrian school economists want bacon on the detective’s list so they write down bacon and blame reality for not living up to their desires.
That’s religion, not science.
You’re getting old. The long term prognosis is that the condition is fatal. ;)
“It’d be hard for me to overstate my skepticism for the genre of popular political science books charging that their authors’ enemies are innately evil. I haven’t read Mooney’s book”
It is obvious you have not read it, because he makes no such claim, nor have I. In fact he ends the book with a newfound respect for conservatives. Loyalty, personal responsibility, and being willing to set aside one’s own desires for the good of the group are all admirable qualities. I myself do not despise conservatives in themselves. I do despise the hucksters and grifters who promote pseudoscience and conspiracy theories in order to enrich themselves. Those people find that a significant percentage of the population is easily manipulated by preying on their fears and prejudices. That percentage is over-represented by conservative personality types, and people with that kind of temperament tend to find political conservatism more to their liking. I have met Democrats with conservative personalities, but not many. Civil rights legislation in the ’60s was passed primarily by Republicans with liberal personalities; the reactionary types were in the Democratic Party.
Conservatives are not innately evil. No one is. All people are susceptible to certain cognitive biases, some people more than others. Other people have found they can manipulate those biases to their advantage. It is easy to do: you trigger the fear response, and as a result one’s rational centers literally shut down and areas of the brain associated with survival are activated.
“if we’re using “liberal” and “conservative” strictly to gauge desire for social change”
No, that’s not how it is used. Conservative means “resistant to change” and Liberal means “novelty seeking”. Political conservatives need not all be authoritarians but virtually all authoritarians would self select for conservative political organizations.
“Indeed, in this narrow sense Hitler, Mussolini, and others (though perhaps not Franco) might be considered liberals”
That’s absurd. Liberalism is not defined as a desire for social change. The authoritarian or conservative mindset would also seek social change because they wish to return to what they perceive as a traditional model for society.
Among candidate stars for going nova, I would think you could treat it as a random event. But Sol is not a candidate and so doesn’t even make it into the sample set. So it’s a very badly constructed setup. It’s like looking for a needle in 200 million haystacks but restricting yourself only to those haystacks you already know it cannot be in. Or do I have that wrong?