I see here a Newcomb-like situation, but in the reverse direction—the fire department refused to help the guy in order to counterfactually make him pay his $75.
To me this distinction is what makes consciousness distinct and special. I think it is a fascinating consequence of a certain pattern of interacting systems. This implies that conscious feelings occur all over the place; perhaps every feedback system is feeling something.
This sounds like the point Pinker makes in How the Mind Works—that apart from the problem of consciousness, concepts like “thinking” and “knowing” and “talking” are actually very simple:
(...) Ryle and other philosophers argued that mentalistic terms such as “beliefs,” “desires,” and “images” are meaningless and come from sloppy misunderstandings of language, as if someone heard the expression “for Pete’s sake” and went around looking for Pete. Simpatico behaviorist psychologists claimed that these invisible entities were as unscientific as the Tooth Fairy and tried to ban them from psychology.
And then along came computers: fairy-free, fully exorcised hunks of metal that could not be explained without the full lexicon of mentalistic taboo words. “Why isn’t my computer printing?” “Because the program doesn’t know you replaced your dot-matrix printer with a laser printer. It still thinks it is talking to the dot-matrix and is trying to print the document by asking the printer to acknowledge its message. But the printer doesn’t understand the message; it’s ignoring it because it expects its input to begin with ‘%!’ The program refuses to give up control while it polls the printer, so you have to get the attention of the monitor so that it can wrest control back from the program. Once the program learns what printer is connected to it, they can communicate.” The more complex the system and the more expert the users, the more their technical conversation sounds like the plot of a soap opera.
Behaviorist philosophers would insist that this is all just loose talk. The machines aren’t really understanding or trying anything, they would say; the observers are just being careless in their choice of words and are in danger of being seduced into grave conceptual errors. Now, what is wrong with this picture? The philosophers are accusing the computer scientists of fuzzy thinking? A computer is the most legalistic, persnickety, hard-nosed, unforgiving demander of precision and explicitness in the universe. From the accusation you’d think it was the befuddled computer scientists who call a philosopher when their computer stops working rather than the other way around. A better explanation is that computation has finally demystified mentalistic terms. Beliefs are inscriptions in memory, desires are goal inscriptions, thinking is computation, perceptions are inscriptions triggered by sensors, trying is executing operations triggered by a goal.
Badly formulated question. I think “consciousness” as subjective experience/ability of introspection/etc. is a concept we all intuitively know (from one example, but still...) and more or less agree on. Do you believe in the color red?
What’s under discussion is whether that intuitive concept is possible to be mapped to a specific property, and on what level. Assuming that is the question, I believe a mathematical structure (algorithm?) could be meaningfully called conscious or not conscious.
However, I wouldn’t be surprised if it could be “dissolved” into some more specific, more useful properties, making the original concept appear too simplistic (I believe Dennett said something like this in Consciousness Explained).
Saying that “what we perceive as consciousness” has to exist by itself as a real (epiphenomenal) thing seems just silly to me. But then again I probably should read some Chalmers to understand the zombist side more clearly.
AFAIK some people subvocalize while reading, some don’t. Is this preventing you from reading quickly?
(I’ve heard claims that eliminating subvocalization is the first step to faster reading, although Wikipedia doesn’t agree. As far as I can tell, I don’t subvocalize while reading (especially when reading English text, in which I don’t link words strongly to pronunciation), and although I have some problems with concentration, I still read at about 300 WPM. One of my friends claims ve’s unable to read faster than speech due to subvocalization.)
I don’t know how we could overcome the boundary of subjective first-person experience with natural language here. If it is the case that humans differ fundamentally in their perception of outside reality and inside imagination, then we might simply misunderstand each other’s definitions and descriptions of certain concepts and eventually come up with the wrong conclusions.
While it does sound dangerously close to the “is my red like your red” problem, I think there is much that can be done before you leave the issue as hopelessly subjective. Your own example of being/not being able to visualise faces suggests that there are some points on which you can compare the experiences, so such a heterophenomenological approach might give some results (or, more probably, someone already researched this and the results are available somewhere :) ).
I suspect such visualisation is not a binary ability but a spectrum of “realness”, a skill you can be better or worse at. I don’t identify with your description fully, I wouldn’t call what my imagination does “entering the Matrix”, but in some ways it’s like actual sensory input, just much less intense.
I also observed this spectrum in my dreams—some are more vivid and detailed, some more like the waking level of imagination, and some remain mostly on the conceptual level.
I would be very interested to know if it’s possible to improve your imagination’s vividness by training.
“In the 5617525 times this simulation has run, players have won $664073 And by won I mean they have won back $664073 of the $5617525 they spent (11%).”
Either it’s buggy or there is some tampering with data going on.
Also, several Redditors claim to have won—maybe the simulator is just poorly programmed.
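(A quick way to sanity-check such a claim would be to recompute the expected return yourself. Below is a minimal Monte Carlo sketch of that check; the ticket price, prizes, and odds are purely hypothetical placeholders, not the real game’s figures.)

```python
import random

# Hypothetical prize table: (probability, payout). Placeholder values only --
# substitute the real game's published odds. Ticket price assumed to be $1.
PRIZES = [(1 / 1_000_000, 50_000), (1 / 10_000, 100), (1 / 100, 2)]
TICKET_PRICE = 1

def simulate(n_tickets):
    """Return (total spent, total won) over n_tickets independent draws."""
    spent = n_tickets * TICKET_PRICE
    won = 0
    for _ in range(n_tickets):
        r = random.random()
        cumulative = 0.0
        for p, payout in PRIZES:
            cumulative += p
            if r < cumulative:
                won += payout
                break
    return spent, won

spent, won = simulate(1_000_000)
print(f"Won back ${won} of ${spent} spent ({100 * won / spent:.0f}%)")
# A large run should land near the analytic expected return,
# sum(p * payout for p, payout in PRIZES) / TICKET_PRICE,
# so a reported figure far from that suggests a bug (or tampering).
```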
Let’s make them wear hooded robes and call them Confessors.
I’m not sure if non-interference is really the best thing to precommit to—if we encounter a pre-AI civilization that still has various problems, death etc., maybe what {the AI they would have built} would have liked more is for us to help them (in a way preserving their values).
If a superintelligence discovers a concept of value-preserving help (or something like CEV?) that is likely to be universal, shouldn’t it precommit to applying it to all encountered aliens?
For example, you might have some herb combination that “restores HP”, but whenever you use it, you strangely lose HP that more than cancels what it gave you.
What if instead of being useless (by having an additional cancelling effect), magical potions etc. had no effect at all? If HP isn’t explicitly stated, you can make the player feel like he’s regaining health (e.g. by some visual cues), but in reality he’d die just as often.
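(A toy sketch of that idea; the class and method names are made up for illustration, not taken from any actual game. The “potion” plays the usual healing cues but never touches the hidden HP value.)

```python
class Player:
    """Toy player whose HP is never shown; the UI gives only qualitative cues."""

    def __init__(self):
        self._hp = 100  # hidden from the player

    def drink_healing_potion(self):
        # Placebo: show the usual healing feedback, deliberately leave _hp alone.
        print("You feel warmth spreading through your body.")

    def take_hit(self, damage):
        self._hp -= damage
        if self._hp <= 0:
            print("You die.")  # ...exactly as often as without the potion
        elif self._hp < 30:
            print("You feel badly wounded.")
        else:
            print("You shrug off the blow.")
```

Since the number is never displayed, the player has no way to tell the placebo from a real potion except by noticing they die just as often.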
RPGs (and roguelikes) can involve a lot of optimization/powergaming; the problem is that powergaming could be called rational already. You could:
- explicitly make optimization a part of the game’s storyline (as opposed to it being unnecessary (usually games want you to satisfice, not maximize) and in conflict with the story)
- create some situations where the obvious rules of thumb (gather the strongest items, etc.) don’t apply—make the player shut up and multiply
- create situations in which the real goal is not obvious (e.g. it seems like you should power up as always, but the best choice is to focus on something else)
Sorry if this isn’t very fleshed-out, just a possible direction.
Good point, I missed that. So MWI seems to be even subjectively unconfirmable...
If I commit quantum suicide 10 times and live, does my estimate of MWI being true change? It seems like it should, but on the other hand it doesn’t for an external observer with exactly the same data...
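(A minimal Bayes sketch of that tension, assuming a 50% prior on MWI and a 1/2 survival chance per trial; the “first-person” likelihood of 1 is the quantum-suicide assumption that a surviving branch always exists and is always the one experienced.)

```python
# Ten quantum-suicide trials, each survived with probability 1/2 in a single world.
prior_mwi = 0.5
p_survive_single_world = 0.5 ** 10

# External observer: survival has the same likelihood under both hypotheses,
# so the evidence cancels and the posterior equals the prior.
posterior_external = (prior_mwi * p_survive_single_world) / (
    prior_mwi * p_survive_single_world
    + (1 - prior_mwi) * p_survive_single_world
)

# First person, under the quantum-suicide assumption: P(observe survival | MWI) = 1.
posterior_first_person = (prior_mwi * 1.0) / (
    prior_mwi * 1.0 + (1 - prior_mwi) * p_survive_single_world
)

print(posterior_external)      # 0.5    -- unchanged
print(posterior_first_person)  # ~0.999 -- near certainty
```

Whether the first-person likelihood really is 1 is exactly the contested anthropic assumption, and that is where the apparent conflict with the external observer comes from.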
I think you should link to some learning resources about go in your article, for people who want to start.
I would also add something about “guessing the teacher’s password”.
If you do things you saw a stronger player do, but don’t understand them, you will sooner or later be punished—either because you applied the move in a situation where it doesn’t work; or because you don’t know how to continue.
We are, according to consensus (which I do not dispute, since it’s well founded), slowly approaching heat death. If I recall correctly, we are supposed to approach maximum entropy asymptotically. Can we, with our current knowledge, completely rule out the possibility of some kind of computational machinery existing and waking up every now and then (at longer and longer intervals) in the wasteland universe to churn a few cycles of a simulated universe?
Dyson’s eternal intelligence. Unfortunately I know next to nothing about physics so I have no idea how this is related to what we know about the universe.
Isn’t the problem with friendly extraterrestrials analogous to Friendly AI? (In that they’re much less likely than unFriendly ones).
The aliens can have “good” intentions but probably won’t share our values, making the end result extremely undesirable (Three Worlds Collide).
Another option is for the aliens to be willing to implement something like CEV toward us. I’m not sure how likely that is. Would we implement CEV for Babyeaters?
Thank you.
The idea reminded me of Moravec’s thoughts on death:
When we die, the rules surely change. As our brains and bodies cease to function in the normal way, it takes greater and greater contrivances and coincidences to explain continuing consciousness by their operation. We lose our ties to physical reality, but, in the space of all possible worlds, that cannot be the end. Our consciousness continues to exist in some of those, and we will always find ourselves in worlds where we exist and never in ones where we don’t. The nature of the next simplest world that can host us, after we abandon physical law, I cannot guess. Does physical reality simply loosen just enough to allow our consciousness to continue? Do we find ourselves in a new body, or no body? It probably depends more on the details of our own consciousness than did the original physical life. Perhaps we are most likely to find ourselves reconstituted in the minds of superintelligent successors, or perhaps in dreamlike worlds (or AI programs) where psychological rather than physical rules dominate. Our mind children will probably be able to navigate the alternatives with increasing facility. For us, now, barely conscious, it remains a leap in the dark.
How is having children at all similar?
I think people can feel a sense of accomplishment when their child achieves something they wanted but never got around to.
Look at the recently posted reading list. Pick some stuff, study and discuss. If you have a good “fighting spirit” and desire to become stronger, don’t waste it on writing fanfiction...