As a simple example, take my digicam, which can detect faces. It sometimes recognizes faces where there are in fact no faces, just as humans do, though on very different occasions.
I appreciate the example. It will serve me well. Upvoted.
The ability to optimize things
...efficiently.
What is knowledge? The ability to constrain your expectations.
Most readers will misinterpret that.
What should I do with Newcomb’s problem? TDT answers this.
The question for most was/is instead “Formally, why should I one-box on Newcomb’s problem?”
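For readers who want the arithmetic rather than the formalism, here is a minimal sketch of the expected-value comparison that motivates one-boxing. It is not TDT itself, just the naive evidential calculation; the predictor accuracy of 0.99 and the standard $1,000 / $1,000,000 payoffs are assumptions chosen for illustration.

```python
# Toy expected-value comparison for Newcomb's problem (illustrative numbers only).
# Assumed standard payoffs: $1,000 in the transparent box, $1,000,000 in the
# opaque box, and a predictor that anticipates your choice with probability p.

def expected_payoff(one_box: bool, p: float = 0.99) -> float:
    if one_box:
        # Predictor right with probability p: the opaque box is full.
        return p * 1_000_000 + (1 - p) * 0
    # Predictor right with probability p: the opaque box is empty,
    # but you still collect the transparent $1,000.
    return p * 1_000 + (1 - p) * (1_000_000 + 1_000)

print(expected_payoff(one_box=True))   # ~990,000
print(expected_payoff(one_box=False))  # ~11,000
```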
We should then apply a prior probability to each reference class (on general grounds of simplicity, economy, overall reasonableness, or whatever) as well as a prior probability to each hypothesis
What is applying a prior probability to a reference class? As opposed to applying a prior probability to a hypothesis?
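One way to read the distinction, as a toy sketch rather than the original poster's formalism: a prior over reference classes induces a prior over hypotheses by marginalization, so the two assignments operate at different levels. The class names, hypothesis names, and numbers below are all made up for illustration.

```python
# Hypothetical priors over two reference classes.
class_priors = {"class_A": 0.7, "class_B": 0.3}

# Hypothetical P(hypothesis | reference class).
hyp_given_class = {
    "class_A": {"H1": 0.9, "H2": 0.1},
    "class_B": {"H1": 0.2, "H2": 0.8},
}

# Marginal prior over hypotheses: P(H) = sum over classes of P(class) * P(H | class).
hyp_priors = {}
for c, p_c in class_priors.items():
    for h, p_h_given_c in hyp_given_class[c].items():
        hyp_priors[h] = hyp_priors.get(h, 0.0) + p_c * p_h_given_c

print(hyp_priors)  # ~{'H1': 0.69, 'H2': 0.31}
```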
This is vague enough that it might plausibly induce equally abstract solutions to the same problem, as it doesn’t have many details to criticize. That would be good.
More likely, it won’t begin a dialogue at any level of abstraction other than those below it. This is the problem with proposing solutions as a conversation starter.
xenophobia
What do you mean? When in history are you referring to?
To me, it would make most sense to declare Crocker’s Rules in certain contexts (subjects, settings, individuals) rather than in general.
More generally, the optimal social codes for different contexts are likely different. Why expect that what is optimal for one subject, setting, or individual is optimal for another?
Use it sparingly, or if possible not at all.
Use it on yourself.
Actually, first stop using it on yourself unwittingly.
Perhaps the sunk cost fallacy is useful because without it...
This sounds like a fake justification. For every justification of a thing by pointing to the positive consequences of it, one can ask how much better that thing is than other things would be.
I expect evolution to produce beings at local optima according to its criteria, which often results in a solution similar to what would be the best solution according to human criteria. But it’s often significantly different, and rarely the same.
For every systematic tendency I have to deviate from the truth, I can ask myself the leading question “How does this help me?” and I should expect to find a good evolutionary answer. More than that, I would expect to actually be prone to justifying the status quo according to my personal criteria, rather than evolution’s. Each time I discover that what I already do habitually is best, according to my criteria, in the modern (not evolutionary) environment, I count it as an amazing coincidence, as the algorithms that produced my behavior are not optimized for that.
your expectations were very irrational.
It is a bad sign that you labeled her expectations with that symbol alone.
Less Wrong is entertainment
This looks like the beginning of an argument about whether or not LW is “really entertainment.” If it is really entertainment, then that doesn’t prevent it from being useful in any other way, unless its being entertainment precludes it from being those things by definition, which would of course be irrelevant.
Saying that LW is entertainment is somewhat relevant as an evolutionary debunking argument: it explains its popularity as coming from something other than usefulness, which, all else equal, makes it less likely that LW is useful. But I don’t like how the comment was phrased, nor is the argument terribly strong.
the real reason
Almost all causes have multiple effects, almost all effects have multiple causes.
Your comment is far below your usual standard. Standing alone, it implies a broken ideology and worldview, and looks like many a useless internet comment. Only from your other comments is it clear that this one is an aberration: a bad one, one that looks as if it were written by someone else.
Everyone has to have a worst comment; my worst is probably worse. But please rethink this issue, or express your thoughts better.
The AI remained in the box.
It was agreed to halve the wager from 50 karma to 25 due to the specific circumstances that concluded the role-play, in which the outcome depended on variables that hadn’t been specified; but if that sounds contemptible to you, downvote all the way to −50.
Motorcycles aren’t dangerous. Cars are dangerous.
Particularly for people riding motorcycles.
Fixed reality: your dog has a lifetime’s supply of peanut butter.
My friend says it makes her think it has to do with boating. There should be a separate focused attempt to come up with the best name.
“My priors are different than yours, and under them my posterior belief is justified. There is no belief that can be said to be irrational regardless of priors, and my belief is rational under mine,”
“I pattern matched what you said rather than either apply the principle of charity or estimate the chances of your not having an opinion marking you as ignorant, unreasoning, and/or innately evil,”
“Loui Eriksson is the most underrated player in the NHL, just ask anyone! Wait a second...if everyone agrees, then...”
A new player poll asking who the most underrated NHL player is just came out, and guess who got more than twice as many votes as the second-most-voted-for player? Hint: he was named to the All-Star roster last year... yes, it’s Loui Eriksson, again. This makes little sense. How many years in a row, and in how many polls, can a single guy be perceived by so many as “most underrated”?
New Year’s resolution: avoid discussing whether or not something is overrated or underrated and simply evaluate its actual worth.
I’m not sure what your definition of a ‘failsafe’ is, but making simple limits like time and space part of the optimization parameters sounds to me like one.
You would also have to limit the resources it spends verifying how near the limits it is, since getting as close to them as possible is part of the optimization. If you do not, it will use all of its resources for that. So you need an infinite tower of limits.
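A toy sketch of that regress, with made-up numbers: if finishing as close to the limit as possible is rewarded and measuring the remaining budget itself costs resources, nothing stops the agent from spending the whole budget on measurement, so the measuring would need its own limit, and so on.

```python
BUDGET = 1_000      # hypothetical resource limit handed to the agent
MEASURE_COST = 1    # cost of one check of the remaining budget

used = 0
checks = 0
while BUDGET - used > MEASURE_COST:
    # Each pass the agent re-measures how near the limit it is, because getting
    # close to the limit is rewarded and nothing limits the measuring itself.
    used += MEASURE_COST
    checks += 1

print(used, checks)  # essentially the entire budget went to measurement
```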
Many people don’t endorse anything similar to the principle that “any argument for no more of something should either explain why there is already a perfect amount of that thing or be counted as an argument for less of that thing.”
E.g., they think arguments that “life extension is bad” generally have no implications regarding killing people were it to become available. So those who say I shouldn’t live to be 200 are not only basically arguing that I should (eventually, sooner than I want) be dead; the implication I often take is that I should be killed (in the future).
momentary object
Over a short enough time, each bit of me is out of communication with every other bit of me. In light of this, is it still reasonable to think of a momentary consciousness?
The problem is will?