Just wondering if you’ve ever read an old economics article by Ron Heiner: The Origin of Predictable Behavior, 1983 (I think), Am. Econ. Rev. It’s probably very sympathetic to your last paragraph. (And, on a slightly different slant, the recent Quanta article about evolution: https://www.quantamagazine.org/20170314-time-dependent-rate-phenomenon-evolution-viruses/)
jmh
I wonder how much of that is improvements in their manufacture (I suspect at least some) versus improvements in things like oils and other lubricants which then reduce the wear.
The answer seems fairly simple to me. You’re not in any position to decide the risks others assume. If you’re concerned about the potential torture, the only mind you can really do anything about is your own: you don’t run around killing everyone else, just yourself.
So your point is that we don’t make the mistake of treating basketball skill as having a direct relationship with a simple metric like height, but that’s exactly what everyone is doing with IQ?
I’ve not read the comments, so I’m perhaps repeating something (or saying something that someone has already refuted/critiqued).
I think it’s problematic to net all this out on an individual basis, much less at some aggregate level, for a single species, let alone multiple species.
First, we’re adaptive creatures so the scale is always sliding over time and as we act.
Second, disutility feeds into actions that produce utility (using your terms, which might be wrong, as my meaning here is want/discomfort/non-satisfaction versus satisfaction/fulfillment type internal states). If on net a person is on the plus side of the scale you defined, what do they do? In this case I’m thinking of some of the sci-fi themes I’ve read/seen where some VR tool leaves the person happy but then they just starve to death.
Finally, isn’t the best counter here the oft-stated quip “Life sucks but it’s better than the alternative”? If one accepts that statement, then arguments that lead to the conclusion of choosing death (and especially the death of others) really need their underlying premises reviewed. At least one must be false as a general-case argument. (I’ll concede that in certain special/individual cases death may be preferred to the conditions of living.)
Two thoughts, one perhaps very trivial. 1) If you believe the statement about the market response to stupidity, then aren’t you essentially attempting to supply a good with very little demand?
2) Maybe part of the issue is context. Whenever the average person talks about economics, I think it’s more in the political economy context, so perhaps inseparable from politics, leading to the direct linkage between market outcomes and regulatory aspects (after all, even in a pure neoclassical analysis the underlying, if generally unstated, assumption is that a host of rules underlie and define market actions, incentives, and results).
If such a comment were made in a setting where the discussion is more about science or engineering, or even baking (assuming the comment fits in somehow), I would think the reaction might not immediately jump to a pro-regulatory response.
Or be seen as too mundane, like in the Hitchhiker’s Guide series where the really smart intelligences on Earth were the mice, not the humans or dolphins. I suspect someone that smart might realize they do better appearing less gifted (and would probably be simply terribly bored with any intellectual interaction with the other humans).
Possibly not a rational answer (so possibly not living up to the Less Wrong philosophy!), but given the assumption of an infinite plane I would think the probability of returning to the original position and velocity is vanishingly small.
Something would need to constrain the vectors taken to prevent any ball from taking off in some direction that could be described as “away from the group”. Perhaps that could be understood as being on a path that the path of no other ball can possibly intersect. At that point this ball will never change its current velocity and never return to its original position.
I cannot offer a proof that such a condition must eventually occur in your experiment, but my intuition is that it will. If so, that vanishingly small probability that everything returns to some original state goes to zero.
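A minimal numerical sketch of that intuition, assuming equal-mass disks with elastic collisions (the parameters, setup, and helper names below are illustrative, not from the original thought experiment): once every pair of balls is strictly moving apart, pairwise distances only grow from then on, so no further collisions are possible and the system can never return to its starting configuration. A quick check of the intuition, not a proof.

```python
import numpy as np

# Toy model: N equal-mass disks on an unbounded 2D plane with elastic
# collisions. We look for the "dispersal" condition described above:
# once every pair is strictly separating, no future collisions can
# occur, so the initial state can never recur.

rng = np.random.default_rng(0)
N, RADIUS, DT, MAX_STEPS = 8, 0.5, 0.005, 50_000

pos = rng.uniform(-3, 3, size=(N, 2))  # initial positions
vel = rng.uniform(-1, 1, size=(N, 2))  # initial velocities

def all_separating(pos, vel):
    """True if the distance of every pair is strictly increasing."""
    for i in range(N):
        for j in range(i + 1, N):
            dp, dv = pos[j] - pos[i], vel[j] - vel[i]
            if np.dot(dp, dv) <= 0:  # pair approaching (or tangent)
                return False
    return True

for step in range(MAX_STEPS):
    pos += vel * DT
    # Resolve overlapping pairs with an equal-mass elastic bounce:
    # swap the normal components of the two velocities.
    for i in range(N):
        for j in range(i + 1, N):
            dp = pos[j] - pos[i]
            dist = np.linalg.norm(dp)
            if dist < 2 * RADIUS:
                n = dp / dist
                rel = np.dot(vel[j] - vel[i], n)
                if rel < 0:  # only if actually approaching
                    vel[i] += rel * n
                    vel[j] -= rel * n
    if all_separating(pos, vel):
        print(f"Dispersed at step {step}: no further collisions possible.")
        break
else:
    print("No dispersal detected within the simulated window.")
```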
I think, given the scenario, I roll over and go back to sleep. Put simply, that’s such a silly god I’m not going to pay any attention to it.
Another thought: “exactly as it unfolded” suggests I will have no awareness of any prior loop, as I certainly have none now. Moreover, such an awareness would necessarily change how my life unfolds. There simply seems to be no difference between the two options from a practical perspective for me.
Would it be correct to define selfish utility as sociopathic?
I’m going to come at this from a different angle than the others, I think. I don’t claim it will work or be easy, as I really identify with your question: changing myself should be easy (I control my brain, right? I make my decisions, right?), but I find that reinventing myself into the person I’d rather be is a real challenge.
There was another post here on LW, http://kajsotala.fi/2017/09/debiasing-by-rationalizing-your-own-motives/, that I think might have value in this context as well as in the one that post takes up.
We can all try making ourselves do X and, through effort and repetition, make it something of a habit. I think that works better for the young (no idea of your age). But at some point in life the habits, and especially the mental and emotional ones (which probably means the physiological chemical processes that drive these states), have become nearly hardwired. So what I’ll call the brute-force approach, just keep practicing, faces the problem of relative proportions. Behavioural characteristics we’ve developed over 20, 30, 40 years (or more) will have a lot more weight than the efforts to act differently for a few years (assuming one keeps up the change-myself routine).
Maybe at some point more effort in looking at “why am I acting like X” is as important as the effort to act differently. Perhaps developing a new habit will be easier than changing old habits. And if the new habit then serves as feedback into the old habit, we set up a type of interrupt for the initial impulse to behave in a way we would rather change. That might help break the old habits we don’t want but have reinforced to the point that they are no longer just habits we display but actually more “who we are”.
So, this is off-the-cuff thinking and so very likely has some gaping holes!
While not addressing the question of a role for AI, I often find myself thinking we should get away from the frequent trading of financial assets and make them a bit more like the trading of mutual funds. Do all the intra-day trades really give more information, or do they just add noise and the opportunity for insiders to make money off retail (and even some institutional) investors?
It seems like by designing the market to work a bit more like the one often used in Econ 101 theory, the Walrasian Auctioneer, we could have more stable markets that do better at pricing capital assets than today’s. In other words, take all the order flow, find the price that clears the market, and then all trade occurs at that price.
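A rough sketch of that clearing rule, assuming simple limit orders (the order book below is made up for illustration, and rationing at the margin is elided): gather all the order flow for the interval, find the single price that matches the most volume, and execute every trade there.

```python
# Toy batch (call) auction: collect all limit orders, pick the single
# price that clears the most volume, and execute everything at it.
bids = [(101.0, 50), (100.5, 30), (100.0, 80)]  # (limit price, quantity)
asks = [(99.5, 40), (100.0, 60), (100.5, 70)]

def clearing_price(bids, asks):
    """Return the (price, volume) maximizing matched volume."""
    candidates = sorted({p for p, _ in bids} | {p for p, _ in asks})
    best_price, best_volume = None, 0
    for p in candidates:
        demand = sum(q for price, q in bids if price >= p)  # willing buyers
        supply = sum(q for price, q in asks if price <= p)  # willing sellers
        if min(demand, supply) > best_volume:
            best_price, best_volume = p, min(demand, supply)
    return best_price, best_volume

price, volume = clearing_price(bids, asks)
print(f"All trades execute at {price} for {volume} units.")
# With the example book: all trades clear at 100.0 for 100 units.
```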
I suspect you’d still see some gaming of the system with fake orders (a bit like what the algos have been accused of in today’s markets), but all systems get gamed.
Not sure I have anything to add to the question, but I do find myself having to ask why the general presumption so often seems to be that an AI gets annoyed at stupid people and kills humanity.
It’s true that we can think of situations where that might be possible, and maybe even a predictable AI response, but I just wonder if such settings are all that probable.
Has anyone ever sat down and tried to list out the situations where an AI would have some incentive to kill off humanity, and then assessed how likely those situations actually are?
“Critical Rationalism says that an entity is either a universal knowledge creator or it is not. There is no such thing as a partially universal knowledge creator. So animals such as dogs are not universal knowledge creators — they have no ability whatsoever to create knowledge”
From what you say prior to the quoted bit, I don’t even know why one needs to say anything about dogs. The claim that an entity is either a universal knowledge creator (UKC) or not is largely (or should this be binary as well?) a tautology. It’s not clear that you could prove dogs fall into either of the two buckets. The status of dogs with regard to UKC certainly doesn’t follow from the binary claim itself.
Perhaps this is a false premise embedded in your thinking that helps you get to (I didn’t read to the end) some conclusion about how an AI must also be a universal knowledge creator, and so on par with humans (in your/the CR assessment), so humans must respect the AI as enjoying the same rights as a human.
That conclusion, “dogs are not UKC”, doesn’t follow from the binary statement about UKC. You’re being circular here, and not even in a really good way.
While you don’t provide any argument for your conclusion about the status of dogs as UKC, one might make guesses. However, all the guesses I can make are 1) just that, with nothing tying them to what you might actually be thinking, and 2) ones that lead me to the conclusion that there are NO UKCs. That would hardly be a conclusion you would want to aim at.
Well, it’s better than jumping to unsupported conclusions, I suppose, so that should help at some level. Not sure it really helps with regard to either 1 or 2 in my response, but that’s a different matter, I think.
As always, some interesting views and thinking are found here. Some of the statements I think I would push back on: “The median is confused.” I think it would be more accurate to say EVERYONE is confused, if only because we’re so limited in both our knowledge and our ability to observe so much of our reality on Earth. Forget the metaphysical and philosophical/religious elements. Also, when suggesting confusion about something as complex as “the world”, I’m not entirely sure there is a good common denominator to define “not confused”.
I think the characterization of most religious people as above, and I’ll cast it in the worst interpretation here, as blindly hoping something will save them from bad shit and give them good things, is just wrong. I’ve personally known a bunch of very religious people who are as rational or more rational than most atheists I’ve met. And, given that we simply don’t know, strict atheism (as in a rejection of the mono-god concept as reality) is as much a statement of faith as any belief in such an entity. But at least the religious will own their position as one of faith. Too many atheists will rebel against the accusation that they, in the end, are making statements based on faith in their logic. Now, to be fair, more than a few “atheists” are really agnostics who simply say they don’t find the arguments for a god convincing, use that as their day-to-day position, but accept they could be wrong. Why bring this up? It goes back to the assumption about who is and is not confused about the world.
What assumptions are loaded into the overall story here?
First, as always, I find interesting thoughts here. Thanks for the effort and the post. Now I’ll “defect” by saying I really should give more time to reading this more closely and completely before commenting! ;-)
However, I did want to make a few comments/observations.
I’m not sure the PD approach applies, but that might be colored by my dislike of the metaphor in many situations. Everyone seems to forget about the third player in the game here: the Jailer, who set up the payoff matrix so that the socially optimal result is that both confess (defect on each other). That’s not really the setting for the idea of common knowledge and its value in a social setting (regardless of size, though I wonder if that shouldn’t also be explored here: is the common knowledge between two people regarding their desire for sex in the same category as the common knowledge manifest in things like custom and culture?).
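For concreteness, here is the payoff structure I have in mind, using the usual textbook numbers rather than anything from the post: confessing strictly dominates staying silent for each prisoner, so mutual confession is the only equilibrium, and that is exactly the outcome the Jailer designed the matrix to produce.

```python
# Classic Prisoner's Dilemma sentences, expressed as negated years in
# prison so that higher is better for the prisoner. The numbers are the
# usual illustrative ones.
C, D = "stay silent", "confess"
payoff = {  # payoff[(my_move, their_move)] = my utility
    (C, C): -1, (C, D): -10,
    (D, C): 0,  (D, D): -6,
}

# Confessing strictly dominates: whatever the other prisoner does,
# my payoff is higher if I confess.
for their_move in (C, D):
    assert payoff[(D, their_move)] > payoff[(C, their_move)]

# So the only equilibrium is mutual confession: the worst joint outcome
# for the prisoners, and precisely what the Jailer set the matrix up
# to produce.
print("Equilibrium: both", D)
```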
I’m not sure I agree that the free-rider problem is actually a version of the PD either. However, rather than splitting hairs on that, I’ll simply ask if you have considered the converse problem, that of forced carrying: making a person contribute/go along even though they don’t personally derive any benefit, and in fact the cost to them may even outweigh the aggregate external benefit to others from that forced contribution. Not sure how to plug that into the question of common knowledge here. I do see how complete information would allow the free-rider to be distinguished from the forced-carrier. But that seems a bit tangential to your post.
It also seems that the phenomena of voting cycles and agenda setting might fit into your analysis somewhere. Common knowledge may address one (agenda setting) to some extent, but voting cycles and their underlying driver, multi-peaked preferences, will remain. In that context I’m not sure common knowledge helps solve the problems, at least from a stability standpoint. In such a setting it seems we want a form of instability, especially if those multi-peaked preferences are not stable over time.
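To illustrate the voting-cycle point, here is the classic Condorcet example (the preference profiles are the standard textbook ones, not drawn from the post): majority preference can be intransitive, so whoever sets the order of pairwise votes can steer the final outcome.

```python
# Classic Condorcet cycle: three voters ranking three options, best
# first. Majority preference runs A > B > C > A, a cycle, so an agenda
# setter can engineer any winner by ordering the pairwise votes.
voters = [("A", "B", "C"),
          ("B", "C", "A"),
          ("C", "A", "B")]

def majority_prefers(x, y):
    """True if a majority of voters rank x above y."""
    return sum(v.index(x) < v.index(y) for v in voters) > len(voters) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"Majority prefers {x} over {y}: {majority_prefers(x, y)}")
# Prints True for all three comparisons: the cycle A > B > C > A.
```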
I find myself thinking of many statements and responses from both China and Russia over the past couple of years in light of point 7 above. Both seem to keep telling the world, the West or the USA to be “reasonable” in reacting to their actions.
I liked one of the comments (well, I liked them all, but this one made me think about the comment I’m making), David_C._Brayton’s, about the difference in views between engineers and theorists. I cannot help but wonder if there’s a difference in behavior between wanting to test a theory versus wanting to apply it, in terms of one’s confirmation biases and ability to step beyond them.