These are both risks. But manipulation at various points is presumably unlikely to add up to systematically misleading results: the involvement of many manipulators would create a lot of noise rather than a consistent bias.
Yes: buying stuff from people is pretty much instrumentalising them. That’s capitalism! Although there tend to be limits, as you note. And the ‘would they like this if they knew what I was doing?’ test is obviously a very good rule of thumb.
Occasionally, you’ll have to break this. Sometimes somebody is irrationally self-destructive and you basically end up deciding that you have a better sense of what is best for them. But that’s an INCREDIBLY radical/bold decision to make and shouldn’t be done lightly.
I’m not sure exactly what you’re referring to, so it’s hard to respond. I think most of the damage done to evidence-gathering is done in fairly open ways: the organisation explains what it’s doing even while it’s selecting a dodgy method of analysis. At least that way you can debate about the quality of the evidence.
There are also cases of outright black-ops in terms of evidence-gathering, but I suspect they’re much rarer, simply because that sort of work is usually done by a wide range of people with varied motivations, not a dedicated cabal who will work together to twist data.
You clearly implied “only”. The external favours were the basis of the motivation.
“It isn’t immoral to notice that someone values friendship, and then to be their friend *in order to get the favors* from them that they willingly provide to their friends”
In answer to your question: I’d still find it a little weird, tbh.
Well, everything has risks. But you can generally tell when people are doing that. And it’s harder to get away with if the evidence-gathering is systematic rather than post-hoc reviews of specific things.
Well, until we know how to identify whether something or someone is conscious, it’s all a bit of a mystery: I couldn’t rule out consciousness being some additional thing. I have an inclination to rule it out because it seems unparsimonious, but that’s it.
Not revealing your own preferences and giving a balanced analysis that doesn’t make them too obvious usually works.
But I don’t think you can meaningfully manipulate people by accident. The nearest thing is probably having or developing a general approach that leads to you getting your way over other people, noticing it, deciding that you like getting your way, and not changing it.
What you really can do (and what almost everyone does) is manipulate people while maintaining plausible deniability (including sometimes to yourself). But I suspect most people can identify when they’re manipulating people and trying to trick themselves into thinking they’re not.
Ah: this may be the underlying confusion. I don’t see the instrumentalist evo psych as bad and everything else as good. I see any approach that is deceptive and treats people as things as failing to value people.
I don’t see the people who brag about cheating and slag off their wives as models to aspire to. This is both because I don’t particularly value the outcome they’re aiming for and because I object to the deception and to treating people as things.
But on the broader point about attitude mattering: obviously it might change the activity in that way. But my point was more that you can’t step outside of your own psychology and humanity: thinking about people in this detached, strategic way is not something done by a person looking in from outside the system. Your sex life isn’t a game of The Sims. My intuition and experience is that doing something in a way constantly focused on trying to get individual bits of stuff out of it (‘I will now buy this wine to get sex; I will now comfort my friend so that they will help me move house next week; I will try to understand this subject so that I get a higher mark in the exam’) leads to you having less fun and doing less good than engaging with things on their own terms (which is compatible with being aware of the underlying dynamics).
There’s also an issue of sincerity here, which, to unpack it into something that might be more appealing to your approach, is essentially game-theoretic. If you reassess for your own benefit at every point, people can’t rely on you in tough situations. I would like people to be able to rely on me, and to be able to rely on them. Taking other people seriously and relating to them as people rather than as strategies allows you to essentially pre-commit.
I dunno about essences. The point is that you can observe lots of interactions of neurons and behaviours and be left with an argument from analogy to say “they must be conscious because I am and they are really similar, and the idea that my consciousness is divorced from what I do is just wacky”.
You can observe all the externally observable, measurable things that a black hole or container can do, and then, if someone argues about essences, you wonder if they’re actually referring to anything: it’s a purely semantic debate. But you can observe all the things a fish, or tree, or advanced computer can do, predict its behaviour for all useful purposes, and still not know if it’s conscious. This is bothersome. But it’s not necessarily to do with essences.
I think the areas least open (though still not immune) to mind-killing are: 1) better, more consistent evidence for policies (good stats rather than governments commissioning policy-based evidence); 2) developing technical systems so they work better: the more techy the better. Making computer systems for processing pensions, tax or whatever that come in on budget and on spec would be a fantastic start. Though I guess even then, a libertarian might feel that giving the state more powerful and effective systems is counter-productive.
Fair enough. As an intuition pump, for me at least, it’s unhelpful: it gave the impression that you thought that consciousness was merely a label being mistaken for a thing (like ‘life’ as something beyond its parts).
Only having indirect evidence isn’t the problem. For a black hole, I care about the observable functional parts. I wouldn’t be sucked towards it and crushed while going ‘but is it really a black hole?’ A black hole is like a container here: what matters are the functional bits that make it up. For consciousness, I care whether a robot can reason and display conscious-type behaviour, but I also care whether it can experience and feel.
Many worlds could be comparable if there is evidence implying that there are ‘many worlds’ but people remain vague about whether these worlds actually exist. And you’re right, this is also a potentially morally relevant point.
Consciousness does seem different in that we can have a better and better understanding of all the various functional elements, but 1) we’re left with a sort of argument from analogy for others having qualia, and 2) even if we can resolve (1), I can’t see how we could begin to know whether my green is your red, etc.
I can’t think of many comparable cases: certainly I don’t think containership is comparable. You and I could end up looking at the AI in the moment before it destroys the world, or idealises it, or both, and say ‘gosh, I wonder if it’s conscious’. This is nothing like the casuistic ‘but what about this container gives it its containerness?’ I think we’re on the same point here, though?
I’m intuitively very confident you’re conscious: and yes, seeing you were human would help, in that one of the easiest ways I can imagine you weren’t conscious is that you’re actually a computer designed to post about things on Less Wrong. This would also explain why you like Dennett: I’ve always suspected he’s a qualia-less robot too! ;-)
I’m not saying we should be unconscious of how we’re built. I’m saying that the way we’re built means that if we treat something as an abstract scientific issue, we experience it differently from how we do if we treat it as a matter of personal relationships. Are you saying that the way we explain and discuss our actions doesn’t in turn affect how we act and think?
Oh, and the answer to your final question here is probably ‘yes’. The way you understand and talk about your own attitudes and activities definitely has feedback into said attitudes and activities.
On trying to be attractive: no, that doesn’t automatically translate as contempt. But then, not all attempts to be attractive are deceptive. You say that ‘women’ (all women, it seems: this evo psych attitude seems to come with a side-serving of old-fashioned generalisation) want guys to ‘show that they care’. Ever thought that people saying that (even men!) might actually want people to care, not just to pretend?
Yes, I meant surprising in light of other discoveries/beliefs.
On memory: is it the conscious experience that’s challenging (in which case it’s just a sub-set of the same issue), or do you find the functional aspects of memory challenging? Even though I know almost nothing about how memory works, I can see plausible models of how it could work, unlike with consciousness.
Yes. This is what worries me: I can see further advances making everyone sure that computers are conscious, but my suspicion is that this will not be logical. Take the same processor, and I suspect the chance of it being seen as conscious will rise sharply if it’s put in a moving machine, rise sharply again for a humanoid body, again for a face and voice, and again for one physically indistinguishable from a human.
The problem with generalising from commonalities is that I have precisely one direct example of consciousness. Having said that, I don’t find epiphenomenal accounts convincing, so it’s reasonable for me to think that, since my statements about qualia seem to follow causally from experiencing said qualia, other people don’t have a totally separate framework for their statements about qualia. I wouldn’t be that confident, though, and it gets harder with artificial consciousness.
I do think altruism is superior: I’m not sure exactly how to unpack ethical statements, but I believe altruism is better than egoism, definitely. I also think that ‘selfishness’ has a very well-understood meaning (maximising your own happiness/power/whatever), and that redefining it so that it’s ‘selfish’ to do what you think is right is fairly pointless. ‘Preferences’ is a ridiculously broad term, and you seem to be treating ‘people follow their preferences’ as true by definition, which means that ‘people are selfish’ doesn’t have much content as a claim.
In practice, people aren’t perfect altruists. But defining however you act as maximising your utility function, and therefore as good as anything else, is just a refusal to engage with ethics: you end up reverting to brute force (‘I cannot object ethically to the fact that your utility function involves rape and murder, but I can oppose you based on my utility function’). I’m not sure what good moving all ethical debate to this level achieves.
Oh, and on the altruistically-having-sex approach: again, we live in a society where we reasonably expect non-interference and non-deception but don’t usually expect people to actively do what they don’t want to do. A theoretical utility-maximiser might have sex with people they’re not attracted to, sure.
On valuing people: I would understand valuing someone to go beyond the level of ‘I won’t actively harm and abuse you on a whim’. Although even in the hard sense of valuing (does he care about her at all?), the statement that kicked this off doesn’t demonstrate any consideration for her experience. As you note, raping/drugging etc. have bad consequences for him; and as for getting her to drop out, I imagine it would be far more effort, have far more unpredictable results (she or her friends might end up taking revenge for him screwing up her life), and not be worth it if he just wants sex.
I think friendships can be instrumentally good, obviously. But there’s a distinction between the ways in which friendships are instrumentally good. If a friend of mine revealed that they were only my friend for the fantastic conversation, the excellent company, the superb sense of humour, etc., I wouldn’t feel cheated. If I found out they were only my friend because I drove a car and it was convenient for them to get around, I would feel cheated.
If it’s mutually decided then it’s clearly not deception, and whatever floats your boat, tbh.
On your other responses referring to evolutionary psychology, the possibility of altruistic friendship, etc.: there is a difference between the evolutionary fact that your inclination to be friends with someone is ultimately based on ‘selfish’ genetic goals, and being selfish yourself. The psychological make-up we have is a brute fact of existence, and we need to take it into account. But selfish genes do not mean that the concept of human unselfishness is a busted flush.
I identify with this very strongly. It’s even stronger for me if the distance I have to travel is already ‘extra’: e.g. if I forget my train ticket, I’d rather take a much slower bus than spend ten minutes walking back to the house, because I find the latter intensely frustrating.
It’s interesting because you don’t just feel it at the point of being about to retrace your steps: you’re aware of it as part of journey planning.