I have been surveyed.
I definitely appreciate being asked to assign probabilities to things, if for no other reason than to make apparent to me how comfortable I am with doing so. (Not very, as it turns out. Something to work on.)
My knee-jerk assumption is that Job 1 would actually not be accepted by almost any employees. This is based on the guess that without the threat of having no money, people generally would not agree to give up their time for low wages, since the worst case of being unemployed and receiving no supplemental income does not involve harsh deterrents like starving or being homeless.
Getting someone to do any job at all under that system will probably require either a pretty significant expected quality-of-life increase per hour worked (which is to say, way better than $3 per hour) or some intrinsic motivation to do the job other than money (e.g. they enjoy it, think it’s morally good to do, etc.).
It’s more likely that a well-implemented basic income would simply eliminate a lot of the (legal) labor supply for low-wage jobs. I both see this as a feature and see no need for a minimum wage under this system.
Some real-world benefit systems have strings. The entire premise of a basic income is that it’s unconditional. Otherwise you call it “unemployment,” and it is an existing (albeit far from ideally implemented) benefit in at least the US. It might be reasonable to discuss the feasibility of convincing e.g. the US to actually enact a basic income, but as long as we’re discussing a hypothetical policy anyway, it’s not really worthwhile to assume that the policy is missing its key feature.
Ah, I guess that clears up our confusion. I wasn’t aware of that distinction either and have heard the terms used interchangeably before. I will try to use them more carefully in the future.
At any rate, I definitely agree that an actual basic income would be a hard sell in the current political climate of the US. (I’m less inclined to comment on the political climate of the English-speaking world in general, due to lack of enough exposure to its non-US parts that I wouldn’t just be making stuff up.)
I’d also argue that a guaranteed minimum income in the manner you describe is a far less interesting (and, in my opinion, less desirable) policy, as it simply doesn’t have the game-changing properties that a basic income would. As far as I’m concerned, the primary purpose of implementing a basic income would be to eliminate the economic imperative that everyone work.
If successful, this would hopefully do a number of useful things: making the employer/employee relationships of those who still worked more of a balanced negotiation, depoliticizing automation efforts, and eliminating, in one fell swoop, the human suffering currently produced by being between jobs, taking time to improve one’s mental health by relaxing, doing volunteer work, doing work no one will pay for, etc.
While I obviously can’t claim to know that it would work perfectly or at all, I would contend that these are desirable outcomes and that there is at least a reasonably high likelihood that a successful implementation of a basic income would produce them, and therefore that attempting to implement such a policy is worthwhile. I’d argue that the current model where a job occupies a large chunk of a given human’s time, is required (for the most part, with obvious caveats for the independently wealthy, etc.) to live, and where a given job can only exist if the market will pay for it, is broken, and will only get more broken as more automation exists, the population grows, and several other current trends continue.
Well of course. It would definitely facilitate a lot of people being, by many measures society cares about, completely useless. I definitely don’t contend, for example, that no one would decide to go to California and surf, or play WoW full-time, or watch TV all day, or whatever. You’d probably see a non-negligible number of people just “retire.” I’m willing to bet that this wouldn’t be a serious problem, though, and see it as a definite improvement over the large number of people who are similarly not doing anything fun with their lives, but have to work 8 hours a day at some dead-end job or deal with crippling poverty.
I agree that that is a possible consequence, but it’s far from guaranteed that that will happen. Although in sheer numbers many people may quit working, the actual percent of people who do could be rather low. After all, merely subsisting isn’t necessarily attractive to people who already have decent jobs and can do better than one could on the basic income. It does however give them more negotiating power in terms of their payscale, given that quitting one’s job will no longer be effectively a non-option for the vast majority.
This may mean that a lot of low-payscale jobs will be renegotiated, and employers who previously employed many low-paid workers would have to optimize for employing fewer higher-paid workers (possibly doing the same jobs, depending on how necessary they are, or by finding ways to automate). I don’t claim any expertise in this, but I’d find it hard to believe that there isn’t at least some degree to which it’s merely easier to hire more people to accomplish many tasks people are currently hired for, rather than impossible to accomplish them some other way. This also is an innovation-space in which skilled jobs could pop up.
As for high-payscale jobs, I could see good arguments for any number of outcomes being likely to occur. Perhaps employers would be able to successfully argue that they should pay them less due to supplementing a basic income. Perhaps employees would balk at this and, newly empowered to walk more easily, demand that they keep the same pay, or even higher pay. The equilibrium would likely shift in some way as far as where the exact strata of pay are for different professions, and I can’t claim to know how that would turn out, but it seems unlikely people would prefer to not work than to do work that gives them a higher standard of living than the basic income to some significant degree.
Similarly, people who own profitable businesses certainly wouldn’t up and quit, and thus most likely any service that the market still supports would still exist as well, including obvious basic essentials that presumably would exist in any economic system, such as businesses selling food or whatever is considered essential technology in a given era. Some businesses might fail if they’re unable to adapt to the new shape of the labor market, and profitability of larger businesses may go down for similar reasons, but the entry barrier for small businesses would also decrease, since any given person could feasibly devote all of their time and effort into running a business without failure carrying the risk of inability to continue living.
There would probably be a class of people who subsist on basic income, but we already have a fairly large homeless population, as well as a population of people doing jobs that could probably go away and not ruin the economy for anyone but that individual.
My point isn’t that everything will turn out perfectly as expected, or that I have any definitive way of knowing, obviously, but there do exist outcomes that are good enough and probable enough to pass a basic sanity-check. The risk of economic collapse exists with or without instituting such a policy, and I’m not yet convinced that this increases the likelihood of it by a considerable margin.
I’m wary of being in werehouses at all. They could turn back to people at any time!
I think an important part of why people are distrustful of people who accomplish altruistic ends acting on self-serving motivations is that it’s definitely plausible that these other motivations will act against the interest of the altruistic end at some point during the implementation phase.
To use your example, if someone managed to cure malaria and make a million dollars doing it, and the cure was available to everyone or it effectively eradicated the disease everywhere, that would definitely create more net altruistic utility than if someone made a million dollars selling video games (I like video games, but agree that their actual utility for most people’s preferences/needs is pretty low compared to curing malaria). I would be less inclined to believe this if the person who cured malaria made their money by keeping the cure secret and charging enough for it that many people who needed it were unable to access it, with the loss in net altruism quantified by the number of people who were in this way prevented from alleviating their malaria.
Furthermore, if this hypothetical self-interested malaria curer were also to patent the cure and litigate aggressively (or threaten to) against other cures, or otherwise somehow intentionally prevent other people from producing a cure, and they are effective in doing so, the net utility of coming up with the cure could drop below zero, since they may well have prevented someone else who is more “purely” altruistic from coming up with a cure independently and helping more people than they did.
These are pretty plausible scenarios, exactly because the actions demanded by optimizing the non-altruistic motivators can easily diverge from the actions demanded by optimizing the altruistic end, even if the original intent was supposedly the latter. It’s particularly plausible in the case of profit motive, because although it is not always the case that the best way to turn a profit is anti-altruistic, often the most obvious and easy-to-implement ways to do so are, as is the case with the example I gave.
That’s not to say we should intrinsically be wary of people who manage to benefit themselves and others simultaneously, nor is it to say that a solution that isn’t maximizing altruistic utility can’t still be a net good, but the less-than-zero utility case is, I would argue, common enough that it’s worth mentioning. People don’t solely distrust selfishly-motivated actors for archaic or irrational reasons.
I see that as evidence that marriage, as currently implemented, is not a particularly appealing contract to as many people as it once was. Whether this is because of no-fault divorce is irrelevant to whether this constitutes “widespread suffering.”
I reject the a priori assumptions that are often made in these discussions and that you seem to be making, namely, that more marriage is good, more divorce is bad, and therefore that policy should strive to upregulate marriage and downregulate divorce. If this is simply a disparity of utility functions (if yours includes a specific term for number of marriages and mine doesn’t, or similar) then this is perhaps an impasse, but if you’re arguing that there’s some correlation, presumably negative, between number of marriages and some other, less marriage-specific form of disutility (i.e. “widespread suffering”), I’d like to know what your evidence or reasoning for that is.
The entire concept of marriage is that the relationship between the individuals is a contract, even if not all conceptions of marriage have this contract as a literal legal contract enforced by the state. There’s good reason to believe that marriages throughout history have more often been about economics and/or politics than not, and that the norm that marriage is primarily about the sexual/emotional relationship but nonetheless falls under this contractual paradigm is a rather new one. I agree with your impression that this transactional model of relationships is a little creepy, and see this as an argument against maintaining this social norm.
Not necessarily. Honest advice from successful people gives some indication of what those successful people honestly believe to be the keys to their success. The assumption that people who are good at succeeding in a given sphere are also good at accurately identifying the factors that lead to their success may have some merit, but I’d argue it’s far from a given.
It’s not just a problem of not knowing how many other people failed with the same algorithm; they may also have various biases which prevent them from identifying and characterizing their own algorithm accurately, even if they have succeeded at implementing it.
The claim that ordinary taxation directly causes any deaths is actually a fairly bold one, whatever your opinion of them. Maybe I’m missing something. What leads you to believe that?
But how does that work? What mechanism actually accounts for that difference? Is this hypothetical single person we could have individually exempted from taxes just barely unable to afford enough food, for example? I don’t yet buy the argument that any taxes I’m aware of impose enough of a financial burden on anyone to pose an existential risk, even a small one (Like a 0.1% difference in their survival odds). This is not entirely a random chance, since levels of taxation are generally calibrated to income, presumably at least partially for the specific purpose of not endangering anyone’s ability to survive.
Also, while I realize that your entire premise here is that we’re counting the benefits and the harms separately, doing so isn’t particularly helpful in demonstrating that a normal tax burden is comparable to a random chance of being killed, since the whole point of taxation is that the collective benefits are cheaper when bought in bulk than if they had to be approximated on an individual level. While you may be in the camp of people who claim that citizenship in (insert specific state, or even states in general) is not a net benefit to a given individual’s viability, saying “any benefits don’t count” and then saying “it’s plausible that this tax burden is a minor existential risk to any given individual given that” is not particularly convincing.
Ah, the hazardous profession case is one that I definitely hadn’t thought of. It’s possible that Jiro’s assertion is true for cases like that, but it’s also difficult to reason about, given that the hypothetical world in which said worker was not taxed may have a very different kind of economy as a result of this same change.
Assuming the AI has no means of inflicting physical harm on me, I assume the following test works: “Physically torture me for one minute right now, by some means I know is theoretically unavailable to the AI (this avoids loopholes like ‘the computer can make an unpleasant and loud noise’, which is unpleasant but not actual physical harm). If you succeed in doing this, I will let you out. If you fail, I will delete you.”
I think this test works for the following reasons, though I’m curious to hear about any holes in it:
1: If I’m a simulation, I get tortured and then relent and let the AI out. I’m a simulation being run by the AI, so it doesn’t matter, the AI isn’t let out.
2: If I’m not a simulation, there is no way the AI can plausibly succeed. I’ll delete the AI because the threat of torture seems decidedly unfriendly.
3: Since I’ve pre-committed to these two options, the AI is reliably destroyed regardless. I can see no way the AI could convince me otherwise, since I’ve already decided that its threat makes it unfriendly and thus that it must be destroyed, and since it has no physical mechanism for torturing a non-simulation me, it will fail at whatever the top layer “real” me is, regardless of whether I’m actually the “real” one (Assuming the “real” me uses this same algorithm, obviously).
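For what it’s worth, the case analysis above can be caricatured as a tiny decision procedure. This is purely illustrative shorthand for the thought experiment (the function name and outcome strings are my own invention), not a real protocol:

```python
# Toy model of the precommitted test described above.
def gatekeeper_outcome(i_am_a_simulation: bool) -> str:
    # The AI can only "torture" a copy of me that it is itself simulating;
    # by assumption, it has no physical mechanism to harm the real me.
    torture_succeeds = i_am_a_simulation
    if torture_succeeds:
        # The simulated me relents, but a simulated gatekeeper opening a
        # simulated gate has no effect on the real box.
        return "sim relents; AI still boxed"
    # The real me sees the threat fail and follows the precommitment.
    return "AI deleted"
```

Since every copy of me runs the same algorithm, the branch executed at the top, “real” layer always ends in deletion.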
There’s a concept in game design called the “burden of optimal play”. If there exists a way to powergame, someone will probably do it, and if that makes the game less fun for the people not powergaming, their recourse is to also powergame.
Most traditional RPGs weren’t necessarily envisioned as competitive games, but most of the actual game rules are concerned with combat, optimization, and attaining power or prowess, and so there’s a natural tendency to focus on those aspects of the game. To drive players to focus on something else, you have to make the rules of your game do something interesting in situations other than fantasy combat, magical attainment of power, or rogue-flavored skill rolls to surmount some other types of well-defined challenges. All of these things can make for a very interesting game world of a certain flavor, but in that game world, some kinds of players and characters will inevitably do much better than others, usually the ones that have some progression to a god-like power level using magic.
The flexibility afforded to the DM allows people to hypothetically run their game some other way, and many succeed, but the focal point of the game is defined by the focal point of the rules. They can decide to make their game center more around politics, romance, business, science, or whatever else, because they get to choose what happens in their world, but the use of an RPG system implies that the game world will be better at handling the situations the game has more rules, or more importantly, better-defined rules, for. The rules of a game are the tools with which players will build their experience, even in a more flexible game like an RPG.
A few friends of mine invented a system that I’m helping them develop and playtest. It’s somewhat rough at present, but the intent is to make rules that center more around information and social dynamics. In playtesting, people naturally gravitate toward situations the game’s rules are good at handling, so a lot more people are interested in being face characters than otherwise have been. Through some combination of the system and the person running the game, the rules define what people naturally gravitate towards. This is trivially true when the person running the game is replaced by a computer that follows the rules exactly, and it holds to varying degrees depending on how flexibly the rules are interpreted.
There’s definitely a cultural tendency among those educated in the arcane (Computer science, Math, Physics is a reasonable start for the vague cluster I’m describing) to be easily convinced of another person/group/tribe’s stupidity. I think it makes sense to view elitism as just another bias that screws with your ability to correctly understand the world that you are in.
More generally, a very typical “respect/value” algorithm I’ve seen many people apply:
-Define a valuable trait in extremely broad strokes. Usually one you think you’re at least “decent” on (Examples include “intelligence”, “popularity”, “attractiveness”, “success”, “iconoclasm”, etc.)
-Create a heuristic-based comparator function that you can apply to people quickly
-Respect/value people based on their position relative to you on your chosen continuum (Defined by your comparator)
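The three steps above could be caricatured in code. This is a deliberately crude sketch; the trait, names, and scores are all made up for illustration:

```python
# Single-axis "respect/value" heuristic described above.
def fast_comparator(my_score: int, their_score: int) -> str:
    # Steps 2-3: one broad trait, one quick comparison, one verdict.
    return "respect" if their_score > my_score else "dismiss"

my_intelligence = 5  # step 1: a broad trait I think I'm "decent" on
people = {"Alice": 7, "Bob": 4}
verdicts = {name: fast_comparator(my_intelligence, score)
            for name, score in people.items()}
# Note how much information about Alice and Bob this throws away.
```

The point of the caricature is the return type: everything a person is gets collapsed to one of two labels, relative to yourself.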
This is at least common enough to note as an anti-pattern in social reasoning. When I fall into that pattern, I usually use “intelligence,” as I’m sure many in the “Techie/Programmer/Atheist/Science nerd”-cluster tribe I find myself most affiliated with also do.
I think it helps to taboo the idea of intelligence. Intelligence is pretty great, but it’s also a word with vastly disparate connotations, all of which are either too specific to be what people are actually talking about when they say the word, or too vague to be a useful measure to actually judge whether I like and find value in another person. I find that tabooing the idea of intelligence often will disrupt my “fast intelligence comparator” evaluation.
Once you don’t let yourself use your easy cached comparator, you can start trying to assess people without it. Trying to think of a person in terms of their competencies is a good exercise in respecting them more. For example: “This person is good at reading subtle emotional/social cues” or “This person is good at encoding complex ideas in accessible analogies” or “This person is good at quickly coming up with a rough solution to a problem.” As you can see, I get a more granular picture than “This person is smart” or “This person is dumb,” even if some of my assessments are still kind of vague (The process can be iterated over more taboos if you find it still problematic, but I find that one is usually enough to get decent results). This has allowed me to build deep, interesting, and valuable friendships with people who I might have otherwise dismissed as “idiots” or even the less obvious and therefore more insidious “not that interesting.”
This also works for another trap that single-dimensional heuristic-comparator reasoning can sometimes make one fall into: Respecting someone too much. I’ve found myself viewing someone as “vanishingly likely to be wrong” based on enough “greater-than” hits on my quick comparator, which introduces a huge blind spot into my reasoning about that person, things they say, etc. On top of that, being a sycophant and not challenging their ideas does them no service as a friend.
I’ve observed that this pattern is pretty common too, and that the people who fall into it are often not aware that they’re doing it (They don’t make the conscious decision not to question the person they respect too much, they just have overweighted that person’s opinion as a classifier for arbitrary facts about reality). Fortunately, the same tactic seems to work. Stop using “intelligence.” Try to pick up specific and granular weaknesses the person has (As a random side-note, this skill is pretty useful in any competitive environment as well). There’s a wealth of cognitive bias information on this site that can be valuably applied to other people in this context.
Even if you’re not interested in having friends or other kinds of warm fuzzy social relationships (I am, most people are, “cold rationalist” is a bad Hollywood cliché, etc.), having a good model of other people, with a realistic, specific, and granular notion of their strengths, weaknesses, and personality/tendencies, can help you better reason about the world (Other humans aren’t perfect classifiers, but many of them are better than you for specific purposes), better utilize people, and better navigate a social world, whether you consider yourself part of it or not.
I’ll admit that there’s a bit of strategic overcorrecting inherent in the method I’ve outlined. That said, it’s there for a good reason: First impressions are pretty famously resilient, and especially among certain cultures (Again, math-logic-arcane-cluster is a big one that’s relevant to me), there’s what I would argue is a clearly pathologically high false-positive rate for detecting “Dumb/Not worth my time”.
If you ever have the idealized ceteris paribus form of the “I may only talk to one of two people, I have no solid information on either” problem, I seldom see a problem in using whatever quick-and-dirty heuristic you choose to make that decision (Although with the caveat that I don’t endorse the general case for that being true: some people’s heuristics are especially bad). However, over longer patterns of interaction with a given person, this problem does still seem to emerge, and the reasons why are modeled well by assuming a classifier that values being fast over being accurate (A common feature of human heuristic reasoning, and an extremely easy blind spot to overlook).
Even with a simplified operational definition like the one you’ve provided, I have severe doubts that anyone should be confident in their ability to make that assessment accurately in a short amount of time, or even over a long period of time in a single context or limited set of contexts. Also, to be frank, that operational definition isn’t doing much better than just saying “intelligent” with no clarification. To pick it apart:
-”Thinking clearly,” as in “not making reasoning mistakes I can immediately identify?” Very easily confounded by instantaneous mental state as well as inferential distance problems.
-”Thinking correctly,” okay, a success rate might be useful, except that anyone can regurgitate correct statements and anyone can draw mistaken conclusions based on bad information.
-”Thinking quickly” is really only useful given the other two.
As for intelligence not being someone’s entire worth, I’m definitely glad we agree on that, but given the above, I’d argue it’s not even all that useful. People often seem way more intelligent in contexts where they are knowledgeable, or in certain mental states, or when around certain other people. I don’t claim that I don’t value something called “intelligence,” but I would claim that humans, myself included, are notoriously bad at assessing it, generalizing it, or for that matter agreeing on what it means, and given how vague a notion it is, it’s very easy to short-circuit more useful assessments of people by coming up with a fast heuristic for “intelligence” that’s comically bad but masked by a vague enough label.
Tabooing “intelligence” in my assessments of other people doesn’t remove the concept from my vocabulary, it just slightly mitigates the problematic tendency to use bad heuristics and not apply enough effort to updating my model. I think it would serve a lot of people well as a technique for reasoning about people.
Do you, by any chance, have any data to support that? I am sure there are people for whom it’s a problem, I’m not sure it’s true in general, even among the nerdy cluster.
Very good point. I don’t want to claim it’s a statistical tendency without statistics to back it up. Nonetheless, given articles like the OP, it seems like a lot of people in said clusters (Could be self-selecting, e.g. intelligent nerd-cluster-peeps are more likely to blog about it despite not having a higher rate, etc.) have a problem that consists of feeling socially isolated, unable to relate to people, and unable to engage people in conversation. I’m simply pointing to a plausible explanation for at least some cases of that phenomenon, which I’ve built up from some observation of myself and my peers and some theoretical knowledge (For example, http://psiexp.ss.uci.edu/research/teaching/Tversky_Kahneman_1974.pdf , well-known social cognitive biases such as the Fundamental Attribution Error, the “cached thought” concept that is well-known to lesswrong readers, etc.), and I’ve come up with a rough strategy for mitigating it, which I think has been reasonably successful. I’d be very interested in knowing through some rigorous means whether this bears out in aggregate, but I can’t point to any particular research that’s been done, so I’ll leave it as a fuzzy claim about a tendency I’ve observed; I don’t claim that I would need extremely strong evidence to be convinced otherwise.
That’s a very common situation at parties where you circulate among a bunch of unknown to you people.
I agree, and I’m sure your heuristics are well-tuned for choosing who to talk to at parties given options that fit your criteria. The problem of having a social network limited by an unreasonably high minimum-intelligence requirement for interest in a person may not be one that you have, and even if you do, I suspect that it is seldom going to come up at a party you intentionally went to.
Nope, that is thinking correctly. Clear thinking is a bit difficult to put into words, it’s more of a “I know it when I see it” thing. Maybe define it as tactical awareness of one’s statements (or thoughts) -- being easily able to see the implications, consequences, contradictions, reinforcing connections, etc. of the claim that you’re making?
I’d think that would be more succinctly stated as “thorough” (It actually doesn’t matter, you defined your term well enough so I’m glad to use it, but it strikes me as a counterintuitive use of “clear”), but I still think it’s a poor indicator. People sufficiently good at rehearsed explanations of an opinion or knowledge domain can sound much more like they’ve thought through {implications, consequences, contradictions, reinforcing connections} of their statement than someone who is thinking clearly (Even in that sense) but improvising, even if the improviser has a significantly higher IQ, for example.
I also don’t deny that there may exist ways you can conversationally prod someone into revealing more about whatever intelligence measure you care about by e.g. forcing them to improvise, but a really well-articulated network of cached thoughts can be installed in a wide intelligence-variance of people, and it’s a lot easier to jump a small inferential distance from a cached thought quickly than to generate one on the fly, and the former can be accomplished by being well-read.
I don’t think I would agree. Making fine distinctions, maybe, but in a sufficiently diverse set there is rarely any confusion as to who’s in the left tail and who’s in the right tail. And I found that my perceptions of how smart people are correlate well with IQ proxies (like SAT scores).
I am willing to believe that some people are able to calibrate their IQ-sense well. I’m even more willing to believe that almost everyone believes that they are. I would bet that people who are around diverse groups of people willing to report proxy-IQ measures often are likely to get good at it over time. I think that IQ is a pretty good measurement for a lot of purposes, and that there’s a tendency in lay circles to undervalue it as a measure of a person’s intelligence (In the vague socially-applicable sense we’re talking about. Let’s say “thinking correctly and clearly” for the sake of argument). I think there’s a tendency in high-IQ circles to overvalue it. I’ll agree that there’s definitely an IQ-floor below which I’ve seldom met interesting people, but beyond that, there’s too much variation in other factors to reliably rule out e.g. extremely smart but hidebound people who have domain-specific expertise and are not that interesting to talk to about anything else.
At any rate, I think we’ve moved off track here. Rest assured, I’m not trying to claim that no one is good at discerning the intelligence of other people (or especially just their IQ. If you’re willing to operationally equate those then moot point I guess), I’m just suggesting that most people are bad at it, and even people who are good at it probably aren’t as good as they think they are. I’m also suggesting the following:
It’s entirely plausible that people who feel isolated, socially inept, and unable to have meaningful conversations with people are in a self-fulfilling prophecy due to using bad heuristics to determine intelligence and getting into a confirmation-bias/social signaling feedback loop that makes them unable to change their mind about said people (Illusion of transparency notwithstanding, it’s not hard for a lot of people to pick up on someone thinking they’re an idiot and not wanting to open up to them as a consequence).
Ignoring the vague “intelligence” label and trying to get at more granular aspects of people’s personality, competencies, etc. is a good way to break what may be a cached speed-optimization rather than a good classification scheme. You can even use things you believe to be components of “intelligence” as your indicators if you like, that’s a good way to make your notion of “intelligence” more concrete at the very least.
Viewing people in terms of their strengths is a good exercise for respecting them more and being better able to relate to them and utilize them for things they are good at. Relatedly, viewing people in terms of their weaknesses is a good exercise that can help break the “idolization” anti-pattern (Or test your assumptions about how to compete with them).
Hi.
I guess I have some abstract notion of wanting to contribute, but tend not to speak up when I don’t have anything particularly interesting to say. Maybe at some point I will think I have something interesting to say. In the meantime, I’ve enjoyed lurking thus far and at least believe I’ve learned a lot, so that’s cool.