Open thread, Oct. 6 - Oct. 12, 2014
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
On the suggestion of Gunnar_Zarncke, this comment has been transformed into a Discussion post.
This should at least be in Discussion. It is very valuable, high-level feedback about the value of LessWrong.
If you agree, and if you want to avoid duplicating it, you can remove the body of the text and replace it with a link to the Discussion post.
I’ll do that. However, Peter Hurford listed his similar experience below. This post can generate even more value (and I don’t mean karma, I mean people dovetailing on it) with better, stronger examples than just the ones I’ve provided above. I’m thinking this could be a repository, and a catalyst for ever more users to get value out of Less Wrong as both a community and a resource. If you know of other users with similar experiences, please ask them if they’d be willing to share their stories, and include them in this post.
Ways to act on this idea:
Collaborate with Peter Hurford to jointly post this.
Create a Positive LW Experience Thread
Create a LW Wiki page where this can be collected.
I think the last has the most permanent effect, but by itself is likely to receive few contributions.
I had a similar experience asking about my career choices.
Gunnar_Zarncke also commented that I should at least turn my above comment into a post in Discussion. Before I do that, or go on to post it to Main if the reception goes well enough, I’d like to strengthen my own post by including your experience in it. I mean, the point I made above seems to be making enough headway on the little I did alone, and if the weight of your clout as a well-known effective altruist and rationalist is thrown behind it, I believe we could gain even more traction in generating positive externalities by encouraging others.
I remember there was a ‘Less Wrong as a social catalyst’ thread several months ago we both posted in, found valuable, and got great receptions for the feedback we provided. I might mine the comments there for similar experiences, message some users, and see if they don’t mind doing this. If you know of other friends, or peers, on Less Wrong, who have had a similar experience, I’d encourage you to get them on board as well. The more examples we can provide, of a more diverse base of users, the stronger case we can build. In doing so, I’d attribute you as a co-author/collaborator/provider of feedback when I make this a post in its own right.
Sounds good to me. I’ve wanted to write a “what EA/LW has done for me” post for a while and may still do so.
Gunnar got enough upvotes for merely suggesting that I post this in Discussion that it shows a lot of promise. I didn’t anticipate this, and now I’m feeling ambitious. More than just generating a single thread of positive Less Wrong responses, I’d prefer a call for any members of the site with a deep, broad enough experience of getting great advice from Less Wrong to make a post of their own, as I will. So, yes, make your own.
However, if we can get others to come out of the woodwork to write reports, inspire more users to ask personal questions of Less Wrong, and then get them to turn that into future posts, there’s potential for personal growth for dozens of users on this site. I wouldn’t call it a chain reaction, per se, but I anticipate an unknown unknown value of positive externalities to be generated, and I want us to capture that value.
At the rationality meetup today, there was a great newcomer. He’s read most of Eliezer Yudkowsky’s original Sequences up to 2010, and he’s also read a handful of posts promoted on the front page. As a landing pad for the rationalist community, Less Wrong seems to me to be about updating beyond the abstract reasoning principles of philosophy past, toward realizing that, through a combination of microeconomics, probability theory, decision theory, cognitive science, social psychology, and information theory, humans can each hack their own minds, notice how they use heuristics, and increase the rate at which they form functional beliefs and achieve their goals.
Then, I think about how if someone has only been following the rationalist community of Less Wrong for the last few years, and then they come to a meetup for the first time in 2014, everyone else who’s been around for a few years will be talking about things that don’t seem to fit with the above model of what the rationalist community is about. Putting myself back into a newcomer/outsider perspective, here are some memes that don’t seem to immediately, obviously follow from ‘cultivating rationality habits’:
Citing Moloch, an ancient demon, as a metaphorical source of all the problems humanity currently faces.
How a long series of essays yearning for the days of yore has led to intensely insular discussion of polarized contrarian social movements. This doesn’t square with how Less Wrong has historically avoided political debates because of how they often drift to ideological bickering, name-calling, and signaling allegiance to a coalition. Such debates aren’t usually conducive to everyone reaching more accurate conclusions together, but we’re having them anyway.
Some of us reversing our previous opinions on what’s fundamentally true, or false.
Less Wrong also welcomes discussion of contrarian and controversial ideas, such as cryopreservation and transhumanism. If this is the first thing somebody learns about Less Wrong through the grapevine, the first independent sources they come across may be rather unflattering of the community as a whole, and disproportionately cynical about what most of us actually believe. Furthermore, controversy attracts media coverage like moths to a flame, which hasn’t gone too well for Less Wrong, and which falsely paints divergent opinions as our majority beliefs.
I’m not calling for Less Wrong to write a press coverage package, or protocol. However, I want to foster a local community in which I can discuss cognitive science, and the applications of microeconomics to everyday life, without new friends getting hung up on the weird beliefs they associate me with.
Additionally, in growing the local meetup, my friends and I in Vancouver have gone to other meetups and seeded the idea that it’s worth our friends’ time to check out Less Wrong. We’ve made waves to the point that a local student newspaper may want to publish an article about what Less Wrong is about, and profile some of my friends in particular. However, this has backfired to the point where I meet new people, or talk to old friends, and they’re associating me with creepy beliefs I don’t follow. It sucks that I feel I might have to do damage control for my personal standing in a close-knit community. So, I’m going to try writing another post detailing all the boring, useful ideas on Less Wrong nobody else notices, such as Luke’s posts about scientific self-help, or Scott’s great arguments in favor of niceness, community, and having better debates by interpreting your opponent’s arguments charitably, or the repositories of useful resources.
If you have links/resources about the most boring useful ideas on Less Wrong, or an introduction that highlights, e.g., all the discourse of Less Wrong which is merely the practical applications of scientific insight for everyday life, please share them below. I’ll try including them in whatever guide I generate.
I think one of the things worth noting about LW is that Holden Karnofsky’s Thoughts on the Singularity Institute is the top-rated post. LW is a space where you can argue against the orthodox views if you bring arguments. This distinguishes LW from nearly every other online forum.
I don’t think that online forums need media exposure. The usual way to find an online forum is through a Google search or through a shared link to a discussion.
Holden Karnofsky is a high-status person, which is the most important factor. I don’t think the same criticism by someone else would have received as many upvotes.
If that were the case, all posts by high-status people would get a high number of votes. I think it’s hard to explain via status why Karnofsky’s post got more votes than any single post by Yudkowsky from the Sequences.
The big deal is him being a high status outsider who made a contribution with a great deal of effort in it. It can be taken for granted that high-status insiders make many contributions.
How many online communities are there that consider outsiders to be high status to the extent that the highest-rated post is by an outsider?
It’s about a surrounding society’s measure of status, not about the community’s. Celebrity-outsiders (high status on the outside, indeterminate status on the inside) dropping in at Reddit often get a very positive reception for example. Random-person-outsiders (indeterminate status both outside and inside) get the random person outsider reception. The drop-in celebrities at Reddit probably don’t net the top ratings for the whole site, given how Reddit is huge, but a small forum that doesn’t have that much inside vote activity could easily end up treating an interesting high outside status person dropping in as the most interesting event in the forum history.
I don’t think Holden Karnofsky is high status as far as society goes. Outside of people with interest in Effective Altruism he’s just a random person running an NGO.
Social status has many dimensions. Credible professionalism and a position to affect an organization with notable resources and visibility are pretty robust ones.
GiveWell has $1 million in revenue per year. There are plenty of organisations of that size. I don’t think it’s a large amount of resources.
Personally, I upvoted that post for cogency and pertinence. I didn’t know enough about Holden Karnofsky to distinguish him systematically from background until that post.
I’m curious. Who is Holden Karnofsky high-status to, in your opinion? I mean, I acknowledge that this website, and effective altruism, and maybe a subset of the philanthropy community in the United States, is very enthusiastic about the work he does. If I weren’t, I wouldn’t have given GiveWell $1000 USD last year.
However, my friends from outside the efficient charity cluster don’t know who he is, and I doubt would update to extolling his greatness as soon as I explained what he does anyway.
Of course status is highly context and group specific. Status is relative, not absolute. He has high status in effective altruism / rationality cluster because he’s probably the most highly accomplished in this group.
No, I don’t think that is the key difference. I think the reason that SIAI (at the time) paid attention to Karnofsky is that he was willing to signal his in-group membership and speak the local jargon, thereby preventing his criticisms from being immediately dismissed. (I think MIRI has gotten better about this lately, but they’ve been pitching themselves so high-status that it’s screwing with my intuition about their likely behavior :/)
Holden Karnofsky is great, and Less Wrong is a great discussion board and community for being so receptive to arguments against orthodox views. If he identified as a rationalist, I’m sure this community would be fine counting Holden Karnofsky among themselves. However, some media coverage Less Wrong has received is the way it is exactly because bloggers, or journalists, or whoever, don’t come to this site to have a dialogue, with both sides learning something from each other.
I wrote this comment in the moment without much forethought, so I didn’t clarify myself enough. I haven’t invited a student journalist to write an article about Less Wrong to get good press coverage because others are worse. The publication is small enough that it wouldn’t get enough traffic to change the outside cultural perspective on Less Wrong’s culture anyway. One of the editors mentioned to this student journalist that I’m an organizer for the local meetup, and he came to me with lots of questions. Before he asked, he mentioned his impression thus far of Less Wrong was that it was full of ‘hyper-rationalist pseudoscience’, and that a typical belief of Less Wrong was that of a fear-inspiring imaginary counterfactual monster I need not mention by name.
Anyway, in particular, he may want to profile the local meetup. So, I could let him go on impressions he gets from Slate, and RationalWiki, alone, or he could talk to me, and get an impression that Less Wrong is about literally anything else besides fringe transhumanism.
If the article really becomes a thing, I will invite the journalist to interface with Less Wrong as Holden has. If the article is about ‘what is this intellectual community we [the readership] have heard popping up in town, and what do they believe?’, I will now direct him to the Less Wrong survey results. You’ve inspired me to do this with your feedback, ChristianKl, so thanks.
Why not?
A better Wiki page on the community could be a start. Maybe the Wiki’s entry on “Less Wrong” (with a space in between) should redirect there, rather than to Eliezer’s page (as it currently does), and that might attract attention from people who first google the term.
At the time I wrote the original comment, I didn’t want to come across as going on a crusade to change everything out of some sort of over-reaction, so I downplayed what could have been construed as my intentions. However, I don’t see a problem with creating it. I don’t know if I’ll do this myself. What I will do, though, is post a comment in the next open thread asking if there are any changes Less Wrong members want to see made to the Less Wrong Wiki. It’s a neglected but valuable resource that could become even better with more additions.
Awesome. Thanks for taking the initiative!
The Boring Advice Repository obviously.
And the repository repository (which, sadly, does not contain itself).
I’d say that the Moloch thing isn’t that much more weird than our other local eschatological shorthands (“FAI”, “Great Filter”, etc.). That’s just my insider’s perspective, though, so take with many grains of salt.
I believe you’re right. I’m not familiar with the Great Filter being lambasted outside of Less Wrong, but I, and the people I know personally, have generally discussed the Great Filter less than we have Friendly A.I. On one hand, the Great Filter seems more associated with Overcoming Bias, so coverage of it is tangential to, and has a neutral impact upon, Less Wrong. On the other hand, I spend more time on this website, so my impression could be due to the availability heuristic only. In that case, please share any outside media coverage of the Great Filter you know of.
Anyway, I chose Moloch to stay current, and also because citing a baby-eating demon as the destroyer of the world seems even more eschatological than Friendly A.I. So, Moloch strikes me as potentially even more prone to misinterpretation. (un)Friendly A.I. has already been wholly conflated with a scandal about a counterfactual monster that need not be named. It seems to me that that could snowball into misinterpretations of Moloch bouncing across the blogosphere like a game of Broken Telephone until there’s a widely-read article about Less Wrong having gone about atheism, and rationalism, so thoroughly wrong that it flipped back around to ancient religions.
The fact that Less Wrong periodically has to do damage control because there is even anything on this website that can be misinterpreted as eschatology seems demonstrative of a persistent image problem. Morosely, the fact that the outside perspective misinterprets something from this site as dangerous eschatology, perhaps because someone would have to read lots of now relatively obscure blog posts to otherwise grok it, doesn’t surprise me too much.
This seems pretty unlikely to me. I think the key difference from the uFAI thing is that if you ask a Less Wrong regular “So what is this ‘unFriendly AI’ thing you all talk about? It can’t possibly be as ridiculous as what that article on Slate was saying, can it?”*, then the answer you get will probably sound exactly as silly as the caricature, if not worse, and you’re likely to conclude that LW is some kind of crazy cult or something.
On the other hand, if you ask your LW-regular friend “So what’s the deal with this ‘Moloch’ thing? You guys don’t really believe in baby-eating demons, do you?”, they’ll say something like “What? No, of course not. We just use ‘Moloch’ as a sort of metaphorical shorthand for a certain kind of organizational failure. It comes from this great essay, you should read it...” which is all perfectly reasonable and will do nothing to perpetuate the original ludicrous rumor. Nobody will say “I know somebody who goes on LessWrong, and he says they really do worship the blasphemous gods of ancient Mesopotamia!”, so the rumor will have much less plausibility and will be easily debunked.
* Overall this is of course an outdated example, since MIRI/FHI/etc. have pulled a spectacular public makeover of Friendly AI in the past year or so.
Please note that none of those links points to a LessWrong page. They are two personal blogs. Personal blogs don’t have to follow LW policies.
I consider Moldbug almost completely irrelevant for LW. He has a few fans here, but they are a tiny minority (probably fewer than e.g. religious LW members). We don’t consider him a rationalist blogger, and don’t link to him in a list of rationalist blogs.
Scott is a LW member who has posted a few articles here; that is much more relevant. But anyway, SSC is his personal blog. (Also, his articles seem sufficiently sane to me—I would love to see more political debates be done like this.)
I guess we need a definition of some core principles of the LW community, so the newcomers know what is canonical and what is not. May I suggest the Sequences?
This seems like a significant understatement given that Scott has the second highest karma of all-time on LW (after only Eliezer). Even if he doesn’t post much here directly anymore, he’s still probably the biggest thought leader the broader rational community has right now.
I agree with ahbwramc. Going From California With An Aching In My Heart doesn’t seem to be something written by someone only kinda involved with the rationalist community.
First of all, mea culpa.
I should have provided more context to assuage confusion. The Talon is an alternative social justice publication at a local university. Their editorial board overlaps with the skeptic community in Vancouver itself, which is quite insular, which overlaps with the rationality meetup in Vancouver, too.
There has been some ideological bickering, name-calling, and signaling allegiance to a coalition of classic skeptic community v. Less Wrong perspectives on the Internet, and at various meetups, maybe at pubs, in Vancouver. I myself, among others, may not have engaged in discussions, or debates, as judiciously as would have been prudent. This also involved arguments over articles written on Slate Star Codex, which ‘social justice warriors’, as some call them(selves), find upsetting.
However, none of us here on Less Wrong knew there was enough chatter going around that the first time I met this journalist, he knew who I was, and asked me why my friends held such peculiar beliefs that are out of line with mainstream scientific consensus if we’re ‘rationalists’. He was a friendly guy I actually like, but his misconceptions seemed worrisome if he wanted to profile people I know personally. I don’t want a schism rising in my neck of the woods where my friends and I are seen as kooky neckbeards as soon as we enter a public space.
Yeah, when someone is very famous on LW, then even if they publish something on their private blog, it feels like an “idea connected with LW”, especially if the readerships overlap. :(
No idea what to do about this. I support Scott’s right to write whatever he wants on his blog; and the rules of LW do not apply to his blog. On the other hand, yes, people will see the connection anyway. It’s like when someone is a celebrity: they lose their private life, because everything they do is food for gossip.
(Heck, Scott doesn’t even write under the same name on LW and SSC. But everyone knows anyway. What a horrible thing; not only one has to hide their true name, but even keep their individual pseudonyms hidden from each other.)
Uhm, I missed the connection somewhere. As far as I know, social justice warriors are not mainstream scientific consensus. And Scott doesn’t blog about many-worlds interpretation of quantum physics. :)
Okay, now seriously. I think you maybe overestimate the mainstream status of SJWs. What’s upsetting for them is not necessarily upsetting for an average person. And optimizing for them… pretty much means following their doctrines, or avoiding discussing any social issues.
(Connotationally: I am not saying “upsetting SJWs is okay”, although I am also not saying it isn’t. Just that SJWs are not mainstream. So do we worry about the image in the eyes of mainstream, or in the eyes of SJWs?)
Right, obviously, I should have thought of this. The skeptic movement tends to be alternative, and socially liberal, and Vancouver city is full of skeptics who are also activists. ‘Vancouver Rationalists’ overlaps with the ‘Vancouver Skeptics’, and sometimes we talk to them without always being humble enough. Among these people are a few friends.
Let’s put ourselves in their shoes
We’re a bunch of people who feel (society is) threatened by others’ abuse of social privilege. Not always, and not by most of them, but we notice much of this type of abuse is at the hands of white males. Now we notice a bunch of one type of white male showing up at our safe spaces, often talking about this online community of (mostly) the one same type of white males. This community of (mostly) white males seems to disdain political activism and seem like they might be the same type of male jerks at college who say women can’t do math and science. And this online community believes they’re so good at science they can figure out even what Ph.D’s can, which doesn’t line up with skepticism. And the most popular white male blogger in this community should be allowed a safe space where anyone can say triggering things without using trigger warnings, they think we’re too politically correct, and they think there’s not enough evidence behind our activism.
...and back in our own shoes
Imagining the above, even if it’s oversimplifying, makes it seem obvious how some poor communication begetting tension could arise in Vancouver, if not other places.
*(a better, more sensitive word than ‘warrior’)
An unavoidable consequence of promoting rationality is upsetting the irrational.
I am not quite sure what you are saying here. It does sound like you want LW to change so that it becomes more acceptable to your new friends, and that seems to me a strange way to approach things.
For the last few years, my friend Eric and I have been part of the skeptic community in Vancouver. He had been involved with the rationalist community for a couple of years before I was, and then I eventually came around. After having each gone to CFAR workshops, Eric, a couple other CFAR alumni friends, and myself returned to Vancouver inspired and excited to seed a community as vibrant as that in the Bay Area. So, we go to other meetups for skeptics, and the like, and discuss their ideas, and tell them if they want to expand the sort of thinking going on at skeptics meetups to novel topics, to join us at our Less Wrong meetup.
We have also reached out to some local university clubs, the local Bitcoin scene, and the life extension community. This has gone phenomenally. I feel like we’re finally putting all the pieces of the correct contrarian cluster puzzle together. ‘Hanging out with my closest friends’ and ‘learning important things with others’ are synonymous in my social life.
However, with the few skeptics groups, with a misplaced explanation of a technological singularity here, and a heated debate on cryonics my other friend had over there, I’ve met people at parties asking me why I hold peculiar beliefs that I don’t hold. The freethought community in Vancouver is very insular, as over half the city, by census data, identifies as not belonging to a major religious denomination. We got too enthusiastic in growing the meetup, turned some people off, and gossip started. If an article is written poorly, then, not for all of Less Wrong but for my friends and me in particular, the pattern could become crystallized that we’re kooks only pretending to be freethinkers. This wouldn’t bode well, but in collaboration, I can help decrease distrust and strengthen bonds between two communities that seem like they should be allies rather than enemies. This doesn’t affect the whole community, maybe just my corner of it. Suggestions are welcome.
So you got pattern-matched to something you’re not. And? That’s very common and will NOT be fixed by any changes in LW.
That still is not something that can be fixed by LW changing.
Also, LW is a global forum. You should expect that a community local to one city will find many strange things in a global forum.
Well, my original post started out with one thesis that morphed into another by the time I finished writing it. At this point, with what I’ve learned in dialogue with other users, it’s clear that, of course, I can’t change Less Wrong. I didn’t want to in the first place, anyhow. However, what’s going on is that Less Wrong was my impetus in generating a conundrum I may now mitigate, and I thought I’d return to Less Wrong to ask for its advice on handling the issue. This makes all the more sense because the community can share with me their maybe similar impressions, or experiences, because the community is made of people.
In this regard, this is how I should have thought from the beginning. What others I know personally think of Less Wrong is a feature, but not the source, of my problem.
Well, I have the same opinion of most of the people calling themselves “freethinkers”.
Obviously you should ditch your new friends, if they’re not willing to sign on to our awesome community!
I started keeping a diary about a month ago. The two initial reasons I had for adopting this habit were that, first of all, I thought that I would enjoy writing, and second of all, I wanted to have something relaxing to do for half an hour before my bedtime every evening, because I often have trouble getting to sleep at night.
I have found that I generally end up writing about my day-to-day social interactions in my journals. One really nice benefit of keeping a journal that I hadn’t expected to reap was that writing has helped me weakly precommit to performing certain actions that help me improve at being sociable. For example, a few weeks back, there were a couple nights where I wrote about how I felt bad about how a new transfer student to my school didn’t seem to know anyone in the class which we had together. A couple days after writing about this, I ended up asking him to hang out with me, which was something that I normally would have been too shy to do.
Another thing that I learned is that writing about your problems can help you digest them in ways which are helpful to you. On a meta- level, I think that writing about my social interactions with others has helped me realize that I want to spend more time with my friends, at the expense of spending less time reading through e.g. posts on Reddit. Looking back on things, it is painfully obvious to me that spending time with my friends is much better than spending time on random internet sites, though I hadn’t explicitly realized that I had been failing to spend time with my friends until I ended up writing about the fact that this was the case.
Actually, before I had even started journaling, I had known that thinking about problems by writing about them or making diagrams was, in general, a helpful thing to do—after all, plenty of people benefit from drawing pictures when stuck on, say, math problems. However, it wasn’t previously obvious to me that problems other than math and science problems could be analyzed by writing about them or drawing diagrams that represented the problem. Basically, I found a way (which was previously unknown to me) to identify and solve problems in my life.
Journaling, writing a diary, expressive writing, writing therapy. This practice goes by many names and seems to be effective in psychologically assisting a person.
Personally, I am forming a habit of writing at least 750 words a day in my diary on the computer. It seems to help me recognise trends, and I can’t argue with what I have written down; it is plainly and simply there.
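As an aside, if you keep each day’s entry as plain text, a short script can check whether you hit the 750-word target and make the trends visible. This is just an illustrative sketch, not anything from this thread: the entry dates and texts below are made-up stand-ins for however you actually store your diary.

```python
import re

TARGET = 750  # the daily word-count goal mentioned above

def word_count(text):
    # Count whitespace-separated tokens as words.
    return len(re.findall(r"\S+", text))

def daily_counts(entries):
    # entries: dict mapping date string -> entry text.
    # Returns (date, words, met_target) tuples sorted by date.
    return [(date, word_count(text), word_count(text) >= TARGET)
            for date, text in sorted(entries.items())]

# Hypothetical entries standing in for files like 2014-10-06.txt.
entries = {
    "2014-10-06": "word " * 800,
    "2014-10-07": "word " * 500,
}

for date, words, met in daily_counts(entries):
    print(date, words, "ok" if met else "short")
```

Reading the entries from dated files and plotting the counts over a month would show at a glance whether the habit is sticking.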
No such thing.
For any given problem, once a possible solution is reached, do you expect to be able to check that solution against reality with further observations? If so, you have constructed a theory with experimental implications, and are doing Science. If not, you have derived the truth, falsehood, or invalidity of a particular statement from a core set of axioms, and are doing Math.
I enjoy keeping a diary, to crystallise thoughts and experiences, but to restrain my tendency to blather it’s a diary of haikus.
Don’t you mean:
I’ve a diary
To get my thoughts in order
This is how it works:
To keep myself terse
All entries must be haikus
Thus I don’t ramble.
[EDITED to add: of course strictly these aren’t actually haiku since the 5-7-5 thing is just a surface feature, but I conjecture BenSix’s diary entries also mostly aren’t.]
Indeed. I attempt to juxtapose ideas but often there is too pressing a need to juxtapose my head and a pillow.
Beauty and reason
In reports of every day
Rationality
We should probably
Stop the running joke right here.
What is this, Reddit?
Here’s a fun game: concepts, ideas, institutions and features of the world we (let’s say 21st Century Westerners) think of as obvious, but aren’t necessarily so. Extra points for particularly visceral or captivating cases.
For example: at some point in human history, the idea of a false identity or alias wouldn’t have even made sense, because everyone you met would be either known to you or a novel outsider. These days, anyone familiar with, say, Batman, understands the concept of an assumed identity, it’s that endemic in our culture. But there presumably must have been a time when you would have had to go to great lengths to explain to someone what an assumed identity was.
A few examples:
Accurate timekeeping and strict schedules (a very famous example). Although sundials and water clocks were known since antiquity, they weren’t very accurate, and the length of an hour varied with the length of the day. It was rare for an average person to have a strict schedule. Even in monasteries and churches, schedules probably could not be very strict: although clocks did strike hours, usually they weren’t very accurate (13th-14th century mechanical clocks had no faces at all, and it wasn’t until the late 17th century that they became precise enough to justify regular use of minute hands), and they would likely be reset at local high noon each day. In fact, it was only after the invention of the pendulum clock by Christiaan Huygens in 1656 that timekeeping became accurate and independent of the length of the day. However, as late as 1773, towns were content to order clocks without minute hands, as they saw no need for them. In 1840 railway time was introduced. It was “the first recorded occasion when different local times were synchronised and a single standard time applied. Railway time was progressively taken up by all railway companies in Great Britain over the following two to three years.” According to Wikipedia, 98% of Great Britain’s public clocks were using GMT by 1855. After the industrial revolution and the invention of the light bulb, most people have schedules which depend on the official time rather than the Sun’s position in the sky.
Historian Roger Ekirch argues that before the industrial revolution, segmented sleep was the dominant form of human slumber in Western civilization, and that the need for eight hours of uninterrupted sleep each night is a myth.
Concepts/analogies/metaphors/models that depend on having certain technologies to be understood. Possible examples: the clockwork universe, the human mind as a computer. Although in some cases it is not clear whether a certain technology was necessary to inspire the creation of the philosophical concept, or whether it was simply a very nice example that helped to elucidate an already existing idea.
Historian David Wootton argues that until the mid-19th century and the discovery of germ theory, physicians did more harm than good to their patients. Nowadays most people expect positive results when they go to the doctor.
Many other inventions changed the landscape of ideas and what is taken for granted (ability to communicate over long distances, ability to store fresh food safely in the fridge (according to a documentary I watched, this was one of the main factors that enabled the growth of cities), large ships, accurate maps with no uncharted territories, etc.).
I think this question is very broad, perhaps too broad.
This raises two questions:
1) Why, despite this, was being a doctor in general a respected and well-paid profession?
2) What would have happened if the use of statistics in medicine had become widespread before germ theory? Could it have led to a ban on medicine?
The faith-healing preacher, the witch-doctor, and the traditional healer are respected professions in the cultures where they occur. The Hippocratic physician was basically the traditional healer of Western civilization. He offered interventions that might kill, might cure, and were certainly impressive.
(It’s worth noting that surgery was not within the traditional province of physicians. The original Hippocratic oath forbids physicians from doing surgery since they were not trained in it.)
That’s not a new idea!
Lewis Thomas (“The Youngest Science”) dates net benefit to well past 1900.
Your first link seems to say that Wootton dates it to antiseptic surgery. But that’s just one good thing, which needn’t balance many bad things. I’ve heard that the harm doctors did increased in the 19th century. For example, Lewis Thomas says that homeopathy was a reaction to the increasing harm of 19th-century drugs. Your second link seems to say that Wootton isn’t talking about net effects, but about doctors doing any good at all. That’s a pretty strong claim.
I don’t know about that—the Odyssey, for example, doesn’t have any trouble with the idea of a false identity...
Technically you are correct, of course. I don’t know if the concept of false identity would have made sense to a paleolithic tribe, and if it did, we can always go earlier until it wouldn’t. But at this stage, a LOT of contemporary concepts would disappear.
As to your game, I think you need to limit it in some way, otherwise too much stuff (from women’s rights to telecommunications) qualifies.
That time is clearly before the Arthurian cycle, which contains several instances of knights taking someone else’s armour and being taken for that person—most famously, Kay the Seneschal and Lancelot. Arguably also before the period in which Greek myths were composed; Zeus occasionally disguises himself as someone’s husband for purposes of seduction. In the Bible, Jacob disguises himself as his brother Esau to obtain their father’s blessing, although admittedly the deception hinges on their father being blind. Mistaken identity seems to be a fairly old concept, then.
Homosexual identity. Over much of human history men and women did engage in homosexual activity, but they didn’t make it a matter of personal identity.
I wonder whether we can distinguish between these two hypotheses:
Formerly, no one (or very nearly no one) regarded homosexuality as a matter of personal identity.
Formerly, people writing books didn’t (openly, at least) regard homosexuality as a matter of personal identity.
I have the impression that until recently most cultures have either (1) regarded same-sex sex as abominable and shameful, or (2) regarded it as a perfectly normal activity for anyone (at least in certain circumstances). In case 1, a few percent of (what we would now call) homosexual people would be best advised to try to avoid being noticed. In case 2, they might be lost in the noise. In neither case is it clear that we’d expect to see much written about (what we would now call) actually homosexual people.
(I am vastly ignorant of history, and would not be very surprised to find that the impression reported in the previous paragraph is wrong.)
We do have writing about people who engage in homosexual activity.
Today being homosexual doesn’t mean “having sex with people of the same sex or even enjoying having sex with people of the same sex”. It’s something much more abstract.
In the middle of the 20th century we see a group of gay people speaking their own language, Polari. That’s something very strange from many historical points of view, and given the situation of Polari today, I don’t think it will take much time until we find it strange too. At the height of Polari, homosexual activity was illegal.
Sure. What did I say that suggested I thought or expected otherwise?
You put that in quotation marks as if I said it or something like it; I didn’t. Of course there is more to being homosexual than having same-sex sex; at the very least homosexuality as understood nowadays involves (1) romantic love as well as sex and (2) a sustained preference for same-sex partners. I’m not sure whether that’s all you’re saying, or whether you’re also saying that (e.g.) there’s a whole lot of history and culture too. If the latter: I agree that there is, but I wouldn’t regard that as strictly part of “homosexual identity”, exactly, nor would I say it seems “obvious” in the same kind of way as the mere existence of homosexuality does (even though maybe in fact until recently there wasn’t any such phenomenon).
Yes, I agree that that’s a peculiar phenomenon. I think it’s part of the transition from “abominable, shameful and illegal” to “accepted and normal”, via “accepted and normal within a somewhat cohesive albeit marginal group”.
I’m not sure whether any of what you wrote is intended as support for the claim that until recently no one regarded homosexuality as a matter of personal identity (as opposed to the weaker claim that until recently people didn’t record instances of homosexuality being regarded as a matter of personal identity). If it is, I’m afraid I’m not seeing how it works. This may indicate that I’m misunderstanding exactly what meaning the term “homosexual identity” has in your original comment.
Being homosexual is today about making a choice to identify as homosexual.
I have a sustained preference for wearing glasses, but wearing glasses isn’t part of my self-identity. I don’t think of myself as a glasses-wearer.
So. Imagine someone—let’s say a man—who is in a long-term romantic and sexual relationship with another man, who has never felt romantically or sexually attracted to women but often has to men, but for whom “identifying as homosexual” is exactly as major a part of his life as “identifying as heterosexual” is for most heterosexual people.
Would you say that that person is, or isn’t, homosexual?
I ask because it’s still not clear to me which of two things you’re saying is now regarded as “obvious” but formerly was largely unknown: (1) homosexual orientation—i.e., people regarding themselves, and being regarded, as primarily attracted to others of the same sex; (2) some stronger notion of homosexual “identity” that involves (e.g.) that identity being a central part of how one consciously identifies oneself, a label that one wears with pride, etc.
I think #1 is certainly widely regarded as “obvious” now and may well have been extremely rare in the past, though for the reasons I’ve given above I am not yet fully convinced that it was extremely rare in the past. I think #2 is certainly a thing that happens now but I’m not sure it’s regarded as “obvious” in the same way (and suspect that if “homosexual identity” is a bigger thing than “heterosexual identity” it’s largely because that’s what often happens with persecuted minorities, and that if—as currently seems likely—society moves further in the direction of treating homosexuality as no weirder or worse than lefthandedness then “homosexual identity” will become less of a big deal). So #2 may be a transient thing.
Incidentally, I see my comments here are getting some downvotes. If whoever’s making them would like to tell me why, there’s a better chance of fixing whatever (if anything) is broken; on rereading what I wrote, I don’t see anything obviously stupid or objectionable in it.
I’m not talking about whether or not the person is homosexual but about whether the person identifies as homosexual.
Of course heterosexual identity mirrors homosexual identity. Those are two sides of the same coin. “Heterosexual” is a word invented in the 19th century.
It also comes with some baggage that considers male-to-male physical intimacy such as hand-holding abnormal, even though that kind of physical intimacy between friends was perfectly normal before the 19th century.
In the 19th century males started to stop engaging in actions such as hand-holding with male friends to signal that they aren’t homosexual. There’s frequently latent homophobia that gets triggered by male-to-male physical intimacy.
In the contact improvisation scene most people don’t have that. Male to male physical intimacy is perfectly fine in that scene. There you have people who value authentic expression instead of playing out roles.
Authentic expression, or just different roles? I’m fairly sure that if I was involved in contact improvisation, it would be the latter for me. That is, these are the customs I see here, so while I am here, I will adopt these customs.
It seems to me that authentic expression is not in opposition to roles, but is orthogonal to them, just as in speech, truthfulness is orthogonal to the language being spoken.
The contact scene does value authenticity very much. If you simply go there, you might start out with trying to copy a role but you would be doing things wrong.
Authenticity is also not something that’s easily faked if you dance with people with good physical perception. Being authentic changes the presence that you have.
But what you said was: “Being homosexual is today about making a choice to identify as homosexual.” and that’s what I was asking about. Did you actually mean “Identifying as homosexual is today about making a choice to identify as homosexual”? ’Cos if so, it’s probably true but doesn’t seem very interesting.
It seems to me that the idea of homosexual identity and the idea of homosexual orientation should be expected to have opposite effects on how much men with an insecure sense of their own masculinity would worry about physical contact with other men.
The concept of homosexual orientation gives them the ability to worry that they might be homosexual, not merely that they might have some attraction to men.
The concept of homosexual identity, on the other hand, gives them the ability to say “well, yes, I’m doing this, but I’m not one of Them” on account of “Them” having a clear boundary rather than just a matter of having one or another set of propensities.
Empirically, it does indeed seem that the emergence of both those things has come along with a new reluctance on men’s part to engage in nonsexual physical intimacy with other men; I suggest it’s the idea of homosexual orientation, not the idea of some stronger sort of homosexual identity, that’s more likely a cause.
(Does anyone have good estimates of (1) when men started being reluctant to engage in physical contact with other men, (2) when the idea of homosexual orientation first emerged, and (3) when the stronger notion of homosexual identity first emerged? According to the OED, the English word “homosexual” seems first to have appeared in 1892, in an English translation of Krafft-Ebing. According to Wikipedia, K-E’s use of the term (in German) is anticipated by an anti-anti-sodomy pamphlet in 1869. Of course the word and the concept may have different histories.)
On this topic, “Love Stories” by Jonathan Katz is an informative source on western social developments around sexual orientation in the 19th century. There’s a particular focus on Walt Whitman (I think it was developed from a paper or lecture on the guy), but with plenty of attention to wider social mores and changes therein.
(1) I believe the turn of the century is when it started shifting in a big way in the United States, but this is a particularly finicky thing to measure and really contingent on geography. In the 1880s, it was still routine for a male visitor to a house to share his bed with other male residents in most places in the US. I am pretty sure it was unusual by World War 2.
(2) The word was invented by what we would now think of as pro-gay activists in mid-19th-century Germany, with the specific goal of creating a concept to describe people with innate, enduring preferences for both sexual and romantic couplings with the same sex. (There was also a fair bit of conflation with what we would now call transgenderism or intersex individuals, with homosexual men having a ‘feminized seed’.) The concept didn’t really cross the language barrier or the Atlantic Ocean until about the last decade of the 19th century.
(3) The oldest real example I can think of is Plato’s Symposium, the myth of Aristophanes. This myth (purporting to explain the origins of romantic love) describes an ur-human race with two faces, four arms, four legs, etc. Some of these had two male halves, some two female, and some had one of each. The gods, being wrathful blokes, cut these ur-humans down the middle, and the two halves are reborn and spend their lives looking for the rest of their body: literally, their ‘other half’. Those with originally all-male or all-female bodies look for their match among members of the same sex, providing a mythological basis for a positive identity much like modern homosexuality. (Note that ancient Greeks in general didn’t seem to take this view as a consensus, often outlawing homosexuality between adult men even as they endorsed homosexual pederasty.)
That’s mostly a function of society becoming affluent enough that people could afford to have a spare bed for when visitors come over.
Also, see Straight: The Surprisingly Short History of Heterosexuality by Hanne Blank.
After all, if homosexuality wasn’t a mental category, heterosexuality couldn’t be, either.
That’s a much more major part of certain heterosexual people’s life than of others, and I’m not sure where the median is (assuming you mean “most” literally).
Agreed. (I agree it varies, and I too am not sure where the median is.) But I take it that if ChristianKl is arguing #2 rather than #1 then he sees “homosexual identity” as a bigger thing than “heterosexual identity” in some sense, and my wording was intended to invite him to consider someone for whom that isn’t so. I can’t nail down the details because I don’t know in exactly what sense Christian (conditional on his intending #2 not #1) does consider homosexual identity a bigger thing than heterosexual identity.
Um, those kinds of low-status subcultures with shades of criminality have had separate dialects for quite a long time.
Note: I found the above link as the first link from Wikipedia’s article on Polari.
Eric Raymond has a fairly good description of historical attitudes towards homosexuality here.
Edit: here is the key paragraph:
ESR’s not basing his “analysis” on anywhere near enough evidence. His claim that he is working from “primary sources” is laughable at best.
And your criticism of his analysis is based on...
Would be improved by more explicit comment on what for you would count as enough evidence and using primary sources.
(That isn’t a coded way of saying you’re wrong.)
There are plenty of comprehensive histories of queerness. ESR just won’t read or believe any of them.
Yes, primary sources screen out secondary sources.
If you have enough primary sources relative to what the secondary sources have, and if your overall grasp of the issue is as good as that of the authors of the secondary sources.
On the other hand, if what you have is what the paragraph quoted by paper-machine suggests, and if you’ve not devoted months of thought and study to the issue (which ESR may or may not have done), it could easily be the case that you’d learn a great deal more if you paid attention to some good secondary sources.
Assuming the authors of the secondary sources are interested in presenting an accurate account, as opposed to believing it is their duty to lie for the “greater good”.
Yup, assuming that. Or at least assuming you can discern any lies well enough that on balance you still benefit from reading. Which is the same thing as you have to assume when reading anything else.
Just out of curiosity, have you made a careful examination of primary sources in order to tell us that
(as opposed to, e.g., a plausible-sounding description that has been fudged “for the greater good”, or that is inaccurate because the selection of sources Eric Raymond happens to have encountered gives a misleading picture, or that is inaccurate because Eric Raymond has misunderstood something or jumped to conclusions that fit his own biases, or whatever)?
… Or is it only people on one side of any argument who should be expected to lie for the greater good, expected not to be interested in truth, and so forth?
Not as careful as Eric but what I have seen agrees with him.
And what sources do you have?
Tentatively: that tolerance and intolerance of strangers should be a matter of law rather than local impulse.
The ideal existed since antiquity, but — as today — wasn’t consistently practiced.
“Do not mistreat or oppress a foreigner, for you were foreigners in Egypt.” — Exodus 22:21
“The foreigner residing among you must be treated as your native-born. Love them as yourself, for you were foreigners in Egypt. I am the LORD your God.” — Leviticus 19:34
“And I charged your judges at that time, ‘Hear the disputes between your people and judge fairly, whether the case is between two Israelites or between an Israelite and a foreigner residing among you.’” — Deuteronomy 1:16
(All quotations NIV.)
The classical world also had related norms of xenia and hospitium.
What do you mean?
Strangers may not have been the best choice of word, but what I meant is that how people who were in more or less outgroups were treated wasn’t so much a matter of public policy. They might be accepted. They might be murdered sporadically. There was no affirmative action, no Jim Crow laws. There were pogroms, but no holocaust.
So, basically, that people-not-from-my-tribe should not be “outlaws” (in the original sense of “outside of the law”)? Essentially, you are talking about the idea of law which covers everyone regardless of who/what they are?
Not just that—instead of just having relations between people shake out under a neutral law, it’s assumed that the government can achieve something better than neutrality.
In the general case, what is “better than neutrality”?
I don’t know whether there is anything better than neutrality, but a great many people seem to think there is.
Low infant mortality. In many time periods, you could expect to witness as many (more?) deaths before adulthood as deaths from old age.
The concept of adolescence:
With the trend towards an expectation of college education, we will need an extended concept to include the early twenties.
Edit: “Emerging adulthood is a phase of the life span between adolescence and full-fledged adulthood, proposed by Jeffrey Arnett in a 2000 article in the American Psychologist.”
Leadership for limited time periods.
They already had it back in at least ancient Greece.
Excluding the concept of “leadership until you get killed”.
Conversely, they also wouldn’t be able to understand the modern totalitarian state.
Obvious notion that shouldn’t be obvious: Getting what you want.
If you’ve had a good education, lived in an affluent society all your life, and learned useful social skills, the notion that goals are achievable will sound ridiculously redundant to you, barely worth pointing out in words.
Hypothesis: Poor societies do not develop game theories.
I’m not sure which way this bears on that, but one of the ancient Greeks, I forget who, seeing ten thousand men prepared for battle, reflected that here also were gathered as many dreams and desires, and pondered how few of them would ever be achieved.
Are you talking about a sense of entitlement to what one wants, or the broader notion of goals as achievable future world-states that one can work towards?
I meant only the latter, but having the latter in your head may lead to the former.
Could you expand on this? And are we using the definition of ‘game theory’: Strategies whose values depend on strategies of other people?
Societies conditioned to hopelessness by daily material frustration do not conceive of a systematized method for satisfying their needs.* They invent gods to plead with, and may backstab each other to ascend in power, but they will not develop an entire theory, involving other-modeling, based on the concept that goals are achievable by careful planning and effort.
*This puts me in a chicken-and-egg situation: What came first, mass-scale agriculture or plant breeding?
Machiavelli’s “The Prince” is very illustrative in that regard. He spends a few pages arguing that a man can indeed control his own fate instead of just being at the mercy of the grace of God.
Interestingly, the answer seems to be “plant breeding”. Evidence of selective breeding of bottle gourd plants predates the Neolithic Revolution, for example.
In the New World, too, it wasn’t uncommon for people to selectively propagate plants without cultivating them; but it’s hard to say whether that predates agriculture on this side of the Atlantic.
But the satisfaction of our non-social needs in a modern environment depends much less on other people’s strategies. Today, you can obtain all your non-social needs with hardly any social interaction: living alone, working from home, buying groceries from strangers, ignoring news and local trends.
In the past, meeting non-social needs required more social support, and could be thwarted more easily by the whims of others. Think of living in a band or tribe level society!
I agree that certain sorts of planning are more modern, but these new forms seem to require less sophisticated social understanding than the old methods. Compare: investing in a retirement fund, versus investing in connections with the next generation because you need them to feed you in your old age.
I am not sure about that—subsistence farming is pretty self-sufficient. Individual, separated homesteads were the norm in several cultures/time periods and given how you don’t count trading with others as social interaction, someone living with his family on a distant farm (without any telecommunications) probably had much less “social support” than a modern nerd spending his time on the ’net.
The family still counts as support from other people.
The stereotype of a person who can actually manage alone is a trapper.
Do you mean someone who hunts animals with traps, or a monk of the Order of La Trappe?
Someone who hunts animals with traps.
It’s more complicated than that. Checking out at the grocery store is low on social modeling. Trade in a barter economy is more social. “Trade” in the sort of gift economy that characterized most previous societies is really really social.
What time periods are we talking about? My model of most historic farming practices still involves things like extended families living in the same area for long periods of time (“clans”), reliance on “group work” for things like harvesting, and the common presence of a relatively close village. In many places, like ancient China, you had very nuanced communal farming systems, centered around shared access to irrigation.
Perhaps more importantly, in the absence of a strong impersonal state, all disputes would be settled in ways that required great demands on social modeling, rather than the straightforward appeal to a justice system.
I am not defending polymathwannabe’s position, I do not support his assertion. My point is, rather, that I am not sure that all the traditional societies required more of social skills / participation than the modern one.
There are a whole bunch of factors at play here. For example, on the one hand in the modern society an individual is, generally speaking, more powerful in the sense of being able to achieve more by himself and that makes his need for social support less. On the other hand, traditional societies were simpler in many ways and required less cooperation and coordination than the contemporary interlinked and interdependent world.
And, of course, all ages had their social butterflies and their hermits. People differ both in their need for social interaction and in the kinds they prefer and that has always been so.
Although I raised a challenge to the original claim, I’m genuinely curious about this, and I don’t feel strongly that I either agree or disagree with it. I don’t mean to claim all traditional or modern societies will have any particular pattern.
I agree with your second paragraph.
I think my current best try is something like: Coordination and cooperation are qualitatively different on different scales. Working on an assembly line (or designing an assembly line, or making business deals regarding an assembly line) is, in one sense, participating in a complex and massive coordination project. But it doesn’t make sense to compare this to the sort of social coordination that happens in interpersonal relationships, whose relative survival importance has generally declined.
To bring this back to the OP, my question is : Is the challenge of interpersonal coordination (or zero-sum status competition) sufficient for people to “conceive of a systematized method for satisfying their needs” that resembles the sort of thinking that we apply today?
Well, if we go back to the OP, I think the claim is just not true. To give an obvious example, some early civilizations utilized massive and complicated irrigation systems. Such systems are clearly a “systematized method of satisfying their needs” which requires “careful planning and effort”. I am not sure what it has to do with interpersonal coordination. Societies have been able to organize masses of people in service of a single goal for a very long time (Stonehenge, the Pyramids, etc.).
Of course, some societies did fail at this and you can still find a few of them hunting for bush meat in the jungle.
Like a full belly and a fertile woman to screw?
That sort of comment assumes that the default human is male, and probably heterosexual male.
The default population of LW is heterosexual male :-) There is no such thing as a default human.
There is no such thing as a default LW user either, as far as I’m aware. “Statistically predominant” is not the same concept as “default”. We can decide whether or not to treat a certain behavior or property or whatever (if any) as the default in a particular context. We can’t in general decide which behavior/property/whatever is the statistically predominant one.
True. Still, for certain basic drives one’s gender matters and writing out two (at least :-D) cases is often too much hassle for a simple comment.
This is what was answered—it’s about what people in general want, it’s not about what the typical LW reader might want.
Now that I think about it, the idea that women should have ambitions outside their families is pretty new so far as I know. So is the idea that everyone should aim for being extraordinary.
My thanks to everyone who voted Lumifer’s comments up—I wasn’t willing to take a karma penalty to reply.
That all women should have ambitions outside of Kinder, Küche, Kirche (children, kitchen, church) might be a new idea, but for women belonging to elites it was acceptable for a long time—ruling queens are certainly not unknown in antiquity: Cleopatra, Boadicea, etc.
Yes, I meant the idea that all women should have ambitions outside the home.
I’m not sure that ruling queens became such because of their ambition, or if they inherited the job.
The idea that melodies, or at least an approximation accurate to within a few cents, can be embedded into a harmonic context. Yet in western art music, it took centuries for this to go from technically achievable but unthinkable to experimental to routine.
Medieval sacred music was a special case in many ways. We have some records (albeit comparatively scant ones) of secular/folk music from pre-Renaissance times, and it was a lot more tonally structured (a more meaningful term than “harmonic”) than that.
I’d believe that; my knowledge of music history isn’t that great and seeing teleology where there isn’t any is an easy mistake.
I guess what I’m saying, speaking very vaguely, is that melodies existing within their own tonal contexts are as old as bone flutes, and their theory goes back at least as far as Pythagoras. And most folk music traditions cooked up their own favorite scale system, which you can just stay in and make music as long as you want to. For that matter, notes in these scale systems can be played as chords and a lot of the combinations make musical sense (often with nicer consonance than is possible with notes that have to respect even temperament).
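As an aside, the “few cents” point can be made concrete. A cent is 1/100 of an equal-tempered semitone, and the deviation of simple just-intonation ratios from 12-tone equal temperament is easy to compute (a quick sketch I’m adding for illustration, not something from the discussion above):

```python
import math

def cents(ratio):
    """Size of an interval given as a frequency ratio, in cents: 1200 * log2(ratio)."""
    return 1200 * math.log2(ratio)

# Simple just-intonation intervals vs. their 12-TET approximations
just_intervals = {"perfect fifth": 3/2, "major third": 5/4, "minor third": 6/5}
equal_semitones = {"perfect fifth": 7, "major third": 4, "minor third": 3}

for name, ratio in just_intervals.items():
    just = cents(ratio)
    tempered = equal_semitones[name] * 100  # each equal-tempered semitone is exactly 100 cents
    print(f"{name}: just {just:.1f} cents vs tempered {tempered}, off by {tempered - just:.1f}")
```

The fifth comes out only about 2 cents flat of the just 3:2, while the equal-tempered major third is roughly 14 cents sharp of the just 5:4, which is one concrete sense in which staying inside a just scale can yield nicer consonance than even temperament allows.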
What western art music and its audience co-evolved into (not necessarily uniquely among music traditions?) was a state where something like the first few bars of the Schubert String Quintet can function. The first violin plays a note twice, with the harmonic context changing under it, driving the melody forward, driving the harmony forward, etc. I should probably have said a non-static harmonic context to be more clear.
The way you raise your children is very important for their life outcomes (common, recent, obvious, and wrong).
Wisdom literature of antiquity contains the same idea.
Sustained, non-trivial economic growth.
(I am less sure about DeLong’s remark, which I’ve excised, that before the Industrial Revolution, living standards were kept firmly in check by the Malthusian trap. The basic conclusion that pre-industrial economic growth was glacial nonetheless stands.)
Given these examples, it might be interesting to add to this thread with examples of ideas assumed to be new that are in fact old.
How about the right to life: if you were deformed, Spartans would have thrown you down a cliff.
I’ve also read, though I’m not presently able to confirm it, that in some Thai societies children did not stay with their biological parents; they were instead put in the hands of a community of elders who would educate them.
In the modern evaluation of historic infanticide practices, we should remember the astronomically high infant mortality rate.
Or perhaps the other way around? :)
Do you mean that we should be careful not to count cases of natural infant mortality as infanticide; or that the high infant mortality rate changes the moral calculus of infanticide; or something else?
I meant “evaluation” only in the limited sense of “understanding the mental states of someone else”. I bring this up for the boring reason that people seem to forget this (most prominently in the butchered interpretation of life-expectancy at birth as being life expectancy at 18, which has only become a good approximation in modern times)
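A toy calculation (with made-up numbers, purely for illustration) shows why life expectancy at birth is such a misleading summary when infant mortality is high:

```python
# Hypothetical pre-modern population: heavy infant mortality, but adults live long lives.
infant_mortality = 0.30        # assumed: fraction dying in the first year of life
avg_age_infant_death = 1.0     # assumed: average age at death for those infants, in years
avg_adult_lifespan = 60.0      # assumed: average age at death for those surviving infancy

# Life expectancy at birth is just the population-weighted average age at death.
e_at_birth = (infant_mortality * avg_age_infant_death
              + (1 - infant_mortality) * avg_adult_lifespan)
print(e_at_birth)  # 42.3 — looks "medieval", even though surviving adults routinely reach 60
```

So a reported life expectancy at birth of ~42 is entirely compatible with adults who make it past childhood typically living to 60, which is exactly the misreading the comment above is pointing at.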
See also modern attitudes toward abortion. At various points in history both would have been considered equally acceptable (at least taking herbs believed to help induce miscarriages) or equally abhorrent.
The Netherlands now allows “aborting” a newborn with a birth defect that would make survival impossible. We’ve come full circle.
Ok, now add the right of the parents to kill any underage child and we’re getting somewhere.
I find it very worrisome that popular culture has turned to glorifying the Spartans. Screw the Spartans.
Modern culture also loves Game of Thrones where leadership is determined by who can kill the most other people.
I wouldn’t worry about it. People are capable of distinguishing something as “cool” while still not really wanting to emulate a given society.
I’ve been thinking about Game of Thrones lately, trying to figure out why it fails to win my interest, and one of my hypotheses is that it focuses too much on the power struggle itself. In a monarchy, the stories I find the least interesting are those of the royal people themselves. I prefer the approach of e.g. The Pillars of the Earth, which dedicates far more attention to the lives of ordinary people and how the consequences of royal decisions affect their everyday lives.
300 was just a soft porn movie.
Whenever we meet the aliens, I’ll feel ashamed to explain to them that it took us so many thousands of generations to arrive at the idea that it’s wrong to beat your wife or kids.
How would you feel about aliens who explain to you that, while they currently endorse all the same values that you endorse, it took their species twice as long to arrive at those values as it took humanity?
I know better than to hold myself as the moral paragon of humanity, and I’m in many ways atypical, so I would feel strongly suspicious of an alien civilization telling me their cultural history led them to hold values exactly identical to mine.
Also, are we talking twice as many Planck times? Twice as many biological generations? Twice as many imperial ages? Twice as many technological transitions? The difficulties they had to face may mean many possible things depending on the exact nature of the discrepancy in time.
So—and I take it this is TheOtherDave’s point—why would you expect the aliens to be any less understanding about how long it took humanity to decide to think of not using domestic violence routinely?
You take it correctly.
I would not feel embarrassed in reference to any expectations of alien history, but in reference to the facts about my species I would have liked to brag about.
How about explaining to them how you got this absurd notion that it’s OK to not beat your wife and kids?
Here’s a proposal: popular books for statistically literate people.
I’ve read several books from the Oxford University Press Very Short Introduction series. I like the general idea of these books: roughly 140 A6 pages concisely introducing a subject, and a list of further reading if you want it.
In practice, the ones on quantitative/scientific disciplines seem to put a lot of time and effort into writing around public ignorance of statistics. Those 140 pages would go a lot further if the author could just assume familiarity with statistical research methods.
This seems like a consistent enough body of knowledge to “factor out” of a lot of material, as much educational material with prerequisites does.
Good idea.
The series includes one on statistics and one on probability. How well do they provide that background?
For that matter, there’s one on causation, although from the table of contents it appears to be 9⁄10 about philosophies of causation and only 1⁄10 about how to discover causes.
Here’s an offer for anyone who writes blog posts or LW articles: I’m willing to proofread as well as provide feedback on your drafts. I would probably give the most useful feedback on material concerning computer science, personal productivity and ethics, as that’s where most of my experience is allocated. However, I’d be glad to read just about anything.
Only LW content?
Nope. The drafts could be for a personal blog as well.
Excellent news.
A promising idea in macroeconomics is that of NGDP level targeting. Instead of targeting the inflation rate, the central bank would try to maintain a trend rate of total spending in the economy. Here’s Scott Sumner’s excellent paper making the case for NGDP level targeting. As economic policy suggestions go, it’s extremely popular among rationalists—I recall Eliezer endorsing it a while back.
At the moment we have real-time market-implied forecasts for a variety of things: commodity prices, interest rates and inflation. These inflation expectations acted as an early warning sign of the great recession. Unfortunately, at present there does not exist a market in NGDP futures, so it’s hard to get real-time information on how the economy as a whole is doing.
Fortunately, Scott Sumner is setting up a prediction market for NGDP targeting in New Zealand. A variety of work, including some by Robin Hanson, suggests that even quite small prediction markets can create much more accurate predictions than teams of experts. The market is in the early stages of creation, but if anyone was interested in supplying technical skills or financial assistance*, this could potentially have a huge payoff. Even if you don’t want to contribute to the project, you could participate in the market when it launches, which would help improve liquidity and aid the quality of predictions.
A few EA types have already donated, and Scott quickly raised his initial target of $30k within a day or so, but it’s plausible that costs might be higher than expected.
There is more discussion here.
Optimization as Hobby
This may belong more in the Rational Diary; however, as it is not an account of any physical efforts, but rather a train of thought, I’ll put it here.
As of late (over the last three months), I’ve been suffering from ennui and listlessness. Several objectives (begin an independent study habit on Spanish and economics, write a novel, contribute to the online efforts of a library or archive) have proved either infeasible or unsuccessful. Some of my other objectives (establish a daily exercise habit, begin writing again, obtain a tutoring job) have been slow in coming to fruition. Difficulties happen and I’m not here to complain about them. Indeed, I’m quite happy with the successes I’ve had. In the past four months, I have obtained a director job at a library, started an exercise routine that is noticeably beneficial, increased my writing output, and learned the history of the American Civil War.
What’s important to me is that these failures have generated a listlessness that I do not like and that seems to specifically rise out of my desire to avoid such listlessness. Many of the objectives I listed are not terminal objectives; most of what I have attempted to do or start in these past few months have been instrumental towards other goals (mostly: obtain a better, more permanent job and leave home for a more engaging location). However, this focus on optimization, combined with the setbacks of several optimizing habits, has made the very idea of optimizing wearisome. When I begin to think or feel “I should try to improve myself,” whatever causes the thought, the result is a lingering angst of, “But I’m not a machine. My goal in life isn’t to optimize. Can’t I just be happy?”
Anytime I argue about making myself happy, I doubt my intentions. “Happy,” of a certain manner, could easily be obtained through vegetation in front of a screen or monitor, absorbing the works of others without applying the content of that work to my own life. Just receiving a constant input of satisfaction, outputting nothing. Indeed, a problem I face is that my family believes this is what I do now. Since many of the resources I use for job hunting or skill learning are online, I spend much of my free time in front of a screen. From without, it appears I am doing just that. Receiving constant satisfaction, producing nothing. This has led to my being accused of laziness, despite now having two jobs. A simple misunderstanding that I don’t take personally, but it can make the ennui worse as a general obstacle in my environment, providing one more factor that needs optimizing.
I know that my terminal values are not to simply absorb constant input because I do not find that existentially satisfying. It does not make me content imagining myself not learning or producing something new. However, with most of the low-hanging fruit plucked, I’ve come to a point of conflict. Thoughts of further optimization disrupt happiness (especially because a lot of my personal optimization has now become quite effective: for instance, despite the excess of conflict it causes in my family, I have been slowly discarding a large portion of the material goods I accumulated during childhood. Excess books, movies, and video games, with the intent to one day discard all video games. This conflicts with my family’s image of me and insults them, as they spent money on those items for me when I was a child, making it seem as if I were throwing away a gift. But now I’m off topic, as this is only tangentially related to the ennui). But I understand that “happiness” in this use is not a terminal value. It’s momentary satisfaction. It’s that desire to say, “Let me take a second to breathe,” but secretly wanting to stretch that second out to a minute, to an hour, to a day.
So, I am trying to create a change in thought regarding optimization. So far, I think of optimization in terms of values. This is my terminal value; this is an instrumental value to reach such an end. It is an economical way of thinking that lets me weigh cost and benefit. This is useful thinking, but because I have begun to associate optimization with ennui and strife, it is no longer effective. Instead, I am now trying to think of optimization as a hobby.
First, I’ll admit this is a stopgap method against ennui, and probably (though I have nothing to prove it) less effective than thinking in a more economical way. Thinking of optimization as a hobby encourages not considering solid costs and benefits. It makes optimization a “can” rather than a “should,” and I lose that pressing desire to really make something work that I have when I have sat down and determined if a way is the right way. But that’s just what I’m looking for. I’m creating associations between that pressing need and defeat, which I do not want. Such an association will cut the legs from under my efforts if I let it build.
Instead, I try to think of optimization as a hobby. This reminds me that it is something I engage in freely, that I have control over (so the successes and failures are mine alone), and is only a single aspect of my life. It disposes of that continuous thought that being “better” is everything, instrumental and terminal alike, which clouds my ability to think about what I want, why I want it, how I want it.
I apologize for this lengthy post that is more personal than intended for others. These have simply been my recent thoughts on optimization and my current manner of approaching the topic. Expressing them here prevents me from changing my mind later and then thinking, “I’ve always really thought like this.”
I think this is actually a decent way to think about it. It assists in giving yourself permission to “turn off” and have mindless leisure when you need it, without worrying that your leisure time is being spent optimally preparing you to work again.
I’m trying to post an article on discussion about a personally important topic, but it’s not going through. Any thoughts? It won’t even save the material in drafts. I haven’t posted much before, but this is a question that I think Less Wrong could be a great help on. It says I have enough karma, so that shouldn’t be the problem.
What is the exact step-by-step process you use to publish the article?
Are there any LWers in the area you could get to look over your shoulder to troubleshoot?
The rest of my probability is in the problem going away when you try posting from a different computer.
It might be easier to organize a quick Teamviewer session. I’d do it if I had ever actually submitted a top-level post.
I have started reading Qualia the Purple, a manga strongly recommended by a few LWers, such as Eliezer and Gwern. In his recommendation, Eliezer wrote: “The manga Qualia the Purple has updated with Ch. 14-15. This is what it looks like to “actually try” at something.”
Does anyone else know good examples of “actually trying” in other media? Over the LW IRC channel, Gwern linked to this page (Warning, TV Tropes), and specifically recommended Monster. Any other suggestions?
The Martian is basically this non-stop.
Thank you, I will take a look at The Martian.
It seems like Qualia the Purple is a manga where after a certain point, the author introduced magic and started giving philosophic explanations for how the main character can do magic, turn into other people, go back in time, and generally do whatever the fuck she wants except save one person. What does “actually try” mean?
Starting from chapter 10, the protagonist dedicates herself to a single goal, and never wavers from that goal no matter what it costs her throughout countless lifetimes. She cheats with many-worlds magic, but it’s a kind of magic that still requires as much hard work as the real thing.
It may be too late now, but would it have been more appropriate to post this under the Media Thread? I was not sure whether the media thread was only for recommending media, or also for asking for recommendations.
[tangential] The price of Bitcoin has been dropping significantly in the past few weeks, and dropped below $300 yesterday. I’ve read many theories as to how this can’t happen, but it is. What’s going on?
It’s a bear market. The price moved from ~$100 to ~$1100 in the fall of 2013. The price action for the past 10 months is a correction of that move. After an 11x price increase, a retracement of 70% is perfectly normal market behavior. This is just the bitcoin boom-and-bust market cycle.
A large holder did sell 30,000 coins yesterday at $300 each. (And in fact, did so in a much less sophisticated way than normal—he simply stuck 30,000 coins out there at a price of $300, and then just sat there. A more sophisticated trader selling in smaller increments could have gotten more money for them.) This action did control the price of bitcoin for a number of hours. It was one small piece of the decline from $1100 in Dec 2013 to $300 now, but obviously it wasn’t the main driver.
There is nothing special about the decline in bitcoin from $1100 to $300. It is merely the result of the fact that the price previously rose from $100 to $1100 in a short time. This is how markets work. The price does not move smoothly in straight lines. It moves three steps forward and two back. It overshoots massively to the upside and to the downside.
It is very hard to tell exactly where the top and the bottom are going to be. Back last November, it would have been hard to guess whether the top would be at $500, or $1100 as it was, or $2000. And it’s hard to guess the bottom now. You might have thought it was done falling when it was at $500. It might be done now. Or it might drop to $200 or lower. (You can make a pretty decent case for $275 on Sunday morning having been the absolute low, however, based on the enormous volume of trading and the extreme distance the price moved away from the moving average. Of course, it is possible that even larger volume and an even more extreme drop could be coming. We will not be able to say for sure what the bottom was until well after the fact.)
The entire market is based on speculation right now, as well as being small enough for a few big players to significantly move the needle. This is a combination that means that one or two people can cause a drop, which causes a mass sell off (the inverse can happen too). Of course, this is a “just so” story… the reality is more complicated.
Point being, you won’t be able to predict bitcoin prices until bitcoin as a payment network and store of value overcomes bitcoin as speculation.
I don’t know why someone would believe it couldn’t happen. The price of bitcoin is determined exactly like stock prices and subject to the same variations based on the same reasons.
The growth of the bitcoin market is below expectations, so people sell their bitcoins to monetize their earnings, and the price drops. That’s economics 101.
Stock prices are anchored to the expected discounted present value of firms’ profits. Bitcoins have no anchor. Think of it this way: If the market went crazy and valued Apple at zero you would do very well to buy the entire company for $1000. But if the market decided to value Bitcoins at zero, you would not want to buy them all up for $1000.
At least with tulip bulbs you can, like, grow tulips.
In five years the go-to example for speculative bubbles that popped might be bitcoins rather than tulip bulbs.
At least some recent research suggests that the Dutch tulip bubble was in fact a tulip contracts bubble, which expanded when legal changes converted commodity futures contracts to options and collapsed when authorities halted trading.
Is there an important difference between a tulip contracts bubble and a tulip bubble?
Sure, there’s the question of whether any actual tulip bulbs were exchanged.
My understanding was that it started as a minor bubble in tulips, expanded into a massive bubble in tulip contracts, then collapsed massively. Is that different?
By the same mechanism by which bitcoin price is anchored to the expected discounted present value of the goods and services you can buy with them. And since there are goods and services you can buy with bitcoin that value is no more nor less arbitrary than the profits generated by some company (maybe not Apple).
Nice try but this approach is consistent with bitcoin having many, many different possible prices including price=0. Think of it this way, if I create a company that does everything Apple does I’m a billionaire, but if I invent a cyber-currency that does everything Bitcoin does I have nothing of value.
I seriously don’t get it. If I have a company that makes products nobody wants, I also have nothing of value. Are you claiming that Apple products are inherently valuable? I don’t see that. Apple gets money because people want their products, and bitcoins have value because people want them. And the price of both Apple stock and bitcoin is determined by how many people want to buy and hold them in expectation of future profits.
This is not true. Your company, presumably, has some assets—maybe a factory, maybe an office building, maybe some inventory, likely some cash in a bank account, etc. If you were to shut down the company which makes products nobody wants and sell its assets, you would end up with some sum of money. This is the company’s residual value or (assuming the accounting is broadly in line with economics and you’re not selling at firesale prices) its book value.
And some liabilities. Apple e.g. in 2013 had 8.71B cash + 1.76B inventories and 16.96B debt. So if suddenly nobody wanted Apple products anymore, guess what the shareholders would get.
According to Yahoo! Finance Apple’s total assets are about $200B and their total liabilities are about $80B. (Or were in late September 2013, the latest for which they give figures.)
(Curiously, the figures there match yours for inventory and debt, but they give a much larger figure for “cash and cash equivalents” than yours. But as the totals indicate, the numbers you mentioned are very far from telling the whole story.)
So, it looks as if Apple has about 6 billion shares, and their net assets minus liabilities are a bit over $100B. So if people suddenly stopped buying their products (in some way that didn’t change the value of their assets, which would be a bit hard but never mind) then each share would be worth about $17.
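The book-value arithmetic in this subthread can be checked in a few lines. (The figures below are the rough numbers quoted above, not authoritative balance-sheet data.)

```python
# Rough Apple figures quoted in this thread (late September 2013).
total_assets = 200e9        # ~$200B total assets
total_liabilities = 80e9    # ~$80B total liabilities
shares_outstanding = 6e9    # ~6 billion shares

# Residual (book) value: what shareholders would split if the company
# were shut down and its assets sold at accounting values.
book_value = total_assets - total_liabilities
book_value_per_share = book_value / shares_outstanding
print(f"book value per share: ${book_value_per_share:.0f}")  # ~$20
```

This matches the ~$20 per share figure quoted elsewhere in the thread; the point stands that even with zero product sales, each share retains a claim on real assets.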
[EDITED to fix an idiotic factor-of-1000 error; oops. Thanks to Lumifer for pointing it out.]
That’s billions (thousands of millions), not millions.
Apple’s book value per share is about $20.
Of course. It’s perfectly possible for a company to have zero or negative book value. Your Apple numbers are quite a bit off, though—look here for example.
The issue, however, is not what the price is determined by—for all tradeable goods in a more or less free market the price is determined by supply and demand and, yes, it is true for both bitcoin and AAPL shares, but it’s also true for tulip bulbs and old baseball cards. The issue is that you said that that bitcoin prices are “anchored” in the same way the equity share prices are anchored and I don’t think this is so.
Most of those goods that you can buy are pegged towards the dollar and not towards bitcoin. As a result they don’t provide for a floor for prices.
Bitcoins are anchored to the cost to produce them. Same as with e.g. gold. E.g. a gold rush means that for some time gold is ‘easy’ to come by.
That only puts an upper limit on the price of a bitcoin. The lower limit is set by what they are useful for, and how much more useful than the 500 or so other cryptocurrencies: transactions and contracts with security and anonymity properties that ordinary money does not provide.
Same as for e.g. gold (at least for those crypto-currencies that use proof-of-work).
They are produced at a set (and exponentially decreasing) rate. The cost to produce them is however much people put into it.
Same as for gold in a gold-rush.
In other words, bitcoins are not anchored to a cost. Same as with e.g. gold.
I don’t understand what “anchored to a cost” means.
In crude terms, things have value and they have a cost to produce. If the value is above the cost, more things like that will be made. If the value is below the cost, no one will make these things. Nothing in that speaks to “anchoring”—the cost does not “anchor” the value.
Bitcoins certainly have a cost to produce (and it’s growing, by design), just like gold. If the value of bitcoins falls below that cost, no one will produce new bitcoins any more.
If there is a fixed cost to produce something, then if the value ever moves above the cost, more will be produced until the value falls to the cost. This means that the value is anchored to the cost. Bitcoins do not have a fixed cost. It would be more accurate to say that the cost of bitcoins is anchored to the value.
Wouldn’t it be capped by the cost (+usual profit)? There is nothing about the cost of production that prevents the value from falling below the cost and all the way to zero.
Why not? It’s not hard to calculate the cost of production (cost of hardware and electricity) for bitcoins. That cost changes with time, but that’s normal for most everything.
That probably would be a better term. I should add that I haven’t been educated much in economics, and if “anchor” is economic jargon, I don’t know it. I was just going by the normal use of the term and the context to guess what the OP was trying to say. Also, pointing out that it’s only anchored from above pokes a hole in the OP’s comment, but since someone else already addressed that I didn’t bother.
Bitcoins are produced at a rate that halves roughly every four years. It doesn’t matter how much effort you put into mining them. Putting more effort into bitcoin mining does not increase the number of bitcoins, and thus does not decrease the price. The price is not dependent on the cost to produce. On the other hand, if the price doubles, then people will put twice as much effort into mining them, and the cost to produce will double. The cost to produce is completely dependent on the price.
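The mechanism behind this point is Bitcoin's difficulty retargeting: every 2016 blocks, difficulty adjusts so blocks keep arriving about every ten minutes regardless of hash power. A simplified sketch (the constants are the real protocol's; the function itself is illustrative, as the actual rule works from block timestamps):

```python
# Simplified sketch of Bitcoin's difficulty retargeting rule.
TARGET_BLOCK_TIME = 600   # seconds (10 minutes per block)
RETARGET_INTERVAL = 2016  # blocks between difficulty adjustments

def retarget(old_difficulty, actual_seconds_for_interval):
    """If blocks came faster than every 10 minutes (more hash power),
    difficulty rises; if slower, it falls. Issuance rate stays fixed
    no matter how much total effort is spent mining."""
    expected = TARGET_BLOCK_TIME * RETARGET_INTERVAL
    ratio = expected / actual_seconds_for_interval
    # The real protocol clamps each adjustment to a factor of 4.
    ratio = max(0.25, min(4.0, ratio))
    return old_difficulty * ratio

# Hash power doubles -> the 2016 blocks take half as long -> difficulty doubles,
# so the same number of coins is produced at twice the cost.
print(retarget(100.0, 600 * 2016 / 2))  # -> 200.0
```

This is why more mining effort raises the cost of production without raising the supply: the causal arrow runs from price to cost, not the other way.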
No, I’m using words in standard meanings: “anchoring” means limited to the vicinity of a particular place or value, and “capped” means can go down but cannot go up above a certain limit.
Yes, under the assumption of many separate agents. However people can cooperate if there are sufficiently large incentives for that and there’s no anti-trust authority to stop them in the Bitcoin world. One mining pool already got over 50% of world capacity at one point—it quickly backed down for obvious reasons, but my first guess at an attractor point (equilibrium in econospeak) would be a duopoly where two mining pools control most of mining and they may or may not collude.
How are you suggesting they manipulate prices? While there are a number of security flaws, none of them allow counterfeiting. You can’t increase the supply beyond what it should be no matter what you do. You can destroy bitcoins by sending them to invalid accounts and cause deflation that way, although I have no idea why you’d want to. You could get over 50% of mining capacity and frequently use it for the 51% attack, causing people to lose trust in bitcoins and making the price fall, but again, it’s not going to help you.
Not prices, but the cost of production. With increased prices you would normally have increased mining efforts which, through competition, would make the cost of production rise towards the price. If you control the amount of mining, you can avoid that and keep the mining effort at the same level. Your profit (price minus cost) then stays high.
It changes with time, but only because the amount being produced changes over time.
First, the cost to produce them is greatly influenced by how many people engage in mining, which is itself determined by the market price of bitcoins, meaning that by this metric there are multiple equilibria.
Second, for any fiat money there is always an equilibrium with price=0.
That’s not technically correct. If the authority issuing the fiat money collects taxes, then that creates demand for it up to the amount required to pay outstanding tax bills (e.g. 1% of land value in the case of a property tax). Since property doesn’t produce fiat money itself, property owners need to sell goods or services to get money.
So there are low equilibria for fiat money, and unstable equilibria for fiat money, but zero is typically not a natural equilibrium.
I agree with your qualification.
The first also applies to e.g. gold.
I do disagree with the second.
No, that does not apply to gold. The cost to produce more gold is influenced by how much people have engaged in mining in the past, and thus extracted the low-cost gold. If everyone but you stopped mining gold, the cost for you, personally, to produce as much gold as was previously being produced would be approximately the same as the cost everyone shared before. If everyone but you stopped producing bitcoin, it would be much, much cheaper for you to produce just as much bitcoin as previously.
Gold is not bitcoin. There are differences, obviously. But there are analogies too. For one, the effort to mine a bitcoin rises, by design, with how much people have engaged in mining in the past.
I’m not clear what you are driving at.
I’m pretty sure the effort (processing power) required to mine a bitcoin is independent of the history of mining, and depends exclusively on how much processing power is currently being spent trying to produce bitcoin. Unless I’ve drastically misunderstood the algorithm for difficulty, which I’m relatively sure I haven’t (thinking of this page specifically).
The consequences of this seem obvious to me; bitcoin is only difficult to acquire when it is perceived as worth acquiring, meaning that a loss of confidence that it has value leads directly to it being much easier to acquire even without having to purchase it from people who have lost confidence.
What you seem to mean is the cost due to competition: multiple miners trying the same block, with the first one to succeed making all the work of the other miners on this block worthless.
I meant the increase in difficulty for later blocks inherent in the algorithm.
One could—though I agree that it stretches the analogy—compare the first to gold miners competing in the same physical location—as has happened during the gold rush. This causes competition not exactly for the same gold veins but for the physical space and other resources around.
The second (algorithmical) increase can be compared to mines becoming sparser and sparser—you have to dig deeper.
The bitcoin supply is limited by design and as you approach the last bitcoin which could be mined, the effort needed increases. That makes the effort needed dependent on the “history of mining”, or, more precisely, on how many bitcoins have already been mined.
Or more precisely, the current year.
No, not at all. Buying stock gives you a legally recognized claim on the company’s assets. Companies are rarely valued below their “book value”, just the price of everything they own (in cases where this happens it usually means that the market thinks the management will waste these assets before someone pries their hands off them). Bitcoin gives you a legally recognized claim on… nothing.
Even if you think of Bitcoin as a bona fide currency, foreign exchange rates are NOT determined “exactly like stock prices”. Thinking bitcoins are equity shares is a category error.
There are (or were) many, many Bitcoin advocates in the world who can’t see it being anything other than deflationary (as there is a limited supply), it does interesting things, etc. Then the world turns around and sends Bitcoins inflationary for this whole year. Empiricism beats praxeology (again).
We can’t live in a world in which the market expects Bitcoins to steadily increase in value compared to the dollar because of the arbitrage opportunities this would create.
Yes we can, because of the relative opportunity costs of holding Bitcoins and dollars. You may as well say we can’t live in a world in which the market expects stocks to steadily increase in value compared to the dollar. Dollars are far more liquid than Bitcoins (or stocks) and, in equilibrium, you have to pay for that liquidity.
If the market expected the price of Bitcoins to steadily increase, people would buy Bitcoins today, increasing Bitcoin’s price, until the price of Bitcoins was high enough that people wouldn’t be confident it would keep increasing.
Again, you are neglecting the cost of holding Bitcoins.
Suppose today a Bitcoin is worth $100, and everyone thinks that Bitcoins will be worth $102 in one year. Anyone could gain an expected $2 by a buy-and-hold strategy on Bitcoins, but that means they will have to hold their money in Bitcoins for the next year, which is not as useful as holding dollars, because it’s much easier to turn their dollars into other assets (or consumption). If people think that this liquidity cost of holding Bitcoins is at least $2, then they will not bid up Bitcoins now.
Moreover, buying-and-holding Bitcoins has an opportunity cost because it means you aren’t (for example) buying stocks, bonds, land, gold, or engaging in immediate consumption. So, if we suppose the market interest rate is 3%, then even though Bitcoins are expected to go up $2 over the next year, no-one is going to bid them up to $102 now, because they can get a $3 return elsewhere.
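The comparison in the two paragraphs above can be made concrete with the thread's own hypothetical numbers ($100 Bitcoin expected to reach $102, against a 3% market rate):

```python
# Thread's hypothetical: Bitcoin at $100 today, expected $102 in a year,
# while alternative investments return 3%.
price_now, price_next_year = 100.0, 102.0
market_rate = 0.03

btc_return = price_next_year / price_now - 1   # 2% expected appreciation
alt_return = market_rate                       # 3% available elsewhere

# Discounting the expected future price at the market rate gives the most
# a rational buyer would bid today -- which is below the current $100,
# let alone the $102 a naive no-arbitrage argument would imply.
fair_bid = price_next_year / (1 + market_rate)
print(f"expected BTC return: {btc_return:.0%}, fair bid today: ${fair_bid:.2f}")
```

So expected appreciation that lags the market rate (plus any liquidity premium) doesn't get bid away, which is the crux of the reply to the "arbitrage" argument.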
Implied by your logic:
There can never be expected appreciation (or depreciation) of any asset.
In particular, expected inflation must always be 0.
People value future consumption as highly as present consumption.
Needless to say, none of these are the case.
As a fraction of total world GDP it is true, relative to the efficient markets hypothesis, that expected inflation must always be 0 and there can never be expected appreciation (or depreciation) of any asset.
And measured in those terms, it is still true that “a world in which the market expects Bitcoins to steadily increase in value compared to the dollar” is impossible for the same reasons James_Miller cited.
Yes, there is a certain amount of appreciation that the world could expect; it could expect Bitcoin to keep up with inflation and the growth of market. But the expected growth of the market takes into account risk, and under a fairly weak analog of the EMH, actual expectations of better-than-market growth would make the price of Bitcoin jump until it reached the same price as other instruments that had that expectation.
No. Suppose we are in a zero-growth economy. If an asset is worth $98 today and is expected to be worth $100 in a year’s time, people will only bid it up to $100 today if they are indifferent between $100 of consumption today and $100 in a year’s time. But people are (rightly) not indifferent, they prefer to consume today—hence even in zero-growth economies, you see a positive real, risk-free rate of interest. And nothing about the liquidity cost depends on a growing economy either.
There is substantial evidence that a giant whale dumped $9m worth of coins at $300. Now that the sell wall is gone, the price is back up.
Otherwise, just the typical accretion phase of a boom-bust cycle.
Evidence or speculation? I saw the $300 sell wall, but that does not account for the previous week’s dip, which is when the “bearwhale” speculation started. I did see plenty of speculation to this end … but humans, particularly bagholders in a bubble, will grasp for any explanation that is not “we were foolish”.
Really, everything is based on the assumption of conspiracy:
December—Just a small market correction after bubble, soon we go up!
February—Price dropped because Mark Karpeles is an incompetent thief. (This one I’ll give them.)
May—China dropped the price, now all the China news is priced in, we go up!
August—Wall Street dropping the price because they want to enter cheap. Hold and we’ll go up!
October—The bearwhale dropped the price, cheap coins that will go up!
It’s a cliche for good reason that everything and its opposite is “great news for Bitcoin!”
The ridiculously inflated prices peaking in December 2013 are almost completely explained by Mt. Gox’s blatant fraud and the Willy and Marcus bots. A decline from that would be the expectation.
So what was the solid evidence for (and against) conspiracy, as opposed to the null hypothesis that this is just one week in a bubble on its way down?
Neither of those are true (probably).
The price is the result of normal market forces, not a ‘conspiracy’ to decrease the price, or ‘manipulation’ upward last November due to bots. All of the ‘manipulation’ talk is complete bullshit. There is nothing at all unusual in the price movements of bitcoin. It is completely normal for an asset that is growing from essentially no value into the billions of dollars range in five years.
If there was a startup company that went public with shares at its inception, until a point of it having a $10 billion market valuation, over a 5 year period, it would look a LOT like the bitcoin price chart. Huge, massive increases in value as it reached new milestones of adoption. Massive contractions as it looked like it might fail. But the thing is, you DONT see the valuation of startup companies like this, because they are owned by a small number of founders and venture capitalists and angel investors. So the bitcoin price looks unusual to most people.
So the first hypothesis, “there is a conspiracy to manipulate the price”, is complete BS. (There are tons of idiots on places like bitcointalk.org that believe things like this, because they are clueless. But this is not a truth about reality. It is a rationalization made up after the fact by people for whom bitcoin is essentially a religion that you must take on faith, to explain why the price is now going down.)
The second hypothesis, “Bitcoin is a bubble on the way down”, is also probably not true. (Though the chance that it is true is a lot greater than the idiotic manipulation theory.) The reason why this is unlikely is that the blockchain is a truly revolutionary technology, whose impact on the world is going to be MUCH greater than a mere 10 billion dollar company. (If you look at bitcoin’s market cap as a valuation of a company, it peaked around $10 billion.)
It might be true that the bitcoin blockchain is about to be surpassed by a competitor. That is, if you think of bitcoin as a startup, as the first startup to pioneer a technology, it is possible that now it is losing its place to a competing startup. If this were true, maybe bitcoin is going down because some altcoin is doing a much better job and is going to surpass it.
Even if bitcoin is going to be eclipsed by a competitor, it is much more likely that bitcoin would have another rise, but during that rise, its successor will greatly outperform it, and eventually surpass it.
To summarize:
Chance that the bitcoin price movements, in the long term, are due to a conspiracy: close to 0%.
Chance that the blockchain technology will just die, taking bitcoin and all altcoins with it: very low.
Chance that bitcoin is currently going straight to $0 because a competitor blockchain technology is surpassing it right now: low.
I do think that bitcoin is at risk of being surpassed by competing blockchains in the future, but not so imminently that it is going straight to zero right now. I think that in the end, both bitcoin and a few different other blockchains will survive. Most of those probably have not even been created yet.
In the same way that there is not only one surviving internet company today, and in fact there are many, in the future there will be many surviving blockchain technologies. This is true, even while the vast majority of the coins currently in existence will fail, just as most of the dot com bubble companies failed.
Yes. The conspiracy theories are rationalizations that have been invented because reality contradicted their belief system. It is absolutely possible for the price to decline the way it did. It could go much lower. It could even go a lot lower and still recover, and go much higher in the future! It already did exactly that in 2011-2013.
Random fluctuations have moved the price that much on a daily basis. The fact that we are trending downward generally is certainly expected—it is exactly what has happened 5x earlier in bitcoin’s history, and numerous other times in speculative bubbles. It’s possible for the near-term trend to be down 90% of the time, and yet the overall long term trend to be up. Indeed, this is expected due to the typical behavior of whales. They moderate demand so that prices continue to gradually fall, all the while accumulating coins. Eventually the bottom is reached when they no longer are in control of demand. Then the bull market starts and you have a very quick run up to an all-new high.
This is a very common pattern. It happens in commodities, it happens in stock markets, it happens in real estate prices. It has repeated over and over in the history of bitcoin.
Yes, this exactly. Its normal market behavior.
For the last few weeks, I’ve been using an alarm app that forces me to take a picture of my front door before it turns off. Previously, I had been using one that forced me to do two difficult arithmetic problems. That meant I woke up mentally, but was still unwilling to leave my bed, and instead spent half an hour checking fb and browsing the net on my phone. Now, the design of the clock forces me to leave my room, which makes it much easier to start my day more quickly. The photo recognition is not great, so normally I need to take 2 or 3 photos before it recognises the door, but this helps me wake up even more. I would highly recommend the app, or something with the same functionality, for people who have difficulty leaving bed in the morning.
Some related advice:
There are alarm apps for your phone that can (imperfectly) detect your sleep phase using the accelerometer and wake you up at optimal times.
Try freeing up your schedule so you don’t have to use an alarm, those things are horrible.
Long-time lurker here; I just recently got accepted to App Academy (a big part of my inspiration for applying came from this post), and I’m really excited to attend some meetups in the area.
I have a few questions for Less Wrong people in the area, and this seemed like a good place to post them:
I’ll be going in December, any chance I’ll have Less Wrong company?
I understand that at least a few folks from here have been to App Academy. Any advice? I’ve got an Associate’s in CS, and none of the prep work they’ve given me is too difficult, but is there anything else I should do to prepare?
They allow students to stay there, but I’m hoping to bring my fiancee with me. Unfortunately, rent seems to be ridiculous and I have no idea where to look (I’ve never moved to a city that wasn’t driving distance from me). What’s the best way to find apartments in SF?
Related to the above, is anyone in the SF area looking for a roommate(or two) starting December? We are clean, quiet, and can be very unobtrusive if need be. The main issue I see is that we would prefer to also bring our cat along.
1) There are a lot of LWers in the SF area. I think Ozy Frantz might be doing App Academy then.
3) Here is a Google Doc for finding LW roommates in the SF bay area.
I’ll be moving there around the same time—look forward to seeing you there!
Thanks! I added myself to that list. I’ll be looking at it in more detail when I get home tonight.
I’m really excited about the density of LWers there, as I tend to do better at in person things than online. Honestly, reading stuff about the community was a big part of the reason I applied. I look forward to meeting you too!
I used to work at App Academy, and have written about my experiences here and here.
You will have a lot of LW company in the Bay Area (including me!) There will be another LWer who isn’t Ozy in that session too.
I’m happy to talk to you in private if you have any more questions.
I’m looking for research (for instance from psychology) showing that detailed feedback and criticism improves performance greatly and would be very grateful if I could get any tips in this regard. (I think I’ve heard that this is the case but can’t find any papers on this.) I’m especially interested in how criticisms of texts can improve authors’ writings but am also interested in examples from other fields.
I’m thinking that such feedback could improve performance via (at least) two mechanisms:
a) It teaches you exactly what you’re doing right and what you’re doing wrong; i.e. it gives you knowledge.
b) It may incentivize you to do things right, if you want to get praise and avoid criticism. The strength of this incentive obviously depends on the context; for instance, public feedback should generally provide you with stronger incentives than private feedback.
This seems intuitively clear, but still it would be nice to have some research saying that the effect of feedback is very strong (if it is, as I suspect). Any help is greatly appreciated.
These pieces seem relevant:
http://www.ncbi.nlm.nih.gov/pubmed/24702833
http://www.ncbi.nlm.nih.gov/pubmed/24005703
http://www.ncbi.nlm.nih.gov/pubmed/23906385
http://www.ncbi.nlm.nih.gov/pubmed/23276127
http://www.ncbi.nlm.nih.gov/pubmed/22352981
http://www.ncbi.nlm.nih.gov/pubmed/17474045
http://www.ncbi.nlm.nih.gov/pubmed/16557357
http://www.ncbi.nlm.nih.gov/pubmed/16033667
http://www.ncbi.nlm.nih.gov/pubmed/12687924
http://www.ncbi.nlm.nih.gov/pubmed/12518979
Thanks a lot! Highly useful. The evidence seems to be mixed, though good feedback does increase performance. Very nice of you to dig up all those links so quickly! :)
Are you looking for evidence to support your beliefs or are you looking for evidence to tell you what your beliefs should be? :-)
On the anecdata basis, results vary. For some people in some situations performance is improved by feedback, but not always and not for everyone. Two examples off the top of my head where feedback/criticism doesn’t help: (1) if the person already reached the limits of his ability and/or motivation; (2) if criticism is used as a tool to wield power.
Thanks!
The keyword you’re looking for is “deliberate practice”. A quick pubmed search should turn up relevant results.
Are there any existing libraries for generating Anki decks?
It feels like generating Anki decks from data sources with defined object-relational schemas should be easy and fruitful. Alternatively, generating them from something like an R data frame seems like it could be worth doing.
ETA: I have Googled this before asking, by the way, but there are so many Anki decks about programming languages that it seems resistant to the obvious search terms.
Anki can import .csv files easily. I created my Anki color perception deck via R, and the process was very straightforward without the need for any special library.
On the other hand, great care needs to be taken when auto-generating cards from existing data sources. Taking time to think about each card often makes sense. Bad cards cost a lot of review time, and automatically creating cards can frequently lead to a lot of bad cards.
Can you tell me something about your color perception deck? Are you trying to train yourself to be better at distinguishing (and naming?) colours for some reason?
Yes, I train color distinctions. Every card shows two colors plus a color name, and the user has to decide which of the two colors the name refers to. Over time the distance between the colors goes down: I pick colors that are closer and closer to each other.
I have written about this on LW in the past.
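For anyone curious what generating such a deck might look like, here is an illustrative sketch in Python (not the author's actual script, which was written in R; the field layout, HTML swatch markup, and distance values are all made up):

```python
import csv
import random

def to_hex(rgb):
    return "#{:02x}{:02x}{:02x}".format(*rgb)

def swatch(rgb):
    # A colored square as inline HTML, which Anki renders on the card.
    return '<span style="color:{}">&#9632;</span>'.format(to_hex(rgb))

random.seed(0)
rows = []
# Decreasing distance makes later cards harder to tell apart.
for distance in (64, 32, 16):
    base = tuple(random.randrange(0, 192) for _ in range(3))
    other = (min(255, base[0] + distance), base[1], base[2])
    pair = [base, other]
    random.shuffle(pair)
    # Front: two swatches plus the name of the base color; back: which swatch it was.
    front = "{} {} &mdash; which one is {}?".format(
        swatch(pair[0]), swatch(pair[1]), to_hex(base))
    back = "the first" if pair[0] == base else "the second"
    rows.append((front, back))

with open("color_cards.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```

The resulting .csv can be imported directly via Anki's import dialog, with HTML enabled for the fields.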
I was wondering why. It doesn’t seem all that useful, unless you are abnormally bad at color perception or you have a job or hobby that somehow needs good color perception (something in art or design?). I suppose it’s fun and interesting to see how well that kind of thing can be trained, and how it changes your experience, but I was wondering if there was more to it.
Here and here.
When I do this, I write a little one-off program that spits out a tab-separated values file, then import the file with the Anki desktop app.
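A one-off script of the kind described might look like this (the vocabulary pairs are invented for illustration):

```python
# Write front/back pairs as tab-separated values, then import the file
# via File > Import in the Anki desktop app.
pairs = [
    ("bonjour", "hello"),
    ("merci", "thank you"),
    ("chat", "cat"),
]

with open("deck.txt", "w", encoding="utf-8") as f:
    for front, back in pairs:
        f.write("{}\t{}\n".format(front, back))
```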
I had a similar problem a while back (given a bunch of one-sided cards, I wanted to programmatically generate their inverses). I couldn’t find anything either, and wound up scripting my browser(!?).
That’s done by adding note types:
Tools/Manage Note Types/Card/+
I’m having trouble posting an article, so any help would be appreciated. I’ve tried to make a discussion post, and whenever I click “submit,” it says, “submitting...” for a few seconds. However, the thread never appears in the discussion section. Also, when I try to close out of the submit article page, it says, “you’ve made changes to the article, but haven’t submitted it.” Does anyone have any idea what could be causing this? My only idea is that the target and class of hyperlinks aren’t set, or that the article’s too long; it’s 27 pages.
Can you save it to Drafts and then see it in the Drafts section (formatted as it would be if published)? It’s a good idea to do this in any case, to fix any issues with formatting before publishing. After that, “you’ve made changes to the article” at least won’t be true, so the issue could be isolated further. If that doesn’t work, start another article with “Hello World” content, make sure you succeed in saving it to Drafts and observing the result, then replace its content with your article.
Thanks. It works now. Turns out the error was with the hyperlinks in my table of contents.
Happy to see Elon Musk continuing to speak out about AI risk:
https://www.youtube.com/watch?v=Ze0_1vczikA
The first NGDP futures market is getting started based on the ideas of economist Scott Sumner. The idea is that the expected U.S. NGDP (nominal gross domestic product) is the single most important macroeconomic variable, and that having a futures (prediction) market will provide valuable information into this variable (Scott estimates that if it works, it will be worth hundreds of billions of dollars).
Unfortunately, due to US gambling laws (I think), the market will be based in New Zealand and U.S. citizens will not be allowed to participate.
To what extent do traditional finance markets provide an implicit prediction market for future macroeconomic states?
A futures contract is one where you agree to buy a specific quantity of an asset today for a specific price, but you don’t pay until a specified time in the future.
If you predict it will have future value $100, and it only costs $10 now, it’s worth buying, hence there will be more demand, driving the price of the futures contract up. On the other hand, if it costs $100 today, but you expect it will cost $10 in the future, then the futures contract won’t be worth as much, driving the price down.
We still expect the price to be about as good an approximation of the future value as you can get (as long as the volume is high) - if you have a better prediction, you can make money off it! So the price of the future will reflect the best aggregate prediction of the future value of the asset. This is essentially the efficient-market hypothesis, and is the inspiration for idea futures, a.k.a. prediction markets.
For NGDP futures, they would create contracts like this:
The prices of these contracts now would reflect the market’s certainty that the future NGDP would be within that range.
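For illustration, suppose the contracts are binary and each pays $1 if NGDP growth falls in its range (the ranges and prices below are invented, not the actual contract design). Ignoring risk premia and transaction costs, the prices then map directly onto implied probabilities:

```python
# Hypothetical binary contracts: each pays $1 if next year's NGDP growth
# lands in the stated range. Prices are made up for illustration.
prices = {
    "below 3%": 0.15,
    "3% to 4%": 0.30,
    "4% to 5%": 0.40,
    "above 5%": 0.15,
}

# The price of a $1 binary contract is (approximately) the market's
# implied probability of that outcome; normalize so they sum to 1.
total = sum(prices.values())
implied = {rng: p / total for rng, p in prices.items()}
print(implied["4% to 5%"])  # the market's most likely range, ~0.4
```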
Well, technically speaking the price of the future will reflect the capital-weighted opinions of the market participants. That is not necessarily the “best aggregate prediction”—it could be, but there are no guarantees.
A couple of points:
You don’t agree to “buy it today”, you “agree to buy” it today. Both exchange of payments and assets take place in the future
The price of a future isn’t determined by prediction any more than the asset is. The price of a future is given explicitly by the price of the underlying, suitably adjusted for cost of carry and cost of financing
First, that’s not true, technically speaking. The price of the future is whatever the market clears at. Arbitrage is a strong force that keeps the future and the underlying prices in a certain relationship, true, but only under certain (though common) conditions.
Second, here we are talking about NGDP futures and with them specifically there is no arbitrage against the underlying because the underlying is just an economic number that you cannot buy and warehouse. So in this particular case the price of the future is purely prediction-based.
It exists but it’s noisy. US Treasuries are more a bet on interest rates (i.e. monetary policy) than on actual growth; CDS is more about the possibility of a technical default caused by political brinkmanship than about the US genuinely running out of money. Based on a quick internet search, the S&P 500 is only very weakly correlated with GDP. Commodity prices are more about expectations for that commodity, foreign exchange rates are inseparable from the foreign country involved.
Formatting stories—any good evidence?
I’ve started toying around with setting up my story at its permanent website, and have put a test page of the first chapter at this page (NSFW due to image of tasteful female nudity). I find myself faced with all sorts of options—font? line width? dark on light or light on dark? inline style or CSS? spot colors? hyphens or em-dashes? straight or curly quotes? etc, etc? - and I don’t have a lot of evidence to base any answers on.
As a preliminary set of answers, I’ve drawn on Butterick’s Practical Typography, even though I don’t have any particular reasons to favour that set of advice over any other, other than that it’s a concentrated dose of a /lot/ of advice. I don’t know how to set up formatting for multiple viewing devices, I’ve never touched CSS, and my budget for fonts or professional advice is pretty much zero.
Does anyone reading this know where I can find evidence that any changes I could make to my preliminary formatting would do any better than what I already have?
You could just answer it experimentally with A/B tests measuring time-on-page or leaving a review or something like that.
I see you’re running off Apache on Dreamhost, so there’s no doubt plenty of libraries to help you there, but there’s other strategies: static sites can, with some effort, hook into Google Analytics to record the effect of an experiment, which is the approach I’ve used on gwern.net since I didn’t want to manage a host like Dreamhost.
If I knew when I started what I know now, I would have begun with a large multifactorial experiment of the tweaks I tested one by one as I thought of them. It sounds like you are aware of the options you want to test, so you have it easy in that regard. (With the Abalytics approach, testing all those options simultaneously would be a major pain in the ass and probably hamper page loads since all possibilities need to be specified in advance in the HTML source, but I suspect any Apache library for A/B testing would make it much more painless to run many concurrent interventions.)
It’ll probably take a few thousand visits before you have an idea of the larger effects, but that just means you need to leave the test running for a few months.
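As a sketch of the statistics involved once the data comes in (all counts invented), a simple two-proportion z-test comparing, say, how many visitors under each formatting variant stayed on the page:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Invented numbers: variant A vs variant B, counting visitors who
# stayed on the page for more than two minutes out of 1000 each.
z = two_proportion_z(120, 1000, 150, 1000)
print(round(z, 2))  # |z| > 1.96 is significant at the 5% level
```

This also gives a feel for why a few thousand visits are needed: with these rates, a 3-percentage-point difference only barely reaches significance at n = 1000 per arm.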
This seems like an obvious enough thought experiment that there is probably a literature on it, but I have not found any: how much vacation would it be ethical for a superhero to take? The first kneejerk reaction seems to be: none. Assuming even one life saved per hour, he’d be “killing” a handful of people just by going on a date (beyond, or assuming away, a bare psychological minimum for sanity, et cetera). The next kneejerk reaction is that the first one is nuts. Thoughts/references?
And despite involving superheroes, I am seriously interested.
As long as we’re talking psychologically human superheroes rather than, say, aliens with perfect ethics and unlimited willpower from the planet Krypton, this seems equivalent to the problem of maximizing worker productivity (adjusted if necessary for the type of work). There’s a substantial literature on that.
I think you have one extra assumption. I do not assume that said superhero wants to sacrifice his life to the greater good, but I do assume he is willing to chip in in a more reasonable way. To be as vague as possible, saving zero people and saving the maximum possible without a spare hour to eat cake on his birthday are both unacceptable, and I suspect the answer to how much he would want to work is closer to the middle than either extreme.
My question is how blameworthy it is to not be at the extreme of self-sacrifice.
Someone like you or me in every other way but who happens to save the life of one or two other people upon whom he happens to stumble per week seems almost certainly ethically superior to us. Assuming he wants to take advantage of his gift to a degree he would consider reasonable upon rational reflection and still have a life, how could he decide with more precision how much to work?
I suspect we can do better than just assuming 40h/wk.
The underlying point is that, from a consequentialist point of view, you shouldn’t care how self-sacrificing a superhero is but rather how effective they are in fighting crime and saving people—which is to say how good they are at their job. Reality doesn’t grade for effort. If Batman decides to work twenty-hour days because he feels guilty about anything less, and a week later he falls asleep at the wheel and the Batmobile drifts into the path of an oncoming cement truck, he may have been very praiseworthy by some deontological standard but in practical terms he didn’t do much for Gotham.
It’s still a little more complicated than “how many hours should they work?”, though. Superheroing’s an inherently reactive sort of enterprise—traditionally a superhero doesn’t just go out whenever they want to rough up gang members, but rather shows up when e.g. a guy in a koala mask with a ray gun is robbing a bank—so I expect skilled on-call professionals like trauma surgeons or datacenter admins might be a better model than, say, factory workers.
That seems like a false extreme to me, but I might have misspoken. Let’s say he decides: okay, I could work 85 hours a week and suffer no loss in productivity, but I am unwilling to work more than 40. He still benefits the world more than all but perhaps 0-100 people (I bet someone curing something might win, but still, few...). Is he ethically blameworthy or praiseworthy?
Among other fictional examples, the character of Panacea in Worm (and several wormfics) faces this issue.
Thanks, I’ll google that.
Warning 1: Worm is really, really, really long.
Warning 2: Although indeed a character in the story faces this issue, I don’t recall anything particularly surprising or insightful about it in the story. Which, for the avoidance of doubt, I don’t regard as a defect: the purpose of Worm is to tell a particular story the author wanted to tell, not to conduct a careful philosophical investigation into the civic responsibilities of superheroes.
(Worm does have a thing or two to say about the civic responsibilities of superheroes, I guess, but most of it isn’t said specifically through that subplot. After all, pretty much all its characters are superheroes/supervillains.)
Oh thanks for that. Will use.
This seems to have two mostly-orthogonal components to consider, both of which don’t seem like things we can actually discuss:
First, the possibility that not-working-constantly may make the superhero more productive, and have other benefits which ultimately cash out as productivity. This is an empirical question, so it makes no sense to discuss it for superheroes.
Second, the extent to which one is obligated to sacrifice one’s own non-inclusive wellbeing for others. Some people are roughly consequentialists, others are psychopaths, and there is no way to argue our way out of the disagreement—disagreements about values are settled by negotiation or by debellation, not by argument.
And this applies as much to superheroes as to the rest of us.
Thanks for the reply. I was trying to avoid the former and examine the latter. Take the word “psychopath” and divide it by a pretty big number and I think I’m with you, at least a little, until the “and.” Are there no arguments for it, or just none as helpful as social solutions, in your opinion?
I am reminded of this.
See, I KNEW there was a literature ;)
More seriously, John McCarthy.
[EDITED to add:] Note the link at the end saying “solution” which has McCarthy’s own proposal.
I don’t think the issue is much different than normal people donating money for betnets.
I couldn’t find betnets on Google, but I assume it’s something like prediction markets, and I fail to see any connection.
That said, yes, I am hoping to learn something beyond superhero ethics :)
Sorry, typo should be bednets.
Basically, while they had room for funding, the Against Malaria Foundation was saving a life for roughly $2,000 via distributing bednets in Africa (numbers from memory).
Most people earn less than $2,000 per hour, but some people do earn that much and essentially have the ability to rescue a life per hour worked, without superhero powers.
Ah, yes, that’s the kind of situation that would be the next step in the thought process.
Edit: looks like someone beat me to this.
Worm addresses this in a somewhat round about way: Cnanprn’f srryvatf bs vagrafr thvyg sbe rirel ubhe gung fur fcraqf qbvat guvatf bgure guna hfvat ure cbjref gb urny crbcyr jrer n znwbe pbagevohgbe gb ure zragny oernxqbja naq fhofrdhrag vzcevfbazrag va gur Oveq Pntr.
I’ve been reading studies on interventions to improve mood.
It seems worth taking seriously the possibility that we live in a world in which all single interventions have small to tiny effect sizes, and that, once we’ve removed factors known to have large negative impact, the mutable difference between people with mostly good mood and people with poorer mood comes down to a huge number of these small differences.
Some forms of therapy resemble this (examining a bunch of different thought patterns in CBT). Some studies claim to examine “lifestyle changes”, but they often do it in a really lackluster and low-compliance way, such as “we gave this group of depressed people a pamphlet with 50 things they should change about their lives, and compared them to the group we aggressively tracked and encouraged to exercise daily”.
Since we have good evidence for small positive effect sizes for a bunch of different things, I’d love to see good evidence on how those effect sizes combine. But I can’t find this research.
Any thoughts? Pointers?
The fact that the average effect size in a population of an intervention is small doesn’t mean that there aren’t individual members in that population that benefit a great deal.
Over time I have also come to think of mood as less of a one-dimensional thing. People often change the way they judge their happiness, so you don’t have a constant standard.
Getting compliance is really hard.
Did you run this search?
Yes, or closely related queries. I usually use google scholar, but I haven’t found it to be better or worse than pubmed’s results, unless I’m looking for something very specific.
Small effect sizes are easier to hallucinate into being real.
A question regarding polls: I have used the polls feature quite a bit now, and I get the feeling that many more people vote on the poll options than on the poll comment. Given that there is mostly an option “just show me”, which could be interpreted as “I don’t care about this poll but want to satisfy my curiosity”, we could estimate the number of people who like the poll. Shouldn’t these people also up-vote the poll comment as a whole? Is it just laziness to not up-vote, or is there some higher standard for LW comments than I think?
I think it’s the same phenomenon where a top-level comment can get a single upvote (or no upvotes) but still spark a pretty long comment thread. Seems a bit strange to me, as I feel that most non-troll comments that spark or contribute to a good discussion are worthy of an upvote, but I think the answer is that there is some higher standard for LW comments than (for example) reddit comments. Jokes and playful misinterpretation don’t seem to do well here, even when they’re funny.
As far as I can tell, the phenomenon is self-reinforcing; the sparsity of upvotes in general on LW probably discourages people from upvoting things unless they meet a higher threshold. It seems to me that people upvote based not just on whether they agree or see value in the discussion, but on whether they think it matches with the ethos of the community. The end result is that the top-voted comments are almost always either “Very Less Wrong” things to say or very well-thought-out and well-said.
That sounds like a good result. Reddit, by contrast, has a problem where the top-voted things are always contentless fluff, because contentless fluff takes less time to consume and most people downvote much less than they upvote.
I don’t think there’s any valid inference from “thinks the poll worth voting on” to “thinks the comment with a poll in worth upvoting”.
Suppose I see a poll, think it’s a reasonable question to ask but no more, and am feeling helpful. Then I’ll likely fill it in. But why does that mean I should be upvoting it? I mean, if I see a comment, think it’s a reasonable thing to say but no more, should I be upvoting that? That would seem to lead to the conclusion that most people should be upvoting most comments, which seems waaaay off to me.
I upvote things that seem to me especially interesting or insightful or useful or witty or whatever. A poll can easily be worth filling in (if only because whoever posted it presumably is interested in the answers) without meeting that condition.
It’s possible to be interested in / get value from a poll without approving of the fact that it’s being conducted.
There is also the hypothesis that they don’t like the poll, but still think that one result is worse than another.
Has the activity on LessWrong changed since Spring (see this post)?
[pollid:779]
Where ‘before’ refers to the time in spring when this was on topic. And the middle option refers to the state at that time.
Not exactly on topic, but if you are measuring something objective (like quantity of activity), and not something subjective (like quality of activity), you are usually better off using an objective test (like number of posts) instead of a subjective one (like a self-report Likert scale).
Maybe someone can do the database query again and post the result?
But the pure number of posts (which I’d bet has increased) isn’t the only criterion, is it?
Do not read the following if you haven’t voted yet!
SERIOUSLY!
It’s interesting to notice the almost perfect bell curve, centered but slightly nudged toward more activity. My hypothesis: the majority of people noticed no change (me included), but were swayed by your opening comment.
Possibly not because there wasn’t any change, but because our brains aren’t so good at measuring this kind of data.
OK, let’s have a look at what happens after I changed the text.
For the record, this is the original wording:
These are the numbers of the poll options right now: 0, 0, 1, 7, 13, 4, 1.
Would get more reliable results if you tried not to prime respondents.
Ah yes. Well observed. I should have rot13’d it, or posted it separately.
Random thought.
So: minor changes to the design of a thing sometimes result in a better version of that thing. You build the better version, make minor design changes to it, and repeat. Eventually you often reach a version that no longer improves under any minor change.
“Global maximum!” declares a PhD biologist at a good university.
How common is this defect?
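The process described above is greedy hill climbing: accept a change only if it improves things. A minimal toy sketch (the landscape, step size, and numbers are invented purely for illustration) showing how it gets stuck on a lesser peak:

```python
import random

def f(x):
    # Toy fitness landscape: a local peak at x=2 (height 4)
    # and the global peak at x=8 (height 9).
    return -(x - 2) ** 2 + 4 if x < 5 else -(x - 8) ** 2 + 9

def hill_climb(x, step=0.1, iters=1000):
    # "Minor design changes": try a small tweak, keep it only if it's better.
    for _ in range(iters):
        candidate = x + random.choice([-step, step])
        if f(candidate) > f(x):
            x = candidate
    return x

random.seed(0)
x = hill_climb(0.0)  # starts in the basin of the local peak
# x ends up near 2 (the local optimum), never near 8 (the global optimum),
# because reaching 8 would require temporarily accepting worse designs.
```

Declaring the result at x≈2 a “global maximum” is exactly the error the comment describes.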
Could you give an example of a PhD biologist making that error?
By the way, this video showing the laryngeal nerve of the giraffe is a fascinating example of the role of local optima in evolution.
I think I’d expect PhD biologists at good universities (or, at least, those working with evolutionary systems) to be aware that hill-climbing processes often get stuck in local optima.
I would assume the same, but unfortunately… that’s a real-life thing I heard one say in a lecture. Well, not literally “Global maximum!”, but something with essentially identical meaning, delivered with no acknowledgment that it was a big error.
People may be aware of a lesson learned from math, but not propagate it through all their belief systems.
Even without propagating math lessons, it’s generally taught that evolution doesn’t find optimal solutions, just solutions that are good enough.
It’s also worth noting that if you make an infinite number of minor design changes, you can find the global maximum. If I remember right, the Metropolis–Hastings algorithm does get you the global maximum, provided you tune the parameters right and wait long enough. It might take longer than trying every single possible value, but if you just wait long enough you will get to your maximum.
Biologists also are often happy with solutions that aren’t 100% perfect. The standard for truth is often statistical significance.
Yes, I agree with everything you say (- well, I don’t know the M-H algorithm, but I’ll take that on faith).
I mentioned this explicitly because it’s mind-blowingly bad to see someone with this background say this, when he says so many other smart things that clearly imply he understands the general principle that local optima are not global optima.
What he didn’t say is, “This enzyme works really well, and we can be pretty confident evolution has tried out most of the easy modifications on the current structure. It’s not perfect (admittedly), but it’s locally pretty good.”
It was more along the lines of, “We can be confident this is the best possible version of this enzyme.”
Anyway, a single human biologist isn’t the point. I’m much more interested in questions like: how often can I invoke local optima in an argument and have people know what I mean, rather than think I’m crazy for suggesting there are other hills that might be stood upon?
That’s really bad. If you take any random enzyme shared by humans and chimpanzees, the two versions are going to differ slightly, and there’s no reason to strongly assume that the chimpanzee’s version is optimal for chimpanzees while the human version is optimal for humans.
There’s no reason to think a random enzyme without very strong selection pressure is even at a local maximum.
Criticize the following idea at will: let’s see if there’s a nugget of truth in it or if it busts under its own weight.
Moral progress can be modeled as the strengthening of a society’s memetic immune system.
Two extreme cases illustrate this. First, a society where power is held by an elite with a very strict, very uniform moral code: any stray meme will likely conflict with the elite’s memeplex and so threaten the society itself. Its immune system is very weak, and the society is very oppressive. Second, a society where power is distributed, or where the elite derives its authority from something other than a tight set of ideas: here, few memes can undermine the society’s stability, its immune system is much stronger, and the society is more tolerant and progressive.
I think you’re only looking at one failure mode—the excessively brittle society. Living organisms have boundaries that let some things in and refuse others.
A society needs to support some ideas, but not all ideas. For example, your society with the strong immune system needs some way to stabilize itself against going authoritarian.
Anyone have general thoughts on distributed computing/grid computing projects? Any in the following list interest you? Any in the following list appear to be a waste of resources?
http://en.wikipedia.org/wiki/List_of_distributed_computing_projects
On this topic, you may be interested in Gwern’s brutal takedown of Folding@home.
Should The Great Internet Mersenne Prime Search be considered a bad idea by that measure?
Are there any benefits to knowing prime numbers so large they can’t even be used in cryptography?
No?
Then I guess it’s a bad idea.
You never know what’s going to shake out from pure math. Still, hunting for extremely large primes might not be efficient, even by the standards of pure math.
This is only tangentially my field, but I’d expect the numbers themselves to be much less potentially useful than the algorithms needed to find them. Since GIMPS is just throwing FLOPs at the problem through established math, it doesn’t look like an especially good approach to me.
What could be learned by getting to know more of those numbers? What’s the benefit of knowing them now over waiting, e.g., 100 years when computing power is cheaper and better algorithms might exist?
And what else could be done with the computing power?
Although you can indeed never know the outcome of research, I think we can estimate whether particular research is worthwhile.
Is it a bad thing to invent or play resource-intensive computer games?
Depends on your relative valuation of entertainment and the other things that could be done with the resources. (And, in the case of creating rather than consuming, what you expect the people playing your game would be doing if you didn’t make it.)
If playing a computer game causes (aside from the entertainment value which is in fact its main point) a little harm and no good to speak of, that’s sad but many people will deem the entertainment worth that cost.
But in the case of Folding@home there isn’t much entertainment involved; rather, people run the Folding@home client because they think they’re thereby doing something useful. So if in fact they’re doing a little harm and no good to speak of, that’s a problem.
I’m Giving Away Money!
I recently posted about a writing wager I have developed for myself and a group of friends. The wager is simple: everyone chooses a charity and writes a novel. Finish the novel: donate to your charity. Give up or go a month without progress: donate to another charity (we have yet to decide whether it should be one charity or all three other charities).
I plan to choose an effective charity; however, I am still in the decision process. So, here’s your chance to influence who gets money: tell me who I should donate to.
I’d like to hear any suggestions for the most effective charity you know. Support your choice as much or as little as you wish; I’ll still be making the final decision. Interpret the word “effective” as you wish. The point is: tell me who you think I should give money to.
I’m going to be the person who does the obvious because he saw the comment early, and link you to GiveWell with minimal commentary :P
Haha, thank ya. GiveWell’s definitely one of my main resources. Still, got to get it listed at some point.
Why is Quixey associated with rationalism? From its website it doesn’t seem different from any of many other startups.
Several LW members were involved in its founding.
Scott’s map was more about social connections than ideological ones.
Question: Say someone dramatically increased the rate at which humans can learn mathematics (over, say, the Internet). Assume also that an intelligence explosion is likely to occur in the next century, it will be a singleton, and the way it is constructed determines the future for earth-originating life. Does the increase in math learning ability make that intelligence explosion more or less likely to be friendly?
Responses I’ve heard to questions of the form, “Does solving problem X help or hinder safe AGI vs. unsafe AGI?”:
Improvements in rationality help safe AI, because sufficiently rational humans usually become unlikely to create unsafe AI. Most other improvements are a wash, because they help safe AI and unsafe AI equally.
Almost any improvement in productivity will slightly help safe AI, because more productive humans have more unconstrained time (i.e. time not spent paying the bills). Humans tend to do more good things and move towards rationality in their less constrained time, so increasing that time is a net win.
Not sure how I feel about these responses. But neither of them directly answers the question about math.
One answer would be that improving higher math education would be a net win because safe AI will definitely require hard math, whereas improving all math education would be a net loss because, like Moore’s Law, it would increase cognitive resources across the board, pushing the timeline further up. Note that if we ignore network effects (researchers talking to researchers, convincing them to not work on unsafe AI), the question becomes: Is the effect of improving X more like shifting the timeline forward by Y years, as in increasing computing power, or is it more like stretching the timeline by some linear factor, as in increasing human productivity? Thoughts?
I would think that FAI requires mathematics a lot more than does UFAI, which can be created through trial and error.
It seems like FAI requires deeper math than UFAI, for some appropriate value of deeper. But this “trial and error” still requires some math. You could imagine a fictitious Earth where suddenly it becomes easy to learn enough to start messing around with neural nets and decision trees and metaheuristics (or something). In that Earth, AI risk is increased by improving math education in that particular weird way.
I am trying to ask whether, in our Earth, there is a clear direction AI risk goes given more plausible kinds of improvements in math education. Are you basically saying that the math for UFAI is easy enough already that not too many new cognitive resources, freed up by those improvements, would go towards UFAI? That doesn’t seem true...
I’d endorse that. But IME mathematical advances aren’t usually new ways to do the same things, they’re more often discoveries that it’s possible to do new things.
Umm, would anybody here have invites for torrent trackers for textbooks (e.g. BitMe, The Geeks, Bibliotik)? PM me.
Cryonics and transhumanism are laughably irrational. Guess that’s what happens with a cult based on a Harry Potter fanfiction by a dropout
You do know that both sets of ideas predate HPMOR, right?