Well, exactly. That’s what I meant when I said that it was very confusing to me, as a young grad student in an outside field, to have a course that assigned Peirce and Lacan side by side with a straight face, evidently taking them equally seriously.
There may or may not be some legitimate field of inquiry going under the name of semiotics. In grad school a number of years ago, however, I took a (graduate-level) Introduction to Semiotics that was a pretty remarkable hodgepodge of bullshit, along with just enough non-bullshit to make a complete outsider like myself (not at all fluent in the obscurantist discourse of “cultural studies,” “critical theory,” and the like) feel like maybe the problem was me and not the material. (Later reflection gave me a lot more confidence that the problem was, in fact, the material.)
Among the readings were Freud, Lacan, Derrida, J. L. Austin, Marcel Mauss, Saussure, Lévi-Strauss, and Peirce. (There was other material too that I don’t recall right now.) Interestingly, of those I would say that only Lacan and Derrida were outright charlatans (which is not to endorse any of the others in particular, just to say that they were all doing something at least potentially more valuable than pulling stuff out of their asses). But the writings of the non-charlatans were presented so confusingly and tendentiously that it never remotely cohered into any sense that semiotics was a field with any integrity of its own or anything useful to contribute. That is to say, none of those thinkers would have described himself as a “semiotician,” so it was very much a post hoc attempt to put a framework around a bunch of very diverse writing, in ways that were often pretty foreign to its original intent.
This is all n=1, of course, but on that basis I tend to think that semiotics as a standalone field is probably more or less as you say it is.
I like how you call it “a set of heuristic practices that work well in a lot of situations, justified by complete bullshit.” Because my first instinct when writing this comment was to include a remark to the effect that even if theoretical semiotics is mostly or entirely crap, there is some valuable work that calls itself some variety of applied semiotics. For example, there are some people who do musical semiotics, and—since it’s not at all obvious what music signifies and how it does so, either in the general or specific cases—I have found some of that work enlightening. But on further thought, you’re absolutely right in your characterization of it. Musical semiotics can be full of insight, but its adoption of the theoretical apparatus of (e.g.) Roland Barthes or Umberto Eco is no part of its value—that, instead, is an attempt to bring “theoretical” rigor to a fundamentally nonrigorous enterprise. So much the worse for any “application” of semiotics if it relies on the cesspool of semiotic theory to back up its assertions.
Phil, you’ve probably seen this already, but a bunch of proposals for alternative notation systems are collected here. Some of them are basically exactly what you would prefer to be reading. It would be really cool if someone wrote a Lilypond package that could output in some of these systems. (Maybe someone has, I don’t know.)
Very true. Staff notation essentially says “Here are the pitches and rhythms, now it’s your job to figure out how to make them happen on your instrument.” As you point out, a very real alternative to staff notation exists in tablature, which (in general) is any notation system that instead says “Here’s what you need to do physically on your instrument. Follow these instructions and the notes will automatically be the right ones—you don’t need to worry about what they ‘are’.”
Tablatures are surprisingly old, apparently going back 700 years or so in various forms. Of course, their drawbacks as general musical notation are clear enough. Namely, if you want to understand what’s going on in the music or play music on a different instrument, tablature is really only a kind of lookup table for actual notes, and often a very cumbersome one.
Agreed on all this.
I didn’t have anything really radical in mind. I think it’s pretty clear that there’s a long-term trend toward high-level music-making relying on notation to a decreasing extent. I have a number of friends who are professional composers, and some of them use notation to write for instruments, while others use electronics and largely don’t use notation at all. (The latter group, who compose for video games, movies, etc., are the ones who actually make money at it, so I’m by no means just talking about avant-garde electronic music.) A lot of commercial composers who would have been using paper and pencil 30 years ago are using Logic or Digital Performer today.
The other factor, of course, is that notated genres of music (“classical” music and its descendants, and some others) are increasingly marginal in Western culture. This trend is often way overblown, but is clearly visible at the timescale of decades or longer.
What I certainly don’t mean to suggest is that individuals who use notation in our musical lives, like you or me, will stop using it. It’ll be a cohort replacement effect, and no doubt a very gradual one. Nor do I think that music notation will entirely go away at some foreseeable point in the future. But reading and using it will slowly become a more specialized skill. My impression, though I don’t have a reference for this and could be completely wrong, is that the ability of American adults (not pro musicians) to read music notation with some fluency has hugely declined over the last half-century.
All this is very much the framing argument of Taruskin’s Oxford History of Western Music, with its much-criticized focus on what he calls the “literate [his needlessly inflammatory term for ‘notated’] traditions” of music. Within that frame, he casts the present day as essentially an “end-of-history” moment.
Correct me where I’m wrong here! I’m not a specialist in these issues.
Let me add that, like you, I absolutely love music notation, borderline fetishize it, and say all this with more than a trace of a Luddite’s sadness.
Good post and I’ll chime in if you don’t mind. I teach this stuff for a living and even highly skilled musicians struggle with it in various ways (myself emphatically included).
The main thing I want to say is that there’s a reason why essentially all music education consists of many years of rote learning. Obviously, that rote learning works better if it’s guided in appropriate directions, but I really don’t know of any alternative to what you describe when you say “an orders-of-magnitude-less-efficient mechanism for memorizing note-to-note mappings for every note and every pair of keys.” I hate to say it, but … yep. [EDIT: eh, let me qualify that a bit. See point (A) below.]
Sight-transposition (i.e. sight-reading plus on-the-fly transposition) is a ninja-level skill. Some instrumentalists (usually those who play non-concert-pitch instruments) can do it reasonably well for at least some transposition intervals, and a few people like professional vocal accompanists and church organists need to be able to do it fluently as an expected part of their job. But outside of those folks, even professional musicians rarely have that facility.
Here’s something that directly supports your point at (D). As you know, pitch intervals in tonal theory are given names that break arithmetic—a second plus a fourth is a fifth, even though 2+4≠5. A certain well-known music theorist often expresses the view that this blatantly illogical convention is almost entirely responsible for the popular perception that music theory is a really, really difficult subject. I think this exaggerates things, but he’s got a point. However, most musicians know those interval names really well and have never thought much about how stupid they are, and so then high-level music theory becomes opaque to skilled musicians because we start by renaming intervals correctly (i.e. a second is diatonic interval 1, and you can add them like normal numbers).
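The renaming trick can be made concrete with a few lines of code. This is just an illustrative sketch (the function name and the zero-based convention are mine, not from any particular theory text): once you call a second “interval 1,” a third “interval 2,” and so on, interval addition is ordinary addition again.

```python
# Zero-based diatonic intervals: a unison is 0, a second is 1, a third
# is 2, etc. Under this renaming, intervals add like normal numbers.
# The traditional one-based names are why "second + fourth = fifth"
# seems to break arithmetic (2 + 4 != 5, but 1 + 3 == 4).

def traditional_name(diatonic_interval):
    """Map a zero-based diatonic interval back to its one-based name."""
    names = ["unison", "second", "third", "fourth",
             "fifth", "sixth", "seventh", "octave"]
    return names[diatonic_interval]

second, fourth = 1, 3
# A second plus a fourth really is a fifth, and the arithmetic works.
print(traditional_name(second + fourth))  # fifth
```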
In the case of the frustrating conventions of staff notation, there are historical reasons going back a millennium why we write pitches like that. Various reforms have been proposed, but path-dependency basically makes it impossible that any of them would ever be adopted. Far more likely (and well underway for decades now) is that musicians will stop using notation altogether.
Just to briefly answer your other questions with my personal views:
(A) Personally yes, I have all the note-to-note mappings memorized. I do this completely via thinking in scale degrees. I can name any scale degree in any key, so questions like the one you mentioned just revolve around thinking “B-flat is scale-degree 4 in F major. What’s scale-degree 4 in C or A-flat?”
(B) Yes, I do think this is plausible, and underappreciated in the specific case of music, since most musicians don’t think much about the ways in which notation isn’t an optimized system.
(C) Maybe this is too glib, but … social interaction? “Overthinking it” isn’t a path to doing well in social settings. For that matter, natural language might be another. In many respects it’s best learned by rote (along with some theory—just like music) but I’ve certainly had classmates in language courses who get too hung up on the illogic of grammar to progress well in basic skills like speaking and listening comprehension.
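The scale-degree method in (A) is mechanical enough to sketch in code. This is a toy illustration under my own assumptions (the `MAJOR_SCALES` table and `transpose` function are hypothetical names, and only a few keys are filled in): find the note’s degree in the source key, then read off the same degree in the target key.

```python
# A tiny sketch of transposition via scale degrees. Each major scale is
# listed from its tonic, so a note's list index is its (zero-based)
# scale degree: index 3 corresponds to the "scale-degree 4" in (A).

MAJOR_SCALES = {
    "C":  ["C", "D", "E", "F", "G", "A", "B"],
    "F":  ["F", "G", "A", "Bb", "C", "D", "E"],
    "Ab": ["Ab", "Bb", "C", "Db", "Eb", "F", "G"],
}

def transpose(note, from_key, to_key):
    """Transpose a note by matching scale degrees between two major keys."""
    degree = MAJOR_SCALES[from_key].index(note)
    return MAJOR_SCALES[to_key][degree]

# Bb is scale-degree 4 in F major; degree 4 in C major is F,
# and in Ab major it's Db.
print(transpose("Bb", "F", "C"))   # F
print(transpose("Bb", "F", "Ab"))  # Db
```

A real version would need all fifteen key signatures and a way to handle chromatic notes, but the lookup-by-degree idea is the whole trick.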
Yes, it should be clarified. The main ambiguity that I was reacting to is that “art” can mean specifically visual arts or it can mean “the arts,” extending to performing and literary arts. As it is, I’m not sure if my profession (scholarship concerning music) is “art” or “other.”
In fact (now addressing Yvain again), why is this category called “Profession” instead of “field”? It creates some odd overlap with the previous category of “Work status” which produces a little bit of confusion per my original suggestion and fubarobfusco’s reply.
On “Profession,” the field label “Art” is vague. Better would be “Arts and humanities.”
I used to hear something similar in debates over gay marriage:
Gay person: “I only want to have the same right as a straight person: the right to marry the person I love.”
Gay marriage opponent: “No no, you already have the same right as a straight person: the right to marry a person of the opposite sex. If you also want the right to marry a person of the same sex, you’re asking for extra rights, special privileges just because you’re gay. And that simply wouldn’t be fair.”
Edit: bramflakes beat me to it.
Right. But, when exposed to it, some are drawn in and some run as fast as possible in the opposite direction. The point of the example was that there’s a surprisingly large amount of individual variation in what kinds of fundamental sounds and timbres people find most pleasing, and (I cautiously suggest) that appears to be the most innate and least malleable or learnable aspect of a person’s response to various kinds of music.
Just a couple of thoughts about this. First, as far as anyone can tell music enjoyment is a remarkably multifaceted phenomenon (and “music” itself is a term that describes a pretty giant range of human behaviors). There’s no single reason, or even manageably short list of reasons, why people like it. It seems to be wrapped up in many different physical, neurological, cognitive, emotional, social, and cultural systems, any of which (in any combinations) could be responsible for a certain person’s reaction to a certain kind of music. Some of the aspects of that seem to be relatively innate, like finding certain sonic timbres inherently pleasurable, while others are highly learned, like the kind of pleasurable “understanding” that comes from knowing how a classical sonata movement is ordinarily structured.
In your case, I’d guess that you have an atypically low physiological/neurological enjoyment of things like instrumental timbres, which makes the more cognitively demanding aspects of music-listening no more than a chore. For comparison, this is why we don’t generally listen to spoken words (e.g., audiobooks) as background listening: there’s nothing to be gained from it outside the semantic content, which is distracting unless you can tune it out, in which case why bother.
(Merely finding music distracting is not at all rare. In fact, the various professional musicians and music scholars I know listen to less music than most other people do, because our training makes it hard for us to listen as other than a “foreground” mental activity. I myself almost never listen to background music. Unlike you, though, I do like music a lot.)
We seem to have a tendency, when discussing music as when discussing other things, to assume that other people are more like us than we have any good reason to think they are. For example, I find the timbres and general sound world of noise music to be extremely unpleasant. So when I imagine someone who likes noise music a lot, my first impulse is to think they must in some sense “enjoy unpleasant things” (an obvious category error), or at least that they must find something in noise music that’s rewarding enough to get past how clearly unpleasant the sounds are. And yet when I actually talk to a fan of noise music, they often tell me they find the timbres and sounds of noise music (exactly the aspects of it I can’t even imagine liking) to be very pleasant or arousing in some way. The enjoyment of these basic aspects of a kind of music (what kinds of sounds it’s made up of) seems to be sufficiently physiologically/neurologically determined for a lot of people that it is almost impossible to imagine liking a kind of music you don’t “naturally” like.
In other words, and I do not mean this even slightly pejoratively, I would expect it to be very difficult for you to imagine why other people find, say, the sound of an orchestra playing a single major triad (NB, a purely sonic event with no syntactic or semantic content) pleasant. Much as it is for me to imagine finding noise music pleasant—it’s just not what my brain is built to enjoy.
Relatedly, the history of the questions “why do people like music?” and “what kind of music is best?” features some truly aggravating episodes that seem to stem from the idea that music is (or should be) a single kind of thing to all people, and that we just have to figure out what. (To be clear, I’m in no way suggesting that you’re taking that point of view.) The idea that music is just a really, really complicated phenomenon with which everyone interacts a bit differently—and the corresponding aesthetic pluralism that follows from that fact—has been amazingly slow to spread, no less so in professional music circles than elsewhere.
This is strictly pop-science writing, but there was an interesting piece in the NYT Magazine a couple of years ago about ketosis as a treatment for pediatric epilepsy, where apparently it’s extremely effective at controlling seizures in a significant fraction of patients.
I don’t think I understand at all what these descriptions of confidence levels are supposed to mean. Do they refer to your confidence in specific pieces of information about the people in the descriptions? Information you heard from those people? What scenario does the business about email addresses envision?
EDIT: Apologies, I now see the parenthetical “(being applied to identity verification, where possible),” which I managed to completely overlook on a first reading. Please ignore the above criticism, but you still might want to make the deciban descriptions more explicit.
My pleasure, glad it seems useful.
Sounds like you have some good, concrete ideas about how to proceed. Contacting professors whose work interests you, to ask about graduate study in their departments and/or labs, is certainly a necessary step.
Throughout academia, we have a rule of thumb: do not ever, ever, spend any of your own money or go into debt for a PhD. That means that any place at which you should give the slightest consideration to doing graduate work should offer you a full waiver of tuition, plus a modest income (“stipend”) and health insurance, for the duration of a reasonable period of study. The rationale for this rule of thumb is twofold: First, the expected financial returns to a PhD simply aren’t such that you can afford to risk having tens of thousands of dollars (or more) of debt to repay. Second, a university’s willingness to spend their money to fully fund you serves as a useful indicator that they think you have real potential for success.
When you correspond with scientists with whom you might want to study, they should be able to tell you roughly how funding works in their departments. It’s not the same at every university or for every student. Possible sources for funding are basically: (1) You working as a researcher in someone’s lab, supported by the university and/or by grants won by the lab’s PI; (2) you working as a teacher or teaching assistant; (3) fellowship support provided by the university (i.e. they just give you money); (4) outside grants or fellowships you win yourself. The normal case for scientists is that your funding mostly comes from (1), but among scientists of my acquaintance there has been a healthy mixture of all four, and nearly all graduate students in science will at some point get funding from more than one of those sources. However, what they should be able to tell you before you even apply is how many years of funding are guaranteed by the university, whether funding is usually available beyond the guaranteed years, and what the typical funding package consists of (as I said earlier, it should at a minimum contain a full tuition waiver, health insurance, and a modest stipend for living expenses suitable to the area you’d be living in).
That’s pretty much all I can tell you about the funding of graduate study in the sciences, since my entire academic life has been spent on the arts and humanities side, which handles graduate funding somewhat differently. The people you should be leaning on for advice are professors at your own undergraduate institution—particularly younger ones, since they will have gone through this more recently—and other knowledgeable scientists. They should be able to separate your academic and scientific potential from your lack of practical know-how and help guide you through the process of application, from identifying places to apply all the way to deciding which of your admission/funding offers to accept, if you get that far. They will have a lot more to tell you than I possibly can about what questions you should be asking of potential grad schools at all stages of the process.
A few other notes:
If you’re noticing conflicting information about how graduate funding works, it’s probably just because different departments handle it differently. When in doubt, refer to the rule of thumb above. It’s ok for departments to achieve full funding of graduate students in different ways, but not ok for them to fund some students but not others, or to admit you without making it clear how funding will work.
You could also be getting conflicting information from people with experience in different branches of science. Psychology, molecular bio, evolutionary bio, experimental physics—to pick a few—all have their own characteristic ways of approaching graduate study, collaboration, funding, etc. So it’s best to get advice from people as near as possible to your own interests.
Some science departments admit graduate students to the overall program and then let them later choose which lab to affiliate with. Others admit you with the up-front understanding that you will be working in a particular lab. Find out how it works at the places you apply to.
When weighing offers of graduate admission, try to get some data on outcomes for students in the program, such as job placement, time to degree, and success at winning grants (especially if grants are relied upon for graduate funding). Also, talk to current students in the program, who can tell you whether the program does well by its students, or alternatively makes life tough for them, e.g. by screwing them out of funding.
A really serious round of graduate applications does cost some money. In your comments you often seem concerned about that. Unfortunately, with a total lack of support from your parents, you’ll probably need to have a few hundred bucks in reserve for costs associated with applying, and another few hundred bucks for moving to the area where your new school is located. If you aren’t prepared to live off-campus in an apartment, which carries logistical headaches that you seem quite daunted by, all large research universities have on-campus graduate dorms, so you really would not need to do anything except drive there with your personal belongings packed into a car. Anyway, save up a little money.
A lot of these concerns are a ways down the road for you, though. You’ll probably find that getting funding is easier than you might think at graduate programs you really want to get into. The best thing you can do as an undergrad is make yourself an un-ignorable candidate for graduate admission. Study like crazy, get high test scores (super important, don’t let anyone tell you otherwise—this is true even in the humanities), find some ways to take initiative, and if possible form some good relationships with faculty at your college.
Good luck! Do try to get a mentor at your college; a mentor there is a much more reliable source of personalized information than pseudonymous musicologists you met on the internet. There are also books and online forums for people who want to do graduate study in the sciences, although I can’t personally recommend any by name.
Thanks for this post. Whatever problems the JTB definition of knowledge may have—the most obvious one of those to LWers probably being the treatment of “knowledge” as a binary condition—the Gettier problem has always struck me as being a truly ridiculous critique, for just the reasons you put forward here.
Scott Lemieux once called this the “my-utopia-versus-your-grubby-reality asymmetry,” a delightful turn of phrase which has stuck with me since I read it.
Although Lemieux was talking about something subtly different from, or possibly a subset of, what you’re talking about: the practice of describing the benefits of your own preferences as if you could completely and down to the smallest detail redesign the relevant system from scratch, while insisting on subjecting your opponent’s preferences to a rigorous “how do we get there from here” analysis with all the compromises, imperfections, and unforeseeable obstacles the real world always entails.
“That” if you’re a grammar Nazi; either one if you’re a professional linguist or mere native speaker of English. :)
A big +1 to this and it echoes in many respects my advice here to a similar question. What you hit upon here that I did not do in that comment is the importance of understanding the etiology of one’s new belief.