Just donated $500 and pledged $6500 more in matching funds (10% of my salary).
Academian
Memory formation and memory retrieval are very different tasks, so one should be specific when making claims like “Caffeine helps long-term memory.” For example, if caffeine only hinders long-term memory formation, but not retrieval, then this would suggest using it during an exam, but not while studying. If vice versa, then vice versa.
Unfortunately for our purposes, the authors of your first article have blurred this distinction in their abstract, no doubt because it was not the subject of their study: their method was to add caffeine to rats’ water supplies, without controlling the timing of the doses in relation to the events of formation and retrieval.
I was happy to find your last article addresses precisely this question:
Groups of 12 adult male Wistar rats receiving caffeine (0.3-30 mg/kg, ip, in 0.1 ml/100 g body weight) administered 30 min before training, immediately after training, or 30 min before the test session were tested … Post-training administration of caffeine improved memory retention at the doses of 0.3-10 mg/kg … but not at the dose of 30 mg/kg. Pre-test caffeine administration also caused a small increase in memory retrieval …. In contrast, pre-training caffeine administration did not alter the performance of the animals either in the training or in the test session. These data provide evidence that caffeine improves memory retention but not memory acquisition, explaining some discrepancies among reports in the literature.
Nice article, IMO. Its conclusion might suggest drinking caffeine right after study sessions (or in breaks between them, while ruminating on the ideas) is the best strategy. On the other hand, perhaps in the long term, the non-specific effects of the first study would dominate.
Personally, I’m definitely unconvinced by these data as to how I should be using caffeine, but as you can see you’ve got me very curious!
Luke, there’s a serious and common misconception in your explanation of the independence axiom (serious enough that I don’t consider this nitpicking). If you could, please fix it as soon as you can to prevent the spread of this unfortunate misunderstanding. I wrote a post to try to dispel misconceptions such as this one, because utility theory is used in a lot of toy decision theory problems, versions of which might actually be encountered by utility-seeking AIs:
For example, the independence axiom of expected utility theory says that if you prefer one apple to one orange, you must also prefer one apple plus a tiny bit more apple over one orange plus that same tiny bit of apple. If a subject prefers A to B, then the subject can’t also prefer B+C to A+C. But Allais (1953) found that subjects do violate this basic assumption under some conditions.
This is not what the independence axiom says. What it says is that, for example, if you prefer an apple over an orange, then you must prefer the gamble [72% chance you get an apple, otherwise you get a cat] over the gamble [72% chance you get an orange, otherwise you get a cat]. The axiom is about mixing probabilistic outcomes, not mixing amounts of various commodities.
This distinction is important, because for example, if you’d rather have 1 apple than 1 orange, but you’d rather have 1 orange and 0.2 apples than 1.2 apples, you’re not violating the independence axiom, nor instantiating the Allais paradox. You simply don’t like having too much apple, which is fine as far as EU is concerned: apple can have negative marginal utility after a certain point. Such explanations are an essential feature, not a shortcoming, of utility theory.
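For reference, here is the standard von Neumann–Morgenstern statement of the axiom (a sketch in my own notation, with A, B, C lotteries and p a mixing probability):

```latex
% Independence axiom (vNM): for all lotteries A, B, C
% and every mixing probability p in (0, 1]:
A \succeq B \iff p\,A + (1-p)\,C \;\succeq\; p\,B + (1-p)\,C
```

Mixing both sides with the same third lottery C, at the same probability, cannot reverse a preference; nothing whatsoever is said about adding quantities of goods together.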
The Allais paradox is a legitimate failure of utility theory in describing human behavior, though, so you’re of course right that expected utility theory has limited use as a predictive tool for humans. I doubt any powerful AGI would commit the Allais paradox, though.
Otherwise, thanks for the incredibly informative post!
Since a couple of people want before/after information, here’s some: Before minicamp: I was able to work around 5 hours per day on research.
After: 10 hours/day, sustainable for months.
After: Less afraid to try new professional directions than ever before, by a margin much wider than this trait has ever changed for me.
After: Secured $24,000 of grant money from DARPA to work on applications of algebraic geometry to machine learning, my first time trying out applied math. Loving it.
After: Difference in productivity was so noticeable that I’m volunteering my time as an instructor at the next few camps (I taught some at the last camp, too) because I expect it to have further positive, lasting effects on my professional / personal life.
After: Got a new dissertation advisor; many people around me seemed to think that was impossible or risky, but it has gone very well and been very refreshing, given my interests. (Before the camp I was more afraid to make what felt like a “sudden” change, which was actually something I had been thinking about for a year and was not sudden at all.)
Note: My experience at the camp may not have been typical, because I did teach a few sessions at the beginning… but those were not the ideas that stuck with me most and motivated me professionally; they were Anna’s and Luke’s sessions.
Since I’m volunteering to teach for the next few camps, I won’t be able to give participant-side data after the next camp, so let this be my public testimonial: minicamp had a SERIOUS before/after effect on my life, resulting in more exploration, faster decision making (changed my thesis advisor, to great benefit and the surprise of many), and increased productivity. Its benefits are the cause of my volunteering to teach for it, and this comment.
In general, I think LessWrong.com would benefit from conspicuous guidelines: a readily-clickable FAQ or User’s Guide that describes posting etiquette, relevance criteria, and other general expectations.
I encourage everyone to look at the example of http://MathOverflow.net/, a web community for mathematicians that started with a few graduate students just half a year ago and has grown immensely in size and productivity since then (notably, enjoying regular contributions from Fields Medalist Terence Tao).
Not only do they have an FAQ, but a clearly distinguished ongoing Meta forum that was used extensively in its early development to analyze site policies:
http://mathoverflow.net/faq
http://meta.mathoverflow.net/
If we did discover a cognitive trick for making people collectively reason better, a sentence about it in an FAQ could work wonders.
Great post! Some thoughts/experience I’d like to add:
1) How I got started. I began using a multiple-sub-agents heuristic for introspection when I stopped thinking of my mind as a point-mass. The brain has physical extent, and there are even parts of my brain that I don’t much identify with as “me” even though they affect my bodily functioning and behavior. I thought, how might those parts work? How should I treat them? And then, hey, why not treat them like people? They’re made of brain, too.
Judging by the descriptions of various people who have tried this, you’re reasonably likely to identify one of your sub-agents as “you”.
2) “I am my executive system.” To avoid losing or constantly changing my sense of self, and to maintain neutrality, I try to identify most strongly with my executive system (theorized by Miller, Cohen and others to operate primarily in the prefrontal cortex). I think of “me” as a team leader who can coordinate the efforts of the rest of my various brain functions toward coherent goals that take into account their individual preferences. For example, sometimes I’ll tell my entertainment-seeking-distraction function that it’s probably in his best interest to let my productive-ambitious function work and build opportunities so life can be more entertaining on average in the future.
3) An honor system with signalling. When I strike a deal like that between conflicting functions or “sub-agents”, I find it extremely important to honor the deal so the sub-agents continue to trust my leadership. After committing to this as a policy, I’ve found it unbelievably easier to negotiate inner conflicts, especially akrasia. For example, when I strike a deal between work and (other) entertainment, I commit to the entertainment agent that I will not procrastinate entertainment indefinitely. Then, I indulge on occasion as a signal that I will honor the deal more as I get older.
I don’t know about the rest of you, but it seems to me that this “honor system with signaling” is absolutely essential to maintaining my own “inner order”, and my quality of life has increased dramatically since I adopted it. Of course I can’t be sure how it’d work for others, but it’s an idea.
tl;dr: I was excited by this post, but so far I find reading the cited literature uncompelling :( Can you point us to a study we can read where the authors reported enough of their data and procedure that we can all tell that their conclusion was justified?
I do trust you, Yvain, and I know you know stats, and I even agree with the conclusion of the post—that people are imperfect introspectors—but I’m discouraged to continue searching through the literature myself at the moment because the first two articles you cited just weren’t clear enough on what they were doing and measuring for me to tell if their conclusions were justified, other than by intuition (which I already share).
For example, none of your summaries says whether the fraction of people noticing the experimenters’ effect on their behavior was enough to explain the difference between the two experiment groups, and this seems representative of the 1977 review article you cited as your main source as well.
I looked in more detail at your first example, the electric shocks experiment (Nisbett & Schachter, 1966), on which you report
… people who took the pill tolerated four times as strong a shock as controls … Only three of twelve subjects made a connection between the pill and their shock tolerance …
I was wondering, did the experimenters merely observe
(1) a “Statistically Significant” difference between PILL-GROUP and CONTROL-GROUP? And then say “Only 3 of 12 people in the pill group managed to detect the effect of the placebo on themselves?”
Because that’s not a surprise, given the null hypothesis that people are good introspectors… maybe just those three people were affected, and that caused the significant difference between the groups! And jumping to conclusions from (1) is a kind of mistake I’ve seen before from authors assuming (if not in their minds, at least in their statistical formulae) that an effect is uniform across people, when it very plausibly isn’t.
Or, did the experimenters observe that
(2) believing that only those three subjects were actually affected by (their knowledge of) the pill was not enough to explain the difference between the groups?
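To make the worry in (1) concrete, here’s a toy computation with invented numbers (not the study’s actual data): a fourfold difference in group means is perfectly consistent with only three of twelve subjects being affected at all.

```python
# Toy illustration (invented numbers, NOT the study's data): a large
# group-level effect can be driven entirely by a few strong responders.
control_tolerance = [1.0] * 12           # baseline tolerance, arbitrary units

# Suppose only 3 of the 12 pill subjects respond, but respond strongly:
pill_tolerance = [1.0] * 9 + [13.0] * 3

mean = lambda xs: sum(xs) / len(xs)
print(mean(pill_tolerance) / mean(control_tolerance))  # 4.0
```

So under hypothesis (1), “only 3 of 12 noticed” is exactly what good introspection would predict, and distinguishing the hypotheses requires subject-level reasoning like (2).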
To see what the study really found, after many server issues with the journal website I tracked down the original 1966 article, which I’ve made available here. The paper doesn’t mention anything about people’s assessments of whether being (told they were) given a pill may have affected their pain tolerance.
Wondering why you wrote that, I went to the 1977 survey article you read, which I’ve made available as a searchable pdf here. There they say, at the bottom left of page 237, that their conclusion about the electric shocks vs pills was based on “additional unpublished data, collected from … experiments by Nisbett and Schachter (1966)”. But their description of that was almost as terse as your summary, and in particular, included no statistical reasoning.
Like I said, I do intuitively agree with the conclusion that people are imperfect introspectors, but I worry that the authors and reviewers of this article may have been sloppy in finding clear, quantitative evidence for this perspective, perhaps by being already too convinced of it...
LessWrong needs an FAQ. Really. I can’t encourage everyone enough to look at the example of
MathOverflow.net. It has a fantastic FAQ that simultaneously makes the site less scary and the standards more evident. Yes, those goals aren’t entirely mutually exclusive!
And MathOverflow, created by two grad students, grew explosively in a matter of months to involve many famous mathematicians and even Fields Medalists.
There is more than speculation here… there is evidence we should be updating on.
Be prepared to experience small attacks of guilt during the meditation for taking time away from your paper clips. It should help to be decisive before you begin, as Luke recommends, on a minimal amount of time that is worth the expected information gain and performance enhancing effects. Tell the paper clips—out loud, or at least in a clear voice in your head—that it’s in their interest to wait 15 minutes a day until you’re better at making them ;)
I would expect not for a paid workshop! Unlike CFAR’s core workshops, which are highly polished and get median 9/10 and 10/10 “are you glad you came” ratings, MSFP:
- was free and experimental,
- produced two new top-notch AI x-risk researchers for MIRI (in my personal judgement as a mathematician, and excluding myself), and
- produced several others who were willing hires by the end of the program and who I would totally vote to hire if there were more resources available (in the form of both funding and personnel) to hire them.
What I liked in a nutshell:
What would you prefer to be made of, if not matter?
On behalf of chemicals everywhere, I say: Screw you!
If there is a fact question at stake, take no prisoners; but you don’t get extra points for unnecessary angst.
This sounds like exactly the kind of failure mode I’m trying to describe. In your “empty identity” scenario, I’d now guess that an image of “selflessness” or “blankness” or something like that would either bias your beliefs about yourself or slow your processing of them. In particular, it might interfere with certain cognitive capacities that other people find natural, obvious, and useful. This is speculation on my part, but to the extent that narrative features are a bottleneck in how our brains process beliefs about ourselves and others, the way you naturally and efficiently represent yourself to yourself may be physically tied up with the same brain-bits that represent stories.
My thought here is that it may be better to learn to use that machinery sanely than to not use it… it’s like getting a ridiculously fast software package for analyzing data that makes all sorts of known-to-be-false assumptions about how the data was collected. Using it entirely naively is probably bad, as is not using it at all. Knowing that when the package says “X” it’s actually evidence for “Y”, and using it accordingly, is probably best.
Have you read Thou Art Godshatter?
I agree with Eliezer that my values are an ad-hoc assembly of things that happened to increase the genetic fitness of my ancestors, and that this ad-hoc-ness is why I do not solely value my own genetic fitness. If natural selection were smarter, I would. But naturally, I’m satisfied with the values I got instead :)
From the perspective of a hypothetical, evolution-personified designer who “created” me, my morals might just be signals. So I’m wary that I might be running on hostile hardware that might try overtaking my conscious values to, say, become a corrupt and promiscuous political leader with many offspring. But I don’t identify with this hostility as “my values”, and will make much conscious effort to prevent such corruption.
ETA: You might really have those values; I just want to draw attention to the fact that they’re not an inevitable consequence of evolution or “realizing one’s true purpose”. Thankfully, used as such, “true purpose” doesn’t have to mean anything non-subjective, nor in particular equate to “temporally earlier in-some-sense-implicitly-conceived purpose”.
Great question! It was in the winter of 2013, about a year and a half ago.
Until I’m destroyed, of course!
… but since Qiaochu asked that we take ultrafinitism seriously, I’ll give a serious answer: something else will probably replace ultrafinitism as my preferred (maximum a posteriori) view of math and the world within 20 years or so. That is, I expect to determine that the question of whether ultrafinitism is true is not quite the right question to be asking, and have a better question by then, with a different best guess at the answer… just because similar changes of perspective have happened to me several times already in my life.
This sounds good in theory. But in my experience, WorthIt-Bob doesn’t usually argue rationally.
I didn’t say anything about WorthIt-Bob having to be rational… you’ve dealt with irrational people before, and you can deal with irrational sub-agents, too. Heck, you can even train your pets, right?
In particular, orthonormal has some great advice for dealing with people and sub-agents alike: figure out all their feelings on the issue, even the ones they didn’t know they had. Then they might turn out more rational than you thought, or you might gain access to the root of their irrationality. Either way, you get a better model of them, and you probably need it.
Bad tactics: mentioning Sam Harris (who got a pretty bad reception here) and choosing somewhat political examples.
I didn’t want to choose issues people already agreed upon or ignored, including Harris himself.
Your point seems so true as to be obvious. …
Have you not had a conversation that was ended or degraded with “Well morality is subjective anyway, this is all a pointless question.”? The goal of the post is to respond as effectively as possible to this disorientation, and unsurprisingly, the most convincing response is an obviously true one… what I’m offering is which obviously true response is most effective. That’s what I was getting at when I wrote
Though perhaps obvious, this idea has some seriously persuasive consequences
though maybe I should expand on that in the OP?
Very nice post! My personal favorite things I’ve learned about from reading LessWrong:
Causality: Models, Reasoning, and Inference, a book by Judea Pearl written in 2000 which is frequently referenced by the SIAI and on LessWrong.
Politics as charity: that in terms of expected value, altruism is a reasonable motivator for voting (as opposed to common motivators like “wanting to be heard”).
That a significant number of people are productively working on philosophical problems relevant to our lives.
Lots of little sanity checks to keep in mind, like Conservation of Expected Evidence, i.e. that before you see any evidence, your expectation of what your confidence will be after seeing the evidence is equal to your prior confidence. (But see this comment on things you can expect from your beliefs.)
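For the curious, the check itself is one line of arithmetic. A sketch with made-up numbers (prior and likelihoods chosen arbitrarily):

```python
# Conservation of Expected Evidence: before observing E, the expected
# posterior P(H|.) equals the prior P(H). All numbers are illustrative.
p_h = 0.3             # prior P(H)
p_e_h = 0.8           # likelihood P(E|H)
p_e_nh = 0.4          # likelihood P(E|~H)

p_e = p_h * p_e_h + (1 - p_h) * p_e_nh       # P(E) by total probability
post_e = p_h * p_e_h / p_e                   # P(H|E), by Bayes
post_ne = p_h * (1 - p_e_h) / (1 - p_e)      # P(H|~E), by Bayes

# Average the two possible posteriors, weighted by how likely each is:
expected_posterior = p_e * post_e + (1 - p_e) * post_ne
print(round(expected_posterior, 10))  # 0.3, equal to the prior
```

Any choice of prior and likelihoods gives the same identity, since P(E)·P(H|E) + P(¬E)·P(H|¬E) is just the law of total probability applied to P(H).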
I can’t claim to be “converted to rationality” or any particular school of thought by LessWrong, because most of the ideas in the sequences were not new to me when I read them, but it was extremely impressive and relieving to see them all written down in one place, and they would have made a huge impact on me if I’d read them growing up!
I’m not saying rationalists should avoid engaging in ritual like the plague; but I do a lot of promoting of CFAR and rationality to non-LW-readers, and I happen to know from experience that a post like this in Main sends bad vibes to a lot of people. Again, I think it’s sad to have to worry so much about image, but I think it’s a reality.
You’re describing costly signaling. Contrary to your opening statement,
people on LessWrong are usually using the term “signalling” consistently with its standard meaning in economics and evolutionary biology. From Wikipedia,
In particular, the ev bio article even includes a section on dishonest signalling, which seems to be what you’re complaining about here:
This post is still interesting as a highlight reel of different examples of signalling, and shows that the term is, in its standard usage, rather non-specific. It’s just not an illustration that people here are using it wrongly.