I blog at https://dynomight.net where I like to strain my credibility by claiming that incense and ultrasonic humidifiers might be bad for you.
One way you could measure which one is “best” would be to measure how long it takes people to answer certain questions. E.g. “For what fraction of the 1997-2010 period did Japan spend more on healthcare per-capita than the UK?” or “what’s the average ratio of healthcare spending in Sweden vs. Greece between 2000 and 2010?” (I think there is an academic literature on these kinds of experiments, though I don’t have any references on hand.)
In this case, I think Tufte goes overboard in saying you shouldn’t use color. But if the second plot had color, I’d venture it would win most such contests, if only because the y-axis is bigger and it’s easier to match the lines with the labels. But even if I don’t agree with everything Tufte says, I still find him useful because he suggests different options and different ways to think about things.
I loved this book. The most surprising thing to me was the answer that people who were there in the heyday give when asked what made Bell Labs so successful: they always say it was the problem, i.e. having an entire organization oriented towards the goal of “make communication reliable and practical between any two places on earth”. When Shannon left the Labs for MIT, people who were there immediately predicted he wouldn’t do anything of the same significance because he’d lose that “compass”. Shannon was obviously a genius, and he did more after leaving than most people ever accomplish, but still nothing as significant as what he did while at the Labs.
If I hadn’t heard back from them, would you want me to tell you? Or would that be too sad?
Well, no. But I guess I found these things notable:
Alignment remains surprisingly brittle and random. Weird little tricks remain useful.
The tricks that work for some models often seem to confuse others.
Cobbling together weird little tricks seems to help (Hindi ranger step-by-step).
At the same time, the best “trick” is a somewhat plausible story (duck-store).
PaLM 2 is the most fun, Pi is the least fun.
I thought this was fantastic, very thought-provoking. One possibly easy thing that I think would be great would be links to a few posts that you think have used this strategy with success.
I specified (right before the first graph) that I was using the US standard of 14g. (I know the paper uses 10g. There’s no conflict because I use their raw data which is in g, not drinks.)
I know that the mainstream view on Lesswrong is that we aren’t observing alien aircraft, so I doubt many here will disagree with the conclusion. But I wonder if people here agree with this particular argument for that conclusion. Basically, I claim that:
P[aliens] is fairly high, but
P[all observations | aliens] is much lower than P[all observations | no aliens], simply because it’s too strange that all the observations in every category of observation (videos, reports, etc.) never cross the “conclusive” line.
As a side note: I personally feel that P[observations | no aliens] is actually pretty low, i.e. the observations we have are truly quite odd / unexpected / hard-to-explain-prosaically. But it’s not as low as P[observations | aliens]. This doesn’t matter to the central argument (you just need to accept that the ratio P[observations | aliens] / P[observations | no aliens] is small) but I’m interested if people agree with that.
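To make the likelihood-ratio logic concrete, here is a toy Bayes calculation. All the numbers are made-up illustrations, not estimates from the post; the point is only that even with a fairly high prior, a small likelihood ratio drags the posterior way down:

```python
# Toy illustration of the argument above. All probabilities are
# assumed for illustration only.
p_aliens = 0.1                  # prior P[aliens] -- "fairly high"
p_obs_given_aliens = 0.001      # P[all observations | aliens]
p_obs_given_no_aliens = 0.05    # P[all observations | no aliens]

prior_odds = p_aliens / (1 - p_aliens)
likelihood_ratio = p_obs_given_aliens / p_obs_given_no_aliens
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)
print(f"P[aliens | observations] = {posterior:.4f}")
```

With these assumed numbers the posterior lands near 0.002, far below the 0.1 prior, which is the shape of the argument: you don’t need P[aliens] to be tiny, just the ratio P[observations | aliens] / P[observations | no aliens].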
Just to be clear, when talking about how people behave in forums, I mean more “general purpose” places like Reddit. In particular, I was not thinking about Less Wrong where in my experience, people have always bent over backwards to be reasonable!
Hey, you might be right! I’ll take this as useful feedback that the argument wasn’t fully convincing. Don’t mean to pull a motte-and-bailey, but I suppose if I had to, I’d retreat to an argument like, “if making a plot, consider using these rules as one option for how to pick axes.” In any case, if you have any examples where you think following this advice leads to bad choices, I’d be interested to hear them.
This covers a really impressive range of material—well done! I just wanted to point out that if someone followed all of this and wanted more, Shannon’s 1948 paper is surprisingly readable even today and is probably a nice companion:
http://people.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf
I would dissuade no one from writing drunk, and I’m confident that you too can say that people are penguins! But I’m sorry to report that personally I don’t do it by drinking but rather writing a much longer version with all those kinds of clarifications included and then obsessively editing it down.
I wasn’t (intentionally?) being ironic. I guess that for underage drinking we have the advantage that you can sort of guess how old someone looks, but still… good point.
I’ve politely contacted them several times via several different channels just asking for clarifications and what the “missing coefficients” are in the last model. Total stonewall: they won’t even acknowledge my contacts. Some people more connected to the education community also apparently did that as a result of my post, with the same result.
It’s a regression. Just like they extrapolate backwards to (1882+50=1932) using data from 1959, they extrapolate forwards at the end. (This is discussed in the “timelines” section.) This is definitely a valid reason to treat it with suspicion, but nothing’s “wrong” exactly.
Many thanks! All fixed (except one that I prefer the old way.)
Good point regarding year tick marks! I was thinking that labeling 0°C would make the most sense when freezing is really important. Say, if you were plotting historical data on temperatures and you were interested in trying to estimate the last frost date in spring or something. Then, 10°C would mean “twice as much margin” as 5°C.
Seed oils are usually solvent extracted, which makes me wonder, how thoroughly are they scrubbed of solvent, what stuff in the solvent is absorbed into the oil (also an effective solvent for various things), etc
I looked into this briefly at least for canola oil. There, the typical solvent is hexane. And some hexane does indeed appear to make it into the canola oil that we eat. But hexane apparently has very low toxicity, and—more importantly—the hexane that we get from all food sources apparently makes up less than 2% of our total hexane intake! https://www.hsph.harvard.edu/nutritionsource/2015/04/13/ask-the-expert-concerns-about-canola-oil/ Mostly we get hexane from gasoline fumes, so if hexane is a problem, it’s very hard to see how to pin the blame on canola oil.
I think matplotlib has way too many ways to do everything to be comprehensive! But I think you could do almost everything with some variants of these.
```python
ax.spines['top'].set_visible(False)  # or 'left' / 'right' / 'bottom'
ax.set_xticks([0, 50, 100], ['0%', '50%', '100%'])
ax.tick_params(axis='x', bottom=False, top=False)  # for axis='y', use left/right
ax.set_ylim([0, 0.30])
ax.set_ylim([0, ax.get_ylim()[1]])
```
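For completeness, here is a minimal self-contained sketch putting those calls together in one script. The data and styling choices are just illustrative:

```python
# Minimal matplotlib sketch combining the calls above.
# Data and styling choices are illustrative only.
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 50, 100], [0.05, 0.20, 0.28])

# Hide the top and right spines.
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)

# Custom tick positions and labels (positional labels need matplotlib >= 3.5).
ax.set_xticks([0, 50, 100], ['0%', '50%', '100%'])

# Hide the x-axis tick marks themselves (labels remain).
ax.tick_params(axis='x', bottom=False, top=False)

# Fix the y-axis range.
ax.set_ylim([0, 0.30])

fig.savefig("plot.png")
```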
Sadly, no—we had no way to verify that.
I guess one way you might try to confirm/refute the idea of data leakage would be to look at the decomposition of brier scores: GPT-4 is much better calibrated for politics vs. science but only very slightly better at politics vs. science in terms of refinement/resolution. Intuitively, I’d expect data leakage to manifest as better refinement/resolution rather than better calibration.
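For anyone who wants to try this, here is one sketch of the standard Murphy decomposition of the Brier score into reliability (calibration), resolution (refinement), and uncertainty, using equal-width probability bins. The binning scheme and function are my own illustrative choices, and the decomposition is only exact when forecasts are constant within each bin:

```python
import numpy as np

def brier_decomposition(forecasts, outcomes, n_bins=10):
    """Murphy decomposition of the Brier score for binary outcomes:
    Brier = reliability - resolution + uncertainty (exact when
    forecasts are constant within bins). Lower reliability is better
    calibration; higher resolution is better refinement."""
    forecasts = np.asarray(forecasts, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    n = len(forecasts)
    base_rate = outcomes.mean()

    # Assign each forecast to an equal-width probability bin.
    bins = np.clip((forecasts * n_bins).astype(int), 0, n_bins - 1)
    reliability = resolution = 0.0
    for k in range(n_bins):
        mask = bins == k
        n_k = mask.sum()
        if n_k == 0:
            continue
        f_k = forecasts[mask].mean()   # mean forecast in bin
        o_k = outcomes[mask].mean()    # observed frequency in bin
        reliability += n_k * (f_k - o_k) ** 2
        resolution += n_k * (o_k - base_rate) ** 2
    uncertainty = base_rate * (1 - base_rate)
    return reliability / n, resolution / n, uncertainty
```

Comparing the reliability and resolution terms across question categories (politics vs. science) would give a rough version of the check described above.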
Thanks, someone once gave me the advice that after you write something, you should go back to the beginning and delete as many paragraphs as you can without making everything incomprehensible. After hearing this, I noticed that most people tend to write like this:
Intro
Context
Overview
Other various throat clearing
Blah blah blah
Finally an actual example, praise god
Which is pretty easy to correct once you see it!