Richard Korzekwa
Former physicist, current worry-about-AI-ist. Previously at AI Impacts.
One way of thinking about offsetting is using it to price in the negative effects of the thing you want to do. Personally, I find it confusing to navigate tradeoffs between dollars, animal welfare, uncertain health costs, cravings for foods I can’t eat, and fewer options when getting food. The convenient thing about offsets is that I can reduce the decision to “Is the burger worth $x to me?”, where $x = price of burger + price of offset.
A common response to this is “Well, if you thought it was worth it to pay $y to eliminate t hours of cow suffering, then you should just do that anyway, regardless of whether you buy the burger”. I think that’s a good point, but I don’t feel like it helps me navigate the confusing-to-me tradeoff between like five different not-intuitively-commensurable considerations.
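In code, the decision rule I’m describing is just this (a minimal sketch; the function name and prices are invented for illustration):

```python
# Minimal sketch of the offset-inclusive decision rule described above.
# The prices here are made up; the point is collapsing a messy multi-factor
# tradeoff into a single number.
def worth_buying(value_to_me: float, burger_price: float, offset_price: float) -> bool:
    """Buy iff the burger is worth its price plus the price of offsetting it."""
    return value_to_me >= burger_price + offset_price

print(worth_buying(value_to_me=12.00, burger_price=8.00, offset_price=3.00))  # True
print(worth_buying(value_to_me=10.00, burger_price=8.00, offset_price=3.00))  # False
```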
Not to mention that of all of the hunter-gatherer tribes ever studied, there has never been a single vegetarian group discovered. Not. A. Single. One.
Of the ~200 studied, ~75% of them got over 50% of their calories from animals. Only 15% of them got over 50% of their calories from non-animal sources.
Do you have a source for this? I’m asking more out of curiosity than doubt, but in general, I think it would be cool to have more links for some of the claims. And thanks for all of the links that are already there!
It is sometimes good to avoid coming across as really weird or culturally out of touch, and ads can give you some signal on what’s normal and culturally relevant right now. If you’re picking up drinks for a 4th of July party, Bud Light will be very culturally on-brand, Corona would be fine, but a bit less on-brand, and mulled wine would be kinda weird. And I think you can pick this sort of thing up from advertising.
Also, it might be helpful to know roughly what group membership you or other people might be signalling by using a particular product. For example, I drive a Subaru. Subaru has, for a long time, marketed to (what appears to me to be) people who are a bit younger, vote Democrat, and spend time in the mountains. This is in contrast to, say, Ram trucks, which are marketed to (what looks to me like) people who vote Republican. If I’m in a context where people who don’t know me very well see my car, I am now aware that they might be biased toward thinking I vote Democrat or spend time outdoors. (FWIW, I did a low-effort search for which states have the strongest Subaru sales and it is indeed states with mountains and states with people who vote Democrat).
Recently I’ve been wondering what this dynamic does to the yes-men. If someone is strongly incentivized to agree with whatever nonsense their boss is excited about that week, and they then go on Twitter or national TV to repeat that nonsense, it can’t be good for seeing the world accurately.
Sometimes what makes a crime “harder to catch” is the risk of false positives. If you don’t consider someone to have “been caught” unless your confidence that they did the crime is very high, then, so long as you’re calibrated, your false positive rate is very low. But holding off on punishment in cases where you do not have very high confidence might mean that, for most instances where someone commits the crime, they are not punished.
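Here’s a rough simulation of the point (my construction; all the numbers are invented). A judge who is calibrated and only punishes above a high confidence bar ends up with a very low false positive rate, while most people who actually commit the crime go unpunished:

```python
# Calibrated-by-construction toy model: the judge's stated confidence c is
# drawn from a distribution, and the person is guilty with probability c.
import random

random.seed(0)
THRESHOLD = 0.95   # only count someone as "caught" above this confidence
TRIALS = 100_000

punished = punished_innocent = offenses = unpunished_offenses = 0
for _ in range(TRIALS):
    confidence = random.betavariate(2, 2)   # judge's stated confidence
    guilty = random.random() < confidence   # calibrated: P(guilty) == confidence
    offenses += guilty
    if confidence > THRESHOLD:
        punished += 1
        punished_innocent += not guilty
    elif guilty:
        unpunished_offenses += 1

print(f"false positives among punishments: {punished_innocent / max(punished, 1):.1%}")
print(f"offenses that go unpunished: {unpunished_offenses / max(offenses, 1):.1%}")
```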
If you want someone to compress and communicate their views on the future, whether they anticipate everyone will be dead within a few decades because of AI seems like a pretty important thing to know. And it’s natural to find your way from that to asking for a probability. But I think that shortcut isn’t actually helpful, and it’s more productive to just ask something like “Do you anticipate that, because of AI, everyone will be dead within the next few decades?”. Someone can still give a probability if they want, but it’s more natural to give a less precise answer like “probably not”, or a conditional answer like “I dunno, depends on whether <thing happens>”, or to push back on the framing, like “well, I don’t think we’re literally going to die, but…”.
He says, under the section titled “So what options do I have if I disagree with this decision?”:
But beyond [leaving LW, trying to get him fired, etc], there is no higher appeals process. At some point I will declare that the decision is made, and stands, and I don’t have time to argue it further, and this is where I stand on the decision this post is about.
Yeah, seems like it fails mainly on 1, though I think that depends on whether you accept the meaning of “could not have done otherwise” implied by 2 and 3. But if you accept a meaning that makes 1 true (or, at least, less obviously false), then the argument is no longer valid.
This seems closely related to an argument I vaguely remember from a philosophy class:
1. A person is not morally culpable of something if they could not have done otherwise
2. If determinism is true, there is only one thing a person could do
3. If there is only one thing a person could do, they could not have done otherwise
4. If determinism is true, whatever someone does, they are not morally culpable
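For what it’s worth, the argument is valid as stated. Here’s a minimal formal sketch in Lean (my formalization; the proposition names are hypothetical, not from the original argument):

```lean
-- Premises 1–3 entail conclusion 4 by chaining implications.
example (det oneOption couldOtherwise culpable : Prop)
    (p1 : ¬couldOtherwise → ¬culpable)   -- 1: culpability requires alternatives
    (p2 : det → oneOption)               -- 2: determinism leaves only one option
    (p3 : oneOption → ¬couldOtherwise)   -- 3: one option means no alternatives
    : det → ¬culpable :=                 -- 4: determinism rules out culpability
  fun hdet => p1 (p3 (p2 hdet))
```

So any disagreement has to be with the premises (or with equivocation between them), not with the inference.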
Seems reasonable.
Possibly I’m behind on the state of things, but I wouldn’t put too much trust in a model’s self-report on how things like routing work.
Of course many ways of making a room more fun are idiosyncratic to a particular theme, concept, or space.
I think fun is often idiosyncratic to particular people as well, and this is one reason why fun design is not more common, at least for spaces shared by lots of people. For me, at least, ‘fun’ spaces are higher variance than more conventional spaces. Many do indeed seem fun, but sometimes my response is “this is unusual and clearly made for someone who isn’t me”.
But maybe this is mostly a skill issue. The Epic campus looks consistently fun to me, for example.
AI Impacts looked into this question, and IMO “typically within 10 years, often within just a few years” is a reasonable characterization. https://wiki.aiimpacts.org/speed_of_ai_transition/range_of_human_performance/the_range_of_human_intelligence
I also have data for a few other technologies (not just AI) doing things that humans do, which I can dig up if anyone’s curious. They’re typically much slower to cross the range of human performance, but so was most progress prior to AI, so I dunno what you want to infer from that.
And like, this is why it’s normal epistemics to ignore the blurbs on the backs of books when evaluating their quality, no matter how prestigious the list of blurbers! Like that’s what I’ve always done, that’s what I imagine you’ve always done, and that’s what we’d of course be doing if this wasn’t a MIRI-published book.
If I see a book and I can’t figure out how seriously I should take it, I will look at the blurbs.
Good blurbs from serious, discerning, recognizable people are not on every book, even books from big publishers with strong sales. I realize this is N=2, so update (or not) accordingly, but the first book I could think of that I knew had good sales but isn’t actually good is The Population Bomb. I didn’t find blurbs for it (I didn’t look all that hard, though, and the book is pretty old, so it’s maybe not a good check for today’s publishing ecosystem anyway). The second book that came to mind was The Body Keeps the Score. The blurbs for that seem to be from a couple of respectable-looking psychiatrists I’ve never heard of.
Another victory for trend extrapolation!
My weak downvotes are −1 and my strong downvotes are −9. Upvotes are all positive.
I agree that in the context of an explicit “how soon” question, the colloquial use of fast/slow often means sooner/later. In contexts where you care about actual speed, like you’re trying to get an ice cream cake to a party and you don’t want it to melt, it’s totally reasonable to say “well, the train is faster than driving, but driving would get me there at 2pm and the train wouldn’t get me there until 5pm”. I think takeoff speed is more like the ice cream cake thing than the flight to NY thing.
That said, I think you’re right that if there’s a discussion about timelines in a “how soon” context, and then someone starts talking about fast vs slow takeoff, I can totally see how someone would get confused when “fast” doesn’t mean “soon”. So I think you’ve updated me toward the terminology being bad.
I agree. I look at the red/blue/purple curves and I think “obviously the red curve is slower than the blue curve”, because it is not as steep and neither is its derivative. The purple curve is later than the red curve, but it is not slower. If we were talking about driving from LA to NY starting on Monday vs flying there on Friday, I think it would be weird to say that flying is slower because you get there later. I guess maybe it’s more like when people say “the pizza will get here faster if we order it now”? So “get here faster” means “get here sooner”?
Of course, if people are routinely confused by fast/slow, I am on board with using different terminology, but I’m a little worried that there’s an underlying problem where people are confused about the referents, and using different words won’t help much.
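To make the later-vs-slower distinction concrete, here’s a toy sketch (the curve shapes and numbers are invented): a takeoff can start later without being any slower, and a genuinely slower takeoff is a shallower one, regardless of when it starts:

```python
# Three logistic "takeoff" curves: `onset` controls when the takeoff happens,
# `rate` controls how fast it is once it happens.
import math

def capability(t: float, onset: float, rate: float) -> float:
    return 1.0 / (1.0 + math.exp(-rate * (t - onset)))

for t in range(0, 11, 2):
    print(f"t={t:2d}  "
          f"sooner+fast={capability(t, onset=3, rate=2.0):.2f}  "
          f"later+fast={capability(t, onset=7, rate=2.0):.2f}  "
          f"sooner+slow={capability(t, onset=3, rate=0.5):.2f}")
```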
Yeah! I made some lamps using sheet aluminum. I used hot glue to attach magnets, which hold it onto the hardware hanging from the ceiling in my office. You can use dimmers to control the brightness of each color temperature strip separately, but I don’t have that set up right now.
Why do you think S-curves happen at all? My understanding is that it’s because there’s some hard problem that takes multiple steps to solve, and when the last step falls (or a solution is in sight), it’s finally worthwhile to throw increasing amounts of investment at actually realizing and implementing the solution.
I think S-curves are not, in general, caused by increases in investment. They’re mainly the result of how the performance of a technology changes in response to changes in the design/methods/principles behind it. For example, with particle accelerators, switching from Van de Graaff generators to cyclotrons might give you a few orders of magnitude once the new method is mature. But it takes several iterations to actually squeeze out all the benefits of the improved approach, and the first few and last few iterations give less of an improvement than the ones in the middle.
This isn’t to say that the marginal return on investment doesn’t factor in. Once you’ve worked out some of the kinks with the first couple cyclotrons, it makes more sense to invest in a larger one. This probably makes S-curves more S-like (or more step like). But I think you’ll get them even with steadily increasing investment that’s independent of the marginal return.
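A toy version of what I mean (my construction; the numbers are invented): hold investment per design iteration fixed, and you still get an S-curve, because the first few and last few iterations yield smaller gains than the middle ones:

```python
# Logistic performance curve over design iterations with constant investment:
# gains per step are small early (kinks being worked out), largest in the
# middle, and small late (the approach is exhausted).
import math

def performance(iteration: float, midpoint: float = 10.0, steepness: float = 0.6) -> float:
    return 1.0 / (1.0 + math.exp(-steepness * (iteration - midpoint)))

for i in range(0, 21, 4):
    gain = performance(i + 1) - performance(i)
    print(f"iteration {i:2d}: performance={performance(i):.3f}, gain this step={gain:.3f}")
```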
For what it’s worth, the sentiment I recall at the time among Americans was not that (almost) everyone everywhere thought it was terrible, just that the official diplomatic stance from (almost) every government was that it was terrible (and also that those governments had better say it’s terrible or at least get out of the way while the US responds). I think I remember being under the impression that almost everyone in Europe thought it was obviously bad. To be fair, I didn’t think much at the time about what, e.g., the typical person in China or Brazil or Nigeria thought about it. Also, that was a long time ago, so probably some revision in my memory.