Former physicist, current worry-about-AI-ist.
Previously at AI Impacts
Richard Korzekwa
Sometimes what makes a crime “harder to catch” is the risk of false positives. If you don’t consider someone to have “been caught” unless your confidence that they did the crime is very high, then, so long as you’re calibrated, your false positive rate is very low. But holding off on punishment in cases where you do not have very high confidence might mean that, for most instances where someone commits the crime, they are not punished.
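To make the trade-off concrete, here’s a toy simulation (entirely my own construction, not from the original discussion; the uniform spread of confidences is an arbitrary assumption):

```python
# Toy model: each case comes with a calibrated probability of guilt, and
# someone only counts as "caught" when that probability clears a high bar.
import random

random.seed(0)
THRESHOLD = 0.95

cases = []
for _ in range(100_000):
    p_guilt = random.random()            # calibrated confidence in guilt
    guilty = random.random() < p_guilt   # guilt drawn to match that confidence
    cases.append((p_guilt, guilty))

punished = [guilty for p, guilty in cases if p > THRESHOLD]
false_positive_rate = punished.count(False) / len(punished)

guilty_confidences = [p for p, guilty in cases if guilty]
share_punished = sum(p > THRESHOLD for p in guilty_confidences) / len(guilty_confidences)

print(f"false positive rate among the punished: {false_positive_rate:.3f}")  # ~0.025
print(f"share of the guilty who get punished:   {share_punished:.3f}")       # ~0.10
```

With a 0.95 threshold, only about 2.5% of the people you punish are innocent, but roughly 90% of the guilty are never punished.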
If you want someone to compress and communicate their views on the future, whether they anticipate everyone will be dead within a few decades because of AI seems like a pretty important thing to know. And it’s natural to find your way from that to asking for a probability. But I think that shortcut isn’t actually helpful, and it’s more productive to just ask something like “Do you anticipate that, because of AI, everyone will be dead within the next few decades?”. Someone can still give a probability if they want, but it’s more natural to give a less precise answer like “probably not”, or a conditional answer like “I dunno, depends on whether <thing happens>”, or to push back on the framing, like “well, I don’t think we’re literally going to die, but…”.
He says, under the section titled “So what options do I have if I disagree with this decision?”:
But beyond [leaving LW, trying to get him fired, etc], there is no higher appeals process. At some point I will declare that the decision is made, and stands, and I don’t have time to argue it further, and this is where I stand on the decision this post is about.
Yeah, it seems like it fails mainly on premise 1, though I think that depends on whether you accept the meaning of “could not have done otherwise” implied by premises 2 and 3. But if you accept a meaning that makes premise 1 true (or, at least, less obviously false), then the argument is no longer valid.
This seems closely related to an argument I vaguely remember from a philosophy class:
1. A person is not morally culpable for something if they could not have done otherwise.
2. If determinism is true, there is only one thing a person could do.
3. If there is only one thing a person could do, they could not have done otherwise.
4. Therefore, if determinism is true, then whatever someone does, they are not morally culpable.
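For what it’s worth, the surface form of the argument is valid. Here’s a minimal propositional formalization (my own, for illustration, with premises 2 and 3 folded into one step):

```lean
-- A sketch of the argument above, with premises 2 and 3 combined.
-- It shows the conclusion follows from the premises as stated.
example (Determinism CouldDoOtherwise Culpable : Prop)
    (p1 : ¬CouldDoOtherwise → ¬Culpable)        -- premise 1
    (p23 : Determinism → ¬CouldDoOtherwise) :   -- premises 2 and 3 combined
    Determinism → ¬Culpable :=                  -- conclusion (4)
  fun d => p1 (p23 d)
```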
Seems reasonable.
Possibly I’m behind on the state of things, but I wouldn’t put too much trust in a model’s self-report on how things like routing work.
Of course many ways of making a room more fun are idiosyncratic to a particular theme, concept, or space.
I think fun is often idiosyncratic to particular people as well, and this is one reason why fun design is not more common, at least for spaces shared by lots of people. For me, at least, ‘fun’ spaces are higher variance than more conventional spaces. Many do indeed seem fun, but sometimes my response is “this is unusual and clearly made for someone who isn’t me”.
But maybe this is mostly a skill issue. The Epic campus looks consistently fun to me, for example.
AI Impacts looked into this question, and IMO “typically within 10 years, often within just a few years” is a reasonable characterization. https://wiki.aiimpacts.org/speed_of_ai_transition/range_of_human_performance/the_range_of_human_intelligence
I also have data for a few other technologies (not just AI) doing things that humans do, which I can dig up if anyone’s curious. They’re typically much slower to cross the range of human performance, but so was most progress prior to AI, so I dunno what you want to infer from that.
And like, this is why it’s normal epistemics to ignore the blurbs on the backs of books when evaluating their quality, no matter how prestigious the list of blurbers! Like that’s what I’ve always done, that’s what I imagine you’ve always done, and that’s what we’d of course be doing if this wasn’t a MIRI-published book.
If I see a book and I can’t figure out how seriously I should take it, I will look at the blurbs.
Good blurbs from serious, discerning, recognizable people are not on every book, even books from big publishers with strong sales. I realize this is N=2, so update (or not) accordingly, but the first book I could think of that I knew had good sales but isn’t actually good is The Population Bomb. I didn’t find blurbs for that one (I didn’t look all that hard, though, and the book is pretty old, so it may not be a good check on today’s publishing ecosystem anyway). The second book that came to mind was The Body Keeps the Score. The blurbs for that one seem to be from a couple of respectable-looking psychiatrists I’ve never heard of.
Another victory for trend extrapolation!
You will crash your car in front of my house within the next week
My weak downvotes are +1 and my strong downvotes are −9. Upvotes are all positive.
I agree that in the context of an explicit “how soon” question, the colloquial use of fast/slow often means sooner/later. In contexts where you care about actual speed, like you’re trying to get an ice cream cake to a party and you don’t want it to melt, it’s totally reasonable to say “well, the train is faster than driving, but driving would get me there at 2pm and the train wouldn’t get me there until 5pm”. I think takeoff speed is more like the ice cream cake thing than the flight to NY thing.
That said, I think you’re right that if there’s a discussion about timelines in a “how soon” context, then someone starts talking about fast vs slow takeoff, I can totally see how someone would get confused when “fast” doesn’t mean “soon”. So I think you’ve updated me toward the terminology being bad.
I agree. I look at the red/blue/purple curves and I think “obviously the red curve is slower than the blue curve”, because it is not as steep and neither is its derivative. The purple curve is later than the red curve, but it is not slower. If we were talking about driving from LA to NY starting on Monday vs flying there on Friday, I think it would be weird to say that flying is slower because you get there later. I guess maybe it’s more like when people say “the pizza will get here faster if we order it now”? So “get here faster” means “get here sooner”?
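To illustrate with a toy example (mine, with made-up parameters, not anyone’s actual takeoff model):

```python
# Two logistic curves: one starts early and climbs gradually, the other
# starts later but climbs much more steeply. "Later" is not "slower".
import math

def logistic(t, midpoint, steepness):
    """Capability over time as an S-curve."""
    return 1 / (1 + math.exp(-steepness * (t - midpoint)))

for t in range(0, 16, 3):
    early_gradual = logistic(t, midpoint=5, steepness=0.5)   # "red": early, gradual
    late_steep = logistic(t, midpoint=12, steepness=3.0)     # "purple": late, steep
    print(f"t={t:2d}  early/gradual={early_gradual:.2f}  late/steep={late_steep:.2f}")

# The late/steep curve hits the halfway point later (t=12 vs t=5), but it goes
# from 5% to 95% in about 2 time units, versus about 12 for the early one.
```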
Of course, if people are routinely confused by fast/slow, I am on board with using different terminology, but I’m a little worried that there’s an underlying problem where people are confused about the referents, and using different words won’t help much.
Yeah! I made some lamps using sheet aluminum. I used hot glue to attach magnets, which hold it onto the hardware hanging from the ceiling in my office. You can use dimmers to control the brightness of each color temperature strip separately, but I don’t have that set up right now.
Why do you think S-curves happen at all? My understanding is that it’s because there’s some hard problem that takes multiple steps to solve, and when the last step falls (or a solution is in sight), it’s finally worthwhile to toss increasing amounts of investment at actually realizing and implementing the solution.
I think S-curves are not, in general, caused by increases in investment. They’re mainly the result of how the performance of a technology changes in response to changes in the design/methods/principles behind it. For example, with particle accelerators, switching from Van de Graaff generators to cyclotrons might give you a few orders of magnitude once the new method is mature. But it takes several iterations to actually squeeze out all the benefits of the improved approach, and the first few and last few iterations give less of an improvement than the ones in the middle.
This isn’t to say that the marginal return on investment doesn’t factor in. Once you’ve worked out some of the kinks with the first couple cyclotrons, it makes more sense to invest in a larger one. This probably makes S-curves more S-like (or more step like). But I think you’ll get them even with steadily increasing investment that’s independent of the marginal return.
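Here’s a toy version of that claim (my own sketch, with made-up numbers): per-iteration gains that are small at first, large in the middle, and small at the end trace out an S-curve in log performance, even though each iteration gets the same investment.

```python
# Constant investment per iteration, but the gain from each iteration is
# bell-shaped: the first and last few iterations of a new approach squeeze
# out less benefit than the ones in the middle.
import math

N_ITERATIONS = 12
performance = [1.0]
for i in range(N_ITERATIONS):
    gain = 1 + 9 * math.exp(-((i - N_ITERATIONS / 2) ** 2) / 4)
    performance.append(performance[-1] * gain)

for i, p in enumerate(performance):
    print(f"iteration {i:2d}: performance ~ {p:9.1f}")
# Plotting log(performance) against iteration number gives the S shape.
```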
Neurons’ dynamics look very different from the dynamics of bits.
Maybe these differences are important for some of the things brains can do.
This seems very reasonable to me, but I think it’s easy to get the impression from your writing that you think it’s very likely that:
1. The differences in dynamics between neurons and bits are important for the things brains do.
2. The relevant differences will cause anything that does what brains do to be subject to the chaos-related difficulties of simulating a brain at a very low level.
I think Steven has done a good job of trying to identify a bit more specifically what it might look like for these differences in dynamics to matter. I think your case might be stronger if you had a bit more of an object-level description of what, specifically, is going on in brains that’s relevant to doing things like “learning rocket engineering” and is also hard to replicate in a digital computer.
(To be clear, I think this is difficult and I don’t have much of an object-level take on any of this, but I think I can empathize with Steven’s position here.)
AI Impacts Quarterly Newsletter, Apr-Jun 2023
The Trinity test was preceded by a full test with the plutonium replaced by some other material. The inert test was designed to check whether they were getting the needed compression. (My impression is that this was not publicly known until relatively recently.)
Recently I’ve been wondering what this dynamic does to the yes-men. If someone is strongly incentivized to agree with whatever nonsense their boss is excited about that week, and then goes on Twitter or national TV to repeat that nonsense, it can’t be good for their ability to see the world accurately.