Same question—I applied on Nov 22 and haven’t heard back
Wow, thanks for putting together this incredible explanation! I found it very accessible.
The Factored Sets idea seems like a really important achievement building on Pearlian causality.
I enjoyed Sense of Style but agree with this post. Thanks for writing.
This post itself is a great example of something that wouldn’t quite work in classic style, but whatever style it is, it’s great! It’s like the closest thing to classic style given the level of self-reference present in the subject.
Really great overview. I’ll probably draw points from this in an explainer I’m going to write for a new audience.
When I first started reading LessWrong, one of my memorable “wow moments” was realizing that being an atheist is just the starting line to all this productive philosophy and transhumanism, despite the fact that most people would spend their whole lives just arguing with me about that part.
I wonder if this post can give readers a similar “wow moment” by putting into context that acknowledging AI risk, despite being a super controversial non-mainstream position, is just the starting line to a productive discussion of superintelligent decision theory and negotiation/trade.
Good point. I think the reference I’m imagining should include a section on “what do the experts think?” and show how a significant number of very smart experts think this, and that there’s arguably a trend toward more and more of them agreeing.
I still think most of the resource should present the arguments themselves, because I think most people in tech largely convince themselves via their own intuitive arguments, like “AI just does what we tell it” or “a smart enough AI won’t be mean”.
Just curious, why did you personally buy a one-hose AC unit in 2019 rather than two-hose?
Yeah, the credit card company will probably agree, because the OP’s case has a ton of merit and they err on the side of the customer unless maybe you’re a serial abuser of the chargeback system.
I’m a huge fan of Pinker. How the Mind Works and The Language Instinct are two of my all-time favorite books. So I’m surprised and saddened to see him engaging in this debate for years without showing familiarity with many of the core AI concepts, such as instrumental convergence and corrigibility.
Yeah I would do a chargeback. They didn’t provide service of any value, so you shouldn’t have to pay them.
Bravo Solenoid Entity, love your work with SSC Podcast and really appreciate that you’re taking this on.
Elon Musk said a few weeks ago that Tesla’s main strategy right now is to slash the cost of personal transportation by 4x by perfecting full self-driving AI, and it’s attempting to achieve that this year. (Relatedly, they’re not allocating resources to making an even cheaper version of the Model 3 because it wouldn’t be 4x cheaper anyway.)
Making good on Musk’s claim would probably add another trillion dollars to Tesla’s market cap in short order.
This is fine, and I can’t help wondering what it would say if given more resources. I feel compelled to give it more resources.
Of course the commenters talking about cooling gradients vs net cooling can’t agree on an air conditioning utility function
Yes it’s an important insight that paper clips are a representative example of a much bigger and simpler space of optimization targets than alpine villages.
Right now no one knows how to maximize either paper clips or alpine villages. The first thing we know how to build will probably be some poorly-understood recursively self-improving cycle of computer code interacting with other computer code. The resulting intelligence will then converge on some goal, and on the capability to optimize that goal extremely powerfully. The problem is that this emergent goal will be a lot more random and arbitrary than an alpine village. Most random things this process could land on look like a paper clip in how devoid of human value they are, not like an alpine village, which has a very significant amount of human value in it.
Thanks for this! I’ve been listening every day and it’s my preferred way to keep up with LessWrong.