Co-founded Nonlinear.org (x-risk incubator) and Superlinear (x-risk prizes/bounties).
Also into complex systems, history, and crypto.
People are so irrationally intimidated by lawyers that some legal firms make all their money by sending out thousands of scary form letters demanding payment for bullshit transgressions. My company was threatened with thousands of frivolous lawsuits but only actually sued once.
Threats are cheap.
I’d expect the most common failure mode for rationalists here is not understanding how patronage networks work.
Even if you do everything else right, it is very hard to get elected to a position of power if the other guy is distributing the office’s resources for votes.
You should be able to map out the voting blocs and their criteria, e.g. “Union X and its 500 members will mostly vote for Incumbent Y because they get $X in contracts per year, etc.”
Great idea, we’ll add this to the roadmap!
So glad you’re enjoying it! It’s mine too—I consume way more LW content because of it.
Good idea! Will add this to the roadmap.
Thanks for the feedback! We think a bot could make sense as well—we’re exploring this internally.
Love this! I used to manage teams of writers/editors and here are some ideas we found useful for increasing readability:
To remove fluff, imagine someone is paying you $1,000 for every word you remove. Our writers typically could cut 20-50% with minimal loss of information.
Long sentences are hard to read, so try to change your commas into periods.
Long paragraphs are hard to read, so try to break each paragraph into 2-3 sentences.
Most people just skim, and some of your ideas are much more important than others, so bold/italicize your important points.
Came here to say this. Highly recommend this book for anyone working on deception.
Anecdata: many in my non-EA/rat social circles of entrepreneurs and investors are engaging with this for the first time.
And, to my surprise (given the optimistic nature of entrepreneurs/VCs), they aren’t just being reflexive techno-optimists. They’re taking the ideas seriously and, since Bankless, “Eliezer” is becoming a first-name-only character.
Eliezer said he’s an accelerationist in basically everything except AI and gain-of-function bio and that seems to resonate. AI is Not Like The Other Problems.
I also think this approach deserves more consideration.
Also: since BCIs can generate easy-to-understand profits, and are legibly useful to many, we could harness market forces to shorten BCI timelines.
Ambitious BCI projects will likely be more shovel-ready than many other alignment approaches—BCIs are plausibly amenable to Manhattan Project-level initiatives where we unleash significant human and financial capital. Maybe use Advanced Market Commitments to kickstart the innovators, etc.
For anybody interested, Tim Urban has a really well written post about Neuralink/BCIs: https://waitbutwhy.com/2017/04/neuralink.html
“I’m an accelerationist for solar power, nuclear power to the extent it hasn’t been obsoleted by solar power and we might as well give up but I’m still bitter about it, geothermal, genetic engineering, neuroengineering, FDA delenda est, basically everything except GoF bio and AI”
https://twitter.com/ESYudkowsky/status/1629725763175092225?t=A-po2tuqZ17YVYAyrBRCDw&s=19
Another example of Overton movement—imagine seeing these results a few years ago:
Going to share a seemingly-unpopular opinion and in a tone that usually gets downvoted on LW but I think needs to be said anyway:
This stat is why I still have hope: 100,000 capabilities researchers vs 300 alignment researchers.
Humanity has not tried to solve alignment yet.
There’s no cavalry coming—we are the cavalry.
I am sympathetic to fears of new alignment researchers being net negative, and I think plausibly the entire field has, so far, been net negative, but guys, there are 100,000 capabilities researchers now! One more is a drop in the bucket.
If you’re still on the sidelines, go post that idea that’s been gathering dust in your Google Docs for the last six months. Go fill out that fundraising application.
We’ve had enough fire alarms. It’s time to act.
I think this is a really promising idea.
If the goal is to unify diverse stakeholders, including non-technical ones, I wonder if it would be better to use a less-wonky target (e.g. “50%” instead of “.002 OOMs”)
+1
My background is extremely relevant here and if anybody in the alignment community would like help thinking through strategy, I’d love to be helpful.
It’s unfortunate that this version is spreading because many people will think it’s a low credibility TEDx talk instead of a very credible main stage TED talk.
People give standing ovations when they feel inspired to because something resonated with them. They’re applauding him for trying to save humanity, and this audience reaction gives me hope.
There is a reason courtrooms give both sides equal chances to make their case before they ask the jury to decide.
It is very difficult for people to change their minds later, and most people assume that if you’re on trial, you must be guilty, which is why judges remind juries about “innocent until proven guilty”.
This is one of the foundations of our legal system, something we learned over thousands of years of trying to get better at justice. You’re just assuming I’m guilty and saying that justifies not giving me a chance to present my evidence.
Also, if we post another comment thread a week later, who will see it? EAF/LW don’t have sufficient ways to resurface old but important content.
Re: “my guess is Ben’s sources have received dozens of calls”—well, your guess is wrong, and you can ask them to confirm this.
You also took my email strategically out of context to fit the Emerson-is-a-horned-CEO-villain narrative. Here’s the full email:
Great idea! We’ll add it to the list.