I think almost all startups are really great! There is a very small set of startups that end up harmful for the world, usually by specifically making a leveraged bet on trying to create very powerful AGI, or on accelerating AI-driven R&D.
Because expecting a future with both of these technologies is in some sense what distinguishes our community from the rest of the world, leaning into exactly those technologies ends up being a surprisingly common thing for people to do if they optimize for profit (since that’s where our community’s alpha relative to the rest of the world lies), which I do think is really bad.
As a concrete example, I don’t think Elicit is making the world much worse. Its sign is not super obvious, but I don’t have the sense that they are accelerating timelines very much, making takeoff sharper, or exhausting political coordination goodwill. Similarly, I think Midjourney is good for the world, as is Suno, as are basically all the AI art startups. They might drive investment into the AI sector, and that might push their net impact into the red, but insofar as we want to have norms, and insofar as I give people advice on what to do, working on those things feels like it could definitely be net good.
I think you’re kind of avoiding the question. What startups are really great for AI safety?
I don’t think we should have norms or a culture that requires everything everyone does to be good specifically for AI Safety. Startups are mostly good because they produce economic value and solve problems for people. Sometimes they do so in a way that helps with AI Safety, sometimes not. I think Suno has helped with AI Safety because it has allowed us to make a dope album that made the rationalist culture better. Midjourney has helped us make LW better. But mostly, they just make the world better the same way most companies in history have made the world better.
Well yeah, but the question here is “what should the community guidelines be on how to approach startups that are specifically aimed at helping with AI safety” (which may or may not involve AI), not “what kinds of AI startups should people start, if any?”
Yeah, thanks! I agree with @habryka’s comment, though I’m a little worried it may shut down conversation: it might make people think the discussion is about AI startups in general rather than about AI startups in service of AI safety, and people may consider the question answered once they agree with the top comment.
That said, I do hear the argument that “any AI startup is bad because it increases AI investment and therefore shortens timelines”, so I think it’s worth at least getting more clarity on this.
Ah, I see. I did interpret the framing around “net positive” to be largely about normal companies. It’s IMO relatively easy to be net-positive, since all you need to do is avoid harm in expectation and help in any way whatsoever; my guess is that almost any technology startup that doesn’t accelerate things, but has reasonable people at the helm, can achieve that.
When we are talking about “how do you make a safety-focused company that is substantially positive for safety?”, that is a very different question in my mind.
Nod, but fwiw if you don’t have a cached answer, I am interested in you spending like 15 minutes thinking through whether there exist startup-centric approaches to helping with x-risk that are good.
Anything that improves interpretability seems like a no-brainer.
With the caveat that it has to actually improve interpretability in a provable fashion, and not just make us think it did, like we saw with CoTR.