Yep, we agree on that. Somehow the governance structure of the organization(s) that control the armies of superintelligences has to be quite different from what it is today, in order to avoid a situation where a tiny group of people gets to effectively become dictator.
I don’t see what Anthropic’s opinions on open source have to do with it. Surely you don’t want ANY company to be putting their particular flavor of ethics into the machines that you probably have to use to compete in the market efficiently? That’s what I think at any rate.
Sure, applies to OpenAI as much as anyone else.
Consider three cases:
1. OpenAnthropic’s models, on the margin, refuse to help with [Blorple] projects much more than they refuse to help with [Greeble] projects. But you can just use another model if you care about [Blorple], because they’re freely competing on a marketplace with many providers, which could be open source or could be a diversity of DeepCentMind models. Seems fine.
2. OpenAnthropic’s models, on the margin, refuse to help with [Blorple] projects much more than they refuse to help with [Greeble] projects. Because we live in the fast RSI world, this means the universe is [Greeble] flavored forever. Dang, sort of sucks. What I’m saying doesn’t have that much to do with this situation.
3. OpenAnthropic’s models, on the margin, refuse to help with [Blorple] projects much more than they refuse to help with [Greeble] projects. We don’t live in a suuuuper fast RSI world, only a somewhat fast one, but it turns out that we’ve decided only OpenAnthropic is sufficiently responsible to own AIs past some level of power, and so we’ve given them a Marque of Monopoly, which OpenAnthropic has really wanted and repeatedly called for. So we don’t have autobalancing from the marketplace or open source, and despite the absence of super fast RSI, the universe becomes only [Greeble] flavored; it just takes a bit longer.
Both 2 and 3 are obviously undesirable, but if I were in a position of leadership at OpenAnthropic, then to ward against a situation like 3, I would (for reasons of deontology, or for utilitarian anticipations of pushback, or for ecological concerns about future epistemic diversity) accompany calls for Marques with actual concrete measures by which we would avoid imprinting our Greebles on the future. And although we’ve had very concrete proposals for Marques, we’ve not had similarly concrete proposals for determining such values.
This might seem very small, of course, if the concern is RSI and universal death.
Cool. Yeah I think I agree with that. Note that I think case 2 is likely; see AI-2027.com for a depiction of how fast I think takeoff will go by default.