This is not an obvious solution, since (as you are probably aware) you run into the threat of human disempowerment given sufficiently strong models. You may disagree that this is an issue, but that would at least need to be argued.
A “feudal” system is at least as disempowering for nearly all humans, and would probably be felt as far more disempowering. I really don’t care at all how empowered Sam Altman is.
I’d say that the “open source uncensored models” pose a greater danger of rapid human extinction, endless torture, and the like… except that I give very, very little credence to the idea that any of the “safety” or alignment directions anybody has been pursuing will do anything to prevent those. I guess I can hope they pose a greater danger.
That post has already gotten a disagree, and I really, really wanna know which paragraph it’s meant to apply to, or if it’s meant to apply to both of them.
Sure. Here’s the argument. Concern about future hypothetical harms (described with magic insider jargon words like disempowerment, fast takeoff, ASI, gray goo, fooming) is used as an excuse to “deprioritize” dealing with very real, much more boring present-day harms.
Here’s the stupid Hegelian dialectic that this community has promoted:
Thesis: AI could kill us all!!!!1111
Antithesis: Drop bombs on datacenters; we have to stop now.
Synthesis: Let’s just trust wealthy and powerful people to build AI responsibly. Let’s make sure they work in secret, so nobody else does something irresponsible.
This doesn’t seem to actually respond to said concern in any way, though.
Like… do you think that concerns about “everyone dies” are… not plausible? That such outcomes just can’t happen? If you do think that, then that’s the argument, and whether something is being “used as an excuse” is completely irrelevant. If you don’t think that, then… what?
Those concerns are not plausible for the tools that exist today. Maybe they’re plausible for things that will be released tomorrow.
The ‘anti-TESCREAL’ community is pretty united in the thesis that ‘AI safety’ people concerned with the things those jargon words describe are pulling air away from their ‘mundane’ concerns about tech that is actually in use today.