Sure. Here’s the argument. Concern about future hypothetical harms (described with magic insider jargon words like disempowerment, fast takeoff, ASI, gray goo, fooming) is used as an excuse to “deprioritize” dealing with very real, much more boring present-day harms.
Here’s the stupid Hegelian dialectic that this community has promoted:
Thesis: AI could kill us all!!!!1111
Antithesis: Drop bombs on datacenters; we have to stop now.
Synthesis: Let’s just trust wealthy and powerful people to build AI responsibly. Let’s make sure they work in secret, so nobody else does something irresponsible.
This doesn’t seem to actually respond to said concern in any way, though.
Like… do you think that concerns about “everyone dies” are… not plausible? That such outcomes just can’t happen? If you do think that, then that’s the argument, and whether something is being “used as an excuse” is completely irrelevant. If you don’t think that, then… what?
Those concerns are not plausible for the tools that exist today. Maybe they’re plausible for things that will be released tomorrow.
The ‘anti-TESCREAL’ community is pretty united in the thesis that ‘AI safety’ people concerned with the jargon I’m quoting are pulling attention away from their ‘mundane’ concerns about tech that is actually in use today.