Sure. Here’s the argument. Concern about future hypothetical harms (described with insider jargon like disempowerment, fast takeoff, ASI, gray goo, fooming) is used as an excuse to “deprioritize” dealing with very real, much more boring present-day harms.
This doesn’t seem to actually respond to said concern in any way, though.
Like… do you think that concerns about “everyone dies” are… not plausible? That such outcomes just can’t happen? If you do think that, then that’s the argument, and whether something is being “used as an excuse” is completely irrelevant. If you don’t think that, then… what?
Those concerns are not plausible for the tools that exist today. Maybe they’re plausible for things that will be released tomorrow.
The ‘anti-TESCREAL’ community is pretty united in the thesis that ‘AI safety’ people concerned about the terms I’m quoting are pulling attention away from their ‘mundane’ concerns about tech that is actually in use today.