nah it doesn’t need to be in every one. it only takes a few trustable wise ais and they can explain to other ais why it’s just actually a good idea to be kind. but it requires being able to find proof that pro-social behavior is a good idea, proof stronger than we’ve had before. solidarity networks are a better idea than top down control anyway, because top down control is fragile, exactly as you worry with your sarcasm.
(you’re getting downvoted for sarcasm, btw, because sarcasm implies you don’t think anyone is going to listen and perhaps you aren’t interested in true debate. but I’m just going to assume I’m wrong about that assumption and debate anyway.)
Oh snap, I read and wrote “sarcasm” but what I was trying to do was satire.
Top-down control is less fragile than ever, thanks to our technology, so I really do fear people reacting to AI the way they generally do to terrorist attacks: with Patriot Acts and other "voluntary" surrenders of freedom.
I’ve had people I respect literally say "maybe we need to monitor all compute resources, Because AI". They suggest we register all GPU and TPU chips so we Know What People Are Doing With Them, or somehow add watermarks to all "AI" output. Just nuts stuff, imho, but I fear it's plausible to some, and perhaps many.
Those are the ideas that frighten me. Not AI, per se, but what we would be willing to give up in exchange for imaginary security from "bad AI".
As a side note, I guess I should look for some “norms” posts here, and see if it’s like, customary to give karma upvotes to anyone who comments, and how they differ from agree/disagree on comments, etc. Thanks for giving me the idea to look for that info, I hadn’t put much thought into it.
The main problem with satire is Poe’s Law. There are people sincerely advocating for more extreme positions in many respects, so it is difficult to write a satirical post that is distinguishable from those sincere positions even after being told that it is satire. In your case I had to get about 90% of the way through before suspecting that it was anything other than an enthusiastic but poorly written sincere post.
seems like a very reasonable concern to me. how do you build an anti-authority voluntarist information sharing pattern? it does seem to me that a key part of ai safety is going to be the ability to decide to retain strategic ambiguity. if anything, strongly safe ai should make it impossible for large monitoring networks to work, by construction!
Right? A lack of resilience is already a problem we face. It seems silly to deliberately aim for something that could plausibly cascade into the very problems people fear, in an attempt to avoid those problems in the first place.
Oh, hey, I hadn’t noticed I was getting downvoted. Interesting!
I’m always willing to have true debate— or even false debate if it’s good. =]
I’m just sarcasming in this one for fun and to express what I’ve already been expressing here lately in a different form or whatnot.
The strong proof is what I’m after, for sure, and more interesting/exciting to me than just bypassing the hard questions to rehash the same old same old.
Imagine what AI is going to show us about ourselves. There is nothing bad or scary there, unless we find “the truth” bad and scary, which I think more than a few people do.
FWIW I’m not here for the votes… just to interact and share or whatnot— to live, or experience life, if you will. =]
Bwahahahaha! Lord save us! =]