I would probably define AGI first, just because, and I’m not sure about the idea that we are “competing” with automation (which is still just a tool conceptually right?).
We cannot compete with a hammer, or a printing press, or a search engine. Oof. How do I express this? Language is so hard to pin down sometimes.
If you think of AI as a child, it is uncontrollable. If you think of AI as a tool, of course it can be controlled. I think a corp has to be led by people, so that “machine” wouldn’t be autonomous per se…
Guess it’s all about defining that “A” (maybe we use “S” for synthetic or “S” for silicon?)
Well and I guess defining that “I”.
Dang. This is for sure the best place to start. Everyone needs to be as certain as possible (heh) that they're talking about the same things. AI as a concept is, like, a mess. Maybe we use ML and similar terms instead? Get really specific about the type and everything?
I dunno, but I enjoyed this piece! I'm left wondering: what if we prove AGI is uncontrollable but not that it's possible to create? Is "uncontrollable" enough justification not to even try, and more so, to somehow [personally I think this is impossible, but] dissuade people from writing better programs?
I'm more afraid of humans and censorship and autonomous policing and what have you than "AGI" (or ASI).