I am aware of functional programming, but only because I have explored it on my own (I am still at City College of San Francisco, and will not transfer to a UC, hopefully Berkeley or UCSD, until this fall). Unfortunately, most community and junior colleges don't teach functional programming; they are mostly concerned with cranking out code monkeys rather than real computer scientists or cognitive scientists. (My degree is Cognitive Science/Computationalism and Computational Engineering, or, by its shorter name, Artificial Intelligence. At least that is what most of the people in the program are studying, especially at Berkeley and UCSD, the two places I hope to go.)
So, is the learning-type system you are referring to not sub-human-equivalent because it has no random or stochastic processes?
Or, to be a little clearer: such systems are not sub-human-equivalent because they are highly deterministic and (as you put it) predictable.
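To make the distinction I have in mind concrete, here is a minimal sketch (my own toy example, not anything from your post): two update rules for a learner, one fully deterministic and one with injected noise. All the names and the 0.1 learning rate are just placeholders I made up for illustration.

```python
import random

def deterministic_update(weights, inputs, lr=0.1):
    # Same inputs always produce the same result: fully predictable.
    return [w + lr * x for w, x in zip(weights, inputs)]

def stochastic_update(weights, inputs, lr=0.1, noise=0.05):
    # Gaussian noise on each step means two identical runs diverge,
    # so the system's behavior is no longer exactly predictable.
    return [w + lr * x + random.gauss(0.0, noise)
            for w, x in zip(weights, inputs)]

w = [0.0, 0.0]
x = [1.0, -1.0]
print(deterministic_update(w, x))  # always [0.1, -0.1]
print(stochastic_update(w, x))     # different on every run
```

If I am reading you right, the deterministic version is what makes these systems "predictable" in your sense, and something like the second rule is the missing stochastic ingredient.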
I get what you mean about human body-plan adaptation. We still carry the DNA for tails of all types (from reptilian to prehensile), and for other deprecated body plans. Thus, a human-equivalent AI would need to be flexible enough to adapt to a change in its body plan and tools (at least, that is what I am taking from this).
In another post (which I cannot find; I need to learn how to search my old posts better), I proposed that computers are another form of intelligence, evolving with humans as the agents of selection and mutation. Thus, they have followed a vastly different evolutionary pathway than biological intelligence has. I came up with this after hearing Eliezer Yudkowsky speak at one of the Singularity Summits (and maybe at Convergence 08; I cannot recall whether he was there). He talks about Mind Space: humans occupy only a single point in Mind Space, and the space of potential minds is huge, maybe even unbounded. (I hope he will correct me if I have misunderstood this.)
Thanks for the reply. It is very helpful.