My current position is that I don't know what the correct action is to nudge the world in the direction I want. The world seems to be working, more or less, at this point, and any nudge may send it down a path toward something that doesn't work (even sub-human AI might change the order of the world so much that it stops working).
So my strategy is to prepare a nudge that could be used in an emergency. Since I am also trying to live a semi-normal life and cope with akrasia, etc., it is not going quickly.
There are some actions that seem to be clear wins, like fighting against unFriendly AI. I find it difficult to see what kind of nudge you could prepare that would be effective in an emergency. Can you say more about the kind of thing you had in mind?
I think very fast UFAI is unlikely, so I tend to worry about the rest of the bottleneck. Slow AI* has its own dangers and is not a genie I would like to let out of the bottle unless I really needed it. Even if the first Slow AI is Friendly, that doesn't guarantee the next 1000 will be; it depends on the interaction between the AI and the society that makes it.
Not that I expect to code it all myself. Really, I should be thinking about setting up an institution to develop the information and hide it in such a way that it is distributed but doesn't leak. The time to release the information/code would be after a non-trivial depopulation of Earth, when it was having trouble re-forming an industrial society (or at some other time when industrial Earth was in danger). The reason not to release it straight away would be the hope of a better understanding of the future trajectory of the Slow AIs.
There might be an argument for releasing the information if we could show we would never get a better understanding of the future of the Slow AIs.
*By Slow AI I mean AI that is about as likely to Foom as unenhanced humans are, because it shares a similar organization and similar limitations of intelligence.
Could you define sub-human AI, please?
It seems to me that we already have all manner of sub-human AI: the AIs that handle telephone traffic, data mining, and air-traffic control; those used by governments, intelligence services, and the military; universities with AI programs; zoos with breeding programs (which sequence the genomes of endangered animals to find the best mate for each animal); etc.
Are these types of AI far too primitive to even be considered sub-human, in your opinion?
Not exactly too primitive, but of the wrong structure. Are you familiar with functional-programming type notation? An offline learning system can be considered a curried function of type
classify :: Corpus → (a → b)
where a and b are the input and output types, and Corpus is the training data. Consider (for simplicity) a chess program that learns from previous games:

Corpus → (ChessGameState → ChessMove)

or a data-mining tool set up to find terrorists:

Corpus → ((Passport, FlightItinerary) → Float)

where the Float is the probability that the person travelling is a terrorist, based on the passport presented and the itinerary.
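To make the curried-learner shape concrete, here is a toy sketch in Haskell; the 1-nearest-neighbour rule and all names are purely illustrative, not part of any real system discussed here:

```haskell
import Data.List (minimumBy)
import Data.Ord (comparing)

-- Training data: a list of (input, output) examples.
type Corpus a b = [(a, b)]

-- A toy offline learner of shape Corpus -> (a -> b):
-- once the corpus has been supplied, the result is a fixed
-- function from Double to String (1-nearest-neighbour).
train :: Corpus Double String -> (Double -> String)
train corpus x = snd (minimumBy (comparing (\(x', _) -> abs (x - x'))) corpus)

main :: IO ()
main = do
  let classify = train [(1.0, "low"), (10.0, "high")]
  putStrLn (classify 2.0)  -- nearest example is 1.0, so "low"
  putStrLn (classify 8.0)  -- nearest example is 10.0, so "high"
```

Whatever corpus you feed it, the learned function's type never changes; that fixity is part of what makes such systems predictable.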
They can be very good at their jobs, but they are predictable: you know their types. What I was worried about are learning systems that don't have well-defined input and output types over their lifetimes.
Consider the humble PC: it doesn't know how many monitors it is connected to, or what will be plugged into its USB sockets. A system that could learn to control it would need to be able to go from any type to any type, depending on what happened to be connected.* I think humans and animals are this kind of system: our brains have been selected to cope with many different kinds of body with minimal evolutionary change. That is what allows us to adopt prosthetics and cope with bodily changes over a lifetime (growth, and the loss of limbs or senses). These systems are far more flexible: they can learn quickly by restricting their search spaces, while still retaining a wide range of possible actions.
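A hedged sketch of what such an "open" interface might look like at the type level, using Haskell's Data.Dynamic; the particular handlers are invented for illustration only:

```haskell
import Data.Dynamic (Dynamic, fromDynamic, toDyn)

-- In contrast to classify :: Corpus -> (a -> b), an "open" system's
-- input and output types are not fixed over its lifetime: it inspects
-- whatever arrives (like a PC discovering a new USB device) and
-- responds only if it currently knows how.
handle :: Dynamic -> Maybe Dynamic
handle d =
  case fromDynamic d :: Maybe Int of
    Just n  -> Just (toDyn (n * 2))        -- knows how to handle Ints
    Nothing -> case fromDynamic d :: Maybe String of
      Just s  -> Just (toDyn (reverse s))  -- and Strings
      Nothing -> Nothing                   -- an unfamiliar "device"

main :: IO ()
main = do
  print (handle (toDyn (3 :: Int)) >>= fromDynamic :: Maybe Int)  -- Just 6
  print (handle (toDyn "abc") >>= fromDynamic :: Maybe String)    -- Just "cba"
```

The point is only that no single `a -> b` signature describes this system in advance; what it can do depends on what it has learned to plug in.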
There are further considerations for an intelligence concerning the type of the function that determines how the corpus/memory shapes the current input/output mapping, but that is another long reply.
*In a finite system you can represent a function from any type to any other type as a large integer. With the type notation, though, I am trying to indicate what the system is capable of learning at any one point; we don't search the whole space, for reasons of computational resources.
Thanks for the reply. It is very helpful.
I am aware of functional programming, but only because I have explored it on my own. I am still at City College of San Francisco, and will not be transferring to a UC (hopefully Berkeley or UCSD) until this fall. Unfortunately, most community and junior colleges don't teach functional programming, because they are mostly concerned with cranking out code monkeys rather than real computer scientists or cognitive scientists. (My degree is Cog Sci/Computationalism and Computational Engineering, or, by its shorter name, Artificial Intelligence; at least, that is what most of the people in the degree program are studying, especially at Berkeley and UCSD, the two places I wish to go.)
So, is the learning-type system you are referring to not sub-human-equivalent because it has no random or stochastic processes?
Or, to be a little more clear: are these systems not sub-human-equivalent because they are highly deterministic and (as you put it) predictable?
I get what you mean about human body-type adaptation. We still carry DNA for tails of various types (from reptilian to prehensile), and for other deprecated body plans. Thus, a human-equivalent AI would need to be flexible enough to adapt to changes in its body plan and tools (at least, that is what I am getting from this).
In another post (which I cannot find; I need to learn to search my old posts better), I propose that computers are another form of intelligence, one evolving with humans as the agents of selection and mutation. Thus they have had a vastly different evolutionary pathway than biological intelligence. I came up with this after hearing Eliezer Yudkowsky speak at one of the Singularity Summits (and maybe at Convergence 08; I cannot recall whether he was there). He talks about Mind Space, and how humans occupy only a point in it, while the potential Mind Space is huge (maybe even unbounded; I hope he will correct me if I have misunderstood this).