The link on the crossposted EA post is wrong and goes to the previous round, which could be confusing. The correct link is: https://forum.effectivealtruism.org/posts/oFeGLaJ5bZBBRbjC9/ea-funds-long-term-future-fund-is-open-to-applications-until
As the OP said, higher status is associated with image-free bluish t-shirts, and in most cases I think he is right, though some exceptions are obviously possible.
There is a problem of “moral unemployment”: if a superintelligent AI does all the hard work of analysing “what I should want”, it will strip me of the last pleasant duty I may have.
E.g.: Robot: “I know that your deepest desire, which you may not be fully aware of, but which, after a lot of suffering, you would learn for sure, is to write a novel. And I have already written this novel for you: the best one you could possibly write.”
It is probably wrong to interpret the DA as “doom is imminent”. The DA just says that we are likely in the middle of the total population of all humans (or other relevant observers) ever born.
For some emotional reason we are not satisfied with being in the middle and interpret it as “doom”, but according to the DA there are 100 billion more people in the future. It starts to look like doom only if we account for expected population growth, since in that case the next 100 billion people will appear within a few hundred years.
Moreover, the DA says that doom very soon is very unlikely, which I call the “reverse DA”.
In the last few months I have experimented a lot with printing custom t-shirts with my own designs. Having such a t-shirt will not increase my social status, but an interesting t-shirt can make me more visible in a crowd and serve as a good conversation starter. There are several important points if you want a custom-printed t-shirt:
1) The print should cover the whole surface of the t-shirt.
2) Use sublimation printing. Never print small logos on a cotton t-shirt; they will not survive the first washing. Sublimation is unkillable.
3) Choose material with a “jersey structure”: it is still synthetic, but much more similar to cotton.
An example of a place with the right t-shirts, and some of my designs for them: https://www.rageon.com/a/users/alexprint
Try these links:
Fig 1: https://i.imgur.com/sef5SgH.jpg
Fig 2: https://i.imgur.com/EOWwz4x.jpg
One small element of such interaction could be the rephrasing of commands.
Human: “I want an apple.”
Robot: “Do you want a computer or a fruit?”
Another mode of interaction is presenting the plan of actions, perhaps drawing it as a visual image:
Robot: “To give you an apple, I have to go to the shop, which will take at least one hour.”
Human: “No, just find the apple in the refrigerator.”
The third way is to confirm that the human still wants X after a reasonable amount of time, say, once a day:
Robot: “Yesterday you asked for an apple. Do you still want it?”
The fourth is sending reports at regular intervals, describing how the project is going and which new subgoals have appeared:
Robot: “There are no apples in the shop. I am going to another village, but it will take two more hours.”
Human: “No, that is too long; buy me an orange.”
The general intuition pump for such interactions is the relationship between a human and an ideal human secretary, and such a pump could even be used to train the robot. Again, this type of learning is possible only after the biggest part of AI safety is solved; otherwise the robot will go foom after the first question.
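The four interaction patterns above (rephrasing, plan presentation, periodic re-confirmation, progress reports) can be sketched as a simple control loop. This is only an illustrative toy; the class and method names are my own invention, not an existing API.

```python
# Toy sketch of the four confirmation patterns described above.
# All names here are hypothetical illustrations, not a real robotics API.

class CourteousRobot:
    def __init__(self, ask_human):
        # ask_human: callback taking a question string, returning the answer.
        self.ask = ask_human

    def execute(self, command, plan):
        # 1) Rephrase the command to confirm understanding.
        if self.ask(f"You asked for {command!r}. Did I understand correctly?") != "yes":
            return "aborted: misunderstood command"
        # 2) Present the plan of actions before doing anything.
        if self.ask(f"My plan is: {plan}. Shall I proceed?") != "yes":
            return "aborted: plan rejected"
        # 3) and 4) Before each step, report progress and re-confirm the goal.
        for step in plan:
            still_wanted = self.ask(
                f"Progress report: starting {step!r}. Do you still want {command!r}?"
            )
            if still_wanted != "yes":
                return f"stopped before {step!r} on human request"
        return "done"

# Usage: a human who approves every question.
robot = CourteousRobot(ask_human=lambda question: "yes")
print(robot.execute("an apple", ["check refrigerator", "go to shop"]))  # -> done
```

The point of the sketch is that each of the four patterns is just a checkpoint where the human can cheaply abort, long before the robot commits to a two-hour trip to another village.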
Maybe it is not a bug but a feature: by forgetting or remembering parts of information, I could manipulate expected probability? This was already discussed on LW as the “flux universe”.
I don’t think that R&D will cease. My argument was of the form “if A then B”, but I don’t think that A is true. I am arguing here against those who associate the end of Moore’s law with the end of growth of computational power.
A thing could still function, but there are better and cheaper things; the correct name is probably “functional obsolescence”.
Maybe we need a draft party? Like a meta-post on LW under which people would share their drafts? It would also help to coordinate who does what.
Anyway, making a robot which is able to discern different types of consent is an AI-safety-complete task, so AI safety should be solved before this robot arrives at the user’s home. I explored a similar model in “Dangerous value learners.”
Rephrasing a command is a good way to ensure understanding and to establish consent, as in: Alice: “I want coffee in bed”; Robot: “Do you want it to be poured into the bed?”
In fact, the idea is not mine; I read an article about it by Panov: https://www.sociostudies.org/almanac/articles/prebiological_panspermia_and_the_hypothesis_of_the_self-consistent_galaxy_origin_of_life/
If Moore’s law completely stops (in the sense that there will be no new, more effective chips), this will lower the price of computation for a few reasons:
1) The biggest part of a processor’s price covers R&D, but if Moore’s law stops, there will be no R&D, only manufacturing costs.
2) The biggest part of the manufacturing cost covers the cost and amortisation of large chip fabs. If no new chip fabs need to be built, the price will fall toward the marginal cost of production. For example, the 8080 processor cost 350 USD after its introduction in the beginning of the 1970s and only 3.5 USD at the end of the 1970s, when it had become obsolete.
3) No more depreciation from obsolescence. The depreciation period will be not 3 years but 20 years, which will lower the price of computation, or alternatively will allow users to linearly increase their computational power by buying more and more computers over a long period of time.
4) Expiring patents will allow cheaper manufacturing by other vendors.
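The depreciation argument in point 3 can be made concrete with some toy arithmetic. The purchase price below is a made-up assumption, not market data; only the 3-year versus 20-year ratio matters.

```python
# Illustrative arithmetic only: the machine price is an arbitrary assumption.
machine_price = 1000.0  # USD, assumed purchase price of a computer

def yearly_cost(price, depreciation_years):
    """Cost of owning the machine per year, straight-line depreciation."""
    return price / depreciation_years

# Under Moore's law, a machine is obsolete in ~3 years and must be replaced.
cost_with_obsolescence = yearly_cost(machine_price, 3)
# If chips stop improving, the same machine stays competitive for ~20 years.
cost_without_obsolescence = yearly_cost(machine_price, 20)

print(round(cost_with_obsolescence, 2))     # 333.33 USD/year
print(round(cost_without_obsolescence, 2))  # 50.0 USD/year
print(round(cost_with_obsolescence / cost_without_obsolescence, 1))  # ~6.7x
```

Under these toy numbers, the end of obsolescence alone cuts the yearly cost of owning a given amount of computing power by a factor of roughly 6.7, before counting points 1, 2, and 4.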
I meant not that the “robot will self-improve”, but that research in robotics will create AIs which are agential and adapted to act in the real world. Such AIs may later start to self-improve, even without a robotic body.
My main objection to this idea is that it is a local solution and doesn’t have built-in mechanisms to become a global AI safety solution, that is, to prevent the creation of other AIs which could be agential superintelligences. One could try to offer “AI police” as a service, but it could be less effective than agential police.
Another objection is Gwern’s idea that any Tool AI “wants” to become an agential AI.
This idea also excludes the robotic direction of AI development, which will produce agential AIs anyway.
One idea I have about emotions is that they are evolutionary adaptations which evolved in mammals before “reasoning intelligence” (whatever that is), and that their goal was to turn on a specific mode of action. For example, anger turns on the fight-response mode of action, which includes higher blood pressure, lower sensitivity to pain, and making fists. Most of the basic emotions could be explained as modes of action (e.g. fear, sexual arousal), but this is not the full story.
If we read any book on the ethology of, say, birds, we find that an important part of their behaviour is demonstrations. A cat is not only ready to fight; it demonstrates its readiness to fight to the opponent in a credible way by raising its fur and vocalising. I have read that, in the case of birds, demonstrations are more important than actual fights, as demonstrations show who won and who lost without physical damage to either side.
The human brain is built upon the animal brain, so it inherited most animal features, but in a suppressed and more flexible form. In animals, emotions work as a rule-based system which controls behaviour and signalling. This rule system is relatively simple and “mechanical”, so there is nothing mystical in emotions or difficult to reproduce in AI. (As in the case of a cat: if you see a small animal, hunt; if you see an animal of your size, fight and demonstrate; if you see an animal much larger than you, run.) Also, there is nothing specifically “human” in emotions; they are the animal part of us.
Emotions can also sometimes give us a quicker estimate of the nature of a situation than System 2 reasoning, as they result from a quick assessment of the situation by a large neural net, which can pick up many subtle clues and present them as a single conclusion, like “Run!”
Given all this, emotions as demonstrations of a chosen mode of action may be used by AI, in the same way that nation states demonstrate their military posture. They could also be used for a quick risk assessment of a new situation by an artificial neural net trained on such situations.
Something could exhibit goal-like behaviour to outside viewers without having the internal structure of an agent. For example, a falling brick could be said to be aimed at a specific point on the ground, but it is not an agent. In the same way, an infectious disease can take over the world without being an agent. Moreover, even some humans sometimes are not agents.
In my opinion, an Oracle AI only outputs answers to questions, while a Tool AI can do other stuff, like continuous data-stream transformation or controlling mechanisms.
Nation states, the human body, and OSs are all good, even clever, at preserving a homeostatic state (except during a government shutdown), but they typically achieve it not via high-level agential reasoning.
A swarm of agents could exhibit behaviour different from the behaviour or goals of any individual agent.
There is an interesting psychotherapeutic technique of calling out subpersonalities one by one, called “Voice Dialogue”, which was developed by the Stones. I experienced a few surprising results from it, both as a facilitator and as a subject of the therapy. This technique may be used to demonstrate the soundness of the subpersonality theory to those who doubt it, or to query the subpersonalities one by one, perhaps with the goal of learning their values for AI alignment. This is their site: http://delos-inc.com/