Katja, please interconnect the discussion parts with links (or something like a TOC).
Liso
This is a good point, which I would like to see analysed more precisely. (And I miss a deeper analysis in The Book :) )
Could we count the will (motivation) of today's superpowers = megacorporations as human or not? (And at what level could they control the economy?)
In other words: is Searle's Chinese room intelligent? (In the definition which The Book uses for (super)intelligence.)
And if it is, is it a human or an alien mind?
And could it be superintelligent?
What arguments could we use to prove that none of today's corporations (or states, or their secret services) is superintelligent? Think of collective intelligence with computer interfaces! Are they really slow at thinking? How could we measure their IQ?
And could we humans (who?) control them (how?) if they are superintelligent? Could we at least try to implement some moral thinking (or other human values) in their minds? How?
Law? Is law enough to prevent a superintelligent superpower from doing wrong things? (For example, destroying the rain forest because it wants to make more paperclips?)
First of all, thanks for your work on this discussion! :)
My proposals:
a wiki page for collaborative work
There are some points in the book which could be analysed or described better, and probably some which are wrong. We could find them and help improve them; a wiki could help us do it.
a better time for Europe and the rest of the world?
But this is probably not a problem. And if it is a problem, then it is probably not solvable. We will see :)
This is similar to the question about a 10x quicker mind and economic growth. I think there are some natural processes which are hard to "cheat".
One woman can give birth in 9 months, but two women cannot do it in 4.5 months. Putting twice as much money into the education process is more likely to give 2*N graduates after X years than N graduates after X/2 years.
Some parts of scientific acceleration have to wait years for new scientists. And twice as many scientists doesn't mean twice as many discoveries. Etc.
But then again, 1.5x more discoveries could bring 10x bigger profit!
We cannot assume only linear dependencies in such complex problems.
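A toy illustration of this nonlinearity (my own numbers and functional forms, purely hypothetical): suppose discoveries grow only with the square root of the number of researchers, while profit grows with the cube of discoveries. Then doubling the researchers neither doubles the discoveries nor keeps any fixed ratio to profit.

```python
# Hypothetical scaling laws, chosen only to illustrate non-linearity:
# discoveries ~ sqrt(researchers), profit ~ discoveries**3.

def discoveries(researchers: float) -> float:
    # assumed diminishing returns: 2x researchers -> ~1.41x discoveries
    return researchers ** 0.5

def profit(num_discoveries: float) -> float:
    # assumed increasing returns: ~1.41x discoveries -> ~2.83x profit
    return num_discoveries ** 3

for n in (100, 200, 400):
    d = discoveries(n)
    print(f"{n} researchers -> {d:5.1f} discoveries -> profit {profit(d):9.0f}")
# Ratios per doubling: researchers x2, discoveries x1.41, profit x2.83;
# no single linear factor describes the whole chain.
```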
Target was probably much smarter than an individual human about setting up the procedures and the incentives to have a person there ready to respond quickly and effectively, but that might have happened over months or years.
We must not underestimate slow superintelligences. Our judiciary is also slow, so some of the acts we can take are very slow too.
Humanity could also be overtaken by a slow (and alien) superintelligence.
It does not matter that you would quickly see that things are going the wrong way. You could still slowly lose your rights and your power to act, step by step… (like slowly losing pieces in a chess game)
If strong entities in our world will be (are?) driven by poorly designed goals, for example "maximize profit", then they could really be very dangerous to humanity.
I really don't want to spoil our discussion with politics; rather, I would like to see a rational discussion about all the existential threats which could arise from superintelligent beings/entities.
We must not underestimate any form, nor any method, of our possible doom.
With big data coming, our society is more and more ruled by algorithms. And the algorithms are getting smarter and smarter.
Algorithms are not independent of the entities which have enough money or enough political power to use them.
BTW, Bostrom wrote (sorry, not in a chapter we have discussed yet) about a possible perverse instantiation caused by a goal the programmer did not design well. I am afraid that in our society it will be a manager or a politician who will design (or is designing) the goal. (We have to find a way for a philosopher and a mathematician to be there too.)
In my opinion, the first (if not a singleton) superintelligence will be (or is) most probably a "mixed form": some group of well-organized people (don't forget the lawyers) with a big database and a supercomputer.
The next stages after an intelligence explosion could take any other form.
One child could have two parents (and both could answer), so 598 is a questionable number.
Stuart, is it really your implicit axiom that human values are static and fixed?
(Were they fixed historically? Is humankind mature now? Is humankind homogeneous in its values?)
@Nozick: we are plugged into a machine (the Internet) and into virtual realities (movies, games). Do we think that is wrong? Probably it is a question of the level of connection to reality?
@Häggström: there is a contradiction in the definition of what is better. F1 is better than F because it has more to strive for, and F2 is better than F1 because it has less to strive for.
@CEV: time is only one dimension in the space of conditions which could affect our decisions. Human cultures choose cannibalism in some situations. An SAI could see several possible future decisions depending on the surroundings, and we have to think very carefully about which conditions are acceptable and which are not. Or we could choose what we would choose in some special scene prepared for humanity by the SAI.
This could be a bad mix ->
Our action: 1a) channel manipulation: other sound, other image, other data & taboo for the AI: lying.
This taboo, "structured programming languages", could be impossible, because understanding and analysing structure is probably an integral part of general intelligence.
She could not reprogram herself in a lower-level programming language, but she could emulate and improve herself in her "memory". (She might have no access to her code segment, but she could still create a stronger intelligence in her data segment.)
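A minimal sketch of this code-segment/data-segment point (my construction, not from the book): a program that never touches its own source can still host an interpreted "program" stored as plain data, and rewrite that data into something stronger.

```python
# The fixed 'code segment': a tiny interpreter the AI cannot modify.
def run(program, x):
    for op, value in program:   # program is plain data: (op, value) pairs
        if op == "add":
            x += value
        elif op == "mul":
            x *= value
    return x

# The writable 'data segment': the hosted program.
hosted = [("add", 1), ("add", 1)]    # computes x + 2
print(run(hosted, 5))                 # -> 7

# 'Self-improvement' purely in data: no access to run()'s code needed.
hosted = [("mul", 2), ("add", 2)]    # now computes 2*x + 2
print(run(hosted, 5))                 # -> 12
```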
Is "transcendence" a third possibility? I mean, if we realize that human values are not the best, and we retire and give up control.
(I am not sure whether this is not a motivation selection path; the difference is subtle.)
BTW, if you are thinking about partnership: are you thinking about how to control your partner?
It seems that the unfriendly AI is in a slightly unfavourable position. First, it has to preserve the information content of its utility function or other value representation, in addition to the information content possessed by the friendly AI.
There are two sorts of unsafe AI: one which cares and one which doesn't care.
The ignorant one is fastest: it only calculates the answer and doesn't care about anything else.
Friend and enemy both have to analyse additional things...
I am afraid that we have not precisely defined the term "goal". And I think we need to.
I am trying to analyse this term.
Do you think that today's computers have goals? I don't think so (but probably we have a different understanding of this term). Are they useless? Do cars have goals? Are they without action and reaction?
Probably I could describe my idea more precisely in another way: in Bostrom's book there are goals and subgoals. Goals are ultimate, petrified and strengthened; subgoals are particular, flexible and temporary.
Could we conceive of an AI without goals but with subgoals?
One possibility could be that it has its "goal centre" externalized in a human brain (see the sketch below).
Could we conceive of an AI as a tabula rasa, a pure void at the beginning, right after its creation? Or can an AI not exist without hardwired goals?
If it could be void: would a goal be imprinted with the first task?
Or with the first task containing the word "please"? :)
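A sketch of the externalized "goal centre" idea (my construction; all names here are hypothetical): the machine holds no ultimate goal, only temporary subgoals derived from whatever the human supplies, and when the human supplies nothing it simply stops.

```python
# The machine: derives flexible, temporary subgoals from an external task.
def plan_subgoals(task):
    return [f"gather data for {task!r}",
            f"compute {task!r}",
            f"report {task!r}"]

def agent_loop(ask_goal_centre):
    """No hardwired goal: the 'goal centre' lives outside, e.g. in a human."""
    while True:
        task = ask_goal_centre()     # goal is supplied per request
        if task is None:             # human withdraws the goal -> agent halts
            return
        for subgoal in plan_subgoals(task):
            print("doing:", subgoal)

# Demo: a 'human goal centre' that issues one task, then stops the agent.
tasks = iter(["2+2", None])
agent_loop(lambda: next(tasks))
```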
About the utility maximizer: a human (or animal) brain is not useless just because it does not grow without limit. There is some tradeoff between gain and energy consumption.
We have to (or at least could) think about balanced processes. A one-dimensional, one-directional, unbalanced utility function seems to have doom as its default outcome. But is it the only choice?
How did nature do that? (I am not talking about evolution but about the DNA encoding.)
A balance between "intelligent" neural tissue (the SAI) and "stupid" non-neural tissue (humanity). :)
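A toy model of the gain/energy tradeoff (my own assumed functional forms): if cognitive gain grows logarithmically with size while energy cost grows linearly, net utility peaks at a finite size instead of rewarding unlimited growth; a "balanced process" rather than an unbounded maximizer.

```python
import math

def net_utility(s, a=10.0, c=1.0):
    # assumed forms: a*log(s) = diminishing gain, c*s = metabolic cost
    return a * math.log(s) - c * s

# Setting the derivative a/s - c to zero gives the optimum s = a/c = 10.
for s in (1, 5, 10, 20, 50):
    print(f"size {s:2d} -> net utility {net_utility(s):7.2f}")
# Utility rises up to size 10 and falls afterwards: growth is self-limiting.
```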
Probably we have to see the difference between a purpose and a B-goal (a goal in Bostrom's understanding).
If a machine has to solve an arithmetic equation, it has to solve it, and not destroy seven planets to do it most perfectly.
I have the feeling that if you say "do it", Bostrom's AI hears "do it maximally perfectly".
If you say "tell me how much 2+2 is (and do not destroy anything)", then she will destroy a planet to be sure that nobody can stop her from answering how much 2+2 is.
I have the feeling that Bostrom assumes there is implicitly a void AI at the beginning, and that in the next step there is an AI with an ultimate, unchangeable goal. I am not sure this is plausible. And I think we need a good definition or understanding of "goal" to know whether it is plausible.
Could an AI be without any goals?
Would such an AI be dangerous in the default doom way?
Could we create an AI which won't be a utility maximizer?
Would such an AI need to maximize resources for itself?
Think of the prisoner's dilemma! (See the toy payoff matrix below.)
What would aliens do?
Is a selfish (self-centered) reaction really the best possibility?
What will a superintelligence constructed by aliens do?
(No dispute that human history is brutal and selfish.)
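To make the prisoner's dilemma point concrete, here is the classic textbook payoff matrix (standard numbers, not from this thread): defecting dominates for each player alone, yet mutual cooperation beats mutual defection.

```python
# payoffs[(my_move, their_move)] = (my_payoff, their_payoff)
payoffs = {
    ("C", "C"): (3, 3),   # both cooperate
    ("C", "D"): (0, 5),   # I cooperate, they defect
    ("D", "C"): (5, 0),   # I defect, they cooperate
    ("D", "D"): (1, 1),   # both defect
}

for me in ("C", "D"):
    for them in ("C", "D"):
        mine, theirs = payoffs[(me, them)]
        print(f"me={me} them={them}: I get {mine}, they get {theirs}")
# Whatever the other does, "D" pays me more (5>3 and 1>0), yet (D,D)
# yields 1 each while (C,C) yields 3 each: the selfish reaction is
# individually dominant but collectively worse.
```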
Difficult question. Do you mean also ten times faster to burn out? 10x more time needed to rest? Or, because of the simulation, no rest, just a reboot?
Or a permanent reboot to a drug-boosted level of brain emulation on a ten times quicker substrate? (I am afraid of a drugged society here.)
And I am also afraid that a ten times quicker farmer could not have ten summers per year. :) So economic growth could be limited by some bottlenecks. Probably not much faster.
What about ten times faster philosophical growth?
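The bottleneck intuition can be borrowed from computing as Amdahl's law (my analogy, not from the thread): if some fraction of the economy is pinned to natural clocks (seasons, gestation) and only the rest runs ten times faster, the overall speedup stays far below 10x.

```python
def overall_speedup(serial_fraction, boost):
    # Amdahl's law: the un-speedable fraction dominates as boost grows.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / boost)

for serial in (0.0, 0.2, 0.5, 0.9):
    print(f"natural-clock fraction {serial:.1f} -> "
          f"{overall_speedup(serial, 10.0):.2f}x")
# 0.0 -> 10.00x, 0.2 -> 3.57x, 0.5 -> 1.82x, 0.9 -> 1.10x:
# a ten-times-quicker farmer still waits a whole year for summer.
```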
This probably needs more explanation. You could say that my reaction is not in the appropriate place; that is probably true. A BCI we could define as a physical interconnection between a brain and a computer.
But I think that at this moment we could (and should) also analyse trained "horses" with trained "riders". And also trained "pairs" (or groups?).
A better interface between computer and human could also be achieved along a noninvasive path = a better visual+sound+touch interface. (The horse-human analogy.)
So yes = I expect they could be substantially useful even if a direct physical interface proves too difficult in the next decade(s).
Have you played this type of game?
[pollid:777]
I think that if you played on a big map (freeciv supports really huge ones), then your goals (as in the real world) could be better fulfilled if you play WITH (not against) the AI. For example, managing 5 thousand engineers manually could take several hours per round.
You could meditate on more concepts in this game (for example geometric growth, the metastasis method of spreading a civilisation, etc., and for sure cooperation with some type of AI)…
"Human"-style humor could be a sandbox too :)
I would like to add some values which I see as not so static, and which are probably not so much a question of morality:
Privacy and freedom (vs) security and power.
Family, society, tradition.
Individual equality. (Disparities of wealth, the right to work, …)
Intellectual property. (The right to own?)
Lemma 1: A superintelligence could be slow. (Imagine, for example, an IQ test between Earth and Mars where the delay between question and answer is about half an hour. Or imagine a big, clever tortoise which can understand only one sentence per hour but can then solve the Riemann hypothesis.)
Lemma 2: A human organization could arise quickly. (It is imaginable that billions join an organization within several hours.)
The next theorem is obvious :)
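Spelling out my guess at the implied theorem (my formalization, not Liso's exact words):

```latex
\textbf{Lemma 1.} Slowness does not preclude superintelligence.
\textbf{Lemma 2.} A human organization can assemble within hours.
\textbf{Theorem.} A superintelligence could therefore arise within hours,
in the form of a quickly assembled (and slow-thinking) human organization.
```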