The mere fear that the entire human race will be exterminated in their sleep through some intricate causality we are too dumb to understand will seriously diminish our quality of life.
MattJ
That doesn’t make sense to me. If someone wants to fool me that I’m looking at a tree, he has to paint a tree in every detail. Depending on how closely I examine this tree, he has to match my scrutiny to the finest detail. In the end, his rendering of a tree will be indistinguishable from an actual tree even at the molecular level.
We don’t want an ASI to be ”democratic”. We want it to be ”moral”. Many people in the West conflate the two words, thinking that democratic and moral are the same thing, but they are not. Democracy is a certain system of organizing a state. Morality is how people and (in the future) an ASI behave towards one another.
There are no obvious reasons why an autocratic state would care more or less about a future ASI being immoral, but an argument can be made that autocratic states will be more cautious and put more restrictions on the development of an ASI, because autocrats usually fear any kind of opposition, and an ASI could be a powerful adversary in itself or in the hands of powerful competitors.
The actual peace deal will be something for Ukraine to agree to. It is not up to Trump to dictate the terms. All Trump should do is stop financing the war, and we will have peace.
Having said that, if it is somehow possible for Trump to pressure Ukraine into agreeing to become a US colony, my support for Trump was a mistake. The war would be preferable to the peace.
Good post! We will soon have very powerful quantum computers that probably could simulate what will happen if mirror bacteria are confronted with the human immune system. Maybe there is no risk at all, or maybe it is an existential risk to humanity. Finding out should be a prioritized task for our first powerful quantum computer.
Because he says so.
I’m not allowed to vote in the election but I hope Trump wins because I think he will negotiate a peace in Ukraine. If Harris wins I think the war will drag on for another couple of years at worst.
I have no problem getting pushback.
I guess it could be a great tool to help people quickly learn to converse in a foreign language.
The ”eternal recurrence” is surprisingly the most attractive picture of the ”afterlife”. The alternatives, the annihilation of the self or eternal life in heaven, are both unattractive, for different reasons. Add to this that Nietzsche is right to say that the eternal recurrence is a view of the world which seems compatible with a scientific cosmology.
I remember I came up with a similar thought experiment to explain the Categorical Imperative.
Assume there is only one Self-Driving Car on the market, what principle would you want it to follow?
The first principle we think of is: ”Always do what the driver would want you to do”.
This would certainly be the principle we would want if our SDC were the only car on the road. But there are other SDCs, and so in a way we are choosing a principle for our own car which is at the same time a ”universal law”, valid for every car on the road.
With this in mind, it is easy to show that the principle we could rationally want is: ”Always act on that principle which the driver can rationally will to become a universal law”.
Coincidentally, this is also Kant’s Categorical Imperative.
Yes, but the claim that ”generative AI can potentially replace millions of jobs” is not contradictory to the statement that it eventually ”may turn out to be a dud”.
I initially reacted in the same way as you to the exact same passage but came to the conclusion that it was not illogical. Maybe I’m wrong but I don’t think so.
I think the author meant that there was a perception that it could replace millions of jobs, and so an incentive for businesses to press forward with their implementation plans, but that this would eventually backfire if the hallucination problem is insoluble.
I am a Kantian and believe that those a priori rules have already been discovered.
But my point here was merely that you can isolate the part that belongs to pure ethics from everything empirical, like in my example what a library is, why people go to libraries, what a microphone is and what its purpose is, and so on. What makes an action right or wrong at the most fundamental level, however, is independent of everything empirical and simply an a priori rule.
I guess my broader point was also that Stephen Wolfram is far too pessimistic about the prospects of making a moral AI. A future AI may soon have a greater understanding of the world and the people in it, and so all we have to do is provide the right a priori rule and we will be fine.
Of course, the technical issue still remains of how we make the AI stick to that rule, but that is not an ethical problem but an engineering one.
You can’t isolate individual ”atoms” in ethics, according to Wolfram. Let’s put that to the test. Tell me if the following ”ethical atoms” are right or wrong:
1. I will speak in a loud voice
2. …on a Monday
3. …in a public library
4. …where I’ve been invited to speak about my new book and I don’t have a microphone.
Now, (1) seems morally permissible, and (2) doesn’t change the evaluation. (3) does make my action seem morally impermissible, but (4) turns it around again. I’m convinced all of this was very simple to everyone.
Ethics is the science of the a priori rules that make these judgments so easy for us, or at least that was Kant’s view, which I share. It should be possible to make an AI do this calculation even faster than we do, and all we have to do is provide the AI with the right a priori rules. When that is done, the rest is just empirical knowledge about libraries and human beings, and we will eventually have a moral AI.
I think that was the point. Comedians of the future will be performers. They will not write their own jokes but be increasingly good at reading the lines written by AI.
When ChatGPT came out I asked it to write a Seinfeld episode about taking an IQ test. In my judgment it was just as good and funny as every other Seinfeld episode I’ve watched. Mildly amusing. Entertaining enough not to stop reading.
This answer is a little bit confusing to me. You say that ”agency” may be an important concept even if we don’t have a deep understanding of what it entails. But how about a simple understanding?
I thought that when people spoke about ”agency” and AI, they meant something like ”a capacity to set their own final goals”, but then you claim that Stockfish could best be understood by using the concept of ”agency”. I don’t see how.
I myself kind of agree with the sentiment in the original post that ”agency” is a superfluous concept, but I want to understand the opposite view.
You seem to hold the position that:
1. Scientists, and not philosophers, should do meta-ethics and normative ethics, until
2. AGIs can do it better, at which point we should leave it to them.
I don’t believe that scientists have either the inclination or the competence to do what you ask of them, and secondly I think that letting AGIs decide right and wrong would be a nightmare scenario for the human race.
”Whom the gods would destroy, they first make mad.”
One way to kill everyone would be to turn the planet into a Peoples Temple and have everyone drink the Kool-Aid.
I think even those of us who think of ourselves as psychologically robust will be defenseless against the manipulations of a future GPT-9.
I thought that was what was meant. The question is probably the easiest one to answer affirmatively with a high degree of confidence. I can think of several ongoing ”moral catastrophes”.
Yes.