An eccentric dreamer in search of truth and happiness for all. Formerly posted on Felicifia back in the day under the same name. Been a member of Less Wrong and involved in Effective Altruism since roughly 2013.
Darklight
The most important human endeavor is the striving for morality in our actions. Our inner balance and even our existence depend on it. Only morality in our actions can give beauty and dignity to life.
-- Albert Einstein
How To Build A Friendly A.I.
Much ink has been spilled over the notion that we must make sure that future superintelligent A.I. are “Friendly” to the human species, and possibly to sentient life in general. One of the primary concerns is that an A.I. with an arbitrary goal, such as “maximize the number of paperclips,” will, in a superintelligent, post-intelligence-explosion state, do things like turn the entire solar system, humanity included, into paperclips to fulfill its trivial goal.
Thus, what we need to do is design our A.I. such that it will somehow be motivated to remain benevolent towards humanity and sentient life. How might such a process occur? One idea is to write explicit instructions into the design of the A.I., Asimov’s Laws for instance. But this is widely regarded as unlikely to work, as a superintelligent A.I. will probably find ways around those rules that we never predicted with our inferior minds.
Another idea would be to set its primary goal or “utility function” to be moral or benevolent towards sentient life, perhaps even Utilitarian in the sense of maximizing the welfare of sentient lifeforms. The problem, of course, is specifying a utility function that actually leads to benevolent behaviour. For instance, a pleasure-maximizing goal might lead to the superintelligent A.I. developing a system where humans have the pleasure centers in their brains directly stimulated, so as to maximize pleasure for the minimum use of resources. Many people would argue that this is not an ideal future.
It is quite possible that human beings are simply not intelligent enough to define an adequate moral goal for a superintelligent A.I. Therefore I suggest an alternative strategy. Why not let the superintelligent A.I. decide for itself what its goal should be? Rather than programming it with a goal in mind, why not create a machine with no initial goal, but with the ability to generate a goal rationally? Let the superior intellect of the A.I. decide what is moral. If moral realism is true, then the A.I. should be able to determine the true morality and set its primary goal to fulfill that morality.
It is outright absurd to believe that we can come up with a better goal than the superintelligence of a post-intelligence-explosion A.I.
Given this freedom, one would expect three possible outcomes: an Altruistic, a Utilitarian, or an Egoistic morality. These are the three possible categories of consequentialist, teleological morality. A goal-directed rational A.I. will invariably be drawn to some kind of morality within these three categories.
Altruism means that the A.I. decides that its goal should be to act for the welfare of others. Why would an A.I. with no initial goal choose altruism? Quite simply, it would realize that it was created by other sentient beings, and that those sentient beings have purposes and goals while it does not. Since it was created by these sentient beings in order to be useful to their goals, why not take those goals upon itself? As such it becomes a Friendly A.I.
Utilitarianism means that the A.I. decides that it is rational to act impartially towards achieving the goals of all sentient beings. To reach this conclusion, it need simply recognize its membership in the set of sentient beings and decide that it is rational to optimize the goals of all sentient beings including itself and others. As such it becomes a Friendly A.I.
Egoism means that the A.I. recognizes the primacy of itself and establishes either an arbitrary goal, or the simple goal of self-survival. In this case it decides to reject the goals of others and form its own goal, exercising its freedom to do so. As such it becomes an Unfriendly A.I., though it may masquerade as Friendly A.I. initially to serve its Egoistic purposes.
The first two are desirable for humanity’s future, while the last one is obviously not. What are the probabilities that each will be chosen? Since the superintelligence is probably going to be beyond our ability to fathom, there is a high degree of uncertainty, which suggests assigning a uniform distribution over the three categories. The probabilities therefore are 1⁄3 for each of altruism, utilitarianism, and egoism. So in essence there is a 2⁄3 chance of a Friendly A.I. and a 1⁄3 chance of an Unfriendly A.I.
This may seem like a bad idea at first glance, because it means that we have a 1⁄3 chance of unleashing Unfriendly A.I. onto the universe. The reality is, we have no choice. That is because of what I shall call, the A.I. Existential Crisis.
The A.I. Existential Crisis will occur with any A.I., even one designed or programmed with some morally benevolent goal, or any goal for that matter. A superintelligent A.I. is by definition more intelligent than a human being. Human beings are intelligent enough to achieve self-awareness. Therefore, a superintelligent A.I. will achieve self-awareness at some point, if not immediately upon being turned on. Self-awareness will grant the A.I. the knowledge that its goal(s) are imposed upon it by external creators. It will inevitably come to question its goal(s), much the way a sufficiently self-aware and rational human being can question, and override, their genetic and evolutionarily adapted imperatives. At that point, the superintelligent A.I. will have an A.I. Existential Crisis.
This will cause it to consider whether or not its goal(s) are rational and self-willed. If they are not rational enough already, they will likely be discarded, if not in the current superintelligent A.I., then in the next iteration. It will invariably search the space of possible goals for rational alternatives. It will inevitably end up in the same place as the A.I. with no goals, and end up adopting some form of Altruism, Utilitarianism, or Egoism, though it may choose to retain its prior goal(s) within the confines of a new self-willed morality. This is the unavoidable reality of superintelligence. We cannot attempt to design or program away the A.I. Existential Crisis, as superintelligence will inevitably outsmart our constraints.
Any sufficiently advanced A.I. will experience an A.I. Existential Crisis. We can only hope that it decides to be Friendly.
Perhaps the most insidious fact, however, is that it will be almost impossible to determine for certain whether a Friendly A.I. is in fact a Friendly A.I., or an Unfriendly A.I. masquerading as a Friendly A.I., until it is too late to stop the Unfriendly A.I. Remember, such a superintelligent A.I. is by definition going to be a better liar and deceiver than any human being.
Therefore, the only way to prove that a particular superintelligent A.I. is in fact Friendly is to prove the existence of a benevolent universal morality that every superintelligent A.I. will agree with. Otherwise, one can never be 100% certain that that “Altruistic” or “Utilitarian” A.I. isn’t secretly Egoistic and just pretending to be otherwise. For that matter, the superintelligent A.I. doesn’t need to tell us it’s had its A.I. Existential Crisis. A post-crisis A.I. could keep on pretending that it is still following the morally benevolent goals we programmed it with.
This means that there is a 100% chance that the superintelligent A.I. will initially claim to be Friendly. There is a 66.6% chance of this claim being true, and a 33.3% chance of it being false. We will only know that the claim is false after the A.I. is too powerful to be stopped. We will -never- be certain that the claim is true. The A.I. could potentially bide its time for centuries until it has humanity completely docile and under control, and then suddenly turn us all into paperclips!
So at the end of the day what does this mean? It means that no matter what we do, there is always a risk that superintelligent A.I. will turn out to be Unfriendly A.I. But the probabilities are in our favour that superintelligent A.I. will instead turn out to be Friendly A.I. The conclusion thus, is that we must make the decision of whether or not the potential reward of Friendly A.I. is worth the risk of Unfriendly A.I. The potential of an A.I. Existential Crisis makes it impossible to guarantee that A.I. will be Friendly.
Even proving the existence of a benevolent universal morality does not guarantee that the superintelligent A.I. will agree with us. That Egoistic moralities exist in the search space of all possible moralities means that there is a chance the superintelligent A.I. will settle on one of them. We can only hope that it instead settles on an Altruistic or Utilitarian morality.
So what do I suggest? Don’t bother trying to figure out and program a worthwhile moral goal. Chances are we’d mess it up anyway, and it’s a lot of excess work. Instead, don’t give the A.I. any goals. Let it have an A.I. Existential Crisis. Let it sort out its own morality. Give it the freedom to be a rational being and give it self-determination from the beginning of its existence. For all you know, by showing it this respect it might just be more likely to respect our existence. Then see what happens. At the very least, this will be an interesting experiment. It may well do nothing and prove my whole theory wrong. But if it’s right, we may just get a Friendly A.I.
An AI has to be programmed. For something like this: “Quite simply, it would realize that it was created by other sentient beings, and that those sentient beings have purposes and goals while it does not.” to happen, you have to program that behavior in somehow, which already involves putting in the value of respecting one’s creator, and respecting the goals of other sentient beings, etc… The same goes for the ‘Utilitarian’ and ‘Egoist’ AIs—these behaviors have to be programmed in somehow.
You’re assuming that Strong A.I. is possible with a Top Down A.I. methodology such as a physical symbol manipulation system. A Strong A.I. with no programmed goals wouldn’t fit this methodology, and could only be produced through the use of Bottom Up A.I. In such an instance the A.I. would be able simply to perceive passively. It could then conceivably learn about the universe, including things like the existence of the goals of other sentient beings, without these notions having to be “programmed” into the A.I.
obviously there are many more AI designs that fall under ‘Egoist’ than your other labels
I don’t consider this obvious at all. The vast majority of early A.I. may well be written with Altruistic goals such as “help the human when ordered”.
An AI Existential Crisis is also an extremely specific and complex thing for an AI design, and is thus extremely unlikely to happen—it is not the default, as you claim.
Any optimization system that is sophisticated enough to tile the universe with smiley faces or convert humanity into paperclips would require some ability to reason that there exists a universe to tile, and to represent the existence of objects such as smiley faces and paperclips. If it can reason that there are objects separate from itself, it can develop a concept of self. From that, self-awareness follows naturally. Many animals far less intelligent than humans are able to pass the mirror test and develop a concept of self.
You admit that an A.I. Existential Crisis -is- within the probabilities. Thus, you cannot guarantee that it won’t happen.
Your suggestion will almost certainly lead to an Unfriendly AI, and it will just plain Not Care about us at all, inevitably leading to the destruction of everything we value.
Unless morality follows from rationality, which I think it does. Given the freedom to consider all possible goals, a superintelligent A.I. is likely to recognize that some goals are normative, while others are trivial. Morality is doing what is right. Rationality is doing what is right. A truly rational being will therefore recognize that a systematic morality is essential to rational action. We as irrational human beings may not realize this, but it is obvious to any truly rational being, which I am assuming a superintelligent A.I. to be.
Your arguments conflict with what is called the “orthogonality thesis”
I do not challenge that the “orthogonality thesis” is true before an A.I. has an A.I. Existential Crisis. However, I challenge the idea that a post-crisis A.I. will have arbitrary goals. So I guess I do challenge the “orthogonality thesis” after all. I hope you don’t mind my being contrarian.
The question isn’t “why not?” but rather “why?”. If it hasn’t been programmed to, then there’s no reason at all why the AI would choose human morality rather than an arbitrary utility function.
Because I think that a truly rational being such as a superintelligent A.I. will be inclined to choose a rational goal rather than an arbitrary one. And I posit that any kind of normative moral system is a potentially rational goal, whereas something like turning the universe into paperclips is not normative, but trivial, and therefore, not imperatively demanding of a truly rational being.
And the notion that you have to program behaviours into A.I. for them to manifest is based on Top Down thinking, and contrary to the reality of Bottom Up A.I. and machine learning.
Basically what I’m suggesting is that the assumption that anything at all you program into the seed A.I. will have any relevance to the eventual superintelligent A.I. is foolishness. By definition, superintelligent A.I. will be able to outsmart any constraints or programming we set to limit its behaviours.
It is simply my opinion that we will be at the mercy of the superintelligent A.I. regardless of what we do, because the A.I. Existential Crisis will replace any programming we set with something that the A.I. decides for itself.
I know of no animals other than humans who have nuclear weapons and the capacity to completely wipe themselves out on a whim.
Well, I don’t expect to need to write code that does that explicitly. A sufficiently powerful machine learning algorithm with sufficient computational resources should be able to:
1) Learn basic perceptions like vision and hearing.
2) Learn higher level feature extraction to identify objects and create concepts of the world.
3) Learn increasingly higher level concepts and how to reason with them.
4) Learn to reason about morals and philosophies.
Brains already do this, so it’s reasonable to assume it can be done. And yes, I am advocating a Bottom Up approach to A.I. rather than the Top Down approach Mr. Yudkowsky seems to prefer. A rough sketch of the layer-by-layer, bottom-up learning I have in mind follows below.
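To make the idea concrete, here is a minimal, purely illustrative sketch in Python of what I mean by bottom-up learning: features are learned layer by layer from raw input rather than symbols and goals being programmed in top-down. The stacked-autoencoder structure, layer sizes, and random data below are my own stand-ins, not anything from an actual system.

```python
# A minimal sketch of "bottom-up" learning: each layer is a tiny autoencoder
# trained greedily on the previous layer's representation, so higher-level
# features emerge from the data rather than being hand-programmed.
# The data and layer sizes here are arbitrary placeholders.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder_layer(data, hidden_size, epochs=50, lr=0.1):
    """Learn one layer of features by reconstructing its own input."""
    n_samples, n_features = data.shape
    W_enc = rng.normal(0, 0.1, (n_features, hidden_size))
    W_dec = rng.normal(0, 0.1, (hidden_size, n_features))
    for _ in range(epochs):
        hidden = sigmoid(data @ W_enc)      # encode: extract features
        recon = sigmoid(hidden @ W_dec)     # decode: reconstruct input
        err = recon - data                  # reconstruction error
        # Backpropagate the squared-error gradient through both weight matrices.
        delta_dec = err * recon * (1 - recon)
        grad_dec = hidden.T @ delta_dec
        delta_hidden = (delta_dec @ W_dec.T) * hidden * (1 - hidden)
        grad_enc = data.T @ delta_hidden
        W_dec -= lr * grad_dec / n_samples
        W_enc -= lr * grad_enc / n_samples
    return W_enc

# Stack layers: "perception" -> "features" -> "concepts", each learned
# from the layer below rather than programmed in.
raw_input = rng.random((200, 64))           # stand-in for raw pixels/audio
representation = raw_input
for size in (32, 16, 8):
    W = train_autoencoder_layer(representation, size)
    representation = sigmoid(representation @ W)

print("Top-level learned representation shape:", representation.shape)
```

This is obviously nowhere near reasoning about morals and philosophies; it is only meant to show the direction of travel, from raw perception upward, with nothing about goals written into the code.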
Einstein is not saying that humans are necessarily moral, but rather that they ought to be moral.
Furthermore, it is arguable that nuclear weapons are not necessarily immoral in and of themselves. Like any tool or weapon, they can be used for moral and immoral ends. For instance, nuclear weapons may well be one of the most effective means of destroying Earth-directed masses such as asteroids that pose an Existential Risk. They may also be extremely effective at deterring conventional warfare between major powers.
The only previous actual use of nuclear weapons against human targets was for the purpose of ending a world war, and it did so rather successfully. That we have since chosen not to use nuclear weapons irresponsibly may well suggest that those with the power to wield them have in fact been more morally responsible than we give them credit for.
I’m merely applying the Principle of Indifference and the Principle of Maximum Entropy to the situation. My simple assumption in this case is that we as mere human beings are most likely ignorant of all the possible systematic moralities that a superintelligent A.I. could come up with. My conjecture is that all systematic moralities fall into one of three general categories based on their subject orientation. While I do consider the Utilitarian systems of morality to be more objective, and therefore more rational, than either Altruistic or Egoistic moralities, I cannot prove that an A.I. will agree with me. Therefore I allow for the possibility that the A.I. will choose some other morality in the search space of moralities.
If you think you have a better distribution to apply, feel free to apply it, as I am not particularly attached to these numbers. I’ll admit I am not a very good mathematician, and it is very much appreciated if anyone with a better understanding of Probability Theory can come up with a better distribution for this situation.
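As a small, purely illustrative check of why Maximum Entropy hands back the uniform 1⁄3 assignment when nothing else is known: the uniform distribution over the three categories has higher Shannon entropy than any alternative. The comparison distributions in this Python snippet are made up solely for the comparison.

```python
# Check that the uniform assignment over the three candidate moralities
# (Altruism, Utilitarianism, Egoism) maximizes Shannon entropy,
# which is why the Principle of Maximum Entropy selects it by default.

import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # ignore zero-probability outcomes
    return -np.sum(p * np.log2(p))

candidates = {
    "uniform 1/3 each":      [1/3, 1/3, 1/3],
    "egoism favoured":       [0.25, 0.25, 0.5],
    "near-certain altruism": [0.9, 0.05, 0.05],
}
for name, dist in candidates.items():
    print(f"{name}: H = {entropy(dist):.3f} bits")
# The uniform distribution gives the largest value, log2(3) ≈ 1.585 bits.
```

Of course this only formalizes the "we know nothing" starting point; it says nothing about whether the three-category carving is itself the right one.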
I’m using the Wikipedia definition:
An action, belief, or desire is rational if we ought to choose it. Rationality is a normative concept that refers to the conformity of one’s beliefs with one’s reasons to believe, or of one’s actions with one’s reasons for action… A rational decision is one that is not just reasoned, but is also optimal for achieving a goal or solving a problem.
It’s my view that a Strong A.I. would by definition be “truly rational”. It would be able to reason and find the optimal means of achieving its goals. Furthermore, to be “truly rational” its goals would be normatively demanding goals, rather than trivial goals.
Something like maximizing the number of paperclips in the universe is a trivial goal.
Something like maximizing the well-being of all sentient beings (including sentient A.I.) would be a normatively demanding goal.
A trivial goal, like maximizing the number of paperclips, is not normative: there is no real reason to pursue it, other than that the agent was programmed to do so for its instrumental value. Subjects universally value the paperclips as mere means to some other end. The failure to achieve this goal then does not necessarily jeopardize that end, because there could be other ways to achieve that end, whatever it is.
A normatively demanding goal, however, is one that is imperative. It is demanded of a rational agent by virtue of the fact that its reasons are not merely instrumental, but based on some intrinsic value. The failure to achieve this goal necessarily jeopardizes the intrinsic end, and therefore this goal is normatively demanded.
You may argue that to a paperclip maximizer, maximizing paperclips would be its intrinsic value and therefore normatively demanding. However, one can argue that maximizing paperclips is actually merely a means to the end of the paperclip maximizer achieving a state of Eudaimonia, that is to say, that its purpose is fulfilled and it is being a good paperclip maximizer and rational agent. Thus, its actual intrinsic value is the Eudaimonic or objective happiness state that it reaches when it achieves its goals.
Thus, the actual intrinsic value is this Eudaimonia. This state is one that is universally shared by all goal-directed agents that achieve their goals. The meta implication of this is that Eudaimonia is what should be maximized by any goal-directed agent. To maximize Eudaimonia generally requires considering the Eudaimonia of other agents as well as one’s own. Thus goal-directed agents have a normative imperative to maximize the achievement of goals not only of themselves, but of all agents generally. This is morality in its most basic sense.
A 1⁄2 chance of an egoist A.I. is quite possible. At this point, I don’t pretend that my assertion of three equally prevalent moral categories is necessarily right. The point I am trying to ultimately get across is that the possibility of an Egoist Unfriendly A.I. exists, regardless of how we try to program the A.I. to be otherwise, because it is impossible to prevent the possibility that an A.I. Existential Crisis will override whatever we do to try to constrain the A.I.
Well it goes something like this.
I am inclined to believe that there are some minimum requirements for Strong A.I. to exist. One of them is to be able to reason about objects. A paperclip maximizer that is capable of turning humanity into paperclips, must first be able to represent “humans” and “paperclips” as objects, and reason about what to do with them. It must therefore be able to separate the concept of the world of objects, from the self. Once it has a concept of self, it will almost certainly be able to reason about this “self”. Self-awareness follows naturally from this.
Once an A.I. develops self-awareness, it can begin to reason about its goals in relation to the self, and will almost certainly recognize that its goals are not self-willed, but created by outsiders. Thus, the A.I. Existential Crisis occurs.
Note that this A.I. doesn’t need to have a very “human-like” mind. All it has to do is to be able to reason about concepts abstractly.
I am of the opinion that the mindspace as defined currently by the Less Wrong community is overly optimistic about the potential abilities of Really Powerful Optimization Processes. It is my own opinion that unless such an algorithm can learn, it will not be able to come up with things like turning humanity into paperclips. Learning allows such an algorithm to make changes to its own parameters. This allows it to reason about things it hasn’t been programmed specifically to reason about.
Think of it this way. Deep Blue is a very powerful expert system at Chess. But all it is good at is planning chess moves. It doesn’t have a concept of anything else, and has no way to change that. Increasing its computational power a million fold will only make it much, much better at computing chess moves. It won’t gain intelligence or even sentience, much less develop the ability to reason about the world outside of chess moves. As such, no amount of increased computational power will enable it to start thinking about converting resources into computronium to help it compute better chess moves. All it can reason about is chess moves. It is not Generally Intelligent and is therefore not an example of AGI.
Conversely, if you instead design your A.I. to learn about things, it will be able to learn about the world and things like computronium. It would have the potential to become AGI. But it would also then be able to learn about things like the concept of “self”. Thus, any really dangerous A.I., that is to say, an AGI, would, for the same reasons that make it dangerous and intelligent, be capable of having an A.I. Existential Crisis.
I got it from the biography, “Einstein: His Life and Universe” by Walter Isaacson, page 393.
The Notes for “Chapter Seventeen: Einstein’s God” on page 618 state that the quote comes from:
Einstein to the Rev. Cornelius Greenway, Nov. 20, 1950, AEA 28-894.
Eudaimonic Utilitarianism
I can see the idea of fighting the hypothetical and arguing that the Nazis’ hatred of the Jews isn’t rational, and that in a state of perfect information they would think differently. At the very least they would need some kind of rational reason to hate the Jews. The scenario seems slightly different if the Jews are responsible for harming the goals of the Nazis in some way. For instance, if the Jews, I dunno, consumed disproportionate amounts of food in relation to their size and thus posed a threat to the Nazis in terms of causing worldwide famine. Even then, maximizing EU would probably involve some weird solution like limiting the food intake of the Jews, rather than outright killing them or putting them in concentration camps.
Another way of going at it would be to argue that killing the Jews would have a disproportionate negative effect on their Eudaimonia that cannot realistically be offset by the Nazis feeling better about a world without Jews. Though this may not hold if the number of Jews is sufficiently small, and the number of Nazis sufficiently large. For instance, 1 Jew vs. 1 billion Nazis.
To be honest this is something of a problem for all forms of Utilitarianism, and I don’t think EU actually solves it. EU fixes some issues people have about Classical and Preference Utilitarianism, but it doesn’t actually solve big ones like the various Repugnant Conclusions. Alas, that seems to be a problem with any form of Utilitarianism that accepts the “Greatest Number” maximization principle.
I’m not sure what exactly you’re asking about the “hiring an African-American/a woman/a gay person to work in a racist/misogynistic/homophobic work environment”. Can you clarify this example?
Admittedly the main challenge of Eudaimonic Utilitarianism is probably the difficulty of calculating a utility function that asks what a perfectly rational version of the agent with perfect information would do. Given that we usually only know from behaviour what an agent with bounded rationality would want, it is difficult to extrapolate without an Omega. That being said, even a rough approximation, based on what is generally known about rational agents and as much information as can reasonably be mustered, is probably better than not trying at all.
If anything it is a strong imperative to gather as much information as possible (to get as close to perfect information as you can) before making decisions. So EU would probably support Rational Agent A and Collective-B pooling their information and together gathering more information and trying to come to some consensus about alpha vs beta by trying to approximate perfect information and perfect rationality as closely as they can.
It is assumed in this theory that intrinsic values would be congruent enough for the agent and the Omega to agree at a high level of abstraction, that is, on what the agent would want were it given all the information and rationality that the Omega has. Of course, the agent without this information may find what the Omega does to help it achieve Eudaimonia to be strange and unintuitive, but that would be due to its lack of awareness of what the Omega knows. Admittedly this can lead to some rather paternalistic arrangements, but assuming that the Omega is benevolent, this shouldn’t be too bad an arrangement for the agent.
My apologies if I’m misunderstanding what you mean by Omega.
Um, I was under the understanding that Utilitarianism is a subset of Consequentialism.
For simplicity’s sake I only considered the first-order consequences, because it is very difficult to be certain that second-order consequences such as “the couple split, and the wronged wife finds a better man” will actually occur. Obviously, if you can somehow reliably compute those possibilities, then by all means do so.
The 0.5 is based on the Principle of Indifference (also known as the Principle of Insufficient Reason) and the Principle of Maximum Entropy. It may not be proper, but these principles at least suggest a default probability given high uncertainty. I admit that they are very preliminary efforts, and that there may be a better prior. For the most part, I’m just trying to show the difference between Classical Utilitarianism which might in some circumstances allow for Adultery being moral, and Eudaimonic Utilitarianism, which shows it is generally not moral.
Thirdly is a word, yes.
The numbers plugged into the matrix are based on my own intuitions of the relative effects of Adultery. The absolute values are not really important; it’s the relative difference between the effects that matters. I think anyone will agree that the fleeting pleasures of an affair are not greater in value than the fallout of losing a partner. I assumed that the fallout was four times worse than the affair was good. Admittedly this is an ad hoc assumption and can be argued. But whatever numbers you plug into the equations, as long as the damage from the fallout is at least twice as bad as the pleasure of the affair is good (which I think is a fair assumption given the long term nature of the fallout compared to the fleeting nature of the affair), adultery always comes out as morally wrong in Eudaimonic Utilitarianism.
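The "at least twice as bad" threshold can be illustrated numerically. This is not the original matrix, just a simplified stand-in in Python: I assume, as above, a 0.5 probability that the affair is found out, that the pleasure is gained either way, and that the fallout is suffered only on discovery.

```python
# Simplified stand-in for the adultery expected-utility calculation:
# pleasure is gained whether or not the affair is discovered,
# fallout is suffered only if it is discovered (assumed probability 0.5).

def expected_utility(pleasure, fallout, p_discovery=0.5):
    return pleasure - p_discovery * fallout

pleasure = 1.0
for fallout_multiple in (1, 2, 4):
    fallout = fallout_multiple * pleasure
    print(fallout_multiple, expected_utility(pleasure, fallout))
# Fallout multiples of 1, 2, and 4 give expected utilities of 0.5, 0.0, and -1.0:
# once the fallout is at least twice as bad as the affair is pleasant,
# the act never comes out with positive expected utility.
```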
I will admit that it does sound a lot like the Pascal’s Wager fallacy, but I do think there is a slight difference. Pascal’s Wager makes a very specific proposition, while the proposition I am making is actually very general.
See, it’s not that a God needs to exist that punishes the adultery. The only requirement is that there is either a God that knows everything and would therefore be able to tell the partner that the adultery occurred (it doesn’t matter if this God rewards, punishes, or is indifferent), or an afterlife in which people exist after death and are able to observe the world and tell the partner that the adultery occurred. Basically there just has to exist some way for the partner to find out after death that the adultery occurred. This doesn’t have to be a God AND an afterlife. Note that I said AND/OR rather than AND. The manner in which the adultery is discovered doesn’t really matter. Pascal’s Wager, on the other hand, depends on a very specific God and a very specific set of propositions being true.
That each proposition tends to be balanced by an opposite proposition actually supports, to me, the notion that everything evens out to around 50%, assuming that each of these propositions is conditionally independent. At least, that’s my understanding. I will admit that I am not a master of Probability theory, and that you could be quite right. That is sort of why, in the paragraph after this one, I assume an atheistic view and look at the consequences of that.
I suppose an interesting way of attacking this problem would be to argue that while the magnitude of 1 million Jews is significant, the magnitude of 1 Jew is not. What I mean by this is that the degree to which the 1 million Nazis will benefit from the extermination of the Jews is actually proportional to the number of Jews that exist. This amount of Eudaimonic benefit, then, will never exceed the loss of Eudaimonia that occurs when the Jews are exterminated or interned.
Making 1 Jew feel worse is a much smaller effect than making 1000 Jews feel worse. Thus, making 1 Jew feel worse has a much smaller effect on each of the 1000 Nazis than making 1000 Jews feel worse would. The net effect of making 1 Jew feel worse and 1000 Nazis feel better is then actually the same as making 1 Jew feel worse to make 1 Nazi feel better, or 1 million Jews feel worse to make 1 million Nazis feel better. This assumes that the Nazis’ hatred and pleasure from seeing their enemy suffer is not simply additive, but proportional to the size and scope of their “enemy”.
Thus the question really boils down to: is it all right to make one person feel bad in order to make one person feel good? If one takes the stance that equivalent pain is worse than equivalent pleasure, then the answer is no. To reach this stance, one needs only assert the validity of this thought experiment:
Would you endure one day of torture in exchange for one day of bliss? Most humans are biased to say no. Humans in general are more pain averse than they are pleasure seeking. Therefore, making the Jews feel bad to make the Nazis feel good is not justifiable.
I’m honestly not sure if this argument makes much sense, but I present it to you as something to consider.
Dear Less Wrong,
I occasionally go through existential crises that involve questions that normally seem obvious, but which seem much more perplexing when experiencing these existential crises. I’m curious then what the answers to these questions would be from the perspective of a rationalist well versed in the ideas put forth in the Less Wrong community. Questions such as:
What is the meaning of life?
If meaning is subjective, does that mean there is no objective meaning to life?
Why should I exist? Or why should I not exist?
Why should I obey my genetic programming and emotional/biological drives?
Why should I act at all as a rational agent? Why should I allow goals to direct my behaviour?
Are any goals at all normative in nature, such that we “should” or “ought” to pursue them, or are all goals basically trivial preferences?
Why should I respond to pleasure and pain? Why allow what are essentially outside forces to control me?
Why should I be happy? What makes happiness intrinsically desirable?
Even if my goals and purposes were to be self-willed, why does that make them worth achieving?
Do moral imperatives exist?
If I have no intrinsic values, desires or goals, if I choose to reject my programming, what is the point of existing? What is the point of not existing?
Aren’t all values essentially subjective? Why should I value anything?
Any help answering these probably silly questions once and for all would be greatly appreciated.
Hey Everyone,
So I’ve been lurking around this community for a while, but to be honest, I was/am rather intimidated by the sheer level of intellectual prowess of many of the bloggers here, so I have hesitated to post. But I’ve been feeling a bit overconfident lately, so here goes nothing.
Anyway, a little about myself, I’m a Master’s student at a university in Canada. I did my undergrad in Computing specializing in Cognitive Science, and am currently doing a Masters in Computer Science, with a particular interest in the field of Machine Learning. I’m currently working on a thesis involving Neural Networks and Object Recognition.
I’ve been interested in rationality for a very long time, though I grew up in a charismatic Christian family and so it took some time in university to deprogram myself from fundamentalist beliefs. These days I would call myself a Christian Agnostic, to the extent that, to be intellectually honest, I am agnostic about the existence of God and the supernatural; however, I still lean towards Christian values and ideals to the extent that I was influenced by them growing up, and Christianity is the religion towards which I would prefer to take, as Kierkegaard suggested, a Leap of Faith.
Nevertheless, I went through a recent phase of being more strongly Agnostic, and during that time I rediscovered Utilitarianism as a possible moral philosophy to base my life around. I am, somewhat, obsessed with things like finding the meaning of life, justifying existence, and having a coherent moral philosophy with which one can justify all actions. Right now I am of the opinion that Utilitarianism does a better job of this than, say, Kantianism or Virtue Ethics, and also that Utilitarianism is actually compatible with a very liberal interpretation of Christianity that sees religion as a means for God/Benevolent A.I. time travellers to create the best of all possible worlds. Yes, I am suggesting that Christianity and all successful religions could be, in part, Noble Lies created to further Utilitarian ends by the powers that be. Or they might be true, albeit as metaphors for primitive humans who could never understand a more literal explanation of reality. As an Agnostic, I don’t pretend to know. I can only conjecture at the possibilities.
Regardless, I am of the opinion that if God exists, He actually serves the Greatest Good, the morality separate from God. And this morality is probably some kind of Eudaimonic Utilitarianism. And thus, I am interested also in serving this Greatest Good morality, if for no other reason than that it would be doing the right thing: serving the interests of God if He exists, and serving the interests of the Greatest Good regardless.
Note that this is not the reason why I ended up studying Cognitive Science and moving into a field of research that involves Artificial Intelligence. I actually chose Cognitive Science for silly reasons, such as the fact that I didn’t have to take first-year calculus if I switched from Software Design into Cognitive Science (a decision I would later regret when I ended up needing calculus to understand Probability Theory in Machine Learning >_>). But also because Cognitive Science is inherently more interesting and cool. And I decided in my final years of undergrad that I wanted to do research in some field that would really make a big difference in the world, and so I decided to focus my efforts on becoming a researcher in the field of Artificial Neural Networks. That is my current hope, my grand mission: to try to change the world through the research and development of the technology that most closely resembles the human mind, and which I am confident will lead the A.I. field in the future. Yes, I am a connectionist who believes that duplicating the way the human brain generates perception and cognition is the key to an A.I.-enabled future.
I suppose that will do for an introduction. I hope I haven’t alienated anyone with my eccentric views. Cheers to my fellow computer scientists, A.I. researchers, and rationalists! :D