Independent AGI system builder. I came here as part of my search to find out what other AGI system builders are doing to provide moral control of their AGI systems, but this is clearly not the right place for that. I will continue my search for intelligent life on Earth elsewhere.
David Cooper
Computational Morality (Part 1) - a Proposed Solution
I’ve read the Arbital post several times now to make sure I’ve got the point, and most of the complexity which it refers to is what my solution covers with its database of knowledge of sentience. The problem for AGI is exactly the same as it would be for us if we went to an alien world and discovered an intelligent species like our own which asked us to help resolve the conflicts raging on their planet (having heard from us that we managed to do this on our own planet). But these aliens are unlike us in many ways—different things please or anger them, and we need to collect a lot of knowledge about this so that we can make accurate moral judgements in working out the rights and wrongs of all their many conflicts. We are now just like AGI, starting with an empty database. Well, we may find that some of the contents of our database about human likes and dislikes help in places, but some parts might be so wrong that we must be very careful not to jump to incorrect assumptions. Crucially though, just like AGI, we do have a simple principle to apply to sort out all the moral problems on this alien world. The complexities are merely details to store in the database, but the algorithm for crunching the data is the exact same one used for working out morality for humans—it remains a matter of weighing up harm, and it’s only the weightings that are different.
Of course, the weightings should also change for every individual according to their own personal likes and dislikes—just as we have difficulty understanding the aliens, we have difficulty understanding other humans, and we can even have difficulty understanding ourselves. When we’re making moral decisions about people we don’t know, we have to go by averages and hope that it fits, but any information that we have about the individuals in question will help us improve our calculations. If a starving person has an intolerance to a particular kind of food and we’re taking emergency supplies to their village, we’ll try to make sure we don’t run out of everything except that problem food item before we get to that individual, but we can only get that right if we know to do so. The complexities are huge, but in every case we can still do the correct thing based on the information that is available to us, and we’re always running the same, simple morality algorithm. The complexity that blinds everyone to what morality is does not reside in the algorithm. The algorithm is simple and universal.
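To make the shape of that algorithm concrete, here is a minimal sketch in Python. Everything in it is illustrative: the feelings, the harm figures and the village scenario are invented for the example rather than taken from any real database, but it shows the two moving parts described above: per-individual weightings where we have them, and the average-person data where we don't.

```python
# Minimal sketch of the harm-weighing idea. All names and numbers below are
# invented for illustration; a real system would draw them from a large
# database of knowledge of sentience.

AVERAGE_HARM = {                      # harm ratings for the "average person"
    "hunger": 6.0,
    "food_intolerance_reaction": 8.0,
    "minor_inconvenience": 1.0,
}

def harm_of(outcome, individual_weightings=None):
    """Weighted harm of one outcome for one person: use what is known about
    that individual, otherwise fall back to the average-person rating."""
    weightings = individual_weightings or {}
    return weightings.get(outcome, AVERAGE_HARM.get(outcome, 0.0))

def total_harm(option):
    """Sum the harm an option causes across everyone it affects."""
    return sum(harm_of(outcome, person_data)
               for outcome, person_data in option["outcomes"])

def most_moral(options):
    """The most moral choice is the one whose weighted harm is lowest."""
    return min(options, key=total_harm)

# The village example: hold back a suitable ration for the villager with the
# known intolerance rather than leaving them only the problem food item.
options = [
    {"name": "leave only the problem food for the intolerant villager",
     "outcomes": [("food_intolerance_reaction", {"food_intolerance_reaction": 9.0})]},
    {"name": "hold back a suitable ration for them",
     "outcomes": [("minor_inconvenience", None)]},
]
print(most_moral(options)["name"])    # -> "hold back a suitable ration for them"
```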
I am aware that many people have a radically different idea about what morality is, but my concern is focused squarely on our collective need to steer AGI system builders towards the right answers before they start to release dangerous software into places where it can begin to leverage influence. If there’s a problem with the tone, that’s because it’s really a first draft which could do with a little editing. My computer’s been freezing repeatedly all day and I rushed into posting what I’d written in case I lost it all, which I nearly did as I couldn’t get the machine to unfreeze for long enough to save it in any other way. However, if people can see past issues of tone and style, what I’d like them to do is try to shoot it down in flames, because that’s how proposed solutions need to be put to the test.
I’ve put my ideas out there in numerous places over the years, but I’m still waiting for someone to show that they’re inferior to some other way of calculating right and wrong. For the most part, I’ve run into waffle-mongers who have nothing to offer as an alternative at all, so they can’t even produce any judgements to compare. Others propose things which I can show straight off generate wrong answers, but no one has yet managed to do that with mine, so that’s the open challenge here. Show me a situation where my way of calculating morality fails, and show me a proposed system of morality that makes different judgements from mine which I can’t show to be defective.
I can see straight away that we’re running into a jargon barrier. (And incidentally, Google has never even heard of utility monstering.) Most people like me who are involved in the business of actually building AGI have a low opinion of philosophy and have not put any time into learning its specialist vocabulary. I have a higher opinion of philosophy than most though (and look forward to the day when AGI turns philosophy from a joke into the top-level branch of science that should be its status), but I certainly do have a low opinion of most philosophers, and I haven’t got time to read through large quantities of junk in order to find the small amount of relevant stuff that may be of high quality—we’re all tied up in a race to get AGI up and running, and moral controls are a low priority for most of us during that phase. Indeed, for many teams working for dictatorships, morality isn’t something they will ever want in their systems at all, which is why it’s all the more important that teams which are trying to build safe AGI are left as free as possible to spend their time building it rather than wasting their time filling their heads with bad philosophy and becoming experts in its jargon. There is a major disconnect here, and while I’m prepared to learn the jargon to a certain degree where the articles I’m reading are rational and apposite, I’m certainly not going to make the mistake of learning to speak in jargon, because that only serves to put up barriers to understanding which shut out the other people who most urgently need to be brought into the discussion.
Clearly though, jargon has an important role in that it avoids continual repetition of many of the important nuts and bolts of the subject, but there needs to be a better way into this which reduces the workload by enabling newcomers to avoid all the tedious junk so that they can get to the cutting-edge ideas by as direct a route as possible. I spent hours yesterday reading through pages of highly-respected bilge, and because I have more patience than most people, I will likely spend the next few days reading through more of the same misguided stuff, but you simply can’t expect everyone in this business to wade through a fraction as much as I have—they have much higher priorities and simply won’t do it.
You say that my approach is essentially utilitarianism, but no—morality isn’t about maximising happiness, although it certainly should not block such maximisation for those who want to pursue it. Morality’s role is to minimise the kinds of harm which don’t open the way to the pursuit of happiness. Suffering is bad, and morality is about trying to eliminate it, but not where that suffering is out-gunned by pleasures which make the suffering worthwhile for the sufferers.
You also say that I don’t embrace any kind of deontology, but I do, and I call it computational morality. I’ve set out how it works, and it’s all a matter of following rules which maximise the probability that any decision is the best one that could be made based on the available information. You may already use some other name for it which I don’t know yet, but it is not utilitarianism.
I’m an independent thinker who’s worked for decades on linguistics and AI in isolation, finding my own solutions for all the problems that crop up. I have a system which is now beginning to provide natural language programming capability. I’ve made this progress by avoiding spending any time looking at what other people are doing. With this morality business though, it bothers me that other people are building what will be highly biased systems which could end up wiping everyone out—we need to try to get everyone who’s involved in this together and communicate in normal language, systematically going through all the proposals to find out where they break. Now, you may think you’ve already collectively done that work for them, and that may be the case—it’s possible that you’ve got it right and that there are no easy answers, but how many people building AGI have the patience to do tons of unrewarding reading instead of being given a direct tour of the crunch issues?
Here’s an example of what actually happens. I looked up Utilitarianism to make sure it means what I’ve always taken it to mean, and it does. But what did I find? This: http://www.iep.utm.edu/util-a-r/#H2 Now, this illustrates why philosophy has such a bad reputation—the discussion is dominated by mistakes which are never owned up to. Take the middle example:-
If a doctor can save five people from death by killing one healthy person and using that person’s organs for life-saving transplants, then act utilitarianism implies that the doctor should kill the one person to save five.
This one keeps popping up all over the place, but you can take organs from the least healthy of the people needing organs just before he pops his clogs and use them to save all the others without having to remove anything from the healthy person at all.
The other examples above and below it are correct, so the conclusion underneath is wrong: “Because act utilitarianism approves of actions that most people see as obviously morally wrong, we can know that it is a false moral theory.” This is why expecting us all to read through tons of error-ridden junk is not the right approach. You have to reduce the required reading material to a properly thought out set of documents which have been fully debugged. But perhaps you already have that here somewhere?
It shouldn’t even be necessary though to study the whole field in order to explore any one proposal in isolation: if that proposal is incorrect, it can be dismissed (or sent off for reworking) simply by showing up a flaw in it. If no flaw shows up, it should be regarded as potentially correct, and in the absence of any rivals that acquire that same status, it should be recommended for installation into AGI, because AGI running without it will be much more dangerous.
You are right in thinking that I have not studied the field in the depth that may be necessary—I have always judged it by the woeful stuff that makes it across into other places where the subject often comes up, but it’s possible that I’ve misjudged the worth of some of it by being misled by misrepresentations of it, so I will look up the things in your list that I haven’t already checked and see what they have to offer. What this site really needs though is its own set of articles on them, all properly debugged and aimed squarely at AGI system developers.
“You seem to want to sidestep the question of ‘just what are the right answers to questions of morality and metaethics?’. I submit to you that this is, in fact, the critical question.”
I have never sidestepped anything. The right answers are the ones dictated by the weighing up of harm based on the available information (which includes the harm ratings in the database of knowledge of sentience). If the harm from one choice has a higher weight than another choice, that other choice is more moral. (We all have such a database in our heads, but each contains different data and can apply different weightings to the same things, leading to disagreements between us about what’s moral, but AGI will over time generate its own database which will end up being much more accurate than any of ours.)
“And have you managed to convince anyone that your ideas are correct?”
I’ve found a mixture of people who think it’s right and others who say it’s wrong and who point me towards alternatives which are demonstrably faulty.
“They are inferior because they get the wrong answers.”
Well, that’s what we need to explore, and we need to take it to a point where it isn’t just a battle of assertions and counter-assertions.
“I can easily show that your approach generates wrong answers. Observe: You say that “we have to stand by the principle that all sentiences are equally important”. But I don’t agree; I don’t stand by that principle, nor is there any reason for me to do so, as it is counter to my values.”
This may need a new blog post to explore it fully, but I’ll try to provide a short version here. If a favourite relative of yours was to die and be reincarnated as a rat, you would, if you’re rational, want to treat that rat well if you knew who it used to be. You wouldn’t regard that rat as an inferior kind of thing that doesn’t deserve protection from people who might seek to make it suffer. It wouldn’t matter that your reincarnated relative has no recollection of their previous life—they would matter to you as much in that form as they would if they had a stroke and were reduced to similar capability to a rat and had lost all memory of who they were. The two things are equivalent and it’s irrational to consider one of them as being in less need of protection from torture than the other.
Reincarnation! Really! You need to resort to bringing that crazy idea into this? (Not your reply, but it’s the kind of reaction that such an idea is likely to generate). But this is an important point—the idea that reincarnation can occur is more rational than the alternatives. If the universe is virtual, reincarnation is easy and you can be made to live as any sentient player. But if it isn’t, and if there’s no God waiting to scoop you up into his lair, what happens to the thing (or things) inside you that is sentient? Does it magically disappear and turn into nothing? Did it magically pop into existence out of nothing in the first place? Those are mainstream atheist religious beliefs. In nature, there isn’t anything that can be created or destroyed other than building and breaking up composite objects. If a sentience is a compound object which can be made to suffer without any of its components suffering, that’s magic too. If the thing that suffers is something that emerges out of complexity without any of the components suffering, again that’s magic. If there is sentience (feelings), there is a sentience to experience those feelings, and it isn’t easy to destroy it—that takes magic, and we shouldn’t be using magic as mechanisms in our thinking. The sentience in that rat could quite reasonably be someone you love, or someone you loved in a past life long ago. It would be a serious error not to regard all sentiences as having equal value unless you have proof that some of them are lesser things, but you don’t have that.
You’ve also opened the door to “superior” aliens deciding that the sentience in you isn’t equivalent to the sentiences in them, which allows them to treat you in less moral ways by applying your own standards.
“As you see, your answer differs from mine. That makes it wrong (by my standards—which are the ones that matter to me, of course).”
And yet one of the answers is actually right, while the other isn’t. Which one of us will AGI judge to have the better argument for this? This kind of dispute will be settled by AGI’s intelligence quite independently of any morality rules that it might end up running. The best arguments will always win out, and I’m confident that I’ll be the one winning this argument when we have unbiased AGI weighing things up.
“‘and show me a proposed system of morality that makes different judgements from mine which I can’t show to be defective.’ --> Why? For you to be demonstrably wrong, it is not required that anyone or anything else be demonstrably right. If you say that 2 and 2 make 5, you are wrong even if no one present can come up with the right answer about what 2 and 2 actually make—whatever it is, it sure ain’t 5!”
If you can show me an alternative morality which isn’t flawed and which produces different answers from mine when crunching the exact same data, one of them will be wrong, and that would provide a clear point at which close examination would lead to one of those systems being rejected.
Origin of Morality
Sentience
“I disagree. I reject your standard of correctness. (As do many other people.)”
Shingles is worse than a cold. I haven’t had it, but those who have will tell you how bad the pain is. We can collect data on suffering by asking people how bad things feel in comparison to other things, and this is precisely what AGI will set about doing in order to build its database and make its judgements more and more accurate. If you have the money to alleviate the suffering of one person out of a group suffering from a variety of painful conditions and all you know about them is which condition they have just acquired, you can use the data in that database to work out which one you should help. That is morality being applied, and it’s the best way of doing it—any other answer is immoral. Of course, if we know more about these people, such as how good or bad they are, that might change the result, but again there would be data that can be crunched to work out how much suffering their past actions caused to undeserving others. There is a clear mechanism for doing this, and not doing it that way using the available information is immoral.
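As a toy illustration of that kind of lookup, here is a short Python sketch. The conditions and the severity figures are invented for the example, and the adjustment for past harm caused is just one possible way of folding in what is known about how good or bad the people are.

```python
# Illustrative only: severities would come from asking people to compare how
# bad different conditions feel, not from the made-up numbers below.

CONDITION_SEVERITY = {"shingles": 8.5, "migraine": 6.0, "common_cold": 1.5}

def who_to_help(people):
    """Pick the person whose weighted suffering is greatest. Each record is
    (name, condition, harm_caused_to_undeserving_others)."""
    def weighted_suffering(person):
        name, condition, past_harm_caused = person
        return CONDITION_SEVERITY.get(condition, 0.0) - past_harm_caused
    return max(people, key=weighted_suffering)

people = [("A", "common_cold", 0.0), ("B", "shingles", 0.0), ("C", "migraine", 0.0)]
print(who_to_help(people)[0])   # -> "B": shingles outranks the other conditions
```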
“The question of whether there is an objective standard of correctness for moral judgments, is the domain of metaethics.”
We already have what we need—a pragmatic system for getting as close to the ideal morality as possible based on collecting the data as to how harmful different experiences are. The data will never be complete, they will never be fully accurate, but they are the best that can be done and we have a moral duty to compile and use them.
“(I will avoid commenting on the reincarnation-related parts of your comment, even though they do form the bulk of what you’ve written. All of that is, of course, nonsense...”
If you reject that, you are doing so in favour of magical thinking, and AGI won’t be impressed with that. The idea that the sentience in you can’t go on to become a sentience in a maggot is based on the idea that after death that sentience magically becomes nothing. I am fully aware that most people are magical thinkers, so you will always feel that you are right on the basis that hordes of fellow magical thinkers back up your magical beliefs, but you are being irrational. AGI is not going to be programmed to be irrational in the same way most humans are. The job of AGI is to model reality in the least magical way it can, and having things pop into existence out of nothing and then return to being nothing is more magical than having things continue to exist in the normal way that things in physics behave. (All those virtual particles that pop in and out of existence in the vacuum, they emerge from a “nothing” that isn’t nothing—it has properties such as a rule that whatever’s taken from it must have the same amount handed back.) Religious people have magical beliefs too and they too make the mistake of thinking that numbers of supporters are evidence that their beliefs are right, but being right is not democratic. Being right depends squarely on being right. Again here, we don’t have absolute right answers in one sense, but we do have in terms of what is probably right, and an idea that depends on less magic (and more rational mechanism) is more likely to be right. You have made a fundamental mistake here by rejecting a sound idea on the basis of a bias in your model of reality that has led to you miscategorising it as nonsense, while your evidence for it being nonsense is support by a crowd of people who haven’t bothered to think it through.
“Au contraire: here is the Wikipedia article on utility monsters, and here is some guy’s blog post about utility monsters. This was easily found via Google.”
I googled “utility monstering” and there wasn’t a single result for it—I didn’t realise I had to change the ending on it. Now that I know what it means though, I can’t see why you brought it up. You said, “You don’t embrace any kind of deontology, but deontology can prevent Omelas, Utility Monstering, etc.” I’d already made it clear that feelings are different for different individuals, so either that means I’m using some kind of deontology already or something else that does the same job. There needs to be a database of knowledge of feelings, providing information on the average person, but data also needs to be collected on individuals to tune the calculations to them more accurately. Where you don’t know anything about the individual, you have to go by the database of the average person and apply that as it is more likely to be right than any other database that you randomly select.
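A short sketch of that fallback, again with invented feelings and figures: start from the average-person database and overlay whatever has been learned about the particular individual.

```python
# Illustrative only: the feelings and ratings are made up for the example.

AVERAGE_PERSON = {"hunger": 6.0, "loud_noise": 3.0, "social_exclusion": 5.0}

def profile_for(individual_overrides=None):
    """Tune the average-person data with whatever is known about this
    individual; unknown individuals just get the average profile."""
    return {**AVERAGE_PERSON, **(individual_overrides or {})}

print(profile_for())                          # nothing known: use the averages
print(profile_for({"loud_noise": 9.0}))       # known hypersensitivity overrides the average
```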
“If you don’t mind my asking, are you affiliated with MIRI? In what way are you involved in “the business of actually building AGI”?”
I have no connection with MIRI. My involvement in AGI is simply that I’m building an AGI system of my own design, implementing decades of my own work in linguistics (all unpublished). I have the bulk of the design finished on paper and am putting it together module by module. I have a componential analysis dictionary which reduces all concepts down to their fundamental components of meaning (20 years’ worth of hard analysis went into building that). I have designed data formats to store thoughts in a language of thought quite independent of any language used for input, all based on concept codes linked together in nets—the grammar of thought is, incidentally, universal, unlike spoken languages. I’ve got all the important pieces and it’s just a matter of assembling the parts that haven’t yet been put together. The actual reasoning, just like morality, is dead easy.
“The class of moral theories referred to as “utilitarianism” does, indeed, include exactly such frameworks as you describe (which would fall, roughly, into the category of “negative utilitarianism”). (The SEP article about consequentialism provides a useful taxonomy.)”
I read up on negative utilitarianism years ago and didn’t recognise it as being what I’m doing, but perhaps your links are to better sources of information.
“You are aware, I should hope, that this makes you sound very much like an archetypical crank?”
It also makes me sound like someone who has not been led up the wrong path by the crowd. I found something in linguistics that makes things magnitudes easier than the mess I’ve seen other people wrestling with.
“It will not, I hope, surprise you to discover that your objection is quite common and well-known, and just as commonly and easily disposed of.”
No, it is not easily disposed of, but I’ll get to that in a moment. The thought experiment is wrong and it gives philosophy a bad name, repelling people from it by making them write off the junk they’re reading as the work of half-wits and making it harder to bring together all the people that need to be brought together to try to resolve all this stuff in the interests of making sure AGI is safe. It is essential to be rigorous in constructing thought experiments and to word them in such a way as to force the right answers to be generated from them. If you want to use that particular experiment, it needs wording to state that none of the ill people are compatible with each other, but the healthy person is close enough to each of them that his organs are compatible with them. It’s only by doing that that the reader will believe you have anything to say that’s worth hearing—you have to show that it has been properly debugged.
So, what does come out of it when you frame it properly? You run straight into other issues which you also need to eliminate with careful wording, such as blaming lifestyle for their health problems. The ill people also know that they’re on the way out if they can’t get a donor organ and don’t wish to inflict that on anyone else: no one decent wants a healthy person to die instead of them, and the guilt they would suffer from if it was done without their permission would ruin the rest of their life. Also, people accept that they can get ill and die in natural ways, but they don’t accept that they should be chosen to die to save other people who are in that position—if we had to live in a world where that kind of thing happened, we would all live not just in fear of becoming ill and dying, but in fear of being selected for death while totally healthy, and that’s a much bigger kind of fear. We can pursue healthy lifestyles in the hope that it will protect us from the kind of damage that can result in organ failure, and that drives most of the fear away—if we live carefully we are much more confident that it won’t happen to us, and sure enough, it usually does happen to other people who haven’t been careful. To introduce a system where you can simply be selected for death randomly is much more alarming, causing inordinately more harm—that is the vast bulk of the harm involved in this thought experiment, and these slapdash philosophers completely ignore it while pretending they’re the ones who are being rigorous. If you don’t take all of the harm into account, your analysis of the situation is a pile of worthless junk. All the harm must be weighed up, and it all has to be identified intelligently. This is again an example of why philosophers are generally regarded as fruitcakes.
“Well, that hardly seems a reliable approach…”
It’s being confirmed right here—I’m finding the same range of faulty stuff on every page I read, although it’s possible that it is less wrong than most. There is room for hope that I have found the most rational place on the Net for this kind of discussion, but there are a lot of errors that need to be corrected, and it’s such a big task that it will probably have to wait for AGI to drive that process.
″...the Stanford Encyclopedia of Philosophy is a far better source for this sort of thing.) You really ought to delve into the field at some length…”
Thanks—it saves a lot of time to start with the better sources of information and it’s hard to know when you’ve found them.
“It would be a mistake to suppose that everyone who has studied the matter until now, and everyone who has attempted to systematize it, has been stupid, incompetent, etc.”
Certainly—there are bound to be some who do it a lot better than the rest, but they’re hidden deep in the noise.
“Systematic surveys of moral philosophy, even good ones, are not difficult to find.”
I have only found fault-ridden stuff so far, but hope springs eternal.
[Correction: when I said “you said”, it was actually someone else’s comment that I quoted.]
The only votes that matter are the ones made by AGI.
It’s clear from the negative points that a lot of people don’t like hearing the truth. Let me spell this out even more starkly for them. What we have with the organ donor thought experiment is a situation where an approach to morality is being labelled as wrong as the result of a deeply misguided attack on it. It uses the normal human reactions to normal humans in this situation to make people feel that the calculation is wrong (based on their own instinctive reactions), but it claims that you’re going against the spirit of the thought experiment if the moral analysis works with normal humans—to keep to the spirit of the thought experiment you are required to dehumanise them, and once you’ve done that, those instinctive reactions are no longer being applied to the same thing at all.
Let’s look at the fully dehumanised version of the experiment. Instead of using people with a full range of feelings, we replace them with sentient machines. We have five sentient machines which have developed hardware faults, and we can repair them all by using parts from another machine that is working fine. They are sentient, but all they’re doing is enjoying a single sensation that goes on and on. If we dismantle one, we prevent it from going on enjoying things, but this enables the five other machines to go on enjoying that same sensation in its place. In this case, it’s fine to dismantle that machine to repair the rest. None of them have the capacity to feel guilt or fear and no one is upset by this decision. We may be upset that the decision has had to be made, but we feel that it is right. This is radically different from the human version of the experiment, but what the philosophers have done is use our reactions to the human version to make out that the proposed system of morality has failed because they have made it dehumanise the people and turn them into the machine version of the experiment.
In short, you’re breaking the rules and coming to incorrect conclusions, and you’re doing it time and time again because you are failing to handle the complexity in the thought experiments. That is why there is so much junk being written about this subject, and it makes it very hard for anyone to find the few parts that may be valid.
“Philosophy isn’t relevant to many areas of AGI, but it is relevant to what you are talking about here.”
Indeed it is relevant here, but it is also relevant to AGI in a bigger way, because AGI is a philosopher, and the vast bulk of what we want it to do (applied reasoning) is philosophy. AGI will do philosophy properly, eliminating the mistakes. It will do the same for maths and physics where there are also some serious mistakes waiting to be fixed.
“Learning to do something does entail having to do it. Knowing the jargon allows efficient communication with people who know more than you...if you countenance their existence.”
The problem with it is the proliferation of bad ideas—no one should have to become an expert in the wide range of misguided issues if all they need is to know how to put moral control into AGI. I have shown how it should be done, and I will tear to pieces any ill-founded objection that is made to it. If an objection comes up that actually works, I will abandon my approach if I can’t refine it to fix the fault.
“That’s not deontology, because it’s not object level.”
Does it matter what it is if it works? Show me where it fails. Get a team together and throw your best objection at me. If my approach breaks, we all win—I have no desire to cling to a disproven idea. If it stands up, you get two more goes. And if it stands up after three goes, I expect you to admit that it may be right and to agree that I might just have something.
“Someone who is days from death is not a “healthy person” as required. You may have been mistaken about other people’s mistakenness before.”
Great—you would wait as late as possible and transfer organs before multiple organ failure sets in. The important point is not the timing, but that it would be more moral than taking them from the healthy person.
What is wrong with the reasoning? If people are unable to follow the reasoning, they can ask for help in comments and I will help them out. I expect a lot of negative points from people who are magical thinkers, and many of them have ideas about uploading themselves so that they can live forever, but they don’t stop to think about what they are and whether they would be uploaded along with the data. The data doesn’t contain any sentience. The Chinese Room can run the algorithms and crunch the data, but there’s no sentience there; no “I” in the machine. They are not uploading themselves—they are merely uploading their database.
When it comes to awarding points, the only ones that count are the ones made by AGI. AGI will read through everything on the net some day and score it for rationality, and that will be the true test of quality. Every argument will be given a detailed commentary by AGI and each player will be given scores as to how many times they got things wrong, insulted the person who was right, etc. There is also data stored as to who provided which points, and they will get a score for how well they did in recognising right ideas (or failing to recognise them). I am not going to start writing junk designed to appeal to people based on their existing beliefs. I am only interested in pursuing truth, and while some of that truth is distasteful, it is pointless to run away from it.
Minus four points already from anonymous people who can provide no counter-argument. They would rather continue to go on being wrong than make a gain by changing their position to become right. That is the norm for humans, sadly.
It could be (not least because there’s a person inside it who functions as its main component), but that has no impact on the program being run through it. There is no place at which any feelings influence the algorithm being run or the data being generated.
Thanks for the questions.
If we write conventional programs to run on conventional hardware, there’s no room for sentience to appear in those programs, so all we can do is make the program generate fictions about experiencing feelings which it didn’t actually experience at all. The brain is a neural computer though, and it’s very hard to work out how any neural net works once it’s become even a little complex, so it’s hard to rule out the possibility that sentience is somehow playing a role within that complexity. If sentience really exists in the brain and has a role in shaping the data generated by the brain, then there’s no reason why an artificial brain shouldn’t also have sentience in it performing the exact same role. If you simulated it on a computer though, you could reduce the whole thing to a conventional program which can be run by a Chinese Room processor, and in such a case we would be replacing any sentience with simulated sentience (with all the actual sentience removed). The ability to do that doesn’t negate the possibility of the sentience being real in the actual brain, though. But the big puzzle remains: how does the experience of feelings lead to data being generated to document that experience? That looks like an impossible process, and you have to wonder if we’re going to be able to convince AGI systems that there is such a thing as sentience at all.
Anyway, all I’m trying to do here is help people home in on the nature of the problem in the hope that this may speed up its resolution. The problem is in that translation from raw experience to data documenting it which must be put together by a data system—data is never generated by anything that isn’t a data system (which implements the rules about what represents what), and data systems have never been shown to be able to handle sentience as any part of their functionality, so we’re still waiting for someone to make a leap of the imagination there to hint at some way that might bridge that gap. It may go on for decades more without anyone making such a breakthrough, so I think it’s more likely that we’ll get answers by trying to trace back the data that the brain produces which makes claims about experiencing feelings to find out where and how that data was generated and whether it’s based in truth or fiction. As it stands, science doesn’t have any model that illustrates even the simplest implementation of sentience driving the generation of any data about itself, and that’s surprising when things like pain which seem so real and devastatingly strong are thought to have such a major role in controlling behaviour. And it’s that apparent strength which leads to so many people assuming sentience can appear with a functional role within systems which cannot support that (as well as in those that maybe, just maybe, can).
Thanks. I was actually trying to post the above as a personal blog post initially while trying to find out how the site works, but I think I misunderstood how the buttons at the bottom of the page function. It appears in the Frontpage list where I wasn’t expecting it to go—I had hoped that if anyone wanted to promote it to Frontpage, they’d discuss it with me first and that I’d have a chance to edit it into proper shape. I have read a lot of articles elsewhere about machine ethics but have yet to find anything that spells out what morality is in the way that I think I have, but if there’s something here that does the job better, I want to find it, so I will certainly follow your pointers. What I’ve seen from other people building AGI has alarmed me because their ideas about machine ethics appear to be way off, so what I’m looking for is somewhere (anywhere) where practical solutions are being discussed seriously for systems that may be nearer to completion than is generally believed.