Reducing “a biological perspective on ethics” to “a description of how human ethics works” doesn’t seem quite right to me. Naturalistic ethics isn’t just concerned with the “how” of human morality. Things like “why” questions, shared other-oriented behaviours, social insect cooperation and chimpanzees are absolutely on the table.
Ethical behaviour is part of the subject matter of biology. If you exclude the science involved, there’s not much left that’s worth discussing.
Typically, people ask two things of ethics: a reason to be ethical in the first place, and a way to resolve ethical dilemmas.
A biological perspective on ethics considers it to be:
1. A personal guide regarding how to behave;
2. A guide to others’ expectations of you;
3. A set of tools for manipulating others;
4. A means of signalling goodness and affiliations;
5. A set of memes that propagate at their hosts’ expense.
Ethical philosophers tend to be especially hot on point 4.
On Drupal.org they use a forum plugin for their forum (https://drupal.org/forum/22). I think using Drupal but not using a forum plugin when you want to build a forum, and instead trying to do it your own way, counts as not using existing components.
You seem to have very weak evidence that they actually did this. It seems tremendously unlikely to me. Drupal comes with a forum module and it has many third party forums available. I see no good reason to think that they failed to make use of these resources.
It also looks like you tried to build a new system for how an online forum should operate instead of just taking a ready-made solution. As a result it seems you had to make a bunch of bad UI decisions.
It looks like Drupal to me. You might not like it, but you can hardly say they failed to make use of existing components.
Resources inside a light cone grow as T cubed, while population growth is exponential; thus we see resource limitation ubiquitously. Malthus was (essentially) correct.
Maybe “T cubed” will turn out to be completely wrong, and there will be some way of getting hold of exponential resources—but few will be holding their breath for news of this.
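To make the arithmetic concrete, here is a minimal sketch (the constants k, p0 and r are illustrative assumptions, not estimates) showing that exponential growth eventually overtakes any cubic resource bound:

```python
# Sketch: light-cone resources grow as t**3; population grows exponentially.
# Whatever the constants, the exponential eventually wins.

def resources(t, k=1.0):
    return k * t ** 3            # light-cone volume scales as t cubed

def population(t, p0=1.0, r=0.01):
    return p0 * (1.0 + r) ** t   # exponential growth at rate r per time step

t = 2                            # at t = 2, resources (8) exceed population (~1.02)
while population(t) < resources(t):
    t += 1
print(f"exponential growth overtakes t**3 at t = {t}")  # ~2340 with these constants
```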
Hans Rosling makes the claim that world population will top out at around 10 billion if we simply continue doing what we do now: educating people and letting them have access to birth control.
Malthus will be counting the machines too.
Human numbers may decline during a memetic takeover, but machine numbers probably won’t.
It was the Era of Accidents, before the dawn of optimization. You’d only expect to see something with 40 bits of optimization if you looked through a trillion samples.
There’s no way that this is true. Inanimate processes optimize too. Lightning strikes follow paths of least resistance. Drainage patterns in mountainous regions find short paths between raindrops and their oceans. Cracks propagate so as to find the weakest path. The idea that only living systems optimize is just a mistake.
This bit is nonsense too:
There was no search but blind search. Everything from scratch, not even looking at the neighbors of previously successful points. Not hill-climbing, not mutation and selection, not even discarding patterns already failed. Just a random sample from the same distribution, over and over again.
Perhaps look into the principle of least action and Universal Darwinism.
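For what it’s worth, the quoted contrast is easy to make concrete. Here is a minimal sketch (a hypothetical 20-bit target; my code, not from either post) comparing the “blind search” described above with simple hill-climbing:

```python
import random

BITS = 20
TARGET = [1] * BITS                      # the point the search must hit

def score(x):
    return sum(a == b for a, b in zip(x, TARGET))

def blind_search():
    """Fresh random sample every time: expected ~2**BITS draws to hit."""
    tries = 1
    while [random.randint(0, 1) for _ in range(BITS)] != TARGET:
        tries += 1
    return tries

def hill_climb():
    """Keep single-bit mutations that improve the score."""
    x = [random.randint(0, 1) for _ in range(BITS)]
    tries = 0
    while x != TARGET:
        tries += 1
        i = random.randrange(BITS)
        y = x[:]
        y[i] ^= 1                        # flip one bit
        if score(y) > score(x):
            x = y
    return tries

print("hill climbing:", hill_climb(), "tries")  # typically under a hundred
# blind_search() needs about 2**20 tries here; for a 40-bit target it would
# need about 2**40, i.e. roughly the trillion samples the quote mentions.
```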
Are you sure that “anti-placebo effect” is a good name, though?
It may be that nocebo has a better claim to being an “anti-placebo effect”.
Example of somebody making that claim.
That’s a ‘circular’ link to your own comment.
It seems to me a rational agent should never change its self-consistent terminal values. To act out that change would be to act according to some other value and not the terminal values in question.
It might decide to do that—if it meets another powerful agent, and it is part of the deal they strike.
People here sometimes say that a rational agent should never change its terminal values.
That’s simply mistaken. There are well-known cases where it is rational to change your “terminal” values.
Think about what might happen if you meet another agent of similar power but with different values; look into “vicarious selection”; or go read Steve Omohundro.
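A toy sketch of the bargaining point (the payoff numbers are invented for illustration): two equally powerful agents can each expect more of what they currently value by adopting merged values than by fighting, so the change is rational by their own current lights.

```python
# Each agent's utility over three possible futures (invented numbers).
u_a = {"a_wins": 10, "b_wins": 0, "merged": 6}
u_b = {"a_wins": 0, "b_wins": 10, "merged": 6}

p_win = 0.5  # equal power: each side wins an outright conflict half the time

expected_conflict_a = p_win * u_a["a_wins"] + (1 - p_win) * u_a["b_wins"]  # 5.0
expected_conflict_b = p_win * u_b["b_wins"] + (1 - p_win) * u_b["a_wins"]  # 5.0

# Judged by their *current* terminal values, both agents prefer the deal.
assert u_a["merged"] > expected_conflict_a
assert u_b["merged"] > expected_conflict_b
print("deal beats conflict for both:", u_a["merged"], ">", expected_conflict_a)
```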
if many neurons die at an early age
Well known fact.
[...] and there is variation within the individual neurons
Well known fact.
That said, I’d be surprised if there’s enough variation in neurons for anything like this to happen.
Neurons do vary considerably. Natural selection over an individual lifetime is one reason to expect neurons to act against individual best interests. However, selfish neuron models are of limited use—due to the relatively small quantity of selection acting on neurons over an individual’s lifespan. Probably the main thing they explain is some types of brain cancer.
Selfish memes and selfish synapses seem like more important cases of agent-based modeling doing useful work in the brain. Selfish memes actually do explain why brains sometimes come to oppose the interests of the genes that constructed them. In both cases there’s a lot more selection going on than happens with neurons.
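A minimal sketch of the selection-strength point (parameters are illustrative assumptions): the same 5% fitness edge barely moves a variant’s frequency in the few selection rounds a neuron sees within one lifetime, but takes it to fixation over the many rounds a fast-replicating meme gets.

```python
def variant_frequency(p0, advantage, rounds):
    """Standard replicator update: p' = p(1+s) / (p(1+s) + (1-p))."""
    p = p0
    for _ in range(rounds):
        p = p * (1 + advantage) / (p * (1 + advantage) + (1 - p))
    return p

print("few rounds (neuron-like):", round(variant_frequency(0.01, 0.05, 10), 3))    # ~0.016
print("many rounds (meme-like): ", round(variant_frequency(0.01, 0.05, 1000), 3))  # ~1.0
```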
Well, first, let’s just admit this: The race to win the Singularity is over, and Google has won.
I checked with Google Trends. It seems as though they may yet face some competition. Also, 15 years of real time is quite a bit of internet time. Previously it looked as though Microsoft had won, and before that, IBM.
Show me where I expressed a confidence level in that post.
Well, log2(24/5) = 2.26. You offered 2.3 bits of further information. It seems like a bit more than 100% confidence… ;-)
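For anyone checking the arithmetic (my reading of the figures: narrowing 24 equally likely possibilities down to 5, which conveys log2(24/5) bits):

```python
import math

# Narrowing 24 equally likely options down to 5 conveys log2(24/5) bits.
print(math.log2(24 / 5))  # 2.263..., the "2.3 bits" mentioned above
```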
James Bamford’s books in this area are very readable:
The classic history of the field is this one, but you’ll get some coverage of the topic in practically any popular book on cryptography.
With cryptography, the government attempted to delay mainstream access to the technology—so they could benefit from using it. It would be interesting to know if they are doing the same to mainstream machine intelligence efforts—for example, via intellectual property laws and secrecy orders.
I thought it was a bad analogy too. The foxes and rabbits have conflicting goals. However, the falling human and the rising hot air have mutually compatible goals, which can be simultaneously satisfied. It seems like a very different situation to me. I think there was a lack of sympathetic reading here.
Generally, one should strive to criticise the strongest argument one can imagine, not a feeble caricature.
What reference classes should we use here?
Previous highly successful species. Previous highly successful species with powerful innovations. The same with “societies” in place of “species”.
On problems that are drawn from a barrel of causally similar problems, where human optimism runs rampant and unforeseen troubles are common, the Outside View beats the Inside View.
Does anyone want to argue that Eliezer’s criteria for using the outside view are wrong, or don’t apply here?
“Optimism” is one kind of distortion—and “paranoia” is another kind.
Highlighting “optimism” distortions while ignoring “paranoid” ones is a typical result of paranoid distortions.
Taking background knowledge for granted happens in many domains.
The usefulness of the term “meme” is probably best attested to by its popularity—with 250+ million references on the internet.