Make a prominent “next” button on the sequence pages so you can easily go from one sequence post to the next post. There’s currently a button but it is difficult to find and requires two clicks.
I don’t recall this being discussed by the community at all. It seems like a bad idea. Valuable conversations can extend from comments that are already negative. −3 is also not that negative. This also discourages people from actually explaining why someone is wrong if there are a lot of people who downvote the comment. That will both make it harder for that person to become less wrong and make it more likely that bystanders reading the conversation will not see any explanation for why the comment is downvoted. Overall this is at best a mixed idea that should have been discussed with the community before being implemented.
I’m downvoting primarily to discourage deliberately sensationalist titles. I don’t want to start seeing “What this AI Gatekeeper did will shock you!” and “Five reasons why MWI will show you how everything you thought you knew about quantum mechanics is a lie!” and “These ten effective altruists will restore your faith in humanity!”
and don’t disagree with anything I’ve ever seen written by Vladimir Nesov, Kaj Sotala, Luke Muehlhauser, komponisto, or even Wei Dai
This confuses me since these people are not in agreement on some issues.
There’s another related aspect that’s worth noting: supervillains are active, superheroes classically reactive. The Joker hatches a plot and Batman stops him. Brainiac threatens to take over the Earth and Superman stops him. Doc Ock tries to blow up New York and Spider-Man stops him. Etc. Etc. Ad infinitum et nauseam. If there’s no supervillain active on any given day in Gotham, Batman sits around preparing to fight them, letting most of the status quo stay unchanged.
To think about changing the status quo, think like a supervillain.
Six options:
1) Low rate of success is coupled with a very low investment level.
2) The behavior isn’t really an attempt to pick up the woman at all but rather shared bonding among the males. (Note how this behavior seems to generally occur when there is a group of males.)
3) Lack of self-restraint. The people who do this are typically low status and low income. There’s a large body of evidence that people who lack self-control have less life success. (The marshmallow studies and all that.) Some of these people may have so little self-control, or bother so little to exercise it, that clearly unsuccessful behavior is still attempted.
4) Attempts to harass the people in question, possibly to blow off steam about one’s own lack of sexual success.
5) A well-meaning attempt to actually compliment people for being good looking and well-dressed. They may just be unaware of how uncomfortable this behavior often makes women feel.
6) Cultural behavior, possibly in combination with any of the above. Once some small fraction is doing something, how long does it take before the same behavior is imitated by the rest of the group?
Recent work shows that it is possible to use acoustic data to break public-key encryption systems. Essentially, if one can get the target machine to decrypt specially chosen ciphertexts, the sounds the CPU makes while decrypting can reveal information about the key. The attack was successfully demonstrated against 4096-bit RSA. While some versions of the attack require high-quality microphones, some versions were apparently successful using just mobile phones.
Aside from the general interest, this is one more example of how a supposedly boxed AI might be able to send detailed information to the outside. In particular, one can send surprisingly high-bandwidth signals, even unintentionally, through acoustic channels.
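To make the covert-channel point concrete, here is a minimal toy sketch, not the attack from the paper (which listens to the sound of RSA decryption itself): a program that encodes bits as bursts of CPU load, which on real hardware changes fan and coil noise that a nearby microphone could in principle demodulate. The symbol period and framing are made-up choices purely for illustration.

```python
# Toy illustration only: encoding a bit string as bursts of CPU load.
# On real hardware, changing load alters fan speed and coil whine, which a
# nearby microphone could in principle pick up.  The symbol period and
# framing are assumptions for illustration; this is NOT the chosen-ciphertext
# attack described in the paper.
import time

SYMBOL_PERIOD = 0.5  # seconds per bit; only ~2 bits/s in this toy version


def transmit(bits: str) -> None:
    for bit in bits:
        deadline = time.time() + SYMBOL_PERIOD
        if bit == "1":
            # Busy-loop: high CPU load, louder acoustic signature.
            while time.time() < deadline:
                pass
        else:
            # Idle: low CPU load, quieter.
            time.sleep(SYMBOL_PERIOD)


if __name__ == "__main__":
    transmit("1011001")  # a receiver would recover this from a recording
```

Even a crude scheme like this shows why a system with no network access still has physical side channels to worry about.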
An incidental note: lack of these sorts of skills can also create ugh fields around the subjects in question or around related subjects.
A lot of Eliezer’s work has not been strongly related to FAI but has instead gone toward popularizing rational thinking. In your view, should the SIAI focus exclusively on AI issues, or should it also care about rationality issues? In that context, how does Eliezer’s ongoing work relate to the SIAI?
Related SMBC.
We simply do not have a scientific process any more.
This is both unfair to scientists and inaccurate. In 2011, we’ve had such novel scientific discoveries as snails that can survive being eaten by birds, we’ve estimated the body temperature of dinosaurs, we’ve captured the most detailed picture of a dying star ever taken, and we’ve made small but significant progress toward resolving P ?= NP. These are but a few of the highlights that happened both to be in my recent memory and to be easy to find links for. I’ve also not included anything that could be argued to be engineering rather than science. There are many more achievements like these.
Why might it seem like we don’t have a scientific process?
First, there’s simple nostalgia. As I write this, the space shuttle is on its very last mission. I suspect that almost everyone here either longs for the days of their youth when humans walked on the moon, or wishes they had lived then to witness it. Thus, normal human nostalgia gets wrapped up with some actual problems of stagnation and lack of funding. This creates a potential halo effect for the past.
Second, as the number of scientists increases over time, the number of scientists putting out poor science will increase. Similarly, the amount of material that gets through peer review when it shouldn’t will increase as the number of journals and the number of papers submitted go up. So the amount of bad science will go up.
Third, the internet and similar modern communication technologies let us find out about bad science much faster than we otherwise would. Much of it would once have been buried in obscure journals, but instead we have bloggers commenting and respected scientists responding. So as time goes on, even if the amount of bad science stays constant, the perception will be of an increase.
I would go so far as to venture that we might have a more robust and widespread scientific process than at any other time in history. To put the Bem study in perspective, keep in mind that a hundred years ago psychology wasn’t even trying to use statistical methods; look at how Freud’s and Jung’s ideas were received. Areas like sociology and psychology have, if anything, become more scientific over time. From that standpoint, a paper that uses statistics in a flawed fashion is indicative of how much progress the soft sciences have made toward being real sciences: one now needs bad statistics, rather than just anecdotal evidence, to get bad ideas through.
To paraphrase someone speaking on a completely different issue, the arc of history is long, but it bends towards science.
I find it interesting that people do this. I’m going to use this as an opportunity to advocate doing the exact opposite: one thing I’ve found that helps me listen to people more is that when I’m having a disagreement with someone over the course of a few posts, I go to their user page, find something that looks like it deserves an upvote, and give it one. This makes me much more willing to accept that the other person isn’t being stupid, ignorant, or otherwise just generally irrational on the point I disagree with them about.
If you had a Death Note, what would you do with it?
See if I could get some very old people, or people who otherwise have terminal illnesses, to volunteer to have their names written in it. We could use that data to experiment more with the note and figure out how it works. The existence of such an object implies massive things wrong with our current understanding of the universe, so figuring that out might be really helpful.
One serious danger for organizations is that they can easily outlive their usefulness or can convince themselves that they are still relevant when they are not. Essentially this is a form of lost purpose. This is not a bad thing if the organizations are still doing useful work, but this isn’t always the case. In this context, are there specific sets of events (other than the advent of a Singularity) which you think will make the SIAI need to essentially reevaluate its goals and purpose at a fundamental level?
The essential issue here seems to be that your friend is claiming that because humans aren’t perfect Bayesians, Bayesianism is somehow philosophically wrong. Whether human cognition is flawed, even severely, doesn’t affect whether or not Bayesianism is a better approach. Note that your friend’s argument, if it were valid, would apply not just to Bayesianism but to any attempt to use statistics. It is pretty clear, for example, that humans pay a lot more attention to anecdotes than to actual statistics. By this argument, statistics themselves should be ignored.
This seems in essence to be an is v. ought fallacy.
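To illustrate the anecdotes-versus-statistics point, here is a standard textbook-style base-rate sketch; the specific figures are assumptions chosen only for illustration, not anything from the original discussion.

```python
# Base-rate illustration with made-up numbers: a test that is 99% sensitive,
# with a 1% false-positive rate, for a condition with 1% prevalence.
# Anecdotal intuition ("the test said so") expects near-certainty; the
# Bayesian calculation gives only about 50%.
prior = 0.01           # P(condition)
sensitivity = 0.99     # P(positive | condition)
false_positive = 0.01  # P(positive | no condition)

p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive
print(f"P(condition | positive) = {posterior:.2f}")  # prints 0.50
```

The point is not that people do this calculation in their heads; it is that the calculation, not the anecdote, is the standard against which our intuitions fall short.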
I think it is no coincidence that this switch occurs in this context. Oh no, some dusty old tomes got destroyed! Compared to other events of the time, piddling for human “utility.” But burning books lowers the status of academics, which is why it is considered (in Haidt-ian terms) a taboo by some—including, I would suggest, most on this site.
We have good reason to think that the missing volumes of Diophantus were at Alexandria. Much of what Diophantus did was centuries before his time. If people in the 1500s and 1600s had complete access to his and other Greek mathematicians’ work, math would have likely progressed at a much faster pace, especially in number theory.
We also have reason to think that Alexandria contained the now lost Greek astronomical records, which likely included comet observations and possibly also historical nova observations. While we have some nova and supernova observations from slightly later (primarily thanks to Chinese and Japanese records), the Greeks were doing astronomy well before that. This sort of thing isn’t just an idle curiosity: understanding the timing of supernovae connects to understanding the most basic aspects of our universe. The chemical elements necessary for life are created and spread by supernovae. Understanding the exact ratios, how common supernovae are, and how supernovae spread material, among other issues, is important to very important questions like how common life is, which is directly relevant to the Great Filter. We do have a lot of supernova observations from the last few years, but historical examples are few and far between.
Compared to other events of the time, piddling for human “utility.”
On the contrary. Kill a few people or make them suffer and it has little direct impact beyond a few years in the future. Destroying knowledge has an impact that resonates down for far longer.
But burning books lowers the status of academics, which is why it is considered (in Haidt-ian terms) a taboo by some—including, I would suggest, most on this site.
This is an interesting argument, and I find it unfortunate that you’ve been downvoted. The hypothesis is certainly interesting. But it may also be taboo for another reason: in many historical cases, book burning has been a precursor to killing people. This is a cliche, but it is a cliche that happens to have historical examples behind it. Another consideration is that high status for academics is arguably quite a good thing from a consequentialist perspective. People like Norman Borlaug, Louis Pasteur, and Alvin Roth have done more lasting good for humanity than almost anyone else. Academics are the main people who have any chance of having a substantial impact on human utility beyond their own lifespans (the only other groups are people who fund academics, or people like Bill Gates who fund the large-scale implementation of academic discoveries). So even if it is purely an issue of status and taboo, there’s a decent argument that those are taboos which are advantageous to humanity.
Some minor comments regarding Eliezer’s remark. The emphasis on non-contradiction of opinions in the Talmud and elsewhere is fairly recent. Maimonides, for example, was more than willing to say that statements in the Talmud were wrong when it came to factual issues. Also note that much of the Talmud was written before the medieval period (the Mishna dates to around 200, and the Gemara was completed around 600 or so, only very early in the medieval period).
The notion of the infallibility of the Talmud is fairly recent, gaining real force with the writings of the Maharal in the late 1500s. In fact, many Orthodox Jews don’t realize how recent that aspect of belief is. The belief in the infallible and non-contradictory nature of the Talmud has also been growing stronger in some respects. The ultra-Orthodox are starting to apply similar beliefs to their living or recently deceased leaders, and the chassidim have been doing something similar with their rebbes for about 200 years. Currently, there are major charedi leaders who have stated that mice can spontaneously generate because the classical sources say so. I have trouble thinking of a better example of how religion can result in serious misunderstandings about easily testable facts.
While amusing, it doesn’t seem like a good idea for new readers. It is essentially a spoiler.
If you are going to do this, please keep in mind Wikipedia’s most relevant policies and guidelines in this context: The conflict of interest guideline, the Neutral point of view policy, and the prohibition on original research.
Joel Stickley, How To Write Badly Well