Personal website: https://andrewtmckenzie.com/
Andy_McKenzie
Right. This is why I think it’s underratedly important for contrarians who actually believe in the potential efficacy of their beliefs to not seem like contrarians. If you truly believe that your ideas are underrepresented, then you will promote them much better by appearing generally “normal” and passing off the underrepresented idea as a fairly typical part of your ostensibly coherent worldview. I will admit that this is more challenging.
If somebody was planning to destroy the world, the rationalist could stop him and not break his oath of honesty by simply killing the psychopath. Then if the rationalist were caught and arrested and still didn’t reveal why he had committed murder, perhaps even being condemned to death for the act but never breaking his oath of honesty, now that would make an awesome movie.
Sure. Basically, there are two groups, each of which has made a major contribution:
1) Shawn Mikula and his group. They have made substantial progress on (some would say, have nearly solved) the problem of how to make the neuronal connections and other brain structures, such as white matter tracts, in a full mouse brain traceable using electron microscopy. Electron microscopy is the lowest-level (highest-resolution) imaging method currently feasible, and it can clearly resolve structures thought to be key to memory, such as synapses.
2) The 21CM group, including Robert McIntyre. They have developed a totally new method of preserving a brain that should yield preservation that is both highly practical and technically sound. In a sense it combines the methods discussed by Gwern in his article Plastination vs Cryonics, because it first uses a method traditionally associated with “plastination” (glutaraldehyde perfusion), and then a method traditionally associated with cryonics, i.e. perfusion with a cryoprotective agent followed by low-temperature storage and, presumably, vitrification, which means that damage from ice crystal formation should be avoided and the brain should enter a glassy state.
Apologies if this is still too technical; I’m happy to answer any follow-up questions. Many key steps remain, but this is progress worth celebrating and, in my view, supporting.
A simple solution is to just make doctors/hospitals liable for harm which occurs under their watch, period. Do not give them an out involving performative tests which don’t actually reduce harm, or the like. If doctors/hospitals are just generally liable for harm, then they’re incentivized to actually reduce it.
Can you explain more what you actually mean by this? Do you mean that if someone comes into the hospital and dies, the doctors are responsible, regardless of why they died? If you mean that we figure out whether the doctors are responsible for the patient’s death, then we get back to whether they did everything to prevent it, and one of those things might be ordering lab tests to better figure out the diagnosis; then it seems we’re back to the original problem, i.e. the status quo. I’m just not understanding what you mean.
Here are all the LW comments I have bookmarked at del.icio.us/porejide, aside from the one I took from Grognor below. This is probably overkill for a single comment.
1) This comment by Mitchell Porter, for the idea of looking into the time before there was algebra, seeing how it was invented, and using that as an outside view for our current difficult problems (like consciousness).
2) This comment by Eliezer, which I found interesting because it responded to a critique of Bayesianism by Cosma Shalizi that I also found persuasive, leaving me in a (typical) state of not knowing what to believe.
3) Yvain’s comment preceding his post on slippery slopes and Schelling points; this was useful for my thinking about both of those topics.
4) PO8′s comment about early detection of cancer and whether the purported benefits could be a selection bias. This is an interesting idea and I intend to look into it further when I get more time.
5) Luke M’s comment that the best way to convert someone is to be cool, likeable, and generally sarcastic about the idea you want them to change. Makes sense—most do not respond well to wordy arguments.
6) Vladimir M’s comment that non-mathy pop-physics is unlikely to lead to real insight. I tentatively agree but would like to see some actual data.
7) Carl Shulman’s comment on how a normal prior distribution for charity effectiveness does not map well to reality. This doubles as a demonstration of how difficult Bayesian computations can be and an interesting quantitative look at charity.
8) Yvain’s comment, which I feel bad re-posting because it is ironic and I’m growing increasingly annoyed with irony, but which I include for completeness and because it makes a useful point through its irony.
9) Eugine Nier’s comment that it’s more important for your beliefs to be correct than consistent. He also gives an example of a situation in which there can be a trade-off between the two. I found this useful because I often am biased towards consistency. I made an Anki flashcard based on this comment.
10) jimrandomh’s comment that “drugs” are not a natural category. Useful on both the object (people talk about “drugs” often) and meta (people talk about non-natural categories as if they were natural categories often) levels.
11) komponisto’s comment that “I said I was apathetic. I didn’t say I was ignorant.” I just thought this was clever. Doesn’t seem as good in hindsight. But maybe I’m biased by the relatively low upvotes.
12) Nominull’s comment that “When promoting the truth, if you value the truth, it is wise to use especially those methods that rely on the truth being true. That way, if you have accidentally misidentified the truth, there is an automatic safety valve.” I also made an Anki flashcard for this one.
13) Xachariah’s comment deconstructing the phrase “how are you” in a way I still often think about when I hear that phrase.
14) Richard Kennaway’s comment discussing the trade-offs to engaging in sexual relationships (i.e., lost time and energy for intellectual pursuits).
15) Konkvistador’s comment quoting Peter Thiel saying that as soon as you start discussing why something occurs, people start losing sight of whether it occurs. Useful rhetorically, depending on your goals.
16) Mitchell Porter’s comment about how LW tropes might eventually find political expression, and what that would actually look like. This comment is truly a gem. “Look, politics isn’t a game of hide and seek. Ideological groups have the cohesion that they do because membership in the group depends on openly espousing the ideology.” I also made an Anki flashcard for this. This kind of comment makes me a bit sad that he appears to be spending much of his time looking for the quantum correlates of consciousness, which does not make much sense to me. But from the perspective of “fund people, not projects”, we should give him leeway.
17) Mitchell Porter’s comment dismissing the idea of game theoretic equilibria between intelligences in disjoint worlds. I remember finding it profound when I read it.
18) taw’s comment which contains an interesting history lesson on infanticide.
19) Eliezer’s comment that you should deal with sunk costs by imagining that you were teleported into someone’s life and thinking about what you would do differently if that were the case.
20) Mitchell Porter’s comment speculating on what LW’s role in history would be. Another gem. Not only was this informative, but I hadn’t even thought on that kind of level before.
21) Will Newsome’s comment. I forget why I tagged this, but looking back it has some very interesting bits about theism.
22) hegimonicon’s comment that there is an “enormous gulf between finding out things on your own and being directed to them by a peer.” It’s somewhat counter-intuitive that in many cases your best way to convince someone of something is to suggest general lines of reasoning and let them figure out the specifics for themselves.
Agreed, and this is very similar to what I described in my comment on the other post about this here.
Where I disagree is the sole focus on connection strengths or weights. They are certainly important, but synapses are unlikely to be adequately described by just one parameter. Further, local effects like neuropeptides likely play a role.
This reminds me of a conversation from Dumb and Dumber.
Lloyd: What are the chances of a guy like you and a girl like me… ending up together?
Mary: Well, that’s pretty difficult to say.
Lloyd: Hit me with it! I’ve come a long way to see you, Mary. The least you can do is level with me. What are my chances?
Mary: Not good.
Lloyd: You mean, not good like one out of a hundred?
Mary: I’d say more like one out of a million.
[pause]
Lloyd: So you’re telling me there’s a chance.
Good post.
Thanks for writing this up as a shorter summary Rob. Thanks also for engaging with people who disagree with you over the years.
Here’s my main area of disagreement:
General intelligence is very powerful, and once we can build it at all, STEM-capable artificial general intelligence (AGI) is likely to vastly outperform human intelligence immediately (or very quickly).
I don’t think this is likely to be true. Perhaps it is true of some cognitive architectures, but it does not seem true of the connectionist architectures that are the only known examples of human-like AI intelligence and that are clearly the top AI systems available today. For these architectures, I expect the growth from human-level capabilities to vastly outperforming humans to happen much more slowly than “immediately” or “very quickly”. The quoted claim is basically the AI foom argument.
And I think all of your other points are dependent on this one. Because if this is not true, then humanity will have time to iteratively deal with the problems that emerge, as we have in the past with all other technologies.
My reasoning for not expecting ultra-rapid takeoff speeds is that I don’t view connectionist intelligence as having a sort of “secret sauce” that, once found, can unlock all sorts of other things. I think it is the sort of thing that will increase in a plodding way over time, depending on scaling and other similar inputs that cannot be increased immediately.
In the absence of some sort of “secret sauce”, which seems necessary for sharp left turns and other such scenarios, I view AI capabilities growth as likely to follow the same patterns as other historical growth trends. A hypothetical AI at a human intelligence level would face constraints on the resources it needs in order to improve, such as bandwidth, capital, skills, private knowledge, energy, space, robotic manipulation capabilities, material inputs, cooling requirements, legal and regulatory barriers, social acceptance, cybersecurity concerns, competition with humans and other AIs, and of course value maintenance concerns (i.e. it would have its own alignment problem to solve).
I guess if you are also taking those constraints into consideration, then the disagreement is really just a probabilistic judgment about how much those constraints will slow down AI growth. To me, those constraints each seem massive, and getting around all of them within hours or days would be nearly impossible, no matter how intelligent the AI was.
As a result, rather than indefinite and immediate exponential growth, I expect real-world AI growth to follow a series of sigmoidal curves, each eventually plateauing before different types of growth curves take over to increase capabilities based on different input resources (with all of this overlapping).
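To make the shape of that claim concrete, here is a minimal sketch, entirely my own illustration rather than anything from Rob’s post or a real forecast, of capability growth modeled as a sum of overlapping logistic curves (one per hypothetical input resource) contrasted with a single unbounded exponential. All of the parameters are made up.

```python
import numpy as np

def logistic(t, ceiling, midpoint, rate):
    """One S-curve: growth that saturates at `ceiling`."""
    return ceiling / (1.0 + np.exp(-rate * (t - midpoint)))

t = np.linspace(0, 30, 301)   # time in arbitrary "years"

# Hypothetical overlapping S-curves, each limited by a different input
# (e.g. compute scaling, algorithmic progress, robotics); parameters are illustrative only.
stacked = (logistic(t, ceiling=1.0, midpoint=5, rate=1.0)
           + logistic(t, ceiling=2.0, midpoint=12, rate=0.6)
           + logistic(t, ceiling=4.0, midpoint=22, rate=0.4))

# A single unbounded exponential, for contrast (a "foom"-style curve).
exponential = 0.05 * np.exp(0.4 * t)

for year in (5, 10, 15, 20, 25):
    i = int(np.argmin(np.abs(t - year)))
    print(f"t={year:>2}: stacked sigmoids={stacked[i]:5.2f}, exponential={exponential[i]:12.2f}")
```

The point is only qualitative: each sigmoid flattens as its input resource binds, so the stacked curve grows in overlapping bursts that keep plateauing, whereas the exponential never does.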
One area of uncertainty: I am concerned about there being a spectrum of takeoff speeds, from slow to immediate. In faster takeoff speed worlds, I view there as being more risk of bad outcomes generally, such as a totalitarian state using an AI to take over the world, or even the x-risk scenarios that you describe.
This is why I favor regulations that will be helpful in slower takeoff worlds, such as requiring liability insurance, and will not cause harm by increasing takeoff speed. For example, pausing AGI training runs seems likely to make takeoff speed more discontinuous, due to creating hardware, algorithmic, and digital autonomous agent overhangs, thereby making the whole situation more dangerous. This is why I am opposed to it and dismayed to see so many on LW in favor of it.
I also recognize that I might be wrong about AI takeoff speeds not being fast. I am glad people are working on this, so long as they are not promoting policies that seem likely to make things more dangerous in the slower takeoff scenarios that I consider more likely.
Another area of uncertainty: I’m not sure what is going to happen long-term in a slow takeoff world. I’m confused. While I think that the scenarios you describe are not likely because they are dependent upon there being a fast takeoff and a resulting singleton AI, I find outcomes in slow takeoff worlds extraordinarily difficult to predict.
Overall I feel that AI x-risk is clearly the most likely x-risk of any in the coming years and am glad that you and others are focusing on it. My main hope for you is that you continue to be flexible in your thinking and make predictions that help you to decide if you should update your models.
Here are some predictions of mine:
Connectionist architectures will remain the dominant AI architecture in the next 10 years. Yes, they will be hooked up into larger deterministic systems, but humans will also be able to use connectionist architectures in this way, which will actually just increase competition and decrease the likelihood of ultra-rapid takeoffs.
Hardware availability will remain a constraint on AI capabilities in the next 10 years.
Robotic manipulation capabilities will remain a constraint on AI capabilities in the next 10 years.
1) Insularity: I actually don’t think LW is all that insular. Users often link to science articles, ask for opinions on other writers, discuss films and books, etc. Exactly what set of sites or communities is LW being compared to here when you call it insular?
2) Growth (in terms of users): This is quantifiable. http://www.google.com/trends/?q=less+wrong Looks like a big jump at the beginning of 2011, perhaps when HPMoR took off, and fairly constant since. Anyway, I’m not sure that becoming big in terms of raw users is all that much of a goal, although attracting high-quality users certainly is (at least to me).
3) Growth (in terms of articles): I agree this is a problem. There are weird incentives with karma for main vs discussion for getting promoted and such, which probably turns off people from writing a “series” of posts.
4) Organization of content in useful chunks: Also agree that this is a problem. Though we often talk about Anki, the actual Anki flashcards available are quite poor (as I found when I tried to download ones for cognitive bias). Same with the organization of the so-called sequences.
I think #1 and #2 are not that important. I think #3 and #4 are ultimately site formatting problems. There have been many suggestions made, like tweaks to the karma system, subreddits, etc. Given the ethos of the sequences, I’m surprised that some of these changes haven’t been tested in a trial period to see whether they improve the quality of the content. That seems the obvious play.
Good post; it’s useful to discuss biases that people who frequent this site are especially susceptible to. This happens in US extremist religious groups too; for example, see this article about the subset of people who predicted the apocalypse last year:
It’s been noted by scholars who study apocalyptic groups that believers tend to have analytical mindsets. They’re often good at math. I met several engineers, along with a mathematics major and two financial planners. These are people adept at identifying patterns in sets of data, and the methods they used to identify patterns in the Bible were frequently impressive, even brilliant. Finding unexpected connections between verses, what believers call comparing scripture with scripture, was a way to become known in the group. The essays they wrote explaining these links could be stunningly intricate.
That intricacy was part of the appeal. The arguments were so complex that they were impossible to summarize and therefore very challenging to refute. As one longtime believer, an accountant, told me: “Based on everything we know, and when you look at the timelines, you look at the evidence—these aren’t the kind of things that just happen. They correlate too strongly for it not to be important.” The puzzle was too perfect. It couldn’t be wrong.
This suggests a possible alternative explanation: that analytical types tend (or learn) to enjoy systematizing, especially on topics that will be important to others. As Cosma Shalizi says,
Now, I relish the schadenfreude-laden flavors of a mega-disaster scenario as much as the next misanthropic, science-fiction-loving geek, especially when it’s paired with some “The fools! Can’t they follow simple math?” on the side.
Does anyone know of any AI-related predictions by Hinton?
Here’s the only one I know of—“People should stop training radiologists now. It’s just completely obvious within five years deep learning is going to do better than radiologists because it can get a lot more experience. And it might be ten years but we got plenty of radiologists already.” − 2016, slightly paraphrased
This still seems like a testable prediction: by November 2026, radiologists should be completely replaceable by deep learning methods, at least apart from regulatory requirements for trained physicians.
This is too nihilistic and is not really what experts like Ioannidis are proposing. Better to evaluate the studies (or find sources that evaluate the studies) individually for their sample size and statistical measures, such as whether or not they control for relevant covariates and apply corrections for multiple hypothesis testing.
You can download a video of Ioannidis’ Mar ’11 lecture on nutrition from http://videocast.nih.gov/PastEvents.asp?c=144 (it’s big though, 250 MB). Some notes:
Randomized trials have problems too.
For example, they’ll often inflate the effects by contrasting the most extreme groups (upper vs lower 20%).
Or just basic biases, like the winner’s curse (large effects tend to come from studies with small sample sizes; you can see this by comparing the log of treatment effect vs the log of total sample size in the Cochrane database; a minimal simulation of this appears after these notes) or publication bias (leading to missing data).
Odds ratios in randomized trials also decrease over time.
Generally, Ioannidis wants massive testing via biobanks (sample sizes in the millions), longitudinal measurements, and large-scale global collaborations. These do not necessarily mean only randomized trials; in fact, randomized trials are pretty much impossible for that kind of data set. Epidemiology can work too, it just needs to be done well.
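As referenced in the winner’s curse note above, here is a minimal simulation of that point. It is my own sketch rather than anything from Ioannidis’ lecture; the true effect size, the sample-size range, and the crude “only significant results get published” filter are all assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.2   # assumed true standardized mean difference
sigma = 1.0

def run_trial(n):
    """Simulate one two-arm trial with n subjects per arm; return (estimate, z-score)."""
    treat = rng.normal(true_effect, sigma, n)
    control = rng.normal(0.0, sigma, n)
    est = treat.mean() - control.mean()
    se = sigma * np.sqrt(2.0 / n)
    return est, est / se

# Many hypothetical trials with widely varying sample sizes.
sample_sizes = rng.integers(20, 2000, size=5000)

# Crude publication filter: only "significant" results (z > 1.96) get published.
published = [(n, est)
             for n in sample_sizes
             for est, z in [run_trial(n)]
             if z > 1.96]

small = [est for n, est in published if n < 100]
large = [est for n, est in published if n >= 1000]
print(f"true effect: {true_effect}")
print(f"mean published estimate, small trials (n < 100):   {np.mean(small):.2f}")
print(f"mean published estimate, large trials (n >= 1000): {np.mean(large):.2f}")
```

Under these assumptions, the published small trials overestimate the true effect substantially while the large trials are close to it, which is the pattern the log effect vs log sample size comparison picks up.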
Eliezer, one qualm: you consistently bring up mirror neurons and treat it as prima facie obvious that they are used for action understanding in humans. Unfortunately, most contemporary neuroscientists in the field agree that there is no consistent evidence of this:
http://talkingbrains.blogspot.com/2008/08/eight-problems-for-mirror-neuron-theory.html
That is not to say that humans don’t understand other people’s actions or that we lack an adequate theory of mind! But it does mean that there is no reason to suspect that those complicated cognitive events can be reduced to simply a group of “mirror” neurons. Ramachandran often mentions them too, which also irks me slightly.
Thanks so much for putting this together Mati! If people are interested in cryonics/brain preservation and would like to learn about (my perspective on) the field from a research perspective, please feel free to reach out to me: https://andrewtmckenzie.com/
I also have some external links/essays available here: https://brainpreservation.github.io/
A few comments:
A lot of slow takeoff, gradual capabilities ramp-up, multipolar AGI world type of thinking. Personally, I agree with him that this sort of scenario seems both more desirable and more likely. But this seems to be his biggest area of disagreement with many others here.
The biggest surprise to me was when he said that he thought short timelines were safer than long timelines. The reason for that is not obvious to me. Maybe something to do with contingent geopolitics.
Doesn’t seem great to dismiss people’s views based on psychologizing about them. But, these are off-the-cuff remarks, held to a lower standard than writing.
Great post. Very clear and concise as usual. I recently read an interesting article by Eugene Volokh on slippery slopes focused specifically on gay marriage, which you can find in PDF form here. (If you don’t like PDFs, the title is “Same-Sex Marriage and Slippery Slopes.”) Interestingly, he also discusses the US’s First Amendment as something like a Schelling point, though he doesn’t use the same terminology.
In parts of Europe, they’ve banned Holocaust denial for years and everyone’s been totally okay with it. There are also a host of other well-respected exceptions to free speech, like shouting “fire” in a crowded theater.
From what Volokh says in the article, it seems that many countries aside from the US don’t have as strong of protections for freedom of speech. E.g., in Canada and Sweden ministers have been prosecuted for comments condemning homosexuality.
Here’s a nice recent summary by Mitchell Porter, in a comment on Robin Hanson’s recent article (can’t directly link to the actual comment unfortunately):
Robin considers many scenarios. But his bottom line is that, even as various transhuman and posthuman transformations occur, societies of intelligent beings will almost always outweigh individual intelligent beings in power; and so the best ways to reduce risks associated with new intelligences, are socially mediated methods like rule of law, the free market (in which one is free to compete, but also has incentive to cooperate), and the approval and disapproval of one’s peers.
The contrasting philosophy, associated especially with Eliezer Yudkowsky, is what Robin describes with foom (rapid self-enhancement) and doom (superintelligence that cares nothing for simpler beings). In this philosophy, the advantages of AI over biological intelligence are so great, that the power differential really will favor the individual self-enhanced AI, over the whole of humanity. Therefore, the best way to reduce risks is through “alignment” of individual AIs—giving them human-friendly values by design, and also a disposition which will prefer to retain and refine those values, even when they have the power to self-modify and self-enhance.
Eliezer has lately been very public about his conviction that AI has advanced way too far ahead of alignment theory and practice, so the only way to keep humanity safe is to shut down advanced AI research indefinitely—at least until the problems of alignment have been solved.
ETA: Basically I find Robin’s arguments much more persuasive, and have ever since those heady days of 2008 when they had the “Foom” debate. A lot of people agreed with Robin, although SIAI/MIRI hasn’t tended to directly engage with those arguments for whatever reason.
This is a very common outsider view of LW/SIAI/MIRI-adjacent people: that they are “foomers” whose views follow logically from foom. But a lot of people don’t agree that foom is likely, because this is not how growth curves have worked for nearly anything historically.
When you write “the AI” throughout this essay, it seems like there is an implicit assumption that there is a singleton AI in charge of the world. Given that assumption, I agree with you. But if that assumption is wrong, then I would disagree with you. And I think the assumption is pretty unlikely.
No need to relitigate this core issue everywhere, just thought this might be useful to point out.
Extremely interesting article with a number of good points!
Is there any chance that you could expand upon the driving objection? Why, in your model of sleep and the cognitive effects of sleep, does getting little sleep increase your risk of getting into a car accident when driving?
Another point: I find Mendelian randomization studies fairly convincing for the long-term effects of sleep. For example, here’s one based on UK Biobank data suggesting that sleep traits do not have an interaction with Alzheimer’s disease risk: https://academic.oup.com/ije/article/50/3/817/5956327
Dick Teresi, The Undead