The odds aren’t good, but here’s hoping.
Here’s one from a friend of mine. It’s not exactly an argument against AI risk, but it is an argument that the problem may be less urgent than it’s traditionally presented.
1. There’s plenty of reason to believe that Moore’s Law will slow down in the near future.
2. Progress on AI algorithms has historically been rather slow.
3. AI programming is an extremely high-level cognitive task, and will likely be among the hardest things to get an AI to do.
These three things together suggest that there will be a ‘grace period’ between the development of general agents, and the creation of a FOOM-capable AI.
Our best guess for the duration of this grace period is on the order of multiple decades.
During this time, general-but-dumb agents will be widely used for economic purposes.
These agents will have exactly the same perverse instantiation problems as a FOOM-capable AI, but on a much smaller scale. When they start trying to turn people into paperclips, the fallout will be limited by their low intelligence.
This will ensure that the problem is taken seriously, and these dumb agents will make it much easier to solve FAI-related problems, by giving us an actual test bed for our ideas where they can’t go too badly wrong.
This is a plausible-but-not-guaranteed scenario for the future, which feels much less grim than the standard AI-risk narrative. You might be able to extend it into something more robust.
Sorry, I probably should have been more specific. What I should really say is ‘how important the unique fine-grained structure of white matter is.’
If the structure is relatively generic between brains, and doesn’t encode identity-crucial info in its microstructure, we may be able to fill it in using data from other brains in the future.
Technically, it’s the frogs and fish that routinely freeze through the winter. Of course, they evolved to pull off that stunt, so it’s less impressive.
We’ve cryopreserved a whole rabbit kidney before, and were able to thaw and use it as a rabbit’s sole kidney.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2781097/
We’ve also shown that nematode memory can survive cryopreservation:
The trouble is that larger chunks of tissue (like, say, a whole mouse or a human brain) are more prone to thermal cracking at very low temperatures. Until we solve that problem, nobody’s coming back short of brain emulation or nanotechnology.
To rebut: sociopaths exist.
I feel like the dog brain studies are at least fairly strong evidence that quite a bit of information is preserved. The absence of independent validation is largely down to the poor mainstream perception of cryonics. It’s not that Alcor is campaigning to cover up contrary studies; it’s that nobody cares enough to do them. Vis-à-vis the use of dogs, there actually aren’t that many animals with brain volume comparable to humans. I mean, if you want to find an IRB that’ll let you decorticate a giraffe, be my guest. Dogs are a decent analog under the circumstances. They’re not so much smaller that you’d expect drastically different results.
In any case, if this guy wants to claim that cryonics doesn’t preserve fine-grained brain detail, he can do the experiment and prove it. You can’t refute a study’s claims just by pointing at it and shouting ‘the authors might be biased.’ You need to provide either serious methodological flaws or an actual failure to replicate.
The traditional argument is that there’s a vast space of possible optimization processes, and the vast majority of them don’t have humanlike consciousness or ego or emotions. Thus, we wouldn’t assign them human moral standing. AIXI isn’t a person and never will be.
A slightly stronger argument is that there’s no way in hell we’re going to build an AI that has emotions or ego or the ability to be offended by serving others wholeheartedly, because that would be super dangerous, and defeat the purpose of the whole project.
Your lawnmower isn’t your slave. “Slave” prejudicially loads the concept with anthropocentric morality that does not actually exist.
According to the PM I got, I had the most credible vegetarian entry, and it was ranked as much more credible than my actual (meat-eating) beliefs. I’m not sure how I feel about that.
But that might be quite a lot of detail!
In the example of curing cancer, your computational model of the universe would need to include a complete model of every molecule of every cell in the human body, and how it interacts under every possible set of conditions. The simpler you make the model, the more you risk cutting off all of the good solutions with your assumptions (or accidentally creating false solutions due to your shortcuts). And that’s just for medical questions.
I don’t think it’s going to be possible for an unaided human to construct a model like that for a very long time, and possibly not ever.
I think there’s a question of how we create an adequate model of the world for this idea to work. It’s probably not practical to build one by hand, so we’d likely need to hand the task over to an AI.
Might it be possible to use the modelling module of an AI in the absence of the planning module (or with only a weak planning module)? If so, you might be able to feed it a great deal of data about the universe, and construct a model that could then be “frozen” and used as the basis for the AI’s “virtual universe.”
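Here’s a minimal toy sketch of what that separation might look like, assuming the “modelling module” can be reduced to a simple supervised dynamics model and the planner to a brute-force search over a few candidate actions. Everything here (the WorldModel class, the linear dynamics, the freeze step) is a hypothetical illustration of the idea, not a claim about how a real system would be built.

```python
# Toy sketch: a world model that is fit from data, then frozen so that
# a (weak) planner can only query it, never modify it. All names here
# are hypothetical illustrations of the idea described above.
import numpy as np

class WorldModel:
    def __init__(self):
        self.weights = None
        self.frozen = False

    def fit(self, states, actions, next_states):
        # Learn next_state ~ [state, action] @ W by least squares.
        if self.frozen:
            raise RuntimeError("model is frozen; no further learning allowed")
        inputs = np.hstack([states, actions])
        self.weights, *_ = np.linalg.lstsq(inputs, next_states, rcond=None)

    def freeze(self):
        # After this point the model is query-only: no planning process
        # can update it or push its beliefs around.
        self.frozen = True

    def predict(self, state, action):
        # A pure "what-if" query: no goals, no optimization, just prediction.
        return np.concatenate([state, action]) @ self.weights

# Fit the model from observed transitions, then freeze it before any
# planner ever touches it.
rng = np.random.default_rng(0)
S = rng.normal(size=(500, 3))            # observed states
A = rng.normal(size=(500, 1))            # observed actions
S_next = S * 0.9 + A * 0.5 + rng.normal(scale=0.01, size=S.shape)  # toy dynamics
model = WorldModel()
model.fit(S, A, S_next)
model.freeze()

# A deliberately weak "planner": brute-force over three candidate actions,
# scoring each against the frozen model. It can use the model, but it
# cannot reshape it.
candidates = [np.array([a]) for a in (-1.0, 0.0, 1.0)]
state, goal = np.zeros(3), np.ones(3)
best = min(candidates,
           key=lambda a: np.sum((model.predict(state, a) - goal) ** 2))
print("chosen action:", best)
```

The property doing the work here is that freeze() cuts the causal link between planning and modelling: the planner can query the frozen model as much as it likes, but nothing it does can alter the model’s picture of the world.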
Have you considered coating your fingers with capsaicin to make scratching your mucous membranes immediately painful?
(Apologies if this advice is unwanted—I have not experienced anything similar, and am just spitballing).
For Omnivores:
Do you think the level of meat consumption in America is healthy for individuals? Do you think it’s healthy for the planet?
Meat is obviously healthy for individuals. We evolved to eat as much of it as we could get. Many nutrients seem to be very difficult to obtain in sufficient, bio-available form from an all-vegetable diet. I just suspect most observant vegans are substantially malnourished.
On the planet side of things, meat is an environmental disaster. The methane emissions are horrifying, as is the destruction of rainforest. Hopefully, lab-grown meat allows us to switch to an eco-friendly alternative.
How do you feel about factory farming? Would you pay twice as much money for meat raised in a less efficient (but “more natural”) way?
Factory farming is necessary to continue to feed the world. I don’t care about “natural”, but I’d pay extra for food from animals that had been genetically engineered to be happy and extremely stupid/near-comatose, to reduce total suffering-per-calorie. This would be more effective and less costly than switching to free-range.
Are there any animals you would (without significantly changing your mind) never say it was okay to hunt/farm and eat? If so, what distinguishes these animals from the animals which are currently being hunted/farmed?
Great apes, cetaceans, and a few birds. The range of animal intelligence is extremely broad. I find it very unlikely that chickens have anything recognizable as a human-like perception of the world. I think the odds are better than even that dolphins, chimps, and parrots do.
If you’re interested, the animal I’m most on the fence about is pigs.
If all your friends were vegetarians, and you had to go out of your way to find meat in a similar way to how vegans must go out of their way right now, do you think you’d still be an omnivore?
Yes. I cook most of my own meals, and my meat consumption would continue even in the absence of social eating.
For Vegetarians:
If there was a way to grow meat in a lab that was indistinguishable from normal meat, and the lab-meat had never been connected to a brain, do you expect you would eat it? Why/why not?
I obviously have no moral problem with that. That would be fantastic. However, I probably wouldn’t eat the lab meat. I find the texture / mouth-feel of most meat pretty gross, and lab-grown meat would be significantly more expensive than my current diet. Since microbiome acclimation means that resuming eating meat could make me very sick for a while, I’m not sure I see the profit in it.
I am very interested in synthetic milk, cheese, and eggs, however.
Indigenous hunter-gatherers across the world get around 30 percent of their annual calories from meat. Chimpanzees, our closest non-human relatives, eat meat. There are arguments that humans evolved to eat meat and that it’s natural to do so. Would you disagree? Elaborate.
Obviously, humans evolved to be omnivorous. However, the paleo people are lunatics if they think we ate as much meat as they do (much less of the hyper-fatty livestock we’ve bred over the last couple of millennia). Meat was most likely a rare supplement to the largely-vegetarian diets of ancestral peoples.
Regardless, none of this is the point. Today, it’s perfectly possible to eat a vegan diet and be healthy (see: Soylent). You can’t avoid the obvious moral horror of eating the flesh of semi-sentient animals like pigs by shouting the word ‘natural’ and running away.
Do you think it’s any of your business what other people eat? Have you ever tried (more than just suggesting it or leading by example) to get someone to become a vegetarian or vegan?
Only if they bring it up first. I do think we have a moral obligation to try to reduce animal suffering, but harassing my friends isn’t actually helping the cause in any way, and might be hurting. I do try to corrupt my meat-eating friends who are having second thoughts about it, but, you know, in a friendly way.
What do you think is the primary health risk of eating meat (if any)?
Parasites, probably. Meat in moderation clearly isn’t especially bad for you. It’s just, you know, wrong.
I seriously doubt that. Plenty of humans want to kill everyone (or, at least, large groups of people). Very few succeed. These agents would be a good deal less capable.
(1) Intelligence is an extendible method that enables software to satisfy human preferences.
(2) If human preferences can be satisfied by an extendible method, humans have the capacity to extend the method.
(3) Extending the method that satisfies human preferences will yield software that is better at satisfying human preferences.
(4) Magic happens.
(5) There will be software that can satisfy all human preferences perfectly but which will instead satisfy orthogonal preferences, causing human extinction.
This is deeply silly. The thing about arguing from definitions is that you can prove anything you want if you just pick a sufficiently bad definition. That definition of intelligence is a sufficiently bad definition.
EDIT:
To extend this rebuttal in more detail:
I’m going to accept the definition of ‘intelligence’ given above. Now, here’s a parallel argument of my own:
(1) Entelligence is an extendible method for satisfying an arbitrary set of preferences that are not human preferences.
(2) If these preferences can be satisfied by an extendible method, then the entelligent agent has the capacity to extend the method.
(3) Extending the method that satisfies these non-human preferences will yield software that’s better at satisfying non-human preferences.
(4) The inevitable happens.
(5) There will be software that will satisfy non-human preferences, causing human extinction.
Now, I pose to you: how do we make sure that we’re making intelligent software, and not “entelligent” software, under the above definitions? Obviously, this puts us back to the original problem of how to make a safe AI.
The original argument is rhetorical sleight of hand. The given definition of intelligence implicitly assumes that the problem doesn’t exist and that all AIs will be safe, and then goes on to prove that all AIs will be safe.
It’s really, fundamentally silly.
That sounds fascinating. Could you link to some non-paywalled examples?
Just an enthusiastic amateur who’s done a lot of reading. If you’re interested in hearing a more informed version of the pro-cryonics argument (and seeing some of the data) I recommend the following links:
On ischemic damage and the no-reflow phenomenon: http://www.benbest.com/cryonics/ischemia.html
Alcor’s research on how much data is preserved by their methods:
http://www.alcor.org/Library/html/braincryopreservation1.html
http://www.alcor.org/Library/html/newtechnology.html
http://www.alcor.org/Library/html/CryopreservationAndFracturing.html
Yudkowsky’s counter-argument to the philosophical issue of copies vs. “really you”: http://lesswrong.com/lw/r9/quantum_mechanics_and_personal_identity/
Interesting! I didn’t know that, and that makes a lot of sense.
If I were to restate my objection more strongly, I’d say that parrots also seem to exceed chimps in language capabilities (chimps having six billion cortical neurons). The reason I didn’t bring this up originally is that chimp language research is a horrible, horrible field full of a lot of bad science, so it’s difficult to be too confident in that result.
Plenty of people will tell you that signing chimps are just as capable as Alex the parrot—they just need a little bit of interpretation from the handler, and get too nervous to perform well when the handler isn’t working with them. Personally, I think that sounds a lot like why psychics suddenly stop working when James Randi shows up, but obviously the situation is a little more complicated.
I made serious progress on a system for generating avatar animations based on the motion of a VR headset. It still needs refinement, but I’m extremely proud of what I’ve got so far.
Straw man. Connectomics is relevant to explaining the concept of uploading to the layman. Few cryonics proponents actually believe it’s all you need to know to reconstruct the brain.
The fact that someone can be dead for several hours and then be resuscitated, or have their brain substantially heated or cooled without dying, puts a theoretical limit on how sensitive your long-term brain state can possibly be to these sorts of transient details of brain structure. It seems very likely that long-term, identity-related brain state is stored almost entirely in relatively stable neurological structures. I don’t think this is particularly controversial, neurobiologically.
This is not, to the best of my knowledge, true, and he offers no evidence for this claim. Cryonics does a very good job of preserving a lot of features of brain tissue. There is some damage done by the cryoprotectants and thermal shearing, but it’s specific and well-characterized damage, not total structural disruption. Although I will say that ice crystal formation in the deep brain caused by the no-reflow problem is a serious concern. Whether that’s a showstopper depends on how important you think the fine-grained structure of white matter is.
Bad philosophy on top of bad neuroscience!