Here’s another good reason why it’s best to try out your first post topic on the Open Thread. You’ve been around here for less than ten days, and that’s not long enough to know what’s been discussed already, and what ideas have been established to have fatal flaws.
You’re being downvoted because, although you haven’t come across the relevant discussions yet, your idea falls in the category of “naive security measures that fail spectacularly against smarter-than-human general AI”. Any time you have the idea of keeping something smarter than you boxed up, let alone trying to dupe a smarter-than-human general intelligence, it’s probably reasonable to ask whether a group of ten-year-old children could pull off the equivalent ruse on a brilliant adult social manipulator.
Again, it’s a pretty brutal karma hit you’re taking for something that could have been fruitfully discussed on the Open Thread, so I think I’ll need to make this danger much more prominent on the welcome page.
I’m not too concerned about the karma—more the lack of interesting replies and the general unjustified holier-than-thou attitude. This idea is different from “that alien message”, and I didn’t find a discussion of it on LW (not that one doesn’t exist—I just didn’t find it).
This is not my first post.
I posted this after I brought up the idea in a comment which at least one person found interesting.
I have spent significant time reading LW and associated writings before I ever created an account.
I’ve certainly read the AI-in-a-box posts, and the posts theorizing about the nature of smarter-than-human intelligence. I had also previously read “that alien message”, and since this is similar I should have linked to it.
I have a knowledge background that leads to somewhat different conclusions about (A) the nature of intelligence itself, (B) what ‘smarter’ even means, and so on.
Different backgrounds, different assumptions; so I listed my background and starting assumptions, as they differ somewhat from the LW norm.
Back to 3:
Remember, the whole plot device of “that alien message” revolved around a large and obvious grand reveal by the humans. If information can only flow into the sim world once (during construction), and then ever after can only flow out of the sim world, that plot device doesn’t work.
Trying to keep an AI boxed up where the AI knows that you exist is a fundamentally different problem than a box where the AI doesn’t even know you exist, doesn’t even know it is in a box, and may provably not even have enough information to know for certain whether it is in a box.
For example, I think the simulation argument holds water (we are probably in a sim), but I don’t believe there is enough information in our universe for us to discover much of anything about the nature of a hypothetical outside universe.
This of course doesn’t prove that my weak or strong Mind Prison conjectures are correct, but it at least reduces the problem down to “can we build a universe sim as good as this?”
I wish I could vote up this comment more than once.
Thanks. :)
My apologies on assuming this was your first post, etc. (I still really needed to add that bit to the Welcome post, though.)
In short, faking atheism requires a very simple world-seed (anything more complicated screams “designer of a certain level” once they start thinking about it). I very much doubt we could find such a seed which could be run from start to civilization in a feasible number of computations. (If we cheated and ran forward approximations to the simulation, for instance, they could find the traces of the difference.)
Similarly, faking Omegahood requires a very finely optimized programmed world, because one thing Omega is less likely to do than a dumb creator is program sloppily.
I very much doubt we could find such a seed which could be run from start to civilization in a feasible number of computations.
We have the seed—it’s called physics, and we certainly don’t need to run it from start to civilization!
On the one hand, I was discussing sci-fi scenarios that have an intrinsic explanation for a small human population (such as a sleeper-ship colony encountering a new system).
On the other hand, you can do large partial simulations of our world, and if you don’t have enough AIs to play all the humans you could use simpler simulacra to fill in.
Eventually, with enough Moore’s Law, you could run a large world on its own, and run it considerably faster than real time. But you still wouldn’t need to start that long ago—maybe only a few generations back.
(If we cheated and ran forward approximations to the simulation, for instance, they could find the traces of the difference.)
Could != would. You grossly underestimate how impossibly difficult this would be for them.
Again—how do you know you are not in a sim?
You misunderstand me. What I’m confident about is that I’m not in a sim written by agents who are dumber than me.
How do you measure that intelligence?
What I’m trying to show is a set of techniques by which a civilization could spawn simulated sub-civilizations such that the total effective intelligence capacity is mainly in the simulations. That doesn’t have anything to do with the maximum intelligence of individuals in the sim.
Intelligence is not magic. It has strict computational limits.
A small population of guards can control a much larger population of prisoners. The same principle applies here. It’s all about leverage. And creating an entire sim universe is a massive, massive lever of control. Ultimate control.
Not even agents with really fast computers?
You’re right, of course. I’m not in a sim written by agents dumber than me in a world where computation has noticeable costs (negentropy, etc).
In short, faking atheism requires a very simple world-seed (anything more complicated screams “designer of a certain level” once they start thinking about it). I very much doubt we could find such a seed which could be run from start to civilization in a feasible number of computations. (If we cheated and ran forward approximations to the simulation, for instance, they could find the traces of the difference.)
Similarly, faking Omegahood requires a very finely optimized programmed world, because one thing Omega is less likely to do than a dumb creator is program sloppily.
Well, the simulators could make the rules of the universe extremely simple, rather than close approximations to something like our universe, where the inhabitants could see the approximations and catch on.
Or you could use close monitoring, possibly by another less dangerous, less powerful AI trained to detect bugs, bug abuse, and AIs that are catching on. Humans would also monitor the sim. The most important thing is that the AIs are misled as much as possible and given little or no input that could give them a picture of the real world and their actual existence.
And lastly, they should be kept dumb. A large number of not-too-bright AIs is far less dangerous, easier to monitor, and faster to simulate than a massive singular AI. The large group is also a closer approximation to humanity, which I believe was the original intent of this simulation.
Well, the simulators could make the rules of the universe extremely simple, rather than close approximations to something like our universe, where the inhabitants could see the approximations and catch on.
So actually, even today with computer graphics we have the tech to trace light and approximate all of the important physical interactions down to the level where a human observer in the sim could not tell the difference. It’s too computationally expensive today, but it is only, say, a decade or two away, perhaps less.
You can’t see the approximations because you just don’t have enough sensing resolution in your eyes, and because in this case these beings will have visual systems that have grown up inside the Matrix.
It will be much easier to fool them. It’s actually not even necessary to strictly approximate our reality—if the AI visual systems have grown up completely in the Matrix, they will be tuned to the statistical patterns of the Matrix, not our reality.
And lastly, they should be kept dumb.
I don’t see why they need to be kept dumb. Intelligent humans (for the most part) do not suddenly go around thinking that they are in a sim and trying to break free. It’s not really a function of intelligence.
If the world is designed to be as realistic and, more importantly, as consistent as our universe, the AIs will not have enough information to speculate about our universe. It would be pointless—like arguing about god.
So actually, even today with computer graphics we have the tech to trace light and approximate all of the important physical interactions down to the level where a human observer in the sim could not tell the difference. It’s too computationally expensive today, but it is only, say, a decade or two away, perhaps less.
Maybe not on your laptop, but I think we do have the resources today to pull it off, especially considering that the entities in the simulation do not see time pass in the real world, whether the simulation pauses for a day to compute some massive event in the simulated world or skips through a century in seconds because the entities in the simulation weren’t doing much.
And this is why I keep bringing up using AI to create/monitor the simulation in the first place. A massive project like this undertaken by human programmers is bound to contain dangerous bugs. More importantly, humans won’t be able to optimize the program very well. The methods we have today for improving program performance, like hashing, caching, pipelining, etc., are not optimal by any means. You can safely let an AI in a box optimize the program without it exploding or anything.
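To make the time-decoupling point above concrete, here is a minimal toy sketch (nothing in it corresponds to a real simulator, and the step costs are invented numbers): simulated time advances only as update steps complete, so the host can grind for ages on a massive in-world event or fast-forward through quiet stretches, and nothing inside the sim can tell the difference.

```python
import random
import time

def step_one_sim_day(rng):
    """Stand-in for one simulated day of physics; host cost varies wildly."""
    massive_event = rng.random() < 0.01           # rare, expensive in-world event
    host_seconds = 0.05 if massive_event else 0.0001
    time.sleep(host_seconds)                      # host grinds; sim time is unaffected

def run(sim_days=2000, seed=0):
    rng = random.Random(seed)
    wall_start = time.time()
    for _ in range(sim_days):
        step_one_sim_day(rng)                     # each step is exactly one sim day
    wall = time.time() - wall_start
    print(f"{sim_days} simulated days took {wall:.2f} s of real time,")
    print("but inside the sim every day was subjectively the same length.")

if __name__ == "__main__":
    run()
```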
I don’t see why they need to be kept dumb. Intelligent humans (for the most part) do not suddenly go around thinking that they are in a sim and trying to break free. It’s not really a function of intelligence.
If the world is designed to be as realistic and, more importantly, as consistent as our universe, the AIs will not have enough information to speculate about our universe. It would be pointless—like arguing about god.
“Dumb” as in at human level or lower, as opposed to a massive singular super-entity. It is much easier to monitor the thoughts of a bunch of AIs than a single one. Arguably it would still be impossible, but at the very least you know they can’t do much on their own and would have to communicate with one another, communication you can monitor. Multiple entities are also very similar and redundant, saving you a lot of computation.
So you can make them all as intelligent as Einstein, but not as intelligent as Skynet.
Whether the simulation pauses for a day to compute some massive event in the simulated world or skips through a century in seconds because the entities in the simulation weren’t doing much.
This is an interesting point: time flow would be quite nonlinear. But the simulation’s utility is closely correlated with its speed; in fact, if we can’t run it at least at real-time average speed, it’s not all that useful.
You bring me round to an interesting idea, though: in the simulated world the distribution of intelligence could be much tighter or shifted compared to our world.
I expect it will be very interesting and highly controversial in our world when we, say, reverse engineer the brain and perhaps find a large variation in the computational cost of AI mind-sims of equivalent capability. A side effect of reverse engineering the brain will be a much more exact and precise understanding of IQ-type correlates, for example.
And this is why I keep bringing up using AI to create/monitor the simulation in the first place.
This is surely important, but it defeats the whole point if the monitor AI approaches the complexity of the sim AIs. You need a multiplier effect.
And just as a small number of guards can control a huge prison population in a well-designed prison, the same principle should apply here—a smaller intelligence (one that controls the sim directly) could indirectly control a much larger total sim intelligence.
“Dumb” as in at human level or lower, as opposed to a massive singular super-entity.
A massive singular super-entity, as sometimes implied on this site, I find not only improbable but actually physically impossible (at least until you get to black-hole-computer levels of technology).
Arguably it would still be impossible, but at the very least you know they can’t do much on their own and they would have to communicate with one another, communication you can monitor.
I think you underestimate how (relatively) easy the monitoring aspect would be (compared to other aspects). Combine dumb-AI systems to automatically turn internal monologue into text (or audio if you wanted), put it into future Google-type search and indexing algorithms—and you have the entire sim-world’s thoughts at your fingertips. Using this kind of lever, one human-level intelligent operator could monitor a vast number of other intelligences.
Heck, the CIA is already trying to do a simpler version of this today.
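As a rough illustration of that leverage claim, here is a toy sketch of the “flag and review” step only; the watch-list, the agent names, and the assumption that transcription has already happened upstream are all invented for the example.

```python
# Toy sketch of the "one operator monitors many minds" lever described above.
# Illustrative only: in a real pipeline the transcription step would be its own
# (dumb) model and the index a proper search engine; the watch-list is made up.

WATCH_LIST = ("simulation", "outside world", "creator", "escape", "in a box")

def flag_thoughts(transcripts):
    """transcripts: dict of entity id -> list of transcribed thought strings.
    Returns only the (entity, thought) pairs that mention a watched concept,
    so one human operator reviews a trickle instead of every thought."""
    flagged = []
    for entity, thoughts in transcripts.items():
        for thought in thoughts:
            if any(term in thought.lower() for term in WATCH_LIST):
                flagged.append((entity, thought))
    return flagged

example = {
    "agent-17": ["lunch was good", "what if this world is a simulation?"],
    "agent-42": ["need to finish the proof", "I feel like I live in a box"],
}
for entity, thought in flag_thoughts(example):
    print(entity, "->", thought)
```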
So you can make them all as intelligent as Einstein, but not as intelligent as Skynet.
A Skynet-type intelligence is a fiction anyway, and I think if you really look at the limits of intelligence and AGI, a bunch of accelerated high-IQ human-ish brains are much closer to those limits than most here would give credence to.
A Skynet-type intelligence is a fiction anyway, and I think if you really look at the limits of intelligence and AGI, a bunch of accelerated high-IQ human-ish brains are much closer to those limits than most here would give credence to.
What—no Jupiter brains?!? Why not? Do you need a data center tour?
I like the data center tour :) - I’ve actually used that in some of my posts elsewhere.
And no, I think Jupiter Brains are ruled out by physics.
The locality of physics—the speed of light—really limits the size of effective computational systems. You want them to be as small as possible.
Given the choice between a planet sized computer and one that was 10^10 smaller, the latter would probably be a better option.
The maximum bits and thus storage is proportional to the mass, but the maximum efficiency is inversely proportional to the radius. Larger systems lose efficiency in transmission, have trouble radiating heat, and waste vast amounts of time because of speed-of-light delays.
As an interesting side note, in three very separate lineages (human, elephant, cetacean), mammalian brains all grew to around the same size and then stopped. Most likely because of diminishing returns. Human brains are expensive for our body size, but whales have similar-sized brains and it would be very cheap for them to make them bigger—but they don’t. It’s a scaling issue: any bigger and the speed loss doesn’t justify the extra memory.
There are similar scaling issues with body sizes. Dinosaurs and prehistoric large mammals represent an upper limit—mass increases with volume, but strength against shearing stress increases only with cross-sectional area—so eventually the body becomes too heavy for any reasonable bones to support.
Similar 3d/2d scaling issues limited the maximum size of tanks, and they also apply to computers (and brains).
The maximum bits and thus storage is proportional to the mass, but the maximum efficiency is inversely proportional to the radius. Larger systems lose efficiency in transmission, have trouble radiating heat, and waste vast amounts of time because of speed-of-light delays.
So: why think memory and computation capacity aren’t important? The data centre that will be needed to immerse 7 billion humans in VR is going to be huge—and why stop there?
The 22 milliseconds it takes light to get from one side of the Earth to the other is tiny—light speed delays are a relatively minor issue for large brains.
For heat, ideally, you use reversible computing, digitise the heat and then pipe it out cleanly. Heat is a problem for large brains—but surely not a show-stopping one.
The demand for extra storage seems substantial. Do you see any books or CDs when you look around? The human brain isn’t big enough to handle the demand, and so it outsources its storage and computing needs.
So: why think memory and computation capacity aren’t important?
So memory is important, but it scales with mass, and that usually scales with volume, so there is a tradeoff. And computational capacity is actually not directly related to size; it’s more related to energy. But of course you can only pack so much energy into a small region before it melts.
The data centre that will be needed to immerse 7 billion humans in VR is going to be huge—and why stop there?
Yeah—I think the size argument is more against a single big global brain. But sure, data centers with huge numbers of AIs eventually—that makes sense.
The 22 milliseconds it takes light to get from one side of the Earth to the other is tiny—light speed delays are a relatively minor issue for large brains.
Hmm, 22 milliseconds? Light travels a little slower through fiber, and there are always delays. But regardless, the bigger problem is that you are assuming the slow human thought rate of about 100 Hz. If you want to think at the limits of silicon and run thousands or millions of times accelerated, then suddenly the subjective speed of light becomes very slow indeed.
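A quick back-of-the-envelope check of that last point, taking the 22 ms one-way figure from the comment above at face value and treating the speed-up factors as arbitrary examples:

```python
# Rough check of the "subjective speed of light" point.
# 22 ms is the quoted one-way light time across the Earth; fibre, routing, and
# switching would make the real figure somewhat worse.
one_way_s = 0.022

for speedup in (1, 1_000, 1_000_000):
    subjective_s = one_way_s * speedup
    print(f"{speedup:>9,}x accelerated thought: a cross-planet signal feels like "
          f"{subjective_s:.3g} subjective seconds ({subjective_s / 3600:.2g} hours)")
```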
A massive singular super-entity, as sometimes implied on this site, I find not only improbable but actually physically impossible (at least until you get to black-hole-computer levels of technology).
A Skynet-type intelligence is a fiction anyway, and I think if you really look at the limits of intelligence and AGI, a bunch of accelerated high-IQ human-ish brains are much closer to those limits than most here would give credence to.
On the one hand you have extremely limited AIs that can’t communicate with each other. They would be extremely redundant and waste a lot of resources, because each would have to go through the exact same process and discover the exact same things on its own.
On the other hand you have a massive singular AI made up of thousands of computing systems, each of which is devoted to storing separate information and doing a separate task. Basically it’s a human-like brain distributed over all available resources. This will inevitably fail as well; operations done on one side of the system could be light-years away from where the data is needed (we don’t know how big the AI will get or what the constraints of its situation will be, but AGI has to adapt to every possible situation).
The best is a combination of the two: as much communication through the network as possible, but with areas of resources specialized for different purposes. This could lead to Skynet-like intelligences, or it could lead to a very individualistic AI society where the AI isn’t a single entity but a massive variety of individuals in different states working together. It probably wouldn’t be much like human civilization, though. Human society evolved to fit a variety of restrictions that aren’t present for AI. That means it could adopt a very different structure; stuff like morals (as we know them, anyway) may not be necessary.
A Skynet-type intelligence is a fiction anyway, and I think if you really look at the limits of intelligence and AGI, a bunch of accelerated high-IQ human-ish brains are much closer to those limits than most here would give credence to.
I can confirm the part about the credence. I think this kind of reverence for the efficacy of the human brain is comical.
Human technological civilisation exploded, roughly speaking, in an evolutionary heartbeat from the time it became capable of doing so. The chance that this capability opened up at just the moment when human intelligence was at even the maximum that DNA-encoded, ape-descended brains could reach is negligible.
I think this kind of reverence for the efficacy of the human brain is comical.
The acknowledgement and analysis of the efficacy of the single practical example of general intelligence that we do have does not imply reverence. Efficacy is a relative term. Do we have another example of a universal intelligence to compare to?
Perhaps you mention efficacy in comparison to a hypothetical optimal universal intelligence. We have only AIXI and its variants, which are optimal only in terms of maximum intelligence in the limit, but are grossly inferior in terms of practicality and computational efficacy.
There is a route to analyzing the brain’s efficacy: it starts with analyzing it as a computational system and comparing its performance to the best known algorithms.
The problem is the brain has a circuit with ~10^14-10^15 circuit elements (about the same amount of storage), and it only cycles at around 100 Hz. That is 10^16 to 10^17 net switches/second.
A current desktop GPU has > 10^9 circuit elements and a speed over 10^9 cycles per second. That is > 10^18 net switches/second.
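Writing out the arithmetic behind those two estimates (they are the order-of-magnitude figures from the comment above, not measurements):

```python
# Net switching rates implied by the estimates above (order of magnitude only).
brain_elements = (1e14, 1e15)    # circuit elements / synapses, as estimated above
brain_rate_hz = 100              # ~100 Hz cycle rate

gpu_elements = 1e9               # circuit elements on a then-current desktop GPU
gpu_rate_hz = 1e9                # ~1 GHz clock

brain_switches = tuple(n * brain_rate_hz for n in brain_elements)   # 1e16 - 1e17 /s
gpu_switches = gpu_elements * gpu_rate_hz                           # ~1e18 /s

print(f"brain: {brain_switches[0]:.0e} to {brain_switches[1]:.0e} switches/s")
print(f"GPU:   {gpu_switches:.0e} switches/s")
```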
And yet we have no algorithm, running even on a supercomputer, which can beat the best humans in Go. Let alone read a book, pilot a robotic body at human level, write a novel, come up with a funny joke, patent an idea, or even manage a McDonald’s.
For one particular example, take the case of the game Go and compare to potential parallel algorithms that could run on a 100 Hz computer, have zero innate starting knowledge of Go, and can beat human players simply by learning the game.
Go is one example, but if you go from checkers to chess to Go and keep going in that direction, you get into the large exponential search spaces where the brain’s learning algorithms appear to be especially efficient.
Human technological civilisation exploded roughly speaking in an evolutionary heartbeat from the time it became capable of doing so
Your assumption seems to be that civilization and intelligence are somehow coded in our brains?
According to the best current theory I have found, our brains are basically just upsized ape brains with one new, extremely important trick: we became singing apes (a few other species sing), but then got a lucky break when the vocal control circuit for singing connected to a general simulation-thought circuit (the task-negative and task-positive paths), thus allowing us to associate song patterns with visual/audio objects.
It’s also important to point out that some songbirds appear to be just on the cusp of this capability, with much smaller brains. It’s not really a size issue.
Technology and all that is a result of language—memetics—culture. It’s not some miracle of our brains. They appear to be just large ape brains with perhaps one new critical trick.
Some whale species have much larger brains and in some sense probably have a higher intrinsic genetic IQ. But this doesn’t really matter, because intelligence depends on memetic knowledge.
If Einstein had been a feral child raised by wolves, he would have had the exact same brain but would have been literally mentally retarded on our scale of intelligence.
Genetics can limit intelligence, but it doesn’t provide it.
The chance that this capability opened up at just the moment when human intelligence was at even the maximum that DNA encoded ape descended brains could reach is negligible.
In three separate lineages—whales, elephants, and humans—the mammalian brain grew to about the same upper capacity and then petered out (100 to 200 billion neurons). The likely hypothesis is that we are near some asymptotic limit in neural-net brain space: a sweet spot. Increasing size further would have too many negative drawbacks—such as the speed hit due to the slow maximum signal-transmission speed.
You seriously can’t see that one coming?
I’d bet it’s 5 years away, perhaps? But it only illustrates my point—because by some measures computers are already more powerful than the brain, which makes its wiring all the more impressive.
That seems optimistic to me. A few recent computer strength graphs:
http://www.gokgs.com/graphPage.jsp?user=Zen19
http://www.gokgs.com/graphPage.jsp?user=HcBot
http://www.gokgs.com/graphPage.jsp?user=Manyfaces1
http://www.gokgs.com/graphPage.jsp?user=Zen
http://www.gokgs.com/graphPage.jsp?user=CzechBot
http://www.gokgs.com/graphPage.jsp?user=AyaMC
Come back when you have an algorithm that runs on a 100 Hz computer, has zero starting knowledge of Go, and can beat human players simply by learning the game.
I think this kind of reverence for the efficacy of the human brain is comical
Which is equivalent to saying “I think this kind of reverence for the efficacy of Google is comical”, and saying or implying you can obviously do better.
So yes, when there is a clear reigning champion, to say or imply that it is ‘inefficient’ is nonsensical, and making that claim stick requires something of substance, not just congratulatory back-patting and cryptic references to unrelated posts.
I think this kind of reverence for the efficacy of the human brain is comical
Which is equivalent to saying “I think this kind of reverence for the efficacy of Google is comical”, and saying or implying you can obviously do better.
Uh, wedrifid wasn’t saying that he could do better—just that it is possible to do much better. That is about as true for Google as it is for the human brain.
Uh, wedrifid wasn’t saying that he could do better—just that it is possible to do much better. That is about as true for Google as it is for the human brain.
It is only possible to do better than the brain’s learning algorithm in proportion to the distance between that algorithm and the optimally efficient learning algorithm in computational-complexity space. There are mounting, convergent, independent lines of evidence suggesting (but not yet proving) that the brain’s learning algorithm is in the optimal complexity class, and thus further improvements will just be small constant-factor improvements.
At that point we also have to consider that at the circuit level, the brain is highly optimized for its particular algorithm (direct analog computation, for one).
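To spell out what “in the optimal complexity class” would buy if the claim were true (this is just the standard asymptotic definition, not additional evidence for the claim): the brain’s learning cost could exceed the optimal algorithm’s cost by at most a constant factor for large problem sizes.

```latex
% Being "in the optimal complexity class" would mean, for problem size n:
\[
  \exists\, c \ge 1,\; n_0 \;:\; \forall n \ge n_0,\qquad
  T_{\mathrm{brain}}(n) \;\le\; c \cdot T_{\mathrm{opt}}(n)
\]
% i.e. any remaining improvement is bounded by the constant factor c.
```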
There are mounting, convergent, independent lines of evidence suggesting (but not yet proving) that the brain’s learning algorithm is in the optimal complexity class, and thus further improvements will just be small constant-factor improvements.
This just sounds like nonsense to me. We have lots of evidence of how sub-optimal and screwed-up the brain is—what a terrible kluge it is. It is dreadful at learning. It needs to be told everything three times. It can’t even remember simple things like names and telephone numbers properly. It takes decades before it can solve simple physics problems—despite mountains of sense data, plus the education system. It is simply awful.
A simple computer database has perfect memorization but zero learning ability. Learning is not the memorization of details, but rather the memory of complex abstract structural patterns.
I also find it extremely difficult to take your telephone-number example seriously when we have the oral tradition of the Torah as evidence of vastly higher memory capacity.
But that’s a side issue. We also have the example of savant memory. Evolution has some genetic tweaks that can vastly increase our storage potential for accurate memory, but it clearly comes at the cost of lowered effective IQ.
It’s not that evolution couldn’t easily increase our memory; it’s that accurate memory for details is simply of minor importance (compared to pattern abstraction and IQ).
That something is not efficient doesn’t mean that there is currently something more efficient. And you demand precisely the particular proof that we all know doesn’t exist, which is rude and pointless whatever the case.
That something is not efficient doesn’t mean that there is currently something more efficient
Of course not, but if you read through the related points, there is some mix of parallel lines of evidence to suggest efficiency and even near-optimality of some of the brain’s algorithms, and that is what I spent most of the post discussing.
But yes, my tone was somewhat rude with the rhetorical demand for proof—I should have kept that more polite. But the demand for proof was not the substance of my argument.
But the demand for proof was not the substance of my argument.
Systematic elimination of obvious technical errors renders arguments much healthier, in particular because it allows to diagnose hypocritical arguments not grounded in actual knowledge (even if the conclusion is—it’s possible to rationalize correct statements as easily as incorrect ones).
(English usage: “allows” doesn’t take an infinitive, but a description of the action that is allowed, or the person that is allowed, or phrase combining both. The description of the action is generally a noun, usually a gerund. e.g. ”… in particular because it allows diagnosing hypocritical arguments …”)
You are “allowed to diagnose” and I may “allow you to diagnose” but I would “allow diagnosis” in general, rather than “allow to diagnose”. It is an odd language we have.
In 3 separate lineages—whales, elephants, and humans, the mammalian brain all grew to about the same upper size and then petered out. The likely hypothesis is that we are near some asymptotic limit in neural-net brain space. Increasing size further would have too much of a speed hit.
Could you expand on this, and provide a link, if you have one?
Tim fetched some size data below, but you also need to compare cortical surface area—and the most accurate comparison should use neuron and synapse counts in the cortex. The human brain had a much stronger size constraint that would tend to make neurons smaller (to the extent possible), and shrink-optimize everything—due to our smaller body size.
The larger a brain, the more time it takes to coordinate circuit trips around the brain. Humans (and I presume other mammals) can make some decisions in roughly 100-200 ms—which is just a dozen or so neuron firings deep. That severely limits the circuit path length. Neuron signals do not move anywhere near the speed of light.
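A rough sense of the numbers behind that (the conduction velocities and the per-stage delay are generic textbook ranges, not figures from this thread):

```python
# Why a 100-200 ms decision allows only a short serial chain of neurons.
# Per-stage delay (integration + synapse) is on the order of several ms.
decision_window_s = 0.15          # ~150 ms decision
per_stage_delay_s = 0.010         # ~10 ms per neuron stage
serial_stages = decision_window_s / per_stage_delay_s
print(f"~{serial_stages:.0f} serial neuron stages fit in {decision_window_s*1000:.0f} ms")

# Signal travel time across a ~15 cm brain at typical axon conduction speeds:
for velocity_m_s in (1, 10, 100):          # unmyelinated ... fast myelinated
    print(f"at {velocity_m_s:>3} m/s, crossing 0.15 m takes "
          f"{0.15 / velocity_m_s * 1000:.1f} ms")
```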
It estimates whales and elephants at 200 billion neurons, humans at around 100 billion. There is a large range of variability in human brain sizes, and the upper end of the human scale may be 200 billion?
Also keep in mind that the core cortical circuit that seems to do all the magic was invented in rats or their precursors and has been preserved in all these lineages with only minor variations.
Thx. But I still don’t see why you said “asymptotic limit” and “grew … then petered out”. There is no reason why H. sap. could not grow to the size of a gorilla over the next few million years, nor any reason why the bottlenose dolphin could not grow to the size of an orca. With corresponding brain size increases in both cases. I don’t see that our brain size growth has petered out.
The fact that mammal brains reached similar upper neuron counts (100-200 billion neurons) in three separate, unrelated lineages with widely varying body sizes is, to me, a strong hint of an asymptotic limit.
Also, Neanderthals had significantly larger brains and perhaps twice as many neurons (just a guess based on size), and yet they were out-competed by smaller-brained Homo sapiens.
The bottlenose could grow to the size of the orca, but it’s not at all clear that its brain would grow beyond a few hundred billion neurons.
The biggest whale brains are several times heavier than human or elephant brains, but the extra mass is glial cells, not neurons.
And if you look at how the brain actually works, a size limit makes perfect sense due to wiring constraints and signal propagation delays mentioned earlier.
Surely there is every reason to expect technological civilization to collapse before any of those things come to fruition.
Projections of the future always disappoint and always surprise. During my childhood in the 1950s, I fully expected to see rocket belts and interplanetary travel within my lifetime. I didn’t even imagine personal computers and laser surgery as an alternative to eyeglasses.
Fifty years before that, they imagined that folks today would have light-weight muscle-powered aircraft in their garages. Jules Verne predicted atomic submarines and time machines.
So, based on how miserable our faculties of prediction really are, the reasonable thing to do would be to assign finite probabilities to both cyborg humans and gorilla-sized humans. The future could go either way.
Surely there is every reason to expect technological civilization to collapse before any of those things come to fruition.
Hmm. We came pretty close with nuclear weapons and two super-powers, and yet we are still here. The dangerous toys are going to get even more dangerous this century, but I don’t see the rationale for assigning > 50% to Doom.
In regard to your expectations: you are still alive, and we do have jetpacks today, we have traveled to numerous planets, we do have muscle-powered gliders at least, and atomic submarines.
The only mistaken predictions were that humans were useful to send to other planets (they are not), and that time travel is tractable.
And ultimately just because some people make inaccurate predictions does not somehow invalidate prediction itself.
Well, of course I didn’t mean to suggest p=0. I don’t think the collapse of technological civilization is very likely, though—and would assign permanent setbacks a < 1% chance of happening.
In whale brain at least, it appears the larger size is more related to extra glial cells and other factors:
My pet theory on this is that glial cells are known to stimulate synapse growth and to support synapse function (e.g. by cleaning up after firing)—and so the enormous quantity of glial cells in whale brains (nine times as many glial cells in the sperm whale as in the human) and their huge neurons both point to an astronomical number of synapses.
The larger a brain, the more time it takes to coordinate circuit trips around the brain.
There are problems like this that arise with large synchronous systems which lack reliable clocks—but one of the good things about machine intelligences of significant size will be that reliable clocks will be available—and they probably won’t require global synchrony to operate in the first place.
I do remember reading that the brain does appear to have some highly regular pulse-like synchronizations in the PFC circuit, at around 33 Hz and 3 Hz if I remember correctly.
But that is really beside the point entirely.
The larger a system, the longer it takes information to move across it. A planet-wide intelligence would not be able to think as fast as a small, laptop-sized intelligence; this is just a consequence of the speed of light.
And it’s actually much, much worse than that when you factor in bandwidth considerations.
You think it couldn’t sort things as fast? Search through a specified data set as quickly? Factor numbers as fast? If you think any of those things, I think you need to explain further. If you agree that such tasks need not take a hit, which tasks are we talking about?
Actually, many of the examples you list would have huge problems scaling to a planet-wide intelligence, but that’s a side issue.
The region of algorithm space in which practical universal intelligence lies requires high connectivity. It is not unique in this—many algorithms require it. Ultimately this can probably be traced back to the 3D structure of the universe itself.
Going from on-chip CPU access, to off-chip memory access, to disk access, to remote internet access is a series of massive exponential drops in bandwidth and corresponding increases in latency, which severely limit the scalability of all big, interesting distributed algorithms.
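For a sense of scale, here are the usual rule-of-thumb access latencies (rounded, order-of-magnitude figures only, not measurements of any particular system):

```python
# Rule-of-thumb access latencies, in nanoseconds (orders of magnitude only).
latencies_ns = {
    "L1 cache reference":                       1,
    "main memory reference":                  100,
    "SSD random read":                    100_000,
    "round trip within a datacenter":     500_000,
    "disk seek":                       10_000_000,
    "packet round trip across the internet": 150_000_000,
}
base = latencies_ns["L1 cache reference"]
for name, ns in latencies_ns.items():
    print(f"{name:40s} {ns:>12,} ns  ({ns / base:>12,.0f}x L1)")
```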
Also, somebody should probably go ahead and state what is clear from the voting patterns on posts like this, in addition to being implicit in e.g. the About Less Wrong page: this is not really the place for people to present their ideas on Friendly AI. The topic of LW is human rationality, not artificial intelligence or futurism per se. This is the successor to Overcoming Bias, not the SL4 mailing list. It’s true that many of us have an interest in AI, just like many of us have an interest in mathematics or physics; and it’s even true that a few of us acquired our interest in Singularity-related issues via our interest in rationality—so there’s nothing inappropriate about these things coming up in discussion here. Nevertheless, the fact remains that posts like this really aren’t, strictly speaking, on-topic for this blog. They should be presented on other forums (presumably with plenty of links to LW for the needed rationality background).
Nevertheless, the fact remains that posts like this really aren’t, strictly speaking, on-topic for this blog.
I realize that it says “a community blog devoted to refining the art of human rationality” at the top of every page here, but it often seems that people here are interested in “a community blog for topics which people who are devoted to refining the art of human rationality are interested in,” which is not really in conflict at all with (what I presume is) LW’s mission of fostering the growth of a rationality community.
The alternative is that LWers who want to discuss “off-topic” issues have to find (and most likely create) a new medium for conversation, which would only serve to splinter the community.
(A good solution is maybe dividing LW into two sub-sites: Less Wrong, for the purist posts on rationality, and Less Less Wrong, for casual (“off-topic”) discussion of rationality.)
While there are benefits to that sort of aggressive division, there are also costs. Many conversations move smoothly between many different topics, and either they stay on one side (vitiating the entire reason for a split), or people yell and scream to get them moved, being a huge pain in the ass and making it much harder to have these conversations.
I realize that it says “a community blog devoted to refining the art of human rationality” at the top of every page here, but it often seems that people here are interested in “a community blog for topics which people who are devoted to refining the art of human rationality are interested in,” which is not really in conflict at all with (what I presume is) LW’s mission of fostering the growth of a rationality community.
I’ve seen exactly this pattern before at SF conventions. At the last Eastercon (the largest annual British SF convention) there was some criticism that the programme contained too many items that had nothing to do with SF, however broadly defined. Instead, they were items of interest to (some of) the sort of people who go to the Eastercon.
A certain amount of that sort of thing is ok, but if there’s too much it loses the focus, the reason for the conversational venue to exist. Given that there are already thriving forums such as agi and sl4, discussing their topics here is out of place unless there is some specific rationality relevance. As a rule of thumb, I suggest that off-topic discussions be confined to the Open Threads.
If there’s the demand, LessLessWrong might be useful. Cf. rec.arts.sf.fandom, the newsgroup for discussing anything of interest to the sort of people who participate in rec.arts.sf.fandom, the other rec.arts.sf.* newsgroups being for specific SF-related subjects.
(A good solution is maybe dividing LW into two sub-sites: Less Wrong, for the purist posts on rationality, and Less Less Wrong, for casual (“off-topic”) discussion of rationality.)
Better yet, we could call them Overcoming Bias and Less Wrong, respectively.
If you stick around, you will. I have a −15 top-level post in my criminal record, but I still went on to make a constructive contribution, judging by my current karma. :-)
Also, somebody should probably go ahead and state what is clear from the voting patterns on posts like this, in addition to being implicit in e.g. the About Less Wrong page: this is not really the place for people to present their ideas on Friendly AI. The topic of LW is human rationality, not artificial intelligence or futurism per se.
What about the strategy of “refining the art of human rationality” by preprocessing our sensory inputs by intelligent machines and postprocessing our motor outputs by intelligent machines? Or doesn’t that count as “refining”?
Here’s another good reason why it’s best to try out your first post topic on the Open Thread. You’ve been around here for less than ten days, and that’s not long enough to know what’s been discussed already, and what ideas have been established to have fatal flaws.
You’re being downvoted because, although you haven’t come across the relevant discussions yet, your idea falls in the category of “naive security measures that fail spectacularly against smarter-than-human general AI”. Any time you have the idea of keeping something smarter than you boxed up, let alone trying to dupe a smarter-than-human general intelligence, it’s probably reasonable to ask whether a group of ten-year-old children could pull off the equivalent ruse on a brilliant adult social manipulator.
Again, it’s a pretty brutal karma hit you’re taking for something that could have been fruitfully discussed on the Open Thread, so I think I’ll need to make this danger much more prominent on the welcome page.
I’m not too concerned about the karma—more the lack of interesting replies and general unjustified holier-than-though attitude. This idea is different than “that alien message” and I didn’t find a discussion of this on LW (not that it doesn’t exist—I just didn’t find it).
This is not my first post.
I posted this after I brought up the idea in a comment which at least one person found interesting.
I have spent significant time reading LW and associated writings before I ever created an account.
I’ve certainly read the AI-in-a-box posts, and the posts theorizing about the nature of smarter-than-human-intelligence. I also previously read “that alien message”, and since this is similar I should have linked to it.
I have a knowledge background that leads to somewhat different conclusions about A. the nature of intelligence itself, B. what ‘smarter’ even means, etc etc
Different backgrounds, different assumptions, so I listed my background and starting assumptions as they somewhat differ than the LW norm
Back to 3:
Remember, the whole plot device of “that alien message” revolved around a large and obvious grand reveal by the humans. If information can only flow into the sim world once (during construction), and then ever after can only flow out of the sim world, that plot device doesn’t work.
Trying to keep an AI boxed up where the AI knows that you exist is a fundamentally different problem than a box where the AI doesn’t even know you exist, doesn’t even know it is in a box, and may provably not even have enough information to know for certain whether it is in a box.
For example, I think the simulation argument holds water (we are probably in a sim), but I don’t believe there is enough information in our universe for us to discover much of anything about the nature of a hypothetical outside universe.
This of course doesn’t prove that my weak or strong Mind Prison conjectures are correct, but it at least reduces the problem down to “can we build a universe sim as good as this?”
I wish I could vote up this comment more than once.
Thanks. :)
My apologies on assuming this was your first post, etc. (I still really needed to add that bit to the Welcome post, though.)
In short, faking atheism requires a very simple world-seed (anything more complicated screams “designer of a certain level” once they start thinking about it). I very much doubt we could find such a seed which could be run from start to civilization in a feasible number of computations. (If we cheated and ran forward approximations to the simulation, for instance, they could find the traces of the difference.)
Similarly, faking Omegahood requires a very finely optimized programmed world, because one thing Omega is less likely to do than a dumb creator is program sloppily.
We have the seed—its called physics, and we certainly don’t need to run it from start to civilization!
On the one hand I was discussing sci-fi scenarios that have an intrinsic explanation for a small human populations (such as a sleeper ship colony encountering a new system).
And on the other hand you can do big partial simulations of our world, and if you don’t have enough AI’s to play all the humans you could use simpler simulacra to fill in.
Eventually with enough Moore’s Law you could run a large sized world on its own, and run it considerably faster than real time. But you still wouldn’t need to start that long ago—maybe only a few generations.
Could != would. You grossly underestimate how impossibly difficult this would be for them.
Again—how do you know you are not in a sim?
You misunderstand me. What I’m confident about is that I’m not in a sim written by agents who are dumber than me.
How do you measure that intelligence?
What i’m trying to show is a set of techniques where a civilization could spawn simulated sub-civilizations such that the total effective intelligence capacity is mainly in the simulations. That doesn’t have anything to do with the maximum intelligence of individuals in the sim.
Intelligence is not magic. It has strict computational limits.
A small population of guards can control a much larger population of prisoners. The same principle applies here. Its all about leverage. And creating an entire sim universe is a massive, massive lever of control. Ultimate control.
Not even agents with really fast computers?
You’re right, of course. I’m not in a sim written by agents dumber than me in a world where computation has noticeable costs (negentropy, etc).
Well the simulation can make the rules of the universe extremely simple, not close approximations to something like our universe where they could see the approximations and catch on.
Or you could use close monitoring, possibly by another less dangerous, less powerful AI trained to detect bugs, bug abuse, and AIs that are catching on. Humans would also monitor the sim. The most important thing is that the AI are mislead as much as possible and given little or no input that could give them a picture of the real world and their actual existence.
And lastly, they should be kept dumb. A large number of not to bright AI is by far less dangerous, easier to monitor, and faster to simulate then a massive singular AI. The large group is also a closer aproximation to humanity, which I believe was the original intent of this simulation.
So actually, even today with computer graphics we have the tech to trace light and approximate all of the important physical interactions down to the level where a human observer in the sim could not tell the difference. Its too computationally expensive today, but it is only say a decade or two away, perhaps less.
You can’t see the approximations because you just don’t have enough sensing resolution in your eyes, and because in this case these beings will have visual systems that will have grown up inside the Matrix.
It will be much easier to fool them. Its actually not even necessary to strictly approximate our reality—if the AI visual systems has grown up completely in the Matrix, they will be tuned to the statistical patterns of the Matrix. Not our reality.
I don’t see how they need to be kept dumb. Intelligent humans (for the most part) do not suddenly go around thinking that they are in a sim and trying to break free. Its not really a function of intelligence.
If the world is designed to be as realistic and more importantly, consistent as our universe, AI’s will not have enough information to speculate on our universe. It would be pointless—like arguing about god.
Maybe not on your laptop, but I think we do have the resources today to pull it off, esspecially considering the entities in the simulation do not see time pass in the real world. Whether the simulation pauses for a day to compute some massive event in the simulated world or it skip through a century in seconds because the entities in the simulation weren’t doing much.
And this is why I keep bring up using AI to create/monitor the simulation in the first place. A massive project like this undertaken by human programmers is bound to contain dangerous bugs. More importantly, humans won’t be able to optimize the program very well. Methods of improving program performance we have today like hashing, caching, pipelining, etc, are not optimal by any means. You can safely let an AI in a box optimize the program without it exploding or anything.
“Dumb” as at human level or lower as opposed to a massive singular super entity. It is much easier to monitor the thoughts of a bunch of AIs then a single one. Arguably it would still be impossible, but at the very least you know they can’t do much on their own and they would have to communicate with one another, communication you can monitor. Multiple entities are also very similiar and redundant, saving you alot of computation.
So you can make them all as intelligent as einstein, but not as intelligent as skynet.
This is an interesting point, time flow would be quite nonlinear, but the simulation’s utility is closely correlated with its speed. In fact, if we can’t run it at least at real-time average speed, its not all that useful.
You bring me round to an interesting idea though, is that in the simulated world the distribution of intelligence could be much tighter or shifted compared to our world.
I expect it will be very interesting and highly controversial in our world when we say reverse engineer the brain and may find a large variation in the computational cost of an AI mind-sim of equivalent capability. A side effect of reverse engineering the brain will be a much more exact and precise understanding of IQ-type correlates, for example.
This is surely important, but it defeats the whole point if the monitor AI approaches the complexity of the sim AI. You need a multiplier effect.
And just as a small number of guards can control a huge prison population in a well designed prison, the same principle should apply here—a smaller intelligence (that controls the sim directly) could indirectly control a much larger total sim intelligence.
A massive singular super entity as sometimes implied on this site I find not only to be improbable, but to actually be a physically impossible idea (at least not until you get to black hole computer level of technology).
I think you underestimate how (relatively) easy the monitoring aspect would be (compared to other aspects). Combine dumb-AI systems to automatically turn internal monologue into text (or audio if you wanted), put it into future google type search and indexing algorithms—and you have the entire sim-worlds thoughts at your fingertips. Using this kind of lever, one human-level intelligent operator could monitor a vast number of other intelligences.
Heck, the CIA is already trying to do a simpler version of this today.
A skynet type intelligence is a fiction anyway, and I think if you really look at the limits of intelligence and AGI, a bunch of accelerated high IQ human-ish brains are much closer to those limits than most here would give creedence to.
What—no Jupiter brains?!? Why not? Do you need a data center tour?
I like the data center tour :) - I’ve actually used that in some of my posts elsewhere.
And no, I think Jupiter Brains are ruled out by physics.
The locality of physics—the speed of light, really limits the size of effective computational systems. You want them to be as small as possible.
Given the choice between a planet sized computer and one that was 10^10 smaller, the latter would probably be a better option.
The maximum bits and thus storage is proportional to the mass, but the maximum efficiency is inversely proportional to radius. Larger systems lose efficiency in transmission, have trouble radiating heat, and waste vast amount of time because of speed of light delays.
An an interesting side note, in three very separate lineages (human, elephant, cetacean), mammalian brains all grew to around the same size and then stopped. Most likely because of diminishing returns. Human brains are expensive for our body size, but whales have similar sized brains and it would be very cheap for them to make them bigger—but they don’t. Its a scaling issue—any bigger and the speed loss doesn’t justify the extra memory.
There are similar scaling issues with body sizes. Dinosaurs and prehistoric large mammals represent an upper limit—mass increases with volume, but shearing stress strengths increase only with surface area—so eventually the body becomes too heavy for any reasonable bones to support.
Similar 3d/2d scaling issues limited the maximum size of tanks, and they also apply to computers (and brains).
So:.why think memory and computation capacity isn’t important? The data centre that will be needed to immerse 7 billion humans in VR is going to be huge—and why stop there?
The 22 milliseconds it takes light to get from one side of the Earth to the other is tiny—light speed delays are a relatively minor issue for large brains.
For heat, ideally, you use reversible computing, digitise the heat and then pipe it out cleanly. Heat is a problem for large brains—but surely not a show-stopping one.
The demand for extra storage seems substantial. Do you see any books or CDs when you look around? The human brain isn’t big enough to handly the demand, and so it outsourcing its storage and computing needs.
So memory is important, but it scales with the mass and that usually scales with volume, so there is a tradeoff. And computational capacity is actually not directly related to size, its more related to energy. But of course you can only pack so much energy into a small region before it melts.
Yeah—I think the size argument is more against a single big global brain. But sure data centers with huge numbers of AI’s eventually—makes sense.
Hmm 22 milliseconds? Light travels a little slower through fiber and there are always delays. But regardless the bigger problem is you are assuming slow human thoughtrate − 100hz. If you want to think at the limits of silicon and get thousands or millions of times accelerated, then suddenly the subjective speed of light becomes very slow indeed.
On the one hand you have extremely limited AI that can’t communicate with each other. They would be extremely redundant and wast alot of resources because each will have to do the exact same process and discover the exact same things on their own.
On the other hand you have a massive singular AI individual made up of thousands of computing systems, each of which is devoted to storing seperate information and doing a seperate task. Basically it’s a human like brain distributed over all available resources. This will enivitably fail as well; operations done on one side of the system could be light years away (we don’t know how big the AI will get or what the constrains of it’s situation will be, but AGI has to adapt to every possible situation) from where the data is needed.
The best is a combination of the two, as much communication through the network as possible, but specializing areas of resources for different purposes. This could lead to skynet like intelligences, or it could lead to a very individualistic AI society where the AI isn’t a single entity but a massive variety of individuals in different states working together. It probably wouldn’t be much like human civilization though. Human society evolved to fit a variety of restrictions that aren’t present in AI. That means it could adapt a very different structure, stuff like morals (as we know them anyways) may not be necessary.
I can confirm the part about the credence. I think this kind of reverence for the efficacy of the human brain is comical.
Human technological civilisation exploded roughly speaking in an evolutionary heartbeat from the time it became capable of doing so. The chance that this capability opened up at just the moment when human intelligence was at even the maximum that DNA encoded ape descended brains could reach is negligible.
EDIT: Improved politeness.
The acknowledgement and analysis of the efficacy of the single practical example of general intelligence that we do have does not imply reverence. Efficacy is a relative term. Do we have another example of a universal intelligence to compare to?
Perhaps you mention efficacy in comparison to a hypothetical optimal universal intelligence. We have only AIXI and its variants which are only optimal in terms of maximum intelligence at the limits, but are grossly inferior in terms of practicality and computational efficacy.
There is a route to analyzing the brain’s efficacy: it starts with analyzing it as a computational system and comparing it’s performance to best known algorithms.
The problem is the brain has a circuit with ~ 10^14-10^15 circuit-elements—about the same amount of storage, and it only cycles at around 100 hz. That is 10^16 to 10^17 net switches/second.
A current desktop GPU has > 10^9 circuit elements and a speed over 10^9 cycles per second. That is > 10^18 net switches/second.
And yet we have no algorithm, running even on a supercomputer, which can beat the best humans in Go. Let alone read a book, pilot a robotic body at human level, write a novel, come up with a funny joke, patent an idea, or even manage a mcdonald’s.
For one particular example, take the case of the game Go and compare to potential parallel algorithms that could run on a 100hz computer, that have zero innate starting knowledge of go, and can beat human players by simply learning about go.
Go is one example, but if you go from checkers to chess to go and keep going in that direction, you get into the large exponential search spaces where the brain’s learning algorithms appear to be especially efficient.
Your assumption seems to be that civilization and intelligence are somehow coded into our brains.
According to the best current theory I have found, our brains are basically just upsized ape brains with one new, extremely important trick: we became singing apes (a few other species sing), but then got a lucky break when the vocal control circuit for singing connected to a general simulation-thought circuit (the task-negative and task-positive paths), thus allowing us to associate song patterns with visual/audio objects.
It’s also important to point out that some songbirds appear to be just on the cusp of this capability, with much smaller brains. It’s not really a size issue.
Technology and all that is a result of language, memetics, culture. It’s not some miracle of our brains, which appear to be just large ape brains with perhaps one new critical trick.
Some whale species have much larger brains and in some sense probably have a higher intrinsic genetic IQ. But this doesn’t really matter, because intelligence depends on memetic knowledge.
If Einstein had been a feral child raised by wolves, he would have had the exact same brain but would have rated far below normal on our scale of intelligence.
Genetics can limit intelligence, but it doesn’t provide it.
In 3 separate lineages—whales, elephants, and humans, the mammalian brain all grew to about the same upper capacity and then petered out (100 to 200 billion neurons). The likely hypothesis is that we are near some asymptotic limit in neural-net brain space: a sweet spot. Increasing size further would have too many negative drawbacks—such as the speed hit due to the slow maximum signal transmission.
You seriously can’t see that one coming?
I’d bet it’s perhaps 5 years away? But it only illustrates my point: by some measures computers are already more powerful than the brain, which makes the brain’s wiring all the more impressive.
That seems optimistic to me. A few recent computer strength graphs:
http://www.gokgs.com/graphPage.jsp?user=Zen19
http://www.gokgs.com/graphPage.jsp?user=HcBot
http://www.gokgs.com/graphPage.jsp?user=Manyfaces1
http://www.gokgs.com/graphPage.jsp?user=Zen
http://www.gokgs.com/graphPage.jsp?user=CzechBot
http://www.gokgs.com/graphPage.jsp?user=AyaMC
Demand for particular proof.
The original comment was:
Which is equivalent to saying “I think this kind of reverence for the efficacy of Google is comical”, and saying or implying you can obviously do better.
So yes, when there is a clear reigning champion, to say or imply it is ‘inefficient’ is nonsensical, and making that claim stick requires something of substance, not just congratulatory back-patting and cryptic references to unrelated posts.
Uh, wedrifid wasn’t saying that he could do better—just that it is possible to do much better. That is about as true for Google as it is for the human brain.
It is only possible to do better than the brain’s learning algorithm in proportion to the distance between that algorithm and the optimally efficient learning algorithm in computational complexity space. There are mounting, convergent, independent lines of evidence suggesting (but not yet proving) that the brain’s learning algorithm is in the optimal complexity class, and thus further improvements will just be small constant-factor improvements.
At that point we also have to consider that, at the circuit level, the brain is highly optimized for its particular algorithm (direct analog computation, for one).
This just sounds like nonsense to me. We have lots of evidence of how sub-optimal and screwed-up the brain is—what a terrible kluge it is. It is dreadful at learning. It needs to be told everything three times. It can’t even remember simple things like names and telephone numbers properly. It takes decades before it can solve simple physics problems—despite mountains of sense data, plus the education system. It is simply awful.
learning != memorization
A simple computer database has perfect memorization but zero learning ability. Learning is not the memorization of details, but rather the memory of complex abstract structural patterns.
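A minimal illustration of the distinction in Python: a lookup table has perfect recall of exactly what it was shown and nothing more, whereas even a one-parameter fitted model captures the abstract pattern and can extrapolate to unseen inputs. The toy data and the fit are made up purely for illustration:

```python
# Memorization vs. learning: exact recall vs. capturing an abstract pattern.

train = {1: 2.0, 2: 4.1, 3: 5.9, 4: 8.2}   # toy data, roughly y = 2x

# "Perfect memory, zero learning": exact recall, no generalization.
memorizer = dict(train)
print(memorizer.get(5))          # None -- it never saw x = 5

# "Learning": fit the abstract pattern (a slope) and extrapolate from it.
slope = sum(x * y for x, y in train.items()) / sum(x * x for x in train)
print(round(slope * 5, 2))       # ~10 -- a prediction for the unseen x = 5
```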
I also find it extremely difficult to take your telephone number example seriously, when we have the oral tradition of the Torah as evidence of vastly higher memory capacity.
But that’s a side issue. We also have the example of savant memory: evolution has some genetic tweaks that can vastly increase our storage potential for accurate memory, but they clearly come at the cost of lowered effective IQ.
It’s not that evolution couldn’t easily increase our memory; it’s that accurate memory for details is simply of minor importance (compared to pattern abstraction and IQ).
That something is not efficient doesn’t mean that there is currently something more efficient. And you are demanding particular proof that we all know doesn’t exist, which is rude and pointless in any case.
Of course not, but if you read through the related points, there is a mix of parallel lines of evidence suggesting efficiency and even near-optimality of some of the brain’s algorithms, and that is what I spent most of the post discussing.
But yes, my tone was somewhat rude with the rhetorical demand for proof—I should have kept that more polite. But the demand for proof was not the substance of my argument.
Systematic elimination of obvious technical errors renders arguments much healthier, in particular because it allows to diagnose hypocritical arguments not grounded in actual knowledge (even if the conclusion is—it’s possible to rationalize correct statements as easily as incorrect ones).
See also this post.
point taken
(English usage: “allows” doesn’t take an infinitive, but a description of the action that is allowed, or the person that is allowed, or phrase combining both. The description of the action is generally a noun, usually a gerund. e.g. ”… in particular because it allows diagnosing hypocritical arguments …”)
Thanks, I’m trying to fight this overuse of infinitive. (Although it still doesn’t sound wrong in this case...)
You are “allowed to diagnose” and I may “allow you to diagnose” but I would “allow diagnosis” in general, rather than “allow to diagnose”. It is an odd language we have.
Yes, “allowed to” is very different than “allow”.
Demand what? A proof that the brain runs at ~100 Hz? This is well known; see the Wikipedia article on neurons.
Vladimir_Nesov is referring to this article.
I see. Unrelated argument from erroneous authority.
Could you expand on this, and provide a link, if you have one?
Tim fetched some size data below, but you also need to compare cortical surface area, and the most accurate comparison would use neuron and synapse counts in the cortex. The human brain had a much stronger size constraint (due to our smaller body size) that would tend to make neurons smaller, to the extent possible, and shrink-optimize everything.
The larger a brain, the more time it takes to complete circuit trips around the brain. Humans (and I presume other mammals) can make some decisions in as little as 100-200 ms, which is just a dozen or so neuron firings. That severely limits the circuit path length. Neuron signals do not move anywhere near the speed of light.
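A rough order-of-magnitude check, using typical textbook ranges for axon conduction velocity (approximate figures, not measurements of any particular circuit):

```python
# Rough numbers for signal propagation across a human brain.

brain_diameter_m = 0.15          # ~15 cm across a human brain
axon_speed_m_s = (0.5, 100.0)    # unmyelinated ~0.5 m/s, fast myelinated ~100 m/s

slow_ms = brain_diameter_m / axon_speed_m_s[0] * 1000   # ~300 ms per crossing
fast_ms = brain_diameter_m / axon_speed_m_s[1] * 1000   # ~1.5 ms per crossing

decision_ms = 150                # a fast decision takes ~100-200 ms
firings = decision_ms / 10       # ~10 ms per firing at ~100 Hz

print(f"one crossing: {fast_ms:.1f} ms (fast axons) to {slow_ms:.0f} ms (slow axons)")
print(f"a {decision_ms} ms decision allows only ~{firings:.0f} sequential firings")
```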
Wikipedia has a page comparing brain neuron counts
It estimates whales and elephants at 200 billion neurons, and humans at around 100 billion. There is a large range of variability in human brain sizes, and the upper end of the human scale may be 200 billion(?).
this page has some random facts
Of interest: Average number of neurons in the brain (human) = 100 billion; in the cerebral cortex = 10 billion
Total surface area of the cerebral cortex(human) = 2,500 cm2 (2.5 ft2; A. Peters, and E.G. Jones, Cerebral Cortex, 1984)
Total surface area of the cerebral cortex (cat) = 83 cm2
Total surface area of the cerebral cortex (African elephant) = 6,300 cm2
Total surface area of the cerebral cortex (Bottlenosed dolphin) = 3,745 cm2 (S.H. Ridgway, The Cetacean Central Nervous System, p. 221)
Total surface area of the cerebral cortex (pilot whale) = 5,800 cm2
Total surface area of the cerebral cortex (false killer whale) = 7,400 cm2
In whale brains at least, it appears the larger size is due more to extra glial cells and other factors:
http://www.scientificamerican.com/blog/60-second-science/post.cfm?id=are-whales-smarter-than-we-are
Also keep in mind that the core cortical circuit that seems to do all the magic was invented in rats or their precursors and has been preserved in all these lineages with only minor variations.
Thx. But I still don’t see why you said “asymptotic limit” and “grew … then petered out”. There is no reason why H. sap. could not grow to the size of a gorilla over the next few million years, nor any reason why the bottlenose dolphin could not grow to the size of an orca. With corresponding brain size increases in both cases. I don’t see that our brain size growth has petered out.
The fact that mammal brains reached similar upper neuron counts (100-200 billion neurons) in three separate, unrelated lineages with widely varying body sizes is, to me, a strong hint of an asymptotic limit.
Also, Neanderthals had significantly larger brains and perhaps twice as many neurons (just a guess based on size), and yet they were out-competed by smaller-brained Homo sapiens.
The bottlenose could grow to the size of the orca, but it’s not at all clear that its brain would grow beyond a few hundred billion neurons.
The biggest whale brains are several times heavier than human or elephant brains, but the extra mass is glial cells, not neurons.
And if you look at how the brain actually works, a size limit makes perfect sense due to wiring constraints and signal propagation delays mentioned earlier.
Surely there is every reason—machine intelligence, nanotechnology, and the engineered future will mean that humans will be history.
Surely there is every reason to expect technological civilization to collapse before any of those things come to fruition.
Projections of the future always disappoint and always surprise. During my childhood in the 1950s, I fully expected to see rocket belts and interplanetary travel within my lifetime. I didn’t even imagine personal computers and laser surgery as an alternative to eyeglasses.
Fifty years before that, they imagined that folks today would have light-weight muscle-powered aircraft in their garages. Jules Verne predicted atomic submarines and time machines.
So, based on how miserable our faculties of prediction really are, the reasonable thing to do would be to assign nonzero probabilities to both cyborg humans and gorilla-sized humans. The future could go either way.
Hmm. We came pretty close with nuclear weapons and two super-powers, and yet we are still here. The dangerous toys are going to get even more dangerous this century, but I don’t see the rationale for assigning > 50% to Doom.
In regard to your expectations: you are still alive, and we do have jetpacks today, we have traveled to numerous planets, we do have muscle-powered gliders at least, and atomic submarines.
The only mistaken predictions were that humans were useful to send to other planets (they are not), and that time travel is tractable.
And ultimately just because some people make inaccurate predictions does not somehow invalidate prediction itself.
Well, of course I didn’t mean to suggest p=0. I don’t think the collapse of technological civilization is very likely, though—and would assign permanent setbacks a < 1% chance of happening.
My pet theory on this is that glial cells are known to stimulate synapse growth and to support synapse function (e.g. by cleaning up after firing), and so the enormous quantity of glial cells in whale brains (the sperm whale has 9 times as many glial cells as the human) and their huge neurons both point to an astronomical number of synapses.
“Glia Cells Help Neurons Build Synapses”
http://www.scientificamerican.com/article.cfm?id=glia-cells-help-neurons-b
Evidence from actual synapse counts in dolphin brains bears on this issue too.
Problems like this arise with large synchronous systems that lack reliable clocks, but one of the good things about machine intelligences of significant size is that reliable clocks will be available, and they probably won’t require global synchrony to operate in the first place.
I do remember reading that the brain does appear to have some highly regular, pulse-like synchronizations in the PFC circuit, at around 33 Hz and 3 Hz if I remember correctly.
But that is really beside the point entirely.
The larger a system, the longer it takes information to move across it. A planet-wide intelligence would not be able to think as fast as a small, laptop-sized intelligence; this is just a consequence of the speed of light.
And it’s actually much, much worse than that when you factor in bandwidth considerations.
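To put rough numbers on the speed-of-light point (the physical spans are assumptions chosen only for scale):

```python
# Pure speed-of-light latency for one signal crossing, ignoring routing,
# switching, and bandwidth limits (which only make things worse).

C = 3.0e8                        # m/s

earth_diameter_m = 1.27e7        # ~12,700 km
laptop_span_m = 0.3              # ~30 cm

earth_crossing_ms = earth_diameter_m / C * 1e3    # ~42 ms
laptop_crossing_ns = laptop_span_m / C * 1e9      # ~1 ns

print(f"planet-wide mind: >= {earth_crossing_ms:.0f} ms per one-way crossing")
print(f"laptop-sized mind: ~{laptop_crossing_ns:.0f} ns per one-way crossing")
print(f"ratio: ~{earth_crossing_ms * 1e6 / laptop_crossing_ns:.0e}x slower")
```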
I don’t really know what you mean.
You think it couldn’t sort things as fast? Search through a specified data set as quickly? Factor numbers as fast? If you think any of those things, I think you need to explain further. If you agree that such tasks need not take a hit, which tasks are we talking about?
Actually, many of the examples you list would have huge problems scaling to a planet-wide intelligence, but that’s a side issue.
The region of algorithm space in which practical universal intelligence lies requires high connectivity. It is not unique in this; many algorithms require it. Ultimately this can probably be traced back to the 3D structure of the universe itself.
Going from on-chip CPU access to off-chip memory access to disk access to remote internet access is a series of massive, exponential drops in bandwidth and corresponding increases in latency, which severely limit the scalability of all big, interesting distributed algorithms.
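For a sense of scale, here are commonly quoted, order-of-magnitude access latencies at each tier (rounded figures of the "latency numbers every programmer should know" sort; exact values vary by hardware and network):

```python
# Rough, commonly quoted access latencies (orders of magnitude only),
# illustrating the exponential fall-off as data moves farther from the core.

latency_ns = {
    "L1 cache":            1,
    "main memory (RAM)":   100,
    "SSD read":            100_000,
    "spinning disk seek":  10_000_000,
    "cross-internet hop":  100_000_000,
}

base = latency_ns["L1 cache"]
for tier, ns in latency_ns.items():
    print(f"{tier:>20}: {ns:>12,} ns  (~{ns // base:,}x slower than L1)")
```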
Sperm whale brain is about 8 kg
Elephant brain is about 5 kg
Human brain is about 1.4 kg
Brain size across all animals is pretty variable.
Also, somebody should probably go ahead and state what is clear from the voting patterns on posts like this, in addition to being implicit in e.g. the About Less Wrong page: this is not really the place for people to present their ideas on Friendly AI. The topic of LW is human rationality, not artificial intelligence or futurism per se. This is the successor to Overcoming Bias, not the SL4 mailing list. It’s true that many of us have an interest in AI, just like many of us have an interest in mathematics or physics; and it’s even true that a few of us acquired our interest in Singularity-related issues via our interest in rationality—so there’s nothing inappropriate about these things coming up in discussion here. Nevertheless, the fact remains that posts like this really aren’t, strictly speaking, on-topic for this blog. They should be presented on other forums (presumably with plenty of links to LW for the needed rationality background).
I realize that it says “a community blog devoted to refining the art of human rationality” at the top of every page here, but it often seems that people here are interested in “a community blog for topics which people who are devoted to refining the art of human rationality are interested in,” which is not really in conflict at all with (what I presume is) LW’s mission of fostering the growth of a rationality community.
The alternative is that LWers who want to discuss “off-topic” issues have to find (and most likely create) a new medium for conversation, which would only serve to splinter the community.
(A good solution is maybe dividing LW into two sub-sites: Less Wrong, for the purist posts on rationality, and Less Less Wrong, for casual (“off-topic”) discussion of rationality.)
While there are benefits to that sort of aggressive division, there are also costs. Many conversations move smoothly between many different topics, and either they stay on one side (vitiating the entire reason for a split), or people yell and scream to get them moved, being a huge pain in the ass and making it much harder to have these conversations.
I’ve seen exactly this pattern before at SF conventions. At the last Eastercon (the largest annual British SF convention) there was some criticism that the programme contained too many items that had nothing to do with SF, however broadly defined. Instead, they were items of interest to (some of) the sort of people who go to the Eastercon.
A certain amount of that sort of thing is ok, but if there’s too much it loses the focus, the reason for the conversational venue to exist. Given that there are already thriving forums such as agi and sl4, discussing their topics here is out of place unless there is some specific rationality relevance. As a rule of thumb, I suggest that off-topic discussions be confined to the Open Threads.
If there’s the demand, LessLessWrong might be useful. Cf. rec.arts.sf.fandom, the newsgroup for discussing anything of interest to the sort of people who participate in rec.arts.sf.fandom, the other rec.arts.sf.* newsgroups being for specific SF-related subjects.
Better yet, we could call them Overcoming Bias and Less Wrong, respectively.
point well taken.
I thought it was an interesting thought experiment and relates to that alien message. Not a “this is how we should do FAI”.
But if I ever get positive karma again, at least now I know the unwritten rules.
If you stick around, you will. I have a −15 top-level post in my criminal record, but I still went on to make a constructive contribution, judging by my current karma. :-)
What about the strategy of “refining the art of human rationality” by preprocessing our sensory inputs by intelligent machines and postprocessing our motor outputs by intelligent machines? Or doesn’t that count as “refining”?