Great stuff! As someone who’s come to all this Bayes/LessWrong stuff quite late, I was surprised to discover that Scott Alexander’s blog is one of the more popular in the blogosphere, flying the flag for this sort of approach to rationality. I’ve noticed that he’s liked by people on both the Left and the Right, which is a very good thing. He’s a great moderating influence and I think he offers a palatable introduction to a more serious, less biased way of looking at the world, for many people.
gurugeorge
I think the concept of psychological neoteny is interesting (Google Bruce Charlton neoteny) in this regard.
Roughly, the idea would be that some people retain something of the plasticity and curiosity of children, whereas others don’t; they mature into “proper” human beings and lose that curiosity and creativity. The former are the creative types; the latter are the average human type.
There are several layered ironies if this is a valid notion.
Anyway, the latter type really do exhaust their interests in maturity: they stick to one career, their interests are primarily friends and family, etc., so it’s easy to see how, for them, life might be “done” at some point. For geeks, nerds, artists, and probably a lot of scientists too, the curiosity never ends; there’s always interest in what happens next, what’s around the corner, so for them the idea of life extension and immortality is a positive.
It could be that, like sleep, the benefits of reading fiction aren’t obvious and aren’t on the surface. IOW, escapism might be like dreaming—a waste from one point of view (time spent) but still something without which we couldn’t function properly, so therefore not a waste, but a necessary part of maintenance, or summat.
I remember reading a book many years ago which talked about the “hormonal bath” in the body being actually part of cognition, such that thinking of the brain/CNS as the functional unit is wrong (it’s necessary but not sufficient).
This ties in with the philosophical position of Externalism (I’m very much into the Process Externalism of Riccardo Manzotti). The “thinking unit” is really the whole body—and actually finally the whole world (not in the Panpsychist sense, quite, but rather in the sense of any individual instance of cognition being the peak of a pyramid that has roots that go all the way through the whole).
I’m as intrigued and hopeful about the possibility of uploading, etc., as the next nerd, but this sort of stuff has always led me to be cautious about the prospects of it.
There may also be a lot more to be discovered about the brain and body, in the area of some connection between the fascia and the immune system (cf. the anecdotal connection between things like yoga and “internal” martial arts, and health).
I’m not sure what you mean by gerrymandered.
What I meant is that you have sub-systems dedicated to (and originally evolved to perform) specific concrete tasks, and shifting coalitions of them (or rather shifting coalitions of their abstract core algorithms) are leveraged to work together to approximate a universal learning machine.
IOW any given specific subsystem (e.g. “recognizing a red spot in a patch of green”) has some abstract algorithm at its core which is then drawn upon at need by an organizing principle which utilizes it (plus other algorithms drawn from other task-specific brain gadgets) for more universal learning tasks.
That was my sketchy understanding of how it works from evol psych and things like Dennett’s books, Pinker, etc.
Furthermore, I thought the rationale of this explanation was that it’s hard to see how a universal learning machine can get off the ground evolutionarily (it’s going to be energetically expensive, not fast enough, etc.) whereas task-specific gadgets are easier to evolve (“need to know” principle), and it’s easier to later get an approximation of a universal machine off the ground on the back of shifting coalitions of them.
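To make the “shifting coalitions” picture concrete, here’s a toy sketch in Python. Every name in it (the gadgets, the tasks, the orchestrator) is hypothetical and invented purely for illustration; the point is only the architecture: narrow, task-specific core algorithms recruited in varying combinations by an organizing principle.

```python
# Toy sketch of "shifting coalitions": task-specific gadgets, each with a
# narrow core algorithm, are recruited in varying combinations by an
# orchestrator to approximate a more general learner.
# All names here are hypothetical, for illustration only.

def detect_red_on_green(pixels):
    """Narrow gadget: is there a red spot in a green patch?"""
    return any(p == "red" for p in pixels) and any(p == "green" for p in pixels)

def count_edges(pixels):
    """Another narrow gadget: count colour transitions along a strip."""
    return sum(1 for a, b in zip(pixels, pixels[1:]) if a != b)

MODULES = {"red_spot": detect_red_on_green, "edges": count_edges}

def orchestrate(task, pixels):
    """Organizing principle: recruit whichever coalition of gadgets the
    current task needs, and combine their outputs."""
    coalitions = {
        "find_fruit": ["red_spot", "edges"],   # reuse both gadgets
        "find_boundary": ["edges"],            # reuse just one
    }
    return {name: MODULES[name](pixels) for name in coalitions[task]}

scene = ["green", "green", "red", "green"]
print(orchestrate("find_fruit", scene))
# {'red_spot': True, 'edges': 2}
```

The key design point is that neither gadget “knows” about the tasks: the same core algorithms get reused in different coalitions, which is the (very loose) sense in which task-specific machinery can be leveraged toward more universal learning.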
That’s a lot to absorb and I’ve only skimmed it, so please forgive me if responses to the following are already implicit in what you’ve said.
I thought the point of the modularity hypothesis is that the brain only approximates a universal learning machine and has to be gerrymandered and trained to do so?
If the brain were naturally a universal learner, then surely we wouldn’t have to learn universal learning (e.g. we wouldn’t have to learn to overcome cognitive biases, Bayesian reasoning wouldn’t be a recent discovery, etc.)? The system seems too gappy and glitchy, too full of quick judgement and prejudice, to have been designed as a universal learner from the ground up.
I think that it’s acceptable when it works.
What I mean is, a lot of the transhumanist stuff is predicated on these things working properly. But we know how badly wrong computers can sometimes go, and that’s in everyone’s experience, so much so that “switch it off and switch it on again” is part of common, everyday lore now.
Imagine being so intimately connected with a computerized thingummybob that part of your conscious processing, what makes you you, is tied up with it—and it’s prone to crashing. Or hacking, or any of the other ills that can befall computery things. Potential horrorshow.
Similar for bio enhancements, etc. For example, physical enhancements like steroids, but safer and easier to use, are still a long way off, and until they arrive, people are just not going to go for it. We really only have a very sketchy understanding of how the body and brain work at the moment. It’s developing, but it’s still early days.
So ultimately, I think for the foreseeable future, people are still going to go for things that are separable, that the natural organic body can use as tools that can be put away, that the natural organic body can easily separate itself from, at will, if they go wrong.
They’re not going to go for any more intimate connections until such things work much, much better than anything we’ve got now.
And I think it’s actually debatable whether that’s ever going to happen. It may be the case that there are limits on complexity, and that the “messy” quality of organics is actually the best way of having extremely complex thinking, moving objects—or that there’s a trade-off between having stupid things that do massive processing well, and clever things that do simple processing well, and you can’t have both in one physical (information processing) entity (but the latter can use the former as tools).
Another angle to look at this would be the rickety nature of high IQ and/or genius: it’s a toss-up whether a hyper-intelligent being is going to be of any use at all, or just go off the rails as soon as it’s booted up. It’s probably the same for “AI”.
I don’t think any of this is insurmountable, but I think people are massively underestimating the time it’s going to take to get there; and we’ll already have naturally evolved into quite different beings by that time (maybe as different as early hominids are from us), so by then this particular question is moot (as there will have been co-evolution with the developing tech anyway, only very gradual).
Thanks for the heads up, never heard of this guy before but he’s very good and quite inspiring for me where I’m at right now.
I may be mad, but I actually think of Popper more or less in the same breath as Bayesianism—modus tollens and reductio (the main methods of Popperian “critical rationalism”—CR basically says that the reductio is the model of all successful empirical reasoning) just seem to me to be special cases of Bayesianism. The idea with both (as I see it) is that we start where we are and get to the truth by shaving away untruths, by testing our ideas to destruction and going with what’s left standing because we’ve got nothing better left standing—that seems to me the basic gist of both philosophies.
I’m also fond of the idea that knowledge is always conjecture, and that belief has nothing to do with knowledge (and knowledge can occasionally be accidental). Knowledge is just the “aperiodic crystals” of language in its manifest forms (ink on paper, sounds out of a mouth, coding, or whatever), which, by convention (“language games”), represent or model reality either accurately or not, regardless of psychological state of belief.
Furthermore, while I’m on my high horse: Bayesianism is conjectural deductive reasoning; neither “subjective” nor “objective” approaches have anything to do with it. It doesn’t “update beliefs”; it updates, modifies, and discards conjectures.
IOW, you take a punt, a bet, a conjecture (none of which have anything to do with belief) at how things are, objectively. The punt is itself in the form of a “language crystal”, objectively out there in reality, in some embodied form, which is something embedded in reality that conventionally models reality, as above—again, nothing to do with belief.
In this context, truth and objectivity (in another sense) are ideals—things we’re aiming for. It may be the case that there is no true proposition, but when we say we have a probably true proposition, what that means is that we have a ranking of conjectures against each other, in a ratio, and the most probable is the most provable (the one that can be best corroborated—in the Popperian sense—by evidence). That’s all.
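That “ranking of conjectures against each other, in a ratio” can be written down directly: it’s Bayes’ rule in odds form, where posterior odds = prior odds × likelihood ratio. A minimal sketch, with numbers invented purely for illustration:

```python
# Minimal sketch of "ranking conjectures against each other, in a ratio":
# Bayes' rule in odds form. The numbers below are invented for illustration.

def posterior_odds(prior_h1, prior_h2, like_e_h1, like_e_h2):
    """Odds of conjecture H1 over rival H2 after seeing evidence E:
    (prior odds) * (likelihood ratio)."""
    return (prior_h1 / prior_h2) * (like_e_h1 / like_e_h2)

# Two rival conjectures, initially equally plausible (odds 1:1);
# the evidence is four times as likely under H1 as under H2.
odds = posterior_odds(0.5, 0.5, 0.8, 0.2)
print(odds)  # 4.0 -- H1 is now the better-corroborated conjecture
```

Note that nothing psychological appears anywhere in the calculation: it’s just a comparison of how well each conjecture predicts the evidence, which fits the “most probable is the most provable” reading above.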
I think for non-elites it’s about the same. It depends on how you conceive “ideas” of course—whether you restrict the term purely to abstractions, or broaden it to include all sorts of algorithms, including the practical.
Non-elites aren’t concerned with abstractions as much as elites, they’re much more concerned with practical day-to-day matters like raising a family, work, friends, entertainment, etc.
Take, for instance, DIY videos on YouTube: there are tons of them nowadays, and that’s an example of the kind of thing that non-elites (and indeed elites, to the extent that they might actually care about DIY) are going to benefit from tremendously. And I think it’s going to be natural for a non-elite individual to check out a few (after all, it’s pretty costless, except in terms of a tiny bit of time) and sift out what seem like the best methods.
What happens if it doesn’t want to—if it decides to do digital art or start life in another galaxy?
That’s the thing, a self-aware intelligent thing isn’t bound to do the tasks you ask of it, hence a poor ROI. Humans are already such entities, but far cheaper to make, so a few who go off and become monks isn’t a big problem.
I can’t remember where I first came across the idea (maybe Daniel Dennett) but the main argument against AI is that it’s simply not worth the cost for the foreseeable future. Sure, we could possibly create an intelligent, self-aware machine now, if we put nearly all the relevant world’s resources and scientists onto it. But who would pay for such a thing?
What’s the ROI for a super-intelligent, self-aware machine? Not very much, I should think—especially considering the potential dangers.
So yeah, we’ll certainly produce machines like the robots in Interstellar—clever expert systems with a simulacrum of self-awareness. Because there’s money in it.
But the real thing? Not likely. The only way it will be likely is much further down the line when it becomes cheap enough to do so for fun. And I think by that time, experience with less powerful genies will have given us enough feedback to be able to do so safely.
If there’s any kernel to the concept of rationality, it’s the idea of proportioning beliefs to evidence (Hume). Everything really flows from that, and the sub-variations (like epistemic and instrumental rationality) are variations of that principle, concrete applications of it in specific domains, etc.
“Ratio” = comparing one thing with another, i.e. (in this context) one hypothesis with another, in light of the evidence.
(As I understand it, Bayes is the method of “proportioning beliefs to evidence” par excellence.)
All purely sensory qualities of an object are objective, yes. Whatever sensory experience you have of an object is just precisely how that object objectively interacts with your sensory system. The perturbation that your being (your physical substance) undergoes upon interaction with that object via the causal sensory channels is precisely the perturbation caused by that object on your physical system, with the particular configuration (“wiring”) it has.
There are still subjective perceived qualities of objects though, e.g. illusory ones (like the Müller-Lyer illusion, etc., but not “illusions” like the famous “bent” stick in water; that’s a sensory experience), pleasant, inspiring, etc.
I’m calling “sensory” here the experience (perturbation of one’s being) itself, “perception” the interpretation of it (i.e. hypothetical projection of a cause of the perturbation outside the perturbation itself). Of course in doing this I’m “tidying up” what is in ordinary language often mixed (e.g. sometimes we call sensory experiences as I’m calling them “perceptions”, and vice-versa). At least, there are these two quite distinct things or processes going on, in reality. There may also be caveats about at what level the brain leaves off sensorily receiving and starts actively interpreting perception, not 100% sure about that.
Yes, for that person. Remember, we’re not talking about an intrinsic or inherent quality, but an objective quality. Test it however many times you like, the lemon will be sweet to that person—i.e. it’s an objective quality of the lemon for that person.
Or to put it another way, the lemon is consistently “giving off” the same set of causal effects that produce in one person “tart”, another person “sweet”.
The initial oddness arises precisely because we think “sweetness” must itself be an intrinsic quality of something, because there’s several hundred years of bad philosophy that tells us there are qualia, which are intrinsically private, intrinsically subjective, etc.
Hmm, but isn’t this conflating “learning” in the sense of “learning about the world/nature” with “learning” in the sense of “learning behaviours”? We know the brain can do the latter, it’s whether it can do the former that we’re interested in, surely?
IOW, it looks like you’re saying precisely that the brain is not a ULM (in the sense of a machine that learns about nature), it is rather a machine that approximates a ULM by cobbling together a bunch of evolved and learned behaviours.
It’s adept at learning (in the sense of learning reactive behaviours that satisfice conditions) but only approximately adept at learning about the world.
Great stuff, thanks! I’ll dig into the article more.
I think there’s always been something misleading about the connection between knowledge and belief. In the sense that you’re updating a model of the world, yes, “belief” is an ok way of describing what you’re updating. But in the sense of “belief” as trust, that’s misleading. Whether one trusts one’s model or not is irrelevant to its truth or falsity, so any sort of investment one way or another is a side-issue.
IOW, knowledge is not a modification of a psychological state, it’s the actual, objective status of an “aperiodic crystal” (sequences of marks, sounds, etc.) as filtered via public habits of use (“interpretation” in more of the mathematical sense) to be representational. IOW there are three components: the sequence of scratches, the way the sequence of scratches is used (usually involving interaction with the world, implicitly predicting the world will react a certain way conditional upon certain actions), and the way the world is. None of those involve belief.
So don’t worry about belief. Take things lightly. Except on relatively rare mission-critical occasions, you don’t need to know, and as Feynman typically wisely pointed out, it’s ok not to know.
That thing of lurching from believing in one thing as the greatest thing since sliced bread, to another, I’m familiar with, but at some point, you start to see that emotional roller-coaster as unnecessary.
So it’s not gullibility, but lability (labileness?) that’s the key. Like the old Zen master story “Is that so?”:
“The Zen master Hakuin was praised by his neighbours as one living a pure life. A beautiful Japanese girl whose parents owned a food store lived near him. Suddenly, without any warning, her parents discovered she was with child. This made her parents angry. She would not confess who the man was, but after much harassment at last named Hakuin. In great anger the parents went to the master. “Is that so?” was all he would say.
“After the child was born it was brought to Hakuin. By this time he had lost his reputation, which did not trouble him, but he took very good care of the child. He obtained milk from his neighbours and everything else he needed. A year later the girl-mother could stand it no longer. She told her parents the truth—the real father of the child was a young man who worked in the fishmarket. The mother and father of the girl at once went to Hakuin to ask forgiveness, to apologize at length, and to get the child back. Hakuin was willing. In yielding the child, all he said was: “Is that so?”
Oh, true for the “uploaded prisoner” scenario, I was just thinking of someone who’d deliberately uploaded themselves and wasn’t restricted—clearly suicide would be possible for them.
But even for the “uploaded prisoner”, given sufficient time it would be possible; there’s no absolute impermeability to information anywhere, is there? And where there’s information flow, control is surely ultimately possible? (The image that just popped into my head was something like training mice, via flashing lights, to gnaw the wires :) )
But that reminds me of the problem of trying to isolate an AI once built.
Fascinating topic, and a topic that’s going to loom larger as we progress. I’ve just registered in order to join in with this discussion (and hopefully many more at this wonderful site). Hi everybody! :)
Surely an intelligent entity will understand the necessity for genetic/memetic variety in the face of the unforeseen? That’s absolutely basic. The long-term, universal goal is always power (to realize whatever); power requires comprehensive understanding; comprehensive understanding requires sufficient “generate” for some “tests” to hit the mark.
The question then, I guess is, can we sort of “drift” into being a mindless monoculture of replicators?
Articles like this, or s-f in general, or even just thought experiments in general (again, on the “generate” side of the universal process), show that we are unlikely to, since they already serve as warnings of potential dangers.