The link seems broken? :(
On the process level: I would offer a bit of unsolicited advice about the method you used to generate reasons for pessimism. You (and others) might try it in the future.
First of all, I strongly applaud the step of taking out a physical clock/timer and making a solid attempt at answering the question for yourself. Virtue points (and karma) awarded!
However, when I read your list, it’s blatantly one-sided: you’re only trying to generate reasons for pessimism, not reasons for optimism. This is not as bad as writing the bottom line, but generating arguments for only one side of a question biases your search.
Given this, one thing that I might do is to first spend 5 minutes generating the best arguments for (or concrete scenarios which inspire) pessimism about impact measures, then shift my mental orientation and spend 5 minutes generating arguments for why (or concrete scenarios in which) impact measures seem promising.
But I wouldn’t stop there. I would then spend 5 minutes (or as long as I need) looking over the first list and trying to generate counterarguments: reasons why the world probably isn’t that way. Once I had done that, I would look over my new list of counterarguments and try to generate counter-counterarguments, iterating until I either get stuck, or reach a sort of equilibrium where the arguments I’ve made are as strong as I can see how to make them.
Then I would go back to my second original list (the one with reasons for optimism) and do the same back and forth, generating counterarguments and counter-counterarguments, until I get stuck or reach equilibrium on that side.
At that point, I should have two lists of the strongest reasons I can muster, arguments in favor of pessimism and arguments in favor of optimism, both of which have been stress-tested by my own skepticism. I’d then compare the two lists, and if any of the arguments invalidates or weakens another, I’d adjust them accordingly (there might be a few more rounds of back and forth).
At this point, I’ve really thoroughly considered the issue. Obviously this doesn’t mean that I’ve gotten the right answer, or that I’ve thought of everything. But it does mean that, for all practical purposes, I’ve exhausted the low-hanging fruit of everything I can think of.
0. Take a binary question.
1. Make the best case I can for one answer, giving whatever arguments, or ways the world would have to be, that support that outcome.
2. Similarly make the best case I can for the other answer.
3. Take the reasoning for my first answer and generate counterarguments. Generate responses to those counterarguments. Iterate until I reach equilibrium.
4. Do the same for the reasoning behind my second answer.
5. Compare my final arguments on both sides of the question, adjusting as necessary.
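(As a toy illustration of the loop in steps 3 and 4, here is a minimal Python sketch. The idea of representing arguments as plain lists, and the function names, are my own framing, not part of the procedure as described above.)

```python
# Minimal sketch (my framing, not the original procedure's): model steps 3-4 as
# repeatedly generating counterarguments to the latest layer of arguments,
# stopping when a pass produces nothing new ("stuck" or "equilibrium").

def argue_to_equilibrium(initial_arguments, generate_counters):
    """initial_arguments: list of argument strings for one side of the question.
    generate_counters: stands in for the human step of thinking up responses
    to the most recent layer of arguments; it should return [] once you're stuck."""
    layers = [list(initial_arguments)]
    while True:
        counters = generate_counters(layers[-1])
        if not counters:
            return layers  # every layer of argument / counterargument generated so far
        layers.append(counters)
```

Step 5 would then be the by-hand comparison of the surviving layers from the two runs, one run per side of the question.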
(This procedure is inspired by a technique that I originally learned from Leverage Research / Paradigm Academy. In their terminology, this procedure is called (the weak form of) Pyrrhonian skepticism, after the Greek philosopher Pyrrho (who insisted that knowledge was impossible, because there were always arguments on both sides of a question). I’ve also heard it referred to, more generally, as “alternate stories”.)
Of course, this takes more time to do, and that time cost may or may not be worth it to you. Furthermore, there are certainly pieces of your context or thinking process that I’m missing. Maybe you, in fact, did part of this process. But this is an extended method to consider.
This is surprisingly near to a cogent response.
I’m very glad to read disambiguations like this one.
(It has tentatively prompted me to write up one for all the different things that “rationality” can mean when one is doing “rationality development”. We’ll see if I get around to actually writing it up anytime soon, though.)
I’m glad to have read this. In particular:
Sometimes, people get confused and call S-curves exponential growth. This isn’t necessarily wrong but it can confuse their thinking. They forget that constraints exist and think that there will be exponential growth forever. When slowdowns happen, they think that it’s the end of the growth—instead of considering that it may simply be another constraint and the start of another S-Curve.
This is obvious in hindsight, but I hadn’t put my finger on it.
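(To make the quoted point concrete, here is a toy numerical sketch of my own, not from the post: in its early phase a logistic S-curve is nearly indistinguishable from a pure exponential, and the constraint only shows up as the ceiling approaches. The carrying capacity and growth rate below are made-up numbers.)

```python
# Toy illustration (mine, not the quoted post's): compare exponential growth
# with a logistic S-curve that has the same early growth rate.
import math

ceiling, rate = 1000.0, 0.5   # hypothetical carrying capacity and growth rate

def exponential(t, x0=1.0):
    return x0 * math.exp(rate * t)

def logistic(t, x0=1.0):
    # Standard logistic solution with initial value x0 at t = 0.
    return ceiling / (1 + (ceiling / x0 - 1) * math.exp(-rate * t))

for t in (2, 6, 10, 14, 18):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
# At small t the two columns match closely; later the logistic flattens toward
# the ceiling while the exponential keeps climbing.
```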
I want to offer salutations to this post.
I intend to link to it whenever I have opportunity to declare my allegiance. I am on the side of civilization.
Looking at this list, I kind of want to see these movements mapped on a timeline. When did they start? How fast did they grow?
As a note, I believe that FHI is planning to publish a(n edited?) version of this document as an actual book, à la Superintelligence: Paths, Dangers, Strategies.
(Eli’s personal “trying to have thoughts” before reading the other comments. Probably incoherent. Possibly not even on topic. Respond iff you’d like.)
(Also, my thinking here is influenced by having read this report recently.)
On the one hand, I can see the intuition that if a daemon is solving a problem, there is some part of the system that is solving the problem, and there is another part that is working to (potentially) optimize against you. In theory, we could “cut out” the part that is the problematic agency, preserving the part that solves the problem. And that circuit would be smaller.
Does that argument apply in the evolution/human case?
Could I “cut away” everything that isn’t solving the problem of inclusive genetic fitness and end up with a smaller “inclusive genetic fitness maximizer”?
On the one hand, this seems like a kind of confusing frame. If some humans do well on the metric of inclusive genetic fitness (in the ancestral environment), this isn’t because there’s a part of the human that’s optimizing for that and then another part that’s patiently waiting and watching for a context shift in order to pull a treacherous turn on evolution. The human is just pursuing its goals, and as a side effect, does well at the IGF metric.
But it also seems like you could, in principle, build an Inclusive Genetic Fitness Maximizer out of human neuro-machinery: a mammal-like brain that does optimize for spreading its genes.
Would such an entity be computationally smaller than a human?
Maybe? I don’t have a strong intuition either way. It really doesn’t seem like much of the “size” of the system is due to the encoding of the goals; approximately none of the difference in size would come from the goals themselves?
A much better mind design might be much smaller, but that wouldn’t make it any less daemonic.
And if, in fact, the computationally smallest way to solve the IGF problem is as a side-effect of some processes optimizing for some other goal, then the minimum circuit is not daemon-free.
Though I don’t know of any good reason why it should be the case that not optimizing directly for the metric works better than optimizing directly for it. True, evolution “chose” to design humans as adaptation-executors, but this seems due to evolution’s constraints in searching the space, not due to indirectness having any virtue over directness. Right?
but a ragtag team of hippie-philosopher-AI-researchers
I love this phrase. I think I’m going to use it in my online dating profile.
Building up an intellectual edifice (of whatever quality) around some topic of interest: fairly rare
I definitely do this. I have half-formed books that I might write one day on topics that interest me, and have sprawling yEd graphs in which I’m trying to make sense of confusions and conflicting evidence.
One thing of note is that I was introduced to explicit model building and theorizing a couple of years ago. Because of this, I had the mental handle of “building a model” as a thing that one could do, along with a few role models of people doing it.
I was doing model building of some kind before then (I remember drawing out a graph of body language signals when I was about 21), but I think having the explicit handle helped a lot.
I think this is worth being one of the answers.
I upvoted this post as strongly as I could with my Karma, and I’m putting this comment here to reinforce: this is a great question, and I learned some things about the 19th century from it.
I would love to see more things on Less Wrong on the topics of:
Intellectual progress, and the necessary and sufficient conditions for its occurrence.
Whether past eras were more intellectually productive, either overall or per capita.
Only a partial answer: In my personal experience, writing up whatever thoughts / ideas you have (and even better, sharing them with other people), in some form or another, allows for iteration on what otherwise would have been idle musing.
Well, I now understand what Robin Hanson means when he says futurism is telling morality tales.
For example, you could have a “hand scanner” that showed a “hand” as a dot on a map (like an old-fashioned radar display), and similar scanners for fingers/thumbs/palms; then you would see a cluster of dots around the hand, but you would be able to imagine the hand-dot moving off from the others.
This analogy clarifies my view of consciousness, a lot.
“Sure, the qualia is always associated with brain activity, but qualia can’t be brain activity, it’s so obviously of a different kind!”
This is a great question, which makes me all the more excited about LessWrong now also being Quora(?).
Success and happiness cause you to regain willpower
Anyone have a citation for this? (Including citations that didn’t replicate.)
A (completely unvetted) idea that was just suggested to me by someone:
There’s some folk wisdom that firstborn children are born later, spending more time in the womb on average. If this is true, perhaps it mediates the intelligence-boosting effect? (I have no strong reason to suspect that it does, but it seems good to note possible hypotheses here.)
Does anyone know if the folk wisdom is true? Does being firstborn correlate with a longer gestation?
As was written in this seminal post:
In Artificial Intelligence, and particularly in the domain of nonmonotonic reasoning, there’s a standard problem: “All Quakers are pacifists. All Republicans are not pacifists. Nixon is a Quaker and a Republican. Is Nixon a pacifist?”
What on Earth was the point of choosing this as an example? To rouse the political emotions of the readers and distract them from the main question? To make Republicans feel unwelcome in courses on Artificial Intelligence and discourage them from entering the field? (And no, before anyone asks, I am not a Republican. Or a Democrat.)
Why would anyone pick such a distracting example to illustrate nonmonotonic reasoning? Probably because the author just couldn’t resist getting in a good, solid dig at those hated Greens. It feels so good to get in a hearty punch, y’know, it’s like trying to resist a chocolate cookie.
As with chocolate cookies, not everything that feels pleasurable is good for you. And it certainly isn’t good for our hapless readers who have to read through all the angry comments your blog post inspired.
It’s not quite the same problem, but it has some of the same consequences.