I’m glad to have read this. In particular:
Sometimes, people get confused and call S-curves exponential growth. This isn’t necessarily wrong, but it can confuse their thinking. They forget that constraints exist and think that there will be exponential growth forever. When slowdowns happen, they think that it’s the end of the growth—instead of considering that it may simply be another constraint and the start of another S-curve.
This is obvious in hindsight, but I hadn’t put my finger on it.
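The quoted point can also be seen numerically: the early phase of a logistic (S-) curve is nearly indistinguishable from a pure exponential, and the two only diverge once the constraint starts to bind. A minimal sketch (the function names and parameter values here are my own illustration, not from the post):

```python
import math

def logistic(t, K=1.0, r=1.0, t0=10.0):
    """S-curve: growth toward a carrying capacity K (the 'constraint')."""
    return K / (1 + math.exp(-r * (t - t0)))

def exponential(t, r=1.0, t0=10.0, K=1.0):
    """The exponential that the S-curve's early phase mimics."""
    return K * math.exp(r * (t - t0))

# Early on the two curves are nearly identical; only as t approaches
# the constraint (around t0) do they diverge sharply.
for t in [0, 5, 9, 12, 15]:
    print(f"t={t:>2}  logistic={logistic(t):.4f}  exponential={exponential(t):.4f}")
```

Extrapolating from the early data alone, you can’t tell which curve you’re on—which is exactly why a slowdown is evidence of a constraint, not necessarily of the end of growth.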
I want to offer salutations to this post.
I intend to link to it whenever I have the opportunity to declare my allegiance. I am on the side of civilization.
Looking at this list I kind of want to see these movements mapped on a timeline. When did they start? How fast did they grow?
As a note, I believe that FHI is planning to publish a(n edited?) version of this document as an actual book, à la Superintelligence: Paths, Dangers, Strategies.
(Eli’s personal “trying to have thoughts” before reading the other comments. Probably incoherent. Possibly not even on topic. Respond iff you’d like.)
(Also, my thinking here is influenced by having read this report recently.)
On the one hand, I can see the intuition that if a daemon is solving a problem, there is some part of the system that is solving the problem, and there is another part that is working to (potentially) optimize against you. In theory, we could “cut out” the part that is the problematic agency, preserving the part that solves the problem. And that circuit would be smaller.
Does that argument apply in the evolution/human case?
Could I “cut away” everything that isn’t solving the problem of inclusive genetic fitness and end up with a smaller “inclusive genetic fitness maximizer”?
On the other hand, this seems like a kind of confusing frame. If some humans do well on the metric of inclusive genetic fitness (in the ancestral environment), this isn’t because there’s a part of the human that’s optimizing for that and then another part that’s patiently waiting and watching for a context shift in order to pull a treacherous turn on evolution. The human is just pursuing its goals, and as a side effect, does well at the IGF metric.
But it also seems like you could, in principle, build an Inclusive Genetic Fitness Maximizer out of human neuro-machinery: a mammal-like brain that does optimize for spreading its genes.
Would such an entity be computationally smaller than a human?
Maybe? I don’t have a strong intuition either way. It really doesn’t seem like much of the “size” of the system is due to the encoding of the goals. Approximately 0 of the difference in size is due to the goals?
A much better mind design might be much smaller, but that wouldn’t make it any less daemonic.
And if, in fact, the computationally smallest way to solve the IGF problem is as a side-effect of some processes optimizing for some other goal, then the minimum circuit is not daemon-free.
Though I don’t know of any good reason why it should be the case that not optimizing directly for the metric works better than optimizing directly for it. True, evolution “chose” to design humans as adaptation-executors, but this seems due to evolution’s constraints in searching the space, not due to indirectness having any virtue over directness. Right?
but a ragtag team of hippie-philosopher-AI-researchers
I love this phrase. I think I’m going to use it in my online dating profile.
Building up an intellectual edifice (of whatever quality) around some topic of interest: fairly rare
I definitely do this. I have half-formed books that I might write one day on topics that interest me, and have sprawling yEd graphs in which I’m trying to make sense of confusions and conflicting evidence.
One thing of note is that I was introduced to explicit model building and theorizing a couple of years ago. Because of this, I had the mental handle of “building a model” as a thing that one could do, with a few role models of people doing it.
I was doing model building of some kind before then (I remember drawing out a graph of body language signals when I was about 21), but I think having the explicit handle helped a lot.
I think this is worth being one of the answers.
I upvoted this post as strongly as I could with my Karma, and I’m putting this comment here to reinforce: this is a great question, and I learned some things about the 19th century from it.
I would love to see more things on Less Wrong on the topics of:
Intellectual progress, and the necessary and sufficient conditions for its occurrence.
Whether past eras were more intellectually productive, either overall or per capita.
Only a partial answer: In my personal experience, writing up whatever thoughts / ideas you have (and even better, sharing them with other people), in some form or another, allows for iteration on what otherwise would have been idle musing.
Well, I now understand what Robin Hanson means when he says futurism is telling morality tales.
For example, you could have a “hand scanner” that showed a “hand” as a dot on a map (like an old-fashioned radar display), and similar scanners for fingers/thumbs/palms; then you would see a cluster of dots around the hand, but you would be able to imagine the hand-dot moving off from the others.
This analogy clarifies my view of consciousness a lot.
“Sure, the qualia is always associated with brain activity, but qualia can’t be brain activity, it’s so obviously of a different kind!”
This is a great question, which makes me all the more excited about LessWrong now also being Quora(?).
Success and happiness cause you to regain willpower
Anyone have a citation for this? (Including citations that didn’t replicate.)
A (completely unvetted) idea that was just suggested to me by someone:
There’s some folk wisdom that first-born children are born later, spending more time in the womb on average. If this is true, perhaps it mediates the intelligence-boosting effect? (I have no strong reason to suspect that it does, but it seems good to note possible hypotheses here.)
Does anyone know if the folk wisdom is true? Does being first-born correlate with a longer natal incubation time?
As was written in this seminal post:
In Artificial Intelligence, and particularly in the domain of nonmonotonic reasoning, there’s a standard problem: “All Quakers are pacifists. All Republicans are not pacifists. Nixon is a Quaker and a Republican. Is Nixon a pacifist?”
What on Earth was the point of choosing this as an example? To rouse the political emotions of the readers and distract them from the main question? To make Republicans feel unwelcome in courses on Artificial Intelligence and discourage them from entering the field? (And no, before anyone asks, I am not a Republican. Or a Democrat.)
Why would anyone pick such a distracting example to illustrate nonmonotonic reasoning? Probably because the author just couldn’t resist getting in a good, solid dig at those hated Greens. It feels so good to get in a hearty punch, y’know, it’s like trying to resist a chocolate cookie.
As with chocolate cookies, not everything that feels pleasurable is good for you. And it certainly isn’t good for our hapless readers who have to read through all the angry comments your blog post inspired.
It’s not quite the same problem, but it has some of the same consequences.
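For readers unfamiliar with the formal side of the quoted example: the Quaker/Republican case is the standard “Nixon diamond,” where two default rules fire on the same individual and directly conflict, so a skeptical reasoner withholds judgment. A toy sketch (the data structures and function here are my own illustration, not a real nonmonotonic-logic library):

```python
# Nixon diamond: two conflicting defaults, neither of which overrides the other.
DEFAULTS = {
    "Quaker": ("pacifist", True),       # Quakers are, by default, pacifists
    "Republican": ("pacifist", False),  # Republicans are, by default, not
}

def skeptical_conclusion(categories, attribute):
    """Return the default value if all applicable defaults agree, else None."""
    values = {DEFAULTS[c][1] for c in categories if DEFAULTS[c][0] == attribute}
    return values.pop() if len(values) == 1 else None

print(skeptical_conclusion({"Quaker"}, "pacifist"))                # True
print(skeptical_conclusion({"Quaker", "Republican"}, "pacifist"))  # None
```

The whole point of the example is that the defaults conflict—which is exactly why any politically neutral pair of categories would have illustrated the logic just as well.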
I think circling has the strongest argument going for it
Really? Over and above meditation practices that are about training metacognition?
A single 150-page book required the skins of 12 sheep to make the parchment. That much parchment wasn’t cheap — the parchment on which a book was written cost far more than the actual writing. With that much cost sunk in the materials, it’s no wonder that book-buyers wanted beautiful, handwritten script — it added relatively little to the cost.
It was paper that changed all that. European paper production didn’t get properly underway until the 1300s. Once it did, book prices plummeted, writing became the primary expense of book production, and printing presses with movable type followed a century later.
Do you know what changed that caused paper production to be viable?
How far back can we follow the chain?
Actually, it might be cool to make a GIF of “how the printing press happened” with graphs like this, if anyone likes doing that sort of thing.
Unsolicited editorial note: I think this might be clearer if you had extra versions of the graphs that are labeled with the relevant technologies instead of labeled abstractly. Walking through the printing press example, with a graph for each time-step for instance.
(In practice I did this myself, but it would be easier if you held my hand, with pictures that I can look at.)