Mistake 1: Trying to convince others that I know more than I really do.
Mistake 2: Thinking that I actually know more than I do.
After my oldest came out as a Santa-denier, I told him something along the lines of “Congratulations. I admit that I and every other grown-up were lying to you. From now on I will never deliberately lie to you about anything again. Please keep your insight secret from other kids who aren’t in on the joke yet, so they too can benefit from figuring it out themselves.”
Thanks for reminding me about SENS and de Grey; I should email him. In fact, I should reach out to all the smart people in the research community whom I know well enough to randomly pester, and collect their opinions on this.
What’s the connection to re-evaluating FAI and transhumanism?
I didn’t say I think eugenics = Nazi. I just said Nazis advocated a particularly murderous and arbitrary form of eugenics, so now that’s all that comes to mind for most people today when they think about eugenics, if they do at all.
With a lot of work, though, we may eventually make that issue moot through in-vivo gene therapy.
“the Green Revolution disproves”
“the technology to use their fields efficiently”
“developing plants and irrigation methods”
“with modern technology it is almost completely renewable”
This illustrates precisely what I’m trying to say. The reason we haven’t experienced a Malthusian Crunch is not that the concept itself is impossible or absurd, but that we have developed new technologies fast enough to continually postpone it.
This has some implications:
If technological development is derailed by cultural backlash, prolonged recession, or political lunacy, we may find ourselves having to cope with population overshoot on top of whatever the original problem was.
Responsible global citizens need to defend and promote technological progress with every bit of the same zeal they currently have for the natural environment.
Extrapolations of continued technological progress based on past performance are inherently unreliable. So if our confidence that we need not worry about overshoot rests on extrapolating technological progress, that confidence inherits the same unreliability, and we cannot afford complacency.
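The dependence on continued technological progress can be made concrete with a toy model: population grows exponentially, while carrying capacity grows only as long as technology keeps advancing. All the numbers and growth rates below are illustrative assumptions, not empirical estimates.

```python
# Toy overshoot model. Population compounds every year; carrying capacity
# compounds only while technological progress continues. If progress
# stalls, overshoot follows after a delay. Numbers are illustrative only.

def years_until_overshoot(pop, capacity, pop_rate, cap_rate, stall_year=None):
    """Return the first year in which population exceeds carrying capacity,
    or None if no overshoot occurs within a 500-year horizon."""
    for year in range(1, 501):
        pop *= 1 + pop_rate
        if stall_year is None or year < stall_year:
            capacity *= 1 + cap_rate
        if pop > capacity:
            return year
    return None

# While capacity growth keeps pace with population growth, the Crunch is
# postponed indefinitely:
print(years_until_overshoot(7e9, 10e9, 0.01, 0.01))
# If progress stalls at year 50, overshoot arrives a few decades later:
print(years_until_overshoot(7e9, 10e9, 0.01, 0.01, stall_year=50))
```

The point of the sketch is only that “no overshoot so far” is conditional on the capacity-growth term, which is exactly the extrapolation being questioned.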
I was tempted to vote “makes no sense at all”. I did not, because I’ve had far too many experiences where I dismissed a colleague’s idea as the product of muddled thinking, only to later realize that a) the idea made sense and they just didn’t know how to express it clearly, or b) the idea made practical sense but my profession sweeps it under the rug because it’s too inconvenient. On Stack Overflow and LW I see the same tendency: mistaking hard or tedious problems for meaningless ones, then “solving” them by prematurely claiming to have dissolved the question, or by substituting a different question the respondent finds more convenient.
Some questions really are meaningless or misguided. But experience has taught me to usually give questions the benefit of the doubt until I have enough background information to be more sure. So, I played along and gave the technically correct answer of “I’m partly both”.
Come to think of it, “Red/Blue makes no sense at all” is not even a valid answer to the question. The question did not ask whether it made sense. Such a meta-question should really be a checkbox orthogonal to the main poll question.
People gain skills by working on hard problems, so it doesn’t seem necessary for you to take additional time to explicitly hone your skill set before starting on any project(s) that you want to work on.
The embarrassing truth is I spent so much time cramming stuff into my brain while trying to survive in academia that until now I haven’t really had time to think about the big picture. I just vectored toward what at any given point seemed like the direction that would give me the most options for tackling the aging problem. Now I’m finally as close to an optimal starting point as I can reasonably expect and the time has come to confront the question: “now what”?
The current 500-year window needs to be VERY typical if it’s the main evidence in support of the statement that “even with no singularity technological advance is a normal part of our society”.
This is like someone in the 1990s saying that constantly increasing share price “is a normal part of Microsoft”.
I think technological progress is desirable and hope that it will continue for a long time. All I’m saying is that being overconfident about future rates of technological progress is one of this community’s most glaring weaknesses.
I favor access to birth control by individuals and am against state decisions on family planning and health.
So do I. But, I bet I can come up with a demographic trend or two that would make the above position a difficult one to defend.
But I predict that the U.S. will not default—at least not this time.
I do too, for what it’s worth. I also predict that I will not die or become uninsurable during the coming year, but I pay my ALCOR dues nonetheless.
I suspect that this is all political theater, but in any ritual combat there is the inherent risk that someone will miscalculate and things will get real faster than anybody is prepared for.
Great idea! Here’s how I can convert your prospective experiment into retrospective ones:
Comparing hazard functions for individuals with diagnoses of infertility versus individuals who originally enter the clinic record system due to a routine checkup.
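The retrospective comparison could be sketched by estimating a discrete hazard function for each cohort from (follow-up time, event) records. The cohorts and numbers below are synthetic stand-ins; a real analysis would use de-identified clinic records and a proper survival-analysis library with covariate adjustment.

```python
# Sketch: discrete hazard estimation for two hypothetical cohorts.
# records: list of (time, event) pairs, event=1 for death, 0 for censoring.
from collections import Counter

def discrete_hazard(records):
    """Return {time: deaths at time / number still at risk at that time}."""
    deaths = Counter(t for t, e in records if e == 1)
    exits = Counter(t for t, _ in records)  # all who leave the risk set
    at_risk = len(records)
    hazard = {}
    for t in sorted(exits):
        if deaths[t]:
            hazard[t] = deaths[t] / at_risk
        at_risk -= exits[t]  # remove deaths and censored alike
    return hazard

# Synthetic example data (follow-up years, death observed):
infertility_cohort = [(2, 1), (3, 0), (5, 1), (5, 1), (8, 0), (9, 1)]
checkup_cohort     = [(4, 0), (6, 1), (7, 0), (9, 0), (10, 1), (12, 0)]

print(discrete_hazard(infertility_cohort))
print(discrete_hazard(checkup_cohort))
```

Comparing the two estimated hazard curves (ideally with a log-rank test or Cox regression) is what would reveal whether the infertility cohort ages differently.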
The tough part will be guarding against Goodhart’s Law. I suspect that the current system of publications and grant money as an indicator of ability started out as an attempt to improve the efficiency of scientific progress and has by now been thoroughly Goodharted.
As Lumifer points out, tenure was intended to give productive scientists some protected time so they could think. However, the number of hoops you jump through on the way there is the opposite of protected time, so by the time you get tenure you have become jaded and cynical and acquired habits useful for academic survival but harmful to academic excellence.
Thanks, that’s helpful, I’ll read it.
Why don’t you read what’s been said on this site and elsewhere
Because this is a vast site, and I don’t know where to look for what’s been said already in this case. It reminds me of Googling for a computer problem and turning up page after page of forum posts saying “google it you n00b”.
So again, thank you for the link. But what would be even more helpful is knowing what kinds of search strategies you would pursue if you were struck by an idea that was new to you so you didn’t know what keywords to query (or if there even are any keywords for it yet).
Can someone please let me know why this is the most down-voted I have ever been since de-lurking on this site? I’m not whining, I genuinely want to know what intellectual standards I’m not meeting or what social rules I’m violating by posting this.
My goal in posting this was to identify possible dangling units within the friendly AI concept.
The choke point in our Fritz Haber/Norman Borlaug/Edward Jenner pipeline is not the amount of science education out there. It’s a combination of the low-hanging fruit being picked, insufficient investment in novel approaches, and not enough geniuses.
Very true. Each year we produce thousands of new Ph.D.s and import thousands more, while slowly choking off funding for basic research, so they languish in a post-doc holding pattern until many of them give up and go do something less innovative but safer.
All of these “what you should do if you are a utilitarian” articles should start with “Assuming you are a being for whom utility matters roughly equally regardless of who experiences it...”
Yes! Thank you for articulating in one sentence what I haven’t been able to in a dozen posts.
Once we’ve dealt with the mass starvation, vast numbers of deaths from malaria, horrendous poverty, etc., then we can start paying a lot more attention to awesomeness.
What if, for practical purposes, there is an inexhaustible supply of suck? What if we can’t deal with it once and for all and then turn our attention to the fun stuff?
So, judging from the reception of my post about the Malthusian Crunch, certain Wrongians sense this and have gone into denial (perhaps, if they’re honest with themselves, privately admitting the hope that if they ignore the starving masses long enough, they will go away).
I propose a middle ground between giving everything and giving nothing—a non-arbitrary cutoff for how much help is enough. A cutoff that can be defended on pragmatic grounds without having to assume a shared normative morality.
You put just enough resources into pure suckiness remediation to ensure that spillover suckiness will not derail your awesomeness plans. I emphasize pure because there are pursuits that simultaneously strive for new heights of awesomeness and fix suck in equal measure. Obviously this quality is desirable and such projects should not be penalized for having it.
Given the fate of the societies which did not climb the technological tree sufficiently fast, I’d say throttling down progress sure doesn’t look like a wise choice.
I completely agree, that’s a great point! The sixth one, to be exact.
Yes, I have. In my opinion ten billion is too close to overshoot, and even seven billion is too close. Especially if it is accompanied by increased per-capita demand for resources, which it has been so far. If we’re going to rely mainly on the population term of the equation, I think we need to shrink down to about four billion before we’re back in the safe zone.
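Assuming the equation in question is the I = PAT identity (Impact = Population × Affluence × per-unit Technology impact), a one-line calculation shows why relying on the population term alone is fragile. The specific values are illustrative assumptions, not real consumption figures.

```python
# Minimal I = P * A * T sketch, assuming this is the equation referenced.
# Values are illustrative placeholders, not empirical data.

def impact(population, per_capita_consumption, impact_per_unit):
    return population * per_capita_consumption * impact_per_unit

baseline = impact(7e9, 1.0, 1.0)
# Shrinking the population term to 4e9 is offset if per-capita demand
# doubles in the meantime:
shrunk_but_richer = impact(4e9, 2.0, 1.0)
print(shrunk_but_richer > baseline)
```

This is the comment’s point in arithmetic form: a population cut buys safety only if the affluence term doesn’t grow faster than the population term shrinks.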
Or, how about telling kids that Santa is rewarding or punishing them for how he predicts they will act during the coming year? Get them started on Newcomb problems!