lol yeah, I know what you’re talking about.
Okay okay, fine. ;-)
Sometimes I feel like looking into how I can help humanity (e.g. 80000 hours stuff), but other times I feel like humanity is just irredeemable and may as well wipe itself off the planet (via climate change, nuclear war, whatever).
For instance, humans are so facepalmingly bad at making decisions for the long term (viz. climate change, running out of fossil fuels) that it seems clear that genetic or neurological enhancements would be highly beneficial in changing this (and other deficiencies, of course). Yet discourse about such things is overwhelmingly negative, mired in what I think are irrational kneejerk reactions to defend “what it means to be human.” So I’m just like, you know what? Fuck it. You can’t even help yourselves help yourselves. Forget it.
Thoughts?
How cool, I’ve never heard of CFAR before. It looks awesome. I don’t think I’m capable of making a lot of money, but I’ll certainly look into CFAR.
Edit: I just realized that CFAR’s logo is at the top of the site. Just never looked into it. I am not a smart man.
Given the unbelievable difficulty in overcoming cognitive bias (mentioned in this article and many others), is it even realistic to expect that it’s possible? Maybe there are a lucky few who may have that capacity, but what about a majority of even those with above-average intelligence, even after years of work at it? Would most of them not just sort of drill themselves into a deeper hole of irrationality? Even discussing their thoughts with others would be of no help, given the fact that most others will be afflicted with cognitive biases as well. Since this blog is devoted to precisely that effort (i.e. helping people become more rational), I would think that those who write posts here must have reason to believe that it is indeed quite possible, but do you have any examples of such improvement? Have any scientists done any studies on overcoming cognitive bias? The ones I’ve seen only show that being aware of cognitive bias barely removes its effects.
It almost seems like the only way to truly overcome cognitive biases is to do something like design a computer program based on something you know for sure you’re not biased about (e.g. statistics that people formed correct opinions about in various experiments) and then run it for something you are likely to be biased about.
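To make that idea concrete, here's a minimal sketch (all numbers invented for illustration): encode a rule you're confident isn't biased—Bayes' theorem with explicit base rates—and let the program do the updating in a domain where intuition notoriously fails, like base-rate neglect on test results.

```python
# Sketch of "outsource the judgment to a rule you trust":
# Bayes' theorem applied mechanically, so base-rate neglect
# can't creep in. The disease/test numbers are made up.

def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' theorem."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Intuition often says "the test is ~90% accurate, so ~90% chance."
# With a 1% base rate, the program says otherwise:
p = posterior(prior=0.01, sensitivity=0.9, false_positive_rate=0.09)
print(round(p, 3))  # → 0.092
```

The point isn't the medical example itself—it's that the arithmetic is done by a rule validated elsewhere, not by the gut feeling you suspect is biased.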
I apologize if there are already a bunch of posts (or even comments!) answering this question; I’ve been on the site like all day and haven’t come across any, so I figured it couldn’t hurt to ask.
people basing morality on fiction.
Yes, and that seems truly damaging. I get the need to create conflict in fiction, but it always seems to come at the expense of technological progress, in a way I’ve never really understood. When I read Brave New World, I genuinely thought it truly was a “brave new world.” So what if some guy was conceived naturally?? Why is that inherently superior? Sounds like status quo bias, if you ask me. Buncha Luddite propaganda.
I’ve actually been working on a pro-technology, anti-Luddite text-based game. Maybe working on it is in fact a good idea towards balancing out the propaganda and changing public opinion...
Since people were pretty encouraging about the quest to do one’s part to help humanity, I have a follow-up question. (Hope it’s okay to post twice on the same open thread...)
Perhaps this is a false dichotomy. If so, just let me know. I’m basically wondering if it’s more worthwhile to work on transitioning to alternative/renewable energy sources (i.e. we need to develop solar power or whatever else before all the oil and coal run out, and to avoid any potential disastrous climate change effects) or to work on changing human nature itself to better address the aforementioned energy problem in terms of better judgment and decision-making. Basically, it seems like humanity may destroy itself (if not via climate change, then something else) if it doesn’t first address its deficiencies.
However, since energy/climate issues seem pretty pressing and changing human judgment is almost purely speculative (I know CFAR is working on that sort of thing, but I’m talking about more genetic or neurological changes), civilization may become too unstable before it can take advantage of any gains from cognitive enhancement and such. On the other hand, climate change/energy issues may not end up being that big of a deal, so it’s better to just focus on improving humanity to address other horrible issues as well, like inequality, psychopathic behavior, etc.
Of course, society as a whole should (and does) work on both of these things. But one individual can really only pick one to make a sizable impact—or at the very least, one at a time. Which do you guys think may be more effective to work on?
[NOTE: I’m perfectly willing to admit that I may be completely wrong about climate change and energy issues, and that collective human judgment is in fact as good as it needs to be, and so I’m worrying about nothing and can rest easy donating to malaria charities or whatever.]
The short answer is that I’m fairly confident about it and I’m fairly confident in the calibration of my confidence levels.
The long answer relies on my clarifying, I think, what I mean by “would rather do.” I’ll define it as “interesting enough to me to want to spend a significant number of years of my life on.” I actually started getting really burned out on my current pursuit, so if you’d asked me this question about a week ago, I would have answered that I’d rather just get some dumb job and play video games in my free time for the rest of my life, but now I feel much more invigorated about it (primarily due to reading about deliberate practice), and when it comes to actually applying myself to something, this is the thing I’m most interested in.
The only runner-up, as I mentioned in the linked post, is something involved in charity work, ideally something a bit analytical and strategic. I’ve also toyed with the idea of doing something more math-related, but math has always been a weakness for me, though I enjoy it, so I doubt it would be best to pursue that professionally.
Through 80000hours.org, I learned that I could have just been suffering from familiarity bias—i.e. that I’m only considering types of jobs I’m aware of when there could be something I’m not aware of that I would love—so I looked through the descriptions of a bunch of careers, and I just couldn’t bring myself to care about any of them. From what I gather about human psychology, I’m sure if I chose, at random, a career that I’d be likely to be good at, I’d come to like it and be happy, and that’s pretty much what I’ll do once it’s clear that I’ve failed at my current pursuit, but there’s no strong argument for jumping to that stage prematurely. I’ll be a few more years behind in that case than if I quit and jumped to something else now, but since I don’t currently value succeeding in other fields anyway, it seems as though I may as well continue rolling the dice for a while.
Oh yeah, I’m not saying Spivak’s Calculus doesn’t provide good training in proofs. I really didn’t even get far enough to tell whether it did or not, in which case, feel free to disregard my comment as uninformed. But to be more specific about my “not liking”, I just found the part I did read to be more opaque than engaging or intriguing, as I’ve found other texts (like Strang’s Linear Algebra, for instance).
Edit: Also, I’m specifically responding to statements that I thought were referring to liking the book in the enjoyment sense (expressed on this thread and elsewhere as well). If that’s not the kind of liking they meant, then my comment is irrelevant.
It’s a much more advanced book, more suitable for a deeper review somewhere at the intermediate or advanced undergraduate level. I think Axler’s “Linear Algebra Done Right” is better as a second linear algebra book (though it’s less comprehensive), after a more serious real analysis course (i.e. not just Spivak) and an intro complex analysis course.
Damn, really?? But I hate it when math books (and classes) effectively say “assume this is true” rather than delve into the reason behind things, and those reasons aren’t explained until 2 classes later. Why is it not more pedagogically sound to fully learn something rather than slice it into shallow, incomprehensible layers?
For what it’s worth, I’m doing roughly the same thing, though starting with linear algebra. At first I started with multivariable calc, but when I found it too confusing, people advised me to skip to linear algebra first and then return to MVC, and so far I’ve found that that’s absolutely the right way to go. I’m not sure why they’re usually taught the other way around; LA definitely seems more like a prereq of MVC.
I tried to read Spivak’s Calc once and didn’t really like it much; I’m not sure why everyone loves it. Maybe it gets better as you go along, idk.
I’ve been doing LA via Gilbert Strang’s lectures on the MIT Open CourseWare, and so far I’m finding them thoroughly fascinating and charming. I’ve also been reading his book and just started Hoffman & Kunze’s Linear Algebra, which supposedly has a bit more theory (which I really can’t go without).
Just some notes from a fellow traveler. ;-)
Sigh, well, I’ve been trying to fix it for about ten years (so, exactly as long as I’ve been failing. Coincidence?? Probably not). I’m on 2 anti-depressants right this minute (the fourth or fifth cocktail I’ve tried). I’ve gone through years of therapy. And the result? Still depressed, often suicidally.
So what else am I supposed to do? I refuse to go to therapy again. I’m sick of telling my whole life story over and over, and looking back on my past therapists, I think they were unhelpful at best and harmful at worst (for encouraging me to pursue my ludicrous pipe dreams, for instance). Moreover, talk therapy (including cognitive behavioral therapy, which some say is the most effective form) is, according to several meta-studies I’ve looked at, of dubious benefit.
I could try ECT, but apparently it has pretty bad side-effects. I’ve looked into submitting myself as a lab rat for deep brain stimulation (for science!), but haven’t been able to find a study that wouldn’t require quitting my job and staying somewhere across the country for two months. So here I am.
But if we can sidestep the ad hominem argument for a moment, it sounds like you’re saying that my aversion to failing at something else is irrational. Would you mind pointing out the error in my reasoning? (This sort of exchange is basically cognitive behavioral therapy, btw.)
lol I almost added a sort of disclaimer addressing that. Yes, I am definitely clinically depressed—partly due to my having failed so epically, imo, but of course I’d say that. ::eyeroll:: However, I don’t see the benefit in just discounting everything I say with the statement “you’re depressed.” Not that you did, but that’s the response people usually seem to give.
No one succeeds constantly. Success generally follows a string of failures.
Yeah, so they say. But you have to admit that the degree of success and the length of strings of failures are quite different for each person. If that weren’t true, then every actor would be a movie star. Moreover, success is never guaranteed, no matter how many failures you’ve endured!
I’m dealing with a bout of what I assume is basically superstition. Over the last 10 years, I’ve failed disastrously at two careers, and so I’ve generalized this over everything: I assume I’ll fail at any other career I want to pursue, too.
To me, this isn’t wholly illogical: these experiences prove to me that I’m just not smart or hard-working enough to do anything more interesting than pushing paper (my current job). Moreover, desirable careers are competitive practically by definition, so failing at every other career I try is an actual possibility.
Theoretically, perhaps I just haven’t pursued the career I’m “really” talented at, but now I’m far too old to adequately pursue whatever that might be. (There’s also the fact that sometimes I feel so discouraged that I don’t even WANT to pursue a career I might like ever again, but obviously that’s a different issue.)
I obviously don’t want to be one of those mindless “positive thinking” idiots and just “go for it” and “follow my heart” and all that crap. And I assume you guys won’t dish out that advice. But am I overreacting here? Is it in fact rational to attempt yet another career, or is it safe to assume any attempt will most likely fail, and instead of expending energy on a losing battle, I may as well roll over and resign myself to paper-pushing?
I loved Mathnet! ^_^ 1 1 2 3 5 -- eureka!
CFAR-style rationality training might sound less impressive than changing around people’s neurology, but it might be an approach with a lot fewer ugly side effects.
It’s a start, and potentially fewer side effects is always good, but think of it this way: who’s going to gravitate towards rationality training? I would bet people who are already more rational than not (because it’s irrational not to want to be more rational). Since participants are self-selected, a massive part of the population isn’t going to bother with that stuff. There are similar issues with genetic and neurological modifications (e.g. they’ll be expensive, at least initially, and therefore restricted to a small pool of wealthy people), but given the advantages over things like CFAR I’ve already mentioned, it seems like it’d be worth it...
I have another issue with CFAR in particular that I’m reluctant to mention here for fear of causing a shit-storm, but since it’s buried in this thread, hopefully it’ll be okay. Admittedly, I only looked at their website rather than actually attending a workshop, but it seems kind of creepy and culty—rather reminiscent of Landmark, for reasons not the least of which is the fact that it’s ludicrously, prohibitively expensive (yes, I know they have “fellowships,” but surely not that many. And you have to use and pay for their lodgings? wtf?). It’s suggestive of mind control in the brainwashing sense rather than rationality. (Frankly, I find that this forum can get that way too, complete with shaming thought-stopping techniques, e.g. “That’s irrational!”) Do you (or anyone else) have any evidence to the contrary? (I know this is a little off-topic from my question—I could potentially create a workshop that I don’t find culty—but since CFAR is currently what’s out there, I figure it’s relevant enough.)
Given how things are going, technology evolves in a way where I don’t think we have to fear that we will have no energy when coal runs out. There’s plenty of coal around, and green energy evolves fast enough for that task.
You could be right, but I think that’s rather optimistic. This blog post speaks to the problems behind this argument pretty well, I think. Its basic gist is that the amount of energy it will take to build sufficient renewable energy systems demands sacrificing a portion of the economy as is, to a point that no politician (let alone the free market) is going to support.
This brings me to your next point about addressing politics instead of neurology. Have you ever tried to get anything changed politically...? I’ve been involved in a couple of movements, and my god is it discouraging. You may as well try to knock a brick wall down with a feather. It basically seems that humanity is just going to be the way it is until it is changed on a fundamental level. Yes, I know society has changed in many ways already, but there are many undesirable traits that seem pretty constant, particularly war and inequality.
As for solar as opposed to other technologies, I am a bit torn as to whether it might be better to work on developing technologies rather than whatever seems most practical now. Fusion, for instance, if it’s actually possible, would be incredible. I guess I feel that working on whatever’s practical now is better for me, personally, to expend energy on since everything else is so speculative. Sort of like triage.
I think you underrate the existential risks that come along with substantial genetic or neurological enhancements.
It’s true, I absolutely do. It irritates me. I guess this is because the ethics seem obvious to me: of course we should prevent people from developing a “supervirus” or whatever, just as we try to prevent people from developing nuclear arms or chemical weapons. But steering towards a possibly better humanity (or other sentient species) just seems worth the risk to me when the alternative is remaining the violent apes we are. (I know we’re hominids, not apes; it’s just a figure of speech.)
When it comes to running out of fossil fuels we seem to do quite well. Solar energy halves costs every 7 years.
That’s certainly a reassuring statistic, but a less reassuring one is that solar power currently supplies less than one percent of global energy usage!! Changing that (and especially changing that quickly) will be an ENORMOUS undertaking, and there are many disheartening roadblocks in the way (utility companies, lack of government will, etc.). The fact that solar itself is getting less expensive is great, but unfortunately the changing over from fossil fuels to solar (e.g. phasing out old power plants and building brand new ones) is still incredibly expensive.
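For what the halving claim is worth, here's the rough arithmetic behind it (the 7-year halving period comes from the comment I'm replying to; the function and numbers are just illustrative):

```python
# Compound cost decline: if cost halves every 7 years,
# cost after n years is cost_0 * 0.5 ** (n / 7).

def cost_multiplier(years, halving_period=7):
    """Fraction of today's cost remaining after `years`."""
    return 0.5 ** (years / halving_period)

# After 21 years (three halvings), cost falls to 1/8 of today's:
print(cost_multiplier(21))  # → 0.125
```

Of course, even an eightfold cost drop says nothing by itself about how fast the installed base can grow from under one percent of global supply, which is the part that worries me.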
The core question is: “What kind of impact do you expect to make if you work on either issue?”
Do you think there’s work to be done in the space of solar power development that people other than yourself aren’t effectively doing? Do you think there’s work to be done in terms of better judgment and decision-making that other people aren’t already doing?
I’m familiar with questions like these (specifically, from 80000 hours), and I think it’s fair to say that I probably wouldn’t make a substantive contribution to any field, those included. Given that likelihood, I’m really just trying to determine what I feel is most important so I can feel like I’m working on something important, even if I only end up taking a job over someone else who could have done it equally well.
That said, I would hope to locate a “gap” where something was not being done that should be, and then try to fill that gap, such as volunteering my time for something. But there’s no basis for me to surmise at this point which issue I would be able to contribute more to (for instance, I’m not a solar engineer).
To me it seems much more effective to focus on more cognitive issues when you want to improve human judgment. Developing training to help people calibrate themselves against uncertainty seems to have a much higher return than trying to do fMRI studies or brain implants.
At the moment, yes, but it seems like it has limited potential. I think of it a bit like bootstrapping: a judgment-impaired person (or an entire society) will likely make errors in determining how to improve their judgment, and the improvement seems slight and temporary compared to more fundamental, permanent changes in neurochemistry. I also think of it a bit like people’s attempts to lose weight and stay fit. Yes, there are a lot of cognitive and behavioral changes people can make to facilitate that, but for many (most?) people, it remains a constant struggle—one that many people are losing. But if we could hack things like that, “temptation” or “slipping” wouldn’t be an issue.
The problem with coal isn’t that it’s going to run out but that it kills hundreds of thousands of people via pollution and that it creates climate change.
From what I’ve gathered from my reading, the jury is kind of out on how disastrous climate change is going to be. Estimates seem to range from catastrophic to even slightly beneficial. You seem to think it will definitely be catastrophic. What have you come across that makes you so certain?
Yeah, that accurately describes their effect on me.
I used to be on Bupropion, but it had unpleasant physical effects on me (i.e. heart racing/pounding, which makes sense, given that it’s stimulant-like) without any noticeable mood effects. I was quite disappointed, since a friend of mine said he practically had a manic episode on it. However, I took it in conjunction with an SNRI, so maybe that wouldn’t have happened if I’d just taken it on its own.… Idk.
I’m actually surprised my psychiatrist hasn’t recommended an MAOI to me in that case, since she freaks the hell out when I say I’m suicidal, and I’ve done so twice. I’ll put MAOIs at the bottom of my aforementioned new to-do list. :)
Huh, interesting. Up-managing one’s doctor seems frowned upon in our society—since it usually comes in the form of asking one’s doctor for medications mentioned in commercials—but obviously your approach seems much more valid. Kind of irritating, though, that doctors don’t appear to really be doing their job. :P
The exchange here has made me realize that I’ve actually been skipping my meds too often. Heh.… :\ So if I simply tighten that up, I will effectively increase my dosage. But if that doesn’t prove to be enough, I’ll go the route you’ve suggested. Thanks! :)
Ah, thanks. :)
I agree. Getting downvoted feels bad man, no matter the reason.