I’m glad to see more of this criticism as I think it’s important for reflection and moving things forward. However, I’m not really sure who you’re critiquing or why. My response would be that your critique (a) appears to misrepresent what the “EA mainstream” is, (b) ignores comparative advantage, or (c) says things I just outright disagree with.
~
The EA Mainstream
Perhaps the biggest example of this is the prevalence of “earning to give”. While this is certainly an admirable option, it should be considered as a baseline to improve upon, not a definitive answer.
I imagine we know different people, even within the effective altruist community. So I’ll believe you if you say you know a decent number of people who think “earning to give” is the best option rather than a baseline.
However, 80,000 Hours, the career advice organization that basically started earning to give, has itself written an article called “Why Earning to Give is Often Not the Best Option” and says: “A common misconception is that 80,000 Hours thinks Earning to Give is typically the way to have the most impact. We’ve never said that in any of our materials.”
Additionally, the earning-to-give people I know (including myself) all agree with the baseline argument, but believe earning to give is either best for them relative to other opportunities (e.g., on comparative-advantage grounds) and/or best overall even after considering those arguments (e.g., out of skepticism about EA organizations).
~
Contrast this with, for instance, working at a start-up. Most start-ups are low-impact, but it is undeniable that at least some have been extraordinarily high-impact, so this seems like an area that effective altruists should be considering strongly. Why aren’t there more of us at 23&me, or Coursera, or Quora, or Stripe?
I’m not quite sure what you mean by this:
If you’re asking “why don’t more people work in start-ups?”, I don’t think EAs are avoiding start-ups in any noticeable way. I’ll be working in one, I know several EAs who work in them, and it doesn’t seem all that different from being a software engineer or web developer at a non-startup, except insofar as non-startups offer even better hiring opportunities.
If you’re asking “why don’t more people start start-ups themselves?”, I think you already answered your own question with regard to people being unwilling to take on high personal risk. 80,000 Hours advises people to consider start-ups in essays like “Should More Altruists Consider Entrepreneurship?” and “Salary or Start-up: How Do-Gooders Can Gain More From Risky Careers”. Also, I can think of a few EAs who have started their own start-ups on these considerations. So perhaps people are irrationally risk-averse—that is a valid critique—but I don’t think it’s unique to the EA movement, or that we can do much about it.
If you’re asking “why don’t more people go into start-ups because these start-ups are doing high impact things themselves and therefore are good opportunities to have direct impact?”, then I think you’ve hit on a valid critique that many people don’t take seriously enough. I’ve heard some EAs mention it, but it is outside the EA mainstream.
~
We want to know what the best thing to do is, and we want a numerical value. This causes us to rely on scientific studies, economic reports, and Fermi estimates. It can cause us to underweight things like the competence of a particular organization, the strength of the people involved, and other “intangibles” (which are often not actually intangible but simply difficult to assign a number to).
I think the EA mainstream would agree with you on this one as well—GiveWell, for example, has explicitly distanced itself from numerical calculations (albeit only recently), and several EAs have called into question the usefulness of cost-effectiveness estimates, a charge that was largely led by GiveWell.
~
Comparative Advantage
And beyond the “obvious” alternatives of start-ups and academia, what of the paths that haven’t been created yet? GiveWell was revolutionary when it came about. Who will be the next GiveWell? And by this I don’t mean the next charity evaluator, but the next set of people who fundamentally alter how we view altruism.
I definitely agree that fundamentally altering how people view altruism would be very high impact (if shifted in a beneficial way, of course). But I don’t think everyone has the time, skills, or willingness to do this—or that they even should. I think this ignores the benefits of specialization and trade.
Likewise, instead of EAs taking classes on global security themselves, many defer to GiveWell and expect GiveWell to perform higher-quality research on these giving opportunities. After all, if you have broad trust in GiveWell, it’s hard to beat several savvy full-time analysts with your spare time. GiveWell has a comparative advantage here.
~
It also can cause us to over-focus on money as a unit of altruism, while often-times “it isn’t about the money”: it’s about doing the groundwork that no one is doing, or finding the opportunity that no one has found yet.
Right. But not everyone has the time or talents to do this groundwork. So it seems best to set up some organizations to do it (CEA, MIRI, etc.) and give money to them so they can specialize in these kinds of breakthroughs. The people who do have free time can then start projects like Effective Fundraising or .impact.
If you’re already raising a family and working a full-time job and donating 10%, I think in many cases it’s not worth quitting your job or using your free time to look for more opportunities. We don’t need absolutely everyone doing this search—there are comparative-advantage considerations here too.
~
Outright Disagreement
How many would have pointed out that saying that charities vary by a factor of 1,000 in effectiveness is by itself not very helpful, and is more a statement about how bad the bottom end is than how good the top end is?
I think this has been very helpful from a PR point of view. And even if you think flow-through effects even things out more so that charities only differ by 10x or 100x (which I currently don’t), that’s still significant.
And whether that’s condemnation of the bad end or praise for the top end depends on your perspective and standards for what makes an org good or bad. At least, the slope of the curve suggests that a lot of the difference is coming from the best organizations being a lot better than the merely good ones as opposed to the very bad ones being exceptionally bad (i.e., the curve is skewed toward the top, not toward the bottom).
~
Quantitative estimates often also tend to ignore flow-through effects: [...] These effects are difficult to quantify but human and cultural intuition can do a reasonable job of taking them into account.
But can it? How do you know? I think you should take your own “research over speculation” advice here. I don’t think we understand flow-through effects well enough yet to know whether they can be reliably intuited.
~
Outright Agreement
an effective altruist makes a bold claim, then when pressed on it offers a heuristic justification together with the claim that “estimation is the best we have”. [...] It can appear to an outside observer as though people are opting for the fun, easy activity (speculation) rather than the harder and more worthwhile activity (research).
I agree this is an unfortunate problem.
~
Conclusion
Lest this essay give a mistaken impression to the casual reader, I should note that there are many exemplary effective altruists who I feel are mostly immune to the issues above
This is where I get to the question of who your intended audience is. It seems like the EA mainstream either agrees with many of your critiques already (and therefore you’re just trying to convince EAs to adopt the mainstream) or you’re placing too much burden on EAs to ignore comparative advantage and have everyone become an EA trailblazer.
GiveWell, for example, has explicitly distanced itself from numerical calculations (albeit only recently), and several EAs have called into question the usefulness of cost-effectiveness estimates, a charge that was largely led by GiveWell.
I’ll speak up on this one. I am a booster of more such estimates, detailed enough to make assumptions and reasoning explicit. Quantifying one’s assumptions lets others challenge the pieces individually and make progress, whereas with a wishy-washy “list of considerations pro and con” there is a lot of wiggle room about their strengths. Sometimes doing this forces one to think through an argument more deeply, only to discover big holes, or that the key pieces also come up in the context of other problems.
In prediction tournaments, training people to use formal probabilities has been helpful for their accuracy.
Also, I second the bit about comparative advantage: CEA recently hired Owen Cotton-Barratt to do work related to cause prioritization and flow-through effects. GiveWell Labs is heavily focused on it. Nick Beckstead and others at the FHI also do some work on the topic.
It seems like the EA mainstream either agrees with many of your critiques already (and therefore you’re just trying to convince EAs to adopt the mainstream)
I think that on some of these questions there is also real variation in opinion that should not simply be summarized as a clear “mainstream” position.
The question to my mind is whether the value of attempting to make such estimates is great enough that time spent on them is more cost-effective than just trying to do something directly.
Can you give recent EA related examples of exercises in making quantitative estimates that you’ve found useful?
To be clear, I don’t necessarily disagree with you (it depends on the details of your views on this point). I agree that laying out a list of pros and cons without quantifying things suffers from vagueness of the type you describe. But I strain to think of success stories.
I’ll speak up on this one. I am a booster of more such estimates, detailed enough to make assumptions and reasoning explicit.
I generally agree. But I think there’s a large difference between “here’s a first-pass attempt at a cost-effectiveness estimate purely so we can compare numbers” and “this is how much it costs to save a life”. Another problem is that when comparing figures (e.g., veg ads versus GiveWell’s top charities), I don’t think people take into account the differences in epistemic strength behind each number, and that could cause a concern.
~
I think that on some of these questions there is also real variation in opinion that should not simply be summarized as a clear “mainstream” position.
I don’t know how much variation there is. I don’t claim to know a representative sample of EAs. But I do think there’s not much variation among the EA orgs on the issues I’m calling mainstream.
But I think there’s a large difference between “here’s a first-pass attempt at a cost-effectiveness estimate purely so we can compare numbers” and “this is how much it costs to save a life”.
You still have to answer questions like:
“I can get employer matching for charity A, but not B, is the expected effectiveness of B at least twice as great as that for A, so that I should donate to B?”
“I have an absolute advantage in field X, but I think that field Y is at least somewhat more important: which field should I enter?”
“By lobbying this organization to increase funds to C, I will reduce support for D: is it worth it?”
Those choices imply judgments about expected value. Being evasive and vague doesn’t eliminate the need to make such choices, which tacitly quantify the relative value of the options.
Being vague can conceal one’s ignorance and avoid sticking one’s neck out far enough to be cut off, and it can help guard against being misquoted and PR damage, but you should still ultimately be more-or-less assigning cardinal scores in light of the many choices that tacitly rely on them.
It’s still important to be clear on how noisy different inputs to one’s judgments are, to give confidence intervals and track records to put one’s analysis in context rather than just an expected value, but I would say the basic point stands, that we need to make cardinal comparisons and being vague doesn’t help.
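Carl’s point about tacit quantification can be made concrete with a toy calculation. Everything below is purely illustrative—the function and all effectiveness numbers are hypothetical, not drawn from any real charity data:

```python
# Toy expected-value comparison for the employer-matching question above.
# All effectiveness numbers are hypothetical, for illustration only.
def better_target(donation, eff_a, eff_b, a_is_matched=True):
    """Return which charity produces more expected good for a given donation."""
    value_a = donation * (2 if a_is_matched else 1) * eff_a  # matching doubles the dollars
    value_b = donation * eff_b
    return "A" if value_a >= value_b else "B"

# B must be more than twice as effective per dollar to beat a matched A:
print(better_target(1000, eff_a=1.0, eff_b=1.9))  # -> A
print(better_target(1000, eff_a=1.0, eff_b=2.5))  # -> B
```

Being vague doesn’t avoid this arithmetic; whichever charity you actually pick, you have implicitly asserted an inequality like the one above.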
I agree magnitude is important, for more than just a PR perspective. But it’s possible to compare magnitudes without using figures like “$3400.47”. I think people go a lot less funny in the head when thinking about “approximately ten times better”.
Though I think I agree with [you] that producing figures like “$3400.47” is important for calibration, I don’t think our goal should be to equate the lowest estimated figure with the highest impact cause or even automatically assume that a lower estimated figure is a better cause (not that [you] would say that, of course).
I think there’s a large difference between “here’s a first-pass attempt at a cost-effectiveness estimate purely so we can compare numbers” and “this is how much it costs to save a life”
Note: I do want to know how much it costs to save a life (or QALY or some other easy metric of good). I’d rather have a ballpark conservative estimate than nothing to go off of.
Back when AMF was recommended, I considered the sentence: “we estimate the cost per child life saved through an AMF LLIN distribution at about $3,400.47” to be one of the most useful in the report, because it gave an idea of an approximate upper bound on the magnitude of good to be done and was easy to understand. Sure, it might not be nuanced—but there’s a lot to be said for a simple measure of magnitude that helps people make decisions without large amounts of thinking.
When considering altruism (in the future—I don’t earn yet), I wouldn’t simply have a charity budget that goes to the most effective cause—I’d also be weighing the benefit to the most effective cause against the benefit to myself.
That is to say, if I find out that saving lives (or some other easy metric of good) is cheaper than I thought, that would encourage me to devote a greater proportion of income to said cause. The cheaper the cost of good, the more urgent it becomes to me that the good is done.
So it’s not enough to simply compare charities in a relative sense to find the best. I think the magnitude of good per cost for the most efficient charity, in an absolute sense, is also pretty important for individual donors making decisions about whether to allocate resources to altruism or to themselves.
That is to say, if I find out that saving lives (or some other easy metric of good) is cheaper than I thought, that would encourage me to devote a greater proportion of income to said cause. The cheaper the cost of good, the more urgent it becomes to me that the good is done.
This sort of makes sense to me, but it also doesn’t. My view is that even if causes were way worse than I currently think they are, they’d still be much more important from a utilitarian perspective than spending on myself. Therefore, I do just construct a charity budget out of all the money I’m willing to give up. I understand the feeling that it becomes even more urgent to give up resources, but it was already tremendously urgent in the first place...
But, hey, as long as you’re doing altruistic stuff, I’m not going to begrudge you much!
~
So it’s not enough to simply compare charities in a relative sense to find the best. I think the magnitude of good per cost for the most efficient charity, in an absolute sense, is also pretty important for individual donors making decisions about whether to allocate resources to altruism or to themselves.
I agree magnitude is important, for more than just a PR perspective. But it’s possible to compare magnitudes without using figures like “$3400.47”. I think people go a lot less funny in the head when thinking about “approximately ten times better”.
Though I think I agree with Carl Shulman that producing figures like “$3400.47” is important for calibration, I don’t think our goal should be to equate the lowest estimated figure with the highest impact cause or even automatically assume that a lower estimated figure is a better cause (not that Shulman would say that, of course).
I’m still a student and am only planning how I might spend money when I have it (it seems like a good idea to have a plan for this sort of thing). Thus far I’ve been looking at both effective altruism and financial independence (mostly frugality plus low-risk investment) blogs as possible options. It’s quite possible that once money is actually in my hands and I’m actually in the position of making the trade-off, I’ll see the appeal of the “charity budget” method... or I might discover that my preferences are less or more selfish than I originally thought, etc.
Right now, though... suppose the rate was $5 a life. If I was going to go out and buy a $10 sandwich instead of feeding myself via cheaper means for $5, I’d be weighing that sandwich against one human life. I would be a lot more frugal and devote a greater portion of my income to charity if reality was like that. I’d be relatively horrified by frivolous spending.
On the other extreme, if it cost a billion dollars to save a single life, I could spend all my days being frugal and giving to charity and probably wouldn’t significantly help even one person. I’d fulfill more of my preferences by just enjoying myself and not worrying about altruism beyond the interpersonal level.
More realistically, if it costs $2,000 to save a life, buying a sandwich comes at the opportunity cost of saving <1% of a life... it’s still sort of selfish to choose the sandwich, but I’m simply not that good of a person that I wouldn’t sometimes trade 1/100th of a stranger’s life for a small bit of luxury. But I’d certainly think about getting, say, a smaller house if it meant I could save an additional 1–2 people a year.
Of course, the “charity budget” model is simple and makes sense on a practical level when the good / dollar rate remains relatively constant—as I suppose it generally does. But I wouldn’t actually know how large to make my charity budget, unless I had a sense of how much good I could potentially do.
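The trade-off above can be restated as a toy model. The function is hypothetical and simply rehearses the sandwich arithmetic from the preceding comment:

```python
# Toy model of the sandwich trade-off: lives forgone by choosing a luxury
# over a cheaper alternative, at a given (hypothetical) cost per life saved.
def lives_forgone(luxury_cost, cheap_cost, cost_per_life):
    """Fraction of a life not saved because of the extra spending."""
    return (luxury_cost - cheap_cost) / cost_per_life

print(lives_forgone(10, 5, 5))     # $5/life world: the sandwich "costs" 1.0 lives
print(lives_forgone(10, 5, 2000))  # $2,000/life world: 0.0025 lives
```

This is why the absolute cost-per-life figure, and not just the relative ranking of charities, determines how large a charity budget should be.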
If you’re asking “why don’t more people go into start-ups because these start-ups are doing high impact things themselves and therefore are good opportunities to have direct impact?”, then I think you’ve hit on a valid critique that many people don’t take seriously enough. I’ve heard some EAs mention it, but it is outside the EA mainstream.
Especially because most start-ups don’t have a direct impact on anything altruistic. Yes, there are some really cool start-ups out there that can change the world. There are also start-ups with solid business plans that won’t change the world. And then there are the majority (in these times of cheap VC money) that won’t change the world and often don’t even have a solid business plan.
~
I don’t know how much variation there is. I don’t claim to know a representative sample of EAs. But I do think there’s not much variation among the EA orgs on the issues I’m calling mainstream.
Which positions are you thinking of?
~
Like I said to Ishaan:
Note: I do want to know how much it costs to save a life (or QALY or some other easy metric of good). I’d rather have a ballpark conservative estimate than nothing to go off of.
Back when AMF was recommended, I considered the sentence: “we estimate the cost per child life saved through an AMF LLIN distribution at about $3,400.47” to be one of the most useful in the report, because it gave an idea of an approximate upper bound on the magnitude of good to be done and was easy to understand. Sure, it might not be nuanced—but there’s a lot to be said for a simple measure of magnitude that helps people make decisions without large amounts of thinking.
~
I’m still a student and am only planning how I might spend money when I have it.
I’m also a student about to graduate and have looked a lot at both EA and financial independence. I think you’re thinking about things correctly.
~
Especially because most start-ups don’t have a direct impact on anything altruistic.
Obviously it depends on the startup. But I think people undervalue the impact of, say, creating software that significantly boosts productivity.