How Load-Bearing Is Your Ideology?


Imagine you’re looking for convincing explanations of why stealing is wrong. Let’s use the example of stealing from your local mom-and-pop store.

[Image: the outside of a small deli in New York]

You ask a therapist, and they make an appeal to your empathy. Think about how that would make the owners feel: they rely on selling their goods to make a living, and if too much is stolen they may not be able to make ends meet. You have a good amount of empathy, and so you find this argument reasonably convincing.

You ask an economist, and they tell you that theft imposes costs on everyone else. In order to compensate for the loss of income from selling that good, the store will have to raise prices. They might even have to invest in security systems and cameras, and raise prices further to cover the cost. You don’t like it when goods get more expensive, so you find this reasonably convincing.

You ask a judge, and they tell you it’s good to live under the rule of law. If everyone goes around just stealing whatever they want, it leads to chaos, as people take to more desperate measures to secure the property they need to live their lives and run their businesses. You like living in a society with trustworthy laws, so you find this reasonably convincing.

Then you ask one of the more uneducated Christians in your neighbourhood, and they tell you that stealing is wrong because Jesus said so.1


Say you’re an atheist, or that you’ve somehow never heard of Christianity before. “Why should I listen to what this Jesus guy says?” you ask.

They tell you that Jesus was the Son of God, who sacrificed himself to free humanity from the burden of Original Sin.

“Who’s God?” you ask, “And what is Original Sin?”

God, they explain, is the creator of the entire universe. And Original Sin is the burden all mankind bore, a legacy from our first ancestors in the Garden of Eden who first disobeyed God and were cast out.

“Huh,” you say, “this is a lot to take in. So you’re saying that in order to understand why stealing is wrong, I have to accept that God is the creator of the world, that he made a place called the Garden of Eden where our first ancestors lived, that those ancestors disobeyed God and were kicked out, and that God sent us his Son Jesus who died to make it all okay again? And that Jesus said not to steal, so we shouldn’t?”

Yes, they say, and now that you’ve accepted all that, I expect you to get baptized, and I expect to see you in church every Sunday.

[Image: a church on a clear sunny day]

You can see why, if you didn’t already believe in Christianity, this argument would have trouble convincing you. It’s a huge pill to swallow just to get to “stealing is wrong”. You have to absorb an entire ideology first.

Absorbing an ideology like this isn’t necessarily a bad thing! There are folks who became genuinely better people after finding religion. And once you have that ideology in place, you can extend it to other things, like why it’s wrong to commit adultery or kill people.2

But if all you’re trying to get across is an argument for the single claim “stealing is wrong”, you’re not likely to get there by pushing an ideology. You’d be much better off relying on arguments that already fit your audience’s existing worldview.


I bring this up because I’ve recently gotten pulled into the vicinity of another weird Silicon Valley gravitational field: Effective Altruism.

This is a group of mostly good-hearted nerds who are interested in, as they describe it, “using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis.”

In practice, this usually means promoting charitable work and funding, focusing on charities that are cost-effective: likely to do the most good for the most people per dollar spent.

This is all founded on a philosophy of utilitarianism, which is a whole moral system in and of itself, and most Effective Altruists seem to accept some variety of it.

A cornerstone of Effective Altruist thought is to give people equal moral weight regardless of who they are or where they live. Some take it even further, extending that into caring deeply about the suffering of animals, or about potential future people, our distant descendants hundreds or even thousands of years from now.

As a result, they tend to focus on, again in their words, “global poverty, factory farming, and the long-term future of life on Earth”. Concretely, this means things like:

  • Healthcare interventions for impoverished countries, which help way more people than giving the same amount of money to, say, a first-world hospital.

  • Fighting against factory farming, which if successful would impact the lives of way more animals than, say, wildlife conservation efforts.

  • Working to prevent catastrophic risks to humanity, like nuclear war or severe pandemics: threats that are under-resourced and more likely to kill a majority of humanity than things like climate change.3


That’s a lot to take in, right? And there are reasonable objections you might make to it:

  1. Is utilitarianism really the best moral philosophy to base our lives on? In practice, doesn’t it often lead to questionable ends-justify-the-means thinking? And even in theory, doesn’t it lead to weird extreme ideas like the repugnant conclusion?

  2. If we’re funnelling a bunch of money into third-world countries, how do we know that we aren’t making them dependent on that aid by outcompeting local industries and suppliers?

  3. Why should I place as much value on far-off people as on my family? Shouldn’t people mostly take care of their own support networks rather than spreading themselves too thin?

  4. Do potential future people really have the same moral weight as people alive today? Doesn’t that lead to a lot of weird conclusions? And even if we accept this, how do we know how our actions will affect the far-off future?

  5. How many factory-farmed chickens does it take to equal the moral weight of a single impoverished human? How do you even “quantify” suffering like that, or compare different kinds of suffering? Is that even a coherent question to ask?

  6. How do you know that by focusing on the quantifiable, you’re not trading off things that are really important, but hard to quantify?

  7. How do I know these people don’t have ulterior motives? A lot of people who argue you into changing your moral system are doing so for predatory reasons!

If you’re trying to sell someone on the entirety of Effective Altruist thinking, you’ll have to deal with these kinds of questions. Maybe you can give a convincing answer to some of them. Can you convincingly answer all of them?

I doubt it, unless you can convince your conversation partner to read a Bible-sized book. Of which the Effective Altruists have several.4

But what if you’re not trying to sell the whole ideology? What if you’re not looking to convince them to “become Effective Altruists” or absorb the entire worldview?

What if, instead, you’re trying to argue for a single claim from the movement, like “You should donate a non-trivial chunk of your income to cost-effective anti-malaria charities”, or “You should focus your career on nuclear disarmament”?

In that case, pushing the entire ideology down their throat is likely to be counterproductive.


Some Effective Altruists know this already. A while ago, a few of them ran a contest to see what kind of philosophical arguments could best persuade people to give $10 to charity. The entries included things like (to paraphrase) “Everyone, including all major religions, agrees on the value of charity”, or “Donating to charity does more ‘good’ than keeping the money for yourself”, or “Imagine that a friend of yours swapped places with someone who doesn’t have access to clean water: how much would you pay to get them out of there?”

All of these were better than having no argument at all. But the argument that did best out of all of them just matter-of-factly explained a specific third-world disease:

Many people in poor countries suffer from a condition called trachoma. Trachoma is the major cause of preventable blindness in the world. Trachoma starts with bacteria that get in the eyes of children, especially children living in hot and dusty conditions where hygiene is poor. If not treated, a child with trachoma bacteria will begin to suffer from blurred vision and will gradually go blind, though this process may take many years. A very cheap treatment is available that cures the condition before blindness develops. As little as $25, donated to an effective agency, can prevent someone going blind later in life.

How much would you pay to prevent your own child becoming blind? Most of us would pay $25,000, $250,000, or even more, if we could afford it. The suffering of children in poor countries must matter more than one-thousandth as much as the suffering of our own child. That’s why it is good to support one of the effective agencies that are preventing blindness from trachoma, and need more donations to reach more people.

This argument never directly mentions Effective Altruism or utilitarianism. Its only brush with either is when it asks you to numerically compare the suffering of a poor child with your own, and even then the numbers are so ridiculously skewed that you don’t need to adopt utilitarianism to be convinced by it. All it says is “Here’s something awful that millions of people deal with, it’s really cheap to cure, you can afford to help. Will you?”
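
To spell out the arithmetic the argument leans on, here’s a minimal sketch; the $25 and $25,000 figures come from the quote above, and the variable names are mine:

```python
# The arithmetic implicit in the trachoma argument (figures taken from
# the quoted text; variable names are my own).
cure_cost = 25            # dollars to prevent one distant child's blindness
own_child_value = 25_000  # lower bound on what you'd pay for your own child

# The argument only asks you to weigh a distant child's suffering at
# more than this fraction of your own child's:
threshold = cure_cost / own_child_value
print(f"required moral weight: {threshold} (one part in {own_child_value // cure_cost})")
# -> required moral weight: 0.001 (one part in 1000)
```

That’s the whole trick: the ratio is so lopsided that almost any moral system, utilitarian or not, clears the bar.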


This is another example of a general principle I’ve learned for dealing with people whose mindsets or cultures differ from yours: you have to meet them where they’re at.

Some folks have called Effective Altruism a cult. I don’t think it’s a cult—for instance, I’ve never heard of an Effective Altruist organization doing things like deliberately isolating people from their friends and family—but I do think it’s becoming a religion of sorts, in the sense that it’s a moral system promoted by formal institutions that are trying to do good things based on that moral system.

(And they have holy books!)

Changing someone’s moral system is hard, and people will be resistant to it.5 But nearly everyone’s moral system agrees that it’s bad for children to go blind, and that it would be good to have a cheap and trustworthy way to prevent that from happening.

Good arguments don’t rest their weight on load-bearing ideologies. They’re so common-sensically good that they work with a wide variety of ideological or moral systems.

GiveWell does this really well. Every single one of their top charity recommendations is backed by extensive research, not just about how the charities are run, but about how much of a difference your dollars will make to the people they serve. They publish full, transparent reports for those who want to vet them. But for those who don’t have the time or interest, they summarize it simply: anywhere from $3,500 to $5,500 to save someone’s life.6
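
To put those numbers in donor terms, here’s a quick back-of-the-envelope sketch; the cost-per-life range comes from GiveWell’s summary quoted above, while the $10,000 donation is a made-up example:

```python
# Rough lives-saved-per-donation estimate using the cost-per-life range
# quoted above. The donation amount is hypothetical.
donation = 10_000  # dollars

for cost_per_life in (3_500, 5_500):
    lives = donation / cost_per_life
    print(f"${donation:,} at ${cost_per_life:,} per life -> about {lives:.1f} lives saved")
# -> $10,000 at $3,500 per life -> about 2.9 lives saved
# -> $10,000 at $5,500 per life -> about 1.8 lives saved
```

No ideology required: just a price tag and a division.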

Other areas of the Effective Altruist movement aren’t as good.

The biggest red flag I’ve seen is the focus on “community building” as a cause area, especially projects that aim to increase the number of “Highly Engaged Effective Altruists” they create.

I worry that this risks displacing the movement’s goals: the focus shifts toward expanding the movement for its own sake, promoting the ideology, or improving the social status of those within it (the same perverse incentives seen in movements and religions across the world), instead of actually making progress on the causes it ostensibly cares about.


This is especially worrying because utilitarianism can be dangerous if you use it wrong.

In the past few years, some rich folks in the cryptocurrency industry have pledged a lot of money to Effective Altruist organizations. One of the biggest donors was the FTX Future Fund, which was funded by the FTX cryptocurrency exchange.

If you were watching the news late last year, you know that FTX went bankrupt, and their founder, Sam Bankman-Fried, was arrested on fraud charges.

The legal process is still slowly grinding out the “official” story, but generally speaking, what is alleged to have happened is that FTX’s associated hedge fund, Alameda Research, illegally borrowed a significant chunk of FTX’s customer funds and used them to make risky bets on crypto assets. This didn’t directly cause them problems until:

  1. A bunch of those crypto assets tanked in value in late 2022, and
  2. A tweet from the head of another crypto exchange cast doubt on FTX, causing a huge number of FTX’s customers to try to withdraw their money at once, only to find that it wasn’t there anymore.

Some people have blamed the Effective Altruists and their ideology for this. The argument is: if you believe that a large number of people’s lives are riding on your ability to make (and donate) money—as Sam certainly did—it’s really easy to convince yourself that the good results of your unethical actions will outweigh the bad. After all, think of how many malaria treatments you can buy with that fraud money!

This is somewhat unfair, and is arguably an exaggeration of the ideology. The Effective Altruists’ responses to these accusations were, overall, “No, what are you talking about, we’ve explicitly said this is not okay.” I genuinely believe they’re being sincere when they say this.

But there’s also a sense in which this isn’t all that unfair a criticism.

If you’re going to use utilitarian ethics, you have to consider not only the short-term immediate effects of what you’re doing but also the second-order and third-order effects, long-term effects, and overall risks. As a general rule, you should assume that approximately every human being on the planet is bad at this.

On the flip side, humans are really good at fooling themselves into thinking that their questionable actions are morally justified if they stand to personally gain from them.

The more complicated your moral system is, the easier it is to get one of the steps wrong. And “Calculate the expected utility of your actions” is a way more complicated moral system than, say, “Thou shalt not steal”.
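
To make that concrete, here’s a minimal sketch of a naive expected-utility calculation; every probability and utility below is invented purely for illustration, and the point is only how easily a self-serving estimate flips the answer:

```python
# Naive expected utility: sum of probability * utility over outcomes.
# All numbers are invented purely for illustration.
def expected_utility(outcomes):
    """Expected utility over mutually exclusive (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# An ethically dubious scheme, as its perpetrator estimates it...
self_serving = [(0.95, 1_000), (0.05, -10_000)]  # "I'll almost surely get away with it"
# ...and with a less flattering estimate of getting caught:
sober = [(0.70, 1_000), (0.30, -10_000)]

print(expected_utility(self_serving))  # 450.0   -> "clearly worth it!"
print(expected_utility(sober))         # -2300.0 -> clearly not
```

The sum itself is trivial; the danger lives in the inputs, which is exactly where the self-deception described above does its work.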

The upshot: you should consider it risky to convince someone to change their moral system, especially if you’re changing it from one that’s currently working well for them to one that’s much harder to “use” properly. If your goal is to do the most good in the world, even if you’re basing that goal on a utilitarian foundation, the risk of accidentally turning people evil should be a factor in your mental calculations.7


If you’re involved in Effective Altruism outreach or community-building or missionary work, whatever you call it, here’s a suggestion: don’t push the whole ideology on people who aren’t explicitly curious about it.

Instead, ask about their interests, pick a specific cause area you think they’d like to hear about, and give them the non-ideological reasons why they might want to care about it. Then step back and give them the space to come to their own conclusions.

If they agree with you that the area is important, and that they should donate their money or their time, they might find their way to the rest of Effective Altruism later on.

If, on the other hand, they never fully commit to becoming a part of the movement? Leave them be. They might have a good reason.

Either way, they’ll still be helping make the world a better place, and ultimately that’s what really matters.

  1. There are better Christian arguments against stealing than this, some of which might incorporate some of the other arguments above. Let’s pretend your conversation partner isn’t the kind of person to know those arguments.

    And yes, technically it was Moses who told us, via the Ten Commandments; Jesus just reaffirmed the message. I’m skipping over that part, sorry. ↩︎

  2. …or why it’s wrong to engage in gay relationships. Don’t get me wrong, Christianity isn’t all good. ↩︎

  3. Climate change alone might flood our cities, change weather patterns, and cause mass migration, but it’s unlikely to cause the complete extinction of humanity. Some Effective Altruists do focus on climate change, but in general they tend to prefer working on things that are more potentially catastrophic and comparatively under-resourced.

    “Wait,” you may ask, “how is pandemic preparedness ‘under-resourced’ in the post-covid world?” Surprisingly, even after covid we’re still underfunding general pandemic preparedness relative to the huge risks involved. But also: these folks were talking about it well before covid, especially in the context of protecting against ever-improving biotech that could pave the way for bioterrorism attacks. ↩︎

  4. I haven’t read those books, but I have read more of their online material than the average person, and I’ve met one or two people involved in the movement. And even I still have fundamental disagreements with them on some of those questions, especially 1, 4, 5, and 6. ↩︎

  5. As they should be, because most people trying to convince you to change or abandon your morals don’t have your best interests at heart. ↩︎

  6. And in case that didn’t tug at your heartstrings enough: that $3,500 estimate is for a program from Helen Keller International that focuses exclusively on saving the lives of children. ↩︎

  7. If you’re looking for an example of how things could go wrong that isn’t FTX, take a look at this post by Will MacAskill from literally a decade ago. In it, he argues that rule #1 for Effective Altruists is to be careful about your own survival and not take unnecessary risks like going hang-gliding, because as an Effective Altruist you’re hopefully going to save the lives of hundreds or thousands of people during your lifetime, and if you die you won’t be able to do that.

    The post, on its own, isn’t that bad. I’ve heard of some Effective Altruists who are so into donating that they feel bad about spending their money on food, healthcare, and other basic life necessities, and this post serves as a counterbalance to that tendency.

    But I can also see how a different sort of person could instead take this as “My life literally has more value than that of others, because of all the good I’m planning to do for the world.”

    And that’s uncomfortably close to ideas like “It is good for me to take power and wield it liberally if I know I’m doing good for the world.”

    Have I mentioned that some Effective Altruists have started expanding into politics? ↩︎
