There’s been a lot of buzz since former President Donald Trump was booted from Twitter, and then Facebook. Interestingly, I saw this particular event become a tipping point for Christians to complain about Christian censorship–even though the censorship of Donald Trump had to do with inciting violence and spreading misinformation, things I hope we aren’t claiming as “Christian” positions.

Though the recent surge of this conversation might be tied to the former president, the debate has been going on for a good while. I think it’s important to address it. But to do that, we have to go deep into the world of social media technology and try to find every scrap of data we can.

I can’t answer this question with an unqualified Yes or No. I get it, you want me to so you don’t have to read the rest of the article. But I can’t in good conscience do that, because this matter is anything but simple. So let’s dive in and try to emerge with the truth.


Difficulties in Discovering Christian Censorship on Social Media

People online keep complaining of Christian censorship on social media. So, what does the data say?

Well, that’s super difficult to determine. Because no researchers have asked this question. That means I had to get a little creative…

The Lack of Data

Very little research has been done on this trend. Many Christian or conservative publications have run with a story about Christian content being removed from a social media platform. All I see are stories, typically one-sided, where the offending party never gets to explain the content removal. I keep finding an incident mentioned here and there, but not enough to tell me there is a significant trend. There aren’t statistics on this matter, there aren’t big bipartisan research projects, and there aren’t any formal studies to cite.

Some may argue that all the isolated incidents add up to the truth–that there is a pattern. Some have suggested that the media refuses to acknowledge the anti-Christian bias, so of course there aren’t stories or studies about the subject. Maybe, perhaps, could be. However, until we see data showing this is a widespread and DELIBERATE problem, it’s tough to call it the truth.

Conservative vs. Christian

Now, there is a lot more research and discussion about social media discrimination against conservative content. While “Christian” and “conservative” are too often thought interchangeable, they are not synonyms. Conservative content refers to right-leaning political talking points, which may include certain stances on policies. Christian content is explicitly about views rooted in a Biblical faith. There’s a difference, even if many Christians are conservative.

The confusion of the two types of content, I believe, has led to the general confusion about Christian censorship. Even if we could prove social media companies had an anti-conservative bias, that does not automatically mean they have an anti-Christian bias. Moreover, much perceived anti-conservative bias may actually be about wider issues of misinformation rather than policies (more on that later). Censoring certain conservatives who are peddling conspiracy theories about the vaccine does not mean Facebook is now coming for everyone who says “God loves you” in a post.

While I do end the article talking a lot about conservative censorship because of a specific point I wanted to make, this article is about whether social media restricts or bans explicitly Christian content like prayer, Bible verses, or Christian doctrine. That’s the focus. Since we don’t have research studies to go to, let’s take a step back and look at how social media content moderation works.

How Censoring on Social Media Works

As anyone–right or left, Christian or not–will tell you, the most frustrating part of social media is that the platforms don’t always have clear guidelines on what is and isn’t appropriate. These big tech companies are not going to fork over their proprietary secrets to the public. Detailed feedback is even rarer, so it is difficult to understand how these giant platforms make decisions.

Nevertheless, we do know the two main parties making the tough calls: people and AI.

Real People Making Tough Calls

Many kinds of content fall into gray areas where real people have to make a decision. We don’t have the exact rules these real people follow, but we do know a little about their working conditions. The Washington Post interviewed a former content moderator for YouTube and Twitter who worked up to nine hours a day making serious decisions. About his experience, the Post writes:

He made decisions about whether a child’s genitals were being touched accidentally or on purpose, or whether a knife slashing someone’s neck depicted a real-life killing — and if such content should be allowed online.

Dwoskin, Whalen, and Cabato, “Content moderators at YouTube, Facebook and Twitter see the worst of the web — and suffer silently,” Washington Post, July 25, 2019.

Obviously, this kind of content is very different from a Bible verse. It does demonstrate, though, the psychological toll that content moderators face. Later in the same article, we get a picture of the workload:

Moderators for Twitter were often expected to review as many as 1,000 items a day, which includes individual tweets and replies and messages, according to current and former workers.

Dwoskin, Whalen, and Cabato, “Content moderators at YouTube, Facebook and Twitter see the worst of the web — and suffer silently,” Washington Post, July 25, 2019.

This shows how a regular person–in this case, a 30-something from the Philippines–is making incredibly tough decisions. It’s not like Mark Zuckerberg is personally shadow-banning you for the fun of it. It’s a real dude, often working abroad in an “editorial sweatshop.”

Not only are the working conditions terrible, there’s also an issue with the human element. Humans may lack the right context for a post and thus make a poor decision on whether content should stay up. In 2016, Facebook removed the iconic “Napalm Girl” photo, a black-and-white picture of a naked 9-year-old girl fleeing from a napalm attack during the Vietnam War. Child nudity obviously goes against community standards, and the post was deleted by human moderators (this reminds me of the Pharisees, who follow the law to the letter but miss the overall point). After a global petition argued the photo’s historic significance and the fact that it wasn’t sexualizing children, Facebook caved and restored the photo.

But the importance of context is exactly why Facebook uses mostly humans to make difficult choices. Recently, they have created a “supreme court” oversight board (made up of Nobel prize winners, former prime ministers, and journalists from around the world) to make final calls on the toughest of cases. Real people are the best way to moderate the content. But, of course, real people are also the problem.


All that to say, let’s give some grace to content moderators, who are often from outside the United States, work long, arduous hours, and can suffer psychological trauma. Facebook has 15,000 content moderators for its 2.7 billion monthly active users. These kinds of conditions may explain why some non-offensive content is taken down while other, more offensive content is kept up. Real people are making judgment calls in harsh conditions, and many don’t have time to give adequate feedback.

None of this is to excuse social media when their calls are dubious. But can we stop for a moment and think rationally before we start petitions or make wild accusations on Facebook? Content moderators for social media are spread out across the world, with different backgrounds and experiences. This group is too small to provide adequate moderation, but it’s too big to be a part of a global conspiracy to silence conservatives and Christians! Everyone, stop and think for a second.

The Artificial Intelligence Problem

While human moderators are used in many cases on social media, artificial intelligence can also flag, hide, and delete posts. AI doesn’t suffer psychological trauma, but it often fails to understand the intent of a post, where a human might (occasionally).

And AI does a lot of content moderation nowadays due to the small number of human moderators.

According to the platforms’ recent transparency reports, from April to June 2020, nearly 95% of comments flagged as hate speech on Facebook were detected by AI; and on YouTube 99.2% of comments removed for violating Community Standards were flagged by AI.

Novacic, “Censorship on social media? It’s not what you think,” CBS News, Aug. 28, 2020.

But there are inherent problems with this kind of moderation. Carolyn Wysinger, an activist in the LGBTQ community, has had many posts flagged by AI for violent content because she depicted acts of lynching, racism, and transphobia. While her goal was education, machines don’t pick up on that intent, as reported by CBS News. What may come as a surprise to conservative Christians is that Wysinger complains of an implicit bias in social media moderation against minority groups like Black and transgender people.

The thing that makes AI great at content moderation–its lack of empathy and emotion–is what makes it bad at content moderation. So if you post a bloody picture of Jesus on the cross, and it’s flagged, blocked, or hidden–it’s probably not religious persecution. It’s just a robot designed to look for offensive content doing its job.
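To make that limitation concrete, here is a minimal, purely hypothetical sketch of keyword-based flagging in Python. It is not any platform's actual system; the flagged-term list and sample posts are invented for illustration. The point is simply that a matcher that only sees words cannot tell an educational post from a threat, which is the same blind spot Wysinger ran into.

```python
# Hypothetical sketch of naive keyword-based flagging -- NOT any platform's real system.
# It illustrates why intent gets lost: the matcher only sees words, not context.

FLAGGED_TERMS = {"lynching", "kill", "blood"}  # toy list for illustration

def flag_post(text: str) -> bool:
    """Return True if the post contains any flagged term, ignoring intent entirely."""
    words = {word.strip(".,!?").lower() for word in text.split()}
    return not FLAGGED_TERMS.isdisjoint(words)

educational = "Teaching my students the history of lynching in America."
threat = "I will kill you."

print(flag_post(educational))  # True -- the educational intent is invisible
print(flag_post(threat))       # True -- flagged for the same surface-level reason
```

Real moderation systems rely on far more sophisticated machine-learning classifiers than a word list, but the underlying weakness (judging text without grasping intent) is the same.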

It’s also important to remember that all content on social media is filtered by algorithms. These algorithms make more decisions than just what gets removed. Social media platforms are designed to reward popular content, typically highly emotional content, as opposed to content with a specific bent. While we don’t get a peek into social media algorithms (if we did, bad actors might be able to game the system), research into how content performs on the platforms can help us find out how they work.

Researchers agree that algorithms don’t have a political affiliation or party. Instead, algorithms favor content that elicits strong reactions from users, keeping them hooked so Facebook and Twitter can sell more advertising revenue.

Guynn, “Censorship or conspiracy theory? Trump supporters say Facebook and Twitter censor them but conservatives still rule social media,” USA Today, Nov. 30, 2020.
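To illustrate what the researchers describe, here is a hypothetical, simplified sketch of engagement-driven ranking. The weights and post data are made up, and this is not Facebook's or Twitter's actual code; the takeaway is that nothing in the formula knows or cares about a post's politics or religion, yet provocative content still rises to the top.

```python
# Hypothetical engagement-based ranking sketch -- not any platform's real algorithm.
# Nothing here knows a post's politics or religion; posts that provoke strong
# reactions simply float to the top of the feed.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    """Weight the reactions that keep people on the platform (weights are made up)."""
    return post.likes * 1.0 + post.comments * 3.0 + post.shares * 5.0

feed = [
    Post("calm_devotional", likes=40, comments=2, shares=1),
    Post("outrage_bait", likes=30, comments=90, shares=60),
]

ranked = sorted(feed, key=engagement_score, reverse=True)
print([p.author for p in ranked])  # ['outrage_bait', 'calm_devotional']
```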

This is why it’s hard to take many cases of shadow-banning seriously. Shadow-banning is the idea that while your content isn’t removed, it is being intentionally hidden from your audience’s feed. But just because one post isn’t as popular as another, or someone never saw your post in their feed, doesn’t mean there is malicious activity afoot. When algorithms are updated, things can change drastically.

My Facebook page Theophany Media, which I’ve had for years under different names, has almost 800 followers, but we hardly get any engagement. We were hit hard by the 2018 Facebook update that prioritized content from friends and family over pages. In June 2020, Facebook updated the algorithm to emphasize original news content from transparent sources. These big updates and their various little tweaks might explain the rise and fall of engagement. That is a factor to look into before “censorship” gets the blame.

There’s one last point to make about artificial intelligence content moderation: sometimes it’s triggered by real people reporting the content. If enough people report a post, Facebook sometimes automatically takes it down. This has happened frequently with atheist Facebook pages, like “Atheist Republic” and “Ex-Muslims of North America.” In those cases, the pages were targets of specific Facebook groups set up to use Facebook’s reporting feature to flag any content they deemed anti-Muslim. Because Facebook is explicit that “People can report potentially violating content, including Pages, Groups, Profiles, individual content, and comments,” this ability can be used as a weapon. So it might not be Facebook out to get you. It might be a vengeful person. Luckily, many of these cases are reversible.
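Nobody outside these companies knows the real thresholds or review logic, but as a purely hypothetical sketch, a report-triggered takedown could be as simple as a counter crossing a line, which is exactly what makes coordinated mass reporting an effective weapon:

```python
# Purely hypothetical sketch of report-triggered removal -- the real thresholds
# and review logic are not public. The point: a coordinated group can trip a
# counter that was designed to catch genuinely bad content.

from collections import Counter

REPORT_THRESHOLD = 50  # made-up number for illustration

report_counts = Counter()

def report(post_id: str) -> None:
    """Record one user report; auto-hide the post once the threshold is crossed."""
    report_counts[post_id] += 1
    if report_counts[post_id] == REPORT_THRESHOLD:
        print(f"Post {post_id} automatically hidden pending review.")

# Fifty accounts brigading the same harmless post is enough to trip the switch:
for _ in range(50):
    report("atheist_page_post_123")
```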

Let’s summarize: should the algorithm be fixed? Probably. Is it specifically targeting your religious views? No, no it’s not.

What About When Bad Content Isn’t Flagged?

When talking about social media and censorship, a lot of people immediately point out all the things that social media should have censored but didn’t. In one such conversation, someone pointed to a very recent story where Twitter allegedly refused to remove child porn that was shared multiple times. From what I could tell, this story was only shared on conservative-leaning sites and included no links to validate the claims. But if true–as it very well could be–that is very concerning.

But guess what: social media being bad at content moderation doesn’t prove they maliciously target conservative Christians. It proves that moderation is an imprecise science. There will always be issues.

While a persecution fantasy may lead us to believe that these social media platforms have specific agendas, it’s simply not accurate to make such a claim. As I’ve already admitted, many standards are vague, but there are set, explicit standards. Most social media platforms do not want depictions of violence, sexual content, or the spread of misinformation about important topics. Privacy and the safety of users are also concerns. Censoring your sermon is not a high priority by any means–unless it breaks one of these rules.

When you sign up for a platform, you agree to follow the rules. It’s not a violation of free speech, since the First Amendment applies to the government, not private organizations (see this helpful article). Whenever we enter someone else’s space, we are guests and there are certain expectations. That’s just how it works. You can always read a platform’s rules to get a sense of what you can and cannot post.

Sometimes you won’t be able to think of a good reason why social media content was taken down (blame those stinkin’ bots!). Certainly, the tech giants work in mysterious ways… But before you grab your pitchforks and threaten to leave for another platform that will probably last six months before it goes bankrupt, try to imagine a real reason why a social media platform wanting to be inclusive might flag the content.


Three “Censorship” Case Studies

Some people jump to conclusions about why content is flagged, but the reality is often far less exciting. While I won’t defend the reason every piece of content is taken off a social site, I will explore three instances where a group thought they were being discriminated against but something else was at play.

Here are three case studies–one about anti-conservative bias and two about anti-Christian bias–to help explain some of the complexities of these issues. Each case has a response from social media execs explaining why the content was removed, which is an important piece of information before we cry persecution. As Christians, we are to love our enemy, and that means seeing things from their perspective. So when we can, we need to hear the other side of the story. In the following cases, when social media responded, we can see there weren’t evil intentions.

Case Study #1: Joy Villa’s YouTube Video

The pro-Trump musician Joy Villa posted a music video in August of 2017 called “Make America Great Again.” The song and video were very pro-Trump. However, YouTube took down the video within a few hours. Why? YouTube had received a privacy complaint from someone in the video–though Villa claimed she got consent from everyone before filming. Once she blurred out a face in the video, it was back up. It was that simple. No big conspiracy.

Even though YouTube told her the reason, she seemed to consider it persecution for her beliefs because she claimed she had gotten all the right consent. Hard to say what’s true. But the fact that a simple fix got the video back up seems to refute the claim that YouTube is particularly anti-Trump. In fact, it’s laughable to think YouTube has a leftist agenda when data has suggested that YouTube’s algorithm and content are responsible for the far-right radicalization of many, many people (see here, and here, and here).

Case Study #2: Facebook Banning Bible Verses

The biggest evidence of social media censoring Christians would be banning Bible verses, especially harmless ones. It’s one thing to ban Christian-influenced opinions on policies or scientific/historical views you hold because of your understanding of the Bible–it’s a whole other thing to outright ban the Bible.

For years I’ve seen a post that pops up occasionally about how Facebook is banning the Lord’s Prayer, so you better share this fast and post it on your timeline before they ban it. It is ironic that I could always see that incredible viral post and Facebook didn’t censor it… But time and time again this idea has been debunked (see here, here, and here)–in fact, the claim may originally come from a satire website. Facebook does not have a policy against sharing the Lord’s Prayer.

Do not share this viral post. It’s totally false. From Snopes.

Similar claims of censorship keep popping up. As recently as January 25, 2021, a British YouTuber named Paul Joseph Watson created a viral video that alleged that Facebook was blocking users from posting links to Bible verses from BibleHub.com. He had photo evidence of a notification saying “Your post couldn’t be shared, because this link goes against our Community Standards.”

It would certainly be a big issue if Facebook was keeping people from sharing Bible verses! Turns out, however, this wasn’t an attack on Holy Scripture. Facebook confirmed to Newsweek that, yes, links to BibleHub.com were banned. The Facebook spokesperson commented, “The Web site BibleHub.com links were flagged as low-quality content in error, which can restrict sharing capabilities.” Facebook fixed the issue and apologized for the error.

Yeah, it’s weird that BibleHub was flagged by their algorithms. But Facebook fessed up to the mistake and solved the problem. Facebook has no policies against sharing religious content. We have religious freedom on Facebook.

Case Study #3: Facebook Banning “I Am A Christian” Ad

In 2015, Kevin Sorbo–a Christian actor now famous for playing the atheist caricature in God’s Not Dead–was going to star in a movie about the life of a real Sudanese woman named Meriam Ibrahim (I am only assuming that Kevin, a white male, was not going to play Meriam’s part). The movie, funded through an Indiegogo campaign, had the title “I Am A Christian.” Yes, they capitalized the article “a” for some reason.

To help raise money, the producers created the following ad:

Are you a Christian? We challenge you to change your profile picture to this ‘I Am A Christian’ photo for one week! Change your picture now, and challenge your friends to do the same. Stand up and declare: ‘Yes, I Am A Christian!!!’

Yet Facebook denied the ad for not following its advertising guidelines for language.

Persecution? Hardly. Turns out Facebook is particular about how ads are written. The producers got in contact with “Frank” from Facebook Ads, and he told them:

“Your ad wasn’t approved because it doesn’t follow our language policies. We’ve found that people dislike ads that directly address them or their personal characteristics such as religion. Ads should not single out individuals or degrade people. We don’t accept language like ‘Are you fat?’ ‘Wanna join me?’ and the like. Instead, text must present realistic and accurate information in a neutral or positive way and should not have any direct attribution to people.”

From Kayser, “Did Facebook Really Ban the ‘I Am A Christian’ Ad?”, Movieguide.

As the conservative entertainment review site Movieguide points out, the filmmakers behind the project could very likely rephrase the advertisement and Facebook would happily take their money. Ben Kayser, the managing editor, makes a strong case that this isn’t discrimination:

If Facebook truly wanted to censor Christians, why would it censor the very small Facebook page “I Am A Christian – Movie” that has 4,000 fans, when a different Facebook page titled “I am a Christian” has 1.9 million fans and is going strong? Other projects that are even more aggressive such as Pureflix’s GOD’S NOT DEAD with 6.7 million fans saw unprecedented success due to effective Facebook advertising. Could it be possible that Facebook is simply holding the producers of I AM A CHRISTIAN to the same advertising standards that they hold to everyone else? This is what must be explored first before accusations are thrown.

Are Christians just playing the victim card? True injustice against Christian men and women is happening all over the world. The filmmakers of I AM A CHRISTIAN know this because that’s exactly what their movie discusses.

Kayser, “Did Facebook Really Ban the ‘I Am A Christian’ Ad?”, Movieguide.

Kayser is asking pivotal questions. Why would these guys be censored when other much louder, more significant Christian voices are not? Cries of censorship don’t make a lot of sense here.

Also, here’s a fun fact: I Am A Christian never got made. The subject of the movie, Meriam Ibrahim, said she had already sold her life rights to another movie studio, meaning this movie was being made about her without permission. Her husband Daniel said the people behind I Am A Christian were trying to take advantage of them. So, yikes…

Why Targeted Social Media Censorship Makes No Sense

So far in this article, I have tried to make the point that other factors may play into alleged censorship. There are definitely problems in the system because of the pitfalls of both humans and AI, but that doesn’t mean deliberate and coordinated discrimination against Christians (and conservatives) is occurring. Some cases of supposed censorship weren’t what they appeared to be at first, but were based on legitimate reasons.

Now, I want to turn to a logic-based case against the idea that social media is censoring Christians. I want us to take a broad look at the goals of social media to explain why mass Christian censorship isn’t likely. So even if we find specific claims of censorship that don’t have an adequate explanation, we can see those as isolated incidents because a broad view reveals a different story.

Social Media is Designed to be a Place of Many Voices

One reason Christian censorship doesn’t make sense is that–whether you believe it or not–social media is designed as a place of free expression. On my Facebook feed, I see dumb memes from conservatives and from liberals. Everyone has an opinion, and they are all sharing it! To work, social media needs content. The platforms need people on the site. And controversial stuff and alternative opinions do a good job of stoking engagement.

I know this quote will elicit a strong reaction in those convinced that Facebook is out to get conservatives, but Mark Zuckerberg in a May 2020 interview on Fox News said, “I just believe strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online.” He said this in reference to a tweet from then-President Trump who threatened to “strongly regulate” social media. He goes on to make a solid point: “I think a government choosing to censor a platform because they’re worried about censorship doesn’t exactly strike me as the right reflex there.”

While Twitter’s a bit more strict, Facebook has long resisted shutting down fake news, misinformation, or certain kinds of harmful rhetoric. The big social media platforms found success because of freedom of speech.

Certainly, Facebook has changed a bit since that interview, as they now flag COVID-19 and election content with links to the best information available. But they still allow politicians to say whatever they want in campaign ads without fact-checking; Elizabeth Warren even tested this and got away with an ad that falsely claimed Zuckerberg endorsed Trump for reelection. Different companies take different stances on what misinformation “crosses the line,” but in general, they are concerned with misinformation (from their perspective) that leads to harm, violence, or the undermining of democratic processes.

Yes, there is a lot of bad information that floats around these sites, but if it doesn’t become a serious crisis, why would they stir the pot? I’ve never heard of a case of someone being censored for saying the earth is flat or evolution is a lie, even though those ideas go against the scientific establishment. But peddling COVID-19 conspiracies or cures is a lot more dangerous, so these platforms will often step in to limit that information.

Facebook, Twitter, YouTube, and many other social media platforms are global, with millions upon millions of users who all have different views and backgrounds. Silencing conservatives or Christians would mean silencing a large portion of users, which would be an odd move for companies that need all those voices for their platforms to work.

Ironically, alternative platforms like Parler or Gab, which originated as conservative platforms by design, go against the idea of social media as a free-thinking space. These platforms shield their users from alternative views–in this case liberal ones–further encouraging extremist opinions. I believe the large social media platforms don’t want to become one-sided conversations, not only because that’s bad for business, but also because such exclusiveness has caused lots of problems for these other platforms.

Censoring Christians Would be a Bad Business Decision

This point is even stronger than the previous one. Censoring Christians doesn’t make sense financially. Social media platforms require a) users and b) advertisers to run their business. Kicking off large groups of conservatives or Christians would hurt these businesses financially. And we all know that it’s really money, not ethics, that drives Silicon Valley tech tycoons.

Think about it: there are about 2.5 billion Christians in the world. Christianity is the largest religion on the planet–nearly a third of the globe identifies as Christian. Why would you boot all those potential customers?

Though the stats are a few years old, Vision Media Interactive reported that evangelical Christians are 300% more active on Facebook than the average user. In fact, in the group of 100k active evangelical Christians on Facebook that the study followed for 30 days, users clicked 23 ads on average, compared to the Facebook-wide average of 6. Thus, it’s safe to say Christians are a profitable group for Facebook.

Massive social media companies are not going to seek out particular people to destroy because the amount of cash in their wallets depends on all these users. In fact, as shown below, social media companies famously cave to various groups in an attempt to appear bipartisan.

Social Media Platforms Don’t Have Ideologies as Much as We Think

It’s always weird when someone declares something “liberal” or “conservative.” With publications, it’s a little easier to tell. I use AllSides.com to get a feel for where news outlets are on a political spectrum.

When it comes to a social media company, though, it’s difficult to take a whole organization and put it on the political spectrum. Businesses that don’t produce information as their product are difficult to label. For social media companies, you are essentially the product, and they curate your user-generated content.

What might look like “liberalism” does not necessarily assign a party to a company. Big and successful companies tend to be rather inclusive, a value that for some leads to accusations of being liberal. Just because Target allows transgender people to use the bathroom of their choice does not mean the entity of Target is necessarily liberal–it doesn’t mean that if Target had to register for a party, it would register as a Democrat. Instead, the policy is about inclusiveness rather than the left.

Technology, like entertainment and media, has often been more progressive and accepting than the culture at large. Successful companies have a finger on the pulse of what society wants and the direction it is headed. When #BlackLivesMatter trended last summer, practically every business changed its social media profile! Even though I support that movement, let’s be honest: a lot of those companies did it because everyone else was doing it. It was good for business. If they didn’t, they risked backlash. Just because a company is on the bandwagon of popular culture does not make it “liberal.”

Social media, likewise, is going to make choices that are good for business. That might look liberal, but don’t be fooled. Maybe these companies do actually care about the matter, but I’m admittedly skeptical when global, unfathomably wealthy juggernauts say they care for the little guy.

So many conservatives I know (people like Phil Vischer or Karen Swallow Prior) are getting called liberal because they think racism still exists, LGBTQ+ people should have rights, and there is more to being pro-life than wanting to overturn Roe v. Wade. Yet we have profoundly misunderstood what it means to be liberal if we throw the word around without care every time someone does something not in step with modern conservatism.

This is my response to people throwing out the word “liberal” willy-nilly.

It gets more complicated, though. The next two points demonstrate why, even if you could get Facebook into a political party, the “liberal” camp might not be its preferred choice.

Conservative Content Does Great on Social Media

What’s even crazier about calling social media left-leaning is that there is a lot of evidence showing how conservatives thrive in its environment. Facebook, specifically, has as its vice president of public policy Joel Kaplan, a known Republican and friend of Supreme Court Justice Brett Kavanaugh. Kaplan fiercely defended the conservative outlets Breitbart and the Daily Caller during a discussion about whether Facebook should prioritize well-respected news organizations (meaning, not Breitbart and the Daily Caller). Zuckerberg eventually sided with Kaplan, specifically to keep Facebook from appearing liberal. Kaplan also later argued that Facebook should not get rid of fake news pages (most of which were based overseas, had clear financial motives, and leaned right) because it would unfairly affect conservatives who believe all that is true.

The most popular content on Facebook also tells a different story. Numerous studies have shown that conservative content on this social media platform does quite well. Media Matters (which is left-leaning) reported in October 2020 that after a nine-month study, they found “Right-leaning pages earned more interactions than left-leaning and nonaligned pages.” Right-leaning pages got 6 billion interactions (likes, shares, comments) compared to 3.5 billion for left-leaning pages and 4.2 billion for nonaligned pages.

By the way, did you know that the most popular post on Facebook in the last 24 hours (as of 1/29/21) is a post by the conservative Christian Franklin Graham, as reported by the New York Times’ Kevin Roose, who runs the Twitter account “Facebook’s Top 10”? Many conservatives are frequently found in the top pages, like Donald Trump (before his ban), Ben Shapiro, and Fox News contributor Dan Bongino. Also, a 2016 Harvard study determined that right-leaning media sites were shared more than left-leaning sites on Facebook, while on Twitter it was slightly more evenly distributed, though still partisan.

There is plenty more evidence to suggest conservative content thrives on social media. In a 2019 study, Media Matters found that right-wing opinions on abortion dominated on Facebook–63% of the top links related to abortion news came from sites considered right-leaning. Also, a Cornell University study demonstrated that former President Trump was the single biggest driver of COVID-19 misinformation, more than any other source. POLITICO, in partnership with the Institute for Strategic Dialogue, analyzed two million social posts on Facebook, Instagram, Twitter, Reddit, and 4chan during the height of the Black Lives Matter protests last summer. They concluded that the loudest voices on the subject were conservatives like Ben Shapiro, James O’Keefe, and Charlie Kirk. Andy Ngo, of the ultra-conservative Canadian site The Post Millennial, also went viral by sharing alleged violence by Black Lives Matter protesters. POLITICO reports that Ngo’s “top five messages on Twitter, based on shares, likes and retweets, received 35 times more engagement than the most prominent mainstream media post on the topic.” Is social media silencing conservatives? Hardly.

Research has not uncovered a pronounced political or religious bias at social media companies. In an interview with USA Today, Steven Johnson, an information technology professor at the University of Virginia’s McIntire School of Commerce, said:

I know of no academic research that concludes there is a systemic bias – liberal or conservative – in either the content moderation policies or in the prioritization of content by algorithms by major social media platforms. … If anything, there is evidence that content from highly conservative news sites is favored by Facebook algorithms.

Guynn, “Censorship or conspiracy theory? Trump supporters say Facebook and Twitter censor them but conservatives still rule social media,” USA Today, Nov. 30, 2020.

Johnson’s research concludes that Facebook seems to favor partisan content because it gets more engagement (see also the comments of media studies professor Siva Vaidhyanathan of the University of Virginia). And as a Harvard study from 2016 revealed, the “center of gravity” of the media landscape tends to be center-left, but the strongest pole on the conservative side is the far right. In America, partisanship tends to come down to center-left vs. far-right, with the far right being more partisan and polarizing and so garnering more engagement online.

Social media is not censoring conservatives. Conservatives thrive on most social media platforms, especially Facebook. It isn’t accurate to call these places “liberal” when they do such a good job promoting conservative content.

Liberals Believe They Are Being Censored Too

There’s another fact to keep in mind in the conversation about social media censorship. Liberals believe they are being censored too! Pew Research reports that a majority of people, in general, believe social media censors political viewpoints–90% of conservatives believe censorship is taking place, whereas 59% of liberals say this is the case. As is expected, a majority of Republicans (69%) think major technology companies lean Democrat, but what’s fascinating to me is that only 25% of Democrats actually think that’s the case! If technology did favor liberals, I’d expect the liberals to notice more and appreciate it.

When we look at U.S. adults regardless of political leanings, 43% say tech companies favor liberals, 13% say they favor conservatives, and 39% say both leanings are favored equally. This of course isn’t data about what is actually happening, but the Pew study does illustrate how Americans perceive the tech world. There isn’t a majority consensus on what’s happening.

While conservatives have more of a reputation for claiming censorship of their views, liberals don’t see tech companies playing fairly either. Representative Tulsi Gabbard, a Democrat and former presidential candidate, sued Google for suspending her ad account and for allegedly putting her campaign emails in people’s spam folders on Gmail. There is also the case where Facebook redesigned its algorithm to minimize exposure of far-left publications like Mother Jones. Facebook has also declined to fact-check those who deny climate change, and it declined to take down a Trump campaign ad featuring an unsubstantiated claim about Joe Biden bribing Ukraine when the Biden campaign requested its removal. Many Black activists have long pointed out that Facebook’s anti-hate-speech policies make it difficult to talk about racism, such a recurring phenomenon that this community even has a term for when Facebook censors their anti-racist posts: “getting Zucked.”

While conservatives come down hard on social media for liberal bias, liberals consistently come down hard on social media for getting too cozy with the far right. Who’s correct? Well, everyone to some extent. Social media loves this divide. As mentioned, these big companies aren’t trying to be one-sided, as that would go against their commitment to diverse ideas, their business model, and the data we’ve actually seen.

Final Thoughts – What Now?

There is a lot more I could have said. I’m restricted by space, time, and the lack of any concrete evidence that Christians should fear censorship online. If I ever get a grant to do a big research study on Christian censorship on social media, I’ll let you know.

So to cap it all off, I’ve created a handy list of questions to ask when you think you or someone you know is being censored online for political or religious beliefs. Ask yourself:

  • Did I violate the community standards and/or rules of this platform? (Be sure to check the rules)
  • Was I spreading controversial information, making highly suspect claims, or presenting fake news as factual?
  • Could my words be perceived as hate speech or insulting to a group of people that is often discriminated against?
  • Could this be an accident from human moderators or an AI algorithm?
  • Could the intent of my content have been completely missed? Can I make it more clear what I’m saying?
  • Is there evidence to suggest that users coordinated an attack on my account, rather than social media specifically targeting me?

If you answered yes to any of these, I don’t think you were censored. But before you go to social media to oh-so-ironically decry social media censorship, try to get in touch with a human to present your case. Many platforms now allow this, though I have heard some stories of the platform never responding. In researching these cases, so often the block or ban was reverted once the person got in touch with a real human on the other side. Sometimes these are just accidents. Sometimes you didn’t realize what you were doing was against the rules. That’s all an easy fix.

But if it appears that social media giants took an interest in squelching your personal religious beliefs, despite the lack of evidence that this kind of thing regularly takes place, then it’s time to ask the most pivotal question of all:

How should I act if social media censors me?

Hopefully, you respond in a Christ-like manner. Hopefully, you remember that Christians are killed and arrested in many parts of the world and that having one post banned from Facebook is a pretty minor inconvenience. Hopefully, you learn to choose your words carefully so as not to confuse or sow hate. Hopefully, you are cultivating a life that makes people want to know Jesus better–instead of being turned off by self-righteousness, anger, and complaining.

The Bible doesn’t guarantee we will always have free speech. I still absolutely love free speech, but it’s not a guarantee. What is a guarantee is that Jesus is with us through trials and tribulations, teaching us to turn the other cheek and be witnesses to God’s new kingdom with our very lives.

Even if censorship were a reality, I wouldn’t fear it. Social media is a great place to provide a witness to your faith, but it is not the only place where I can be a witness for Jesus. Whatever happens, I’ll keep serving God with whatever platform I’m given.

More Resources

  • Here’s a good article about a legal answer to the overreach of social media organizations.
  • Here is a good Christian case, from 2018, about why Christians shouldn’t leave Facebook. The author does see some censorship of Christians happening or at least likely to happen, but still reminds us to be lights on social media despite this.
  • I generally agree the government shouldn’t regulate media, but I’m starting to think some kind of oversight is needed. See this article here.