You Don’t Write Community Guidelines and Abuse Policies for ISIS
It’s not every day that Fox News has a discussion about online community guidelines. But that is essentially what happened on Saturday, during their morning talk show, Fox & Friends. Hosts Anna Kooiman, Clayton Morris and Tucker Carlson, joined by Fox News contributor Katherine Timpf, talked about the updated abuse policies that Twitter announced on Wednesday.
Unfortunately, they did not see fit to have a guest who had experience writing and enforcing these types of policies. While all members of the panel can speak with conviction on the realities of using Twitter as a public figure, none have meaningful experience managing social platforms and dealing with these challenges and the resulting fallout.
I thought it would be fun to use this segment as a means of talking about these issues. Let’s pretend they invited me on as a guest for this discussion. Here’s what I would have said.
This Isn’t About ISIS
The discussion is predicated on a falsehood: that the updated rules Twitter announced on Wednesday are the result of “pressure to block ISIS.” But the change is really part of Twitter’s continued efforts to address abusive behavior and harassment on the platform.
If you compare the old rules to the new ones, you will find an addition that mentions terrorism (“including threatening or promoting terrorism”). But the vast majority of the changes relate to the creation of a new “Abusive Behavior” subhead that includes a mix of previous policy and new rules.
Whatever one might make of ISIS’ Twitter usage, virtually none of that discussion is about them using Twitter to harass people. Instead, they are using the service to spread their message and recruit. That’s the concern. If you read Twitter’s blog post about the rules adjustments, it is clear that ISIS isn’t the motivation behind them.
Your Abuse Policies Shouldn’t Mention ISIS… and Should Apply to Everyone
Morris notes that “critics argue there is no mention of the terror organization, and the wording’s so vague that it could really apply to anyone.”
Well, yes. Both of those things are correct. But that’s how it should be. If you write a policy for a community or a social media platform, you aren’t writing a policy that only applies to evil people or people you don’t like. You’re writing a policy that will apply to all.
When Twitter writes a policy targeted at abusive behavior, it isn’t written to apply strictly to terrorists. It’s written to apply to me, you and anyone else who uses Twitter. If I suddenly change tomorrow and start threatening and harassing people on Twitter, they might take action against me. It doesn’t matter that I am not ISIS or that I am an American. The point of policies like this is, in fact, for them to literally “apply to anyone.”
Can the Left Use These Abuse Policies to Silence Conservatives?
Morris goes on to ask, “could the new rules be used to kick conservatives off the platform if they are reported by the left?”
Yes, they could be. But the inverse is also true: liberals could be kicked off of Twitter if they are reported by the right. New York Yankees fans could be kicked off of Twitter because of reports from Boston Red Sox fans. This is a perfect example of correlation not implying causation: the fact that a report came from a political opponent doesn’t mean the politics are what got the account suspended.
It’s a classic conversation that plays out when someone is banned from an online community. They don’t believe they were banned because they did something wrong. Instead, they were banned because someone disagreed with them or because someone is abusing their power. I’ve had people tell me I banned them for “helping people” and because I used a different web browser than they did.
What happens is that people fail to separate the person from the action. The account holder wasn’t suspended because they were a conservative or a Yankees fan. They were suspended because they did X, which was against policy. For the most part, anyway. Mistakes are certainly possible, as with any human action, and there are bad people within any profession.
Reports Don’t Mean Anything Happens
Timpf remarks that the policies are “so vague that you can really get anyone in trouble that you wanted to.” She goes on to say that “this year, the word ‘skinny’ was declared violent, ‘get over it’ is violent, the word ‘freshman’ promotes rape culture; you can pigeonhole anything into falling under this policy.”
This is a common misconception that people have about abuse reports: that just because someone reports something, action is taken. The police receive many calls from people who believe a crime has been committed when no crime has occurred. As such, no arrests are made. This is no different.
When you are sorting through abuse reports, all that matters is whether or not a violation has occurred. If a liberal reports a violation made by a conservative, it’s still a violation. Why does it matter what political leaning they have? Separate the labels and the person from the action. A violation is a violation, no matter who did it or who reports it. Many of the reports that my staff receive generate no action at all. It is completely wrong to assume that reports equal action. They do not.
Twitter’s Anti-Conservative Bias
In this segment, there is a clear effort to imply that a bias exists. This is only true if you believe that there is an effort, within the trust and safety team at Twitter, to apply rules in an unfair way, where liberals receive preferential treatment over conservatives. As far as I can see, there is no credible evidence to support this suggestion. The only possible evidence would be highly anecdotal, along the lines of, “look at this suspended Twitter account that belonged to a conservative.”
Which is exactly what occurred during the segment. Carlson referred to 4 specific examples of conservatives being suspended from Twitter. But if you were to look up these examples, you would learn that they happened in August 2011, April 2012 and March 2015. That’s 4 suspended accounts over a period of 3 years and 7 months.
But consider that Twitter has 320 million monthly active users. 4 accounts in 3 and a half years do not make a trend. 100 accounts wouldn’t, either. Would 1,000? Maybe. The sample is simply too small to be meaningful. It is plain and obvious that there are countless people (far more than 4 or 100 or 1,000) on Twitter sharing their conservative viewpoints. The fact that they exist on the platform at all is a more meaningful piece of evidence than anything I have read on this topic.
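To put those numbers in perspective, here’s a quick back-of-the-envelope calculation using only the figures above (320 million monthly active users, 4 cited accounts, roughly 43 months between August 2011 and March 2015). It’s purely illustrative, not a rigorous analysis.

```python
# Back-of-the-envelope math using the figures cited above; purely illustrative.
monthly_active_users = 320_000_000  # Twitter's reported monthly active users
cited_suspensions = 4               # conservative accounts cited in the segment
months_spanned = 43                 # August 2011 through March 2015

share_of_user_base = cited_suspensions / monthly_active_users
suspensions_per_month = cited_suspensions / months_spanned

print(f"Share of the user base: {share_of_user_base:.8%}")          # 0.00000125%
print(f"Cited suspensions per month: {suspensions_per_month:.2f}")  # about 0.09
```

That works out to roughly one cited account for every 80 million users, or about one every eleven months. Whatever those suspensions were, they are not a statistical trend.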
The Practicality of a Conspiracy
If that doesn’t help, think of this from a practical perspective. Try to imagine the number of reports Twitter receives for violations of its harassment and abuse policy (let alone the other Twitter rules). The number is likely so large as to be unfathomable for anyone who doesn’t work in this space. Let’s just say that it’s like receiving one email every second and having to quickly decide if you should respond to, delete or sort that email.
When you are reviewing those reports (and it’s a large team of people, not a single person who might happen to be a liberal), you are applying specific standards and making quick decisions. Does it do X, Y or Z? If yes, take action. If not, don’t. Most reports are fairly straightforward and handled in that fashion – if they even reach a human. A popular service will often apply a filter or algorithm to reported posts to knock out the simple issues that a machine can deal with, leaving the more nuanced problems to a person.
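To make that concrete, here is a deliberately simplified sketch of what such a triage step might look like. This is not Twitter’s actual system; the report fields, patterns and decisions below are hypothetical. The only point is that automated checks clear the obvious cases while ambiguous ones go to a person.

```python
# A deliberately simplified, hypothetical sketch of automated report triage.
# Not Twitter's implementation; it only illustrates clearing easy cases by
# machine and escalating the nuanced ones to a human reviewer.
from dataclasses import dataclass
from typing import Optional

# Placeholder patterns standing in for whatever signals a real system would use.
KNOWN_VIOLATION_PATTERNS = ["example direct threat phrase", "example doxxing pattern"]

@dataclass
class Report:
    reported_text: str
    reporter_id: str
    reported_user_id: str

def automated_screen(report: Report) -> Optional[str]:
    """Decide clear-cut cases ('action' or 'dismiss'); return None to escalate."""
    text = report.reported_text.lower().strip()
    if report.reporter_id == report.reported_user_id:
        return "dismiss"  # self-report, nothing to review
    if not text:
        return "dismiss"  # no content to evaluate
    if any(pattern in text for pattern in KNOWN_VIOLATION_PATTERNS):
        return "action"   # unambiguous violation of the written rules
    return None           # ambiguous: send it to a human reviewer

def triage(reports: list[Report]) -> list[Report]:
    """Return only the reports a person still needs to look at."""
    escalated = []
    for report in reports:
        if automated_screen(report) is None:
            escalated.append(report)
        # "action"/"dismiss" outcomes would be logged and applied elsewhere.
    return escalated
```

Notice that nothing in that screen knows or cares whether the reporter or the reported account leans left or right; the only input is whether the reported content matches the written rules.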
People who work in trust and safety, especially at large platforms, deserve some sympathy and understanding. It’s a tough, often thankless job: the loudest noise comes when you make a misstep, and the important work you do the rest of the time rarely earns any praise.
When a misstep happens, it’s more likely to be a simple mistake than anything malicious. Think about how you would like to be treated if you made a mistake at your job. Do you make mistakes because you have an agenda, or because you simply erred?
Factual Errors
After Timpf mentions that the language of the policy is vague, Morris promises to “read it, so our viewers know,” but he doesn’t actually read the policy. He reads a quote from the blog post announcing the change. The abusive behavior policy is a separate document.
When asked if she has witnessed a trend in Twitter suspending conservatives, Timpf makes a reference to the government. “Of course, we know the government would never target conservative groups. Like the IRS, for example, would never do anything like that. No, of course, it can be used that way.” None of the hosts had mentioned the government and, even though Timpf’s answer was sarcastic, it may lead viewers to believe that the government is somehow calling the shots here.
Twitter is free to decide what is and is not appropriate on their service, within the confines of the law. That’s not to say that government officials couldn’t take some sort of action if they felt a law was being broken, but in this case, it’s about Twitter responding to public criticism about abuse on their platform.