Key Takeaways from How CNN and The New York Times Moderate Comments
Last week, I published an interview with Bassey Etim and David Williams. Respectively, they lead the teams that moderate comments for The New York Times (NYT) and CNN. They said a lot of great things, and I really enjoyed reading it.
I didn’t want to dilute their words by making it a 2-parter, but the resulting article was so long (more than 4,000 words) that I decided to hold off on sharing my favorite takeaways. Here they are.
Good Moderation Takes People
No matter what technology you use, no matter the filtering, sentiment analysis or machine learning you have in place, good moderation takes real people. Without actual people looking at comments, you are doomed to mediocrity (or worse). You can’t put your comments on autopilot and expect anything of value. You can’t pass the buck to a plugin or an algorithm and think it will take care of most or even many issues. I love technological advances in our space, but they help us do a better job and focus on more important tasks. They do not replace us.
Remove or Approve; Don’t Edit
Neither outlet edits comments. It’s just not a good idea. It opens up a can of worms when people feel that you edited their words in a way that changed their message. It creates sloppy documentation and adds yet another task to your moderators’ jobs. Moderation takes people, and making those people edit comments is not normally a good use of their time. If a comment is OK, it is approved. If it isn’t, it’s removed.
User Reports Are “Very Important”
Post reports (or flags) sometimes get a bad rap. I’ve heard community professionals say that their members file a lot of reports for comments that don’t require any action. But it was clear that for CNN and NYT, these reports are important. Even at NYT, which is almost completely pre-moderated, with moderators having to approve everything, bad comments still slip through. This is especially true on posts with 1,000+ comments, where Bassey said “those flags are indispensable.”
Pre-Moderation and Post-Moderation Both Offer Value
I loved reading Bassey and David discussing the relative merits of pre- and post-moderation. NYT embraces pre-, while CNN is almost all post-moderation. You’ll hear a lot of advocates for post-moderation in community circles; not as many for pre-moderation. But it was clear from their answers that there is no single right answer; both approaches work for them.
On the CNN side, they want to encourage conversation between readers. Post-moderation enables that by being more immediate. Meanwhile, NYT wants only comments they deem of “high quality.” They want their comments to live up to the quality of their articles. That isn’t to say, of course, that the content on CNN is of a lesser quality than NYT’s, but each outlet has its own standard for how comments should measure up, its own goals and its own way of accomplishing them.
Both Bassey and David did a great job of explaining the cost of each approach, as well as the payoff.
“Anonymous Commenting Isn’t the Problem”
“Anonymous commenting isn’t the problem. The problem is when commenters feel anonymous,” David said. While anonymity can lead to a host of issues, the truth is, the alternative often introduces more problems. Bassey illustrated this.
“In the past, we did see real identity as the key to ensuring a more civil comments space,” he explained. “It makes perfect sense in theory – after all, who would say such awful, hateful things in public with their names and job titles attached? Turns out the answer is: An enormous amount of people would say awful and hateful things with their names attached.”
The result was that the people who were comfortable being awful kept doing it, and they even used other members’ photos, names and other personal info against them. This led to the “socially functional human beings” posting fewer comments. And that’s never good.