Robust Exchange or Offensive Libel? How to Handle Online Comments
By Emma Goodman
How liable are news organizations for the comments left by readers on their sites? How much control should they have? A recent European Court of Human Rights decision has potential ramifications for the way that comments are published and managed across Europe.
On October 10, the court upheld a national ruling in Estonia that found a news website liable for offensive comments posted by its users. The ruling referred to a 2006 case in which “highly offensive or threatening posts” were published below an article about a ferry operator on the news portal Delfi, according to the ECHR judgment.
The owner of the company sued the portal, and an Estonian court found Delfi responsible for the anonymous defamatory comments. Delfi appealed to the ECHR on the basis that this breached its right to freedom of expression.
Delfi and its sister publications allow users to post anonymously or with their name, but most choose to be anonymous. An algorithm looks for blacklisted keywords, but after that the portal relies on its audience to point out offensive comments: readers can give a “thumbs up” or “thumbs down” to comments as well as report them directly.
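The two-stage workflow described above, an automatic keyword blacklist followed by reader flagging, can be sketched roughly as follows. The keywords, thresholds, and function names here are illustrative assumptions for the sake of the sketch, not Delfi's actual implementation:

```python
# Hypothetical sketch of blacklist filtering plus reader flagging.
# Keywords and thresholds are illustrative assumptions only.

BLACKLIST = {"threat", "slur"}   # placeholder blacklisted keywords
FLAG_THRESHOLD = 3               # assumed number of reports before human review

def passes_blacklist(comment: str) -> bool:
    """Reject a comment outright if it contains a blacklisted keyword."""
    words = comment.lower().split()
    return not any(word.strip(".,!?") in BLACKLIST for word in words)

def needs_review(reports: int) -> bool:
    """Queue an already-published comment for moderators once enough readers report it."""
    return reports >= FLAG_THRESHOLD

# A comment is published only if it clears the blacklist; after publication,
# it is surfaced to moderators when reader reports cross the threshold.
print(passes_blacklist("This ferry service is slow"))  # True
print(needs_review(reports=3))                          # True
```

The point of such a design is that editorial staff only read the small fraction of comments that either trip the filter or accumulate reader complaints, which is why post-moderation is cheaper, and slower to catch abuse, than checking everything before publication.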
This approach is far from unusual. For the World Editors Forum study Online Comment Moderation: Emerging Best Practices, we spoke with 96 publications that accept comments. Of those, 42 moderate post-publication, and 16 have adopted a mixed pre- and post-publication approach. Many of the 42 rely solely on their readers to flag comments, often because this is seen as the best way to avoid legal liability for comments, because it allows a faster flow of conversation among readers, and because it requires fewer resources.
The Delfi judgment, however, implies an obligation for news sites to take a more active role in comment moderation. “Given the nature of the article, the company should have expected offensive posts, and exercised an extra degree of caution,” the ruling states, specifying that the comments were not removed “in good time.”
Several editors we spoke with said that their teams keep a more careful eye on articles considered likely to attract controversy, but that this is not always easy to predict. The only way to truly prevent offensive comments is to check everything pre-publication.
When to moderate is one of the first and most important decisions that a publication has to take. If all news organizations were effectively obliged to pre-moderate the content on their sites through fear of being held liable for users’ comments, conversations would be slower and less dynamic, and some news organizations would likely stop allowing comments altogether for lack of resources to read every one.
Another implication of the Delfi ruling is that the portal bore greater responsibility because the offending comments were anonymous. Our report found mixed opinions on the merits of allowing anonymity versus enforcing real name registration.
Out of the 91 sites that provided us with information about their registration systems, 18 didn't require any registration at all, thus allowing true anonymity, while 20 required real names and made a concerted effort to enforce this. The majority of news organizations (53) required registration with at least an email address, but either actively allowed pseudonyms or made little effort to insist on real names.
Several outlets that have made the switch from free commenting to enforced real name registration, either through an official ID number or through social media, have lost vast numbers of commenters and comments. For example, after the Czech Republic’s iDNES began enforcing a real name policy, registered commenters decreased from 200,000 to 48,000, and comments per day fell from 3,000 to 1,000.
Most editors we spoke with saw this as a positive development because moderating on a smaller scale is easier, and many believe that requiring real names leads to more constructive, civil conversations, while anonymity offers the temptation to indulge in consequence-free behavior and to defy social norms.
However, other organizations believe that allowing anonymous commenting is extremely important, as it enables people to offer opinions they might never voice under their real names. Many Burmese commenters on The Irrawaddy’s articles don’t want to reveal their identities for political or security reasons, and Gawker in the United States uses one-time-only login keys to allow true anonymity, with the goal of guaranteeing freedom of expression.
Decisions on when to moderate and whether to enforce real names can be seen as choices to prioritize either “civil” or “robust” discussion. The ideal, of course, is to find a way to combine both, and this is what many publications are trying to do.
Many of the editors and managers we spoke with agreed on the positive role that staff participation in a comment thread can play in making readers feel heard. According to editors at Der Standard in Austria, The Guardian in the UK, BuzzFeed in the US, and Bergens Tidende in Norway, it dramatically improves the tone of the conversation that follows, leading to more productive behavior and better comments.
Other suggested ways to increase the quality of dialogue while not limiting expression include highlighting particularly interesting and thoughtful comments, and giving readers feedback if their comments have to be deleted, to encourage them to come back and try again.
The benefits of allowing comments are clear, in terms of furthering freedom of expression for the public and allowing news sites to learn from and engage with their readers. News organizations should not be encouraged to stifle this conversation.
The report "Online Comment Moderation: Emerging Best Practices" was supported by the Open Society Media Program.
Emma Goodman is author of the report "Online Comment Moderation: Emerging Best Practices."