Hate speech is communication that denigrates a person or a group on the basis of race, color, ethnicity, gender, disability, sexual orientation, nationality, religion, or another characteristic. It can take the form of speech, gesture, conduct, writing, or display, and it typically incites violence or prejudice against an individual or a group. The Recommendation of the Committee of Ministers of the Council of Europe, issued in 1997, contains the internationally accepted definition of the term: "the term 'hate speech' shall be understood as covering all forms of expression which spread, incite, promote or justify racial hatred, xenophobia, anti-Semitism or other forms of hatred based on intolerance." Such speech generates stigmas, stereotypes, prejudices and discriminatory practices against those who are constructed as being different.
International law and national legal frameworks both prohibit such speech. The International Covenant on Civil and Political Rights (ICCPR) states that any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law. Article 4 of the United Nations Convention on the Elimination of All Forms of Racial Discrimination (ICERD) likewise requires states to declare an offence punishable by law "all dissemination of ideas based on racial superiority or hatred, incitement to racial discrimination, as well as all acts of violence or incitement to such acts against any race or group of persons of another colour or ethnic origin."
Internet content is most often regulated by the laws of individual countries. Some countries have imposed codes of conduct on Internet service providers (ISPs), and some providers have willingly created their own. Traditionally, content on the Internet was supplied by the person or group that created a website, and user contribution was limited. Social media is more dynamic: users provide most of the content, and they therefore share responsibility for it with the service provider or the host of the website.
Most countries have sufficient laws to prohibit hate speech on the Internet and social media, but governments rarely intervene except in matters that concern them politically. Hate speech, especially racism on social media, is an often ignored phenomenon. The problem, then, is not the law but its implementation.
Social networking sites such as Facebook and Twitter are open, public forums where users can shoot and upload videos and freely express their views on politics, race, religion and sexuality. They can also create pages and groups to rally for or against a cause. Hate speech is a prominent feature of many of these forums: anyone can create a group that stirs up hatred against certain religions, sexual orientations, disabilities or racial and ethnic groups. The standards set by these sites, and their responses to hate speech, make for some interesting observations. Content regulation standards are often vague and inconsistent, and moderation is usually outsourced to low-paid employees of third-party companies. Facebook and Twitter take different approaches to content regulation. Twitter, self-described as "the free speech wing of the free speech party," has largely resisted restrictions on content by either governments or citizen groups. In 2010, the Government of Pakistan asked both Facebook and Twitter to take down references to a page that urged users to mark a day by drawing caricatures of Prophet Muhammad. The micro-blogging site Twitter refused to entertain the request, which resulted in its ban in the country. Facebook also initially refused but complied after the government banned it as well. Facebook has likewise taken down anti-Semitic content. Nevertheless, decisions to remove content are usually made only after intervention at the highest level, or when the companies themselves have a lot at stake. Users still contribute to hate speech every day, and even take it for granted, while companies pay little attention to it.
Content bearing the marks of hate speech will always exist, and social media has made it more interactive, which increases the chances of direct conflict. There is a need to review hate speech legislation. It should be a living, dynamic document that leaves room for refinement and modification over time. The law should also include definitions specific to social media, such as provisions regulating abusive and threatening language, or language used to stir up hatred against a particular group of people. Nevertheless, this approach will have its limitations because of the problems with implementation. The law may not always be a panacea for hate, nor do I advocate government censorship. In fact, it is very hard to create a legal prohibition or prescription against the free flow of information on social media.
There is a need to deal with hate speech on social media in other, more creative ways. The best antidote to hate speech is more speech. Public awareness of hate speech on social media can do much to sensitise users, Internet companies and governments. We can be more vigilant by reporting news portals, readers' comments, groups and messages that spread hate speech. There is also a need to popularise and circulate reports and materials on hate speech on the Internet. Social media has moved our lives from the private into a more public sphere. Its influence on our lives grows day by day, and it has changed the ways we communicate and interact with each other in society. This new communication paradigm must be embedded with tolerance and acceptance.
The author is currently pursuing a master's degree in International Law and Justice at the Fordham University School of Law. He researches and writes about legal issues relating to human rights and freedom of expression.
The views expressed by this blogger and in the following reader comments do not necessarily reflect the views and policies of the Dawn Media Group.