According to reports, major US technology companies today called on the European Union to draw up new rules protecting them from being deemed "informed" of hate speech and other illegal or harmful content when they actively delete it, so that they do not incur the corresponding legal liability. Simply put, these American tech giants do not want to be held responsible for bad content.
The European Digital Media Association (EDiMA), which represents technology companies such as Facebook, Google, and TikTok, said on Monday that such protections would give platforms an incentive to delete bad content while safeguarding freedom of speech, thereby delivering "high-quality content review."
The association said it will submit the document to officials of the European Commission, the European Parliament, and the Council. If you were expecting a list of the companies behind the request, you are not in luck: the report did not specify which American tech giants are making it. However, your guess is as good as mine.
The European Commission is currently drafting digital policies and regulations that aim to make platforms more responsible for the content users post on their websites. The goal is to curb the spread of harmful content and illegal or unsafe products. There is no better time for these tech companies to make this request.
In recent years, platforms such as Facebook and YouTube have come under close scrutiny for failing to adequately police activities such as hate speech.
It’s difficult for tech giants to moderate content without facing legal cases
EDiMA Director-General Hilda El Ramly said: “All our members take their responsibilities very seriously and want to do more to tackle illegal content and activity on the Internet. EU regulation of these service providers would give them more leeway to do so.”
In addition, El Ramly believes that varying case law and a lack of clarity in some interpretations prevent platforms from tackling bad content more actively, leaving them under constant legal risk.
Technology companies can use algorithms or other systems to detect violations and delete content. However, they believe that doing so would lead many to regard them as “informed” of such content, and the companies could then still be held liable for it.
According to El Ramly, the new regulations would not completely exonerate the tech giants: if a platform received a confirmed notice of a specific violation, it would still be responsible for its “inaction.”