Facebook’s content moderation is not as transparent as it seems


Last week, Facebook’s CEO, Mark Zuckerberg, proposed that big tech companies should report on the posts they delete and the effects of those deletions, a form of critical oversight of all the content on their platforms. However, experts believe this will not work the way Facebook expects and will not be truly transparent. Although Facebook already has such a system in place, it is not very effective.

Last Thursday, Zuckerberg told Congress: “Transparency will help make these big technology companies accountable for the accuracy and effectiveness of deleted posts”. He further claimed that if such a transparency system became the industry norm, Facebook would not need to change much. He also said, “As a model, Facebook leads by example in transparency every quarter”.

Zuckerberg has issued similar initiatives many times, calling on social media companies to take more responsibility for the content their users post. With the growing volume of harmful posts (such as hate speech and violent threats) on online platforms, big technology companies have come under heavy criticism. U.S. Congressmen have discussed reforming Section 230 of the Communications Decency Act, which shields internet platforms from liability for user-generated content.

There are increasing calls for sanctions against major technology companies, on the grounds that social media played a major role in spreading false information. Controlling content on social media is not easy: platforms have had to cope with a flood of false information, especially regarding the coronavirus, the U.S. elections, and other significant topics.


Facebook heavily relies on AI, which is not entirely trustworthy

Regarding the reform of Section 230, Jenny Lee, a partner at the law firm Arent Fox who represents the interests of large technology companies, said: “Big technology companies at least leave room for relevant discussions”.

However, Facebook’s self-censorship reports are not as transparent as the company claims. For example, its February report claims that more than 97% of the violating content it removed never reached the public. It also claims it acted on 49% of bullying and harassment content on the platform in Q4 2020, up from 26% in Q3 2020.

However, the problem lies in the denominator of the equation: Facebook calculates these percentages against the volume of content its artificial intelligence systems flagged, not against the total amount of harmful content on the platform.
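A minimal sketch makes the distinction clear. All the numbers below are invented for illustration; Facebook does not publish the true total of harmful content, which is precisely the critics’ point.

# Illustrative sketch with hypothetical numbers: why the denominator matters.
# A "proactive rate" counts only content the AI systems caught,
# not the total amount of harmful content on the platform.

ai_flagged_and_removed = 970      # hypothetical: posts the AI caught and removed
user_reported_and_removed = 30    # hypothetical: posts users reported first
total_harmful_on_platform = 5000  # hypothetical: true total, absent from the metric

removed = ai_flagged_and_removed + user_reported_and_removed

# The headline metric: share of *removed* content that the AI found first.
proactive_rate = ai_flagged_and_removed / removed      # 0.97 -> "97%"

# The figure the report does not give: share of *all* harmful content removed.
actual_coverage = removed / total_harmful_on_platform  # 0.20 -> "20%"

print(f"Proactive rate: {proactive_rate:.0%}")    # looks impressive
print(f"Actual coverage: {actual_coverage:.0%}")  # far less impressive

With these invented figures, the platform could truthfully report a 97% rate while having removed only a fifth of the harmful content actually circulating.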

In addition, Facebook did not release any information on how many people viewed these harmful posts before they were deleted.

One expert called Facebook’s report shocking and frustrating. The social media company does not disclose how quickly harmful content was deleted. Was it removed within minutes, or within days? The report does not say.

The report also focuses on artificial intelligence, which means Facebook does not disclose how much content was flagged as violating by humans, nor what proportion of that human-flagged content was reviewed and deleted.

Some experts note that Facebook relies heavily on its AI system to handle this task, yet the AI has serious flaws in actual use. As one put it, “these automated audit systems have an increasing risk of misdetection…”
