According to the investigative journalism nonprofit ProPublica, Facebook is somehow able to view the content of user messages on its WhatsApp messenger. If this is true, Facebook is likely to face another scandal, as the company has repeatedly stated that it does not have access to users' messages.
Since WhatsApp uses end-to-end encryption, in which data is transmitted in encrypted form and decrypted only on user devices, Facebook should not have access to the content of messages. The source says WhatsApp employees view messages that users flag as inappropriate. It also notes that the company collects large amounts of metadata to detect prohibited content without having to view the messages themselves. Citing the messenger's staff, ProPublica reports that moderators are able to “check the messages, images and videos of users”.
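In concept, end-to-end encryption means the relay server only ever handles ciphertext and cannot read what it forwards. The toy sketch below illustrates that idea with a throwaway XOR cipher, purely for demonstration; WhatsApp's real encryption is based on the Signal protocol, not anything like this:

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy "cipher": XOR each byte with the repeating key.
    # Illustrative only -- not real cryptography.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Alice and Bob share a key that the server never sees.
shared_key = os.urandom(16)

plaintext = b"meet at noon"
ciphertext = xor_bytes(plaintext, shared_key)  # what the server relays

# The server only ever handles `ciphertext`; without `shared_key`
# it has no way to recover the plaintext.
assert ciphertext != plaintext

# Bob decrypts on his own device.
assert xor_bytes(ciphertext, shared_key) == plaintext
```

The point of the sketch is the trust boundary: decryption happens only on the endpoints, so any plaintext the company sees must come from an endpoint, not from the pipe.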
WhatsApp has a huge user base, which is why the platform is often used to spread misinformation and prohibited content. The company is making efforts to combat this: it uses special algorithms that identify suspicious messages based on metadata analysis, applies limits on the number of messages that can be forwarded, and so on. Yet according to ProPublica, service moderators still have access to the content of user messages.
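Metadata-based detection of this kind can operate without reading message content at all. A minimal sketch of one such heuristic, flagging senders who fan the same message out to many recipients; the record fields and the threshold here are assumptions for illustration, not WhatsApp's actual rules:

```python
from collections import defaultdict

# Hypothetical threshold: max distinct recipients per message
# before a sender is flagged for review.
FORWARD_LIMIT = 5

def find_suspicious_senders(events):
    """events: iterable of (sender, message_id, recipient) metadata
    records. Note that no message content is consulted."""
    fanout = defaultdict(set)
    for sender, message_id, recipient in events:
        fanout[(sender, message_id)].add(recipient)
    return {sender
            for (sender, _mid), recipients in fanout.items()
            if len(recipients) > FORWARD_LIMIT}

# Usage: one account blasts the same message to six recipients,
# another sends a single normal message.
events = [("bulk_sender", "m1", f"user{i}") for i in range(6)]
events.append(("alice", "m2", "bob"))
print(find_suspicious_senders(events))  # {'bulk_sender'}
```

This is the general shape of metadata analysis: who sent what to whom and how often is visible to the service even when the "what" stays encrypted.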
Researchers question the alleged privacy of WhatsApp
[An] assurance automatically appears on-screen before users send messages: “No one outside of this chat, not even WhatsApp, can read or listen to them.”
Those assurances are not true. WhatsApp has more than 1,000 contract workers filling floors of office buildings in Austin, Texas, Dublin and Singapore, where they examine millions of pieces of users’ content. Seated at computers in pods organized by work assignments, these hourly workers use special Facebook software to sift through streams of private messages, images and videos that have been reported by WhatsApp users as improper and then screened by the company’s artificial intelligence systems. These contractors pass judgment on whatever flashes on their screen — claims of everything from fraud or spam to child porn and potential terrorist plotting — typically in less than a minute […]
Many of the assertions by content moderators working for WhatsApp are echoed by a confidential whistleblower complaint filed last year with the U.S. Securities and Exchange Commission. The complaint, which ProPublica obtained, details WhatsApp’s extensive use of outside contractors, artificial intelligence systems and account information to examine user messages, images and videos. It alleges that the company’s claims of protecting users’ privacy are false. “We haven’t seen this complaint,” the company spokesperson said. The SEC has taken no public action on it; an agency spokesperson declined to comment.
According to the source, WhatsApp moderators operate in strict secrecy. Job listings for the “Content Moderation Officer” position make no mention of Facebook or WhatsApp, and new hires must sign a nondisclosure agreement. Because WhatsApp uses encryption, AI algorithms cannot scan all chats on their own; instead, moderators gain access to content only after a user reports a message as inappropriate. The allegedly offensive message is sent to the moderator along with the four preceding messages, including any images and videos. All of this is transmitted unencrypted and placed in a queue that the moderators work through.
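The reporting flow described above can be sketched in a few lines. The structure and field names are illustrative assumptions based on ProPublica's description, not WhatsApp's actual implementation:

```python
def build_report_bundle(chat_history, flagged_index, context=4):
    """Model of the report flow: when a user reports a message, the
    client sends the flagged message plus up to `context` preceding
    messages -- in plaintext -- into the moderation queue.
    (Field names are hypothetical.)"""
    start = max(0, flagged_index - context)
    return {
        "flagged": chat_history[flagged_index],
        "context": chat_history[start:flagged_index],  # up to 4 prior messages
    }

# Usage: reporting the sixth message in a chat bundles it with
# the four messages that came before it.
chat = ["m0", "m1", "m2", "m3", "m4", "m5"]
bundle = build_report_bundle(chat, flagged_index=5)
print(bundle["flagged"])   # m5
print(bundle["context"])   # ['m1', 'm2', 'm3', 'm4']
```

The key detail is that the decryption happens on the reporting user's device, so forwarding the bundle does not require breaking the encryption on the pipe, only the endpoint's cooperation.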
Facebook's statement on the matter gave no direct answer regarding end-to-end encryption. “We design WhatsApp to limit the amount of data we collect while providing tools to prevent spam, investigate threats and abuse, including based on the user reports we receive. This work requires a tremendous effort on the part of security experts,” said a Facebook spokesman, who also noted that the service has recently added features to further protect the privacy of user data.