
Facebook is trying to limit the spread of misinformation



On Wednesday, the company announced more than a dozen updates on how it handles misinformation and other problematic content on Facebook, Instagram and Messenger. To promote the various initiatives, the company held a four-hour event at its Menlo Park headquarters for about 20 journalists, where employees from various Facebook products walked through the changes and answered questions.

For years, Facebook has struggled with controversial content spreading on its platform, including misinformation about elections, anti-vaccination stories, violence and hate speech.

Facebook has tried to remove content that violates its rules more quickly, and to "reduce" the spread of content that does not explicitly violate its policies but is still problematic, such as clickbait and misinformation.

"We do not remove information from Facebook just because it is false. We believe we need a balance," stated Facebook's VP of Integrity Guy Rosen at the event. "When it comes to false information from real people, we aim to reduce distribution and create context."

For example, Facebook said it will reduce the reach of groups that repeatedly share misinformation. When users in a group frequently share content that Facebook's third-party fact-checkers have rated false, the group's content will be pushed lower in News Feed so fewer people see it.

There will also be a new "click-gap" signal that will affect a link's ranking in News Feed. With this signal, Facebook hopes to reduce the spread of sites that are disproportionately popular on Facebook compared with the rest of the web.
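Facebook did not explain how the signal is computed, but the core idea, comparing a site's popularity on Facebook with its popularity on the wider web, can be sketched in a few lines. The Python below is purely illustrative: the function name, its inputs and the threshold are assumptions, not Facebook's actual implementation.

    # Hypothetical sketch of a "click-gap" style signal. Facebook has not
    # published a formula; the inputs and threshold here are made up.
    def click_gap_penalty(fb_click_share: float, web_link_share: float,
                          threshold: float = 5.0) -> float:
        """Return a demotion multiplier in (0, 1] for a link's ranking score.

        fb_click_share: fraction of sampled Facebook clicks going to the domain.
        web_link_share: fraction of sampled inbound links on the wider web
            pointing to the same domain.
        """
        web_link_share = max(web_link_share, 1e-9)  # avoid division by zero
        gap = fb_click_share / web_link_share
        if gap <= threshold:
            return 1.0  # Facebook popularity roughly matches the wider web
        return threshold / gap  # demote in proportion to the excess gap

    # Example: 2% of Facebook clicks but only 0.1% of web inlinks gives a gap
    # of 20, so the link's ranking score would be multiplied by 0.25.
    multiplier = click_gap_penalty(0.02, 0.001)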

Facebook is also working with outside experts to identify new ways to combat false news on the platform, and The Associated Press is expanding the work it does for Facebook's independent fact-checking program.

The company has often described its fight against problematic content as "adversarial": in its framing, it is up against an enemy that learns and changes tactics. The bundle of changes announced on Wednesday is its latest weapon.

Facebook's policy prohibits content that it determines could result in "imminent physical violence." On Wednesday, employees defended the decision not to ban misinformation or anti-vaccination content outright from its products.

"When it comes to thinking about harm, it's really hard … to draw a line between content and something that happens to people offline," says Tessa Lyons, Facebook's Head of News Feed integrity.

She said that some of the posts that appeared to be anti-vaccination involved people asking questions, seeking information, and having discussions on the subject.


"There are tensions between enabling expression and discourse and conversation and ensuring people see authentic and accurate information. We do not believe that a private company should make decisions about what information can or cannot be shared online, "she said.

Renee Murphy, chief analyst at the research firm Forrester covering security and risk, said that while Facebook's steps are positive, they do not go nearly far enough to solve some of its major problems.

"Part of me says" awesome [this content] won't go as far as it used to, "she said." The other part says & # 39; I have no confidence in any of this. & # 39; At the end of the day, what will some of this do? How will they do it? "

Facebook is also trying to be more transparent with users about how and why it makes decisions. As part of that effort, the company is adding a new section to its Community Standards website where users can see the updates Facebook makes to its policies each month.

Another update allows users to remove comments and other content they posted to a Facebook group after they leave it.


Meanwhile, Facebook-owned Instagram is trying to squash the spread of inappropriate posts that do not break its policies. For example, a sexually suggestive image will still appear in the feeds of users who follow the account that posted it, but it may no longer be recommended on the Explore page or hashtag pages.

Facebook also announced a few updates to its chat service Messenger, including a verified badge that will appear in chats to help fight scammers impersonating public figures.

Another tool, a forwarding indicator, will appear in Messenger when a message has been forwarded by the sender. WhatsApp, another Facebook-owned app, has a similar feature as part of its attempt to stop the spread of misinformation. WhatsApp has had major problems with viral hoax messages spreading on the platform, which have resulted in more than a dozen lynchings in India.

Forrester's Murphy believes the company needs to do more to solve bigger problems such as violence being livestreamed and going viral on the platform. Last month, a suspected terrorist was able to livestream video on Facebook of a mass shooting in New Zealand. The company said its AI systems failed to catch the video, and that it removed 1.5 million videos of the attack in the first 24 hours.

"They have bigger problems. I'm sure [these updates] will help sometimes, but there are bigger problems on foot," she said. "Facebook has much more to do."

