Facebook Finds New Disinformation Campaigns and Braces for 2020 Torrent
SAN FRANCISCO — Mark Zuckerberg spent 35 minutes and more than 5,000 words last Thursday on a speech extolling the virtues of unfettered expression and how everyone should have a voice on Facebook, the social network he runs.
On Monday, Mr. Zuckerberg, the chief executive, detailed some of the costs of that approach.
He said that his company had recently found and taken down four state-backed disinformation campaigns, the latest of dozens that it has identified and removed this year. Three of the campaigns originated in Iran, and one in Russia, Facebook said, with state-backed actors disguised as genuine users. Their posts targeted people in North Africa, Latin America and the United States, the company said.
Such influence networks are a growing threat, but Mr. Zuckerberg has deliberately staked his company’s future on letting people post almost anything they want in the name of democracy. That has opened the door for foreign operatives and others to spread disinformation on Facebook by posting conspiracy theories, inflammatory messages and false news to divide people.
Mr. Zuckerberg said his answer to some of these challenges was to be more transparent about where some of the posts were coming from and to better verify the identities of those putting up messages and ads. So on Monday, Facebook also rolled out new features to label whether posts were coming from state-sponsored media outlets.
The company also prohibits what it calls “coordinated inauthentic behavior,” its term for actors who hide their identities on Facebook to spread misinformation — as was the case with the state-sponsored accounts it took down.
“Elections have changed significantly since 2016, but Facebook has changed too,” Mr. Zuckerberg said in a conference call to discuss the disinformation campaigns and election security measures. “We’ve gone from being on our back foot to now proactively going after some of the biggest threats that are out there.”
Mr. Zuckerberg announced the moves as Facebook faces a near-daily torrent of criticism from American presidential candidates, the public, the press and regulators around the world, many of whom argue that the company is unable to properly corral its outsize power.
Senator Elizabeth Warren, a front-runner for the Democratic presidential nomination, recently accused Facebook of being a “disinformation-for-profit machine” because it allows false information from political leaders to circulate under its free speech stance. The Federal Trade Commission and the Justice Department are conducting investigations into Facebook’s market power and history of technology acquisitions.
To combat the critics, Mr. Zuckerberg has ramped up his public appearances. He has recently given several interviews to conservative and liberal media outlets and delivered a robust defense of his company’s policies at Georgetown University in Washington. On Wednesday, he will again be in the spotlight when he is scheduled to testify before congressional lawmakers about Facebook’s troubled cryptocurrency effort, called Libra.
In his conference call on Monday, Mr. Zuckerberg said that Facebook had become better able to seek out and remove foreign influence networks, relying on a team of former intelligence officials, digital forensics experts and investigative journalists. Facebook has more than 35,000 people working on its security initiatives, with an annual budget well into the billions of dollars.
“Three years ago, big tech companies like Facebook were essentially in denial about all of this,” said Ben Nimmo, head of investigations at Graphika, a social media analytics agency. “Now, they’re actively hunting.”
The company has also embarked on closer information-sharing partnerships with other tech companies like Twitter, Google and Microsoft. And since 2016, Facebook has strengthened its relationships with government agencies like the Federal Bureau of Investigation, as well as with their counterparts in other countries.
But as Facebook has honed its skills, so have its adversaries. Nathaniel Gleicher, Facebook’s head of cybersecurity policy, said that there has been an escalation of sophisticated attacks coming from Iran and China — beyond the disinformation campaigns from Russia in 2016 — which suggests that the practice has only grown more popular over the past few years.
“You have two guarantees in this space,” Mr. Gleicher said. “The first guarantee is that the bad guys are going to keep trying to do this. The second guarantee is that as us and our partners in civil society and as our partners in industry continue to work together on this, we’re making it harder and harder and harder for them to do this.”
The company said the disinformation campaigns it removed on Monday included content that touched on conflict in the Middle East, racial strife and posts involving Alexandria Ocasio-Cortez, a Democratic congresswoman from New York. The posts crossed categories and ideological lines, seemingly with no specific intent other than to foment discord among citizens in multiple countries.
While Facebook does not want to be an arbiter of what speech is allowed on its site, it said it wanted to be more transparent about where the speech is coming from. To that end, it will now apply labels to pages considered state-sponsored media — including outlets like the broadcaster Russia Today — to inform people whether the outlets are wholly or partially under the editorial control of their country’s government. The company will also apply the labels to the outlet’s Facebook Page, as well as make the label visible inside the social network’s advertising library.
“We will hold these Pages to a higher standard of transparency because they combine the opinion-making influence of a media organization with the strategic backing of a state,” Facebook said in a blog post.
Renee DiResta, the technical research manager for the Stanford Internet Observatory, had a slightly critical take. “The new policies related to fighting foreign interference and increasing transparency are commendable,” she wrote in an email Monday. “However, it does seem incongruous to reiterate a commitment to fighting misinformation and putting more prominent fact-checks on organic posts while stepping back from that for paid political ads.”
The company said it developed its definition of state-sponsored media with input from more than 40 outside global organizations, including Reporters Without Borders, the European Journalism Center, Unesco and the Center for Media, Data and Society.
The company will also more prominently label posts on Facebook and on its Instagram app that have been deemed partly or wholly false by outside fact-checking organizations. Facebook said the change was meant to help people better determine what they should read, trust and share. The label will be displayed prominently on top of photos and videos that appear in the news feed, as well as across Instagram stories.
How much of a difference the labels will make is unclear. Home to more than 2.7 billion regular users, Facebook and Instagram see billions of pieces of content shared to their respective networks daily. Fact-checked news and posts represent a fraction of that content. A wealth of information is also spread privately across Facebook’s messaging services like WhatsApp and Messenger, two conduits that have been identified as prime channels for spreading misinformation.
Mr. Zuckerberg said he believed moves like the ones he announced on Monday, along with building more sophisticated artificial intelligence systems and other preventive technology, would allow Facebook to offer its platform to more people while mitigating harm on the social network.
“We built systems to fight interference that we believe are more advanced than what any other company is doing and most governments,” he said. “Personally, this is one of my top priorities for the company.”