Vote Zuckerberg No 1? Gatekeepers, Intermediaries and Corporate Social Responsibility

Facebook vote image, elements via https://pixabay.com/en/ballot-election-vote-1294935/ and https://www.facebook.com/facebook

Some things I read today resonated with one another.

First, from The Signal and the Noise (The Economist Special Report on Technology and Politics; pdf; p6):

… online giants, such as Facebook and Google, … know much more about people than any official agency does and hold all this information in one virtual place. It may not be in their commercial interest to use that knowledge to influence political outcomes, as some people fear, but they certainly have the wherewithal. …

Second, from Gizmodo:

Facebook has declared it will never use its product to influence how people on the platform vote. Earlier today, Gizmodo reported that employees had asked Mark Zuckerberg to answer the question, “What responsibility does Facebook have to help prevent President Trump in 2017?” in an internal poll.

In a statement to The Hill and Business Insider, Facebook said:

Voting is a core value of democracy and we believe that supporting civic participation is an important contribution we can make to the community. We encourage any and all candidates, groups, and voters to use our platform to share their views on the election and debate the issues. We as a company are neutral — we have not and will not use our products in a way that attempts to influence how people vote.

In the earlier Gizmodo story, UCLA law professor Eugene Volokh explained that Facebook has no legal responsibility to give an unfiltered view of what’s happening:

Facebook can promote or block any material that it wants. Facebook has the same First Amendment right as the New York Times. They can completely block Trump if they want … or promote him.

I have discussed on this blog the considerable control that large private companies, such as Facebook and Google, can exert over the flow of information (see, eg, here | here | here | here | here). Concerns about the tricky issue of the transparency of the algorithms such companies use to control money and information have recently led the FTC to establish the Office of Technology Research and Investigation:

The Office … is located at the intersection of consumer protection and new technologies … and its work supports all facets of the FTC’s consumer protection mission, including issues related to privacy, data security, connected cars, smart homes, algorithmic transparency, emerging payment methods, fraud, big data, and the Internet of Things.

In her superb new book Regulating Speech in Cyberspace: Gatekeepers, Human Rights and Corporate Responsibility (Cambridge UP, 2015; summary), Emily Laidlaw argues that these digital developments call for a new system of human rights governance that takes account of private power, in particular by incorporating principles of corporate social responsibility. On the question of whether to influence the voting intentions of its users, Facebook has undertaken to do the responsible thing. But what about its existing algorithms, which seek accuracy at the expense of diversity in the political viewpoints visible to users? Worse, what about other companies that might not be as scrupulous or responsible? In such circumstances, we could do worse than to take Laidlaw's prescription on board.