(The AEGIS Alliance) – Earlier this week we witnessed a carnival-like spectacle as Mark Zuckerberg, CEO of Facebook, one of the largest tech companies in the world, testified before Senate committees about privacy issues surrounding Facebook’s use of user data. While the hearings mostly highlighted that many U.S. senators – and, for that matter, most people – don’t understand Facebook’s business model or the user agreement they accept when they use the service, the spectacle made one thing abundantly clear: Zuckerberg intends to deploy AI (artificial intelligence) to manage the censorship of hate speech on Facebook.
Over the two days of testimony, the plan to use algorithmic AI for censorship came up repeatedly as a method to contain hate speech, election interference, fake news, terrorist messaging, and discriminatory ads. In fact, AI was mentioned around 30 times. Zuckerberg claims Facebook is five to ten years away from deploying a robust AI platform. The other four of the big five tech conglomerates – Google, Microsoft, Amazon, and Apple – are developing AI as well, in many cases for the same purpose of controlling content.
For obvious reasons, this should worry civil liberties activists, along with anyone concerned about the erosion of First Amendment rights on the internet. A corporate-government propaganda alliance isn’t just a conspiracy theory. Just over a month ago, Facebook, Twitter, and Google testified before Congress and announced a ‘counterspeech’ campaign in which moderate and positive posts will be targeted at people who consume or produce radical or extremist content.
Like the other major social networks, Facebook has already faced accusations of censoring alternative and conservative news sources. The Electronic Frontier Foundation (EFF) outlines other examples of Facebook’s “overzealous censorship” over the past year:
“High-profile journalists in Palestine, Vietnam, and Egypt have encountered a significant rise in content takedowns and account suspensions, with little explanation offered outside a generic ‘Community Standards’ letter. Civil discourse about racism and harassment is often tagged as ‘hate speech’ and censored. Reports of human rights violations in Syria and against Rohingya Muslims in Myanmar, for example, were taken down—despite the fact that this is essential journalist content about matters of significant global public concern.”
Facebook now believes AI is the answer to all its problems. “We started off in my dorm room with not a lot of resources and not having the AI technology to be able to proactively identify a lot of this stuff,” Zuckerberg said during his testimony. “Over the long term, building AI tools is going to be the scalable way to identify and root out most of this harmful content.”
In fact, Facebook already uses AI. “Today, as we sit here, 99 percent of the ISIS and al-Qaeda content that we take down on Facebook, our AI systems flag before any human sees it,” Zuckerberg stated.
He admitted the linguistic nuances of hate speech will be one of the tougher issues for AI.
Is it possible at all for “information gatekeepers” such as Facebook and Google to use AI to regulate content without practicing censorship? The EFF noted that “decision-making software tends to reflect the prejudices of its creators, and of course, the biases embedded in its data.”
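The EFF’s point can be illustrated with a deliberately naive sketch (hypothetical, not Facebook’s actual system): a moderation rule built from a hand-picked blocklist inevitably encodes its authors’ choices, flagging civil discourse about racism right alongside genuine abuse.

```python
# Hypothetical keyword-based moderation rule, for illustration only.
# The blocklist below is an assumed example; real systems are far more
# complex, but the same principle applies: the rules reflect their makers.
FLAGGED_TERMS = {"hate", "racist", "attack"}

def flag_post(text: str) -> bool:
    """Return True if any blocklisted term appears in the post."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return not FLAGGED_TERMS.isdisjoint(words)

# A report about racism trips the same rule as actual hate speech:
print(flag_post("We must document racist harassment and report it."))  # True
print(flag_post("I hate Mondays."))  # True (a harmless false positive)
```

A machine-learned classifier trained on examples labeled by the same authors simply moves this bias from the rule into the training data; it does not remove it.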
Obviously, at a time when the government increasingly resembles a corporatocracy, with ever-revolving doors between the State Department and Silicon Valley, any discussion of corporate censorship must also acknowledge government propaganda, which was officially legalized in 2012 by Obama’s NDAA. Is it realistic to expect no overlap between what the government wants us to believe and what corporations allow as free speech?
At one point during the testimony, a senator asked Zuckerberg whether he believes Facebook is more trustworthy with user data than the government. After a lengthy pause, Zuckerberg replied, ‘Yes.’ The moment was largely overlooked, but with a single word Zuckerberg confirmed that, despite all the talk of privacy violations, he still thinks the government is worse on privacy. And after everything brought to light by Edward Snowden and Wikileaks, is he wrong?
It’s important to note that AI will harbor the values and biases of the entity that creates it, so why assume it will make humans safer? AI – at least early AI – will do its maker’s bidding. Machine learning may ultimately become the arbiter of free speech, but it will be government and corporate programmers who set the protocols. As we already know, the rights of citizens and the rights of technocrats are not the same.
Kyle James Lee – The AEGIS Alliance – This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.