We live in a world where AI is already woven through the darker fabric of our social reality, but not in ways our imaginaries have been trained to envision.
There is no single evil character to focus on, sitting high in a tower stroking a velvet grey cat, or in a cyberpunk basement surrounded by glowing banks of screens mounted on a black steel frame.
Instead, what drives the creation of technology and its use to manipulate and control is a network of opaque individual minds and machines that share something, anything...
An addiction to cat photos, the task of sorting and publishing news, carving chess sets, targeting individuals with personalised content, an interest in bump stocks, determining the best route to the mall, a fetish for BDSM wear, a job at Facebook, a job in a Macedonian village sharing the news that Hillary is a pedophile or being a blood relative of Donald J. Trump, etc.
Together, we make up companies, charities, political structures and movements, universities, communities of shared interest - anything around which human action is organised, along with the tech entangled in it.
Together, we gradually put in place the conditions for the sudden.
We chip away at creating dimly lit networks, at building the hidden machinery of things like social media platforms, multinational corporations and political parties. Pretty soon, we are fighting to understand the results of our labours, and to make sense of things we didn't set out to achieve.
In this way, dangerous things happen - a pattern accelerated by the ubiquity, complexity, and opacity of digital networks.
For instance, in a few short years, nefarious forces contrived to propel Trump from the periphery he shared with other celebrities, a laughing stock who played the fool on reality television, to a central position and supreme power.
Integral to Trump's success was the influence of Facebook's content-personalising AI, combined with the failure of its news AI, during the 2016 US Presidential Election campaign.
This powerful and volatile mix has galvanised Facebook to pursue even greater technological sophistication - they are catching up with the other tech giants to become one of the prime movers in the world of AI research, and all in the pursuit of predictability and control.
But what if they are gradually building the conditions for still greater volatility, and opening opportunities for more sudden and violent influence?
How has an AI-driven Facebook disaster unfolded? Simply put, via the sheer scale of the network.
According to Facebook, there were:
- 1.4 billion daily active users on average for December 2017
- 2.13 billion monthly active users as of December 31, 2017 (Facebook Newsroom, 2018)
The company are in possession of vast resources, including real-time, individuated data, and as the disasters of 2016 onward - the so-called "post-truth" era - have shown, they have become determined to make changes toward predictability and control.
Facebook is the world's largest distributor of news, and makes use of AI to curate this content. In 2016, this technology, the "Trending News Module", was discovered to be overseen by a distributed editorial team.
Fearing public response to the progressive leanings of these humans, Facebook sacked the team, and news selection fell back on a non-partisan, algorithm-only approach, with disastrous results (Thielman, 2016) - including the publication of misinformation and tasty news like:
- “SNL Star Calls Ann Coulter a Racist C*nt,” and
- “BREAKING: Fox News Exposes Traitor Megyn Kelly, Kicks Her Out For Backing Hillary.”
- The trending #McChicken, linked to a video of a man masturbating with a McChicken burger, which had gone viral.
The story, however, is far more complicated than this. In February 2018, Wired published an overview - a tour de force - of Facebook's past two years, an epic beginning with the outing of the politics of its editors, and concluding that this period has:
altered Facebook’s fundamental understanding about whether it’s a publisher or a platform. The company has always answered that question defiantly—platform, platform, platform—for regulatory, financial, and maybe even emotional reasons. But now, gradually, Facebook has evolved. Of course it’s a platform, and always will be. But the company also realizes now that it bears some of the responsibilities that a publisher does: for the care of its readers, and for the care of the truth. You can’t make the world more open and connected if you’re breaking it apart. So what is it: publisher or platform? Facebook seems to have finally recognized that it is quite clearly both. (Thompson and Vogelstein, 2018)
This fundamental shift has, we are told, also upended CEO Zuckerberg's reality. Wired cite an unnamed executive who claimed it has "massively changed his personal techno-optimism... It has made him much more paranoid about the ways that people could abuse the thing that he built." (Thompson and Vogelstein, 2018)
He has good reason to be fearful, for anything we build moves beyond our control almost immediately - and an invention that operates on this unprecedented scale elevates the unpredictable dimension of its use.
But the whole "Russia made Trump win using Facebook" thing deserves some qualification, and the most recent intelligence reveals it was part of a sophisticated, multi-pronged and highly resourced effort over time.
The Mueller investigation has now levelled indictments that show tens of millions of dollars were spent over several years to create a public mood and build a set of narratives that would "premediate", or prepare the way for a Trump victory.
Here a range of content and activities achieved, and continue to achieve (in the wake of the recent Florida high school massacre), the goals of subversion and division, promoted through a variety of means - including tech like "bots", with levels of AI ranging from none to racist Tay (not that Tay is a Russian bot...) - with platforms like Twitter and Facebook as the central protagonists. (Frenkel and Wakabayashi, 2018)
These are, however, fundamental challenges to the fabric of society, wrought by a network effect amplified by AI and by now commonly accepted as fact (by all but Trump), and Zuck is right to be scared about the role Facebook played, and plays, in them.
Facebook is behind other large AI research oriented companies, but catching up fast.
As long ago as 2015, CEO Mark Zuckerberg said he was aiming in the next five to 10 years to "get better than human level at all of the primary human senses: vision, hearing, language, general cognition.” (McCracken, 2015)
Over the last five years, Facebook has rapidly expanded its AI research and development investment, with a team of over 100 researchers now distributed around the globe, including some of the world's leading experts.
It has established the Facebook AI Research (FAIR) group under Director Yann LeCun, and is conducting research and fora designed to engage a larger scholarly community, and to:
understand and develop systems with human-level intelligence by advancing the longer-term academic problems surrounding AI. Our research covers the full spectrum of topics related to AI, and to deriving knowledge from data: theory, algorithms, applications, software infrastructure and hardware infrastructure. (FAIR, 2018)
Hilary Mason of Cloudera, founder of machine-learning research firm Fast Forward Labs, praised their research for its speed and results, and said they are "incredibly alluring" to AI talent, who "are attracted to where the data is, and Facebook has some of the most interesting data." (Harwell, 2018)
Think about it - you are a brilliant AI scholar, and universities are chasing you alongside tech giants with deep pockets like Amazon and Google’s DeepMind, and of course, Facebook, where the AI you will deploy is used by over 2 billion users each month.
That's a pretty attractive scale of test group.
What About You?
It is not unreasonable to suggest that the populations and infrastructure of the world's advanced economies - the wealthiest nation states - are the primary guinea pigs of this research effort, though attempts are being made to expand this. (Shearlaw, 2016)
Africa, for instance, has 557 million people using mobile devices, of whom Facebook has so far captured 170 million (94% of whom use mobile to log in), and 7 of 10 internet users in Africa now use Facebook. (Shapshak, 2017)
In 2016, The Guardian reported that Facebook had agreements with "almost half the countries in Africa – a combined population of 635 million" to provide a free internet service, in a "controversial move to corner the market in one of the world’s biggest mobile data growth regions", where it is trialling satellite and drone technology to deliver internet to remote communities. (Shearlaw, 2016)
Then, in September 2016, Elon Musk's SpaceX rocket blew up at Cape Canaveral, taking a Facebook satellite with it in spectacular style. CEO Zuckerberg happened to be in Africa at the time, spruiking the project. He said:
"I'm here in Africa, I'm deeply disappointed to hear that SpaceX's launch failure destroyed our satellite that would have provided connectivity to so many entrepreneurs and everyone else across the continent". (Fung, 2016)
This upset Zuck immensely, on behalf of entrepreneurs and non-entrepreneurs, of course. They're real people too.
Facebook have built machines that grow ever more capable of learning about and predicting the behaviours of the one thing that really matters: you.
These predictions are not based on abstract influences; instead they are dedicated to particular, real-world events, such as the likely outcome of your exposure to a suite of stimuli presented in quick succession. To keep bigotry out of the example, imagine...
A preview from a romantic film set in a rustic village of northern Spain, a still image of a young couple wandering a narrow, cobbled street in old Bilbao, and words that beckon: imagine yourself, in Spain?
AI is crucial to Facebook attracting new users and keeping current ones engaged, and underpins longstanding stock features like facial recognition in photo tagging, and the algorithms that decide where and when content arrives in a user's News Feed.
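To make the mechanism concrete: feed curation of this kind is, at its core, a ranking problem. The sketch below is a toy illustration under my own assumptions - the affinity table, friend boost and recency terms are invented for the example, not Facebook's actual model:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topic: str
    age_hours: float

# Invented per-user signals; a real system would learn these from behaviour.
USER_TOPIC_AFFINITY = {"politics": 0.9, "cats": 0.6, "finance": 0.3}
CLOSE_FRIENDS = {"alice"}

def predicted_engagement(post: Post) -> float:
    """Toy stand-in for the learned model that scores each candidate post."""
    affinity = USER_TOPIC_AFFINITY.get(post.topic, 0.1)
    friend_boost = 1.5 if post.author in CLOSE_FRIENDS else 1.0
    recency = 1.0 / (1.0 + post.age_hours)  # newer posts score higher
    return affinity * friend_boost * recency

candidates = [
    Post("alice", "cats", age_hours=1.0),
    Post("news_page", "politics", age_hours=0.5),
    Post("bob", "finance", age_hours=12.0),
]

# The feed is simply the candidates in descending predicted-engagement order.
feed = sorted(candidates, key=predicted_engagement, reverse=True)
```

Every term in such a score is a lever: tune a friend boost upward and you get something like the January 2018 shift toward friends and family; tune topic affinity and you get personalised political content.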
On its own, this AI and the data it produces and uses, may not be or become nefarious - but it is ready and waiting to be exploited.
Along comes Cambridge Analytica, the business front for an opaque technology that now famously claims to use its own modelling, public data and Facebook data to permit organisations to understand the most intimate details of individual audience members. (Kirchgaessner, 2016)
The Trump campaign made use of their services, and it is fortunate for Cambridge Analytica that the news media has attributed to them a clandestine status, for the infamy that followed was only possible thanks to the scale of Facebook's user numbers and the sophistication of its AI.
As Wired reported in October 2017, Cambridge staff helped the campaign to make a list of voters who seemed likely to swing to Trump, along with those most likely to donate funds to the campaign. In August 2016, "Cambridge was critical to helping the campaign raise $80 million in the prior month, after a primary race that had been largely self-funded by Trump". (Lapowsky, 2017)
As an example of how this less-than-virtuous cycle of data extraction and voter manipulation called for highly complex targeting via Facebook's AI, the Trump campaign went on to deliver 175,000 versions of the same Facebook advertisement in the period leading up to the third presidential debate in October.
This is a frightening potential, is it not? This level of personalisation of advertising is not possible without high-tech solutions - and the AI would also yield powerful, precise data about the success or failure of the ad messages, to further inform the campaign.
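To see why 175,000 variants is even feasible, note that variants multiply combinatorially out of a handful of creative elements, and the platform's own reporting closes the loop. A hypothetical sketch - every element, segment and number here is illustrative, not drawn from the campaign's actual material:

```python
import itertools
import random

# Illustrative creative elements; combinations multiply quickly.
HEADLINES = ["Message A", "Message B", "Message C"]
IMAGES = ["flag.jpg", "factory.jpg", "rally.jpg"]
CALLS_TO_ACTION = ["Donate now", "Learn more", "Join us"]
AUDIENCES = ["segment_1", "segment_2"]

# Every combination of elements is a distinct ad variant: 3 * 3 * 3 * 2 = 54 here;
# add a few more elements and audience segments and six figures is trivial.
variants = [
    {"headline": h, "image": i, "cta": c, "audience": a}
    for h, i, c, a in itertools.product(HEADLINES, IMAGES, CALLS_TO_ACTION, AUDIENCES)
]

def measured_ctr(variant: dict) -> float:
    """Stand-in for the click-through rate the platform reports back per variant."""
    return random.random()

# The feedback loop: keep only the best-performing tenth for the next round.
ranked = sorted(variants, key=measured_ctr, reverse=True)
survivors = ranked[: len(ranked) // 10]
```

The machinery is ordinary A/B testing at extraordinary scale: the advertiser supplies the elements, the platform supplies the audience and the measurement.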
Trump's team downplayed the role of Cambridge publicly - not surprising, given that billionaire Trump backer Robert Mercer is the firm's main financial backer, and the since-sacrificed Steve Bannon also held a position on its board. (Lapowsky, 2017)
But Cambridge Analytica is not nearly so powerful, nor opaque, as the data source: Facebook itself.
Growing pressure on Facebook to take greater responsibility for content delivered by the platform led to changes in early 2018.
CEO Mark Zuckerberg published a series of posts on Facebook in January, beginning on the 4th, when he wrote “we currently make too many errors enforcing our policies and preventing misuse of our tools”, a pattern he intended to set about “fixing” (Zuckerberg, 2018).
On January 12, he announced a shift in News Feed content away from “public content” from businesses, brands and media and toward posts from friends, family and groups with the goal of fostering “more meaningful social interactions” (Zuckerberg, 2018).
On January 20, he wrote:
There's too much sensationalism, misinformation and polarization in the world today. Social media enables people to spread information faster than ever before, and if we don't specifically tackle these problems, then we end up amplifying them. That's why it's important that News Feed promotes high quality news that helps build a sense of common ground…. [A]s part of our ongoing quality surveys, we will now ask people whether they're familiar with a news source and, if so, whether they trust that source. (Zuckerberg, 2018)
The rationale Zuckerberg offered for the use of these surveys was objectivity:
We considered asking outside experts, which would take the decision out of our hands but would likely not solve the objectivity problem. Or we could ask you -- the community -- and have your feedback determine the ranking. We decided that having the community determine which sources are broadly trusted would be most objective. (Zuckerberg, 2018)
Effectively, the “fixes” outlined here appear to reduce the overall quantity of news any given news feed will supply to the user, and introduce measures to allow the community to define trust: placing responsibility for the authenticity of news content at the feet of users.
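Zuckerberg's description implies a simple aggregate: among respondents familiar with a source, what share trust it? A minimal sketch of how such a survey signal might feed ranking - the blending weight and function shapes are my assumptions, not Facebook's published method:

```python
def broad_trust(responses):
    """responses: list of (familiar, trusts) booleans from the quality survey.
    Per Zuckerberg's post, only people familiar with a source get a say."""
    familiar = [trusts for knows, trusts in responses if knows]
    if not familiar:
        return 0.0  # unknown sources earn no trust boost
    return sum(familiar) / len(familiar)

def rank_score(engagement: float, trust: float, trust_weight: float = 0.5) -> float:
    """Blend predicted engagement with community trust; the blend is illustrative."""
    return engagement * (1 - trust_weight + trust_weight * trust)

# A widely known, widely trusted source...
known_source = [(True, True)] * 8 + [(True, False)] * 2
# ...versus a source few respondents know, and the one who does distrusts it.
fringe_source = [(False, False)] * 9 + [(True, False)]

trusted_score = rank_score(1.0, broad_trust(known_source))
fringe_score = rank_score(1.0, broad_trust(fringe_source))
```

Note what even this toy version places on users: the same engagement prediction is promoted or demoted purely on the community's say-so, which is precisely the displacement of editorial responsibility described here.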
Facebook’s move, then, is toward further claims of neutrality – an ongoing attempt to maintain the vision of a platform that facilitates networked communities, rather than determining them through editorial intervention.
But it's not that simple
At the time of writing, as Robert Mueller and his cohorts bring to bear the weight of their findings in the form of criminal charges, it has become evident that a primary source of their information, and the power that accompanies it, is the AI Facebook has created.
In February of 2018, as Deputy Attorney General Rod J. Rosenstein announced the indictment of 13 Russians linked to a troll farm, the role of Facebook was further evidenced, and Rob Goldman (Facebook’s vice president for advertising), had a brain snap.
Goldman posted a series of confused and damaging tweets, writing that he was "very excited" by the indictments as Facebook had “shared Russian ads with Congress, Mueller and the American people... Most of the coverage of Russian meddling involves their attempt to effect the outcome of the 2016 US election. I have seen all of the Russian ads and I can say very definitively that swaying the election was *NOT* the main goal.”
Instead, it was to “divide America by using our institutions, like free speech and social media, against us. It has stoked fear and hatred amongst Americans. It is working incredibly well.”
Well, that's a relief!
The fact that the indictment also sets down that the Russians were not only buying ads, but using fake American identities to obtain PayPal accounts and Social Security numbers, and to create Facebook pages, seems to have slipped through the cracks for Goldman.
Pages that facilitated groups like “Blacktivist,” “Secured Borders”, “Army of Jesus”, and, well... as Anne Applebaum argued, the Russians "did indeed use those pages to spread fear and hatred, reaching tens and possibly hundreds of millions of people". (Applebaum, 2018)
It is clear that if Facebook "play ball", the Mueller investigation may yield a series of startling discoveries, and perhaps even bring down a sitting President - what data is hidden in Facebook's repositories that might tell the hidden story, and what might Mueller already have discovered?
Long Live AI
The children of these machines are fabulous and violent.
Facebook highlights the danger created by removing responsibility for ethical decision-making about what to publish, and when, from human hands, and displacing it to the network and the dark machinery of AI tech.
Facebook has set itself an impossibly difficult task, for how does one compose AI capable of assisting a small group of humans to ethically convene a network of over 2 billion souls?
Even if this were possible, should a company be responsible for so many, and so much consequence?
The intertwined forces of human will and technology, particularly AI, have been realised as a presence capable of undoing centuries of graft - of destabilising, rapidly, a grand set of interlocking institutions, charged to provide a safe, humanistic structure for a diverse culture, and many millions of individuals to thrive within.
And now, what is Facebook building, deep within that heavily fortified, incredibly powerful organisation? They have taken the best and brightest, and are pouring stupendous resources into this race against other tech giants who are doing the same.
All this, despite the evidence offered so far.
All this, while those who are attentive to the nefarious, and unburdened by the ethics that human emotions generally sustain, intend to make much of the unplanned opportunities in networked structures that emerge through human activity.
They do very well. Very very well. Terrifically, fantastically. Wonderfully well.
They are ever ready, and rarely bother with stroking hypoallergenic cats in their eyrie (though they often own one), or inhabiting mouldy basements and copping eye strain.
No, instead the cat stroker or the basement dweller are out networking with groups of brilliant individuals who bring to their attention the potentials in emergent structures, and without needing to grasp the complexity beneath, leap in, and exploit in full measure the breach.
Applebaum A (2018) Opinion | Why Facebook is afraid of Robert Mueller. Washington Post, Available from: https://www.washingtonpost.com/news/global-opinions/wp/2018/02/19/why-facebook-is-afraid-of-mueller/?utm_term=.be2d05af524c (accessed 1 March 2018).
Donald Trump falsely denies that he denied Russian meddling (2018) Politifact, Available from: http://www.politifact.com/truth-o-meter/statements/2018/feb/19/donald-trump/donald-trump-falsely-denies-he-denied-russian-medd/ (accessed 1 March 2018).
Five monitor setup, Battlestation (2018) Know Your Meme, Available from: http://knowyourmeme.com/photos/986826-battlestation (accessed 1 March 2018).
Facebook AI Research (FAIR) (2018) Facebook Research, Available from: https://research.fb.com/category/facebook-ai-research-fair/ (accessed 1 March 2018).
Facebook Newsroom (2018) Newsroom.fb.com, Available from: https://newsroom.fb.com/company-info/ (accessed 28 February 2018).
Frenkel S and Wakabayashi D (2018) After Florida School Shooting, Russian ‘Bot’ Army Pounced. Nytimes.com, Available from: https://www.nytimes.com/2018/02/19/technology/russian-bots-school-shooting.html?emc=edit_nn_20180220&nl=morning-briefing&nlid=77905747&te=1 (accessed 1 March 2018).
Fung B (2016) That SpaceX explosion blew up one of Facebook’s most ambitious projects. Washington Post, Available from: https://www.washingtonpost.com/news/the-switch/wp/2016/09/01/that-spacex-explosion-blew-up-one-of-facebooks-most-ambitious-projects/?utm_term=.039ec917f120 (accessed 1 March 2018).
Harwell D (2018) Shake-up at Facebook highlights tension in race for AI. Washington Post, Available from: https://www.washingtonpost.com/business/economy/shake-up-at-facebook-highlights-tension-in-race-for-ai/2018/01/24/5d21239a-0138-11e8-9d31-d72cf78dbeee_story.html?utm_term=.7d37b3e81fdf (accessed 1 March 2018).
Kirchgaessner S (2016) Cambridge Analytica used data from Facebook and Politico to help Trump. The Guardian, Available from: https://www.theguardian.com/technology/2017/oct/26/cambridge-analytica-used-data-from-facebook-and-politico-to-help-trump (accessed 1 March 2018).
Lapowsky I (2017) What Did Cambridge Analytica Really Do for Trump's Campaign?. WIRED, Available from: https://www.wired.com/story/what-did-cambridge-analytica-really-do-for-trumps-campaign/ (accessed 1 March 2018).
McCracken H (2015) Inside Mark Zuckerberg’s Bold Plan For The Future Of Facebook. Fast Company, Available from: https://www.fastcompany.com/3052885/mark-zuckerberg-facebook (accessed 1 March 2018).
Shapshak T (2017) Facebook Has 170 Million African Users, Mostly On Mobile. Forbes.com, Available from: https://www.forbes.com/sites/tobyshapshak/2017/04/05/facebook-has-170m-african-users-mostly-on-mobile/#54228bce53dc (accessed 1 March 2018).
Shearlaw M (2016) Facebook lures Africa with free internet - but what is the hidden cost?. The Guardian, Available from: https://www.theguardian.com/world/2016/aug/01/facebook-free-basics-internet-africa-mark-zuckerberg (accessed 1 March 2018).
Thielman S (2016) Facebook fires trending team, and algorithm without humans goes crazy. The Guardian, Available from: https://www.theguardian.com/technology/2016/aug/29/facebook-fires-trending-topics-team-algorithm (accessed 1 March 2018).
Thompson N and Vogelstein F (2018) Inside Facebook's Two Years of Hell. WIRED, Available from: https://www.wired.com/story/inside-facebook-mark-zuckerberg-2-years-of-hell/ (accessed 2 March 2018).
Zuckerberg M (2018) Mark Zuckerberg. Facebook.com, Available from: https://www.facebook.com/zuck/posts/10104445245963251?pnref=story (accessed 24 February 2018).
Zuckerberg M (2018) Mark Zuckerberg. Facebook.com, Available from: https://www.facebook.com/zuck?hc_ref=ARRz1tSmopYqQxGVB-SipKWUYQN08gexl701vULHmPFdcH8hSHI8wOEqXounD3DH5mM&fref=nf&pnref=story (accessed 24 February 2018).
Zuckerberg M (2018) Mark Zuckerberg. Facebook.com, Available from: https://www.facebook.com/zuck/posts/10104413015393571 (accessed 24 February 2018).
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Spain License.