Art has always been an ideal vessel for propaganda: persuading with emotion can cut through the need for rational argument. With Facebook’s release of thousands of examples of social media propaganda from the 2016 election, it’s becoming clear that artlessness works just as well.

After Congressional hearings in the United States, Facebook has announced an “Action Plan Against Foreign Interference” that would double its security team in 2018, and it plans to release a tool that lets users check whether they clicked on propaganda during the 2016 election. Two conservative activists on Twitter were recently revealed to be bots; that’s two of the company’s estimated 36,746 bot accounts, though an independent investigation found 150,000 such bots operating to influence the Brexit campaign. While Facebook and Congress point to Russia, that country denies any involvement. Meanwhile, third-party tools, such as botcheck.me, can evaluate Twitter account histories for bot-like patterns. Other social networks are pointing to AI solutions that never seem to arrive.
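Tools like botcheck.me don’t publish their exact models, but the general approach, scoring an account’s history against behavioral signals, can be sketched in a few lines of Python. Every field and threshold below is an illustrative assumption, not anything a real detector is known to use:

```python
from dataclasses import dataclass

@dataclass
class AccountHistory:
    tweets_per_day: float      # average posting rate
    pct_retweets: float        # share of activity that is retweets (0-1)
    pct_duplicate_text: float  # share of tweets repeating earlier text (0-1)
    account_age_days: int
    followers: int
    following: int

def bot_likelihood(acct: AccountHistory) -> float:
    """Score an account from 0 to 1 on bot-like behavioral signals.

    Every threshold here is a guess for illustration only.
    """
    score = 0.0
    if acct.tweets_per_day > 50:          # humans rarely sustain this pace
        score += 0.3
    if acct.pct_retweets > 0.9:           # pure amplification accounts
        score += 0.2
    if acct.pct_duplicate_text > 0.5:     # repeating identical content
        score += 0.2
    if acct.account_age_days < 30 and acct.followers > 10_000:
        score += 0.2                      # sudden, implausible growth
    if acct.following > 0 and acct.followers / acct.following > 100:
        score += 0.1                      # astroturfed follower ratio
    return min(score, 1.0)
```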

Today’s propaganda artists are making use of the “creative” algorithm: an emerging practice in which data is mined for “inspiration” and then channeled back into creative products. In line with our past events examining cyberthreats and digital humanitarianism, we’re looking at how creative algorithms work (or fail) and how they’re influencing the next wave of propaganda. What happens when bots talk, and people listen?

Batman Elsa Birthday Babies

Artist and researcher James Bridle recently took a critical look at YouTube videos crafted for children. The children’s market is a ripe target for this kind of content: toddlers love repetition, parents love the endless stream of (unwatched) content, and producers love the low costs and minimal production values.

Bridle writes that the algorithms aren’t just curating this content. They are surfacing the most powerful combinations of keywords and using them to dictate what content gets produced for the site. YouTube selects videos matching similar keywords for its “up next” queue, which plays automatically when one video ends. Create a video that matches those keywords, and you ensure it will join the infinite stream of content shown to a child searching for Elmo or Frozen.

There is no shortage of cheap, quickly created content with word-salad titles like “Batman Finger Family Song - Superheroes and Villains! Batman, Joker, Riddler, Catwoman.” The audience for that title isn’t a child, or parents. The audience isn’t human at all: the audience is the YouTube algorithm. Once the keywords are crafted for that algorithm, the content writes itself: throw those characters together and back it with the “finger family song.” What happens next doesn’t matter, because the child will watch it through to the next video. Keywords dictate the content, not to benefit a child, but to ensure that the algorithm plays the video in any automated queue related to its title.
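None of these producers publish their tooling, but the mechanic the titles imply is easy to imagine: pull whatever keywords the recommendation engine currently rewards and recombine them into a title. A hypothetical sketch, with made-up keyword pools standing in for real trending-search data:

```python
import random

# Hypothetical keyword pools; actual producers would harvest these
# from trending searches and competitors' high-performing titles.
CHARACTERS = ["Batman", "Spiderman", "Elsa", "Joker", "Hulk", "Catwoman"]
FORMATS = ["Finger Family Song", "Learn Colors", "Surprise Eggs", "Wrong Heads"]
HOOKS = ["Superheroes and Villains!", "Nursery Rhymes for Kids", "Fun Compilation"]

def word_salad_title(n_characters: int = 3) -> str:
    """Assemble a title optimized for keyword matching, not for humans."""
    cast = random.sample(CHARACTERS, n_characters)
    return (f"{cast[0]} {random.choice(FORMATS)} - "
            f"{random.choice(HOOKS)} {', '.join(cast)}")

print(word_salad_title())
# e.g. "Elsa Wrong Heads - Superheroes and Villains! Elsa, Batman, Joker"
```

The video itself can then be produced to order, because the title has already done the algorithmic work.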

Bridle points out that something is amiss in these videos. They certainly allow less-than-scrupulous actors to inject weird content into a child’s stream. One nightmarish example shows Spiderman, the Hulk, and Elsa all being bashed in the head by the Joker and other villains, who then bury these favorite children’s characters alive in quicksand.

That’s blatantly outrageous content created by anonymous bad actors. But even in “harmless” videos, there’s something strange about inverting the relationship between keywords and content. Keywords are meant to describe what content contains. By learning the terms people search for, stripping those words of any context, and reassembling them, you create a formula for search results or, at least, for high placement in auto-generated content streams.

The Dark Art of SEO

This is what used to be considered the dark art of “SEO,” or Search Engine Optimization: a set of techniques for writing blog spam that could show up in search results. The impact of blog spam was somewhat limited: 500-word texts redirecting you to purchase products. Today, we’re seeing SEO produce epic, 30-minute animated videos that don’t explicitly ask you for money but generate revenue just by being clicked.

The content of these videos is secondary. Kids watch whatever is dictated by the most valuable keywords. Humans create this content quickly in response, resulting in a surrealist mash-up of arbitrary search terms: the digital storytelling equivalent of empty calories.

Machine learning processes take human inputs, strip them into basic units, and then reassemble them into infinite variations. It’s this blend of human and alien processes that makes “AI consciousness” a very specific kind of weird: uncanniness.
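A word-level Markov chain is about the simplest working illustration of that strip-and-reassemble loop (the project discussed below uses far more complex neural networks, but the principle is the same). A minimal sketch:

```python
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    """Strip the input into basic units: map each word to its followers."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def reassemble(chain: dict, start: str, length: int = 20) -> str:
    """Reassemble the units into a new, statistically plausible variation."""
    word, output = start, [start]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the machine learns the patterns and the machine remixes the patterns"
print(reassemble(build_chain(corpus), start="the"))
# Output resembles the corpus without ever copying it whole.
```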

Rethinking the Uncanny

A great example of uncanniness comes from the Dadabots’ album “Deep the Beatles!” The album is the result of a machine learning program “listening” (scanning sound data) to Beatles records and producing something that is very much the Beatles and very much not the Beatles.

Ernst Jentsch first described the emotion of “uncanniness” in 1906: “In telling a story, one of the most successful devices for easily creating uncanny effects is to leave the reader in uncertainty [of] whether a particular figure in the story is a human being or an automaton, and to do it in such a way that his attention is not focused directly upon his uncertainty, so that he may not be led to go into the matter and clear it up immediately.”

It’s an oddly prescient line of thinking that seems to describe the entire internet experience as of 2016. The uncanny has moved from literature into the virtual world, spreading a residue of low-grade, unsettling surrealism into our everyday lives. Looking at a Twitter account with 38,800 followers posting nothing but unsourced political memes in 2015, we might have asked how this person had so much time on their hands. Today, we have to ask if they’re actually a person.

In its congressional hearings, Facebook shared 3,000 images it claims originated from a shadowy organization in St. Petersburg, Russia, intended to influence American voters. What we see in these images is the surface-skimming of keywords, created from real political debates, boiled down to their most toxic and potent forms.

Facebook is transcribing your online actions and reducing them into easily digestible traits. It can tell if you’re neurotic, a reader, a beach-lover, an extrovert. It can tell if you’re gay or straight, married, religious, or a parent. It can tell if you’re worried about immigrants, guns, or unemployment. These categories can be skimmed and recycled into content. Just as the algorithm knows a four-year-old wants to watch an Elsa video, advertisers can tell if you want to see racist or toxic political content, and deliver it.
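Mechanically, that kind of targeting reduces to filtering user records against a list of required categories. A minimal sketch, with hypothetical trait labels standing in for Facebook’s real, and far larger, taxonomy:

```python
# Hypothetical user records; a real ad platform infers traits like these
# from likes, clicks, and browsing history at enormous scale.
users = [
    {"id": 1, "traits": {"religious", "fox_news", "gun_rights"}},
    {"id": 2, "traits": {"beach_lover", "extroverted", "parent"}},
    {"id": 3, "traits": {"religious", "parent", "immigration_concern"}},
]

def target_audience(users: list, required: set) -> list:
    """Return the users whose inferred traits include every required category."""
    return [u["id"] for u in users if required <= u["traits"]]

# An ad buyer asks for everyone flagged as both religious and worried
# about immigration; the platform returns the matching accounts.
print(target_audience(users, {"religious", "immigration_concern"}))  # [3]
```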

The Meme War

Two anonymous researchers are creating an online archive of bot-linked political campaign imagery. It includes groups from across the spectrum: “Army of Jesus,” gay groups, “Woke Blacks,” a conservative-leaning “Missouri News,” and a page called “Feminist Tag.” They target pro- and anti-immigrant sentiment alike. If a set of keywords could be targeted with divisive political rhetoric, a group was created to appeal to those keywords. From there, real people, selected by the algorithms, boosted and amplified messages that were dictated by those same algorithms.

The social media propaganda images aren’t sophisticated. They’re full of spelling errors, extremist language and imagery. In one, Satan suggests that Hillary Clinton would win the election if he beat Jesus in an arm-wrestling contest. The viewer was encouraged to “like” the post to “help Jesus win.” That content was created specifically for people whose personalities showed a strong affinity to the Bible, Jesus, God, Christianity, and Fox News commentator Bill O’Reilly.

The ads can also create associations that rely on several layers of deception. A few targeted the Facebook accounts of people with clear anti-immigrant bias and presented them with advertisements from a fake pro-Muslim group. The ads included an image of Hillary Clinton hugging a woman in a burka with the message “Support Hillary to Save American Muslims.” The idea seems to be that Islamophobic voters would share the ad out of a sense of outrage, deepening their animosity toward the Clinton campaign.

When Propaganda Goes Viral

Sharing is an impulse built into all social media, and it’s the real mechanism being “hacked” in contemporary propaganda. We share things we relate and respond to, because they reflect who we are, how we want to be seen, and who we want to connect with.

After Freud, psychoanalyst Jacques Lacan took on the study of the uncanny. For Lacan, the uncanny reflects a conflicted appeal to our ideas of ourselves. The images and messages reveal a sense of our identities being reduced, partitioned, and invaded. Something uncanny emerges in this process. These are strange objects pretending to be familiar.  

Looking at these archives of propaganda images is unsettling because they contain parts of ourselves we recognize: the political memes, ideas, and philosophies we believe in. In that sense, they belong to us. But they also push the boundaries of those beliefs, including our ideas of what other people believe about us. It’s an environment that contributed to an especially toxic online atmosphere in 2016.

What’s next?

Not all creative algorithm content is created equal. In 2013, Netflix analyzed the extensive tags it had created for every piece of its content to see what worked for most of its subscribers. From that data, it was able to discern a “Venn diagram” for a successful streaming series, which it agreed to produce sight unseen. That show was “House of Cards.” But the show wasn’t just the product of blind faith in data. Instead, it pointed to a new kind of intelligence, as described by Tim Wu in his New Yorker piece about the show:

“It is a form of curation … whose aim is guessing not simply what will attract viewers, but what will attract fans—people who will get excited enough to spread the word. Data may help, but what may matter more is a sense of what appeals to the hearts of obsessive people, and who can deliver that.”
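Mechanically, the “Venn diagram” amounts to intersecting the tag sets of shows that already perform well, then producing for the overlap. A toy version with invented tags (Netflix’s internal tags number in the tens of thousands and are not public):

```python
# Invented tag sets for illustration only.
hit_shows = {
    "show_a": {"political", "drama", "antihero", "british_source"},
    "show_b": {"political", "drama", "antihero", "ensemble_cast"},
    "show_c": {"political", "drama", "antihero", "thriller"},
}

# The overlap of every hit's tags: the center of the Venn diagram,
# and a recipe for the next commission.
common_tags = set.intersection(*hit_shows.values())
print(common_tags)  # {'political', 'drama', 'antihero'}
```

As Wu’s point suggests, the data narrows the field; human judgment about fandom still picks the winner.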

The similarities between the art of crafting algorithms into fan-favorite entertainment and crafting successful online propaganda campaigns? You might say it’s uncanny.


swissnex San Francisco is exploring a number of topics around AI and ethics in 2018. Subscribe to our event newsletter to stay up to date.
