Many of us, especially those under thirty, engage in society through social media. On average we spend about two hours a week on Facebook. Two hours. In The Return of the King, within two hours Aragorn has summoned an entire actual army of the living dead to fight Sauron. In that same time we view status updates like ‘Cocktails in Manhattan #hardlife #not’. But we also engage in political issues through social media—and there is a possibility that Facebook is structured in such a way that it affects our entire political outlook.

I began to notice something odd about my Facebook newsfeed recently. More and more, I was seeing statuses about ‘progressive’ liberal issues of the kind I broadly have sympathy for. Links to stories about an atheist schoolchild in Texas being put in detention for questioning religion; a story about a young feminist campaigner being cyber-bullied on a college campus; endless links about how the Tories want to ban being poor or whatever. It struck me that I hadn’t seen a conservative viewpoint in a long time. Or, if not conservative, at least a viewpoint skeptical of the handwringing liberal causes I believe in.


So why were my more conservative acquaintances not speaking out on Facebook with their hawkish take on the issues of the day? Had they all decided they were wrong and that actually smug, latte-drinking Islington liberals like myself were entirely correct? No, it turns out. They were posting links, putting forward viewpoints that would challenge my own, making snarky comments about liberal values. It’s just that Facebook was blocking me from seeing those statuses.

Facebook actively builds a ‘filter bubble’ that learns what you like seeing and shows you more of it, to the detriment of views that might challenge your own. Over time, this apparently neutral algorithm (which, as far as I can tell, was designed by Jesse Eisenberg and Justin Timberlake) ruthlessly discards the links and statuses that might be a bummer for you to see and presents you with cooler, chiller viewpoints that you already broadly hold. It’s like being in a pub where Justin Timberlake, slowly and discreetly, goes around telling people who think things you don’t find cool to leave. Eventually you are only surrounded by people talking about causes you generally already believe in.
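If you like, the mechanism being accused here can be sketched in a few lines of toy Python. This is an illustration of the general idea, not Facebook's actual code: a ranker that scores posts by how often you have engaged with their topic before, so topics you have never clicked on quietly sink out of view.

```python
from collections import Counter

def rank_feed(posts, engagement_history, feed_size=3):
    """Hypothetical feed ranker: posts on topics the reader has engaged
    with before score higher; unfamiliar topics score zero and sink."""
    topic_affinity = Counter(engagement_history)  # topic -> past engagements
    scored = sorted(posts, key=lambda p: topic_affinity[p["topic"]],
                    reverse=True)
    return scored[:feed_size]

# A reader who has only ever engaged with liberal posts (and cats).
history = ["liberal", "liberal", "liberal", "cats"]
posts = [
    {"id": 1, "topic": "liberal"},
    {"id": 2, "topic": "conservative"},
    {"id": 3, "topic": "liberal"},
    {"id": 4, "topic": "cats"},
    {"id": 5, "topic": "conservative"},
]

feed = rank_feed(posts, history)
# The conservative posts never make the cut, and since the reader can
# only engage with what is shown, each engage -> learn -> filter cycle
# entrenches the bubble a little more.
```

The feedback loop is the important part: the reader's history determines the feed, and the feed determines the next round of history.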

Is there anything wrong with that? We are at least still educating ourselves, engaging in issues that before social media simply wouldn’t have come to our attention. Causes like Everyday Sexism relied initially on social media to grow in support and to gather vast anecdotal evidence about a hidden cultural war against women. From that basis it both provided an outlet for every woman who has been heckled in the street for wearing a certain length of skirt, and raised a more general awareness of this secret sexism. Laura Bates, who started the project, now speaks regularly in the media and has even addressed the UN. That is democracy in action if anything is—a complex problem facing millions being brought to the attention of the world.

But if we increasingly engage in politics through a social media platform that has its own algorithms governing what we actually get to see, that does something to the nature of our politics. In his book In Defence of Politics, Bernard Crick argues that what defines politics is a way of balancing vastly different viewpoints within a state without resorting to the oppression of any particular view, no matter how abhorrent any one person may find it. This, Crick argues, isn’t even an issue of democracy: under certain constitutional autocracies throughout history (such as England after the Glorious Revolution of 1688) such political frameworks have existed. A Whig might despise a Tory’s outlook on life, but the Tory’s view must still be taken into account. Politics in this sense, though, relies on being aware of what the opposing view actually is. To be respectful of someone’s challenging worldview, we first have to know what that view consists of. We cannot alter Voltaire’s aphorism to ‘I may hate what you say, but I defend to the death your right to say it (as long as I am in no way made aware of what you said)’. To engage in politics via algorithm, as Facebook leads us to do, is in some sense the same as denying our ‘opponents’ free speech, because from our own subjective experience of events they may as well have not said anything at all. A Chinese mystic might pose the question like this: if a status is posted on Facebook, and Zuckerberg doesn’t let you see it, did it ever actually happen?

I strongly believe that having one’s views challenged in a meaningful way is essential to having thoughtful opinions—and if you disagree with me you ought to be shot. Sartre, fresh from his Resistance work and trying to define the conundrum of the post-fascist human, raises a real-life example in his Existentialism and Humanism. During the war a man was trying to decide whether to stay home and look after his sick mother, or to join the Resistance. He asked around for advice. But, Sartre says, at the moment he chooses whom to ask for advice, he is already in some way making a choice about which option he will eventually take. He knows that if he sees the priest he will be urged strongly one way, and that if he sees his friend he will be urged another. Sartre’s point is the basic existentialist one: at the moment of deciding whom we turn to for opinions, we are already acting on some framework of opinion ourselves. In the modern world, where we engage with political viewpoints that have been cherry-picked by a machine, that first-order structural decision isn’t even being made by us anymore.

All that can happen in this context is an entrenchment of the beliefs we already hold. Suddenly our existing political outlook seems to be the only one, because everyone we see in the cyber coffee house is urging that view onwards. We may learn new things and fight specific causes we weren’t aware of, but it becomes harder to stop and think “actually, maybe welfare should be cut” or “well, what would be the reasons for cutting the minimum wage for disabled people?” Those views may produce a Pavlovian shudder in us, but that’s the point: differing opinions should. Rather than being offended by someone even suggesting such a thing, might there be a reason to engage with a serious argument for it? Or, more broadly, to be aware that there are as many snide jokes about how machine-like and dull the Labour front bench is as there are jokes about how David Cameron went to Eton? Even just being open to those entirely other political atmospheres forces our political horizons to broaden.

Why should our political horizons be broader? Mill, in On Liberty, suggests that allowing opposing views – no matter how awful we find them – and listening to them serves a utilitarian purpose. Firstly, the other side might be right. History is full of minority opinions that in the end turned out to be the more correct or the more useful to society. The ‘minority’ in this case may be the friends from a different part of the political spectrum that have been silenced by the machine—the minority within our personal cyber society. Secondly and more likely, they may be partly right. By being open to their views, we may add parts of their outlook to our own and therefore strengthen our own view. Thirdly, they may be entirely wrong. But even then, by engaging with that ‘wrong’ opinion, we are forced to critically strengthen our own view to show why it should win the day. In each case, ‘progress’ is served. Now often these opposing views come down to subjective value systems, and so Mill’s talk of opinions being ‘right’ and ‘wrong’ seems inappropriate. But if we are already accepting our own view is based on a subjective normative value that isn’t objectively ‘true’, isn’t that more reason to listen to those who we disagree with? We can’t be humble in accepting the normative nature of all values and then somehow argue that our own are the only ones worth listening to.

Of course, this doesn’t just apply to Facebook. Google uses similar systems to show you ‘relevant’ information. Twitter, we might think, allows us a free hand in which views we see. But at the moment of picking whom to follow, we are often already looking for those whose views we share. A quick test of whether your Twitter feed is politically open: count how many times you think “God, that’s an awful idea” and then actually read the article being linked to. But the argument about Twitter is for another day. In the closer circle of ‘friends’ on Facebook, with whom we interact in a more meaningful way, it may be that we are becoming part of our own self-created ‘bubble’. In an era of apparently radical free choice, a hidden machine is selecting which choices it thinks are relevant to us.

If you think I’m wrong, feel free to comment—though obviously I’m not going to actually read what you write.