Decoding dark ads in political media
This post is the first in a series where I'll be sharing Media Studies essays I wrote at university. I'm posting them here because I genuinely enjoyed the process of writing them (and I'm also feeling a bit nostalgic!).
Media Studies offers valuable insight into how media shapes, and sometimes distorts, our perception of the world. It can be a powerful tool for change and activism, but it's just as easily exploited by bad actors to influence public opinion.
This essay was my final written submission for ATS2240 – The Public Sphere (very interesting concept – I highly recommend looking into it), where I explored the topic of dark ads and their political implications. Please note that I have replaced formal referencing with hyperlinks to better suit the blog format.
Happy reading!
Introduction
When Habermas wrote “The Structural Transformation of the Public Sphere” 60 years ago, he was highly critical of what he saw as the degradation of the public sphere into one dominated by advertising and PR, with big corporations and governments imposing their ideologies. This is striking when we consider our increasingly automated society: data collected on users’ behaviours is fed into algorithms that prioritise engagement over objectivity and literacy. Among the many resulting problems are dark ads.
Dark ads are a type of advertisement whose content changes depending on the user it targets. If not seriously monitored, dark ads pose a serious threat to societies through their potential to sway political processes and fuel polarisation; strict and transparent countermeasures must therefore be taken.
Literature review
Existing literature on dark ads is not abundant. This might be due to the relatively recent rise of this form of political campaigning and the “dark” nature of dark ads themselves; it is also because platform companies like Facebook refuse to disclose further information, making investigations hard to conduct. Still, there are prominent scholars in the field whom I consulted for my work. Frank Pasquale and Mark Andrejevic, for example, have written extensively on the automated and algorithmic sphere; the latter in particular is also a notable scholar of dark ads and their regulation. I also consulted the work of Robbie Fordyce, Nina Li, and Verity Trott on dark ads, and of other researchers and journalists who were investigating dark ads political campaigns at the time. I referred to government and industry reports as well, since political campaigns tend to invite legal consequences and thus prompt government intervention.
Terminology
What are dark ads? There is no doubt that we now live within an automated public sphere, where “the processes of media production and consumption are increasingly automated and algorithmically dictated.” The public sphere is thus developing in exactly the way Habermas feared: instead of rational debate, we have powerful players prioritising profits and dominant ideologies. Dark ads are part of this automated world. As briefly mentioned, they are advertisements that are not visible to the general public: the ad one person sees is different from what another person sees, and it is usually very hard to trace an ad back afterwards. They are advertisements created through algorithms and data harvesting, with “self-reinforcing logics of rapid, vapid, viral dissemination”. This ‘dark’ nature is also a stubborn one, given the clear advantages it offers advertisers: they can micro-target users while avoiding public scrutiny and even prosecution.
Dark ads are ubiquitous across social media platforms, as they can be used in many different ways, be it to promote a product, a service, or an issue. Recent investigations have shed light on this problematic practice, denouncing how Facebook allowed advertisers to target and exclude people. For example, research by ProPublica, a non-profit organisation focused on investigative journalism, found that Facebook let advertisers exclude people from employment ads based on their race, gender, and age. The focus of this essay, however, is specifically on political dark ads, run during campaigns and elections. Political dark ads allow competing parties to target voters based on data from users’ digital footprints, and more often than not this has enabled unlawful practices: unauthorised data harvesting, discriminatory targeting, and disinformation ads, among others. The Cambridge Analytica scandal of 2018 exhibited all of these characteristics and is arguably one of the most significant events in Facebook's history. It also brought dark ads to the forefront, exposing the opaque data collection and user targeting processes that platform companies, parties, and third parties had largely kept in the dark.
The Cambridge Analytica scandal
The scandal began in 2018, when former Cambridge Analytica employee and whistleblower Christopher Wylie claimed that the firm had illegally collected data from more than 87 million Facebook users for psychological analysis. Users were taken to a survey page that prompted them to answer questions and enter their personal data, which was then used to categorise them by personality traits. This data pool was used to assist then-presidential candidate Donald Trump in the 2016 U.S. election by micro-targeting ads at users. Trump and his team were also alleged to have used micro-targeting to discourage Black voters from voting.
This scandal undoubtedly shocked millions of users given its scale and nature. In the broadest terms, data points are collected on a massive scale and fed into algorithms, and users are then targeted with personalised advertising about candidates. Dark ads thus rest on personalisation and fragmentation: just as Netflix draws on everything it knows about a user’s watch history and behaviour to recommend new content, Cambridge Analytica employed this type of targeting in its quest to influence voters.
I now want to draw attention to our central case study, where both dark ads and the Cambridge Analytica firm played a crucial role: the United Kingdom EU Membership Referendum of 2016.
Background
In 2016, then Prime Minister David Cameron decided to hold a referendum asking citizens whether the UK should remain in the EU. The result was in favour of leaving, with a narrow majority of 52%. The campaigning period saw two designated official campaigns on opposite sides: “Remain - Britain Stronger in Europe”, headed by Cameron, and “Vote Leave - Take Back Control”, headed by Boris Johnson. Both sides invested significantly in online targeted advertising, relying on mass collection of user data.
Amid the chaos that ensued, doubts arose over the integrity of both sides’ social media campaigning, especially that of the Leave campaigns. Christopher Wylie, besides exposing Cambridge Analytica’s work in the U.S. election, also testified that the firm played a role in the Brexit “Leave” campaigns. He claimed that a company called AggregateIQ, which allegedly had links to Cambridge Analytica, worked with the two Leave campaigns, Vote Leave and BeLeave. It was with AggregateIQ that Vote Leave spent about 40% of its £7m campaign budget. Wylie claimed in his testimony that AggregateIQ had used the data harvested by Cambridge Analytica to create targeted ads and thereby influence users’ voting decisions. This subsequently led to an investigation by the DCMS Committee, on behalf of the House of Commons of the UK Parliament, into the role of Facebook, Cambridge Analytica, and AggregateIQ and their psychological targeting work. After obtaining the evidence from Facebook, the committee published the ads run by AggregateIQ for the Leave campaigns on 26 July 2018.
The Leave ads
Using data collected from users’ digital footprints, AggregateIQ employed micro-targeting aimed at very specific demographic segments, from animal lovers and tea lovers (including kettle lovers) to people holding sceptical or negative views of immigration or the healthcare system. Some of the ads’ claims were thoroughly debunked as false: for example, ads encouraging a “leave” vote because it would give the NHS (the UK’s healthcare system) the £350m the UK supposedly sent to the EU every week, or because it would stop the UK from giving Turkey £1 billion in funding to join the EU, neither of which was correct.
The data harvesting continued well into the campaign, when the company also ran ads that doubled as data collection tools for insights into users’ voting behaviour. One ad offered football fans the chance to win a £50m prize: to enter the draw, users had to input their name, address, email, phone number, and how they intended to vote in the referendum. Dominic Cummings, director of the Vote Leave campaign, later admitted on his blog that this ad was used to collect data from users, in this case young men who were football fans and usually ignored politics, so they could be “effectively” targeted later.
Unsettlingly, research has shown that this form of psychological targeting is effective at shaping behaviour: people are more likely to be influenced by persuasive content matched to their personalities. At the DEF CON hacking conference, Chris Sumner presented findings that psychological targeting has the power to influence political opinions, and that companies like AggregateIQ and Cambridge Analytica use it in their dark ads campaigns.
Suggestions
The blatant cover-ups by the platform, the parties, and the third-party company AggregateIQ call for stronger solutions. With regard to platforms, recent research by FARE examining seven major platforms found that none of them is transparent enough about the mechanisms underlying their ads. It is thus crucial that platform companies become transparent. After much pressure, Facebook created its Ad Library, which contains information about currently running ads, but still none about how those ads are targeted.
…the non-functionality of the Facebook ad library is unlikely to be an accident, given the demand on the part of advertisers engaging in strategies that rely on dark ads – Trott et al. (2021:766)
This, however, needs to change. With regard to political parties, there also need to be strict regulations on responsibility and on what can be said during campaigns. One such regulation, proposed by the Electoral Commission, would enforce digital imprints: names provided on campaign materials to indicate the party responsible for them.
Discussion
This case was a clear demonstration of a democratic process being challenged and manipulated by powerful players, and it raises a serious problem concerning democracy and truth itself:
Will we lose the ability to form a shared understanding of a political candidate, because each of us receives a completely different set of messages from that candidate? – Andrejevic et al (2022:11)
People are subject to manipulation, and powerful players have grown used to employing it. This extends to practices as unsettling as ads that psychologically target and exclude people, and fear-mongering ads that spread hate and disinformation. It also brings us back to the discussion of the public sphere, where the coffee house model Habermas envisioned is directly challenged. This debate thus helps us understand the importance of a model based on rationality and objectivity, clearly lacking in our current media environment, and where we should be headed in the future to preserve democracy.
Conclusion
I have tried to shed light on the practice of dark ad targeting, drawing on research into the automated public sphere and the Habermasian model, as well as on political dark ads campaigns. The UK referendum of 2016 was used as a case study to demonstrate how dark ads can threaten democratic processes, and thus Habermas’ model of a rational and objective public sphere in which citizens have an equal and free chance to voice their opinions. This suggests that more regulation from lawmakers, and more transparency from politicians and companies, should be promoted.
However, my essay and argument largely rest on case studies and demonstrations that highlight the negative aspects of dark ads. The debate continues as to what the actual implications of dark ads, and of the automated public sphere at large, are for societies and democracies.
Cover image: NordWood Themes