By Richard Forbes.
Cambridge Analytica has made headlines for the past week, having allegedly used data collected from Facebook and online surveys to profile and, ultimately, manipulate voters in recent elections in favour of prominent right-wing candidates.
This story has dominated Canadian news media because Cambridge Analytica’s whistleblower, Christopher Wylie, has a connection to Canada as a B.C. resident.
In the mid-noughties, Wylie also worked on the Dion and Ignatieff campaigns for the Liberal Party of Canada as an early cheerleader for microtargeting.
At the centre of this controversy is ‘psychographics’, the practice of segmenting a target audience by psychological attributes. Claims of both its efficacy and its impropriety have been outrageously exaggerated – by sensationalist journalists hoping to lend urgency to their story and by boardroom blowhards overselling their marketing analytics.
Contrary to the recent howls from alarmists, psychographics are not the insidious and shadowy dark magic they’ve been made out to be. The real benefit of psychographics is not in ‘altering’ the perspectives of voters, many of whom are committed partisans, but helping campaigns understand who their predisposed and ‘likely’ voters are and what policies and messages will motivate them to vote.
Psychographics are (mostly) harmless (sorta)
The story of Canada’s own flirtations with psychographics begins with electoral boffin Patrick Muttart. As Lawrence Martin explains in Harperland, Muttart was an early champion of psychographics and microtargeting in the federal Conservative ranks.[1] According to Martin, it was Muttart’s psychographic post-mortem of the 2004 general election results that inspired the Harper government’s future penchant for boutique tax credits. Muttart’s analysis showed these tax credits were viewed favourably by likely Conservative households because of their visibility and the direct association between each credit’s subject and routine family expenses. However, the Parliamentary Budget Office (PBO) consistently found that these tax policies – despite being distributed like candy throughout the Harper years – were inefficient, costly, and largely favoured wealthy Canadians, proving that a policy’s popularity doesn’t necessarily translate into effectiveness.
As a tool for campaign intelligence and targeted advertising, psychographic analysis simply tells campaigns who their ‘base’ is and helps parties make the best case for themselves to targeted audiences. But as a tool for policy development, psychographics trades the economic soundness of policies for electability. It’s fair to conclude, then, that psychographics harms parliament far more than the campaign trail.
All of the subsequent Harper campaigns – 2006 through 2015 – made heavy use of psychographics. Even today, the Responsive Marketing Group, the voter contact firm the Harper Conservatives contracted for robocalling in thousands of riding campaigns, still boasts on its website that it uses “transactional, psychographic, demographic and behavioural models” to identify potential new donors. Under the heading ‘telefundraising,’ the firm also claims it “builds a model of individuals most likely to give over the phone using transactional, psychographic, behavioural and demographic overlays.”
Beyond the buzzwords, though, is a wimpy electoral track record. If psychographics were such electoral dark magic, Stephen Harper ought to have been able to muster a convincing majority government against someone who wasn’t Michael Ignatieff. Psychographics isn’t about growing a party’s tent; it’s about solidifying a voting ‘base’ and getting more money out of it. Indeed, the legal case against Cambridge Analytica actually centres on U.S. election law’s ban on the participation of foreign consulting firms – a ban Cambridge Analytica allegedly violated by participating in recent U.S. elections as a shell company for London’s SCL Group. The “Analytica” story is thus as much a campaign finance story as it is a canary in democracy’s coal mine.
The next generation of Cambridge Analyticas
The real potential for the psychological manipulation of democratic institutions lies not in profiling voters, but in ‘making’ them. Central to the hustings of an election is the psychological phenomenon researchers call “the bandwagon effect”: a well-recognized cognitive preference for whatever is popular, which drives voters to winning campaigns and moves polls with an almost mad frenzy. The bandwagon effect is as old as the tulip bubbles and railway manias that once led Charles Mackay to deride “popular delusions” in his mid-nineteenth-century work on crowd psychology. Undecideds swing with the movement of the polls, especially poll results that subvert expectations. They like a winner. To that end, artificial intelligence and sophisticated inbound interactive voice response (IVR) technology have the potential to foster the appearance of voter support – enough to convince other, real voters and exploit the power of the bandwagon effect in ways unanticipated by today’s federal election laws.
A current and practical example is the use of political ‘bots’, which have been known to disrupt social media platforms, plump up a politician’s follower count, retweet and distribute (often untrue) messages to targeted audiences, gain access to Facebook accounts, collect personal data, and target opponents online. Putin’s ‘Troll Army’ (or ‘Troll Factory’, if you prefer) is a rather extreme example of an organized cyber-attack; in the last U.S. presidential election, “quasi-government trolls” flooded social media with automated pro-Trump messages, outnumbering pro-Clinton messages at an outrageous and clearly manufactured rate. But I foresee this kind of voter manipulation continuing, not just on the cyber-warfront between foreign powers, but as part of the routine, everyday arsenal of local political operators.
Simply googling “buy fake twitter followers” leads you down a wild rabbit-hole of providers willing to sell you automated Twitter followers (and to skirt Twitter’s own terms of service). A batch of two thousand followers – the same number that Conservative leader Andrew Scheer was accused this past week of buying to grow his own account’s numbers – will set you back about $46 CAD. These accounts – many devoid of profiles, activity, and any semblance of reality – are generally discernible, and they remain dormant, loyal followers of their purchasers. CBC recently reported that Doug Ford’s leadership campaign may also have used a team of ‘bots’ to circulate its ‘#crookedchristine’ message across Twitter against leadership hopeful Christine Elliott. The interview concluded on a cautionary note, speculating that the use of bots may be even more common, albeit harder to identify, on Facebook.
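Those tell-tale traits – empty profile, no activity, a default avatar – are exactly what crude bot-detection heuristics key on. A minimal sketch in Python (the account fields and thresholds here are invented for illustration, not drawn from any real detection tool or the Twitter API):

```python
def bot_score(account):
    """Score an account 0-3 on the tell-tale traits of a purchased
    follower: no bio, no tweets, and a default avatar. Higher means
    more bot-like. (Field names are hypothetical.)"""
    score = 0
    if not account.get("bio"):                 # empty profile
        score += 1
    if account.get("tweet_count", 0) == 0:     # dormant, never tweets
        score += 1
    if account.get("default_avatar", True):    # no real photo
        score += 1
    return score

followers = [
    {"bio": "", "tweet_count": 0, "default_avatar": True},    # likely bot
    {"bio": "Dad, runner, Leafs fan", "tweet_count": 812,
     "default_avatar": False},                                # likely human
]
print([bot_score(f) for f in followers])  # → [3, 0]
```

Real platforms use far richer signals (posting cadence, network structure, content similarity), but even this toy version shows why a batch of purchased followers is, as noted above, “generally discernible.”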
The legality of this voter and media manipulation remains uncertain as regulators work to ‘catch up’ with technological developments inconceivable a decade ago. Elections Canada prepared a discussion paper last August musing about the possibility of a registry for automated accounts, akin to the existing voter contact regime that requires the disclosure and registration of calling service providers during elections.
Transparency is the key to fostering the earnest self-regulation of campaign conduct, but the limitations of this plan are apparent. Pursuing regulation through the Canada Elections Act lets parties off the hook for registering the purchase of bots outside of election time – i.e., 95% of the time. Defrauding the public with ‘fake’ voters ought to be treated on the same level as criminal offences like fraud, deceptive marketing, and misrepresentation, or electoral offences like impersonation. If anything, we ought to require the disclosure of all consulting and analysis firms that campaigns hire, not just those cold-calling voters. For instance, the Harper campaign team refused for several days in 2015 to confirm reports from anonymous insiders that the campaign had hired controversial Australian political consultant Lynton Crosby of the Crosby Textor Group, in part because his reputation for developing wedge political issues would have further sullied the campaign’s image.
I also anticipate we’re not far from the prospect of AI cost-effectively directing the manipulation of public polling: a service that firms could sell to campaigns desperate for better polling numbers in the heat of an election. I call this theoretical tactic a ‘reverse push poll.’ Instead of a fake poll manipulating real voters, a reverse push poll would use fake voters to manipulate real polls. Individual pollsters are vulnerable to artificial intelligence in different ways owing to differences in methodology. Online pollsters, especially those with their own open, in-house respondent pools, often don’t take enough steps to ensure their respondents are human. IVR pollsters, meanwhile, can be forgiven for not anticipating a sophisticated, multi-channel inbound IVR system responding to their own outbound IVR callers with ‘fake’ respondents programmed to answer predictable horse-race poll questions and pollute research data. That is to say: a robo-caller polling another robo-caller.
The real challenge in manipulating IVR polling isn’t the artificial intelligence required (it exists) but harvesting a large enough bank of phone numbers – by my estimation, around 1.16M in the Canadian context – to significantly contaminate the caller lists that major pollsters have purchased, such that any random selection from a list would in all likelihood pick up a sizable crop of these numbers. A few lines of code could activate a farm of fake phone numbers this size, with a central AI fielding questions from different pollsters and ultimately boosting a party’s polling numbers by 6-9.6% – enough to swing a party’s political fortunes in many circumstances. The pollsters least vulnerable to fake respondents would be those that still use live telephone polling or that outsource their online panels to reputable research companies like Research Now, which recruit by invitation and screen their online panelists.
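The arithmetic behind that contamination estimate can be made concrete with a toy simulation. In the Python sketch below, only the 1.16M fake-number figure comes from the text; the 10M purchased-list size, the 30% true support level, and the n = 1,000 sample are my own illustrative assumptions:

```python
import random

def simulate_poll(list_size, fake_count, true_support, sample_size, seed=0):
    """Simulate one horse-race poll drawn from a contaminated caller list.

    Real respondents back Party X with probability `true_support`;
    fake (AI-operated) numbers always give the scripted answer for Party X.
    """
    rng = random.Random(seed)
    total = list_size + fake_count
    support = 0
    for _ in range(sample_size):
        # Draw one number uniformly from the padded caller list.
        if rng.randrange(total) < fake_count:
            support += 1                 # fake respondent: scripted answer
        elif rng.random() < true_support:
            support += 1                 # real respondent who backs Party X
    return support / sample_size

clean = simulate_poll(10_000_000, 0, 0.30, 1000)
dirty = simulate_poll(10_000_000, 1_160_000, 0.30, 1000)
print(f"clean poll: {clean:.1%}, contaminated poll: {dirty:.1%}")
```

Under these assumptions the fakes make up roughly 10% of the padded list, so expected measured support rises from 30% to about 37% – a boost within the 6-9.6% range suggested above, before a pollster does any screening at all.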
Manipulation through the proliferation of automated voices in the political arena will present itself as one of the great challenges to modern democracy. Astroturfing at this scale has undermined, and will continue to undermine, our trust in other citizens, leaders, news media, analysts, and our democracy overall in new and creatively unethical ways. It’s in that sense that I’m not worried about Cambridge Analytica per se, or the targeted marketing of voters, but about the iceberg that prospectively lies beneath. The ability to manufacture personal support is a harbinger of worse things to come: the advent of political hyperrealities where old cognitive prejudices towards the bandwagon are exploited, and our own perceptions of political support and controversy cannot be trusted.
[1] Lawrence Martin, Harperland (Toronto: Penguin Group, 2010), 94.
Richard Forbes studied Political Science and Philosophy at the University of Waterloo, where he won the Peter Woolstencroft Prize in Canadian Politics (2015).
When asked what ‘one does exactly’ with said degree, he laughs and politely declines to answer. A perfect night for him involves a cup of Lady Grey, writing and a re-run of Yes Minister.