The recent focus in election security on the vulnerability of voting machines and voter registration databases is important, but it ignores how our elections are actually being hacked. It’s clear now that social media, and the ads targeted through social media, are divisive, inflammatory and being exploited for political ends by foreign interests. The ad-placement algorithms are expertly gamed, and the ads and memes are produced with reliable results to foment political change and promote nationalist agendas in traditional democracies across Europe and the United States.
A Senate Intelligence Committee investigation into the 2016 presidential election uncovered a systematic pattern where Russian firms purchased Facebook ads to sow division along racial and political lines in the U.S.
Facebook estimates over 120 million people viewed these ads on its platform. Twitter and Google have each found evidence, in their preliminary analyses, of political ad purchases from Russian firms linked to the Internet Research Agency during the 2016 election season as well. Independent research now confirms that fronts for Russian propagandists purchased ads intended to sow discontent in targeted swing-state elections by stoking racial, xenophobic and religious hatred, and by pitting conservative and liberal activists against each other, sometimes in actual protests and counter-protests.
Journalism professor Young Mie Kim captured data from Facebook ads served to real user accounts in the five-week run-up to the 2016 presidential election. Her findings reveal the extent of targeted propagandist political ad campaigns from suspicious groups, i.e., non-FEC-registered dark money and Russian groups, one in six of which were linked to the Internet Research Agency. States in tight political races with big Electoral College stakes were heavily targeted by these suspicious advertiser campaigns; Pennsylvania, Wisconsin and Virginia were targeted more than any other states. Kim’s research found the propagandist ads hit on the core wedge issues that divide the American electorate: abortion, LGBT issues, guns, immigration, nationalism, race and candidate scandals. Targeting was demographic-specific: for example, white voters were served 87 percent of all immigration ads, and Wisconsin voters were served gun ads 72 percent more often than the national average.
While accepting campaign contributions or in-kind assistance from foreign nationals is against U.S. Federal Election Commission (FEC) law, there is nothing illegal about targeted campaigns themselves. In fact, the entire online advertising industry is built around the concept of targeted advertising. It should come as little surprise that firms are using personal data collected from advertising-supported platforms (e.g., almost all social media platforms) to target voters for political purposes. As the expression goes, “If you are not paying for the product, you are the product,” i.e., your data is being bought and sold, and you are being influenced for better or for worse.
Cambridge Analytica developed behavioral models from over 87 million Facebook user profiles (roughly a third of the U.S. electorate) to enable targeted advertising to influence elections for a particular political outcome. The term “psychographics,” used to describe the behavioral profile of individuals in a demographic, is really marketing speak for targeted advertising. We are all targeted with ads online. You know this whenever you shop online and then see ads for that particular item, or similar items, show up on other sites.
The combination of large scale data collection from social media profiles, big data analysis of these profiles including clustering and correlation algorithms, and the ability to micro-target audiences via sophisticated advertising tools such as Facebook’s Custom Audiences will play a central role in influencing the 2018 midterm elections in the U.S.
The question of just how the influence operations were carried out is now coming to light under Congressional scrutiny, and the tactics look startlingly similar to cybercrime campaigns. The Washington Post reports that Russian firms employed micro-targeting technology using Facebook’s Custom Audiences tool, which is designed to let advertisers target specific groups and individuals with specific product or message pitches.
In October 2014, two years before the election, threat researchers from security firm Invincea (now a Sophos company) published a detailed exposé on how nation-state-linked threat groups used micro-targeting tools to target defense and financial firms. Using malware-laced advertising, also known as malvertising, the attackers were able to selectively target specific firms and individuals based on their interests and the profiles collected about them from websites. Interestingly, and perhaps coincidentally, these same tactics were employed by Russian firms in the 2016 campaign.
The connection between micro-targeting tactics used for cybercrime and election influence has not been established to my knowledge prior to this article. However, with this knowledge in hand, we can lean on the body of work done to counter malvertising for clues on how to combat election influence operations.
Micro-targeting your audience
Advertising tools give any organization the ability to micro-target the demographic segment they are trying to reach. In the case of malvertising, one can target specific firms, or individuals interested in particular technologies (e.g., laser radar) or with particular political leanings, based on their likes, shares and browsing history, and then deliver a malicious payload.
On Facebook, you can see or change your profile by visiting this link: www.facebook.com/ads/preferences. On Twitter, see: www.twitter.com/settings/your_twitter_data. Facebook’s single login across multiple sites also gives it the ability to collect data on your browsing activities on other sites.
Your personal profile is linked to an identifier that is stored online and used to deliver targeted advertising. As long as the advertising is benign or innocuous, the implicit deal we make with free Internet sites is that we are willing to tolerate advertising in exchange for free use of online sites.
However, if the advertising is used to deliver malware or campaigns of disinformation from foreign governments, then these platforms have a responsibility to protect users.
Facebook not only maintains your personal profile, but has a detailed history of all the interests you post, like and share on the platform. These preferences can be mined and then used to target ads to an audience ripe for a particular product or message.
Organizations, for better or worse, can target people by building behavioral profiles of users from their collected data, including:
- Geography: people can target specific neighborhoods and states using Geo-IP addressing or zip codes in online profiles.
- Interest-related content: shares or likes on social media platforms, or profiles built by social media platforms can be used to target a person based on their interests.
- Advertising profiles: DoubleClick, Google and other online advertising firms maintain profiles based on where you shop and visit on the Internet and store them in cookies to be used by advertisers to target you.
- Specific organization IP address ranges: specific organizations including .gov, .mil, corporate and university IP address ranges can be targeted with ads by domain or IP address range.
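To make the criteria above concrete, here is a minimal sketch of how a micro-targeting filter might decide which visitors are served a given campaign. The campaign structure, field names and example values are illustrative assumptions, not any real ad platform’s API:

```python
# Hypothetical micro-targeting filter combining interest, geography
# and organization IP-range criteria. All names/values are illustrative.
import ipaddress

CAMPAIGN = {
    "interests": {"guns", "immigration"},                 # interest keywords
    "zip_codes": {"53703", "15213"},                      # targeted neighborhoods
    "ip_ranges": [ipaddress.ip_network("192.0.2.0/24")],  # e.g., one org's range
}

def matches_campaign(profile, campaign=CAMPAIGN):
    """Return True if a visitor profile satisfies any targeting criterion."""
    if campaign["interests"] & set(profile.get("interests", [])):
        return True                                       # interest overlap
    if profile.get("zip") in campaign["zip_codes"]:
        return True                                       # geographic match
    addr = ipaddress.ip_address(profile.get("ip", "0.0.0.0"))
    return any(addr in net for net in campaign["ip_ranges"])

visitor = {"interests": ["guns", "fishing"], "zip": "60601", "ip": "198.51.100.7"}
print(matches_campaign(visitor))  # True: interest overlap on "guns"
```

The point of the sketch is how little it takes: once the profile data exists, a handful of set and range checks is enough to single out a neighborhood, an interest group or a specific organization.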
The tools available to any influencer allow them to target their desired demographic, leveraging the pervasive data collection that social media platforms have built on each of us. We know from cybercrime malvertising campaigns that these tools are highly effective at reaching their intended targets while avoiding the unnecessary exposure that comes with indiscriminately infecting everyone.
Countering the threat
Most cybersecurity professionals naturally shy away from involving themselves in geo-political events. What we’ve learned from 2016 is that cyber exploitation is the tool of choice for influencing elections, and it will remain so unless the cost of using it exceeds the return. As we learn more about the sophistication of our adversaries’ tactics, we need to develop effective countermeasures.
Understanding the tactics employed, such as micro-targeting audiences using advertising tools and social media platforms, allows us to be equally creative in combating foreign influence in otherwise free and fair elections.
As with cybersecurity, the first and most important step is to detect the adversary and know when you are being attacked. Social media companies play a pivotal role here in providing transparency on political ad purchases. The retrospective analysis now being shared with Congress is important to establish the extent of the problem, but going forward, social media platform companies need to provide continuous auditing and transparency of disinformation campaigns. Though these companies employ “screeners,” often unskilled labor in countries such as the Philippines, to filter bad content, those screeners lack the requisite skills to evaluate issues of freedom of speech in the context of the U.S. Constitution and foreign interference. Better would be independent third-party classification of content that a user could consult to determine the origin, authenticity and political nature of an ad (much as we see on TV). For instance, if people knew ad content was developed by a Russian-linked firm from spoofed accounts with a political purpose, would they be as likely to share it, or even show up for protests organized by these fake accounts?
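As a rough sketch of what such third-party classification could look like, the label below records the provenance facts a user would need, analogous to the “paid for by” disclosures on TV. The field names and flagging rule are hypothetical assumptions, not an existing standard:

```python
# Hypothetical provenance label an independent classifier could attach
# to a political ad. Fields and the flagging rule are illustrative.
from dataclasses import dataclass

@dataclass
class AdProvenanceLabel:
    ad_id: str
    payer: str            # who actually paid for the ad
    payer_country: str    # country of origin of the funds
    fec_registered: bool  # whether the payer is an FEC-registered entity
    political: bool       # classified as political content

label = AdProvenanceLabel(
    ad_id="ad-001",
    payer="Example Media LLC",   # fictional payer for illustration
    payer_country="RU",
    fec_registered=False,
    political=True,
)

# Flag political ads from unregistered or foreign payers for the user.
flagged = label.political and (not label.fec_registered or label.payer_country != "US")
print(flagged)  # True
```

Surfacing this label next to the ad, rather than hiding it in a transparency report, is what would let users make the judgment the article describes before sharing.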
As an industry, we’ve developed tools that counter malvertising throughout the supply chain, including holding websites and third-party advertising networks accountable while protecting our customers with detection tools. Tools that identify bot-generated and bot-shared content can also play a pivotal role in countering the proliferation of disinformation. On an individual level, we can control our personal profiles on major platform sites, browse anonymously, or install ad-blocking tools to guard against targeted advertising. While useful for professionals, these tactics rarely scale well to non-security professionals.
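To illustrate the kind of bot-identification tooling mentioned above, here is a deliberately crude heuristic scorer. The features and thresholds are assumptions for illustration; real platform detection systems are far more sophisticated:

```python
# Hypothetical heuristic for flagging bot-like amplification accounts.
# Features and thresholds are illustrative assumptions only.
from datetime import datetime, timezone

def bot_score(account):
    """Crude 0-3 score: higher means more bot-like behavior."""
    score = 0
    age_days = (datetime.now(timezone.utc) - account["created"]).days
    if age_days < 30:                    # very new account
        score += 1
    if account["posts_per_day"] > 100:   # inhuman posting rate
        score += 1
    if account["retweet_ratio"] > 0.9:   # almost never posts original content
        score += 1
    return score

acct = {
    "created": datetime(2016, 9, 1, tzinfo=timezone.utc),
    "posts_per_day": 250,
    "retweet_ratio": 0.95,
}
print(bot_score(acct))  # 2: established account, but inhuman rate and ratio
```

Even simple signals like these, aggregated across an ad campaign’s sharers, can indicate whether amplification is organic or manufactured.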
The upcoming midterm elections will be fiercely contested as Americans go to the polls this November. Expect foreign influence through social media campaigns, if not actual hacking of voting machines and voter registration servers. In our roles as cybersecurity professionals, we need to spotlight how our otherwise free and fair elections are being targeted and our electorate hacked by foreign governments, so that we may educate public officials and the friends and family being targeted, and immunize ourselves against those who would weaken our democratic system of government.
This article is published as part of the IDG Contributor Network.