The Medium is the Manipulation, Part 2: Detecting the AI in Campaign Advertising
Technology Policy Brief #99 | By: Steve Piazza | October 19, 2023
Photo taken from: thenewshouse.com
This series explores the extent to which campaign ads and speeches, as well as the policy frameworks of political candidates, employ deliberate strategies of disinformation and fallacy, not only to discredit political opponents but also to further miseducate the U.S. populace and deepen the national divide. Campaign ads are not in and of themselves policy, but their message reflects a candidate's or party's policy of sorts: namely, how far it is willing to go to get what it wants.
Politicians on both sides of the aisle have railed against the abuse of artificial intelligence (AI) during political campaigns. During a hearing last month of the U.S. Senate Committee on Rules and Administration, members spelled out the harm of disinformation, particularly deepfakes (the use of AI to alter or replace faces and bodies). They also agreed that imposing laws wouldn't be easy. Nonetheless, bills requiring ads to carry disclaimers when AI is used have been introduced in both chambers.
Outside Congress, the Federal Election Commission (FEC) is holding a public comment period as it considers issuing regulations to prevent abuses. But such regulations cannot entirely resolve the problem, since the FEC doesn't have the authority to stop groups like Political Action Committees (PACs) from using AI, let alone users on social media.
Only acts of Congress can do that, and such acts are risky in a political environment akin to a cold war.
Political campaigns are perpetually in search of the most efficient and far-reaching methods to increase their volunteer base, expand direct contact with voters, and raise more funds. Many of these efforts integrate the latest technologies, if only to ensure the other side does not gain an advantage. AI is one of those technologies.
A number of companies actively serve political campaigns and have become a normal part of the landscape. Firms like DSPolitical, Quiller, and Sterling Data have seen considerable success in Democratic campaigns, an indication that Democrats are well ahead in the use of AI.
Yet, since the more popular chatbots and image generators, like OpenAI's ChatGPT (text) and DALL-E 2 (images), have been criticized as catering to the left, right-leaning companies have been inspired to develop products for conservative clients. For example, Targeted Victory's online tools and Tusk Browser's chatbot GIPPR (yes, inspired by former President Reagan's nickname) have been gaining in popularity.
Yet current concerns are not with pragmatic approaches to logistics. AI most certainly provides highly effective mechanisms to process and analyze information, but it also opens avenues for manipulation.
Gathering and organizing data via AI is one thing, but using it to persuade voters can become hazardous.
For example, AI can not only infer a reader's preferences, it can echo what has previously been read inside newly delivered content, supplying the continuity needed to keep the reader under its influence. And even though new methods have been developed to help readers break out of so-called content-recommendation bubbles, new schemes using such AI methods place them right back in.
Perhaps most disconcerting is when AI produces modified images or sound that to the casual observer appear real, yet are certainly not. Images have long been doctored for aesthetic reasons, but now images are being created to stand as primary sources themselves.
Imagine seeing AI-generated footage of a candidate saying something completely false. Or consider receiving a fundraising phone call from a candidate or celebrity that is in reality a completely fabricated audio recording.
Potential threats have already become a grim reality. One example is an ad released by the Republican National Committee showing a barrage of AI-generated apocalyptic images meant to depict what would result if Joe Biden were reelected. Another, produced by the DeSantis campaign, featured AI-generated images of Donald Trump hugging Anthony Fauci.
Some major companies, such as OpenAI and Google, have pledged to prevent deceptive ads from being generated, though with some artistic exceptions. Yet, without any state or federal laws, such efforts can only do so much.
From a consumer standpoint, there are many ways people can protect themselves from being manipulated. It's important to maintain good search practices by cross-checking other sites to verify information and examining photos for irregular spacing and poses. A number of sites claim to expose instances of misinformation online, but it's important to make sure those sites are themselves legitimate. Readers can also be on the lookout for new AI software and add-on tools, such as NYU's Pyrorank, which attempts to break through AI-generated recommendations to provide a larger and, more importantly, more varied set of choices.
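For readers comfortable with a little code, one of these checks can even be partially automated. As a rough, illustrative sketch (and not a tool mentioned in this piece): genuine camera photos usually carry EXIF metadata, while many AI image generators and re-encoding pipelines strip it out, so its absence is a red flag, though never proof, of fabrication. The hypothetical Python helper below only looks for the EXIF marker inside a JPEG file's leading bytes:

```python
def has_exif(jpeg_bytes: bytes) -> bool:
    """Rough check: does this JPEG appear to carry an EXIF segment?

    Many AI image generators and social-media re-encoders strip EXIF
    metadata, so its absence is a red flag (not proof) that an image
    is not an original camera photo. This is a heuristic sketch, not
    a full JPEG parser.
    """
    # JPEG files begin with the SOI marker 0xFFD8; EXIF data lives in
    # an APP1 segment tagged with the ASCII string "Exif\x00\x00".
    # We only scan the first 64 KB, where APP segments normally sit.
    return jpeg_bytes.startswith(b"\xff\xd8") and b"Exif\x00\x00" in jpeg_bytes[:64 * 1024]
```

A result of `False` should prompt exactly the habits described above: reverse-search the image, cross-check other outlets, and scrutinize the picture itself, since metadata can be stripped or faked for entirely innocent reasons.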
For now, the leviathan has been unleashed in an industry that is already enormously profitable: Statista projects $15.2 billion in political ad revenue for 2024. With that much money at stake, it's no wonder talk of passing laws seems more symbolic than real. In an ironic reflection of AI itself, just because protective action appears to be taking place doesn't necessarily mean that it is.
- For a comprehensive look at the use of AI in political campaigns this piece by Tectonica, a group dedicated to changing the political landscape through the use of technology, is worth a read: https://www.tectonica.co/ai_reading_list
- Read more on NYU researchers' work on Pyrorank here: https://www.nyu.edu/about/news-publications/news/2023/july/researchers-devise-algorithm-to-break-through–search-bubbles-.html