The deepfake era in elections began in the early 2020s, when generative artificial intelligence (AI) became capable of realistically mimicking public figures and elected officials. Tools such as chatbots and image and voice generators can now produce lifelike audio, photos, and videos of nearly anyone, dramatically expanding their reach. Deepfakes have proliferated rapidly, posing a serious threat to democracies: they can distort public opinion and make it harder for voters to make informed decisions at the polls.
The Rise of Deepfakes in Elections
Recent events have drawn attention to the deceptive use of deepfakes in political campaigns. In the run-up to Slovakia's October 2023 election, deepfake audio recordings that appeared to capture Michal Šimečka, leader of the Progressive Slovakia party, discussing raising beer prices and rigging the election went viral on social media. Such tactics can sway voter sentiment and shift results, particularly when deployed shortly before an election.
As the 2024 election approached in the United States, Republican primary contenders, including Florida Governor Ron DeSantis, used AI-generated content in political advertisements. The DeSantis campaign, for example, circulated AI-generated images of former President Donald Trump hugging Dr. Anthony Fauci, a divisive figure among Republicans because of his Covid-19 policies.
The rapid advancement of deepfakes and synthetic media over the past year has raised concerns that ever more sophisticated false communications will enter political contests. In response, legislators at the federal and state levels have introduced measures to govern AI, focusing on deepfakes and manipulated content in elections. Several laws now address the problematic use of AI in political communications, and state and federal regulators are taking steps to counter the issue.
Regulating Manipulated Media
Despite their best efforts to modernise the law, legislators are struggling to balance regulation against the rights to free speech and expression. Manipulated content created for legitimate purposes, such as satire or commentary, must be carefully distinguished from deceptive tactics. Legislators are also weighing whether to focus exclusively on AI-generated media or to regulate all manipulated media.
Regulations must be crafted with precise, well-stated goals. Advocates emphasise that regulating manipulated media is necessary to protect the integrity of the democratic process and to foster an informed electorate. To confront the expanding threat, policymakers have been asked to consider disclaimers, transparency requirements, and, in some cases, outright bans on specific types of manipulated media.
Regulating manipulated media also raises ethical questions about First Amendment rights and protected expression. Crafting rules that balance free speech against combating deception requires careful thought, with an emphasis on achieving regulatory goals without unreasonably limiting other forms of communication.
Artificial intelligence has transformed how politicians interact with voters in political campaigns, raising ethical questions about the potential to sway public opinion. The application of big data and machine learning in elections has evolved from forecasting legislative outcomes to targeting individual voters, creating a dark side of political AI that threatens our democracies.
Threatening the Stability of Democracy
The use of political bots on social media platforms is one of the main problems. Disguised as ordinary human users, these autonomous accounts disseminate fake news and propaganda, distorting public opinion. Swarms of bots penetrated online spaces during events such as the Brexit referendum and the 2016 US presidential election, accounting for roughly 25% of election-related Twitter activity and drowning out criticism on social media. These bots even deterred voters on the opposing side, igniting worries about voter manipulation and the potential weakening of free and fair elections.
The moral conundrum is compounded by the use of AI to influence individual voters. In the US presidential election, a carefully engineered micro-targeting campaign used big data and machine learning to sway votes through psychological manipulation. The operation's covert nature and its customised, emotionally charged messaging highlighted AI's potential for abuse in political settings. The manipulative character of such campaigns, deployed to deceive the public rather than inform it, poses a greater threat than the technology itself.
The unprecedented scope of computational propaganda raises concerns about the stability of the political system. Free and fair elections are essential to representative democracies, yet the electoral process itself is seriously threatened by the improper use of artificial intelligence. It is increasingly clear that AI in politics requires a human-centred approach, focused on solutions that serve voters rather than control them.
AI’s Dual Nature: Threats and Opportunities
Though there are worries about AI's detrimental effects on politics, there are also opportunities for ethical uses. Political bots, for example, can be trained to refute false information, and micro-targeting campaigns can be designed to educate voters about political issues rather than play on their emotions. AI can also improve communication between constituents and their elected officials, ensuring that a range of viewpoints is heard.
Addressing these ethical issues will require stronger regulation, along with firmer guidelines for algorithmic accountability and data protection. The difficulty, though, is that regulation evolves more slowly than the technology. The hope is that legal frameworks for AI in politics will uphold democratic principles while still allowing voters to choose their leaders freely.
All of this underscores how urgent it is for policymakers to address the problems that manipulated media and deepfakes pose during elections. Regulation is widely regarded as necessary to defend democratic processes, but striking a balance between protecting democracy and preserving free expression requires carefully defining objectives, considering which types of media are covered, and choosing appropriate regulatory techniques.