As the US general elections near, OpenAI said in a blog post on January 15 that it will not allow its AI tools to be used for political campaigning or to undermine the democratic electoral process. It also said it will not allow AI chatbots to impersonate political candidates.
OpenAI on AI in elections
These measures follow concerns about potential AI interference in, and influence over, election results globally and in India. OpenAI states in the blog post that it aims to enhance platform safety during the 2024 global elections by promoting accurate voting information and increasing transparency.
The company said that “protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process,” adding, “We want to make sure that our AI systems are built, deployed, and used safely.”
OpenAI also acknowledged concerns about deepfakes and their abuse, as well as chatbots impersonating candidates, adding that it is working to address them.
OpenAI also claimed that it follows a thorough process before releasing new systems, involving red teaming, user engagement, and safety mitigations. DALL-E, its model that generates images from text prompts, has guardrails that refuse requests for images of real individuals, including political candidates. The company prohibits applications for political campaigning, lobbying, or chatbots pretending to be real people or institutions.
The company also said that it is working to provide transparency around image provenance and is testing a provenance classifier for DALL-E images. ChatGPT is also being integrated with real-time news sources to provide global updates with attribution and links, promoting transparency and information balance. OpenAI also expressed a commitment to collaborate with partners to prevent misuse of its tools during the upcoming global elections.
OpenAI attempts to combat misinformation
OpenAI has also claimed that it is implementing measures to combat election misinformation, including prohibiting chatbots that impersonate candidates or institutions. Applications for political campaigning and lobbying are restricted until the effectiveness of personalized persuasion tools is better understood.
It also stated that it prohibits misinformation about voting processes in order to encourage democratic participation. To enhance transparency, it plans to implement digital credentials for images generated by DALL·E, allowing voters to check an image’s origin. It is also experimenting with a tool to detect AI-generated images, which it aims to provide to testers, including journalists and researchers, for feedback.
OpenAI added that it is actively working on several initiatives to enhance the capabilities and safety of its AI systems ahead of the upcoming elections, including:
Labelling AI-Generated Content:
OpenAI plans to implement the Coalition for Content Provenance and Authenticity’s digital credentials for images generated by DALL·E 3. These credentials use cryptography to encode details about the content’s origin, giving users better transparency about where an AI-generated image came from and which tools were used to produce it.
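As a rough illustration of the idea (not OpenAI’s or the coalition’s actual format), a provenance credential can be pictured as a small manifest bound to the image’s hash and signed by the issuer, so that anyone holding the public key can verify both the origin details and that the image has not been swapped. All names and values below are placeholders:

```python
# Conceptual sketch only, not the actual C2PA or OpenAI implementation:
# a provenance manifest bound to the image hash and signed by the issuer.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def make_manifest(image_bytes: bytes, generator: str) -> dict:
    """Record the content hash and the tool that produced the image."""
    return {
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "generator": generator,  # e.g. "DALL-E 3" (illustrative value)
        "claim": "ai_generated",
    }


def sign_manifest(manifest: dict, private_key: Ed25519PrivateKey) -> bytes:
    """Sign a canonical serialization of the manifest."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    return private_key.sign(payload)


def verify_manifest(manifest: dict, signature: bytes,
                    public_key: Ed25519PublicKey) -> bool:
    """Return True only if the signature matches the manifest exactly."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False


# The generator signs at creation time; a viewer later recomputes the image
# hash, compares it with the manifest, and checks the signature.
key = Ed25519PrivateKey.generate()
image = b"...image bytes..."
manifest = make_manifest(image, "DALL-E 3")
signature = sign_manifest(manifest, key)
assert verify_manifest(manifest, signature, key.public_key())
```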
Detecting AI-Generated Content:
The company is experimenting with a new tool designed to detect images generated by DALL-E. The goal is a system capable of identifying AI-generated content. OpenAI intends to make this tool available to a select group of testers, including journalists, platforms, and researchers, to gather feedback and improve its effectiveness.
ChatGPT Citing Sources:
OpenAI is integrating ChatGPT with existing sources of information to give users access to real-time news reporting globally. The integration will include attribution and links, promoting transparency and allowing users to verify the sources of the information ChatGPT provides.
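The blog post does not describe the integration’s internals. As a hypothetical sketch of the general pattern (retrieve recent articles, answer from them, and attach the sources), where `fetch_recent_articles` and `generate_answer` are stand-ins for a real news API and a real model call:

```python
# Hypothetical sketch of an "answer with attribution" pattern; this is not
# OpenAI's implementation. The two helper functions are placeholders.
from dataclasses import dataclass


@dataclass
class Article:
    title: str
    url: str
    snippet: str


def fetch_recent_articles(query: str) -> list[Article]:
    """Stand-in for a news-retrieval step; returns canned data here."""
    return [Article("Example headline", "https://example.com/story",
                    "Example snippet of the reporting.")]


def generate_answer(query: str, context: str) -> str:
    """Stand-in for a model call that answers using only the supplied context."""
    return f"Summary of reporting relevant to: {query}"


def answer_with_citations(query: str) -> str:
    """Retrieve articles, answer from them, and append numbered source links."""
    articles = fetch_recent_articles(query)
    context = "\n\n".join(f"[{i}] {a.title}: {a.snippet}"
                          for i, a in enumerate(articles, start=1))
    answer = generate_answer(query, context)
    sources = "\n".join(f"[{i}] {a.title} - {a.url}"
                        for i, a in enumerate(articles, start=1))
    return f"{answer}\n\nSources:\n{sources}"


print(answer_with_citations("latest election news"))
```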
Furthermore, OpenAI is collaborating with the National Association of Secretaries of State in the United States. Through this partnership, ChatGPT will guide users to CanIVote.org, a trusted source for US voting information, whenever they ask certain procedural election-related questions. This initiative aims to provide users with accurate voting information.
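The announcement does not say how that routing works under the hood. Purely as an assumption for illustration, one minimal way to picture it is a check that appends the referral whenever a question looks like a procedural US voting query:

```python
# Minimal illustrative sketch, assuming a simple keyword check; the partnership
# described above only says procedural US voting questions are directed to
# CanIVote.org, not how that routing is implemented.
VOTING_KEYWORDS = (
    "where do i vote",
    "polling place",
    "register to vote",
    "voter registration deadline",
)

CANIVOTE_NOTE = (
    "For authoritative US voting procedures, see CanIVote.org, "
    "run by the National Association of Secretaries of State."
)


def maybe_add_voting_referral(user_message: str, draft_reply: str) -> str:
    """Append the CanIVote.org referral when the question looks procedural."""
    if any(keyword in user_message.lower() for keyword in VOTING_KEYWORDS):
        return f"{draft_reply}\n\n{CANIVOTE_NOTE}"
    return draft_reply


print(maybe_add_voting_referral("Where do I vote in my county?",
                                "Here is what I found."))
```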
These efforts collectively suggest that OpenAI is attempting to improve transparency, mitigate potential harm, and ensure the responsible use of its AI technologies in global elections. Last year, the drama around OpenAI revealed clashes among different camps over the risks that generative AI poses, and the company is now taking measures to address concerns about misinformation, fake content, and manipulation in the context of critical events like elections.