OpenAI Announces Tools to Counter Election Disinformation

OpenAI, a leading artificial intelligence (AI) firm, has announced plans to launch tools to counter disinformation and improve voting information in elections around the globe.

The company, which is behind the popular ChatGPT and DALL-E systems, said in a blog post that it is working to prevent abuse, provide transparency, and improve access to accurate voting information with its AI technology.

The announcement comes at a time when fake news and misinformation pose a serious threat to electoral processes in many countries, including Pakistan, India, the US and the European Union. The World Economic Forum (WEF) has also identified AI-driven misinformation as the “biggest short-term threat” to the global economy in its Global Risks Report.

OpenAI said it wants to ensure that its technology is not used in a way that could undermine the integrity of elections and that its AI systems are built, deployed, and used safely.

“We want to make sure that our technology is not used in a way that could undermine this process. We want to make sure that our AI systems are built, deployed, and used safely. Like any new technology, these tools come with benefits and challenges. They are also unprecedented, and we will keep evolving our approach as we learn more about how our tools are used,” the company said.

To prevent abuse, OpenAI said it implements several safety measures before releasing new systems, such as red teaming, user feedback, and external partnerships. It also said that DALL-E, which can generate images from text, has guardrails to decline requests to generate images of real people, including candidates.

OpenAI also said it is still exploring how effective its tools might be for personalised persuasion and that, until more is known, it will not allow people to build applications for political campaigning and lobbying. Moreover, its new GPTs allow users to report potential violations, and builders are prohibited from creating chatbots that pretend to be real people or institutions. It has also banned applications that could discourage people from participating in democratic processes.

To provide transparency, OpenAI said it is working on several provenance efforts that would attach reliable attribution to the text generated by ChatGPT, and also give users the ability to detect if an image was created using DALL-E 3.

“Early this year, we will implement the Coalition for Content Provenance and Authenticity’s digital credentials — an approach that encodes details about the content’s provenance using cryptography — for images generated by DALL-E 3.”

The coalition, also known as C2PA, is a group of tech companies that aims to improve methods for identifying and tracing digital content. Its members include Microsoft, Sony, Adobe and Japanese imaging firms Nikon and Canon.
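Verifying a C2PA credential properly means parsing the embedded manifest and validating its cryptographic signatures, which requires a dedicated library. As a rough illustration of where such credentials live, the sketch below walks a JPEG file's marker segments and collects APP11 payloads, the segment type in which C2PA serializes its JUMBF-boxed manifest. This is an assumption-laden heuristic, not a verifier: finding an APP11 segment does not prove the content is authentic.

```python
def find_app11_segments(jpeg_bytes: bytes) -> list[bytes]:
    """Walk JPEG marker segments and collect APP11 (0xFFEB) payloads,
    the segment type where C2PA embeds its JUMBF-boxed manifest.
    A sketch only: real verification parses the JUMBF boxes and
    checks the signatures inside the manifest."""
    segments = []
    i = 2  # skip the SOI marker (0xFFD8) at the start of the file
    n = len(jpeg_bytes)
    while i + 4 <= n:
        if jpeg_bytes[i] != 0xFF:
            break  # not positioned at a marker; stop scanning
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS: entropy-coded data begins, stop
            break
        # Segment length is big-endian and includes its own two bytes
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xEB:  # APP11 segment
            segments.append(jpeg_bytes[i + 4:i + 2 + length])
        i += 2 + length
    return segments


# Usage: check a file on disk for candidate provenance segments.
# with open("image.jpg", "rb") as f:
#     print(len(find_app11_segments(f.read())))
```

In practice one would hand these payloads to a C2PA-aware library rather than inspect them by hand, since the trust decision rests entirely on the signature chain inside the manifest.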

To improve access to accurate voting information, OpenAI said it has collaborated with the National Association of Secretaries of State (NASS), and ChatGPT will direct users to the association's authoritative website on US voting information.

ChatGPT, the company added, is increasingly integrating with existing sources of information and will soon provide users with real-time news reporting globally, including attribution and links.
