OpenAI Updates Policies Against Election Misinformation

Bans AI tool use for impersonation, campaigns, and voter suppression efforts.
Image: Adobe Stock/Contxto


Yesterday, a TikTok deepfake video sparked concerns about potential election misinformation. In response, OpenAI updated its policies to prohibit the use of its AI tools, including ChatGPT and DALL·E, for impersonating political candidates or local governments. The new rules also bar the use of these tools for political campaigning, lobbying, discouraging voting, or misrepresenting the voting process.

OpenAI’s stance against election misinformation has strengthened, aligning with the efforts of the Coalition for Content Provenance and Authenticity (C2PA). The coalition, which includes major players such as Microsoft and Adobe, aims to combat misinformation by attaching verifiable provenance credentials to digital content, including AI-generated images. OpenAI plans to embed C2PA’s digital credentials into images created by DALL·E, making it simpler to identify them as AI-generated.
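For a sense of what those credentials look like in practice: C2PA manifests are typically embedded in JPEG files inside APP11 marker segments (as JUMBF boxes). The sketch below is a hedged heuristic, not a real validator; `has_c2pa_segment` is a hypothetical helper, and production code should use an actual C2PA SDK, which also verifies the cryptographic signatures.

```python
import struct

def has_c2pa_segment(jpeg_bytes: bytes) -> bool:
    """Heuristic check: walk the JPEG's marker segments and look for an
    APP11 (0xFFEB) segment whose payload mentions 'c2pa', the label under
    which C2PA manifests are commonly embedded. This only detects the
    presence of a manifest; it does not validate its signature."""
    if jpeg_bytes[:2] != b"\xff\xd8":  # must start with SOI marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:      # lost sync with marker structure
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xD9:             # EOI: end of image
            break
        # Segment length is big-endian and includes its own two bytes.
        length = struct.unpack(">H", jpeg_bytes[i + 2 : i + 4])[0]
        payload = jpeg_bytes[i + 4 : i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:  # APP11 segment
            return True
        i += 2 + length
    return False
```

A simple way to exercise it is to assemble a minimal fake JPEG (SOI, one APP11 segment containing the `c2pa` label, EOI) and confirm the helper flags it while a bare SOI/EOI file passes clean.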

The initiative also extends to directing U.S. voting queries to CanIVote.org, a reliable source of voting information. However, these measures are still rolling out and rely heavily on user reports to identify misuse. Given how quickly AI capabilities evolve, the effectiveness of these steps in mitigating election misinformation remains uncertain, and media literacy remains a crucial defense against misinformation.
