- Microsoft and OpenAI’s new fund will support projects countering deepfakes and misinformation in elections.
- The initiative aligns with global efforts, including those from Facebook and Google, to limit AI’s role in election interference.
- New measures include OpenAI’s deepfake detector and grant-backed collaborations with groups such as C2PA and OATS.
Microsoft and OpenAI have launched a $2 million “societal resilience fund” to address the increasing challenges posed by AI and deepfakes in the context of global elections.
This year, an unprecedented 2 billion voters are expected to participate in elections across roughly 50 countries. The fund is part of a broader commitment to safeguard democracy by promoting AI literacy and protecting against misinformation.
The rise of generative AI and its potential misuse in creating convincing but false digital content has prompted major tech firms to adopt measures protecting electoral integrity. These companies, including Microsoft and OpenAI, have pledged to build a common framework specifically addressing deepfakes that could mislead voters. In addition, OpenAI recently launched a deepfake detector aimed at identifying fake content, adding to the tools available to fight disinformation.
As part of this initiative, Microsoft and OpenAI will distribute grants to organizations such as Older Adults Technology Services (OATS) and the Coalition for Content Provenance and Authenticity (C2PA). These funds are intended to improve understanding and literacy regarding AI among U.S. populations over 50 and within vulnerable communities globally.
Teresa Hutson of Microsoft emphasized the company’s ongoing dedication to these efforts, underscoring the importance of collaborating with organizations that share its goals of promoting responsible AI use and education.