OpenAI, the AI research company, announced the formation of a new Collective Alignment team on January 16, 2024. The team, made up of researchers and engineers, aims to integrate public feedback into the development of OpenAI’s products and services. The initiative stems from OpenAI’s stated commitment to aligning AI with broader human values, a vision first pursued through its “Democratic Inputs to AI” grant program, launched in May 2023.
The grant program, which set out to fund experiments in democratic processes for AI governance, backed diverse projects ranging from video chat interfaces to platforms for crowdsourced audits of AI models. OpenAI has made all the code from these projects publicly available, along with project summaries and key takeaways. The effort reflects OpenAI’s strategy of developing AI systems governed by rules grounded in public consensus and ethical considerations.
Despite OpenAI’s efforts to separate these initiatives from its commercial interests, skepticism remains. CEO Sam Altman, along with other top executives, has criticized EU AI regulation, arguing that the pace of AI innovation outstrips the capacity of existing regulatory frameworks. That stance has drawn accusations from rivals such as Meta that OpenAI is trying to shape AI regulation to its own advantage. In response, OpenAI points to its grant program and the Collective Alignment team as evidence of its commitment to open and responsible AI development.
Concurrently, OpenAI faces growing regulatory scrutiny, notably a UK inquiry into its ties with Microsoft. The company has also moved to address EU data privacy concerns by routing its European operations through a Dublin-based subsidiary, a structure that positions Ireland’s Data Protection Commission as its lead EU privacy regulator. In a related move, OpenAI recently announced collaborations to curb misuse of its technology in elections, including provenance techniques for identifying AI-generated content and flagging AI-created images.
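As a rough illustration of how such image flagging can work in practice, the sketch below scans a JPEG for embedded C2PA provenance metadata, the open standard OpenAI has said it will adopt for labeling DALL·E 3 images. This is a minimal presence check written for this article, not OpenAI’s tooling or the official C2PA SDK; the function name has_c2pa_manifest and the byte-level heuristic are illustrative assumptions, and real verification requires cryptographically validating the manifest, not merely detecting it.

```python
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    """Heuristic presence check for C2PA provenance metadata in a JPEG.

    C2PA manifests travel in JUMBF boxes carried by JPEG APP11 (0xFFEB)
    segments, so this walks the JPEG segment list and looks for the
    'jumb' box type and 'c2pa' label inside APP11 payloads. It only
    detects that a manifest appears to be present; true verification
    means validating the manifest's claim signatures.
    """
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":            # not a JPEG: missing SOI marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                # lost marker sync; give up
            break
        marker = data[i + 1]
        if marker == 0xFF:                 # fill-byte padding between markers
            i += 1
            continue
        if marker == 0xDA:                 # start-of-scan: headers are over
            break
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"jumb" in segment and b"c2pa" in segment:
            return True
        i += 2 + length                    # advance past marker and payload
    return False

if __name__ == "__main__":
    print(has_c2pa_manifest(sys.argv[1]))
```

A presence check like this is deliberately cheap: it can triage large volumes of images, with anything flagged handed off to a full C2PA validator, and it says nothing about images whose metadata has been stripped.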