The National Institute of Standards and Technology (NIST) faces a daunting task: developing robust safety standards for AI by July 2024. The initiative, vital for mitigating AI’s risks, is part of a broader White House strategy to ensure AI systems are free of bias and rogue behavior. However, NIST’s current budget falls well short of what it needs to fulfill these ambitious goals independently, raising concerns about over-reliance on the private sector for expertise and resources.
Recent discussions at the NeurIPS AI conference shed light on the “almost impossible deadline” set for NIST, highlighting the gap between the agency’s resources and the vast funds available to private AI developers like OpenAI, Google, and Meta. While NIST has a long track record of setting standards across diverse fields and recently released an AI risk management framework, independently stress-testing AI systems remains a formidable challenge under its current budget.
Congressional awareness of these limitations has prompted bipartisan calls for greater transparency, along with apprehension about how the agency plans to involve private entities. Lawmakers stress that AI safety research is still in its infancy and that standards must be clear and scientifically sound, voicing concern over the direction NIST is taking under its current constraints.
The situation is further complicated by the secrecy surrounding commercial AI models, which poses significant hurdles to measurement and standardization. Experts argue that NIST should serve as a neutral body for navigating AI risks, but emphasize that it needs substantial support to fulfill that mandate effectively. As the agency solicits external input on evaluating and adversarially testing AI models, it faces the critical challenge of balancing immediate deadline pressures against the long-term objective of developing comprehensive, responsible global standards for AI development.