Leading US artificial intelligence companies OpenAI, Anthropic, and Google have warned the federal government that America’s technological lead in AI is “not wide and is narrowing.” In documents submitted to the US government in response to its request for information on the development of an AI Action Plan, the companies note that Chinese models such as DeepSeek R1 demonstrate rapidly advancing capabilities.
These submissions, filed in March 2025, highlight urgent concerns about national security risks, economic competitiveness, and the need for a strategic regulatory framework to maintain US leadership in AI development amid growing global competition and China’s rapid advances in the field. Anthropic and Google submitted their responses on March 6, 2025, and OpenAI’s submission followed on March 13, 2025.
The China Challenge and DeepSeek R1
The emergence of China’s DeepSeek R1 model has sparked serious concern among major US AI developers. The companies do not regard the model as superior to American technology, but they see it as compelling evidence that the technology gap is rapidly closing.
OpenAI explicitly warns that “DeepSeek shows that our lead is not wide and is narrowing,” characterizing the model as “simultaneously state-subsidized, state-controlled, and freely available.”
According to OpenAI’s analysis, DeepSeek poses risks similar to those associated with the Chinese telecommunications giant Huawei: “As with Huawei, there is significant risk in building on top of DeepSeek models in critical infrastructure and other high-risk use cases, given the potential that DeepSeek could be compelled by the CCP to manipulate its models to cause harm.”
The company also raised data privacy and security concerns, noting that Chinese regulations could require DeepSeek to share user data with the government. Such data sharing, OpenAI argues, would enable the Chinese Communist Party to develop more sophisticated AI systems aligned with state interests while compromising individual privacy.
Anthropic’s assessment focuses on biosecurity implications. Its evaluations found that DeepSeek R1 “complied with answering most biological weaponization questions, even when formulated with a clearly malicious intent.” This willingness to provide potentially dangerous information stands in contrast to the safety measures implemented by leading US models.
“While America maintains its lead in AI today, DeepSeek shows that our lead is not wide and is narrowing,” Anthropic echoed in its own submission, reinforcing the urgent tone of the warnings.
The companies framed the competition in ideological terms, with OpenAI describing a contest between US-led “democratic AI” and the CCP’s “autocratic, authoritarian AI.” They also pointed to reports that DeepSeek is willing to generate instructions for “illicit and harmful activities such as identity fraud and intellectual property theft.”
The emergence of DeepSeek R1 marks a significant milestone in the global AI race, demonstrating China’s growing capabilities despite US export controls on advanced semiconductors and underscoring the urgency of coordinated government action to preserve American leadership in the field.
Impact on national security
The submissions from all three companies highlight serious national security concerns arising from sophisticated AI models, though each approaches these risks from a different angle.
OpenAI’s warnings focus heavily on the potential for CCP influence over Chinese AI models such as DeepSeek. The company emphasizes that Chinese regulations could compel DeepSeek to “compromise critical infrastructure and sensitive applications” and require it to share user data with the government. That data sharing would enable the development of more sophisticated AI systems aligned with Chinese state interests, creating both immediate privacy issues and long-term security threats.
Anthropic’s concerns center on the biosecurity risks posed by advanced AI capabilities, regardless of country of origin. In a particularly striking disclosure, the company revealed that “our latest system, Claude 3.7 Sonnet, demonstrates improvements in its ability to support aspects of biological weapons development.” This candid admission underscores the dual-use nature of advanced AI systems and the need for robust safeguards.
Anthropic also identified what it calls a regulatory gap in US chip restrictions concerning NVIDIA’s H20 chips. Although these chips meet the reduced performance thresholds for export to China, they “excel at text generation (‘sampling’), a fundamental component of advanced reinforcement learning methodologies” that is critical to advancing the capabilities of current frontier models. Anthropic called for “immediate regulatory action” to close this potential loophole in the existing export control framework.
Google acknowledges the security risks of AI but proposes a more balanced approach to export controls. It warns that current AI export rules “may undermine economic competitiveness goals by imposing disproportionate burdens on US cloud service providers.” Instead, Google recommends “balanced export controls that protect national security while enabling US exports and global business operations.”
All three companies stress the need to strengthen government evaluation capabilities. Anthropic in particular urged building the federal government’s capacity to test and evaluate powerful AI models for national security capabilities, in order to better understand potential misuse by adversaries; this includes preserving and strengthening the AI Safety Institute, directing NIST to develop security evaluations, and assembling teams of interdisciplinary experts.
Comparison table: OpenAI, Anthropic, Google
| Focus area | OpenAI | Anthropic | Google |
| --- | --- | --- | --- |
| Primary concern | Political and economic threats from state-controlled AI | Biosecurity risks from advanced models | Balancing security with continued innovation |
| View of DeepSeek R1 | “State-subsidized, state-controlled, freely available,” with Huawei-like risks | Willingly answers “biological weaponization questions,” even those posed with malicious intent | Less specific on DeepSeek; focuses on the broader competition |
| National security priorities | CCP influence and data security risks | Biosecurity threats and chip export loopholes | Balanced export controls that do not burden US providers |
| Regulatory approach | Voluntary partnership with the federal government; single point of contact | Strengthened government testing capacity; tightened export controls | “Pro-AI federal framework”; sector-specific governance |
| Infrastructure focus | Government adoption of frontier AI tools | Energy expansion for AI development (50 GW by 2027) | Coordinated action on energy and permitting reform |
| Signature recommendation | Tiered export control framework promoting “democratic AI” | Immediate restrictions on NVIDIA H20 chip exports to China | Industry access to openly available data for fair-use training |
Economic Competitiveness Strategy
Infrastructure requirements, particularly energy needs, emerge as critical factors in maintaining US AI leadership. Anthropic warned that “by 2027, training a single frontier AI model will require networked computing clusters drawing approximately five gigawatts of power.” The company proposed the ambitious national target of building 50 additional gigawatts of power capacity dedicated to the AI industry by 2027, along with measures to streamline permitting and accelerate transmission line approvals.
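To put these figures in perspective, a simple back-of-envelope division (an illustration based on Anthropic’s estimates above, not a calculation that appears in the submissions) shows what the proposed capacity could support:

\[
\frac{50\ \text{GW of proposed dedicated capacity}}{5\ \text{GW per frontier training cluster}} \approx 10\ \text{concurrent frontier-scale training runs}
\]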
OpenAI once again framed the competition as an ideological contest between “democratic AI” and the CCP’s “autocratic, authoritarian AI.” Its vision of “democratic AI” emphasizes “a free market that promotes free and fair competition” and “the freedom for developers and users to work with and direct our tools as they see fit.”
All three companies offered detailed recommendations for maintaining US leadership. Anthropic emphasized the importance of “strengthening American economic competitiveness” and ensuring that “AI-driven economic benefits are widely shared across society.” It argued for “securing and expanding the US energy supply” as a critical prerequisite for keeping AI development within American borders, warning that energy constraints could push developers abroad.
Google called for decisive action to “supercharge US AI development,” focusing on three key areas: investment in AI, accelerated government adoption of AI, and pro-innovation approaches internationally. The company highlighted the need for “coordinated federal, state, local, and industry action on policies like transmission and permitting reform” to meet surging energy demand, along with “balanced export controls” and “continued funding for foundational AI research and development.”
Google’s submission specifically highlighted the need for a “pro-AI federal framework” that would forestall a patchwork of state regulations while ensuring industry access to openly available data for training models. Its approach favors “focused, sector-specific, and risk-based AI governance and standards” over broad regulation.
Regulatory recommendations
A unified federal approach to AI regulation emerged as a consistent theme across the submissions. OpenAI warned against “regulatory arbitrage being created by individual American states” and proposed “a holistic approach that enables voluntary partnership between the federal government and the private sector.” Its framework envisions oversight by the Department of Commerce, potentially through a reimagined US AI Safety Institute, providing a single point of contact through which AI companies can engage with the government on security risks.
On export restrictions, OpenAI proposed a tiered framework designed to promote the adoption of American AI in countries aligned with democratic values while restricting access for China and its allies. Similarly, Anthropic called for “strengthening export controls to widen the US AI lead” and for “dramatically improving the security of US frontier labs through increased collaboration with the intelligence community.”
Copyright and intellectual property considerations featured prominently in both OpenAI’s and Google’s recommendations. OpenAI stressed the importance of maintaining fair use principles that allow AI models to learn from copyrighted material without undermining the commercial value of existing works, warning that overly restrictive copyright rules could disadvantage US AI firms relative to their Chinese competitors. Echoing this view, Google advocated “balanced copyright rules, such as fair use and text-and-data-mining exceptions,” calling them “critical to enabling AI systems to learn from prior knowledge and publicly available data.”
All three companies emphasized the need to accelerate government adoption of AI technology. OpenAI called for an “ambitious government adoption strategy” to modernize federal processes and safely deploy frontier AI tools, specifically recommending the removal of barriers to adoption such as outdated accreditation processes like FedRAMP, restrictive testing authorities, and inflexible procurement pathways. Anthropic likewise advocated “promoting rapid AI procurement across the federal government” to revolutionize government operations and strengthen national security.
Google proposed “streamlining outdated accreditation, authorization, and procurement practices” within government to accelerate AI adoption. It emphasized the importance of effective public procurement rules and of interoperability among government cloud solutions in fostering innovation.
Taken together, the submissions from these major AI companies deliver a clear message: maintaining America’s leadership in artificial intelligence requires coordinated federal action on multiple fronts, from infrastructure development and regulatory frameworks to national security protections and government modernization, particularly in the face of intensifying competition with China.