Eight years ago, Vladimir Putin declared that whoever leads in artificial intelligence (AI) would dominate the globe. One might think Western tech sanctions following Russia’s invasion of Ukraine had put an end to his vision of AI supremacy by 2030. That conclusion may be premature. Just last week, the Chinese research group DeepSeek released R1, a rival to OpenAI’s top-tier reasoning model, o1. Strikingly, R1 matches o1’s performance while using far less computing power, at a fraction of the cost. Not surprisingly, one of Putin’s recent strategic moves was to team up with China on AI development. The unveiling of R1 is well timed, coming just as Donald Trump threw his support behind OpenAI’s $500 billion Stargate initiative, which aims to outpace its competitors. OpenAI has identified DeepSeek’s parent company, the hedge fund High-Flyer Capital Management, as a formidable rival, and at least three Chinese laboratories claim they can match or exceed OpenAI’s achievements.
In anticipation of stricter US chip sanctions, Chinese firms stockpiled essential processors to keep advancing their AI models despite restricted hardware access. DeepSeek’s success demonstrates the resourcefulness born of necessity: lacking vast data centers and the most powerful specialized chips, it made significant progress through better data curation and model optimization. Unlike proprietary platforms, R1’s code is open source, allowing skilled users to adapt it to their needs. Its transparency has limits, however: overseen by China’s internet regulator, R1 adheres to “core socialist values.” If users raise topics such as Tiananmen Square or Taiwan, the conversation reportedly shuts down.
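To make concrete what that openness means in practice, here is a minimal sketch of downloading and running R1’s published weights locally, assuming the Hugging Face transformers library and one of the smaller distilled checkpoints (the checkpoint name is an assumption; the full R1 model is far too large for consumer hardware):

```python
# Minimal sketch: anyone can fetch R1's openly published weights and run
# or fine-tune them locally. Assumes the Hugging Face `transformers`
# library; the checkpoint name below is an assumed distilled variant.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Once the weights are local, no server-side filter sits between the user
# and the model; that is the crux of the open-source debate.
prompt = "Explain why distillation reduces inference cost."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point of the sketch is structural rather than technical: once weights are downloaded, the publisher no longer mediates how the model is used or modified.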
The emergence of DeepSeek’s R1 sharpens the ongoing debate over AI’s trajectory: should the technology be kept under the control of a handful of large corporations, or open-sourced to stimulate global innovation? One of the Biden administration’s final measures was to restrict open-source AI on national security grounds, the fear being that highly capable AI in the wrong hands could pose serious threats. Trump later reversed this decision, arguing that restricting open-source development stymies innovation. Open-source advocates such as Meta argue that recent AI breakthroughs owe much to a decade of freely shared code. Still, the dangers can’t be ignored. Last February, OpenAI disabled accounts tied to state-sponsored hackers from China, Iran, Russia, and North Korea who had used its tools for phishing and malware campaigns. By the summer, it had stopped providing services in those countries.
The US may retain the upper hand through tighter control of key AI hardware, though this risks stifling competition. OpenAI, for its part, offers “structured access”: users interact with its models through a controlled interface, while the underlying code and weights stay with the company. DeepSeek’s accomplishment, meanwhile, suggests that open-source AI can ignite innovation through ingenuity, not just raw computing power. The tension is plain: open-source AI democratizes the technology and propels progress, but it also opens doors to abuse. Resolving this conflict between innovation and security will require a global cooperative effort to avert misuse.
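The contrast with the open-weight sketch above is worth spelling out. Under structured access, every query flows through the provider’s servers, where it can be monitored, filtered, or cut off entirely. A rough illustration, assuming the openai Python SDK and an illustrative model name (this is not OpenAI’s own definition of the term):

```python
# Sketch of "structured access": the weights never leave the provider;
# every request passes through an API the provider can monitor,
# rate-limit, or revoke. Assumes the `openai` SDK and a valid
# OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": "Summarize the open-source AI debate."}],
)
print(response.choices[0].message.content)

# Unlike locally held weights, this account can be disabled at any time,
# as OpenAI did for the state-linked abusers mentioned above.
```

This gatekeeping is precisely what allowed OpenAI to shut down the abusive accounts; it is also what open-source advocates argue concentrates too much power in one company’s hands.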
The AI race is as much about global influence as technological superiority. Putin has called on developing countries to band together to challenge US technological leadership. Absent worldwide regulations, the headlong push for AI dominance carries tremendous risks. It is prudent to heed Geoffrey Hinton, the AI pioneer and Nobel laureate, who warns that the dizzying pace of advancement heightens the possibility of disaster. In the race to control AI, the greatest threat isn’t falling behind; it’s losing control altogether.