OpenAI has announced major changes to its agreement with the United States government regarding the use of artificial intelligence in classified military operations. The decision came after strong criticism from users, experts, and technology observers who raised concerns about how AI systems could be used in warfare and surveillance.
The controversy began when news emerged that OpenAI had entered into a partnership with the U.S. Department of Defense. The agreement allowed the potential use of advanced AI tools in sensitive government projects. Many critics feared the technology could be used for mass surveillance or autonomous weapons, sparking widespread debate about ethical limits in artificial intelligence.
On Monday, OpenAI Chief Executive Officer Sam Altman confirmed that the company would update the agreement to include stricter safeguards. One of the most important additions explicitly bans the use of OpenAI systems to spy on American citizens or conduct domestic surveillance. Altman said the company wanted to clear up confusion and reassure the public about how its technology would be used.
He admitted that the original announcement had been rushed and poorly communicated. According to Altman, the situation was complex and deserved a clearer explanation. He said OpenAI’s intention was to prevent larger risks but acknowledged that the deal appeared “opportunistic and sloppy” to many observers.
Under the revised terms, intelligence agencies such as the National Security Agency will not be allowed to use OpenAI’s systems unless additional contract changes are approved. OpenAI also stated that the agreement now includes stronger guardrails than previous classified AI partnerships, including those involving rival company Anthropic.
The debate intensified after a dispute between Anthropic and the Pentagon became public. Anthropic reportedly refused to allow its AI model, Claude, to be used in projects linked to mass surveillance or fully autonomous weapons. Following that disagreement, attention shifted toward OpenAI’s cooperation with the military.
Public reaction was swift. Data from market intelligence firm Sensor Tower showed that the number of users uninstalling ChatGPT increased sharply after the announcement. The daily uninstall rate reportedly rose by around 200 percent compared to normal levels. At the same time, Anthropic’s Claude climbed to the top position on Apple’s App Store rankings.
The incident highlights growing global concerns about the role of artificial intelligence in modern conflict. As governments seek technological advantages, questions remain about how much control private companies should have over powerful AI systems and how ethical boundaries can be enforced.
The Pentagon has not commented publicly on its discussions with Anthropic or the updated OpenAI agreement. However, the episode has already prompted a wider conversation about transparency, responsibility, and the future of AI in military operations.
With rapid advances in artificial intelligence, companies and governments now face increasing pressure to balance innovation with public trust, ensuring that powerful technologies are used safely and responsibly.