Senior artificial intelligence hardware executive Caitlin Kalinowski has resigned from her leadership role at OpenAI after the company’s controversial agreement with the United States Department of Defense sparked intense debate over the military use of advanced AI technologies.
Kalinowski, who led OpenAI’s robotics and consumer hardware division, announced her decision publicly, explaining that the move was motivated by ethical concerns about how artificial intelligence could potentially be deployed under the new partnership with the Pentagon. She emphasized that her decision was based on principle rather than personal disagreements with colleagues at the company.
In her statement, Kalinowski acknowledged that artificial intelligence can play an important role in national security but warned that certain boundaries require careful consideration before the technology is deployed in sensitive areas such as surveillance and autonomous weapons systems. She expressed particular concern that the agreement could open the door to uses of AI involving large-scale monitoring or lethal autonomous systems without sufficient human oversight.

Kalinowski said that surveillance of citizens without judicial oversight and autonomous weapons without direct human authorization were issues that deserved more deliberation before any agreement was finalized. In her view, the partnership was announced too quickly, without clearly defined safeguards governing how the technology could be used.
Despite her concerns, she made it clear that she continues to respect OpenAI’s leadership and its chief executive Sam Altman, noting that she remains proud of the work accomplished by the robotics team during her tenure at the company. She also described her departure as difficult, highlighting the close relationships she had built with colleagues while developing robotics initiatives.
OpenAI confirmed Kalinowski’s resignation shortly after her announcement and defended its partnership with the Pentagon, saying the agreement was designed to allow responsible national security applications of artificial intelligence while establishing strict boundaries on how the technology can be used. The company said its policies prohibit the use of its AI models for domestic surveillance and for fully autonomous weapons systems.
The agreement between OpenAI and the Pentagon allows certain AI tools developed by the company to be deployed within classified government environments. Defense officials have expressed interest in using artificial intelligence to support tasks such as cybersecurity analysis, logistics planning, and intelligence processing, all areas where advanced algorithms can help analyze large volumes of information more quickly than humans alone.
However, the partnership has triggered strong reactions from researchers, technology workers, and digital rights advocates who fear that integrating AI into military systems could accelerate the development of automated warfare or mass surveillance capabilities. Critics argue that clear ethical frameworks must be established before powerful technologies are integrated into national security infrastructure.
The controversy surrounding the agreement intensified after reports revealed that OpenAI’s rival Anthropic had declined a similar proposal from the US government because it sought stronger guarantees preventing the technology from being used for surveillance of citizens or for autonomous weapons systems. The collapse of those negotiations created an opening that ultimately led to OpenAI’s deal with the Pentagon.
The decision has also sparked broader debates across the artificial intelligence industry about how companies should balance commercial opportunities with ethical responsibilities. Many researchers believe that as AI becomes more powerful, developers must establish stronger governance mechanisms to ensure the technology is not misused.

Public reaction to the Pentagon agreement has been significant, with some OpenAI employees and technology professionals expressing concerns about transparency and oversight in large government contracts involving artificial intelligence. The backlash has highlighted growing tensions within the technology sector over the militarization of AI tools and the responsibilities of companies developing frontier technologies.
For OpenAI, Kalinowski’s resignation represents a notable departure from its leadership ranks, particularly as the company continues to expand its robotics ambitions alongside its widely used AI systems such as ChatGPT. Kalinowski joined the organization in 2024 after previously leading augmented reality hardware development at Meta Platforms, bringing significant experience in advanced hardware engineering and emerging technologies.
Her exit comes at a time when the global race to develop powerful artificial intelligence systems is accelerating, with governments and private companies investing heavily in AI capabilities that could reshape industries ranging from healthcare to defense.
The incident has once again underscored the complex ethical questions surrounding artificial intelligence, particularly when advanced technologies intersect with national security priorities. As governments increasingly seek partnerships with private technology companies to strengthen their capabilities, debates about transparency, accountability, and responsible use of AI are expected to intensify across the global technology landscape.