Four Silicon Valley executives have been recruited into a specialist tech-focused unit of the US Army Reserves in a bid to “bridge the commercial-military tech gap” and make the armed forces “more lethal.”

According to an official US Army press release, the tech executives have been directly commissioned into senior officer ranks in Detachment 201: The Army’s Executive Innovation Corps, which is being established “to fuse cutting-edge tech expertise with military innovation”.

Under their part-time military roles, the four executives – Kevin Weil, OpenAI’s head of product; Bob McGrew, a former OpenAI head of research who is now advising Mira Murati’s company Thinking Machines Lab; Shyam Sankar, the chief technology officer (CTO) of Palantir; and Andrew ‘Boz’ Bosworth, the CTO of Meta – “will work on targeted projects to help guide rapid and scalable tech solutions to complex problems”.

The press release added that bringing private sector knowledge into the US military in this way will help “make the force leaner, smarter and more lethal”, and that the swearing-in of executives aims “to inspire more tech pros to serve”.

Speaking with Computer Weekly about the development, Elke Schwarz, a professor of political theory at Queen Mary University of London and author of Death Machines: The Ethics of Violent Technologies, said the move smacked of “cosplaying” on the part of the executives, who wore full fatigues during their swearing-in ceremony, and questioned both the ethics and necessity of the arrangement.

“When I saw this, I thought this must be a joke or satire,” she said. “I think people intuitively understand this feels not quite right.”

She also questioned the broader implications of the collaboration, including concerns about the role of technology in lowering the threshold of resorting to violence, and the dangers of embedding high-level tech sector employees directly into the military hierarchy.

“There has been a rhetorical shift in the past five years towards ‘we need to make the military more lethal’, and that ultimately places the priority on killing other people,” she said. “But in recent wars, the people who are usually killed have been civilians, the ones who suffer are civilians.”

Highlighting the UK government’s commitment to making the British military “10x more lethal” via new technologies in its Strategic Defence Review, Schwarz said rhetoric around increasing lethality is often not sufficiently interrogated: “If you implicitly allow a greater number of civilian casualties, and that is packaged into this word ‘greater lethality’, then that’s a real problem. Do these technologies help us fight wars more ethically, or are they just tools for more destruction?”

For Sophia Goodfriend, a cultural anthropologist who examines the impacts of artificial intelligence (AI) on military conflicts in the Middle East, while military leaders can fantasise about algorithmic systems turning killing into “an exact science”, and use the underlying logic to “parse the death and destruction of warfare into something that sounds rational and efficient”, there’s “a lot of critique” over continued civilian harm, and what these weapons will mean in practice on the ground.

“Gaza is a good example of how the proliferation of AI-assisted weaponry, especially targeting, makes it easier for militaries to wage warfare for a longer amount of time, because they can expedite the process of finding targets, selecting them, bombing them and can continue doing that with less and less manpower,” she said, adding this rationalises a protracted and unending mode of warfare.

“The army has relied on various algorithmic systems to produce a record-breaking number of targets. But many of these systems have helped lend a veneer of technical rationality to a military campaign that has been marked by brutal destruction.”

‘An Oppenheimer-like situation’

For Brynt Parmeter, the Pentagon’s first chief talent officer who spearheaded the creation of Detachment 201 after meeting Sankar at a conference in early 2024, the idea of the unit is to establish “an Oppenheimer-like situation” where the executives could serve right away, without leaving their current jobs.

Unlike ordinary reservists, the four executives – now lieutenant colonels – will not be required to undergo basic training, and will have the flexibility to spend some of their roughly 120 annual hours working remotely.

The US Army has also confirmed that the executives will not be deployed in any theatres of war, meaning they will not be personally placed in any life-threatening situations despite their explicit remit to make military technologies even “more lethal”.

It also claimed there is no conflict of interest in having individuals privately employed in senior commercial roles acting as advisers to the military on tech-related subjects, adding that they will have no say in what contracts the US Army makes with the private sector.

Wired editor-at-large Steven Levy has noted that “the expertise they offer, however, seems inseparable from the sectors of AI, [virtual reality] VR and data mining at the centre of their companies’ business models”.

It’s a significant shift to give a select four companies a tonne of power and influence within the Armed Forces
Sophia Goodfriend, cultural anthropologist

He further added that “while these soldiers are serving in a personal capacity, their employers will undoubtedly benefit from the inside-the-perimeter knowledge that they will gather while simultaneously working on military contracts”.

Goodfriend said that from the US military perspective, the move towards more partnerships with both large civilian conglomerates and smaller AI-focused startups represents an attempt to “remodel the military into a more innovative and technologically sophisticated apparatus”.

While Goodfriend noted this endeavour would likely benefit the US military, which has for years been complaining about a “technological lag” behind the private sector, she also highlighted concerns about having such a “tight relationship” between private and public actors in this context.

“The ties between the civilian technology sector and the military are as long and old as the civilian technology sector itself, but this symbolises, I think, a new strategy,” she said. “It’s a significant shift to give a select four companies a tonne of power and influence within the Armed Forces.”

She added that there is clear scope for conflicts of interest when representatives of private companies that already hold hefty contracts with the US military are tasked with pushing even more emerging technologies onto the Armed Forces.

“It’s really important to think and take a critical appraisal of just how effective those systems are, and it’s harder and harder to do that if the people who are also in high-level roles within the US military are executives at those companies,” she said, adding that many AI-powered systems for use in military surveillance and autonomous weapons are quite new and therefore largely unproven on the battlefield.

“It’s also really important to have the safety mechanisms in place to ensure that these systems work correctly, to ensure that they’re deployed without driving human rights abuses, that they work as promised, essentially. And it’s harder to do that if you have these kinds of entanglements between the private companies making the technologies and the people who are deploying them and bringing them into the military.”

No conflict of interest?

As it stands, the companies that employ the executives already receive substantial sums from military contracts, or are otherwise angling for defence-related work.

Palantir, for example, has billions of dollars’ worth of US government deals, including a wide variety of contracts with the US Army for advanced AI systems, while OpenAI recently announced a $200m defence contract to “develop prototype frontier AI capabilities”.

Meta has also recently partnered with US defence tech company Anduril to build augmented and virtual reality technologies for the military, which CEO Mark Zuckerberg said will help “protect our interests at home and abroad”.

Commenting on the partnership, Anduril founder Palmer Luckey – who was fired by Facebook (now Meta) in 2017 over his donations to political groups supporting Donald Trump in the 2016 election – added: “Of all the areas where dual-use technology can make a difference for America, this is the one I am most excited about. My mission has long been to turn warfighters into technomancers, and the products we are building with Meta do just that.”

Thinking Machines Lab is the only firm involved with no active military contracts, having only been launched by a number of former OpenAI employees in early 2025.

Highlighting how venture capital-backed defence firms often bring military specialists and former personnel into their fold to gain both expertise and credibility, Schwarz said Detachment 201 constitutes “the other end of the revolving door”: military organisations are now not only bringing in technologists for the same purposes, but deeply embedding them in the military hierarchy itself.

“Ultimately, they’re gaining access in a way that suggests hierarchical positioning, because coming in as lieutenant colonels, they have to be saluted,” she said, adding that the executives are essentially taking up leadership roles in an organisation with a very specific culture that they have no real experience of being in.

“People who enlist…understand what it means to risk your life and potentially take the lives of others. It’s not taken lightly; people are habituated into these values. However robust they may or may not be, those values are there, so having [these executives] at a distance from that has the potential to create a misunderstanding of the task, the weight of the task and the moral responsibility involved in this.

“They’re not being asked to sacrifice their lives in the same way that other reservists would be, or in the same way that army values would apply to anybody else who enlists and who swears that oath. It makes it a little bit bizarre…why was this deemed necessary?”

I can only assume this is done to consolidate not just financial power, but also positional or hierarchical power directly within a government organisation
Elke Schwarz, Queen Mary University of London

She added that because many tech companies – and particularly Palantir via its Maven Smart System contract – are already so embedded in working with the US military, the creation of this pathway suggests it will offer a distinct level of access and influence to the executives: “I can only assume this is done to consolidate not just financial power, but also positional or hierarchical power directly within a government organisation.

“The access is obviously going to be enlarged, but there’s also a loosening of the boundaries between civilian and non-civilian…and we’re increasingly asked to just believe the proclamation, rather than have it be demonstrated, that we can trust these individuals to display the utmost integrity and that they aren’t prioritising business development.”

Schwarz added that this is particularly “problematic” when the views of those in their companies are taken into account, noting “you can’t necessarily separate that from the individuals that have now joined”.

In a September 2024 interview with CNBC, for example, Palantir CEO Alexander Karp said: “I support inflicting pain…if you touch an American, we will inflict pain on you for generations. That should be the US policy, whether that happens in Gaza, whether that happens in Ukraine.”

Commenting further, Schwarz said: “The C-suite of Palantir in particular is very outspoken in terms of the kind of foreign policy they would advocate for…It’s about domination, it’s about ‘defending the West’. It’s problematic if you just think about what that actually entails.”

‘The business of inflicting violence’

In an op-ed penned for The Free Press, Palantir CTO Sankar – who was integral to recruiting the other three executives – shared similar sentiments to Zuckerberg and Luckey about the need to defend American interests, writing that despite not having any free time between fatherhood, their day jobs “and a dozen other demands”, each of the executives “feel called to serve”.

However, he added that while it would have been “unthinkable for so many tech heavyweights to openly align with the US military” a decade ago, or for the military to so directly “enlist the support of the nation’s business elite”, the “urgency and seriousness” of the current historical moment has created a sea change.

“More and more, the nation’s technologists are realising we face threats to our freedom as serious as any we faced in the 20th century. And they’re rediscovering Silicon Valley’s roots in national defence during the Second World War and Cold War,” he wrote. “But unlike in 1940 or 1960, the architects of American technical dominance today are too often absent from the rooms where national security decisions are made.”

He added that “for the first time in a generation”, the gap between Silicon Valley and Washington is being bridged: “The uniform I’m putting on today is a symbol of gratitude transformed into action; of success converted into service; of understanding that in America’s moment of need, those who can serve, must. The arsenal of democracy needs its architects back. Who else will answer the call?”

In comments provided to Wired, Weil also acknowledged the controversy of their swearing-in: “10 years ago, this probably would have gotten me cancelled. It’s a much better state of the world where people look at this and go, ‘Oh, wow, this is important. Freedom is not free’.”

He added that donning the uniform would also make military personnel more likely to listen to their civilian perspectives: “There’s nothing wrong with being a contractor, but if we’re off supporting an exercise somewhere, it’s different that we’re wearing the same uniform, having taken the same oath.”

While not unprecedented – many Silicon Valley firms have a long-standing history of working with the US military – critics have observed a notable shift in the relationship between the two, characterised by a growing closeness and a willingness on the part of tech firms to be seen openly collaborating with military institutions.

In February 2025, for example, Google dropped its pledge not to use AI for weapons systems or surveillance tools, citing a need to support the national security of “democracies”.

Google – whose company motto ‘Don’t be evil’ was replaced in 2015 with ‘Do the right thing’ – defended the decision to remove these goals from its AI principles webpage in a blogpost co-authored by Demis Hassabis, CEO of Google DeepMind, and James Manyika, the company’s senior vice-president for technology and society.

“There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights,” they wrote on 4 February.

I support inflicting pain…if you touch an American, we will inflict pain on you for generations. That should be the US policy
Alexander Karp, Palantir

“And we believe that companies, governments and organisations sharing these values should work together to create AI that protects people, promotes global growth and supports national security.”

Noting that, in 2018, leaked correspondence revealed that Google executives thought of military AI as a “PR liability”, Goodfriend said that seven years on, “that’s entirely not the case”.

“There’s been a larger cultural shift within Silicon Valley, where something that was once seen as being really bad for business and quite unpopular is now being touted as the ultimate test of patriotism by the tech sector, and that shift can map on to larger political shifts in the United States,” she said. “Silicon Valley has pivoted away from a kind of dyed-in-the-wool liberalism and increasingly embraced militarism and conservative politics in recent years.

“This move is the pinnacle of those political transformations, insofar as you have tech executives proudly taking on leadership roles in the US military. Maybe six years ago, that would have been met with large protests from employees at the companies, but now it’s largely accepted as the status quo.”

Schwarz shared similar sentiments, noting that while many in the tech sector “didn’t want anything to do with the business of inflicting violence, that changed with the Russia-Ukraine conflict”.

She added that after this point, there was a shift in the framing of warfare as morally necessary: “Of course, for Ukraine, it is morally necessary to defend itself, there’s absolutely no doubt about that, but the conflict allowed the discourse to shift from ‘how about not profiteering from war’ to ‘it is morally imperative that we invest our money in defence companies’, so that they can ‘defend democracy’ and various other marketing taglines.”

Noting that Silicon Valley-US military collaboration has historically taken place in the context of war, Schwarz said military-industry partnerships are now increasingly framed as pre-emptive, with emerging technologies being viewed as a deterrent. She added that such perspectives lend themselves to the use of violence over alternative political or diplomatic solutions.

“Deterrence theory is problematic at the best of times, but it really doesn’t shake out in this particular context. Rather, what this is more likely to produce is an expansion of violence,” she concluded.


