Silicon Valley is closer to the world’s militaries than ever. And it’s not just big companies, either – start-ups are getting a look-in as well.
The war in Ukraine has added urgency to the drive to push more AI tools onto the battlefield. Those with the most to gain are start-ups such as Palantir, which hope to cash in as militaries race to update their arsenals with the latest technologies. But long-standing ethical concerns over the use of AI in warfare have become more urgent as the technology grows increasingly advanced, while the prospect of restrictions and regulations governing its use looks as remote as ever.
On 30 June 2022, NATO announced it was creating a $1 billion innovation fund that will invest in early-stage start-ups and venture capital funds developing “priority” technologies such as artificial intelligence, big-data processing, and automation.
The Chinese military likely spends at least $1.6 billion a year on AI, according to a report by Georgetown’s Center for Security and Emerging Technology, and in the US there is already a significant push underway to reach parity, says Lauren Kahn, a research fellow at the Council on Foreign Relations. The US Department of Defence requested $874 million for artificial intelligence for 2022, a figure that does not capture the full extent of the department’s AI investments, according to a March 2022 report.
It’s not just the US military that’s convinced of the need. European countries, which tend to be more cautious about adopting new technologies, are also spending more money on AI, says Heiko Borchert, co-director of the Defence AI Observatory at the Helmut Schmidt University in Hamburg, Germany.
The French and the British have identified AI as a key defence technology, and the European Commission, the EU’s executive arm, has earmarked $1 billion to develop new defence technologies.
Since the war started, the UK has launched a new AI strategy specifically for defence, and the Germans have earmarked just under half a billion dollars for research and artificial intelligence as part of a $100 billion cash injection for the military.
In a vaguely worded press release in 2021, the British army proudly announced it had used AI in a military operation for the first time, to provide information on the surrounding environment and terrain. The US is working with start-ups to develop autonomous military vehicles. In the future, swarms of hundreds or even thousands of autonomous drones that the US and British militaries are developing could prove to be powerful and lethal weapons.
Many experts are worried. Meredith Whittaker, a senior advisor on AI at the Federal Trade Commission and a faculty director at the AI Now Institute, says this push is really more about enriching tech companies than improving military operations.
In a piece for Prospect magazine co-written with Lucy Suchman, a sociology professor at Lancaster University, she argued that AI boosters are stoking Cold War rhetoric and trying to create a narrative that positions Big Tech as “critical national infrastructure,” too big and important to break up or regulate. They warn that AI adoption by the military is being presented as an inevitability rather than what it really is: an active choice that involves ethical complexities and trade-offs.
Despite the steady march of AI onto the battlefield, the ethical concerns that prompted the protests around Project Maven haven’t gone away. Project Maven was a Pentagon attempt to build image recognition systems to improve drone strikes. Google pulled out of the project in 2018 following employee protests and public outrage.
[Further reading: Intelligence agency takes over Project Maven, the Pentagon’s signature AI scheme, C4ISRNET, 27 April 2022]
There have been some efforts to assuage those concerns. Aware it has a trust issue, the US Department of Defence has rolled out “responsible artificial intelligence” guidelines for AI developers, and it has its own ethical guidelines for the use of AI. NATO has an AI strategy that sets out voluntary ethical guidelines for its member nations.
All these guidelines call on militaries to use AI in ways that are lawful, responsible, reliable, and traceable, and to mitigate biases embedded in the algorithms.
One of their key concepts is that humans must always retain control of AI systems. But as the technology develops, that won’t really be possible, says Kenneth Payne, who leads defence studies research at King’s College London and is the author of the book ‘I, Warbot: The Dawn of Artificially Intelligent Conflict’.
“The whole point of an autonomous [system] is to allow it to make a decision faster and more accurately than a human could do and at a scale that a human can’t do,” he says. “You’re effectively hamstringing yourself if you say ‘No, we’re going to lawyer each and every decision’.”
There is a global campaign called Stop Killer Robots that seeks to ban lethal autonomous weapons, such as drone swarms. Activists, high-profile officials such as UN chief António Guterres, and governments such as New Zealand’s argue that autonomous weapons are deeply unethical because they give machines control over life-and-death decisions and could disproportionately harm marginalized communities through algorithmic biases.
Swarms of thousands of autonomous drones, for example, could essentially become weapons of mass destruction. Restricting these technologies will be an uphill battle because the idea of a global ban has faced opposition from big military spenders, such as the US, France, and the UK.
Ultimately, the new era of military AI raises a slew of difficult ethical questions that we don’t have answers to yet.
One of those questions is how automated we want armed forces to be in the first place, says Payne. On one hand, AI systems might reduce casualties by making war more targeted, but on the other, you’re “effectively creating a robot mercenary force to fight on your behalf,” he says.
The above are excerpts taken from an article titled ‘Why business is booming for military AI startups’ published by MIT Technology Review on 7 July 2022. Read the full article HERE.
Featured image: AI Warning: Robot Soldiers Only 15 Years Away From ‘Changing Face’ Of Warfare – Expert, Impact Lab, 28 November 2020