Breaking News

The invasion of Ukraine has prompted militaries to update their arsenals – and Silicon Valley stands to capitalise


Silicon Valley is closer to the world’s militaries than ever. And it’s not just big companies, either – start-ups are getting a look in as well.

The war in Ukraine has added urgency to the drive to push more AI tools onto the battlefield. Those with the most to gain are start-ups such as Palantir, which hope to cash in as militaries race to update their arsenals with the latest technologies. But long-standing ethical concerns over the use of AI in warfare have grown more urgent as the technology becomes increasingly advanced, while the prospect of restrictions and regulations governing its use looks as remote as ever.




On 30 June 2022, NATO announced it was creating a $1 billion innovation fund that will invest in early-stage start-ups and venture capital funds developing “priority” technologies such as artificial intelligence, big-data processing, and automation.

The Chinese military likely spends at least $1.6 billion a year on AI, according to a report by Georgetown’s Center for Security and Emerging Technology, and in the US there is already a significant push underway to reach parity, says Lauren Kahn, a research fellow at the Council on Foreign Relations. The US Department of Defence requested $874 million for artificial intelligence for 2022, although, as it noted in a March 2022 report, that figure does not reflect the total of the department’s AI investments.

It’s not just the US military that’s convinced of the need. European countries, which tend to be more cautious about adopting new technologies, are also spending more money on AI, says Heiko Borchert, co-director of the Defence AI Observatory at the Helmut Schmidt University in Hamburg, Germany.

The French and the British have identified AI as a key defence technology, and the European Commission, the EU’s executive arm, has earmarked $1 billion to develop new defence technologies.

Since the war started, the UK has launched a new AI strategy specifically for defence, and the Germans have earmarked just under half a billion for research and artificial intelligence within a $100 billion cash injection to the military.

In a vaguely worded press release in 2021, the British army proudly announced it had used AI in a military operation for the first time, to provide information on the surrounding environment and terrain. The US is working with start-ups to develop autonomous military vehicles. In the future, swarms of hundreds or even thousands of autonomous drones that the US and British militaries are developing could prove to be powerful and lethal weapons.

Many experts are worried. Meredith Whittaker, a senior advisor on AI at the Federal Trade Commission and a faculty director at the AI Now Institute, says this push is really more about enriching tech companies than improving military operations.

In a piece for Prospect magazine co-written with Lucy Suchman, a sociology professor at Lancaster University, she argued that AI boosters are stoking Cold War rhetoric and trying to create a narrative that positions Big Tech as “critical national infrastructure,” too big and important to break up or regulate. They warn that AI adoption by the military is being presented as an inevitability rather than what it really is: an active choice that involves ethical complexities and trade-offs.

Despite the steady march of AI into the field of battle, the ethical concerns that prompted the protests around Project Maven haven’t gone away. Project Maven was a Pentagon attempt to build image-recognition systems to improve drone strikes; Google pulled out of the project in 2018 following employee protests and outrage.

[Further reading: Intelligence agency takes over Project Maven, the Pentagon’s signature AI scheme, C4ISRNET, 27 April 2022]

There have been some efforts to assuage those concerns. Aware it has a trust issue, the US Department of Defence has rolled out “responsible artificial intelligence” guidelines for AI developers, and it has its own ethical guidelines for the use of AI. NATO has an AI strategy that sets out voluntary ethical guidelines for its member nations.

All these guidelines call on militaries to use AI in a way that is lawful, responsible, reliable, and traceable and seeks to mitigate biases embedded in the algorithms.

One of their key concepts is that humans must always retain control of AI systems. But as the technology develops, that won’t really be possible, says Kenneth Payne, who leads defence studies research at King’s College London and is the author of the book ‘I, Warbot: The Dawn of Artificially Intelligent Conflict’. 

“The whole point of an autonomous [system] is to allow it to make a decision faster and more accurately than a human could do and at a scale that a human can’t do,” he says. “You’re effectively hamstringing yourself if you say ‘No, we’re going to lawyer each and every decision’.” 

There is a global campaign called Stop Killer Robots that seeks to ban lethal autonomous weapons, such as drone swarms. Activists, high-profile officials such as UN chief António Guterres, and governments such as New Zealand’s argue that autonomous weapons are deeply unethical because they give machines control over life-and-death decisions and could disproportionately harm marginalized communities through algorithmic biases.

Swarms of thousands of autonomous drones, for example, could essentially become weapons of mass destruction. Restricting these technologies will be an uphill battle because the idea of a global ban has faced opposition from big military spenders, such as the US, France, and the UK.

Ultimately, the new era of military AI raises a slew of difficult ethical questions that we don’t have answers to yet.

One of those questions is how automated we want armed forces to be in the first place, says Payne. On one hand, AI systems might reduce casualties by making war more targeted, but on the other, you’re “effectively creating a robot mercenary force to fight on your behalf,” he says.

The above are excerpts taken from an article titled ‘Why business is booming for military AI startups’ published by MIT Technology Review on 7 July 2022.

Featured image: AI Warning: Robot Soldiers Only 15 Years Away From ‘Changing Face’ Of Warfare – Expert, Impact Lab, 28 November 2020

A Pauline
1 year ago

As I read this article, it comes to mind that these AI military tools could be used by tyrants to control a country’s population without having to worry about whether their soldiers would refuse to fight their fellow countrymen. It is a lot easier to send an army of unemotional and unethical robots. If the Globalists win at world control, there would in theory be no use for war weapons, but a great need to control local populations; drones and robots would do the job. But if the few Globalists have their way, there is always the potential for infighting to become the ONE Leader of the World. So first there is collaboration to subdue the world’s populations (what is left after eugenic weapons), then infighting to rule alone – all better done with drones and robots, which are in a way harder to corrupt, but also hopefully easier to hijack.

LRS
Reply to A Pauline
1 year ago

Agreed. Is it a prologue to them owning private armies?

Augustus
1 year ago

The ghost in the machine will kill us all. Well, those of us who remain after the WEF death cult removes the low-hanging fruit.

Forbury
1 year ago

AI-aided decisions for operating weapons, supposedly unable to be influenced by hackers, countermeasures or flaws, are a recipe for disaster for all living beings. Then prime them with bioweapons that supposedly only incapacitate live opponents. Is this worthy of taxpayer funding?

Rabbi Seamus
1 year ago

Putin threw out central banking, the same as Napoleon and Hitler.

You’ll notice the MSM has the same obsession with the person, Putin, and not the country.

If Europeans are again dumb enough to kill each other because central bankers tell them lies on TV, then by all means defence stocks will defend your portfolio.

LRS
Reply to Rabbi Seamus
1 year ago

I don’t know… the best defence stock – if you have space – is long-shelf-life food. Maybe something to shoot the drones out of the sky.

They just try to keep up the two-sides legend. They have managed to win again and again by financing both sides of the wars.

But I see another pattern I consider more dangerous than their robots: our pets. The DM publishes articles almost every day about dog attacks with fatal outcomes. I know that that sometimes happens, but it either happens weirdly often these days or the DM cherry-picks this news to highlight it.

Two options, both equally bad for us and them:

  1. The dogs go mad because of a change in the environment. If you read Firstenberg’s reports on smart meters, etc., you’ll notice that that level of radiation makes pets ‘crazy’ – behaving weirdly, clearly suffering.
  2. The DM cherry-picks this news because of an agenda targeting your pets – an agenda that was announced openly via the WEF.