Artificial Intelligence-Influenced Weapons Require More Regulation

AI-influenced weapons need better regulation

These weapons are error-prone and may hit the wrong targets.

Against the backdrop of Russia’s aggression against Ukraine, the United Nations recently held a meeting to discuss the use of autonomous weapons systems, commonly known as killer robots. These are, in essence, weapons programmed to find a class of targets and then attack a specific person or object within that class, with little human control over the decisions they make.
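
To make that decision logic concrete, here is a minimal, purely illustrative Python sketch of the two-stage selection just described: filter detections down to a class, then match a specific object within it. Every name and value is hypothetical, invented for this article; the point to notice is that nothing in the loop requires a human to confirm the final decision.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    object_class: str   # e.g. "vehicle" or "person"
    signature: str      # sensor-derived identifier for a specific object
    confidence: float   # classifier confidence between 0.0 and 1.0

def select_target(detections, target_class, target_signature, threshold=0.7):
    # Stage 1: keep only detections that fall in the programmed target class.
    candidates = [d for d in detections if d.object_class == target_class]
    # Stage 2: pick the specific person or object within that class.
    for d in candidates:
        if d.signature == target_signature and d.confidence >= threshold:
            return d  # the engagement decision: note, no human confirmation
    return None

scene = [Detection("vehicle", "tank-t72", 0.64),
         Detection("vehicle", "tank-t72", 0.81),
         Detection("person", "unknown", 0.55)]
print(select_target(scene, "vehicle", "tank-t72"))
```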

Russia has been at the center of these talks, partly because of its potential capabilities in this area, but also because its diplomats have obstructed the negotiations on these weapons, arguing that sanctions prevent them from participating properly. Negotiations that were already moving slowly have been slowed down even further by Russia’s stonewalling.

I have been following the development of autonomous weapons and have been involved in the UN debate on this issue for over seven years, and Russia’s aggression has become an unfortunate test case for how artificial intelligence (AI) warfare can and should proceed.

The technology behind some of these weapon systems is immature and error-prone, and there is little clarity about how the systems work or make decisions. Some of these weapons will inevitably hit the wrong target, and competitive pressure may push countries to deploy systems that are not ready for the battlefield.

To avoid the loss of innocent lives and the destruction of critical infrastructure in Ukraine and beyond, we need strong diplomatic efforts to ban and regulate the use of these weapons and, in some cases, the technologies behind them: artificial intelligence and machine learning. This matters because when military operations are going badly, countries may be tempted to reach for new technologies to gain an advantage. One example is Russia’s KUB-BLA loitering munition, which reportedly can identify targets using artificial intelligence.

The data fed into AI-based systems can teach a remote weapon what a target looks like and what to do once it reaches that target. Artificial intelligence technologies built for military use, such as facial recognition tools, have far-reaching effects, especially when used to target and kill, and experts have expressed concern about bringing them into dynamic combat environments. While Russia may have succeeded in stalling the latest talks on these weapons, it is not alone: the US, India and Israel are also fighting regulation of these dangerous systems.
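
The mechanism behind that first sentence is ordinary supervised learning: labeled examples define what a “target” looks like in some feature space, and new sensor readings are matched against what was learned. A self-contained Python sketch with invented toy data, assuming a nearest-centroid classifier for simplicity:

```python
def centroid(vectors):
    # Average of a list of equal-length feature vectors.
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labeled):
    # labeled maps a label to its example vectors; learn one centroid each.
    return {label: centroid(vecs) for label, vecs in labeled.items()}

def classify(model, x):
    # Assign x the label of the nearest learned centroid.
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], x))

# Invented two-dimensional "sensor features"; real systems use far richer data.
model = train({
    "target":     [[0.9, 0.8], [0.8, 0.9]],
    "non-target": [[0.1, 0.2], [0.2, 0.1]],
})
print(classify(model, [0.85, 0.75]))  # prints "target"
```

Anything that merely resembles the training examples gets labeled a target, which is exactly why immature systems hit the wrong ones.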

AI may be best known for its role in cyber warfare, where it can make malware attacks more sophisticated or impersonate trusted users to gain access to critical infrastructure, such as the power grid. But the great powers are also using it to build weapons of physical destruction. Russia has made significant strides in autonomous tanks, machines that run without human operators who could, in theory, override their mistakes, while the US has demonstrated a number of capabilities, including swarms of drones able to destroy a surface ship. Rather than the futuristic robots of science-fiction movies, these systems are built on existing military platforms augmented with AI technology. Simply put, a few lines of code and some new sensors can make the difference between a military system operating autonomously and one under human control. What is more, introducing AI into military decision-making could breed over-reliance on the technology, shaping decisions and potentially escalating conflicts.
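
The “few lines of code” claim is easy to illustrate. The hypothetical Python sketch below shows the same decision cycle with and without a human-in-the-loop gate; flipping a single flag is all that separates the two modes. No real system is depicted.

```python
HUMAN_IN_THE_LOOP = True  # flipping this one flag changes the system's nature

def operator_approves(target):
    # Stand-in for a human operator's review of the engagement decision.
    return input(f"Engage {target}? [y/N] ").strip().lower() == "y"

def engage(target):
    print(f"engaging {target}")  # stand-in for the platform's fire command

def decision_cycle(target):
    if HUMAN_IN_THE_LOOP:
        if operator_approves(target):
            engage(target)
        else:
            print("held fire: operator declined")
    else:
        engage(target)  # fully autonomous path: no human check at all

decision_cycle("vehicle #2")
```

In a real platform this gate would be buried in layers of software, which is part of why autonomy is so hard to verify from the outside.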

AI-based combat may sound like a video game, but last September, according to Air Force Secretary Frank Kendall, the Air Force for the first time used AI to help identify a target, or targets, in a live operational kill chain. In other words, artificial intelligence was used to help identify and kill human targets.

Little information is available about that mission, including whether anyone was killed, what inputs were used to identify the targets, or whether there were errors in the identification process. AI systems have been shown to be biased, particularly against women and people in minority communities, and misidentifications disproportionately harm marginalized and racialized groups.
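
One standard way to quantify that kind of bias is to compare false-positive rates, the share of non-targets wrongly flagged, across demographic groups. A small, self-contained Python sketch using invented records:

```python
from collections import defaultdict

# Invented evaluation records: (group, actually_a_target, flagged_by_system).
records = [
    ("group_a", False, False), ("group_a", False, False),
    ("group_a", False, True),  ("group_a", True,  True),
    ("group_b", False, True),  ("group_b", False, True),
    ("group_b", False, False), ("group_b", True,  True),
]

counts = defaultdict(lambda: [0, 0])  # group -> [false positives, non-targets]
for group, is_target, flagged in records:
    if not is_target:               # only non-targets can be false positives
        counts[group][1] += 1
        if flagged:
            counts[group][0] += 1

for group, (fp, negatives) in counts.items():
    print(f"{group}: false-positive rate = {fp / negatives:.0%}")
# group_a: 33%, group_b: 67% -> the system errs twice as often on group_b
```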

If recent discussions on social media within the AI community are any indication, the developers, mostly from the private sector, who are creating some of the new technologies that militaries are already deploying are largely unaware of their impact. Tech journalist Jeremy Kahn argued in Fortune that there is a dangerous disconnect between developers and the major military powers, including the US and Russia, that are using artificial intelligence in decision-making and data analysis. Developers appear unaware of the dual-use potential of some of the tools they have built and of how militaries could use them in warfare, including against civilians.

There is no doubt that the lessons learned from the current conflict will shape the military technology programs that follow. The United States currently leads the field, but a joint statement issued by Russia and China in early February declared their aim of building a new type of international relations, including their ambition to shape the norms governing emerging technologies, which I believe will extend to the military use of artificial intelligence.

The United States and its allies are setting standards for the responsible military use of artificial intelligence on their own, but often not in dialogue with potential adversaries. In general, countries with more technologically advanced militaries are reluctant to accept any restrictions on the development of AI technologies. This is where international diplomacy comes in: there must be limits on such weapons, and everyone must agree on common standards and on transparency about how the technology is used.

The war in Ukraine should be a wake-up call about the need to govern military technology and rein in AI-enabled tactics so that civilians are protected. The unchecked and potentially rapid development of military AI applications will erode international humanitarian law and the norms that protect civilians. However messy the international system, it is diplomacy, not military force, that will resolve current and future crises, and the next meeting of the United Nations or any other body must move quickly to confront this new era of war.

Source: Branka Marijan, Scientific American, a division of Springer Nature America, Inc.
