Human and AI Military Education: Is Artificial Intelligence Created in the Imagination of Humanity?


Artificial intelligence is not like us. For all the varied applications of AI, human intelligence is not in danger of losing its most distinctive features to its artificial creations.

However, when AI applications are brought to bear on matters of national security, they are often subject to an anthropomorphic bias that inappropriately associates human intellectual abilities with AI-enabled machines. A rigorous military education in AI should recognize that this anthropomorphism is irrational and problematic, reflecting a poor understanding of both human and artificial intelligence. The most effective way to mitigate this anthropomorphic bias is through engagement with the study of human cognition: cognitive science.

This article explores the benefits of using cognitive science as part of AI education in Western military organizations. Tasked with educating and training personnel in AI, military organizations must convey not only that anthropomorphic bias exists, but also that it can be overcome to enable better understanding and development of AI-enabled systems. Such an understanding would benefit both human operators' perceptions of the reliability of AI systems and the research and development of artificially intelligent military technology.

For military personnel, a basic understanding of human intelligence enables them to properly frame and interpret the results of AI demonstrations, grasp the current nature of AI systems and their potential trajectories, and interact with AI systems in ways grounded in a deep appreciation of human and artificial capabilities.

Artificial Intelligence in Military Affairs

The importance of AI to military affairs is receiving increasing attention from national security experts. Portents of "a new revolution in military affairs" are in full swing, detailing the myriad ways AI systems will change how wars are fought and how militaries are structured. From "microservices" such as unmanned vehicles conducting reconnaissance patrols to swarms of lethal autonomous drones and even spy machines, AI is presented as an all-encompassing, game-changing technology.

As the importance of AI to national security becomes increasingly apparent, so does the need for rigorous education and training for the military personnel who will interact with this technology. Recent years have seen an increase in commentary on this topic, including on War on the Rocks. "Intellectual Preparation for War" by Mick Ryan, "Trust and Technology" by Joe Chapa, and "The Devil You Know" by Connor McLemore and Charles Clark, to name a few, emphasize the importance of education and trust in AI within military organizations.

Because warfare and other military activities are fundamentally human endeavors, requiring the execution of any number of tasks on and off the battlefield, AI applications in military affairs are expected to perform these roles at least as well as humans would. As long as AI applications are designed to perform characteristically human military functions, ranging from possibly simpler tasks like target recognition to more sophisticated tasks like determining the intentions of actors, the dominant standard used to assess their successes or failures will be how humans perform these tasks.

But this poses a challenge for military education: how exactly should AIs be designed, evaluated, and perceived during operation if they are intended to replace, or even accompany, humans? Addressing this challenge means identifying anthropomorphic bias in AI.

Anthropomorphizing AI in Military Affairs

Identifying the tendency to anthropomorphize AI in military affairs is not a novel observation. US Navy Commander Edgar Jatho and Naval Postgraduate School researcher Joshua A. Kroll argue that AI is often "too fragile to fight." Using the example of an automated target recognition system, they write that describing such a system as engaging in "recognition" effectively "anthropomorphizes algorithmic systems that simply interpret and repeat known patterns."

But the act of human recognition involves several cognitive steps occurring in coordination with one another, including visual processing and memory. A person may even choose to reason about the content of an image in a way that has no direct bearing on the image itself but makes sense for target recognition. The result is a reliable judgment of what is seen, even in novel scenarios.

An AI target recognition system, by contrast, relies heavily on its existing data or programming, which may be inadequate for recognizing targets in novel scenarios. This system does not process images and recognize targets within them the way humans do. Anthropomorphizing such a system means oversimplifying the complex act of recognition and overestimating the capabilities of AI target recognition systems.

By framing and defining AI as a counterpart to human intelligence, as a technology designed to do what humans have normally done for themselves, concrete examples of AI are "measured by [their] ability to replicate human mental abilities," as De Spiegeleire, Maas, and Sweijs put it.

Commercial examples abound. AI applications like IBM's Watson, Apple's Siri, and Microsoft's Cortana excel at natural language processing and voice responsiveness, capabilities we measure against human language processing and communication.

Even in the discourse of military modernization, the Go-playing AI AlphaGo caught the attention of high-ranking People's Liberation Army officers when it defeated professional Go player Lee Sedol in 2016. Some Chinese authorities considered AlphaGo's victories "a turning point that demonstrated the potential of AI to engage in complex analysis and devise strategies comparable to those needed to wage war," as Elsa Kania points out in a report on AI and Chinese military power.

But, much like the attributes projected onto the AI target recognition system, some Chinese officials imposed an overly simplified version of wartime strategies and tactics (and the human cognition from which they spring) onto AlphaGo's performance. Indeed, one strategist noted that "Go and war are quite similar."

Just as troubling, the fact that AlphaGo was anthropomorphized by commentators in both China and the United States means that the tendency to oversimplify human cognition and overestimate AI is cross-cultural.

AI researcher Eliezer Yudkowsky succinctly describes the ease with which human abilities are projected onto AI systems like AlphaGo: "Anthropomorphic bias can be classed as insidious: it takes place with no deliberate intent, without conscious realization, and in the face of apparent knowledge." Without realizing it, people inside and outside of military affairs attach human meaning to demonstrations of AI systems. Western militaries should take note.

For military personnel training to operate or develop AI-enabled military technology, it is critical to recognize this anthropomorphic bias and overcome it.  This is best achieved through a commitment to cognitive science.

The Relevance of Cognitive Science

The anthropomorphization of AI in military affairs does not mean that AI always receives high marks. It is now a cliché for commentators to contrast human "creativity" with the "fundamental brittleness" of machine learning approaches to AI, often with a frank acknowledgment of the "narrowness of artificial intelligence." This cautious commentary on AI may lead one to believe that the overestimation of AI in military affairs is not a widespread problem. But as long as the dominant standard by which we measure AI is human abilities, merely acknowledging that humans are creative is not enough to mitigate AI's harmful anthropomorphization.

Even commentary on AI-enabled military technology that acknowledges AI’s shortcomings fails to identify the need for AI education to be grounded in cognitive science.

For example, Emma Salisbury writes in War on the Rocks that existing AI systems rely heavily on "brute force" processing power, yet do not interpret the data "or determine whether it is actually meaningful." Such AI systems are prone to serious errors, particularly when they move outside their narrowly defined domains of operation.

Such shortcomings reveal, as Joe Chapa writes of AI education in the military, that an "important element in a person's ability to trust technology is learning to recognize a fault or failure." If trust is to be established, human operators must therefore be able to identify when AIs are working as intended and when they are not.

Some high-profile voices in AI research echo these lines of thinking and suggest that the cognitive science of humans should be consulted to chart a path toward improving AI. Gary Marcus is one such voice, noting that just as humans can think, learn, and create thanks to their innate biological components, AIs like AlphaGo excel in narrow domains thanks to innate components richly specific to tasks like playing Go.

Moving from "narrow" to "general" AI, the distinction between an AI capable only of recognizing targets and one capable of reasoning about targets within scenarios, requires a deep look at human cognition.

The results of AI demonstrations, such as the performance of an AI-enabled target recognition system, are data. Like the results of human demonstrations, these data must be interpreted. The central problem with the anthropomorphization of AI is that even cautious talk of AI-enabled military technology obscures the need for a theory of intelligence. Interpreting AI demonstrations requires theories that borrow heavily from the best example of intelligence available: human intelligence.

The relevance of cognitive science to military AI education goes far beyond revealing the contrasts between AI systems and human cognition. Understanding the fundamental structure of the human mind provides a baseline from which artificially intelligent military technology can be designed and evaluated. It has implications for the "narrow" versus "general" distinction in AI, the limited usefulness of human-machine confrontations, and the developmental trajectories of existing AI systems.

The key for military personnel is being able to frame and interpret AI demonstrations in a way that can be trusted for both operation and research and development.  Cognitive science provides the framework to do just that.

Lessons for a Military AI Education

It is important that a military AI education not be planned in such detail as to stifle innovative thinking. Some lessons for such an education, however, are readily apparent from cognitive science.

First, we need to reconsider "narrow" and "general" AI. The distinction between narrow and general AI is a distraction: far from dispelling the unhealthy anthropomorphization of AI within military affairs, it merely tempers expectations without generating a deeper understanding of the technology.

The anthropomorphization of AI stems from a misunderstanding of the human mind. This poor understanding is often the implicit framework through which people interpret AI. Part of the misunderstanding comes from taking a reasonable line of thought, that the human mind can be studied by breaking it down into separate capabilities such as language processing, and transferring it to the study and use of AI.

The problem, however, is that these separate capabilities do not represent the fullest understanding of human intelligence. Human cognition is more than these capacities acting in isolation.

Thus, much of AI development proceeds under the banner of engineering: an effort not to artificially recreate the human mind, but to perform specialized tasks, such as target recognition. A military strategist may point out that AI systems do not need to be human-like in the "general" sense; rather, Western militaries need specialized systems that may be narrow but reliable during operation.

This is a serious mistake for the long-term development of AI-enabled military technology. Not only is the "narrow" versus "general" distinction a poor way to interpret existing AI systems, it also clouds their trajectories. The "brittleness" of existing AIs, especially deep learning systems, may persist until a more complete understanding of human cognition is developed. For this reason (among others), Gary Marcus notes that "deep learning is hitting a wall."

A military AI education would not avoid this distinction, but would incorporate a cognitive science perspective that allows trainees to reconsider inaccurate assumptions about AI.

Man-Machine Confrontations Are Poor Indicators of Intelligence

Second, pitting AIs against exceptional humans in domains like chess and Go is seen as an indicator of AI progress in commercial domains. The US Defense Advanced Research Projects Agency joined this trend by pitting Heron Systems' F-16 AI against an expert Air Force F-16 pilot in simulated dogfighting tests. The goals were to demonstrate the AI's ability to learn combat maneuvers while earning the respect of a human pilot.

These confrontations do reveal something: some AIs really excel in certain narrow domains. But the insidious influence of anthropomorphism lurks just below the surface: there are strict limits to the usefulness of human-machine confrontations if the goal is to measure AI progress or better understand the nature of wartime tactics and strategy.

The idea of training an AI to take on a veteran human in a clearly defined scenario is like training humans to communicate like bees by learning the "waggle dance." It can be done, and with practice some humans can dance like bees quite well, but what is the real use of this training? It tells humans nothing about the mental lives of bees, nor does it give them insight into the nature of communication. At best, any lessons learned from the experience will be tangential to the actual dance and better gained by other means.

The lesson here is not that man-machine confrontations are useless. However, while private companies may benefit from commercializing AI by pitting AlphaGo against Lee Sedol or Deep Blue against Garry Kasparov, the benefits to militaries may be less substantial. Cognitive science keeps practitioners grounded in an appreciation of these confrontations' limited utility without losing sight of their benefits.

Human-Machine Teaming Is an Imperfect Solution

Forming human-machine teams might be considered a solution to the problems of anthropomorphizing AI. To be clear, it is worth pursuing as a means of offloading some human responsibility onto AIs.

But the problem of trust, perceived and real, arises once again. Machines designed to take on responsibilities previously borne by human intellect will have to overcome the hurdles already discussed to become trustworthy to human operators; understanding the "human element" still matters.

Be Ambitious But Stay Humble

Understanding AI is not a simple matter. Perhaps it should come as no surprise that a technology named "artificial intelligence" evokes comparisons to its natural counterpart. For military affairs, where the stakes of effectively implementing AI are much higher than for commercial applications, ambition grounded in an appreciation of human cognition is critical to AI education and training. Part of "a basic AI literacy" within the military must include some level of engagement with cognitive science.

Even accepting that existing AI approaches do not purport to be like human cognition, both anthropomorphism and the misunderstandings about human intelligence it entails are prevalent enough among diverse audiences to warrant explicit attention in military AI education. Certain lessons from cognitive science are poised to be the tools with which this is done.
