
2024-06-17

Artificial Intelligence and Fog of War

Done Improperly, Artificial Intelligence May Thicken the “Fog of War” Rather than Improve Situational Awareness

For centuries, technology has been used to improve situational awareness, but the realisation has sometimes fallen short. The 1990s network-centric warfare initiative in the U.S. DoD improved situational awareness at the operational level but neglected the tactical level, which exposed battlefield units to micro-management from above. Tactical Data Links have provided superior connectivity since the 1970s, but the vast array of deployed datalinks has delayed the modernisation of tactical communications, and a wide leap is now needed from formatted messages to Internet Protocol data transfer. Computerisation of battle management has left commanders with screens full of up-to-date, detailed information, but struggling to recognise the essentials in the flood of data. Now, for the first time, technology is introducing a cognitive-level companion for soldiers. Are they ready to trust artificial advice in stressful situations?

Figure 1: Evolution of Military Systems of Systems

Recently, Artificial Intelligence (AI) technology has promised to bring visibility through the “fog on the battlefield”. There are four pitfalls the military should account for when implementing AI in this regard:

1. New technology makes soldiering harder for individuals, although it adds capability;

2. AI introduces a new kind of cognition on the battlefield;

3. Decision-making will be accelerated to machine speeds; 

4. AI will introduce new means of deception on the battlefield.

Let’s look at each of them in more detail.

1. AI Will Make Individual Soldiers’ Jobs Harder Even Though It Increases the Capability of the Force


In the beginning, just flying a fighter was an all-consuming task for pilots. Digitalisation then introduced expert logic to make flying simpler, but at the same time added more sensors and weapon systems. Next, sensors, aerodynamic systems, and weapons were integrated, requiring automation to manage threats, target acquisition, and the flying situation. Systems have since grown more complicated still, including guided weapons and electronic attack in 5th-generation aircraft. The next (6th-generation) fighter will be surrounded by several “autonomous air systems” (UAS) flying in formation alongside it, and the pilot will need to operate an even more complex swarm of platforms, sensors, and weapons. This complexity is beyond human control and needs AI enablement.
Figure 2: Wingman UAS aircraft concept

2. New Kind of Battlefield Cognition

Human understanding, or cognition, has been the ultimate decision-maker in previous wars. We spend long hours at general staff college learning to understand battle and studying different analytical methods to make the best decisions. Implementing AI technology in command and control introduces an “alien cognition”: an AI that has gone through a different military education and does not necessarily follow human morals or values. The AI considers statistical correlations, calculates long chains of probabilities, and optimises over large decision trees, all of which are intuitively impossible for human cognition. The future battlefield requires human (social) cognition and AI cognition to communicate with and understand each other, and thereby to work together in human-machine teams better than either works alone. Commanders need to be educated in dealing with complex issues within human-machine relationships and in building the intuition to recognise when human understanding should yield to machine cognition.
Figure 3: New mixture of cognition on the battlefield
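To make the chained-probability point above concrete, here is a tiny worked example, with entirely hypothetical numbers, of one calculation that routinely defeats human intuition: combining a sensor's accuracy with the base rate of hostile contacts.

```python
# Illustrative numbers only: a sensor flags hostiles with 95% accuracy,
# but only 1% of all contacts are actually hostile.
p_hostile = 0.01               # prior probability a contact is hostile
p_alarm_given_hostile = 0.95   # sensor sensitivity
p_alarm_given_friendly = 0.05  # sensor false-alarm rate

# Bayes' rule: P(hostile | alarm)
p_alarm = (p_alarm_given_hostile * p_hostile
           + p_alarm_given_friendly * (1 - p_hostile))
posterior = p_alarm_given_hostile * p_hostile / p_alarm
print(f"P(hostile | alarm) = {posterior:.2f}")  # ~0.16, not the intuitive 0.95
```

A machine computes the correct 16 per cent instantly; a stressed human tends to anchor on the 95 per cent.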

3. Human-speed vs Machine-speed

The pace of warfighting has increased throughout the history of war. The decision cycle (OODA loop) is getting shorter, and situations are more ambiguous and stressful. Ethical implementation of Artificial Intelligence requires humans in the OODA loop to ensure compliance with the Laws of War and Rules of Engagement. All is well when the situation unfolds at the pace of human understanding. But when hypersonic weapons are guided by artificial intelligence, or an approaching fighter is piloted by an AI optimised for dogfighting, a slowly reacting human in the loop will mean more casualties. Left in autonomous mode, however, the AI may read the situation completely against the mission of the operation and end up slaughtering innocent bystanders. At the same time, today's risk-averse military cultures already keep their defensive systems in automatic mode, and the likelihood of this behaviour will not decline as more semi-automated systems reach the battlefield.
Figure 4: Intelligent hypersonic weapons change the pace of combat
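A rough calculation, using assumed rather than real parameters, shows how little room a hypersonic engagement leaves for a human in the loop:

```python
# Back-of-the-envelope engagement timeline. Every figure here is an
# illustrative assumption, not a real system parameter.
detection_range_km = 100            # assumed sensor detection range
mach5_speed_km_s = 5 * 0.343        # Mach 5 at the sea-level speed of sound

time_to_impact_s = detection_range_km / mach5_speed_km_s
print(f"Detection to impact: {time_to_impact_s:.0f} s")   # ~58 s

# An assumed human command chain (detect, assess, authorise, engage)
# measured in minutes does not fit inside that window.
human_decision_cycle_s = 120
print("Human-in-the-loop feasible:", human_decision_cycle_s < time_to_impact_s)
```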

4. Deception at Machine-speed

Information operations (INFO-OPS) are a significant part of contemporary military operations. INFO-OPS requires massive amounts of data that only Artificial Intelligence can make sense of. AI will direct information effects on individual behaviour with greater sophistication, at larger scale, and at lower cost than ever before. It is already challenging for humans to distinguish deep-fake videos (manipulated real-time videos) from real ones. At the same time, AI-enabled sensors do the primary image recognition on the machine-speed battlefield, and people have already fooled autonomous cars by taping traffic signs so that they appear different to the vehicles' sensors. The cyber, electromagnetic, and physical realms open a variety of attack vectors for misleading machine and human sensors alike.
Figure 5: Deception methods developed against human intelligence analysts do not affect Artificial Intelligence in the same way
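The taped-traffic-sign trick has a well-documented digital counterpart: adversarial perturbation. The sketch below shows the fast gradient sign method (FGSM) against a toy, untrained classifier standing in for a real AI-enabled sensor; the input image and attack strength are invented for illustration.

```python
# A toy fast gradient sign method (FGSM) attack, the classic
# adversarial-example technique against image classifiers.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in sensor image
true_label = torch.tensor([3])

# Compute the gradient of the loss with respect to the input pixels
loss = loss_fn(model(image), true_label)
loss.backward()

# Nudge every pixel slightly in the direction that increases the loss;
# against a trained model, this small change can flip the classification
# while remaining nearly invisible to a human observer.
epsilon = 0.05
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("original prediction:   ", model(image).argmax().item())
print("adversarial prediction:", model(adversarial).argmax().item())
```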

Conclusion

In summary, AI deployment, like any new technology, may lead to mistakes and fatalities in military affairs. Nevertheless, the promise of military effects and impacts on adversary systems drives the development and fielding of AI-enabled sensors, effectors, and integrators. Soldiers need to be trained to understand the new artificial cognition, communicate with it, train it, recognise its strengths and weaknesses, and work with it to win more fights than the adversary does. The future battlefield requires officers with science, technology, engineering, and mathematics (STEM) skills more than ever before.

Sources:
https://warontherocks.com/2020/03/fog-friction-and-thinking-machines/
https://www.popsci.com/future-air-force-fighters-leading-drone-swarms/
https://www.popsci.com/china-drone-swarms/

2023-06-03

The Promise and Peril of Generative Artificial Intelligence (especially ChatGPT 4.0) from a Military Viewpoint

 


Figure 1: Will ChatGPT provide world dominance? (Composed using Canva)

There is an Ongoing Global Competition for Disruption and Gaining a Strategic Advantage

There is an ongoing competition to use Artificial Intelligence and related technologies to gain a strategic advantage among the three military superpowers, actual or aspiring.

  • "Whoever becomes the leader in this sphere [of Artificial Intelligence] will become the ruler of the world." Putin 2017 
  • "Chinese official documents and their enunciation of military doctrine indicate that the country's leaders see massive promise in AI's utility and are working to leverage this emerging technology into their force posture." (Bommakanti, 2020)
  • "Emerging technologies are transforming warfare. The technological innovations expected to play increasingly important roles on future battlefields include artificial intelligence, sensors, unmanned air and ground systems, and cyber capabilities." (Weissmann & Nilsson, 2023)

Did we see one of these disruptions happening before our eyes at the beginning of 2023? The OpenAI product ChatGPT conquered the Internet, reaching one million users in five days (Gartner, 2023) and 100 million monthly active users within two months of launch, and Sam Altman, the CEO of OpenAI, testified to the US Congress that AI could be as big as "the printing press" while acknowledging its potential dangers.

Possibly – but not the way you may first think!

Generative AI Meets the Lower Levels of Bureaucratic Creativity

Since 2015, OpenAI has been delivering breakthroughs in AI algorithms, competing successfully in human games and creating human-like content. Its release of ChatGPT 4.0 has impressed the world with its abilities for conversation and logical text generation. To some users, ChatGPT 4.0 has appeared to be an artificial general intelligence (AGI) with human-like consciousness. Fortunately, that is not the case.

Figure 2: An extract from a discussion with ChatGPT

Generative AI can create content from given data. This content can be delivered in multiple modalities, like text (articles or answers to questions), images (photos or paintings), videos, and 3-D representations (scenes and landscapes for video games). Generated content has won digital-art awards and has scored in or near the top 10 per cent of test-takers on numerous tests, including the US bar exam for lawyers and the math, reading, and writing portions of the SAT, a college entrance exam used in the United States.

ChatGPT is a combination of three functions:

  1. A chatbot user interface in the application, so easy to use that it can create the illusion of chatting with a human counterpart.
  2. A fine-tuned, continuously learning discussion engine. Fine-tuning adjusts the weights of the neural network or adds layers to help the model better understand the nuances of the task.
  3. The GPT model, a complex machine-learning algorithm: a deep neural network with 96 layers managing around one trillion parameters. The statistical large language model (LLM) has been trained on more than 45 terabytes of human-produced text acquired from the Internet (over 300 km of bookshelf space, beyond any human's capacity to read). The LLM has identified text patterns in this vast data and chooses the next word using the learned weights and probabilities (see the sketch below).
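The sketch below makes the mechanism in point 3 concrete: softmax over the model's output scores, then sampling the next word. The four-word vocabulary and the logits are invented; a real LLM scores tens of thousands of tokens using its 96-layer network.

```python
# A toy version of the final LLM step: turn the model's scores (logits)
# over a vocabulary into probabilities and pick the next word.
import numpy as np

vocab = ["advance", "retreat", "hold", "resupply"]
logits = np.array([2.1, 0.3, 1.7, -0.5])    # hypothetical model output

# Softmax converts logits into a probability distribution
probs = np.exp(logits) / np.exp(logits).sum()

# Sample the next token according to the learned probabilities
rng = np.random.default_rng(0)
next_word = rng.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(2))), "->", next_word)
```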

In summary, people use ChatGPT because it is convenient, replies quickly and mostly rationally, and is available 24/7, unlike most human counterparts (based on a ChatGPT answer via the Bearly.ai interface, 3 June 2023). Beyond convenient and amusing private discussions, organisations are seeking ways to benefit from NLP and LLMs. Financial-services giant Morgan Stanley is testing the technology to help its financial advisers better leverage insights from the firm's more than 100,000 research reports. The government of Iceland has partnered with OpenAI in its efforts to preserve the endangered Icelandic language. Salesforce has integrated the technology into its popular customer-relationship-management (CRM) platform.


Military Opportunities with Generative AI

Can ChatGPT provide an advantage in assessing the situation?

"The general who wins the battle makes many calculations in his temple before the battle is fought. The general who loses makes but few calculations beforehand." Sun Tzu 

No. But if military culture enforces rules, the GPT model may be fine-tuned with digitised military rules, policies, doctrines, and tactics, techniques, and procedures, so that an "Officer's Companion" application can warn an officer who is about to deviate from authorised guidance when making a decision.
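As a hedged illustration of such an "Officer's Companion", the sketch below retrieves the doctrine passage most relevant to a draft order so a human (or a downstream model) can check compliance. Plain TF-IDF matching stands in for a fine-tuned GPT model, and the doctrine snippets, draft order, and threshold are all invented.

```python
# Minimal retrieval sketch: find the doctrine rule closest to a draft order.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

doctrine = [
    "Engagement requires positive identification of the target.",
    "Indirect fire near protected sites requires higher-echelon approval.",
    "Detained persons must be treated according to the law of armed conflict.",
]
draft_order = "Request engagement of the contact before positive identification of the target."

vectorizer = TfidfVectorizer().fit(doctrine + [draft_order])
scores = cosine_similarity(
    vectorizer.transform([draft_order]), vectorizer.transform(doctrine))[0]

best = scores.argmax()
print(f"Most relevant rule (score {scores[best]:.2f}):", doctrine[best])
if scores[best] < 0.2:   # arbitrary threshold for this toy example
    print("No covering rule found; escalate for human review.")
```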

Can ChatGPT outmatch an adversary at the strategic or operational level?

"For to win one hundred victories in one hundred battles is not the acme of skill. To subdue the enemy without fighting is the acme of skill." Sun Tzu 

No; the statistical algorithms of large language models do not understand war at its different levels. However, if one trains a model on a large variety of tactical plans, it may be possible to generate combinations of those plans as graphs. Other AI algorithms are more proficient than LLMs in strategy, gaming, and tactical confrontation.

Where, then, may the military use ChatGPT-like generative artificial intelligence?

If we use the military impact model in Figure 3, which illustrates the evolution of the cyber environment, there are several apparent points where the military may benefit from generative AI:

  • The Man-Machine Interface (MMI) may be improved by using chatbots and LLMs to translate text and speech, free soldiers' hands from the keyboard of the battle management system, provide a dutiful companion to ease the anxiety or trauma of a lonely soldier in a trench, or act as a virtual instructor/trainer in the military metaverse.

  • Generating content and establishing virtual relationships for information operations. For example, 

"Russia has operationalised the concept of perpetual adversarial competition in the information environment by encouraging the development of a disinformation and propaganda ecosystem."  

Generative AI provides affordable means to generate disinformation and misinformation and to manipulate people through social media, making it perhaps the furthest-reaching weapon after intercontinental missiles. Furthermore, cyber attackers may adopt it to create more believable phishing emails, generate cyber-attacks, and craft new malware.

  • Writing computer program code. Since computer application programming uses very abstract languages, generative AI may translate applications from one programming language to another, create programs to solve coding problems, simplify code, write documentation, or test code to find failures.

"For many developers, generative AI will become the most valuable coding partner they will ever know." 

As the military becomes more software-defined, generative AI may give armed forces an edge in establishing their own code factories.

  • Generative AI can generate synthetic data based on patterns and relationships learned from actual data (see the sketch after this list). Synthetic data may accelerate the training of other AI algorithms, for example to counter swarming drones, to spot adversary behavioural patterns in clutter faster, or to provide optimisation advice from a smaller number of real data points.

  • Generative AI may enable the military to see a wider variety of options in a tactical situation through "machine hallucinations". Moreover, because generative models draw on statistics from large numbers of data points, they may project historical battleground schemas onto the current tactical situation and expand them with further variations, giving tactical planners broader foresight.

  • Generative AI may be used for 3D object generation to accelerate military metaverse development, wargaming, and simulation. The acceleration may create the next wave of revolution in military education (Chatman, 2009) and training, enabling force generation to run continuous training more affordably and to evolve training content faster than at present. Furthermore, ChatGPT may finally move teaching beyond 2nd-industrial-revolution methods and let instructors focus on critical-thinking and problem-solving skills rather than copying textbooks for answers.
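Returning to the synthetic-data point above, the following minimal sketch shows the core idea: fit a simple statistical model to observed data and sample new points from it. A Gaussian fit stands in for richer generative models (GANs, diffusion models), and the "real" sensor tracks below are fabricated for illustration.

```python
# Minimal synthetic-data sketch: learn the statistics of real data,
# then sample new points to enlarge a training set.
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for real observations: 50 tracks of (speed m/s, altitude m)
real_tracks = rng.normal(loc=[60.0, 120.0], scale=[8.0, 25.0], size=(50, 2))

# "Learn" the pattern: mean vector and covariance of the real data
mean = real_tracks.mean(axis=0)
cov = np.cov(real_tracks, rowvar=False)

# Sample 500 synthetic tracks with the same learned statistics
synthetic = rng.multivariate_normal(mean, cov, size=500)
print("synthetic track (speed, altitude):", synthetic[0].round(1))
```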


Figure 3: Levels of Military Impact and Evolution of Cyber Environment (Mattila, 2022)

The above, possibly surprising, impacts are not easy to achieve, however!

Military Challenges with Generative AI

The adoption of generative AI is proliferating in the commercial and open-source segments, and organisations must address several critical challenges to ensure the success of their generative AI initiatives. These challenges include:

  1. Data Management: Effective data management is critical to the success of generative AI, as models rely on high-quality, well-labelled data. Organisations must ensure that their data is accurate, consistent, correctly labelled, managed securely, and in compliance with relevant regulations.
  2. Model Complexity: Generative AI models can be complex and resource-intensive, requiring significant computing power and technical expertise. Creators must ensure they have the resources and skills to develop and deploy these models effectively. In addition, acquisition organisations may need to verify the function of algorithms before deployment.
  3. Adversarial Vulnerability: Like any analytical model, generative AI has proven vulnerable to deliberate manipulation by sophisticated adversaries. Examples include data poisoning, in which the data used to train the models is manipulated, and adversarial attacks, in which algorithms are fed malicious inputs to defeat AI-enhanced features (see the sketch after this list).
  4. Ethical Considerations: As mentioned earlier, ethical considerations such as bias and privacy are becoming increasingly important in the development and deployment of generative AI. Creators must ensure their models are fair and transparent and that they respect user privacy in peacetime use.
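As one hedged illustration of a mitigation for challenges 1 and 3, the sketch below runs an off-the-shelf unsupervised outlier detector over a training set to flag samples that may be mislabelled or poisoned; the dataset and contamination rate are invented for illustration.

```python
# Data-hygiene sketch: flag suspect training samples with an
# unsupervised outlier detector before they reach the model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
clean = rng.normal(0.0, 1.0, size=(500, 4))    # plausible training samples
poisoned = rng.normal(6.0, 0.5, size=(5, 4))   # injected outliers
dataset = np.vstack([clean, poisoned])

detector = IsolationForest(contamination=0.02, random_state=0).fit(dataset)
flags = detector.predict(dataset)              # -1 marks suspected outliers
print("suspect rows:", np.where(flags == -1)[0])
```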


Nevertheless, the US DoD states that the U.S. public and private sectors cannot afford to pause their artificial intelligence pursuits amid an international race for technological supremacy.

Fear is relative!



Credit: Colin Anderson/Getty Images and The Conversation

A human composed this article, using AI-enhanced Google and Bing searches to find sources; Bearly to summarise the articles found, suggest alternative wording, and hold discussions during the writing process; the Word writing assistant to guess the next word; Canva to create graphics; and Grammarly to proofread the text.

Link to original article in Adobe https://acrobat.adobe.com/link/track?uri=urn:aaid:scds:US:a62996a1-24a6-3cfc-b69c-c7b5fde8088e