"Human out of the Loop" Military Systems

From SI410
Revision as of 01:17, 28 November 2011 by Guo (Talk | contribs)

The Loitering Attack Munition (LAM) is capable of flying over a defined area and autonomously seeking out targets.

"Human out of the loop" military systems are those that are capable of full operation without the involvement of a human operator. Such systems can also be referred to as "man out of the loop" or "fully autonomous." Although this term can be applied to any autonomous military system, it is more commonly applied to systems that are capable of operating in a offensive or defensive manner, as opposed to autonomous military reconnaissance systems. This term is likely not to be used to describe semi-autonomous military systems, such as Predator UAVs, which are incapable of preforming many critical operations without operator input. (Back to index)


First "Human out of Loop" Military Systems

During WWII, Norbert Wiener and other engineers were working on a new weapon that required no human intervention. The weapon was an anti-aircraft cannon that was able to (quoted from source):

  1. Perceive the presence of an airplane
  2. Gather information about its speed and trajectory
  3. Predict its future position a few seconds later
  4. Decide whether to aim and when to fire the shell
  5. Carry out that decision.
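The five steps above form a sense-predict-decide-act loop. The following is a minimal sketch of that loop, not the actual WWII fire-control design; the constant-velocity target model, the function names, and all numeric thresholds are invented for illustration.

```python
# Hypothetical sketch of the five-step fire-control loop described above.
# A real system would use radar measurements, ballistics tables, and far
# richer target models; every value here is illustrative only.

def predict_position(position, velocity, dt):
    """Step 3: extrapolate a constant-velocity target dt seconds ahead."""
    return tuple(p + v * dt for p, v in zip(position, velocity))

def fire_control_loop(detection, shell_flight_time=3.0, max_range=5000.0):
    # Step 1: perceive the presence of an airplane.
    if detection is None:
        return None  # nothing to engage

    # Step 2: gather information about its speed and trajectory.
    position, velocity = detection

    # Step 3: predict its future position a few seconds later.
    aim_point = predict_position(position, velocity, shell_flight_time)

    # Step 4: decide whether to aim and when to fire the shell.
    distance = sum(c * c for c in aim_point) ** 0.5
    if distance > max_range:
        return None  # predicted intercept out of range; hold fire

    # Step 5: carry out that decision (here: emit a firing solution).
    return aim_point

# Example: a target 1 km east at 500 m altitude, closing at 100 m/s.
solution = fire_control_loop(((1000.0, 0.0, 500.0), (-100.0, 0.0, 0.0)))
```

The sketch makes Wiener's point concrete: once detection, prediction, and the fire decision are all mechanized, no step in the loop requires a human.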

After completing this weaponry project, Wiener stated that the emerging field of "Human out of the Loop" military systems (referred to during WWII as cybernetics) had "enormous potential for good and for evil." [1]

Current Trends and Media Portrayal

Within the past decade there has been a tremendous increase in the number of autonomous military systems being developed and deployed into areas of conflict.[2] Unlike the previous generation of semi-autonomous systems, which were designed to take action only when cleared by a human operator (human in the loop), many of these newer systems are capable of using lethal force without operator input. Currently, nearly all "human out of the loop" systems operate in a defensive role, in which their autonomous use of lethal force is justified by their deployment in situations where human operators cannot react quickly enough to take meaningful action.

Popular culture tends to portray "Human out of the Loop" systems negatively, with films such as the Terminator series, I, Robot, and Eagle Eye depicting scenarios in which something goes horribly wrong and the machines we built to protect ourselves turn against us.

Sea-Based Automated Defenses, Information and Ethicality of

US Phalanx Close-In Weapons System (CIWS)

Of the automated defensive weaponry systems, the sea-based variants are perhaps the most likely to function as intended with minimal accidental human casualties. The open ocean is a relatively simple environment, free from the structures, terrain, and other non-ship objects that can confuse automated tracking systems. Because of this simplicity, sea-based automated defenses can operate with a relatively simple fire control system, and they were among the first modern automated defenses to be developed.[3]

As with all other automated defense systems, the possibility of error is still present. Such systems are designed to identify, track, and destroy incoming enemy missiles (automatically) and aircraft (typically with human input). The true ethical issue with these sea-based systems lies in their ability to automatically destroy incoming projectiles. The primary problem with an automated response is that incoming friendly aircraft are quite capable of flying in a manner that could be mistaken for an incoming projectile. If the wrong circumstances aligned, such as malfunctioning tracking systems, disabled Friend-or-Foe systems, or environmental interference, such a system could fire on friendly forces.
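The misidentification hazard can be made concrete with a toy classifier. This is not the logic of any real CIWS; the decision rule, threshold values, and function name are all invented to illustrate how a low, fast, transponder-silent friendly aircraft produces the same track as a sea-skimming missile.

```python
# Illustrative sketch (not any real CIWS logic): a naive classifier that
# flags any fast, low, closing contact as a hostile projectile when no
# Friend-or-Foe reply is received. All thresholds are hypothetical.

def classify_contact(closing_speed_mps, altitude_m, iff_response):
    """Return 'engage' or 'hold' for a tracked contact."""
    if iff_response == "friendly":
        return "hold"  # Friend-or-Foe transponder answered
    # With no IFF reply, fall back on kinematics alone.
    if closing_speed_mps > 250 and altitude_m < 100:
        return "engage"  # profile matches a sea-skimming missile
    return "hold"

# A friendly jet with a disabled transponder, flying low and fast at the
# ship, is kinematically indistinguishable from a missile to this rule.
verdict = classify_contact(300, 50, "none")
```

The point of the sketch is that the engage/hold decision ultimately rests on sensor data that friendly aircraft can accidentally reproduce, which is exactly the failure mode the incidents listed later exhibit.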

Currently Deployed Systems:

Stationary Land-Based Automated Defenses, Information and Ethicality of

US Patriot Missile Defense System

Much like their sea-based counterparts, stationary land-based defenses can automatically identify, track, and destroy incoming threats. Unlike the sea-based defenses, however, these systems operate within a far more complex environment and have been developed to engage more types of targets, such as rockets and mortars. Of ethical note is the increased probability, relative to sea-based systems, that these systems will fire mistakenly, owing to the general increase in human activity over land. Also noteworthy is the possibility of collateral damage even in the best-case scenario of the destruction of a hostile target. Many of these systems have precautions to prevent their own expended ordnance from falling back to earth and causing unintended damage.[4] However, because all of these systems destroy their targets mid-flight, it is quite possible for debris from the target to rain down on innocents below. These automated systems currently do not account for such circumstances, and a perfectly functioning system could indirectly kill innocents who happened to be in the path of falling target debris.
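The scale of the debris problem can be estimated with simple ballistics. The sketch below is a back-of-the-envelope calculation under assumptions chosen for illustration (no air drag, debris retains the target's horizontal speed); the function name and input values are hypothetical.

```python
# Rough estimate (ignoring drag) of where debris from a mid-air
# intercept lands. All inputs are illustrative, not data about any
# deployed system.

def debris_fall_point(intercept_altitude_m, horizontal_speed_mps, g=9.81):
    """Return (fall_time_s, downrange_drift_m) for debris that keeps the
    target's horizontal speed and free-falls from the intercept point."""
    fall_time = (2 * intercept_altitude_m / g) ** 0.5
    return fall_time, horizontal_speed_mps * fall_time

# Debris from an intercept at 500 m altitude, moving 200 m/s
# horizontally, takes about 10 s to fall and drifts roughly 2 km.
fall_time, drift = debris_fall_point(500.0, 200.0)
```

Even this drag-free simplification shows that debris from a successful intercept can scatter over an area kilometers from the intercept point, which is precisely the circumstance the text notes these systems do not account for.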

Currently Deployed Systems:

Mobile Land-Based Automated Defenses, Information and Ethicality of

Israeli Trophy Active Protection System intercepting an RPG.

Due to their mobile, vehicle-mounted nature, their operation in extremely complex environments, and their intended targets, these are the most ethically ambiguous automated defense systems.[5] They are typically mounted on high-value ground vehicles, such as main battle tanks, that are likely targets of RPGs, missiles, or other anti-vehicular rounds. These vehicles commonly operate within populated areas and are often supported by friendly infantry. These conditions matter because of how the systems destroy incoming enemy rounds: by detonating missiles near the projectile, or by firing shotgun-like rounds that pepper the incoming projectile with shrapnel.

Such systems would likely be deactivated when operating near friendly forces, yet this may not be the case when operating in urban environments, where such vehicles are far more likely to be attacked. As such, if fired upon, these vehicle-mounted systems could cause a great deal of harm to bystanders while attempting to defend the vehicle.[6]

Currently Deployed Systems:

Reported Incidents Involving Automated Weaponry:

  • During a 2007 South African National Defence Force firing exercise, a MK5 automated anti-aircraft cannon malfunctioned and opened fire on friendly soldiers, killing nine and seriously wounding fourteen.[7]
  • In 2003, a US Patriot missile site automatically fired upon a British fighter jet, killing both pilots.[8] Days later, another Patriot site locked onto a US fighter, causing the pilot to fire upon and destroy the site’s radar.[9]
  • In 1991, a Phalanx system on the USS Jarrett automatically fired on countermeasures deployed from the USS Missouri, causing four rounds to strike the Missouri.[10] No injuries resulted.

References

  1. Bynum, T.W., "Milestones in the History of Information Ethics," in Himma and Tavani (2008), pp.25-48.
  2. "Meeting the Increased Demand for Military Robots." Robotic Trends. 22 Apr 2009. Web. 09 Dec 2009. <http://www.roboticstrends.com/security_defense_robotics/entry/meeting_the_increased_demand_for_military_robots_an_interview_with_foster_m/>.
  3. Stoner, Robert. "The Story of the Phalanx Close-In Weapons System (CIWS)." NavWeaps . 30 Oct 2009. Web. 09 Dec 2009. <http://www.navweaps.com/index_tech/tech-103.htm>.
  4. "Active Protection Systems." Web. 5 Dec 2009. <http://www.globalsecurity.org/military/systems/ground/aps.htm>.
  5. "C-RAM: The Art of Winning the Peace?." Think Defense. 11 Apr 2009. Web. 11 Dec 2009. <http://www.thinkdefence.co.uk/2009/04/c-ram-the-art-of-winning-the-peace/>.
  6. "Trophy Active Protection System." Defense Update. Web. 09 Dec 2009. <http://defense-update.com/products/t/trophy.htm>.
  7. Engelbrecht, Leon. "Did Software Kill Soldiers?." IT Web. 16 Oct 2007. Web. 05 Dec 2009. <http://www.itweb.co.za/index.php?option=com_content&view=article&id=6157&catid=96:defence-and-aerospace-technology >.
  8. "Patriot Missiles Seemingly Falter For Second Time; Glitch in Software Suspected." GolbalSecurity.org. 26 Mar 2003. The Washington Post, Web. 05 Dec 2009. <Patriot Missiles Seemingly Falter For Second Time; Glitch in Software Suspected>.
  9. Guynn, Jessica. "2 friendly fire incidents renew concern over reliability of Patriot missiles." GlobalSecurity.org. 29 Mar 2003. Knight Ridder/Tribune News Service, Web. 05 Dec 2009. <http://www.globalsecurity.org/org/news/2003/030329-patriot01.htm>.
  10. "TAB H -- Friendly-fire Incidents." GulfLINK. Web. 08 Dec 2009. <http://www.gulflink.osd.mil/du_ii/du_ii_tabh.htm >.