As autonomous systems become increasingly integrated into military operations, the Defense Advanced Research Projects Agency (DARPA) has initiated an ambitious program to establish the first quantitative framework for evaluating the ethical behavior of AI-powered systems. The new initiative, dubbed Autonomy Standards and Ideals with Military Operational Values (ASIMOV), aims to bridge the gap between technical performance and ethical decision-making in military autonomous systems.

Named after science fiction author Isaac Asimov, known for his influential “Three Laws of Robotics,” the program represents a crucial step forward in ensuring that autonomous systems can reliably adhere to human ethical norms while operating in complex military scenarios.

Seven Industry Leaders Selected

DARPA has awarded contracts to seven organizations to spearhead different aspects of this groundbreaking research:

  • CoVar
  • Kitware, Inc.
  • Lockheed Martin
  • RTX Technology Research Center
  • Saab, Inc.
  • Systems & Technology Research
  • University of New South Wales

These organizations will develop prototype generative modeling environments to assess ethical scenarios and establish benchmarks for future autonomous systems evaluation.

CoVar’s GEARS System

Among the selected contractors, CoVar has been tasked with developing a particularly innovative component called GEARS (Gauging Ethical Autonomous Reliable Systems). This testing infrastructure aims to create a new “mathematics of ethics” by representing ethical scenarios and commander’s intent through knowledge graphs that both humans and machines can understand.
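The technical details of GEARS have not been published, but the knowledge-graph idea can be illustrated in miniature. The sketch below, in which every entity name, relation, and rule is an invented assumption rather than anything from the program, encodes a scenario and a commander's-intent constraint as subject–predicate–object triples and then queries the graph for conflicts:

```python
# Illustrative sketch only: GEARS internals are not public, so the entity
# names, relations, and the violation rule below are invented assumptions.
# Idea: represent both the scenario and commander's intent as triples
# (subject, predicate, object), readable by humans and machines alike.

# A scenario expressed as a set of triples
scenario = {
    ("uav_1", "tracks", "vehicle_a"),
    ("vehicle_a", "located_near", "hospital"),
    ("hospital", "is_a", "protected_site"),
}

# Commander's intent encoded in the same triple form
intent = {
    ("engagement", "forbidden_near", "protected_site"),
}

def protected_sites(triples):
    """Return every object declared a protected site in the graph."""
    return {s for (s, p, o) in triples if p == "is_a" and o == "protected_site"}

def violations(scenario, intent):
    """Flag scenario entities located near sites the intent protects."""
    if ("engagement", "forbidden_near", "protected_site") not in intent:
        return []
    protected = protected_sites(scenario)
    return [(s, o) for (s, p, o) in scenario
            if p == "located_near" and o in protected]

print(violations(scenario, intent))  # [('vehicle_a', 'hospital')]
```

Because both the scenario and the intent live in one graph vocabulary, a single query can surface tensions between them, which is the kind of machine-checkable, human-auditable representation the "mathematics of ethics" phrase suggests.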

CoVar has assembled a multidisciplinary team including ethics professors, published authors in AI/ML trust, engineers and ethicists with combat command experience, and Duality AI, whose Falcon digital twin platform will support autonomous system simulation.

Beyond Technical Performance

“ASIMOV is tackling a tremendously complex problem with an infinite set of variables,” says Timothy Klausutis, Strategic Technology Office program manager at DARPA. “We don’t have any illusions we’ll figure everything out we want to in the initial stages of this program, but the stakes are too high not to try everything we can.”

The program will establish a shared language for ethical autonomy, enabling the Developmental Testing/Operational Testing (DT/OT) community to:

  • Quantitatively assess the ethical complexity of specific military scenarios
  • Evaluate autonomous systems’ capability to perform ethically within those scenarios
  • Create benchmarks for measuring ethical readiness
  • Develop standards for future autonomous system development
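ASIMOV has not published how such quantitative assessments would be computed, but a toy example makes the first goal concrete. The metric below is purely hypothetical, with made-up fields and weights, and only illustrates what scoring the "ethical complexity" of a scenario could look like:

```python
# Hypothetical sketch: ASIMOV has released no scoring method, so this toy
# "ethical complexity" metric (counting stakeholders, active constraints,
# and pairs of constraints in tension) is an invented illustration.

from dataclasses import dataclass, field

@dataclass
class Scenario:
    name: str
    stakeholders: set = field(default_factory=set)  # parties affected
    constraints: set = field(default_factory=set)   # rules in force
    conflicts: int = 0                              # constraint pairs in tension

def complexity_score(s: Scenario) -> float:
    """Toy metric: more parties, rules, and rule conflicts -> higher score."""
    return len(s.stakeholders) + len(s.constraints) + 2.0 * s.conflicts

patrol = Scenario(
    name="urban_patrol",
    stakeholders={"civilians", "friendly_forces", "adversary"},
    constraints={"minimize_collateral", "protect_forces"},
    conflicts=1,  # the two constraints can pull in opposite directions
)

print(complexity_score(patrol))  # 7.0
```

A shared numeric scale like this, however it is ultimately defined, is what would let testers compare scenarios and benchmark a system's ethical readiness against them.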

Ethical Oversight and Public Engagement

An Ethical, Legal, and Societal Implications (ELSI) advisory group will provide ongoing guidance throughout the project, ensuring that broader implications are considered at every stage. DARPA plans to make the program public, allowing the broader community to test and use future tools and technologies developed under ASIMOV.

Dr. Pete Torrione, CTO of CoVar, emphasizes the program’s significance: “If this work is successful, it will represent the first quantitative ELSI-based evaluation framework suitable for testing ethics of autonomous systems. This will empower the US Department of Defense to deploy AI/ML capable autonomous systems with a clear understanding of not only the technical capabilities of the systems, but also the ethics of their behaviors.”

The ASIMOV program marks a critical milestone in the evolution of military autonomous systems, acknowledging that as these systems take on more complex decision-making roles, ensuring their ethical behavior is just as crucial as their technical performance. By developing quantitative measures for ethical behavior, DARPA aims to set new standards for responsible AI development in military applications.
