“The Challenge of Advanced Cyberwar and the Place of Cyberpeace”, Elias G. Carayannis and John Draper, 2023-05-30:

This chapter highlights that an artificial superintelligence (ASI) emerging in a world where war, and especially cyberwar, is still normalized constitutes a catastrophic existential risk. This risk could arise either because a nation-state might employ the ASI to wage cyberwar on its behalf, or because the ASI might wage war for itself in pursuit of global domination, i.e. ASI-enabled and ASI-directed cyberwarfare. These risks are not mutually exclusive, as the first can transition into the second.

Presently, few states declare war or even wage war on each other, in part due to the 1945 UN Charter, which states that Member States should refrain “from the threat or use of force against the territorial integrity or political independence of any state”, while allowing for UN Security Council-endorsed military measures and self-defense. However, costly interstate conflicts, both ‘hot’ and ‘cold’, persist, for instance the Kashmir conflict and the Korean War, and cyberwarfare is becoming more prevalent. Further, a ‘New Cold War’ between AI superpowers looms.

An ASI-directed or ASI-enabled future conflict could trigger ‘total war’, including nuclear war, and therefore poses a high risk. The global risk-reduction strategy we advocate aligns with existing thinking on cyber peacekeeping and cyber peacemaking, draws on international relations and non-killing peacebuilding theory, and optimizes peace through two instruments: an arms control-based Cyberweapons and Artificial Intelligence Convention, which we present here for the first time, and a post-Covid Universal Global Peace Treaty.

This treaty could contribute to ending existing wars and preventing future ones, including cyberwars, through conforming instrumentalism. While a treaty-based strategy cannot readily cope with non-state actors, it could influence state actors, including those developing AGIs and eventually ASIs, or an ASI with agency of its own, particularly if it values non-killing and conforming instrumentalism.

[Keywords: AI arms race, artificial superintelligence, conforming instrumentalism, existential risk, international relations, non-killing, peace]