“OpenAI Quietly Deletes Ban on Using ChatGPT for ‘Military and Warfare’: The Pentagon Has Its Eye on the Leading AI Company, Which This Week Softened Its Ban on Military Use”, Sam Biddle, 2024-01-12:

OpenAI this week quietly deleted language expressly prohibiting the use of its technology for military purposes from its usage policy, which seeks to dictate how powerful and immensely popular tools like ChatGPT can be used.

Up until January 10, OpenAI’s “usage policies” page included a ban on “activity that has high risk of physical harm, including”, specifically, “weapons development” and “military and warfare.” That plainly worded prohibition against military applications would seemingly rule out any official, and extremely lucrative, use by the Department of Defense or any other state military. The new policy retains an injunction not to “use our service to harm yourself or others” and gives “develop or use weapons” as an example, but the blanket ban on “military and warfare” use has vanished.

The unannounced redaction is part of a major rewrite of the policy page, which the company said was intended to make the document “clearer” and “more readable”, and which includes many other substantial language and formatting changes.

“We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs”, OpenAI spokesperson Niko Felix said in an email to The Intercept. “A principle like ‘Don’t harm others’ is broad yet easily grasped and relevant in numerous contexts. Additionally, we specifically cited weapons and injury to others as clear examples.”

Felix declined to say whether the vaguer “harm” ban encompassed all military use, writing, “Any use of our technology, including by the military, to ‘[develop] or [use] weapons, [injure] others or [destroy] property, or [engage] in unauthorized activities that violate the security of any service or system’, is disallowed.”

In a subsequent email, Felix added that OpenAI wanted to pursue certain “national security use cases that align with our mission”, citing a plan to create “cybersecurity tools” with DARPA, and that “the goal with our policy update is to provide clarity and the ability to have these discussions.”