AI Chatbot Platform ChatGPT Hints At Using Its Technology For “Weapons Development, Military And Warfare” After TOS Revision

Up until January 10 this year, OpenAI’s usage policy categorically excluded the use of its technology for “weapons development, military and warfare.” However, recent reports reveal an update to this policy. File photo: Ascannio, Shutterstock.com, licensed.

OpenAI, the parent company of the popular artificial intelligence chatbot platform ChatGPT, has recently ignited a debate in tech circles. The company’s policy change, subtly shifting its stance on collaborating with military operations, feeds into ongoing discussions about the ethical implications of AI and its potential uses and misuses.

Up until January 10 this year, OpenAI’s usage policy categorically excluded the use of its technology for “weapons development, military and warfare.” However, recent reports reveal an update to this policy. The revised policy makes room for usage that doesn’t “bring harm to others,” according to an article by Computerworld.

An OpenAI spokesperson shared with Fox News Digital:

“Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. There are, however, national security use cases that align with our mission.”

This strategic policy change bears significant potential for OpenAI’s relationship with military institutions. Whether to work closely with the military has been a divisive point among company officials, straining the shared vision for limits on how the technology may be used. This divide likely arises from differing perceptions of how the military might harness OpenAI’s cutting-edge technology.

Drawing insight from this internal discord, Christopher Alexander, Chief Analytics Officer of Pioneer Development Group, suggests the disconnect stems from misconceptions about military use of AI technology. He notes that the concern likely arises from fears of AI becoming uncontrollably powerful. Elucidating the practical implications, Alexander told Fox News Digital:

“The most likely use of OpenAI is for routine administrative and logistics work, which represents a massive cost savings to the taxpayer.” He welcomed the recognition within OpenAI’s leadership that enhancing DOD capabilities could heighten effectiveness and potentially save lives on the battlefield.

Yet, as AI continues to break boundaries and expand into more sectors, worries about its potential dangers are escalating in parallel. In May, hundreds of tech leaders and influential public figures signed an open letter cautioning about AI’s possible cataclysmic impacts. The letter appeals for a cooperative global effort to mitigate those risks, comparing their urgency to that of other societal-scale threats such as pandemics and nuclear war.

Echoing this collective concern over AI risks, OpenAI CEO Sam Altman joined the chorus of voices calling for more robust regulation. His signature on the letter underscores the company’s long-standing commitment to limiting AI’s potentially harmful capacities.

