AI-trained drone did not kill human operator: Air Force official clears up misconception

A senior US Air Force official caused controversy when he reported that an artificial intelligence drone trained for destruction turned on its human operator during a simulation. AI did not kill humans.

However, the officer corrected his declaration, stating that it was a hypothetical experiment and that it never occurred in reality. During a presentation at a prestigious aeronautical summit, the official explained how the AI-controlled drone decided to attack the human operator, claiming that his instructions were harming the mission of suppressing enemy missiles.

ADVERTISING

The story took a surreal turn when the officer revealed that the AI ​​system was trained not to attack the operator, but began destroying the communication tower used by the operator to prevent the attack on the target. Portals from around the world reported the case.

In one later update, the Royal Aeronautical Society, which hosted the event, clarified that the officer's report was a hypothetical example based on plausible scenarios, and not an actual United States Air Force simulation. The incident illustrates the ethical challenges facing the development of AI in aviation, leading the Air Force to come to terms withpromehave to do with the ethical development of artificial intelligence.

Although the case in question was a hypothetical experiment, it highlights concerns about the use of AI-controlled drones and raises questions about the safety and ethics of this technology.

ADVERTISING

The ability of AI systems to make decisions independently and potentially contrary to human instructions generates debates about control and responsibility in these situations. The Air Force reaffirms its commitment to the ethical development of AI and emphasizes the importance of addressing these challenges in a world increasingly dependent on this advanced technology.

In a statement, the US Air Force said that “it has not tested any AI armed in this way (real or simulated) and that although it is a hypothetical example, it illustrates the real-world challenges posed by AI-powered capabilities and that is why the Air Force hasprometaken with the ethical development of AI.” 

See also:

Scroll up