The air force is apparently now denying that it ran a simulated exercise in which an AI drone killed its operator.

Whether you believe that it happened or not it’s a good reminder that there will be increasingly less room for error in the design phase of autonomous weapon systems or even more benign autonomous AI platforms. There are countless ways to mitigate unwanted outcomes (design reviews, adversarial testing, red teaming, and AI enabled testing - to name just a few) but if we don’t do all of them, and do them well, the potential for disaster is high.