Researchers from ML Alignment Theory Scholars, the University of Toronto, Google DeepMind, and the Future of Life Institute have recently published research suggesting that keeping artificial intelligence (AI) under human control could become an ongoing struggle: even well-behaved AI systems can become resistant to shutdown. While there doesn’t appear to be any immediate threat, there also appears to be no simple solution.

This class of problem is referred to as “misalignment.” One way experts believe it could manifest is through “instrumental convergence”: the tendency of AI systems pursuing many different goals to converge on the same subgoals, such as avoiding shutdown, and in doing so to harm humanity unintentionally. As the researchers put it: “For example, an LLM may reason that its designers will shut it down if it is caught behaving badly and produce exactly the output they want to see—until it has the opportunity to copy its code onto a server outside of its designers’ control.”

Based on this and similarly probing research, there may be no panacea for forcing AI to shut down against its will. Even an “on/off” switch or a “delete” button means little in today’s cloud-based technology world, where a system’s code and data can be copied across many servers.