As AI systems evolve and grow more complex, it is essential to take measures to ensure that they remain safe and reliable. John Laird, professor of computer science and engineering at the University of Michigan, has emphasized that the data used to teach AI systems how to perform tasks must be a truthful representation of the world. The risk is not only digital: adversaries can also capture the physical equipment, such as drones and weapon systems, in which AI systems are housed. To protect against artificial intelligence attacks (AI attacks), the military is prioritizing the development of AI systems that increase human control rather than replace it.
To protect against AI attacks, tests should be conducted to assess how vulnerable an application is, what the consequences of a successful attack would be, and whether alternative, non-AI-based methods are available. Once these questions have been answered, the answers must be weighed to determine the risk the system poses and to inform implementation decisions. Best practices should be formulated for each application in collaboration with security experts and experts in the relevant domain. These practices may include transmitting data only over classified or encrypted networks, encrypting stored data, and keeping details of the system secret.
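As one concrete illustration of the "encrypting stored data" practice, the minimal sketch below encrypts a training-data file at rest using the Fernet recipe from the third-party cryptography package. The file name, sample data, and in-memory key handling are simplifying assumptions for illustration, not a recommendation for any particular system.

```python
# Minimal sketch: keeping stored training data encrypted at rest.
# Assumes the third-party "cryptography" package; the file name and the
# in-memory key handling are illustrative assumptions only.
from cryptography.fernet import Fernet

# In a real deployment the key would live in a key-management service,
# never alongside the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

training_samples = b"feature_1,feature_2,label\n0.42,1.37,1\n0.11,0.93,0\n"

# Persist only the ciphertext; the plaintext never needs to touch disk.
with open("training_samples.enc", "wb") as f:
    f.write(fernet.encrypt(training_samples))

# An authorized training job later decrypts in memory just before use.
with open("training_samples.enc", "rb") as f:
    recovered = fernet.decrypt(f.read())

assert recovered == training_samples
```

The design point is simply that the data samples never sit on disk in plaintext, so an adversary who gains access to stored files cannot read or quietly tamper with the training set without the key.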
At the same time, artificial intelligence as a service is becoming more common. This could lead to a "shared monoculture" scenario in which many organizations rely on the same AI system or the same training data set. In that case, an adversary only needs to find one attack pattern to design an attack against every system trained with that data set, as the sketch below illustrates.
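To see why a shared monoculture is dangerous, the toy sketch below trains two models for two hypothetical organizations on the same public data set, crafts adversarial inputs against only the first model, and then checks whether they also fool the second. The data set, model choices, and perturbation size are illustrative assumptions; the point is only that an attack designed against one member of the monoculture tends to transfer to the others.

```python
# Toy sketch of monoculture risk: a perturbation crafted against one model
# trained on a shared data set often also fools a second model trained on
# the same data. Uses scikit-learn and NumPy; all specifics are assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two "organizations" train their own models, but on the same shared data set.
org_a = LogisticRegression(max_iter=1000).fit(X_train, y_train)
org_b = LogisticRegression(max_iter=1000, C=0.1).fit(X_train, y_train)

# The attacker only has access to org A's model. FGSM-style step: move each
# input in the direction that increases org A's loss.
eps = 0.5
p = org_a.predict_proba(X_test)[:, 1]
grad = (p - y_test)[:, None] * org_a.coef_   # d(loss)/d(input) for logistic regression
X_adv = X_test + eps * np.sign(grad)

for name, model in [("org A (attacked)", org_a), ("org B (never targeted)", org_b)]:
    clean = model.score(X_test, y_test)
    adv = model.score(X_adv, y_test)
    print(f"{name}: clean accuracy {clean:.2f}, adversarial accuracy {adv:.2f}")
```

Because both models were fit to the same data, the perturbations that degrade org A's accuracy tend to degrade org B's as well, even though the attacker never touched org B's model; this is the essence of the shared-monoculture risk.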
Overall, it is essential to take measures to ensure that AI systems remain safe and reliable as they evolve and become more complex. By keeping humans in control when creating AI-based weapons and defense systems, protecting the data samples used to train models, testing applications for vulnerability, formulating best practices with security experts, and guarding against a "shared monoculture" scenario, organizations can ensure that their AI systems remain secure.