As AI technology advances, it is essential to take measures to ensure that AI systems do not cause unintended harm or disruption to society or the environment. Bert Huang, an adjunct professor in Virginia Tech's computer science department whose work focuses on machine learning, believes that the benefits of AI will outweigh the harm it causes. Even so, it is important to understand the depth and breadth of how companies and governments use people's information. Values and foundational ethical theories should serve as the basis for the development and deployment of AI systems.
People should also play an active role in understanding and shaping the decision-making options available in these complex systems. AI and automation will soon eliminate the need for many jobs, so it is important to ensure that the resulting social and economic benefits are distributed throughout society, and legislation must be put in place to guarantee that distribution. To prevent bad actors from using AI to cause harm, cybersecurity must be at the forefront.
Sam Gregory, director of WITNESS and a digital human rights activist, believes that the use of AI systems for surveillance and control by authoritarian and undemocratic governments must be adequately constrained. Finally, AI algorithms should be built to learn and "think ahead" about unforeseen consequences and avoid them before they become problems. This would help restore trust in technology and information systems that has been eroded by data breaches, scandals, and leaks. By taking these measures, we can help ensure that AI systems are used responsibly and do not cause unintended harm or disruption.