Security Experts: Artificial Intelligence Is Ripe For Misuse By Bad Actors
Monday, February 26, 2018
Over the years, bad actors (e.g., criminals, terrorists, rogue states, ethically-challenged business executives) have used a variety of online technologies to remotely hack computers, track users online without consent or notice, and circumvent the privacy settings consumers choose on their internet-connected devices. During the past year or two, reports surfaced about bad actors using advertising and social networking technologies to sway public opinion.
Security researchers and experts have warned in a new report that two of the newest technologies can also be used maliciously:
"Artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis... Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously. This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats. We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed."
Companies currently use or test artificial intelligence (A.I.) to automate mundane tasks, upgrade and improve existing automated processes, and/or personalize employee (and customer) experiences in a variety of applications and business functions, including sales, customer service, and human resources. "Machine learning" refers to the development of digital systems to improve the performance of a task using experience. Both are part of a business trend often referred to as "digital transformation" or the "intelligent workplace." The CXO Talk site, featuring interviews with business leaders and innovators, is a good resource to learn more about A.I. and digital transformation.
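To make the "learning from experience" idea concrete, here is a minimal, hypothetical sketch (in Python, using the scikit-learn library) of a toy classifier that gets better at flagging suspicious messages after it is shown labeled examples. The messages and labels below are invented for illustration only; real systems are trained on far larger datasets.

```python
# Toy illustration of machine learning: a model "learns" to flag suspicious
# messages from a handful of labeled examples. Data is invented for this sketch.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Your account is locked, click here to verify your password",
    "Urgent: confirm your bank details to avoid suspension",
    "Meeting moved to 3pm, see the updated agenda attached",
    "Lunch on Friday? Let me know what works for you",
]
labels = ["phishing", "phishing", "legitimate", "legitimate"]

# Convert text to word counts, then fit a simple Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

# The trained model now makes (imperfect) guesses about messages it has never seen.
print(model.predict(["Please verify your password immediately"]))
```

The same basic approach of learning patterns from examples is what makes the technology useful for businesses, and also what makes it attractive to attackers, as the report describes below.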
A survey last year of employees in the USA, France, Germany, and the United Kingdom found that they "see A.I. as the technology that will cause the most disruption to the workplace." The survey also found that 70 percent of employees surveyed expect A.I. to impact their jobs during the next ten years, half expect impacts within the next three years, and about a third see A.I. as a job creator.
This new report was authored by 26 security experts from a variety of educational institutions including American University, Stanford University, Yale University, the University of Cambridge, the University of Oxford, and others. The report cited three general ways bad actors could misuse A.I.:
"1. Expansion of existing threats. The costs of attacks may be lowered by the scalable use of AI systems to complete tasks that would ordinarily require human labor, intelligence and expertise. A natural effect would be to expand the set of actors who can carry out particular attacks, the rate at which they can carry out these attacks, and the set of potential targets.
2. Introduction of new threats. New attacks may arise through the use of AI systems to complete tasks that would be otherwise impractical for humans. In addition, malicious actors may exploit the vulnerabilities of AI systems deployed by defenders.
3. Change to the typical character of threats. We believe there is reason to expect attacks enabled by the growing use of AI to be especially effective, finely targeted, difficult to attribute, and likely to exploit vulnerabilities in AI systems."
So, A.I. could make it easier for the bad guys to automate labor-intensive cyber-attacks such as spear-phishing. The bad guys could also create new cyber-attacks by combining A.I. with speech synthesis. The authors of the report cited examples of more threats:
"The use of AI to automate tasks involved in carrying out attacks with drones and other physical systems (e.g. through the deployment of autonomous weapons systems) may expand the threats associated with these attacks. We also expect novel attacks that subvert cyber-physical systems (e.g. causing autonomous vehicles to crash) or involve physical systems that it would be infeasible to direct remotely (e.g. a swarm of thousands of micro-drones)... The use of AI to automate tasks involved in surveillance (e.g. analyzing mass-collected data), persuasion (e.g. creating targeted propaganda), and deception (e.g. manipulating videos) may expand threats associated with privacy invasion and social manipulation..."
BBC News reported even more possible threats:
"Technologies such as AlphaGo - an AI developed by Google's DeepMind and able to outwit human Go players - could be used by hackers to find patterns in data and new exploits in code. A malicious individual could buy a drone and train it with facial recognition software to target a certain individual. Bots could be automated or "fake" lifelike videos for political manipulation. Hackers could use speech synthesis to impersonate targets."
From all of this, one can conclude that the 2016 election interference cited by intelligence officials is probably mild compared to what will come: more serious, sophisticated, and numerous attacks. The report included four high-level recommendations:
"1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.
2. Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.
3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.
4. Actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges."
Download the 101-page report titled, "The Malicious Use Of Artificial Intelligence: Forecasting, Prevention, And Mitigation." A copy of the report is also available here (Adobe PDF; 1,400 kbytes).
To prepare, both corporate and government executives would be wise to both harden their computer networks and (re)train their employees to recognize and guard against cyber attacks. What do you think?