Malicious use of A.I. could turn self-driving cars and drones into weapons, top researchers warn

With advances in artificial intelligence, the risks of hackers using the technology to launch malicious attacks are increasing, top researchers warned in a report released on Wednesday.

Hackers could use AI to turn consumer drones and autonomous vehicles into potential weapons, for instance, said researchers from universities such as Oxford, Cambridge and Yale, as well as organizations like the Elon Musk-backed OpenAI.

The report, titled "The Malicious Use of Artificial Intelligence," cautioned against various security threats posed by the misuse of AI.

Self-driving cars, for example, could be tricked into misinterpreting a stop sign, potentially causing road accidents, while a swarm of drones controlled by an AI system could be used for surveillance or for launching quick, coordinated attacks, the report said.

Intelligent machines, according to the report, could lower the cost of carrying out cyberattacks by automating certain labor-intensive tasks and scoping out potential targets more effectively.

One example the report pointed to was "spear phishing," in which attackers send personalized messages to each potential target in order to steal sensitive information or money.

"If some of the relevant research and synthesis tasks can be automated, then more actors may be able to engage in spear phishing," the researchers said.

On the political front, AI could be used for surveillance, for creating more targeted propaganda and for spreading misinformation.

For example, advances in image and audio processing could be used to create "highly realistic videos" of state leaders appearing to make inflammatory comments they never actually made, according to the report.

"We also expect novel attacks that take advantage of an improved capacity to analyse human behaviors, moods, and beliefs on the basis of available data," the report said.

AI can already be used to superimpose fake images of one person onto another in videos. Videos known as "deepfakes," for example, superimpose a person's face onto actors in adult films to create fake pornographic videos. Major websites recently moved to clamp down on the practice.

To be sure, the researchers said that the scenarios highlighted in the report were not definitive predictions of how AI could be maliciously used — some of the scenarios might not be technically possible in the next five years, while others were already occurring in limited form.

"Other malicious uses will undoubtedly be invented that we do not currently foresee," they added.

Wednesday's report did not offer any specific ways that malicious use of AI could be stopped.

But it offered recommendations, including closer collaboration between policymakers and researchers, and called for more stakeholders to be involved in tackling the misuse of AI.

Though the technology is still nascent, billions of dollars have been spent on developing artificially intelligent systems. International Data Corporation predicted last year that global spending on cognitive and artificial intelligence systems could reach $57.6 billion by 2021.

AI's impact is predicted to be so massive that Google CEO Sundar Pichai recently said it could prove more profound than electricity or fire, two of the most ubiquitous innovations in history.

At the same time, there are plenty of skeptics of AI. High-profile physicist Stephen Hawking said last year that AI could be the "worst event in the history of our civilization" unless society finds a way to control its development.
