As technology evolves, criminals find innovative ways to exploit it. One of the most concerning trends in recent years has been the use of artificial intelligence (AI) and machine learning (ML) to further illicit activities.
From cybercrime to physical crime, AI and ML are being used to automate and optimize criminal activities, making them more efficient and difficult to detect. Here is how criminals use AI and ML to commit crimes and the potential risks and threats these trends pose.
1. Malware and Ransomware Attacks
Criminals increasingly use AI and ML to make their malware harder for antivirus programs to detect. Malware is malicious software designed to damage or disrupt systems, networks, or devices, and approximately 91% of it uses the domain name system (DNS) at some stage, whether to steal data, receive commands from its operators, or cause damage to hardware or software.
Using AI and ML, cybercriminals can automate the identification and exploitation of vulnerabilities and create malicious code that evades traditional security measures. When criminals use AI and ML to spread malware and ransomware more effectively, organizations struggle to respond: they may be locked out of their own systems while attackers gain access to sensitive data, resulting in significant financial losses and operational disruption.
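Because so much malware depends on DNS, one common defensive countermeasure is to flag algorithmically generated domain names, which tend to have higher character entropy than human-chosen ones. The sketch below illustrates that idea only; the threshold and example domains are assumptions, and real detectors combine entropy with n-gram statistics, domain age, and registration data.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_generated(domain: str, threshold: float = 3.5) -> bool:
    """Flag a domain whose first label has entropy suggesting
    algorithmic generation. The threshold is illustrative."""
    label = domain.split(".")[0]
    return shannon_entropy(label) >= threshold

print(looks_generated("example.com"))         # human-chosen, low entropy
print(looks_generated("xq7d9kf2mzp1vb.com"))  # DGA-like, high entropy
```

A filter like this would run over DNS query logs, not individual emails, and would only ever be one weak signal among many.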
2. Password Cracking and Brute-Force Attacks
Password cracking and brute-force attacks are standard methods cybercriminals use to gain unauthorized access to user accounts and sensitive information. With the help of AI and ML, these attacks have become even more sophisticated.
AI can be trained using reinforcement learning (RL), an ML technique in which an agent learns from its environment by taking actions and receiving rewards or penalties. A model trained this way can analyze password databases and patterns in user behavior, letting attackers build more effective password-cracking tools.
For example, an attacker can use AI to analyze a user’s social media activity and other publicly available information to guess potential passwords or security-question answers. They can also use AI and ML algorithms to automate brute-force attacks across large numbers of accounts. The likelihood of a successful attack rises significantly when users have weak or easily guessable passwords. However, strong passwords and additional security measures such as multi-factor authentication can protect users against these attacks and reduce their risk of falling victim to cybercrime.
3. Phishing and Social Engineering
Cybercriminals use artificial intelligence and machine learning to analyze social media profiles and online activities. This enables them to craft more targeted phishing emails or social engineering scams. An attacker can use AI to study a victim’s social media activity and determine their interests, hobbies, and connections. They can then use this information to create a phishing email that appears to come from a trusted source and is tailored to the victim’s interests.
In addition, cybercriminals can use AI and ML to automate social-engineering attacks at scale, sending out large numbers of tailored malicious messages in a short period. This increases the chances of a successful attack and is especially potent when combined with deepfakes.
Deepfakes are realistic videos or audio recordings manipulated using AI and ML. According to a UCL report, experts ranked fake audio and video content among the most worrying uses of AI in terms of its potential applications for crime or terrorism. Cybercriminals can use deepfakes to create convincing videos or audio recordings of a trusted individual, such as a CEO or government official, to trick victims into providing sensitive information or transferring funds.
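Even AI-personalized phishing messages often retain mechanical tells that simple filters can catch, such as a look-alike sender domain, urgency language, or a raw IP address in a link. The toy heuristic below is purely illustrative; the allowlist, keyword set, and additive scoring are assumptions, and production filters rely on far richer signals and trained classifiers.

```python
import re

TRUSTED_DOMAINS = {"examplecorp.com"}  # illustrative allowlist
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended"}

def phishing_score(sender: str, subject: str, body: str) -> int:
    """Crude additive score; higher means more suspicious."""
    score = 0
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in TRUSTED_DOMAINS:
        score += 2  # external or look-alike sender domain
    text = f"{subject} {body}".lower()
    score += sum(1 for w in URGENCY_WORDS if w in text)
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        score += 3  # link points at a raw IP address
    return score

# "examp1ecorp.com" (digit 1 for letter l) is a classic look-alike
print(phishing_score("ceo@examp1ecorp.com",
                     "Urgent: verify your account",
                     "Click http://192.0.2.10/login immediately"))
```

A message from the real domain with mundane content scores zero, while the look-alike example above trips several rules at once.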
4. Money Laundering and Cryptocurrency Scams
As the use of cryptocurrencies becomes more widespread, so do cybercriminals’ methods for money laundering and crypto scams. Criminals use AI and ML to process financial data like transactions and user behavior to create convincing money-laundering schemes. They can then use this information to form complex and difficult-to-detect methods — such as layering and integration — to launder large amounts of money.
In addition, attackers can develop more convincing cryptocurrency scams, like fraudulent initial coin offerings and fake cryptocurrency exchanges. Criminals design these scams to look legitimate and target individuals new to cryptocurrency, who are more vulnerable to falling victim.
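On the defensive side, financial institutions use the same statistical tools to spot laundering patterns, for example by flagging transactions that deviate sharply from an account’s history. The minimal z-score sketch below is illustrative only; the threshold and data are assumptions, and real anti-money-laundering systems use trained models over many features, not a single univariate test.

```python
from statistics import mean, stdev

def flag_outliers(amounts: list[float],
                  z_threshold: float = 3.0) -> list[float]:
    """Return amounts more than z_threshold standard deviations
    from the account's mean, a crude anomaly signal."""
    if len(amounts) < 2:
        return []
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []
    return [a for a in amounts if abs(a - mu) / sigma > z_threshold]

# five routine payments followed by one sudden large transfer
history = [120.0, 95.0, 130.0, 110.0, 105.0, 9800.0]
print(flag_outliers(history, z_threshold=2.0))
```

Layering schemes deliberately keep individual transfers unremarkable, which is exactly why detection has moved from simple rules like this toward ML models of its own.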
5. Investment Fraud and Insider Trading
AI and ML have also become powerful tools for criminals seeking to commit investment fraud and insider trading. Criminals can train AI and ML algorithms to analyze stock prices and company information to identify potential investment opportunities. However, these algorithms can also help manipulate stock prices and carry out investment fraud.
For example, an attacker can use AI to create fake news articles or social media posts that promote a particular stock, driving the price up. The attacker can then sell their shares at a profit, leaving other investors holding worthless stock in a classic pump-and-dump scheme.
In addition, AI and ML can be useful in insider trading, which involves using confidential information to make informed investment decisions. An attacker can use AI to identify potential insider trading opportunities and use the information to profit from trades before the information becomes public.
6. AI-Powered Surveillance
AI has made it easier for criminals to conduct surveillance, including in support of physical crimes such as theft. AI-powered surveillance uses machine learning algorithms to analyze video footage. These algorithms can identify and track specific individuals or groups, allowing attackers to learn about their targets.
Attackers can use AI to track individuals’ locations and monitor their activities to plan burglaries or other crimes. To protect against these types of attacks, organizations can implement security measures like firewalls, intrusion detection systems, and security cameras.
7. Autonomous Vehicles and Drones
Autonomous vehicles and drones are becoming increasingly popular in many industries, including transportation and logistics. However, the same technologies that benefit society can also be abused in new ways. For example, attackers could use AI and ML to hack into autonomous vehicles and take control of them, enabling accidents, kidnappings, or even terrorist attacks.
Drone technology also has many potential criminal applications, from smuggling to targeted attacks. For instance, attackers have used drones to smuggle drugs or contraband across borders or into prisons. They can also use this technology to scout potential targets for theft or other attacks.
The Future of Criminal Use of AI and ML
As the use of AI and ML in criminal activity becomes more widespread, cybersecurity experts worry about what this trend may mean for society. Criminals are already using RL to develop more advanced attack methods and to learn how to evade detection by security systems.
So, what can people expect to see concerning the unlawful use of AI and ML in the future?
One potential area of concern is the use of AI to generate convincing deepfake videos. While the technology is still maturing, it can already be used for many criminal purposes, such as spreading fake news, committing fraud, and blackmailing victims.
As technology advances, criminals may also use it to create more realistic fake videos of people saying things that aren’t true. This could have profound implications for law enforcement investigations and national security matters.
Another area where AI and ML will continue to advance is the development of autonomous weapons systems, which can operate without human input. While these are not yet widely available, there is concern they could be used in acts of terrorism or other violent crime.
Finally, criminals could also use AI and ML to develop more effective malware and botnets. Using machine learning algorithms, they could create malware better at evading detection and bots capable of launching large-scale, automated attacks.
Overall, the future of illicit use of AI and ML is a complex and rapidly evolving landscape. As these technologies advance, cybersecurity experts must stay abreast of the latest developments in their industry and work to develop effective strategies for protecting against these attacks.
Working Together to Prevent Criminal Use of AI and Machine Learning
The potential criminal use of artificial intelligence and machine learning is a growing concern among cybersecurity experts and the general public. While these technologies hold great promise for a wide range of positive applications, they can also be turned to malicious purposes.
As AI and ML evolve, criminals will likely find new and creative ways to use these technologies to their advantage. This underscores the need for ongoing cybersecurity research and robust safeguards to prevent the illegal use of AI and ML.
Ultimately, it will take a collective effort from individuals, businesses, governments, and the technology industry to ensure these powerful technologies are used for good and not evil. By staying informed, watching for new developments, and working together to share knowledge, everyone can secure the future of AI as one from which the world benefits.