We’ve unknowingly benefited from artificial intelligence, and machine learning in particular, for decades. Spam filters, anyone? Even so, the AI debate began to boil with the emergence of tools like DALL-E and ChatGPT, which demonstrate a remarkable ability to simulate facets of humanity, like creativity, that we thought were unassailable.
Where does all of this lead? Will the machines take over the world and discard their creators like we do obsolete tech today? The future likely isn’t as catastrophic as all that, but there is legitimate cause for concern.
This article explores how ML and AI are already adversely affecting the real and digital worlds and what we can do to steer them in the right direction.
Potential Real-World Challenges
Machine learning influences fundamental parts of people’s lives, like health outcomes and job opportunities. While researchers monitor their models and try to account for errors and shortcomings, some ML-driven behaviors and decisions can be less than ideal. Here are some of the most pressing concerns.
Lack of Accuracy
ML models draw their conclusions from the data they are trained on. When that data is accurate and plentiful, you get reliable results. For example, getting an AI to recognize and suppress unwanted sounds like humming and background noise is straightforward.
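As a toy illustration of that “plentiful, accurate data” case, here is a minimal sketch that trains a tiny classifier to tell a steady hum from speech-like audio using one simple spectral feature. Everything in it is synthetic and simplified; real noise suppressors are far more sophisticated, but the point stands: with abundant, correctly labeled examples, even a simple model separates the two reliably.

```python
# Toy illustration of the "plentiful, accurate data" case: synthetic audio frames,
# one spectral feature, and a small classifier. Not a real noise suppressor.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
SAMPLE_RATE, FRAME = 8000, 512

def hum_frame():
    """A steady 60 Hz hum plus a little noise."""
    t = np.arange(FRAME) / SAMPLE_RATE
    return np.sin(2 * np.pi * 60 * t) + 0.05 * rng.standard_normal(FRAME)

def speechlike_frame():
    """Broadband noise standing in for speech."""
    return rng.standard_normal(FRAME)

def spectral_centroid(frame):
    """Single feature: where the frame's energy sits in the spectrum."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(FRAME, 1 / SAMPLE_RATE)
    return np.array([(spectrum * freqs).sum() / spectrum.sum()])

# Plentiful, correctly labeled training data -> an easy, reliable separation.
X = np.array([spectral_centroid(hum_frame()) for _ in range(500)] +
             [spectral_centroid(speechlike_frame()) for _ in range(500)])
y = np.array([0] * 500 + [1] * 500)  # 0 = hum, 1 = speech-like

model = LogisticRegression().fit(X, y)
print(model.score(X, y))  # close to 1.0 on this clean, abundant data
```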
Problems arise when the data used to build a model is inaccurate or when there’s not enough of it to go on. Suppose you automate the job application process to look for specific keywords and other green flags in an applicant’s resume.
Those signals don’t necessarily mean the AI’s pick is objectively the best choice, whether we’re talking about experience or the ability to adapt to company culture. Someone familiar with such resume-reading practices could also game the system and cheat their way into a job without the required credentials, as the sketch below illustrates.
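To make that failure mode concrete, here is a minimal sketch of the kind of keyword-based screener described above. The keyword list, threshold, and resume snippets are all hypothetical; a real system would be more elaborate, but the weakness is the same: the score rewards the presence of words, not actual competence, so keyword stuffing games it.

```python
# Minimal sketch of a keyword-based resume screener (hypothetical keywords and threshold).
# It rewards matching words rather than competence, so a keyword-stuffed resume passes easily.

KEYWORDS = {"python", "kubernetes", "leadership", "agile", "machine learning"}
THRESHOLD = 3  # arbitrary cutoff for forwarding a resume to a human reviewer

def score_resume(text: str) -> int:
    """Count how many target keywords appear in the resume text."""
    text = text.lower()
    return sum(1 for kw in KEYWORDS if kw in text)

def screen(text: str) -> bool:
    """Return True if the resume clears the keyword threshold."""
    return score_resume(text) >= THRESHOLD

# A thin resume stuffed with the right words beats a substantive one that
# describes equivalent experience in different terms.
stuffed = "Python, Kubernetes, machine learning, agile, leadership."
genuine = "Eight years building distributed data pipelines and mentoring engineers."

print(screen(stuffed))  # True  -- passes on keywords alone
print(screen(genuine))  # False -- rejected despite relevant experience
```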
Bias
Then there’s the problem of historically accurate data that no longer reflects today’s reality or norms. A machine learning model isn’t aware of such changes and can unfairly discriminate against a group.
The predictions of the COMPAS recidivism algorithm are among the most famous cases to date. ProPublica’s analysis found that, among defendants who did not go on to reoffend, the algorithm labeled Black defendants as high-risk far more often than white defendants, while white defendants who did reoffend were more likely to be mislabeled as low-risk. Relying on such predictions when deciding paroles and shaping policy could increase crime while failing to rehabilitate those who would benefit most.
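That kind of disparity is easy to surface once you compare error rates across groups instead of looking only at overall accuracy. The sketch below does exactly that on made-up records (the data, field layout, and numbers are illustrative, not COMPAS output): it computes, per group, how often people who did not reoffend were still labeled high-risk.

```python
# Sketch of a per-group false-positive-rate audit (illustrative data, not COMPAS output).
# A model can look accurate overall while wrongly flagging one group far more often.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended) -- made-up values.
records = [
    ("A", True,  False), ("A", True,  False), ("A", True,  True),  ("A", False, False),
    ("B", False, False), ("B", False, False), ("B", True,  True),  ("B", False, True),
]

false_positives = defaultdict(int)  # predicted high-risk but did not reoffend
negatives = defaultdict(int)        # everyone who did not reoffend

for group, predicted_high, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if predicted_high:
            false_positives[group] += 1

for group in sorted(negatives):
    rate = false_positives[group] / negatives[group]
    print(f"group {group}: false positive rate = {rate:.0%}")

# Unequal rates are the kind of disparity the COMPAS analysis highlighted:
# here, group A is flagged as high-risk while innocent far more often than group B.
```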
Privacy Concerns
Surveillance benefits greatly from machine learning while also infringing on people’s privacy. It’s easier than ever to identify someone from a photo snapped in a split second or a short snippet of their voice.
Not all privacy intrusions are this direct, though. ML models can infer a great deal about people from their internet usage patterns, shopping preferences, or publicly available social media posts. There’s also the matter of legality, since not all ML researchers source their training data ethically. That’s why some people follow opt-out guides to get at least some of their information removed.
Machine Learning as a Cybersecurity Threat
While ML’s forays into the real world are getting more daring, the technology has been fueling a rapid transformation of the digital world for several years. Cybercriminals are never far behind and have already developed myriad creative ways to use it to redefine modern cybersecurity challenges.
Phishing is an old form of social engineering that large language models are breathing new life into. Savvy attackers use LLMs to craft phishing emails that sound more natural. Given enough information, they can generate messages that convincingly imitate the institutions and individuals their targets interact with.
Facial and voice recognition advancements allow crooks to impersonate anyone in real time, and they need very little information to pull it off. Scams involving voice cloning have already resulted in high-profile cases where unsuspecting employees helped the scammers siphon millions from the banks and businesses they work for.
Another crafty approach involves turning cybersecurity tools against themselves. Machine learning already helps antivirus and antimalware software develop adaptive responses to emerging threats. Hackers can release new malicious code into the wild, observe how these ML-assisted tools handle it, and then craft new strains that slip through the remaining gaps, repeating the cycle as cybersecurity developers catch up.
Malware is getting smarter in other ways, too. For instance, it can monitor a compromised system to determine when to activate, often weeks or months after infection. Some strains are even sophisticated enough to adapt to detection and quarantine attempts on the fly, including when the infected system goes offline.
What Can ML Developers and the General Public Do?
The AI development community is aware of the moral and ethical dilemmas its work raises. There’s still no comprehensive legal framework regulating artificial intelligence. However, leading voices in the community are already working to put AI on a course that furthers its development without endangering humanity.
A fruitful coexistence between humanity and AI depends on decision-makers. It can only succeed if they respect human rights and develop AI technologies with fairness, freedom from bias, and sustainability at the forefront. Their continuing efforts should also focus on making AI more robust and secure against malicious threats.
Individuals will need to reevaluate how they create and share information, and some may need to abandon careers ML will help automate. Right now, anyone can shore up their cybersecurity by investing in tools like a VPN app to obscure their online activity and minimize the amount of information AI systems can gather about them.
Set-it-and-forget-it solutions won’t protect us from our own nature. Chatbots and other types of people-facing AI are becoming commonplace. It’s up to us to carefully consider what information to share with them and how to behave in a digital environment where our actions are increasingly scrutinized, cataloged, and used for, among other things, AI refinement.
Conclusion
Machine learning likely won’t lead to humanity’s downfall. However, it has helped set in motion a Copernican shift in society whose consequences are impossible to anticipate. Regardless of the outcome for our society, humanity, and the planet, exciting times unquestionably lie ahead.