Helpful and harmful AI – Benefits and threats of artificial intelligence

Every major technological innovation carries the potential for social benefit or social harm. The data-processing and analytic capabilities of artificial intelligence can help address some of the world’s most pressing problems, from improving the diagnosis and treatment of disease to transportation, urban development, and climate change mitigation. Yet the same capabilities can also be used to surveil the most vulnerable to an unprecedented degree, to identify and discriminate against them, and to reshape the economy faster than any job-retraining program can keep pace. And despite the critical advances already made in AI development, the so-called “artificial intelligence revolution” is only about a decade old.

Below are some of the ways AI can help or harm societies. Note that even “helpful” uses of AI can have negative consequences. Many healthcare AI applications, for example, pose serious threats to privacy, risk disadvantaging underserved communities, and concentrate data ownership in large companies.

At the same time, using AI to mitigate harm may not address underlying problems and should not be treated as a cure for social ills. For example, although AI may alleviate the shortage of medical practitioners in underserved areas, it does not provide the resources or incentives those professionals would need to relocate there. Similarly, some uses classified here as “harmful” began with good intentions but have nonetheless caused significant harm.

Benefits of AI

Improved access to healthcare and disease outbreak prediction: Significant progress has already been made in using AI for the diagnosis and prevention of disease, and AI is also being deployed in regions lacking access to healthcare. In addition, AI allows health officials to intervene early and contain an outbreak before it spreads.

Making life easier for the visually impaired: Image-recognition tools help people with visual impairments better navigate both the Internet and the physical world.

Optimization of agriculture and adaptation of farmers to change: AI combines global satellite imagery with weather and agronomic data to help farmers improve crop yields, diagnose and treat plant disease, and adapt to changing environments. This approach, known as precision farming, can help raise agricultural productivity to feed a growing world population.

Climate change mitigation, natural disaster prediction and wildlife conservation: As the effects of climate change are felt globally, machine learning is helping scientists make climate models more accurate. AI is already being used to refine climate models, to better forecast extreme weather events, and to respond to natural disasters. AI also helps conservationists identify and track wild animals, including those that spread disease.

Increasing the efficiency and accessibility of government services: While the public sector is often slow to adopt new technologies, governments around the world are using AI to make public services more efficient and accessible at the local and national levels, with particular emphasis on “smart city” development. AI is also used to allocate government resources and optimize budgets.

Risks of AI

Continuing bias in criminal justice: The criminal justice system is home to many of the most troubling documented uses of AI. AI is typically deployed in this context in two areas: risk assessment, which estimates how likely a defendant is to re-offend in order to inform sentencing and bail decisions, and so-called “predictive policing,” which uses insights drawn from many data points to predict where or when a crime will occur and to direct law enforcement accordingly.

These efforts are most likely well-intentioned. Machine-learning risk assessment is promoted as removing the known human biases of judges in sentencing and bail decisions. And predictive policing aims to allocate often-limited police resources as effectively as possible to prevent crime, though the undertaking remains fraught with risk. In practice, however, the recommendations of these AI systems often perpetuate and even amplify the biases of the underlying systems, whether directly or by incorporating factors that serve as proxies for them.

Facilitating mass surveillance: Because AI can process and analyze multiple data streams in real time, it is no surprise that it is already being used for mass surveillance worldwide. AI-powered facial recognition software is the most pervasive and dangerous example. Although the technology remains imperfect, governments are turning to facial recognition as a tool to monitor their citizens, profile groups, and even identify and locate individuals.

Discriminatory profiling: Facial recognition software is used not only to monitor and identify individuals but also to target and discriminate against specific groups.

Spread of disinformation: AI can be used to create and disseminate targeted propaganda, an effect compounded by social media algorithms that promote whatever content users are most likely to engage with. Machine learning also lets social media companies build user profiles for targeted advertising. Meanwhile, bots disguised as real users spread material within targeted social media circles, sharing links to false sources and, through natural language processing, actively interacting with users as chatbots.

Moreover, AI systems that produce realistic video and audio recordings of real people, so-called “deepfakes,” raise fears that the technology will be used maliciously, for example to forge videos of world leaders. Although deepfakes do not yet appear to have been used in an actual propaganda or disinformation campaign, and forged audio and video are still not good enough to fool humans entirely, their potential to sow chaos, spark conflict, and provoke a crisis of truth should not be discounted.

Perpetuating job-market bias: Recruitment has long been riddled with prejudice and discrimination. In response, an entire industry has emerged that uses AI to remove human bias from hiring. Ultimately, however, many of these products risk perpetuating the very biases they aim to eliminate. The main reason is the common practice of training the ML model on historical data about past “successful” employees, which naturally reproduces the bias of previous recruitment decisions.
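The mechanism can be sketched in a few lines. The data and the scoring rule below are entirely hypothetical; the point is only that a model trained on historically biased hiring outcomes reproduces that bias through a correlated proxy feature (here, a postcode group), even when no protected attribute is used directly.

```python
# Toy illustration (hypothetical data): past recruiters favored postcode
# group A regardless of skill, so a model "learned" from those outcomes
# scores group A higher even for equally skilled candidates.

# Each past candidate: (postcode_group, skilled, hired).
history = [
    ("A", 1, 1), ("A", 0, 1), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 1, 1), ("B", 0, 0), ("B", 0, 0),
]

def hire_rate(group):
    """Fraction of past candidates from this group who were hired."""
    outcomes = [hired for g, _, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

# A naive learned score: the historical hire rate per postcode group.
score = {g: hire_rate(g) for g in ("A", "B")}

print(score["A"], score["B"])  # → 0.75 0.25
```

The model never sees any protected attribute, yet the postcode proxy alone carries the old discrimination forward into new recommendations.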

Financial discrimination against the marginalized: Algorithms have long been used to create credit scores and inform lending decisions. With the rise of big data, however, systems now use machine learning to incorporate and analyze non-financial data points in determining creditworthiness: how people live, how they browse the Internet, and how they shop. The outputs of these systems are known as e-scores and, unlike formal credit scores, are mostly unregulated. As data scientist Cathy O’Neil has pointed out, such scores are often discriminatory and create harmful feedback loops.
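The feedback loop O’Neil describes can be illustrated with a deliberately simplified simulation. All coefficients below are hypothetical; the sketch only shows the qualitative dynamic in which a low score raises borrowing costs, higher costs raise default risk, and defaults drag the score down further.

```python
# Toy feedback-loop sketch (hypothetical coefficients): an e-score
# between 0 and 1 is repeatedly updated based on the consequences of
# the credit terms it produced.

def update_score(score, rounds=3):
    for _ in range(rounds):
        # A lower score means pricier credit...
        interest = 0.05 + (1 - score) * 0.20
        # ...pricier credit raises the chance of default...
        default_risk = min(1.0, interest * 2)
        # ...and higher default risk drags the score down further.
        score = max(0.0, score - default_risk * 0.2)
    return score

high, low = update_score(0.9), update_score(0.4)
print(high > low, (high - low) > (0.9 - 0.4))  # gap widens over time
```

Even in this crude model, two borrowers who start 0.5 apart end up further apart after a few rounds: the score does not merely measure risk, it helps manufacture it.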