Why are most people afraid of machine learning (ML)?

There are vast potential benefits from machine learning (ML) uptake across industry sectors. The economic effects of this powerful technology can play a central role in helping new businesses thrive and in addressing the productivity gap worldwide.

Today, machine learning is deployed in systems and situations that shape our daily lives, whether detecting credit card fraud, providing recommendations, or supporting search engines. As a result of this increasing pervasiveness, many of us interact with ML-based systems without necessarily realizing how powerful the technology is.

Yet, studies show that early adoption does not guarantee continued support by all, or most, of the public. The disruptive nature of ML brings with it various challenges for society at large. Its new and groundbreaking applications raise questions about public confidence and acceptability. Furthermore, there is a fear that the undesirable use of ML in one area can undermine confidence in its use in other areas.

Although people are generally content with robots that draw on machine learning to carry out autonomous functions, many, including leading tech personalities, believe that robots could be dangerous or harmful to humans. People therefore look less favorably on robots in more personal or caring roles, largely for fear of losing human-to-human contact.

Continued public confidence in ML systems is therefore central to the technology's ongoing success and to realizing the benefits it promises across sectors and applications. Unfortunately, the public does not hold a single view of ML: attitudes, whether positive or negative, vary depending on the circumstances in which machine learning is used.

The following are the key general concerns about machine learning and its applications:

  • The potential for machine learning systems to cause harm.
  • The possibility that machines in the workplace could replace people.
  • The extent to which systems using machine learning might make experiences less personal or human.
  • The idea that machine learning systems could restrict the choices open to an individual.

1. Harm

Central to many concerns about ML is the fear that individuals could be harmed in the process, either directly (physical harm as a result of interacting with an embodied system) or indirectly, through the consequences of a machine learning-driven decision such as a misclassification or misdiagnosis.

The strength of concern about harm varies across machine learning applications. Whether a system involves machine agents or robots acting autonomously in the physical world plays a large role in determining how prominent harm is as a concern: applications such as driverless vehicles or social care, where a physical agent operates independently, tend to be associated with a greater perceived risk of harm.

What steps would give people more confidence in deploying such systems and, in doing so, address concerns about potential harm?

  • Reassurance that systems are robust, with appropriate validation and testing;
  • Strong evidence of safety, and in some cases evidence that machine learning is more accurate than humans carrying out an equivalent function; and
  • In cases where the outcomes at stake are significant, some level of human involvement, either making the final decision based on a machine's recommendation or taking an oversight role.

2. Replacement

Concerns that machine learning systems could replace humans manifest in two ways. Firstly, the potential impact of machine learning on employment is a clear area of concern and an issue of high salience for the public. People can see clear links to previous advances in technology and their impact on the workforce; for example, the automation of car production lines replaced human roles.

Where previous technological advances displaced parts of the workforce, the huge range of potential applications that people see for machine learning, itself a key opportunity, intensifies concerns about the displacement of human roles. And where previous advances in automation affected a specific group, such as those involved in car production, one fear expressed is that the versatility of machine learning could cause mass unemployment.

Secondly, the applicability of machine learning to everyday activities prompts questions about whether it will replace individual skills, such as reading a map or driving a car. Such over-reliance on technology, and the de-skilling it could bring, raises questions about people's ability to exercise effective judgment in situations where the relevant technology is not available.

3. Impersonal experiences or services

A related concern is that of depersonalization. For some, this is an intense reaction to the possibility of machine learning changing their relationship with an activity of personal significance: the feeling of freedom or autonomy that comes from driving a car, for example, or the enjoyment taken in reading poetry and relating to the person who wrote it. This reaction is closely linked to the development of specific applications in areas people consider integral to expressing their individuality or finding fulfillment.

For others, concerns about depersonalization are connected to the delivery of key services or to interactions with key personnel, frequently in caring roles or other scenarios where the ability to give an accurate response is not the sole measure of success. Qualities such as human empathy or personal engagement are generally desirable and particularly important in health or social care areas. The prospect of reducing meaningful human-to-human interaction is, therefore, a concern.

4. Restrictions on human experience

Machine learning allows the analysis of vast quantities of data, more than humans could deal with, and the use of that data to make decisions or predictions. However, people question whether machine learning can produce nuanced interpretations, fearing it generates broad generalizations rather than individual predictions. Two areas of concern arise from this lack of confidence in the accuracy of machine learning systems, namely:

  • That people could be mislabelled or inadvertently stereotyped and have their activities mistakenly restricted as a result, with significant consequences for individual freedoms, finances, or safety.
  • That machine learning could generate an algorithmic bubble, in which unusual or challenging opinions, experiences, or interactions are filtered out, ultimately narrowing the horizons of its users.

To address these public concerns about machine learning, continued engagement between researchers and the public is necessary. People working in machine learning should be aware of public attitudes toward the technology, and large-scale programs in this area should include funding for public engagement activities by researchers. In addition, the government could further support this through its public engagement framework.