Five industries facing tough AI ethics questions

Artificial intelligence is shaping the future of the modern world. Should that worry people?

AI used to be firmly confined to the realm of science fiction. Sometimes it was dangerous, like in “The Terminator.” Other times it was helpful, like KITT, the AI car in “Knight Rider.” Today, AI seems to be both those things at once, and it is becoming more difficult to tell when it crosses a line.

Ethical debates surrounding AI, from data bias to artificial morality, are cropping up in virtually every industry. Here are a few that are at the center of the controversy.

1. Autonomous Vehicles

Some of the most challenging ethical debates concerning AI are in the autonomous vehicle (AV) industry. This may sound surprising to those unfamiliar with the issue. After all, AI is also being used for very serious applications like law enforcement and military functions. However, the everyday nature of autonomous vehicles is exactly what makes the major ethical debate surrounding them so complex.

It comes down to accident programming. If an AV is in a situation on the road where an accident is inevitable, what should the AI prioritize? If it opts to save the operator, a family in another nearby vehicle could be hurt. If it opts to save a pedestrian in the crash zone, the AV operator could get injured instead. How do programmers determine what to tell the AI to do?
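To make “accident programming” a little more concrete, the sketch below (in Python, purely for illustration) encodes the crash choice as a weighted harm-minimization rule. The maneuvers, harm scores, and weights are all hypothetical, not anything a real AV manufacturer has published; the point is simply that someone has to pick the weights, and that choice is exactly the ethical question at stake.

```python
# Toy illustration only: a hypothetical "accident programming" rule that picks
# the maneuver minimizing a weighted harm score. All names and numbers are
# invented for this sketch; real AV planning systems are far more complex.

from dataclasses import dataclass

@dataclass
class Outcome:
    maneuver: str
    harm_to_occupants: float     # 0 (none) .. 1 (severe), hypothetical estimates
    harm_to_pedestrians: float
    harm_to_other_vehicles: float

def total_harm(o: Outcome, weights: dict) -> float:
    """Combine harms using weights that encode whose safety is prioritized."""
    return (weights["occupants"] * o.harm_to_occupants
            + weights["pedestrians"] * o.harm_to_pedestrians
            + weights["other_vehicles"] * o.harm_to_other_vehicles)

def choose_maneuver(outcomes: list, weights: dict) -> Outcome:
    """Pick the predicted outcome with the lowest weighted harm."""
    return min(outcomes, key=lambda o: total_harm(o, weights))

if __name__ == "__main__":
    outcomes = [
        Outcome("brake hard",   harm_to_occupants=0.6, harm_to_pedestrians=0.1, harm_to_other_vehicles=0.2),
        Outcome("swerve left",  harm_to_occupants=0.2, harm_to_pedestrians=0.7, harm_to_other_vehicles=0.1),
        Outcome("swerve right", harm_to_occupants=0.3, harm_to_pedestrians=0.2, harm_to_other_vehicles=0.8),
    ]
    # The ethical question lives entirely in these weights.
    protect_occupants_first = {"occupants": 2.0, "pedestrians": 1.0, "other_vehicles": 1.0}
    protect_pedestrians_first = {"occupants": 1.0, "pedestrians": 2.0, "other_vehicles": 1.0}

    print(choose_maneuver(outcomes, protect_occupants_first).maneuver)    # favors the operator
    print(choose_maneuver(outcomes, protect_pedestrians_first).maneuver)  # favors people outside the car
```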

Should some types of people, like the elderly and children, be prioritized over others? This might sound like a logical solution on the surface, but it opens a Pandora’s box of new ethical dilemmas. Every culture has different standards for who should be protected. A 2016 study designed to investigate this exact issue used a game to ask people how they would want an AV to respond in a crash situation. The choices participants made about who to save revealed significant bias and wide variation in values, making it difficult to pin down a universal standard that everyone would consider fair.

If drivers were allowed to select their accident programming preferences, wouldn’t everyone choose to prioritize themselves over other people on the roads? Would this make it more dangerous for non-AV drivers? No one has yet been able to find an answer that everyone can agree on. This debate could very well delay the mainstream adoption of AVs.

2. Medicine and Health Care

Health care is in a particularly tricky position regarding AI adoption, which is one reason AI is not yet widely accepted as a diagnostic tool in medicine. Patients are just as skeptical as doctors. The ethical dilemma behind this skepticism draws on complex issues that have plagued the entire history of health care.

Many wonder whether AI can be trusted to deliver unbiased medical treatment and diagnoses, offering comprehensive care for people of any race, gender, or age. Care standards and practices also differ around the world, which complicates things further. As a result, patients and doctors alike doubt whether AI should be used in medicine at all, at least for diagnostic purposes.

For example, Dr. Olya Kudina, a researcher at the Delft University of Technology, offered her perspective in a report from Yale University. “Right now, many AI technologies work within a narrow, restricted viewpoint that overlooks cultural and societal assumptions, expectations, and truths,” Kudina explained. “This needs to change to be relevant for use in more than just one setting.”

Another concern surrounding medical AI is that the training data used to build these systems will lead to invisible biases against certain groups of people. This is an infamous issue with black box AI, whose decisions cannot be traced even by its developers and users. For example, it would be difficult to detect if a model developed a bias against diagnosing women with certain rare conditions, which could lead to missed diagnoses even when a patient presents with all the symptoms.
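One way to picture why black box bias is so hard to catch is an external audit that looks only at outcomes. The toy sketch below assumes a hypothetical opaque diagnostic model (the `black_box_predict` stand-in and all patient data are invented) and simply compares how often it misses true cases in each group; the disparity becomes visible only because someone deliberately measures it per group.

```python
# Toy external audit of a hypothetical black-box diagnostic model.
# We can't see how the model decides, but we can measure how often it
# misses true cases in each group. All data and the model are invented.

def black_box_predict(patient: dict) -> bool:
    """Stand-in for an opaque model; imagine this is an API we can't inspect.
    This fake version under-diagnoses women to make the disparity visible."""
    score = 0.8 if patient["has_symptoms"] else 0.1
    if patient["sex"] == "female":
        score -= 0.4  # hidden, learned-from-biased-data behavior (simulated)
    return score > 0.5

def false_negative_rate(patients: list) -> float:
    """Share of patients who truly have the condition but are not flagged."""
    positives = [p for p in patients if p["has_condition"]]
    missed = [p for p in positives if not black_box_predict(p)]
    return len(missed) / len(positives) if positives else 0.0

if __name__ == "__main__":
    # Hypothetical patients who all present the same symptoms and condition.
    cohort = (
        [{"sex": "male", "has_symptoms": True, "has_condition": True}] * 50
        + [{"sex": "female", "has_symptoms": True, "has_condition": True}] * 50
    )
    for sex in ("male", "female"):
        group = [p for p in cohort if p["sex"] == sex]
        print(f"{sex}: false negative rate = {false_negative_rate(group):.0%}")
    # Prints 0% for men vs 100% for women in this contrived example: the kind
    # of gap that only a deliberate per-group audit would surface.
```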

3. Court and Legal System

Law might not be the first application that comes to mind when people think of AI. However, it could be one of the most impactful and controversial. AI models already in use in law and government are prompting some extremely difficult and important conversations about trust in AI and the role of technology in the legal system.

For example, one application for AI in law and government is profiling people convicted of crimes. A machine-learning algorithm designed to rate individuals’ risk of reoffending showed significant bias against people of color. This wasn’t intentional, but the results were clear. The algorithm rated a Black woman who stole a bike “high risk,” while a white man who had committed armed robbery multiple times was rated “low risk.” These assessments can significantly impact courtrooms, so such a blatant data bias is a big deal.

Legal technology experts have stressed the need for some kind of regulation or legislation to control the use of AI in legal settings. AI is being used for dozens of different tasks and applications in law, including legal research, advising, and guidance for judges. In all these applications, AI bias can cause serious harm to those involved in court cases.

This raises the question: Is it right for AI to be used in legal proceedings at all? The law is deeply rooted in human ideas of what is right and wrong. AI has no way of understanding ethics and morals, so how can humans trust it to make morally just decisions and judgments?

4. Policing and Law Enforcement

One particularly concerning area of government and legal AI applications is law enforcement. Debates around police AI have become heated in recent years because of data bias. AI is adding another layer of ethical controversy to a field already troubled by discrimination concerns. A tool that many hoped would resolve discrimination seems only to be making the problem worse.

Data bias is a severe issue in law enforcement AI. From facial recognition to predictive policing, experience has shown that AI reflects racial discrimination rather than eliminating it. A United Nations panel confirmed this reality, pointing out the danger that policing AIs can create “feedback loops” that worsen discrimination. AI’s opaque, invisible nature only deepens civilian distrust in law enforcement.

Law enforcement AI models, such as predictive policing algorithms, may have been created with good intentions. The problem is that the training data used to inform them reflects decades, if not centuries, of heavily biased policing against people of color, including records of people being pulled over or arrested without just cause, a problem that continues today. Many fear that AI will only perpetuate discriminatory policing practices, except that it will be harder to tell where the bias is coming from because a computer is responsible.
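The “feedback loop” the UN panel warned about can be illustrated with a deliberately crude simulation. The sketch below is a toy, not any real predictive policing product: two neighborhoods have the same underlying offense rate, but patrols are allocated in proportion to past recorded arrests, and only patrolled offenses get recorded, so a historical disparity keeps reproducing itself in the data.

```python
# Toy feedback-loop simulation: patrols follow past recorded arrests, and new
# records follow patrols, so a historical disparity keeps reproducing itself.
# Both neighborhoods have the same true offense rate; only the starting
# records differ. Purely illustrative; every number here is invented.

import random

random.seed(0)

TRUE_OFFENSE_RATE = 0.05      # identical in both neighborhoods
DETECTION_PER_PATROL = 0.005  # fraction of local offenses each patrol unit records
TOTAL_PATROLS = 100
RESIDENTS = 10_000

# Historical records are already skewed against neighborhood B.
recorded = {"A": 100, "B": 300}

for year in range(1, 6):
    total = sum(recorded.values())
    # The "predictive" step: patrols go where past records are highest.
    patrols = {hood: round(TOTAL_PATROLS * count / total) for hood, count in recorded.items()}
    for hood in recorded:
        # Offending behavior is identical in both neighborhoods...
        offenses = sum(random.random() < TRUE_OFFENSE_RATE for _ in range(RESIDENTS))
        # ...but only offenses that a patrol happens to observe get recorded.
        coverage = min(1.0, patrols[hood] * DETECTION_PER_PATROL)
        recorded[hood] += round(offenses * coverage)
    share_b = recorded["B"] / sum(recorded.values())
    print(f"year {year}: neighborhood B's share of all records = {share_b:.0%}")

# Neighborhood B's share of the records stays around 75% year after year,
# even though the underlying behavior in the two neighborhoods is identical.
```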

Facial recognition AIs have become infamous for their data bias. Experts have documented clear patterns of discrimination against people of color, particularly men of color. One AI model flagged far more people of color as suspicious or “abnormal,” with men almost twice as likely as women to be flagged. This reality is sparking distrust and backlash worldwide, and some cities, such as Portland, Oregon, have even passed legislation banning facial recognition AI in law enforcement.

5. Business

One of the most sought-after applications for AI in the business world is as a hiring tool. Managers can receive dozens of applications for a single position, if not hundreds or even thousands, so a digital tool for narrowing down the field would be extremely helpful. It seems inevitable that AI will infiltrate the business world, but its effects could be far different than expected. Many people believed AI could eliminate gender and racial bias in hiring, a hope that has so far brought mostly disappointment.

Unfortunately, the problem of data bias has created controversy and division in the business world. Perhaps the most infamous case is Amazon’s now-retired hiring AI, which favored male candidates over women. Resumes that so much as mentioned the word “women” or “woman,” such as “captain of the women’s chess team,” were ranked as less qualified than those of male candidates. Stories like this have led to widespread distrust of hiring AIs and a general sentiment that using them is lazy at best and discriminatory at worst.

The problem comes back to training data, but the solution is not as clear. Due to decades of discrimination against and exclusion of women, men are historically overrepresented in certain roles, such as engineering and executive positions. When an AI is trained to recognize a high-quality candidate based on previous employees or resumes, it may therefore learn to connect “male” with “good candidate.” This artificial advantage has no basis in reality, but the AI doesn’t know that.

There is no way for developers or users to recognize that this bias in AI exists until the results start coming in and demonstrating it. The only viable solution for discrimination in hiring and other applications is a new type of AI altogether, one designed with transparency as a priority. This explainable AI is engineered so people can see and understand exactly how it makes its decisions, allowing data bias to be caught and stopped early on.
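Tying the last two paragraphs together, here is a tiny, entirely invented sketch of how a resume scorer trained on a male-dominated hiring history can end up penalizing the word “women’s,” and how exposing the learned weights, the kind of transparency an explainable model is built for, makes that bias visible. Every resume, word, and number below is made up for illustration; this is not any real vendor’s system.

```python
# Toy resume scorer trained on (invented) historical hiring decisions.
# It learns a weight per word from how often that word appears in past
# hires vs. rejections. Because the history is male-dominated, the word
# "women's" ends up with a negative weight even though it says nothing
# about ability. Purely illustrative.

from collections import Counter
import math

# Hypothetical history: mostly men were hired, so gendered terms correlate
# with the outcome even though they are irrelevant to the job.
history = [
    ("captain of the men's chess team, python, statistics", True),
    ("men's rugby club, java, databases", True),
    ("python, statistics, led engineering project", True),
    ("captain of the women's chess team, python, statistics", False),
    ("women's coding society, java, databases", False),
]

def train(history):
    """Learn a log-odds weight per word from past hire/reject outcomes."""
    hired, rejected = Counter(), Counter()
    for text, was_hired in history:
        (hired if was_hired else rejected).update(text.replace(",", "").split())
    weights = {}
    for word in set(hired) | set(rejected):
        # add-one smoothing so unseen counts don't blow up the log
        weights[word] = math.log((hired[word] + 1) / (rejected[word] + 1))
    return weights

def score(resume, weights):
    """Sum the learned word weights for a resume."""
    return sum(weights.get(w, 0.0) for w in resume.replace(",", "").split())

weights = train(history)

# Two otherwise-identical resumes differ only in one gendered word.
a = "captain of the men's chess team, python, statistics"
b = "captain of the women's chess team, python, statistics"
print(f"resume A scores {score(a, weights):+.2f}, resume B scores {score(b, weights):+.2f}")

# The "explainable AI" step: expose the learned weights so the bias is visible.
for word, w in sorted(weights.items(), key=lambda kv: kv[1]):
    print(f"{word:12s} {w:+.2f}")
```

Run as written, the two resumes get very different scores, and the weight listing shows “women’s” at the bottom; that printout is exactly the kind of evidence an opaque system would never volunteer.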

Ethical Innovation

There are clear similarities among the ethical debates surrounding AI across industries. The recurring theme is that it isn’t enough to develop new technology that can do amazing new things. Innovation must be ethical, designed to improve the lives of all people, regardless of race, gender, age, or any other factor. As a global society, we must decide where to draw the line between humans and algorithms.

The world must take a new approach for AI to move forward and fulfill the ambitions its early adopters had for it. New models and algorithms must be designed to put transparency and trust first.