Robotics and AI not only have the potential to make our lives more comfortable but also to help us solve the significant challenges of modern society. In healthcare, for instance, they can markedly improve the early detection and treatment of illnesses. Nursing assistance systems, in particular, could support high-quality care and thereby bolster the autonomy and quality of life of people in need of care. In addition, robotics and AI technologies can take over dangerous, monotonous, repetitive, physically unhealthy, or strenuous activities from humans, such as inspection work in mines or cleanup work in radioactively contaminated areas.
While discussing these opportunities, we must not, however, forget the ethical and social challenges that remain to be solved. The ethical concerns raised by robotics and AI depend on their capabilities and domain of use. In the following section, we outline seven key ethical concerns and challenges.
Bias – When equipped with machine learning, robotics and AI systems can inherit bias from their training data into their decision-making. A system trained on an ethnically biased set of images, for instance, is more likely to fail to recognize members of certain ethnic groups as human, due to skin color or clothing, placing those groups at higher risk.
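A minimal sketch can make this failure mode concrete. Here a detection threshold is tuned only on an over-represented group A; all names, scores, and the threshold rule are invented for illustration, not drawn from any real system:

```python
# Hypothetical sketch: a detector threshold tuned only on the
# over-represented group A fails far more often on group B.

# Synthetic "detection scores" (all values invented).
group_a_scores = [0.70, 0.75, 0.80, 0.85, 0.90]  # well represented in training
group_b_scores = [0.40, 0.45, 0.50, 0.55, 0.60]  # barely represented

# Threshold chosen so every training example (group A only) is detected.
threshold = min(group_a_scores) - 0.05

def detected(score: float) -> bool:
    return score >= threshold

miss_rate_a = sum(not detected(s) for s in group_a_scores) / len(group_a_scores)
miss_rate_b = sum(not detected(s) for s in group_b_scores) / len(group_b_scores)

print(f"miss rate, group A: {miss_rate_a:.0%}")  # 0%
print(f"miss rate, group B: {miss_rate_b:.0%}")  # 100%
```

The model is "accurate" on the data it was tuned on while failing systematically on the group missing from that data, which is exactly the risk described above.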
Deception – Humanoid or zoomorphic robots present the risk, especially to naïve or vulnerable users, of emotional attachment or dependency (given that it is relatively easy to design a robot to behave as if it has feelings). The 4th EPSRC Principle of Robotics states, “Robots are manufactured artifacts. They should not be designed in a deceptive way to exploit vulnerable users; instead, their machine nature should be transparent.”
Employment – The introduction of robots and AI might displace certain classes of workers, e.g., taxi drivers and operators of quarrying machines.
Opacity – Where decisions are not transparent or open to scrutiny, there is a possibility that they are both unfair (unjust) and not open to correction. The introduction of the General Data Protection Regulation (GDPR) brings with it a “right to explanation,” motivated by the problem of opacity.
Safety – Robotics and AI can both positively and negatively impact safety. The original motivation for research on autonomous vehicles was to improve road safety by reducing or removing human error as a cause of accidents. But as recent accidents with AVs in the US have shown, such technology can also cause fatalities.
Oversight – Robots and AI systems often operate in open environments, where it is difficult to monitor and assess their behavior.
Privacy – Robotics and autonomous systems may contain, and be able to provide to third parties, data that could violate an individual’s right to privacy. For example, an AV is likely to know where the owner or occupant traveled, and this might allow a stalker to track them, or to show they were involved in criminal activity.
Here, we recommend five ways to address these challenges.
1. Compliance with the European Union’s principles
In developing and using robots and AI systems, general ethical principles must be observed. We should therefore ensure that the deployment of robots and AI systems safeguards the principles of human rights, including data protection and privacy, as well as the related principles of human dignity, human freedom, and autonomy. We recommend that AI systems, as far as technically possible, be designed transparently and that decisions taken by AI systems be made comprehensible (“explainable AI”). Experts are currently discussing the implementation of an “Ethical Black Box” and, alternatively, the “Counterfactual Explanations” approach. In addition, more far-reaching values and general principles such as justice and fairness, diversity and inclusion, solidarity, and the protection of vulnerable people must be taken into account in the course of developing and deploying new technologies.
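The counterfactual-explanations idea can be sketched in a few lines: instead of opening up the model, we search for the smallest change to an input that flips its decision. The toy credit rule and all values below are invented placeholders standing in for an opaque model:

```python
# Hypothetical sketch of a counterfactual explanation: find the smallest
# increase in one input feature that flips a (toy) decision rule.

def approve(income: float, debt: float) -> bool:
    """Toy decision rule standing in for an opaque model (invented)."""
    return income - 0.5 * debt >= 50.0

def counterfactual_income(income: float, debt: float, step: float = 1.0) -> float:
    """Raise income in small steps until the decision flips."""
    candidate = income
    while not approve(candidate, debt):
        candidate += step
    return candidate

income, debt = 40.0, 20.0
print(approve(income, debt))                    # False: application denied
needed = counterfactual_income(income, debt)
print(f"approved if income were {needed:.0f}")  # 60
```

The explanation handed to the affected person is then actionable ("you would have been approved with an income of 60") without requiring the model's internals to be disclosed.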
2. Safety and security of technologies
We must ensure that robotics and AI technologies are safe to use and comply with general development standards. Critical questions in this context are whether there is a need to adapt existing rules and whether new measures are needed. The safety and security of new technologies include both machine safety and IT security, as well as the integration of these aspects. Attention should also be paid to enabling a high subjective sense of safety in using robots and AI systems.
3. Human Oversight
Although robots and AI systems are becoming more and more intelligent, their moral status is still unclear. In practical applications, the question arises whether robots and autonomous AI systems can be moral agents (in a functional, not philosophical, sense) that are both able and permitted to make ethical decisions on their own. This aspect is discussed, for example, in connection with autonomous vehicles, which may find themselves in situations where they have to make ethical decisions. Such decisions could range from breaking laws to avoid an accident (e.g., ignoring a stop sign to prevent a rear-end collision) to deciding which life is more worth protecting in the event of an unavoidable crash (e.g., that of the passengers or of other road users).
In view of the increasing autonomy of AI and robotic systems, it is necessary to ensure human oversight in every situation and to define a clear moral and legal framework that outlines the responsibilities of all parties involved (e.g., developers, manufacturers, operators, users, and customers). Algorithms and computational precautions for ethical behavior should be integrated directly into machine architecture and machine design to minimize possible ethical violations and amoral behavior in autonomous machines.
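One common pattern for building such precautions into the architecture is to filter every candidate action against a set of hard constraints before execution. The sketch below is a minimal illustration of that idea; the rules, action names, and fields are all invented:

```python
# Hypothetical sketch: candidate actions are checked against hard
# ethical/safety constraints *before* the machine executes them.

# Invented placeholder rules; a real system would encode domain-specific
# legal and ethical requirements here.
HARD_CONSTRAINTS = [
    lambda a: a["speed"] <= 30,            # never exceed a safe speed near people
    lambda a: not a["enters_closed_zone"], # never enter a restricted area
]

def permitted(action: dict) -> bool:
    """An action is permitted only if it satisfies every hard constraint."""
    return all(rule(action) for rule in HARD_CONSTRAINTS)

candidates = [
    {"name": "swerve_fast", "speed": 45, "enters_closed_zone": False},
    {"name": "brake",       "speed": 10, "enters_closed_zone": False},
]

allowed = [a["name"] for a in candidates if permitted(a)]
print(allowed)  # ['brake']
```

The key design choice is that the constraint check sits between planning and actuation, so no plan, however optimal by other criteria, can bypass it.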
4. Consideration of social and ecological consequences
Robotics and AI technologies must align with human needs, respect human behavior and social diversity, and must not enforce any inhuman or inhumane adaptation of humans to technology. It must be ruled out that these technologies lead to biased or discriminatory treatment of individuals. It is also crucial to assess the potential economic and employment impact of new technologies at an early stage and to develop technologies that support sustainable economic activity, beneficial both economically and ecologically.
5. Initiating public debate on Robotics and AI
We must ensure that political decision-makers reflect, already today, on fundamental social questions connected with robotics and AI; some of these questions may not be relevant immediately, but we will be confronted with them in the medium term. Experts from various disciplines, interest groups, and the general public should be involved, at a meta level, in a comprehensive and interactive discourse on these fundamental issues. This includes, for example, a debate on how AI technologies can influence our democratic system and how we can uphold democracy and the rule of law in the long term. We also need to discuss how work, paid and unpaid, should be distributed between humans and machines in the future.
To sum up, robotics and AI will undoubtedly play an essential role in addressing the challenges of modern society. However, we first have to clarify which roles we want to assign to these technologies.