10 principles for ethical artificial intelligence (AI) in the workplace

Over the past 30 years, technology has been a significant source of new job creation and opportunity. In the recent past, however, with the rapid development of digital and artificial intelligence (AI) capabilities, technology has started to displace skilled workers, creating several concerns about future job losses.

Small and large businesses alike, across various sectors, favor automating typical human-resource tasks, complementing or even substituting workers to gain efficiency, productivity, and significant cost savings. This growing trend is expected to cause unintended negative consequences in the job market soon.

According to a recent OECD analysis, 14% of jobs are at high risk of automation, and another 32% of workers are likely to see a substantial change in their careers. The risk of automation is highest among senior workers and teenagers. Experts believe that the rapid shift to automation and autonomous systems will affect not only freight, taxi, delivery, and other service jobs but also lawyers, medical personnel, and finance and hiring professionals.

IBM’s Watson and DeepMind Health outperformed human physicians in diagnosing rare cancers in 2016, while in 2018, a robot lawyer successfully appealed over USD 12 million worth of traffic tickets. AI has proven better than finance professionals at predicting stock exchange variations. Notably, Alibaba no longer employs temporary workers to handle consumer inquiries on days of high volume or special promotions. During Alibaba’s biggest sales day in 2017, chatbots dealt with more than 95% of customer questions, responding to some 3.5 million consumers.

To maintain a healthy balance of power and safeguard workers’ interests in workplaces, Switzerland-based UNI Global Union, therefore, put forth ten key principles for ethical AI, seeking innovative policies and partnerships. A leading voice on the global political and industrial stage, UNI represents more than 20 million workers from over 150 countries. You can read UNI’s top 10 principles for ethical AI below:

1. Transparency

Workers should have the right and opportunity to demand transparency in AI systems’ decisions, outcomes, and underlying algorithms. They should also be consulted on the development, deployment, and implementation of AI systems. UNI argues that open source code is neither necessary nor sufficient for transparency, since clarity can be obfuscated by complexity. It urges that, in the event of accidents, AI systems must remain accountable.

2. Ethical Black Box

All AI systems should contain a built-in “ethical black box”: a device that records all decisions, movements, and sensory data to ensure transparency and accountability. The recorded information should explain the system’s actions in language users can understand, fostering better relationships and improving the user experience.
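A minimal sketch of what such an “ethical black box” could look like in software is shown below. The class and field names (`EthicalBlackBox`, `DecisionRecord`, the screening example) are illustrative assumptions, not part of UNI’s principle; the point is simply an append-only log pairing each decision with a plain-language rationale.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    """One entry in the 'ethical black box': what was decided and why."""
    timestamp: float
    inputs: dict        # the data the system acted on
    decision: str       # the action or outcome produced
    explanation: str    # plain-language rationale a user can understand

class EthicalBlackBox:
    """Append-only log of AI decisions for transparency and accountability."""
    def __init__(self, path="decisions.log"):
        self.path = path

    def record(self, inputs, decision, explanation):
        entry = DecisionRecord(time.time(), inputs, decision, explanation)
        with open(self.path, "a") as f:
            f.write(json.dumps(asdict(entry)) + "\n")
        return entry

# Hypothetical example: logging an automated screening decision
box = EthicalBlackBox()
entry = box.record(
    inputs={"years_experience": 4, "skills_match": 0.82},
    decision="advance_to_interview",
    explanation="Candidate met the minimum experience and skills thresholds.",
)
```

Because each record carries its own human-readable explanation, the log can later be audited without access to the model internals.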

3. Serving people and planet

This involves a code of ethics for the development, application, and use of AI, so that throughout their entire operational lifecycle, AI systems remain compatible with and enhance the principles of human dignity, integrity, freedom, privacy, cultural and gender diversity, and fundamental human rights. In addition, AI systems must protect and even improve our planet’s ecosystems and biodiversity.

4. Human-in-command approach

The development and deployment of AI must be responsible, safe, and useful, where machines maintain the legal status of tools and legal persons retain control over, and responsibility for, these machines. This entails designing and operating AI systems to comply with existing laws, including privacy law. Workers must have the ‘right to explanation’ when AI systems are used in human-resource procedures, such as recruitment, promotion, or dismissal.

5. Genderless and unbiased AI

It is vital that AI systems are controlled for negative or harmful human bias in their design and maintenance, and that any bias, whether based on gender, race, sexual orientation, or age, is identified and not propagated by the system.
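One simple way such a bias check could be operationalized is a selection-rate audit across groups, sketched below. The function names and the audit data are hypothetical, and a real audit would use a richer fairness toolkit; this only illustrates the idea of measuring whether outcomes differ by group.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Per-group rate of positive outcomes in automated decisions.

    `decisions` is a list of (group, outcome) pairs, outcome 1 = selected.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Illustrative audit data: (group, selected?)
audit = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap = parity_gap(audit)  # group A selected at 2/3, group B at 1/3
```

A large gap does not by itself prove unfair bias, but it flags the system for the kind of human review this principle demands.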

6. Sharing the AI benefits

To the benefit of all humanity, the economic prosperity created by AI should be distributed broadly and equally. Consequently, both global and national policies aimed at bridging the digital economic, technological, and social divide are needed.

7. Fundamental freedom and rights

Since AI systems could displace workers, it is vital that policies are put in place to ensure social security, continuous lifelong learning to remain employable, and a transition to the new digital reality, including specific governmental measures to help displaced workers find new jobs. All AI systems must include a check and balance on whether their deployment and expansion go hand in hand with workers’ fundamental rights.

8. Global governance mechanism

UNI recommends the establishment of multi-stakeholder governance bodies on global and regional levels. The bodies should include AI designers, developers, researchers, manufacturers, owners, employers, lawyers, CSOs, and trade unions. They should establish whistleblowing mechanisms and monitoring procedures to ensure the transition and implementation of ethical AI.

9. Legal responsibility

UNI asserts that legal responsibility for a robot should be attributed to a person. Robots should be designed and operated to the extent possible to comply with existing laws and basic rights and freedoms, including privacy.

10. AI arms race

UNI urges a ban on lethal autonomous weapons, including cyber weapons. UNI also calls for a global convention on ethical AI that will help address and prevent the unintended negative consequences of AI while accentuating its benefits for workers and society.
