2022 consumer trends in Artificial Intelligence (AI)

ai-ml

AI represents the third era of computing: machines performing cognitive functions as well as or better than humans can.

AI is now used across most industries and is so deeply entwined with every aspect of life that it will reshape the knowledge economy over the next decade, automating some tasks currently performed by people and augmenting others.

AI helps designers and writers solve problems; it also detects fraud, improves crop yields, manages supply chains, and recommends products. In customer service centers, AI can predict call volume and recommend staffing levels, and it can gauge the emotional state and behavior of the person calling, allowing businesses to anticipate the solutions callers want.

AI can also improve customer lifetime value and service by revolutionizing how we gather product and consumer insights. This post will look at some of the most important AI customer trends for 2022.

1. Detecting Emotion

A new type of neural network can detect emotional states. Using radio waves, AI can detect subtle changes in heart rhythms, perform pattern analysis, and predict someone's emotional state at any given moment. A Queen Mary University of London team used a transmitting radio antenna to bounce radio waves off test subjects while they watched different videos, then trained a neural net to detect fear, disgust, joy, and relaxation. The system correctly identified emotional states 71% of the time, opening up new possibilities for health and wellness apps, job interviews, and the government/military intelligence community.
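
To make the pipeline concrete, here is a toy sketch, not the Queen Mary team's actual model, of a small Keras classifier that maps pre-extracted heart-rhythm features to the four emotion classes from the study. The feature size and the training data below are placeholders, and the signal-processing step that turns raw radio reflections into features is not shown.

```python
# Toy sketch only: a small Keras classifier over pre-extracted heart-rhythm features.
# NUM_FEATURES and the random data are placeholders, not the Queen Mary dataset.
import numpy as np
import tensorflow as tf

NUM_FEATURES = 32  # assumed size of a per-window heart-rhythm feature vector
CLASSES = ["fear", "disgust", "joy", "relaxation"]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(len(CLASSES), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random placeholder data standing in for labeled feature windows.
X = np.random.rand(512, NUM_FEATURES).astype("float32")
y = np.random.randint(0, len(CLASSES), size=512)
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
```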

2. Simulating Empathy and Emotion

Biomarkers that indicate a person's emotional state, such as agitation, sadness, or giddiness, can now be measured by AI. Detecting human emotion is difficult, but companies with large enough datasets are developing accurate models. The Amazon Rekognition API uses facial recognition and physical appearance to infer someone's emotions. Replika evaluates voice and text with AI and then mirrors the user over time. Affectiva Human Perception AI uses speech analytics, computer vision, and deep learning to analyze complex human states. The automotive industry, for example, employs Affectiva's technology to detect a driver's emotional state, such as sleepiness or road rage, and offer real-time suggestions to help them improve their driving.
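
As a concrete example of what inferring emotion from appearance looks like in practice, here is a minimal sketch of calling Amazon Rekognition's detect_faces API through boto3; the bucket and image names are placeholders.

```python
# Minimal sketch: ask Amazon Rekognition for emotion estimates on a face photo.
# The S3 bucket and object names below are placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.detect_faces(
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "caller.jpg"}},
    Attributes=["ALL"],  # include emotion estimates, not just bounding boxes
)

for face in response["FaceDetails"]:
    # Emotions come back as a list of {Type, Confidence} entries.
    top = max(face["Emotions"], key=lambda e: e["Confidence"])
    print(f"Most likely emotion: {top['Type']} ({top['Confidence']:.1f}% confidence)")
```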

3. Theory of Mind Models

The Loving AI and Hanson Robotics research teams are teaching machines unconditional love, active listening, and empathy. In the future, machines will be able to convincingly display human emotions such as love, happiness, fear, and sadness, which raises the question of what constitutes genuine emotion. The ability to imagine others' mental states is known as theory of mind, and it has long been thought to be a trait unique to humans and certain primates. AI researchers are now working on teaching machines to build their own theory of mind models.

Existing AI therapy applications, such as Woebot, a relational agent for mental health, could benefit from this technology. By designing machines to respond with empathy and concern, these tools could eventually end up in hospitals, schools, and prisons, providing emotional support robots to patients, students, and inmates. According to health insurer Cigna, the rate of loneliness in the United States has doubled in the last 50 years, and people report feeling more isolated in our increasingly connected world. Governments facing a massive mental health crisis, such as South Korea's, may turn to emotional support robots to address the problem at scale.

4. Spotting Fakes

Researchers recently demonstrated how artificial intelligence can compose text so well that humans can't tell a machine wrote it. OpenAI demonstrated why this is problematic, from mass-producing suggestive social media posts and fake reviews to forging documents attributed to world leaders. Even when humans can't spot the fake, AI can detect when a machine generated the text.

This is because machine-written text relies on statistical patterns and shows little linguistic variation. The Giant Language Model Test Room (GLTR), developed by researchers at the MIT-IBM Watson AI Lab and Harvard University, flags words that a language model would be most likely to predict next, since machine-generated text tends to stick to those high-probability choices. This technology could help detect forgery, intentional falsification of records, email phishing campaigns, and corporate espionage.
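
As a rough sketch of the idea behind GLTR-style detection, the snippet below uses GPT-2 from the Hugging Face transformers library (not GLTR's own code) to measure how often each token in a passage falls within the model's top-k predictions; machine-generated text tends to score noticeably higher than human writing.

```python
# Rough sketch of GLTR-style detection: score a passage by how often each token
# was among a language model's top-k most likely next-word predictions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def fraction_in_top_k(text: str, k: int = 10) -> float:
    ids = tokenizer(text, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        logits = model(ids).logits              # (1, seq_len, vocab)
    # The prediction for token i comes from position i - 1.
    top_k = logits[0, :-1].topk(k, dim=-1).indices   # (seq_len - 1, k)
    actual = ids[0, 1:].unsqueeze(-1)                # (seq_len - 1, 1)
    hits = (top_k == actual).any(dim=-1).float()
    return hits.mean().item()

print(fraction_in_top_k("The quick brown fox jumps over the lazy dog."))
```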

5. Consumer-grade AI Applications

Amazon Web Services (AWS), Azure, and Google Cloud are making low-code and no-code offerings available to everyday people who want to build their own AI applications and deploy them quickly on a website. We are seeing a shift away from the highly technical AI tools professional researchers use and toward lighter, more user-friendly apps aimed at tech-savvy consumers. Non-experts can now build and deploy predictive models using new automated machine learning platforms. Providers hope that, just as we use Microsoft Office and Google Docs today, we will soon use a variety of AI applications as part of our daily work.
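
The hosted platforms hide the details behind a point-and-click interface, but the core idea of automated model selection, trying several candidate models and keeping the best one, can be sketched by hand in a few lines of scikit-learn; the dataset below is just a placeholder.

```python
# Toy, hand-rolled sketch of the automated model selection that low-code AutoML
# platforms wrap in a UI. Uses scikit-learn's built-in iris dataset as placeholder data.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100),
    "k_nearest_neighbors": KNeighborsClassifier(),
}

# Evaluate each candidate with 5-fold cross-validation and keep the best.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"Best model: {best} (mean accuracy {scores[best]:.3f})")
```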

6. Ubiquitous Digital Assistants Get Smarter

Digital assistants (DAs), such as Siri, Alexa, Google Assistant, and Alibaba's Tin Mo, use semantic and natural language processing, along with our data, to anticipate what we want or need to do next, sometimes even before we ask. Alibaba's cutting-edge DA can not only interact with real people but also handle interruptions and open-ended responses with ease.

Like Google Assistant's Duplex, Tin Mo can make calls for you, but it also understands your intent: if you try to schedule an appointment and mention that you usually commute in the morning, the system assumes you won't be available at that time. A 2017 Future Today Institute study predicted that nearly half of Americans would own and use a digital assistant by 2020. (Around 62 percent of Americans now use digital assistants.) Amazon and Google dominate the smart speaker market, but digital assistants can be found in many other places. Thousands of applications and gadgets now track and respond to DAs. News organizations, entertainment companies, marketers, credit card companies, banks, local governments, political campaigns, and many others can use DAs to both surface and deliver critical information.

7. Deepfakes for Fun

Wombo is a lip-syncing app that turns any photo of someone into a video of them singing. MyHeritage brings old photos to life. Faceswap is a free, open-source deepfake app powered by TensorFlow, Keras, and Python. Deep Art Effects creates stylized art from images using desktop and mobile apps. Reface is a face-swapping app that morphs your face onto celebrity bodies and generates GIFs for social media sharing. Jiggy is a deepfake app that can make anyone dance. For the time being, they all produce images and GIFs that clearly look manipulated, but with the technology becoming so accessible, how long before we can't tell real from fake?

8. Personal Digital Twins

Several startups are developing programmable platforms that learn from you and represent you online as a personal digital twin. The annual Spring Festival Gala broadcast by China's state broadcaster (CCTV) in 2021 featured performances by synthesized celebrities. In front of an estimated billion viewers, the AI copies mimicked their human counterparts without pre-scripted behaviors, speeches, or routines. Replika, meanwhile, is a programmable digital twin that you can give to your friends. Molly, a Y Combinator-backed startup, uses text to answer questions. Digital twins for professionals in various fields, including health and education, may be available shortly.