Over the last 50 years, machine learning (ML) has evolved from the pursuit of a small group of computer scientists curious whether computers could learn to play games into a broad discipline. It has produced fundamental statistical-computational theories of learning processes, along with learning algorithms that are now widely used in commercial systems for data mining, image and speech recognition, computer vision, recommendation, self-driving, virtual personal assistants, and medical diagnosis.
Today, machine learning is attempting to answer two intertwined questions: How can we build computer systems that improve themselves over time, and what are the fundamental laws that govern all learning processes? These questions encompass a wide range of learning tasks: how to design autonomous mobile robots that learn to navigate from their own experience, how to mine historical medical records to learn which patients will respond best to which treatments, and how to build search engines that automatically customize to their users' interests.
Machine learning sits at the natural intersection of computer science and statistics. The defining question of computer science is, "How can we build machines that solve problems, and which problems are inherently tractable or intractable?" Statistics, meanwhile, asks, "What can be inferred from data plus a set of modeling assumptions, and with what reliability?" The defining question of machine learning builds on both, but it is a distinct question.
While computer science has focused primarily on how to program computers manually, machine learning is concerned with getting computers to program themselves. And whereas statistics has focused primarily on what conclusions can be inferred from data, machine learning adds questions about which computational architectures and algorithms can be used to capture, store, index, retrieve, and merge these data most effectively, and about how multiple learning subtasks can be orchestrated within a larger system.
This post will discuss some of the fundamental questions we need to address in the long term. Answers to these questions will hold the potential to significantly change the face of machine learning over the coming decade.
1. Can we build never-ending learners?
Until now, the vast majority of machine learning work has involved running an algorithm on a specific data set, then setting the learner aside and using the result. In humans and other animals, by contrast, learning is a continuous process: the agent acquires many skills, often in sequence, and applies this knowledge in highly synergistic ways. Why not build machine learners that learn in the same cumulative manner, becoming increasingly competent rather than plateauing?
For example, a robot that spends months or years in the same office building should acquire a range of skills, starting with simple tasks (like recognizing objects in the dark end of the hallway) and progressing to harder problems that build on what it has already learned (e.g., where to look first for the missing recycling container). Similarly, a program that learns to read the web might master a graded set of skills, starting with basic abilities like recognizing names of people and places and progressing to extracting complex relational information spread across multiple sentences and web pages. Self-supervised learning and the design of an appropriate graded curriculum are two key research topics here.
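To make the cumulative-learning idea concrete, here is a minimal sketch of a learner working through a graded curriculum, in which competence on easy skills boosts progress on harder ones. The learner class, the skill names, and the update rule are all invented for illustration, not a real continual-learning algorithm:

```python
class CumulativeLearner:
    """Toy cumulative learner; skills and update rule are illustrative."""

    def __init__(self):
        self.skills = {}  # skill name -> competence in [0, 1]

    def competence(self, skill):
        return self.skills.get(skill, 0.0)

    def practice(self, skill, prerequisites=()):
        # progress on a hard skill is gated by mastery of its prerequisites
        boost = 1.0
        for p in prerequisites:
            boost *= self.competence(p)
        self.skills[skill] = self.competence(skill) + \
            0.5 * boost * (1.0 - self.competence(skill))

curriculum = [                       # graded: easy skills come first
    ("recognize_objects", ()),
    ("navigate_hallway", ("recognize_objects",)),
    ("find_recycling_bin", ("recognize_objects", "navigate_hallway")),
]

learner = CumulativeLearner()
for _ in range(10):                  # repeated passes over the curriculum
    for skill, prereqs in curriculum:
        learner.practice(skill, prereqs)
```

The point of the sketch is only the shape of the loop: competence accumulates across passes instead of being reset, and hard skills become learnable once their prerequisites are mastered.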
2. Can ML theories and algorithms help explain human learning?
Machine learning theories and algorithms have recently proved useful for understanding aspects of human and animal learning. For example, reinforcement learning algorithms and theories predict the activity of dopaminergic neurons in animals during reward-based learning. Machine learning algorithms for discovering sparse representations of naturally occurring images also predict surprisingly well the types of visual features found in animals' early visual cortex. Conversely, theories of animal learning take into account factors that machine learning has yet to consider, such as motivation, fear, urgency, forgetting, and learning over multiple time scales. There is great room for cross-fertilization here: the potential to develop a general theory of learning processes that applies to both animals and machines, with implications for better teaching strategies.
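The reward-prediction link can be illustrated with the classic temporal-difference (TD) learning rule, whose prediction error is the quantity reported to mirror dopaminergic firing. This is a minimal tabular sketch on a two-step chain with illustrative parameters, not a model of any specific experiment:

```python
# TD(0) on a two-step chain: cue state -> pre-reward state -> reward of 1.
# The TD error (delta) is the signal whose time course matches
# dopaminergic firing. Learning rate and episode count are illustrative.
alpha, gamma = 0.2, 1.0
V = [0.0, 0.0]                       # V[0] = cue, V[1] = pre-reward state
deltas = []
for episode in range(300):
    d0 = 0.0 + gamma * V[1] - V[0]   # first step: no reward yet
    V[0] += alpha * d0
    d1 = 1.0 - V[1]                  # second step: reward of 1 arrives
    V[1] += alpha * d1
    deltas.append((d0, d1))
# Early in training the error fires at reward delivery; once the reward
# is fully predicted, the error there vanishes -- the hallmark pattern
# observed in dopamine neurons.
```

Note how the surprise signal migrates: `d1` starts at 1.0 and decays toward zero as `V[1]` learns to predict the reward.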
3. Can we design programming languages containing machine learning primitives?
Is it possible for a new generation of programming languages to directly support writing programs that learn? In many current applications, standard machine learning algorithms are combined with hand-coded software to create the final program. Why not design a language that lets you write programs in which some subroutines are hand-coded while others are marked "to be learned"? In such a language, the programmer would declare the inputs and outputs of each "to be learned" subroutine and then select a learning algorithm from the primitives the language provides. New research issues arise here, including designing language constructs for declaring what training experience should be supplied to each "to be learned" subroutine, and when, and designing safeguards against arbitrary changes to program behavior.
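As a rough illustration of what such a primitive might feel like, here is a sketch in ordinary Python using a decorator to mark a subroutine as "to be learned." The decorator, the trivial nearest-neighbour "learning algorithm," and the `fahrenheit` example are all hypothetical, standing in for real language-level support:

```python
def to_be_learned(func):
    """Mark a subroutine as learned from examples rather than hand-coded."""
    examples = []                     # (input, output) training experience

    def learned(x):
        if not examples:
            return func(x)            # fall back to the hand-coded stub
        # placeholder learning algorithm: nearest-neighbour lookup
        return min(examples, key=lambda ex: abs(ex[0] - x))[1]

    learned.train = examples.append   # how training experience is supplied
    return learned

@to_be_learned
def fahrenheit(celsius):              # body is only a stub; I/O is declared
    return 0.0

# the programmer declares training experience instead of writing the body
for c, f in [(0, 32.0), (100, 212.0), (37, 98.6)]:
    fahrenheit.train((c, f))
```

A real language primitive would go further than a decorator can: the compiler could check declared input/output types, schedule when training experience is gathered, and enforce the safeguards mentioned above.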
4. Will computer perception merge with machine learning?
Given the increasing use of ML for state-of-the-art computer vision, speech recognition, and other forms of computer perception, can we develop a general theory of perception grounded in learning processes? One intriguing opportunity here is to incorporate multiple sensory modalities (e.g., vision, sound, touch), providing a setting in which self-supervised learning can be used to predict one sensory experience from the others. Researchers in developmental psychology and education have found that learning can be more effective when people receive multiple input modalities, and work on co-training methods in machine learning suggests the same.
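The co-training idea can be sketched as follows: each example is seen through two redundant views (stand-ins for two sensory modalities), and a classifier trained on one view labels unlabeled examples for the other. The data generator, the threshold classifiers, and all parameters are illustrative:

```python
import random

random.seed(0)

def make_example(label):
    # each view is an independently noisy "measurement" of the label
    v1 = label + random.gauss(0, 0.3)
    v2 = label + random.gauss(0, 0.3)
    return (v1, v2, label)

# a few labeled examples, plus a large unlabeled pool
labeled = [make_example(0), make_example(0), make_example(1), make_example(1)]
unlabeled = [make_example(random.choice([0, 1])) for _ in range(100)]

def train_threshold(pairs):
    """1-D classifier: threshold halfway between the two class means."""
    zeros = [x for x, y in pairs if y == 0]
    ones = [x for x, y in pairs if y == 1]
    return (sum(zeros) / len(zeros) + sum(ones) / len(ones)) / 2

for _ in range(5):  # co-training rounds
    t1 = train_threshold([(v1, y) for v1, _, y in labeled])
    t2 = train_threshold([(v2, y) for _, v2, y in labeled])
    # the view-1 classifier labels fresh examples for the view-2 learner
    for v1, v2, _ in unlabeled[:10]:
        labeled.append((v1, v2, int(v1 > t1)))
    unlabeled = unlabeled[10:]

# evaluate the view-2 classifier on the remaining unlabeled pool
correct = sum(int(v2 > t2) == y for _, v2, y in unlabeled)
```

The mechanism, not the toy data, is the point: because the two views are redundant, confident predictions from one view serve as free supervision for the other, which is exactly the leverage a multimodal perceptual system could exploit.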
The questions above are among those that will shape machine learning over the coming decade. While it is impossible to predict the future, further machine learning research will undoubtedly yield more powerful learning methods, along with clearer answers about where and when the resulting technology should be used.