Machine Learning: The complete history in a timeline

Machine learning is a tech term we hear everywhere. One of the most notable trends in technology today, machine learning algorithms, based on mathematical models, enable computer systems to recognize patterns in data, learn directly from them, and perform complex tasks intelligently, rather than following pre-programmed rules or explicit instructions.

By applying machine learning techniques, companies are gaining significant competitive and financial advantages in delivering better customer experiences and reacting more swiftly to market shifts. Machine learning is widely used today in web search, spam filters, recommender systems, ad placement, credit scoring, fraud detection, stock trading, drug design, and many other applications.

Advantages of machine learning

There are many benefits businesses gain from machine learning.

  • Quickly discover specific trends, patterns, and implicit relationships in vast, complex datasets
  • Learn and make predictions without human intervention
  • Continuously improve in accuracy, efficiency, and speed
  • Handle multidimensional problems and multivariate data well
  • Help businesses make smarter, faster decisions in real time
  • Reduce certain kinds of bias in human decision-making
  • Automate and streamline predictable, repetitive business processes
  • Make better use of data, both structured and unstructured

Now, let’s take a quick trip through the origins of machine learning and its most important milestones.

18th century — Development of statistical methods: Several vital concepts in machine learning derive from probability theory and statistics, and they trace back to the 18th century. In 1763, a paper by English statistician Thomas Bayes was published posthumously, setting out a mathematical theorem for probability. It came to be known as Bayes’ Theorem and remains a central concept in some modern approaches to machine learning.
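In modern notation, Bayes’ Theorem updates a prior belief P(A) into a posterior P(A|B) after observing evidence B: P(A|B) = P(B|A) · P(A) / P(B). As a quick illustration of how this underpins tasks like spam filtering, here is a minimal sketch; all the probabilities below are made-up numbers for the example, not real statistics:

```python
# Bayes' Theorem: P(A|B) = P(B|A) * P(A) / P(B)
# Toy question: how likely is an email to be spam, given it contains "offer"?
# All numbers are illustrative assumptions.

p_spam = 0.2                 # prior: P(spam)
p_offer_given_spam = 0.5     # likelihood: P("offer" | spam)
p_offer_given_ham = 0.05     # likelihood: P("offer" | not spam)

# Law of total probability: P("offer") summed over both hypotheses
p_offer = p_offer_given_spam * p_spam + p_offer_given_ham * (1 - p_spam)

# Posterior: P(spam | "offer")
p_spam_given_offer = p_offer_given_spam * p_spam / p_offer
print(round(p_spam_given_offer, 3))  # seeing "offer" raises 0.2 to ~0.714
```

The same update rule, applied over many words at once, is the core of the naive Bayes classifiers used in early spam filters.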

1950 — The Turing Test: English mathematician Alan Turing’s papers in the 1940s were full of ideas on machine intelligence. “Can machines think?” he asked. In 1950, he suggested a test for machine intelligence, later known as the Turing Test, in which a machine is called “intelligent” if its responses to questions can convince a human that it, too, is human.

1952 — Game of Checkers: In 1952, researcher Arthur Samuel created an early learning machine, capable of learning to play checkers. It used annotated guides by human experts and played against itself to learn to distinguish good moves from bad.

1956 — The Dartmouth Workshop: The term ‘artificial intelligence’ was born during the Dartmouth Workshop in 1956, which is widely considered the founding event of artificial intelligence as a field. The workshop lasted six to eight weeks and was attended by mathematicians and scientists, including John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon.

1957 — The Perceptron: Noted American psychologist Frank Rosenblatt’s Perceptron was an early attempt to create a neural network, storing its adjustable weights in rotary resistors (potentiometers) driven by electric motors. The machine could take an input (such as the pixels of an image) and produce an output (such as a label).
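In software terms, a perceptron is just a weighted sum followed by a threshold, with the weights nudged toward the correct answer after each mistake. Here is a minimal sketch of that learning loop; the toy dataset, learning rate, and epoch count are illustrative choices, not Rosenblatt’s originals:

```python
# Minimal perceptron: learns a linear boundary between two classes.
# Inputs x are 2-D feature vectors; labels y are +1 or -1. Data is made up.

def train_perceptron(data, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # weights
    b = 0.0          # bias
    for _ in range(epochs):
        for x, y in data:
            # Predict: sign of the weighted sum plus bias
            activation = w[0] * x[0] + w[1] * x[1] + b
            pred = 1 if activation >= 0 else -1
            # On a mistake, shift the boundary toward the correct side
            if pred != y:
                w[0] += lr * y * x[0]
                w[1] += lr * y * x[1]
                b += lr * y
    return w, b

# Toy linearly separable data: class +1 sits up and to the right
data = [((0.0, 0.0), -1), ((1.0, 1.0), 1), ((0.2, 0.1), -1), ((0.9, 0.8), 1)]
w, b = train_perceptron(data)
print(all((1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1) == y
          for x, y in data))  # True: all training points classified correctly
```

On linearly separable data like this, the perceptron is guaranteed to converge; Minsky and Papert later showed it cannot represent non-linearly-separable functions such as XOR, a limitation that contributed to waning interest in neural networks.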

1967 — Nearest neighbor algorithm: The Nearest Neighbor (NN) rule is a classic in pattern recognition, which appeared in several research papers in the 1960s, especially in an article written by T. Cover and P. Hart in 1967. The rule classifies a new observation by assigning it the label of the closest previously stored example. (A similarly named nearest-neighbor heuristic was also applied to route planning, building a short traveling-salesman tour by starting at a random city and repeatedly visiting the closest unvisited one.)
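In its classification form, the nearest-neighbor rule fits in a few lines of code; the 2-D points and labels below are made up for illustration:

```python
import math

# 1-nearest-neighbor classification: a query point gets the label of the
# closest training example, measured by Euclidean distance.

def nearest_neighbor(train, query):
    # train: list of ((x, y), label) pairs
    return min(train, key=lambda p: math.dist(p[0], query))[1]

train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"),
         ((5.0, 5.0), "B"), ((4.8, 5.2), "B")]
print(nearest_neighbor(train, (0.3, 0.1)))  # "A" (closest to the A cluster)
print(nearest_neighbor(train, (5.1, 4.9)))  # "B" (closest to the B cluster)
```

Despite its simplicity, Cover and Hart showed the rule has a remarkable guarantee: with enough data, its error rate is at most twice that of the best possible classifier.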

1973 — The Lighthill report and the AI winter: The UK Science Research Council published the Lighthill report by James Lighthill in 1973, presenting a very pessimistic forecast for the development of core aspects of AI research. It stated that “In no part of the field have the discoveries made so far produced the major impact that was then promised.” As a result, the British government cut the funding for AI research in all but two universities. This period of reduced funding and interest is known as an AI winter.

1979 — Stanford Cart: Students at Stanford University built a robot called the Cart, radio-linked to a large mainframe computer, which could navigate obstacles in a room on its own. Though crossing the entire room took five hours due to barely adequate maps and blunders, the invention was state of the art at the time.

1981 — Explanation Based Learning (EBL): Gerald DeJong introduced the concept of Explanation Based Learning (EBL), in which a computer analyses training data and creates a general rule it can follow by discarding unimportant data.

1985 — NETtalk: Terry Sejnowski invented NETtalk, a program that learns to pronounce written English text by being shown text as input and matching phonetic transcriptions for comparison. The intent was to construct simplified models that might shed light on human learning.

1986 — Parallel Distributed Processing and neural network models: David Rumelhart and James McClelland published Parallel Distributed Processing, which advanced the use of neural network models for machine learning.

1992 — Playing backgammon: Researcher Gerald Tesauro created TD-Gammon, a program based on an artificial neural network that was capable of playing backgammon at a level matching top human players.

1997 — Deep Blue: IBM’s Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov. Deep Blue relied on the brute-force computing power of the 1990s to perform large-scale searches of potential moves and select the best one.

2006 — Deep Learning: Geoffrey Hinton helped popularize the term “deep learning” to describe new algorithms that let computers distinguish objects and text in images and videos.

2010 — Kinect: Microsoft developed the motion-sensing input device named Kinect, which can track 20 human body joints at a rate of 30 times per second. It allowed people to interact with the computer through movements and gestures.

2011 — Watson and Google Brain: IBM’s Watson won a game of the US quiz show Jeopardy! against two of its champions. In the same year, the Google Brain project was started, developing large deep neural networks that could learn to discover and categorize objects from raw data.

2012 — ImageNet Classification and computer vision: The year saw the publication of an influential research paper by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, describing a model (now known as AlexNet) that dramatically reduced the error rate in image recognition systems. Meanwhile, Google’s X Lab developed a machine learning algorithm capable of autonomously browsing YouTube videos to identify those containing cats.

2014 — DeepFace: Facebook developed DeepFace, a software algorithm that can recognize and verify individuals in photos with near-human accuracy.

2015 — Amazon Machine Learning: Amazon Web Services launched Amazon Machine Learning, a managed service that analyzes users’ historical data to find patterns and deploy predictive models. In the same year, Microsoft released the Distributed Machine Learning Toolkit, which enables the efficient distribution of machine learning problems across multiple computers.

2016 — AlphaGo: AlphaGo, created by researchers at Google DeepMind to play the ancient Chinese game of Go, won four out of five matches against Lee Sedol, one of the world’s strongest Go players over the preceding decade.

2017 — Libratus and DeepStack: Researchers at Carnegie Mellon University created a system named Libratus, which defeated four top players at no-limit Texas Hold ’em after 20 days of play in 2017. Researchers at the University of Alberta also reported similar success with their system, DeepStack.
