The term “Machine Learning” was coined by Arthur Samuel in 1959 while he was at IBM.
Brief History of ML
| Date | Details |
| --- | --- |
| 1950 | Alan Turing creates the “Turing Test” to determine if a computer has real intelligence. To pass the test, a computer must be able to fool a human into believing it is also human. |
| 1952 | Arthur Samuel writes the first computer learning program. The program plays the game of checkers, and the IBM computer improves at the game the more it plays, studying which moves make up winning strategies and incorporating those moves into its program. |
| 1957 | Frank Rosenblatt designs the first neural network for computers (the perceptron). |
| 1967 | The “nearest neighbor” algorithm is written, allowing computers to begin using very basic pattern recognition (a minimal pattern-recognition sketch follows this table). It can also be used to map a route for traveling salesmen, starting at a random city but ensuring they visit all cities during a short tour. |
| 1979 | Students at Stanford University invent the “Stanford Cart”, which can navigate obstacles in a room on its own. |
| 1981 | Gerald Dejong introduces the concept of Explanation Based Learning (EBL), in which a computer analyses training data and creates a general rule it can follow by discarding unimportant data. |
| 1985 | Terry Sejnowski invents NetTalk, which learns to pronounce words the same way a baby does. |
| 1997 | IBM’s Deep Blue beats the world champion at chess. |
| 2006 | Geoffrey Hinton coins the term “deep learning” to explain new algorithms that let computers “see” and distinguish objects and text in images and videos. |
| 2008 | DJ Patil and Jeff Hammerbacher coin the term “Data Scientist”. |
| 2011 | IBM’s Watson beats its human competitors at Jeopardy. |
| 2012 | Google’s X Lab develops a machine learning algorithm that is able to autonomously browse YouTube videos and identify the videos that contain cats. |
| 2014 | Facebook develops DeepFace, a software algorithm that is able to recognize or verify individuals in photos as well as humans can. |
| 2016 | Google’s artificial intelligence algorithm beats a professional player at the Chinese board game Go, which is considered the world’s most complex board game and is many times harder than chess. |
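To make the 1967 nearest-neighbor entry concrete, here is a minimal 1-nearest-neighbor sketch in Python. The training points, labels, and query point are made-up illustrative values, not from the source; a real application would typically use a library such as scikit-learn.

```python
import math

# Labeled training points (made-up 2-D examples for illustration only).
training_data = [
    ((1.0, 1.0), "A"),
    ((1.5, 2.0), "A"),
    ((5.0, 5.0), "B"),
    ((6.0, 4.5), "B"),
]

def nearest_neighbor(query, data):
    """Return the label of the training point closest to `query` (1-NN)."""
    best_label, best_dist = None, float("inf")
    for point, label in data:
        dist = math.dist(query, point)  # Euclidean distance
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

print(nearest_neighbor((1.2, 1.4), training_data))  # -> "A"
```

The query is simply assigned the label of whichever stored example lies closest to it, which is the “very basic pattern recognition” the table refers to.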
According to Michael I. Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics. He also suggested the term “data science” as a placeholder for the overall field. Below is one of the most famous Venn diagrams for Data Science.
How is ML different from AI?
In the early days of AI, an increasing emphasis on the logical, knowledge-based approach caused a rift between AI and machine learning. By 1980, expert systems had come to dominate AI, and statistics was out of favor.
Machine learning, reorganized as a separate field, started to flourish in the 1990s. The field changed its goal from achieving artificial intelligence to tackling solvable problems of a practical nature. It shifted focus away from the symbolic approaches it had inherited from AI, and toward methods and models borrowed from statistics and probability theory.[11] It also benefited from the increasing availability of digitized information, and the ability to distribute it via the Internet.
Here is another famous Venn diagram.
Hal Varian, Google’s chief economist, predicted in 2008 that the job of statistician would become the “sexiest” around. Data, he explained, are widely available; what is scarce is the ability to extract wisdom from them. Data are becoming the new raw material of business: an economic input almost on a par with capital and labour.
Machine Learning is also the name of a peer-reviewed scientific journal, published since 1986.
Further reading