30 October 2018
Learning from experience. How many times have we heard this phrase addressed to us: a warning, a spur to improve our choices by reflecting on our experiences and activities. Here, at the base of the concept of Machine Learning, lies the human learning process.
Human beings (not all of them) are often able to learn from past experiences. Computers, software and their applications, viewed in the traditional sense, are not. They need to be told what to do every time: they must be programmed, and they will follow the instructions given.
The key question is: "Are we able, today, to make computers sensitive to past experiences?" The answer is yes, we can. This brief reflection is the basis of automatic learning: we have entered the world of Machine Learning.
For computers, fast, immediate, real-time experience takes the form of data. How can we make computers "sensitive" to data they have already encountered? Today we can, through algorithms that are simple and approachable. Nothing to be wary of. Over time, the concept of automatic learning has reached a less abstract and more tangible level.
1959: Arthur Samuel, an American computer scientist and pioneer in the field of Artificial Intelligence (AI), coined the term "Machine Learning", identifying two distinct approaches:
A neural-network approach, which leads to the development of general-purpose machines that learn automatically through a randomly connected switching network, following a learning routine based on reward and punishment (reinforcement learning);
A specific approach that, through a highly organized network, leads to the development of machines that learn automatically but only for specific activities. This procedure reaches maximum computational efficiency only through supervision and reprogramming.
Arthur Samuel, putting his insights into practice, successfully created the first draughts (checkers) program based on automatic learning, giving an early demonstration of the fundamental concepts of AI (which would also find great application in video games) and Machine Learning.
April 30, 1986: Tom M. Mitchell, an American professor and computer scientist, published Machine Learning, the forerunner of all manuals on the "modern" approach to the subject, in which he writes:
«A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E».
Mitchell thus reformulates Samuel's "learning by reinforcement", insisting on the assumption that a program learns if its performance improves after performing a task.
Let's look at some tangible examples. (Apologies for any inaccuracy or lack of detail; these are simply illustrative examples.)
Real estate market
Let's suppose that we are examining the real estate market and that our task is to predict the price of a particular house, knowing its size. As previous data, we have a small house that costs 70,000 euros and a large house that costs 160,000 euros, and we must estimate the price of a medium-sized house.
How to do it best?
Let's place the size of the houses in square meters on the abscissa and the price of the properties in euros on the ordinate. In this graph we plot the prices of the houses we know, adding the prices of past examples (recorded for houses of other sizes). Drawing the line that best fits the available data gives the best possible answer for the price to set for the unknown house (the linear regression method).
How can a computer draw the best line? By reducing the margin of error through repeated attempts, with the method known as "gradient descent", which searches for the line that best fits all the points, for example the line that minimizes the sum of the distances between the existing points and the line to be drawn. In practice, software does nothing more than apply this method (further minimizing errors through the "least squares" method).
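The idea above can be sketched in a few lines of Python. The two prices (70,000 and 160,000 euros) come from the example; the house sizes of 50 and 120 square meters, the 85 m² "medium" house, the learning rate and the number of steps are all assumptions made purely for this illustration.

```python
sizes = [50.0, 120.0]         # square meters (assumed for the sketch)
prices = [70000.0, 160000.0]  # euros, from the example above

m, b = 0.0, 0.0               # slope and intercept of the line y = m*x + b
lr = 1e-4                     # learning rate (step size)

for _ in range(200_000):
    # Gradient of the mean squared error with respect to m and b
    grad_m = sum(2 * (m * x + b - y) * x for x, y in zip(sizes, prices)) / len(sizes)
    grad_b = sum(2 * (m * x + b - y) for x, y in zip(sizes, prices)) / len(sizes)
    # Step downhill: nudge the line toward a smaller error
    m -= lr * grad_m
    b -= lr * grad_b

predicted = m * 85 + b  # estimate for a medium-sized (85 m^2) house
print(f"Estimated price of an 85 m^2 house: {predicted:.0f} euros")
```

With only these two points, the exact best-fit line passes through both and predicts 115,000 euros at 85 m²; gradient descent does not solve for that line directly but converges toward it step by step, which is precisely its appeal when there are thousands of points instead of two.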
Let's proceed with another example, closer to the experience of (almost) every one of us. How do we intercept the dozens of spam emails that arrive, every day, in our mailbox? Let's say we have already acted in the past: out of 100 incoming emails, we have flagged 25 of them as spam.
Let's focus on the 25 emails we flagged as spam.
Of these 25, 20 contain the word "cheap". Combined with how rarely "cheap" appears in legitimate mail, this produces an 80/20 split: of the emails containing the word "cheap", 80% are spam and 20% are not.
By combining other "filters" of this type, we can classify spam emails with ever-greater accuracy.
We are applying another Machine Learning algorithm, known as the "Naive Bayes" algorithm, still used today in many applications.
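The arithmetic behind the example is Bayes' theorem, and it can be written out directly. This minimal sketch reuses the figures above (25 spam out of 100 emails, "cheap" in 20 of the 25) plus one extra assumption not stated in the text: that "cheap" also appears in 5 of the 75 legitimate emails, which is what yields the 80/20 split.

```python
# Prior probabilities, from the 100 emails we already classified
p_spam = 25 / 100                 # P(spam)
p_ham = 75 / 100                  # P(not spam)

# Likelihood of seeing the word "cheap" in each class
p_cheap_given_spam = 20 / 25      # from the example above
p_cheap_given_ham = 5 / 75        # assumed for this illustration

# Total probability that an email contains "cheap"
p_cheap = p_cheap_given_spam * p_spam + p_cheap_given_ham * p_ham

# Bayes' theorem: probability that an email containing "cheap" is spam
p_spam_given_cheap = p_cheap_given_spam * p_spam / p_cheap

print(f"P(spam | 'cheap') = {p_spam_given_cheap:.0%}")  # the 80% of the split
```

The "naive" part of Naive Bayes is that, when several such filters are combined (other words, the sender, and so on), each clue is treated as independent of the others, so their probabilities can simply be multiplied together.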
These are just two examples of algorithms that enable a "machine" to learn automatically, using a set of previous data to arrive at an optimal choice.
Machine Learning (and AI) is essential for companies. From the real estate market to the mailbox, from Google to marketing, from health care to institutions, the fields of application cover every sector.
Computers that learn specific tasks without being explicitly programmed to do so, thanks to the recognition of patterns in the data, generate incredible profits for companies in every aspect of "production".
Let's think about this article.
Often, in order to write articles, we use search engines to verify information. By typing one or more keywords, Google and other search engines offer lists of results, the so-called Search Engine Results Pages (SERPs), produced by Machine Learning algorithms. These algorithms provide as output the information considered relevant to the search performed, and they can do so thanks to the analysis of the data and of the patterns, models and structures existing in the data themselves.
Learning from experience, we said. Often (not always) it allows us, human beings, to improve. For a "machine", learning from experience is synonymous with improvement, because a machine is less subject to the external variables that often affect a person's choices.
As early as 1950, the famous British mathematician Alan Turing asked himself, "Can machines think?" (in his celebrated article "Computing Machinery and Intelligence"). Not yet.
Can machines do what we, as thinking beings, can do? Yes.
Graphs and infographics from > https://www.youtube.com/watch?v=IpGxLWOIZy4