
Machine Learning: the first step

Understanding Machine Learning is quite easy. The idea is to train algorithms on large databases so that they can predict results from new data.

Let’s take a simple example: we want to predict the age of a tree from its diameter. Our database contains only three kinds of data: inputs (x, the tree diameter), outputs (y, the tree age) and parameters (a and b, which capture features such as the type of tree or the forest location…). These data are linked by a linear function, “y = ax + b”. By training on this database, a Machine Learning algorithm learns the correlation between x and y and finds accurate values for a and b. Once this training phase is complete, the computer can predict the right tree age (y) from any new diameter (x).
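
To make this concrete, here is a minimal sketch of the tree example in Python: “training” simply means finding the values of a and b that best fit the data, and “predicting” means reusing them on a new diameter. The numbers are invented purely for illustration.

```python
# A minimal sketch of the tree example: fit y = ax + b on known (diameter, age) pairs.
# All values below are made up for illustration.
import numpy as np

diameters = np.array([5.0, 12.0, 20.0, 31.0, 45.0])  # x: tree diameter (cm)
ages = np.array([4.0, 11.0, 19.0, 30.0, 44.0])        # y: known tree age (years)

# "Training": find the a and b that best fit the data.
a, b = np.polyfit(diameters, ages, deg=1)

# "Prediction": estimate the age of a tree we have never measured before.
new_diameter = 25.0
predicted_age = a * new_diameter + b
print(f"y = {a:.2f}x + {b:.2f} -> predicted age: {predicted_age:.1f} years")
```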

This is a deliberately over-simplified description, and things get more complicated when we talk about image recognition.

For a computer, a picture is millions of pixels: a lot of data to process and far too many inputs for such an algorithm. Researchers had to find a shortcut. The first solution was to define intermediate characteristics.

Imagine you want a computer to recognize a cat. First of all, a human has to define the main features of a cat: a round head, two pointed ears, a muzzle… Once these key features are defined, a well-trained neural network algorithm can analyze them and decide whether the picture shows a cat, with a sufficient level of accuracy.
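
As a rough sketch of this hand-crafted approach, imagine each image has already been reduced (by a human-designed pipeline) to a few feature scores: how round the head is, how pointed the ears are, whether a muzzle is present. A small neural network is then trained on those scores alone. The feature values and labels below are invented for illustration, and the tiny scikit-learn model is just a stand-in for a real, well-trained classifier.

```python
# "Basic" Machine Learning for image recognition:
# a human defines the features, the algorithm only learns how to combine them.
from sklearn.neural_network import MLPClassifier

# Each row: [head_roundness, ear_pointiness, has_muzzle] -- hand-crafted scores in [0, 1].
X = [
    [0.9, 0.8, 1.0],  # cat
    [0.8, 0.9, 1.0],  # cat
    [0.2, 0.1, 0.0],  # not a cat
    [0.3, 0.2, 0.5],  # not a cat (a dog, say)
]
y = [1, 1, 0, 0]      # 1 = cat, 0 = not a cat

model = MLPClassifier(hidden_layer_sizes=(4,), solver="lbfgs",
                      max_iter=2000, random_state=0).fit(X, y)

# A new image, already converted into the same three feature scores.
new_image = [[0.85, 0.9, 1.0]]
print("cat" if model.predict(new_image)[0] == 1 else "not a cat")
```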

daco illustration: basic Machine Learning

What if we took a more complex item?
For example, how would you describe a dress to a computer?

daco illustration: defining characteristics in Machine Learning

Here you reach the first limit of basic Machine Learning for image recognition: we are often unable to define discriminating characteristics that would bring recognition accuracy anywhere near 100%.


Deep Learning: see & learn without human intervention

In the 2000s, Fei-Fei Li, director of Stanford’s AI Lab and Vision Lab, had a key intuition: how do children learn object names? How are they able to recognize a cat or a dress? Parents do not teach this by listing characteristics, but by naming the object or animal every time their child sees one. They train kids with visual examples. Why couldn’t we do the same for computers?

However, two issues remained: database availability and computing power.

First, how can we get a large enough database to “teach computers how to see”? To tackle this issue, she and her team launched the ImageNet project in 2007. Collaborating with more than 50 thousand people in 180 countries, they created the biggest image database in the world in 2009: 15 million named and classified images covering 22,000 categories.

Computers can now train themselves on massive image databases and identify key features on their own, without human intervention. Like a 3-year-old child, a computer sees millions of named images and works out by itself the main characteristics of each item. These complex feature-extraction algorithms use “deep neural networks” and require thousands or even millions of nodes.
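
As a hedged sketch of what such a network looks like in code, here is a deliberately tiny convolutional network written with PyTorch, a small stand-in for the far deeper models used in practice. Unlike the cat example above, it receives raw pixels and adjusts its own feature detectors from labeled examples; the images and labels here are random placeholders.

```python
# A toy "deep" network that learns its own features from raw pixels (PyTorch).
# Real image-recognition models are far deeper and trained on millions of images.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Convolutional layers learn visual features (edges, textures, shapes)
        # directly from pixels -- no human-defined characteristics.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # A final layer turns the learned features into a class prediction.
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)              # (batch, 32, 16, 16) for 64x64 inputs
        return self.classifier(x.flatten(1))

model = TinyConvNet()
images = torch.randn(4, 3, 64, 64)        # 4 random 64x64 RGB "images"
labels = torch.tensor([0, 1, 0, 1])       # made-up labels (e.g. cat / not cat)

# One training step: the network updates its own feature detectors from examples.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
optimizer.step()
print("loss after one step:", loss.item())
```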

daco illustration: Deep Learning

This is just the beginning for Deep Learning: we have managed to make computers see like a 3-year-old child, “but the real challenge is ahead: How can we help our computer to go from 3 to 13-year-old kid and far beyond?” (Fei-Fei Li, TED conference)

We will talk more about neural networks in another article; follow @daco_io on Medium and Twitter to receive our latest articles!

Note Oct 11th: This article is now featured on TechCrunch