Tech analysts love trending topics. In fact, that’s their job: forecasting and analyzing trends. Some years ago we had “Big Data”, more recently “Machine Learning”, and now it’s the time of “Deep Learning”. So let’s dive in and try to understand what’s behind it and what impact it could have on our society.
Neural network algorithms are the main science behind Deep Learning. They are not new, but they became more popular in the mid-2000s after Geoffrey Hinton and Ruslan Salakhutdinov published a paper explaining how a many-layered feedforward neural network could be trained one layer at a time. The large-scale impact of Deep Learning in big tech companies began around 2010 with speech recognition.
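To give a feel for the “one layer at a time” idea, here is a minimal sketch in Python. Hinton and Salakhutdinov’s actual paper used restricted Boltzmann machines; this toy version swaps in a simpler tied-weight autoencoder per layer, and every name, size, and hyperparameter below is illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_layer(data, n_hidden, epochs=100, lr=0.5):
    """Train one layer as a tied-weight autoencoder: learn W so that
    sigmoid(sigmoid(data @ W) @ W.T) reconstructs the input."""
    W = rng.normal(0.0, 0.1, (data.shape[1], n_hidden))
    for _ in range(epochs):
        h = sigmoid(data @ W)             # encode
        recon = sigmoid(h @ W.T)          # decode (tied weights)
        d_r = (recon - data) * recon * (1.0 - recon)    # output delta
        grad_dec = d_r.T @ h                            # decoder path
        grad_enc = data.T @ (d_r @ W * h * (1.0 - h))   # encoder path
        W -= lr * (grad_enc + grad_dec) / len(data)
    return W

# Greedy stacking: each new layer trains on the codes of the one below it,
# so no gradient ever has to flow through the whole deep network at once.
X = rng.random((64, 16))        # toy "dataset": 64 samples, 16 features
weights, codes = [], X
for n_hidden in (8, 4):
    W = train_layer(codes, n_hidden)
    weights.append(W)
    codes = sigmoid(codes @ W)  # these codes feed the next layer

print([W.shape for W in weights])  # [(16, 8), (8, 4)]
```

The key design point is the loop at the bottom: each layer only ever solves a shallow problem, which is what made deep networks trainable before modern hardware and techniques arrived.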
It took around 30 years for the technique to become mainstream: computers were not powerful enough and companies didn’t have such large amounts of data. When the researcher Yann LeCun experimented with his first algorithms in the 1980s, it took him 3 days to run one! As you can see in the previous diagram, it’s only been 3 years since Deep Learning became mainstream. Indeed, in 2012 ImageNet, a popular challenge for scientists in the field of image recognition, was first won by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton thanks to Deep Learning. This result drew a lot of attention to the field in the tech sector.
The technology behind Deep Learning is neural networks stacked together into multiple layers. One of the challenges for the humans who implement them is to understand exactly what information each layer extracts. Each stack of neurons extracts higher-level information, so that at the end the network can recognize very complex patterns. Humans are sometimes skeptical of this model because, even though it’s based on well-known mathematical equations, we know little about why some models work.
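The “stacked layers” idea can be sketched in a few lines of Python. This is a generic forward pass, not any specific production network; the layer sizes, the ReLU activation, and the random weights are all illustrative assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass an input through each layer in turn; every layer's output
    feeds the next, so later layers see increasingly abstract features."""
    activations = [x]
    for W, b in layers:
        x = relu(x @ W + b)
        activations.append(x)
    return activations

rng = np.random.default_rng(1)
sizes = [784, 128, 64, 10]   # e.g. image pixels in, class scores out
layers = [(rng.normal(0.0, 0.1, (a, b)), np.zeros(b))
          for a, b in zip(sizes, sizes[1:])]

acts = forward(rng.random((1, 784)), layers)
print([a.shape for a in acts])  # [(1, 784), (1, 128), (1, 64), (1, 10)]
```

Inspecting the intermediate `activations` is exactly the hard part the paragraph describes: the shapes are easy to read off, but what each layer’s numbers *mean* is not.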
This is only the beginning. There are many challenges left to tackle in areas like NLP (Natural Language Processing) and understanding spoken language. One key here is context: when speech is limited to a narrow domain (e.g., a legal document or a food recipe), machines can interpret the meaning. For now, though, much of the nuance and complexity of human language remains difficult for machines (for instance, it’s very hard for a machine to understand a joke).
This is a big turn in history. Before neural networks, humans thought they were the best at designing code. Now they have to accept that machines can beat them even at writing algorithms: machines trained to recognize patterns with Deep Learning beat the old “rule-based” algorithms.
This video of a simple machine trained with the DeepMind algorithm is a very good illustration of the superior “intelligence” of the machine. The computer learns to win the game and, in the end, discovers tricks that nobody had found before. It’s no longer about brute-force algorithms, but about the replication of complex human behavior. For instance, the same DeepMind team (recently bought by Google) won the game of Go against the best European player, something no computer could do before Deep Learning.
A well-known application of Deep Learning is face recognition. Google Photos, for instance, is a very good example of this technology. It can even recognize your face from 20 years ago! To simplify, we could say that the first layer of neurons can recognize a circle, the second an iris, and the third an eye. If the computer has been trained well enough, it can recognize abstract entities like a face with high probability.
After video, speech, and translation, Google now uses Deep Learning for search, its core business. Ranking no longer relies only on human-designed algorithms (like the well-known PageRank): thanks to RankBrain, a Deep Learning algorithm, Google now achieves better accuracy and precision.
Of course, one of the trending topics in Deep Learning is the autonomous car. The National Highway Traffic Safety Administration said the Artificial Intelligence system piloting a self-driving car could be considered as the driver under federal law. This is a major step toward ultimately winning approval for autonomous vehicles on the roads.
Many tech companies have recently understood the benefits that new A.I. techniques can bring. Facebook, Google, Apple, Microsoft, IBM and many others are building Deep Learning teams to tackle these challenges.
Facebook hired Yann LeCun to head its new A.I. lab and, one year later, hired Vladimir Vapnik, a co-creator of the Vapnik–Chervonenkis theory of statistical learning. Apple recently bought three Deep Learning startups as well. Google, as we underlined before, hired an amazing crew including Geoffrey Hinton. Finally, Baidu hired Andrew Ng, one of the most famous teachers and scientists in Machine Learning, to head its new research lab.
The battle is starting, and we don’t know yet who will win this Deep Learning fight. The main question is how it will impact our daily lives. Will we become as powerful as James Bond, with a personal Moneypenny in our pocket (like Facebook’s virtual assistant “M”)? Will we all lose our jobs and be replaced by machines? Maybe both?
What kind of future will appear?
The super-intelligence of connected machines, which humans may not be able to fully understand, could become a potential threat tomorrow. Stephen Hawking, Bill Gates, and Elon Musk have warned us about it.
“I am in the camp that is concerned about super intelligence. First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned,” said Bill Gates.
Indeed, Deep Learning will certainly save many lives and reduce boring, repetitive tasks, but it could also have a huge negative impact on our society. As recent sci-fi movies have imagined, artificial (super-)intelligence could be used to destroy things or manipulate humans. One way to limit this potential threat is to open-source the code so that the whole community can inspect the algorithms and know the state of the art. TensorFlow and OpenAI are good examples of this idea.
“Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach. When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest,” reads the OpenAI founding manifesto.
One of the other consequences we fear most is the end of many jobs. Because every major technological innovation spreads across the whole economy, it’s certain that many sectors will be impacted by the exponential growth of such technologies. As the economist Joseph Schumpeter taught us, it will also probably create many jobs in other sectors (mainly services and on-demand work). But maybe “this time will be different” and we will need new social institutions to absorb the shock. New economic ideas like Basic Income could be an interesting way to soften the impact as Deep Learning spreads everywhere, and some institutions are already preparing to experiment with it.
Today, each AI is built with data from Internet sources like Google searches or Facebook feeds. But in the near future, each AI could be built with data from our personal devices. We don’t yet know which applications will emerge. We can be sure, as Andy Rubin, the co-founder of Android, stated, that Deep Learning will become easier and cheaper to implement, so that every piece of software or hardware will be able to run its own intelligent algorithms.
Deep Learning is on its way to becoming a commodity…
This article was contributed by a student at Holberton School and should be used for educational purposes only.