Artificial intelligence (AI) is the wave of the future. Artificial intelligence is still only a concept. Artificial intelligence is already part of our daily lives. All of these statements are true; it just depends on which kind of AI you are talking about.
The media have used the terms AI, machine learning, and deep learning to explain how Google DeepMind's AlphaGo algorithm defeated Korean champion Lee Sedol at the board game Go earlier this year. All three are part of the reason AlphaGo beat Lee Sedol, but they are not the same thing.
The easiest way to visualize the relationship among them is as a set of concentric circles: AI, the original and broadest idea, on the outside; machine learning, which came later, inside it; and deep learning, which is driving today's AI explosion, at the center.
From Bust to Boom
AI has been part of our imaginations and simmering in research labs since 1956, when a handful of computer scientists gathered at the Dartmouth Conferences and founded the field of AI. In the decades since, AI has alternately been hailed as the key to our civilization's brightest future and dismissed as a foolish idea. Until 2012, it was a bit of both.
AI has expanded rapidly in recent years, especially since 2015. Much of that is due to the wide availability of GPUs, which make parallel processing faster, cheaper, and more powerful than ever. It also has to do with the one-two punch of practically limitless storage and a flood of data of every kind, including photos, text, transactions, and mapping data (the whole big data movement).
Let's walk through how computer scientists took the field from the bust that lasted until 2012 to a boom in which hundreds of millions of people use AI-powered applications every day.
Artificial Intelligence: Human Intelligence Exhibited by Machines
Back in that summer of 1956, the dream of those AI pioneers was to build complex machines, enabled by the emerging computer, that possessed the same characteristics as human intelligence. This is the concept we call "general AI": fabulous machines that have all our senses (maybe even more), all our reason, and that think just as we do. You have seen these machines endlessly in the movies, as friend (C-3PO) and foe (the Terminator). For good reason, general AI machines have remained confined to movies and science fiction novels: we can't pull them off, at least not yet.
What we can do falls into the category of "narrow AI": technologies that perform specific tasks as well as, or better than, humans can. Image classification on Pinterest and face recognition on Facebook are examples of narrow AI.
Those are examples of narrow AI in practice. These technologies exhibit some facets of human intelligence. But how? Where does that intelligence come from? That brings us to the next circle: machine learning.
Machine Learning: An Approach to Achieving Artificial Intelligence
At its most basic, machine learning is the practice of parsing data, learning from it, and then making a determination or prediction about something in the world. Rather than hand-coding software routines with a specific set of instructions to accomplish a particular task, the machine is "trained" using large amounts of data and algorithms that give it the ability to learn how to perform the task.
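To make the contrast concrete, here is a minimal sketch of "training" versus hand-coding: a 1-nearest-neighbor classifier whose decision rule comes entirely from labeled examples rather than from explicit instructions. The data points and labels below are invented for illustration.

```python
# "Training" here is just memorizing labeled (features, label) pairs;
# the decision rule is never written out by a programmer.

def train(examples):
    """Store the labeled examples; this is the entire 'model'."""
    return list(examples)

def predict(model, point):
    """Predict by copying the label of the closest training example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(model, key=lambda ex: dist(ex[0], point))
    return nearest[1]

# Toy data: (width, height) of objects, labeled by a person.
labeled = [((1.0, 1.0), "small"), ((1.2, 0.9), "small"),
           ((5.0, 4.8), "large"), ((4.7, 5.2), "large")]

model = train(labeled)
print(predict(model, (1.1, 1.0)))  # -> small
print(predict(model, (5.1, 4.9)))  # -> large
```

Notice that adding more labeled data changes the classifier's behavior without changing a single line of code, which is the essential difference from hand-coded rules.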
Machine learning came straight from the minds of the early AI crowd, and the algorithmic approaches developed over the years included decision tree learning and inductive logic programming, along with techniques such as clustering, reinforcement learning, and Bayesian networks. As we know, none of them achieved the ultimate goal of general AI, and even narrow AI was mostly out of reach with early machine learning approaches.
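One of the techniques named above, clustering, can be sketched in a few lines. Below is a bare-bones k-means on one-dimensional points; the data and the naive initialization are invented for illustration, not drawn from any particular system.

```python
# Toy k-means clustering on 1-D points: repeatedly assign points to their
# nearest center, then move each center to the mean of its group.

def kmeans_1d(points, k, iters=10):
    # Naive initialization: start centers at the first k points.
    centers = points[:k]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's group.
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: abs(p - centers[c]))
            groups[i].append(p)
        # Update step: move each center to the mean of its group.
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sorted(centers)

# Two obvious clumps of points, near 1 and near 9.
print(kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.8], k=2))
```

The algorithm finds structure in unlabeled data, which is what distinguishes clustering from the supervised approaches mentioned alongside it.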
One of the best application areas for machine learning for many years was computer vision, though it still required a great deal of hand-coding to get the job done. People would go in and write hand-coded classifiers: an edge detection filter so the program could tell where an object started and stopped; shape detection to determine whether it had eight sides; a classifier to recognize the letters "STOP". From all those hand-coded classifiers they would develop algorithms to make sense of the image and "learn" whether it was a stop sign.
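A caricature of that hand-coded approach helps show why it was brittle: every "classifier" below is a rule written by a person, and the final decision simply combines them. The feature names and thresholds are invented for illustration.

```python
# Hand-coded classifiers: each rule encodes a human's idea of a feature.

def has_eight_sides(shape):      # hand-written shape check
    return shape["sides"] == 8

def is_mostly_red(shape):        # hand-written color check
    return shape["red_fraction"] > 0.5

def says_stop(shape):            # hand-written lettering check
    return shape["text"] == "STOP"

def looks_like_stop_sign(shape):
    # Every rule must fire; fog that hides the lettering breaks the chain.
    return has_eight_sides(shape) and is_mostly_red(shape) and says_stop(shape)

sign = {"sides": 8, "red_fraction": 0.9, "text": "STOP"}
print(looks_like_stop_sign(sign))  # -> True

foggy = {"sides": 8, "red_fraction": 0.9, "text": ""}
print(looks_like_stop_sign(foggy))  # -> False
```

The failure mode is visible right in the code: if any single hand-written feature is missing from the input, the whole pipeline gives the wrong answer.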
It was good, but not great, especially on a foggy day when the sign is partly invisible or when a tree obscures it. There is a reason computer vision and image recognition did not come close to rivaling humans until very recently: they were too brittle and too error prone.
Deep Learning: A Technique for Implementing Machine Learning
Artificial neural networks, another algorithmic approach from the early machine learning crowd, have come and gone over the decades. Neural networks are inspired by our understanding of the biology of the brain: all those interconnections between neurons.
But unlike a biological brain, where any neuron can connect to any other neuron within a certain physical distance, these artificial neural networks have discrete layers, connections, and directions of data propagation. You might, for example, take an image, chop it up into a bunch of tiles, and feed them into the first layer of the neural network. Individual neurons in the first layer pass their data on to the second layer; the second layer of neurons does its task, and so on, until the final output is produced.
Each neuron assigns a weighting to its input, reflecting how correct or incorrect it is relative to the task at hand. The final output is then determined by the total of those weightings. Consider our stop sign example: attributes of the stop sign image are chopped up and "examined" by the neurons, such as its octagonal shape, its fire-engine red color, its distinctive lettering, its traffic-sign size, and its motion (or lack thereof). The neural network's task is to conclude whether this is a stop sign. Based on the weightings, it produces a "probability vector", which is essentially a highly educated guess. In our example, the system might be 86% confident the image is a stop sign and 7% confident it is a speed limit sign.
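The forward pass just described can be sketched as a weighted sum of feature inputs turned into a probability vector with a softmax. The weights, feature values, and class names below are invented for illustration; a real network would learn its weights from training data rather than having them written by hand.

```python
import math

def layer(inputs, weights):
    """Each output neuron is a weighted sum of all its inputs."""
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

def softmax(scores):
    """Turn raw scores into a probability vector that sums to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Invented features of the image: [octagon-ness, redness, "STOP" lettering].
features = [0.9, 0.8, 0.95]

# One row of invented weights per class: stop sign, speed limit, other.
weights = [[2.0, 1.5, 2.5],   # stop sign responds strongly to all three
           [0.5, 0.1, 0.2],   # speed limit sign barely matches
           [0.2, 0.3, 0.1]]   # "other" catches everything weakly

probs = softmax(layer(features, weights))
for name, p in zip(["stop sign", "speed limit", "other"], probs):
    print(f"{name}: {p:.0%}")
```

The output is a probability vector of exactly the kind described above: the class whose weighted evidence is strongest gets the highest confidence, with the remaining probability spread over the alternatives.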