Deep Learning - Introduction
This blog is part of the book I am working on, "Video Analytics using Deep Learning and TensorFlow", which will be published by Apress Media Inc. (a subsidiary of Springer).
Deep learning has a long history and aspires to mimic the human biological brain. Many techniques proposed in the literature still have not reached their objectives and expectations, and researchers continue to delve into this field, proposing sophisticated and revolutionary ideas to achieve those aspirations. Traditionally, deep learning research has focused mainly on supervised learning, but recently the field has been expanding its horizon to other modes of learning such as semi-supervised, unsupervised, and reinforcement learning.
Deep learning can be condensed down to essentially performing "A" to "B" mappings, where "A" and "B" broadly represent the input and output vectors respectively. A more accurate and comprehensive interpretation is that deep learning performs universal function approximation: with sufficient labeled data, it can approximate a very wide class of functions. We can increase the complexity of such a function by adding more layers and more units at each layer, which simply adds more variables and parameters to the function. Deep learning has better learning ability than traditional machine learning algorithms, so it can learn better features from datasets. Due to the increased availability of computing power, better results, and its practicality, deep learning has received more attention from researchers and has become widely practiced in the field of Artificial Intelligence.
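One concrete way to see the "more layers and units means more parameters" point is to count the parameters of a fully connected network. The sketch below is an illustrative example (not code from the book): each layer contributes a weight for every input-output unit pair plus one bias per output unit.

```python
def count_parameters(layer_sizes):
    """Count weights and biases in a fully connected network.

    layer_sizes lists the number of units per layer, e.g. [2, 4, 1]
    means 2 inputs, one hidden layer of 4 units, and 1 output.
    """
    total = 0
    for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:]):
        total += fan_in * fan_out  # one weight per input-output pair
        total += fan_out           # one bias per output unit
    return total

print(count_parameters([2, 4, 1]))     # -> 17
print(count_parameters([2, 8, 8, 1]))  # -> 105
```

Widening the hidden layer from 4 to 8 units and adding a second hidden layer grows the parameter count from 17 to 105, which is exactly the sense in which deeper and wider networks define more complex functions.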
We will learn about the fundamental methods and techniques used to approximate parametric functions. We will start our discussion with the fundamental feedforward network called the "multilayer perceptron", commonly known as a "neural network", which is the principal technique behind recent deep learning algorithms. Further, we will get to know the optimization techniques used to achieve the best approximated function.
Introduction to ANN
Neural Networks are inspired by the human biological brain and its inner workings. Before we delve deeper, we will look at how the human brain works and its parallels within Artificial Neural Networks. It is very difficult to understand the brain's workings in detail, but we can get a quick general overview. The human brain is composed of approximately 100 billion nerve cells, called neurons. Each neuron can connect to thousands of other neurons, forming a large network. A neuron communicates with other neurons in the form of electro-chemical signals. Every feeling, thought, and emotion that we have is due to these millions of small nerve cells. Without these neurons working and carrying messages throughout our body, we would feel and be able to do nothing.
These neurons keep passing information throughout our whole body constantly. Let's look at the specific parts of a neuron and their functions:
Dendrites: These are the root-like parts of the cell that stretch out from the cell body. They receive messages from other neurons by grabbing on to neurotransmitters.
Soma: This contains the nucleus of the cell, the "brain" of the cell. It tells the neuron whether or not to fire.
Axon: The axon is a wire-like structure that extends from the soma to the axon terminal buttons. It is like a highway along which messages travel down the neuron.
Myelin Sheath: This is a fatty layer of tissue that surrounds the axon. It helps speed up neural impulses.
Axon Terminal Buttons: The branches at the end of the axon that contain neurotransmitters and send them shooting across the synapse. They also suck up excess neurotransmitter in a process called reuptake.
Neurotransmitters: Chemicals contained in the terminal buttons that enable neurons to communicate. Neurotransmitters fit into receptor cells on the dendrites of other neurons like a key in a lock.
Synapse: The space in between neurons (neurons never touch each other).
When a neuron is doing nothing, it is at its resting potential and carries a slightly negative charge inside. When a neuron decides to go to work, this is called an action potential. An action potential is an electro-chemical process, which means half of the job is electrical and half is chemical.
Different Types of Neural Network Architectures:
A typical Neural Network consists of many artificial neurons, called units, arranged in a series of layers. A typical Artificial Neural Network consists of the following layers:
Input layer: It contains the units (artificial neurons) that receive input from the outside world, which the network will learn from, recognize, or otherwise process.
Output layer: It contains the units that respond with the information the network has learned about a task.
Hidden layer: These units sit between the input and output layers. The job of a hidden layer is to transform the input into something the output units can use.
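The three layers above can be sketched as a forward pass in a few lines of NumPy. This is an illustrative toy (the layer sizes and weights here are arbitrary, not from the book): the hidden layer transforms the input with a weighted sum and a nonlinearity, and the output layer reads those transformed features.

```python
import numpy as np

np.random.seed(0)

# Illustrative sizes: 3 input units, 4 hidden units, 2 output units.
W1 = np.random.randn(3, 4)   # input -> hidden weights
b1 = np.zeros(4)             # hidden biases
W2 = np.random.randn(4, 2)   # hidden -> output weights
b2 = np.zeros(2)             # output biases

def forward(x):
    hidden = np.tanh(x @ W1 + b1)  # hidden layer transforms the input
    return hidden @ W2 + b2        # output layer uses the transformed features

x = np.array([0.5, -1.0, 2.0])     # one input vector from the "outside world"
print(forward(x).shape)            # -> (2,): one value per output unit
```

Training a network amounts to adjusting the weight matrices and biases above so that the outputs match the desired targets; the optimization techniques that do this are covered later.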
The most basic form of artificial neural network has input units connected directly to output units, with no hidden layers; it is sometimes also called a Single Layer Perceptron.