In this tutorial, we’ll review the Multi-layer Perceptron (MLP) and Deep Neural Networks (DNN): what they are, how they are structured, what they are used for, and how they differ.
2. Multi-layer Perceptron
When we talk of multi-layer perceptrons or vanilla neural networks, we’re referring to the simplest and most common type of neural network. MLPs were initially inspired by the Perceptron, a supervised machine learning algorithm for binary classification. The Perceptron was only capable of handling linearly separable data, so the multi-layer perceptron was introduced to overcome this limitation.
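We can see this limitation directly with a small sketch. The following illustrative code trains a classic single-layer perceptron (zero-initialized weights, the standard update rule; the learning rate and epoch count are arbitrary choices) on two toy problems: AND, which is linearly separable, and XOR, which is not:

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Classic perceptron update rule: nudge weights by (target - prediction)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (target - pred) * xi
            b += lr * (target - pred)
    return w, b

def predict(X, w, b):
    return (X @ w + b > 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])   # linearly separable: a single line splits the classes
y_xor = np.array([0, 1, 1, 0])   # not linearly separable: no single line works

w, b = train_perceptron(X, y_and)
print(predict(X, w, b))  # → [0 0 0 1], the perceptron learns AND

w, b = train_perceptron(X, y_xor)
print(predict(X, w, b))  # never matches y_xor, no matter how long we train
```

No setting of a single weight vector can classify XOR correctly, which is exactly the gap the hidden layers of an MLP close.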
An MLP is a neural network capable of handling both linearly separable and non-linearly separable data. It belongs to a class of neural networks known as feed-forward neural networks, which connect the neurons in one layer to the next layer in a forward manner without any loops.
An MLP is an artificial neural network and hence consists of interconnected neurons that process data through three or more layers. The basic structure of an MLP comprises an input layer, one or more hidden layers, an output layer, an activation function, and a set of weights and biases:
The input layer is the initial layer of the network and takes in input in the form of numbers. Next, we have the hidden layers, which process the information received from the input layer by performing computations on it. There is no restriction on the number of hidden layers; however, an MLP usually has a small number of them.
Finally, the last layer, the output layer is responsible for producing results. The result is the output from the computations applied to the data through the network.
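We can sketch this structure as a single forward pass. All sizes, weights, and the activation choice below are illustrative, not prescriptive:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    """A common activation function: passes positives through, zeroes negatives."""
    return np.maximum(0, z)

# Input layer: 4 features; one hidden layer of 5 neurons; output layer of 3 scores.
x = rng.normal(size=4)                            # one input sample
W1, b1 = rng.normal(size=(5, 4)), np.zeros(5)     # hidden-layer weights and biases
W2, b2 = rng.normal(size=(3, 5)), np.zeros(3)     # output-layer weights and biases

h = relu(W1 @ x + b1)   # hidden layer: weighted sum, then activation
out = W2 @ h + b2       # output layer: produces the final scores

print(out.shape)  # (3,)
```

Each layer is just a weighted sum plus a bias, passed through an activation function before feeding the next layer — which is what "feed-forward" means in practice.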
Another characteristic of MLPs is found in backpropagation, a supervised learning technique for training a neural network. In simplified terms, backpropagation is a way of fine-tuning the weights in a neural network by propagating the error from the output back into the network. This improves the performance of the network while reducing the errors in the output.
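The fine-tuning loop above can be sketched in a few lines. This is a minimal illustration, not a production implementation: a one-hidden-layer network with sigmoid activations trained on XOR, where the hidden size, learning rate, and iteration count are all arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer: 4 neurons
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer: 1 neuron
lr = 1.0

for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back through the layers
    d_out = (out - y) * out * (1 - out)   # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # same error, pushed back to the hidden layer
    # Fine-tune weights and biases against the propagated error
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.mean((out - y) ** 2))  # mean squared error, far below the initial ~0.25
```

The key idea is visible in the two `d_` lines: the error computed at the output is multiplied back through the same weights that produced it, so every weight learns how much it contributed to the mistake.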
Lastly, due to their simplicity, MLPs usually require short training times to learn the representations in the data and produce an output. They also typically don’t require computing units more powerful than the average computer, although a device with a Graphics Processing Unit (GPU) can speed up training.
MLPs are usually used for data that is not linearly separable, such as in regression analysis. Due to their simplicity, they are also well suited to classification tasks and predictive modeling. Additionally, MLPs have been used for machine translation, weather forecasting, fraud detection, stock market prediction and credit rating prediction.
3. Deep Neural Network
A Deep Neural Network (DNN) is simply an artificial neural network with deep layers. Deep layers in this context mean that the network has several layers stacked together for processing and learning from data. DNNs were originally inspired by neurobiology, particularly the way humans learn to perceive and recognize physical objects.
Due to their complex nature, DNNs usually require long training times to learn from the input data. Additionally, they require powerful computers with specialized processing units such as Tensor Processing Units (TPUs) and Neural Processing Units (NPUs).
Similar to the structure of an MLP, a DNN is composed of an input layer, hidden layers, an output layer, weights, biases, and activation functions. In the case of a Convolutional Neural Network (CNN), the network also contains convolutional and pooling layers in addition to these components.
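Structurally, "deep" just means the MLP forward pass repeated over many stacked hidden layers. The sketch below builds an illustrative stack of five hidden layers (all sizes and the small weight scale are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(z):
    return np.maximum(0, z)

# Input of 16 features, five hidden layers of 64 neurons, output of 8 scores.
layer_sizes = [16, 64, 64, 64, 64, 64, 8]
layers = [
    (rng.normal(size=(m, n)) * 0.1, np.zeros(m))   # (weights, biases) per layer
    for n, m in zip(layer_sizes, layer_sizes[1:])
]

x = rng.normal(size=16)
for i, (W, b) in enumerate(layers):
    x = W @ x + b
    if i < len(layers) - 1:   # activation on every layer except the output
        x = relu(x)

print(x.shape)  # (8,)
```

Nothing new happens per layer; the depth is what lets the network compose simple transformations into representations of complex data.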
DNNs are powerful algorithms because of their deep layers, so they are usually used to handle complex computational tasks. One of these is computer vision, a branch of artificial intelligence that deals with enabling computers to derive meaning from visual data such as images and videos.
Computer vision tasks are accomplished with the use of a CNN. A CNN receives input data in the form of pictures and videos and then processes this data. The processing is done in such a way that the computer is capable of recognizing images of a similar format. This process is similar to how human beings learn to see and perceive their physical surroundings.
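The two layer types a CNN adds can be sketched directly. The tiny image and the edge-detecting filter below are purely illustrative; a real CNN learns its filter values during training:

```python
import numpy as np

image = np.arange(36, dtype=float).reshape(6, 6)   # a toy 6x6 "image"
kernel = np.array([[1.0, -1.0],
                   [1.0, -1.0]])                   # responds to vertical edges

def conv2d(img, k):
    """Valid 2D cross-correlation, the operation convolutional layers apply."""
    kh, kw = k.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def max_pool(img, size=2):
    """Non-overlapping max pooling: keep the largest value in each window."""
    h, w = img.shape[0] // size, img.shape[1] // size
    return img[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

fmap = conv2d(image, kernel)   # 6x6 image, 2x2 kernel -> 5x5 feature map
pooled = max_pool(fmap)        # 5x5 -> 2x2 after 2x2 pooling
print(fmap.shape, pooled.shape)  # (5, 5) (2, 2)
```

The convolution slides one small filter over every position of the image, so the same pattern is detected wherever it appears; pooling then shrinks the result, which is how a CNN recognizes images of a similar format regardless of small shifts.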
Other examples of the applications of DNNs are in natural language processing, genomics, machine translation, speech recognition, and self-driving cars.
4. Differences Between Multi-layer Perceptron and Deep Neural Network
There are subtle differences between MLPs and DNNs, found in the structure of the underlying neural network and in the tasks they are used for. These are summed up below:
- Structure: an MLP has an input layer, a small number of hidden layers and an output layer, while a DNN stacks many hidden layers (and, in the case of a CNN, convolutional and pooling layers).
- Training time: MLPs usually require short training times, while DNNs require long ones.
- Hardware: MLPs run on average computers, while DNNs require powerful machines with specialized units such as TPUs and NPUs.
- Applications: MLPs suit classification, regression and predictive modeling, while DNNs handle complex tasks such as computer vision, natural language processing and speech recognition.
5. Conclusion
In this tutorial, we reviewed MLPs and DNNs. MLPs are neural networks with at least three layers, while DNNs are neural networks with additional, deeper layers. Both DNNs and MLPs are capable of performing more complex tasks than traditional machine learning algorithms.