Feed-Forward Neural Network
Feed-forward neural networks are networks in which information flows only in the forward direction; they are also called multi-layered networks of neurons. Data first enters the input layer, flows through the hidden layers, and finally comes out through the output layer. There are no connections that feed the output back into the network.
Let us understand the working of a feed-forward neural network with a sample problem: predicting the weather. Let the inputs be

x1 = day/night
x2 = temperature
x3 = month

Let us assume a threshold value of 15: if the output is greater than 15, we predict rain; otherwise, a sunny day. Let the given inputs (x1, x2, x3) be (0, 14, 8), the initial weights (w1, w2, w3) be (0.2, 1, 1), and the biases (b1, b2, b3) be (1, 0, 0).
Data computation takes place in the following steps:
1. Multiplication of weights and inputs:
   x1 * w1 = 0 * 0.2 = 0
   x2 * w2 = 14 * 1 = 14
   x3 * w3 = 8 * 1 = 8
2. Adding the biases: In this step, each product obtained in the previous step is added to its bias.
   (x1 * w1) + b1 = 0 + 1 = 1
   (x2 * w2) + b2 = 14 + 0 = 14
   (x3 * w3) + b3 = 8 + 0 = 8
   Sum = (x1 * w1) + b1 + (x2 * w2) + b2 + (x3 * w3) + b3 = 23
3. Activation function: It transforms the weighted sum into the output of the node. It decides whether the neuron fires and governs the strength of the output signal. In this example we use a simple threshold (step) activation.
4. Output signal: Feeding the weighted sum through the activation function produces the output signal. In our example, the weighted sum 23 is greater than the threshold 15, so the network predicts a rainy day, as the sketch below also shows.
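The same computation can be written out in a few lines of Python. This is a minimal sketch of the forward pass above, using a simple step activation with the threshold of 15; the variable names are illustrative:

```python
# Forward pass of the weather example (illustrative names and values).
inputs  = [0, 14, 8]      # x1 = day/night, x2 = temperature, x3 = month
weights = [0.2, 1, 1]     # w1, w2, w3
biases  = [1, 0, 0]       # b1, b2, b3
THRESHOLD = 15

# Steps 1 and 2: multiply inputs by weights, add the biases, and sum.
weighted_sum = sum(x * w + b for x, w, b in zip(inputs, weights, biases))

# Steps 3 and 4: a threshold (step) activation turns the sum into the output signal.
prediction = "rainy" if weighted_sum > THRESHOLD else "sunny"
print(weighted_sum, prediction)   # 23 rainy
```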
Loss Function in Neural Networks
The loss function determines whether any correction is required for the learning model. Mathematically, the loss is calculated from the difference between the predicted output and the actual output:

loss = y_{predicted} - y_{original}

Different loss functions return different errors for the same prediction, which has a considerable effect on the performance of the model. For multi-class classification, for example, cross-entropy is the usual choice.
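To see how the choice of loss changes the error for the same prediction, here is a small sketch comparing squared error with binary cross-entropy; the target and prediction values are made up for illustration:

```python
import math

y_true, y_pred = 1.0, 0.8   # illustrative target and prediction

squared_error = (y_pred - y_true) ** 2   # 0.04
cross_entropy = -math.log(y_pred)        # ~0.223 (binary cross-entropy when y_true = 1)

print(squared_error, cross_entropy)
```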
Gradient descent:
Gradient descent is the optimization technique that updates the weights according to the change in error. The "gradient" refers to how much the output changes as the inputs change. The gradient-descent learning algorithm is as follows (a runnable sketch is given after the list):
- Initialize the weights and biases w, b
- Iterate over the data:
  - compute ŷ
  - compute the loss L(w, b)
  - w11 = w11 - η·∂L/∂w11
  - w12 = w12 - η·∂L/∂w12
  - .......
- until satisfied
Here η is the learning rate and ∂L/∂w is the gradient of the loss with respect to each weight.
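Here is a minimal sketch of this loop, fitting a single weight and bias to a toy dataset with squared-error loss; the data, learning rate, and epoch count are illustrative assumptions:

```python
# Gradient descent on a 1-D linear model y_hat = w*x + b (toy data: y = 2x + 1).
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
w, b = 0.0, 0.0        # initialize weight and bias
eta = 0.05             # learning rate (eta)

for epoch in range(200):           # iterate over the data until satisfied
    for x, y in data:
        y_hat = w * x + b              # compute y_hat
        dL_dw = 2 * (y_hat - y) * x    # gradient of L = (y_hat - y)^2 w.r.t. w
        dL_db = 2 * (y_hat - y)        # gradient w.r.t. b
        w -= eta * dL_dw               # w = w - eta * dL/dw
        b -= eta * dL_db               # b = b - eta * dL/db

print(w, b)   # converges toward w ≈ 2, b ≈ 1
```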
Backpropagation
The error is calculated by comparing the predicted value to the expected outcome. The error is then propagated back through the network, one layer at a time, updating the weights along the way. This is the backpropagation algorithm. The process is repeated for the entire training dataset; one round of updating the network over the entire training dataset is called an epoch, and a network may be trained for tens, hundreds, or many thousands of epochs.
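To make one round of backpropagation concrete, here is a hedged sketch for a tiny one-hidden-layer network in NumPy; the layer sizes, data, and learning rate are illustrative assumptions, and sigmoid activations are used so the gradients have a simple closed form:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny network: 3 inputs -> 4 hidden units (sigmoid) -> 1 output (sigmoid).
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([[0.0, 14.0, 8.0]])   # one training example (the weather inputs above)
y = np.array([[1.0]])              # expected outcome: 1 = rainy
eta = 0.01                         # learning rate

# Forward pass: compute the prediction.
h = sigmoid(x @ W1 + b1)
y_hat = sigmoid(h @ W2 + b2)

# Backward pass: propagate the error back, one layer at a time.
delta2 = (y_hat - y) * y_hat * (1 - y_hat)   # error at the output layer
delta1 = (delta2 @ W2.T) * h * (1 - h)       # error at the hidden layer

# Update the weights, layer by layer.
W2 -= eta * (h.T @ delta2); b2 -= eta * delta2.sum(axis=0)
W1 -= eta * (x.T @ delta1); b1 -= eta * delta1.sum(axis=0)
```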
Conclusion
Deep learning is a research-oriented field, and there are various neural network architectures. When training a model on large datasets, neural networks need massive amounts of computational power and hardware acceleration, which is typically achieved by configuring GPUs. Building an efficient model also requires iterating over the architecture.

 