.
A neural network is actually just a mathematical function. You enter a vector of values, those values get multiplied by other values (the network's weights) and combined, and a single value or a vector of values is output.
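.
To make that concrete, here is a minimal sketch of a neural network as nothing more than a function: a vector goes in, it gets multiplied by weights, and a vector comes out. The layer sizes, the sigmoid activation, and the random weights are illustrative assumptions, not anything prescribed above.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def neural_network(x, W1, b1, W2, b2):
        """One hidden layer: input -> hidden -> output, all plain arithmetic."""
        hidden = sigmoid(W1 @ x + b1)     # multiply inputs by weights, then squash
        return sigmoid(W2 @ hidden + b2)  # same again for the output layer

    # Example: 3 input features -> 4 hidden units -> 1 output value
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
    W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
    print(neural_network(np.array([0.2, -1.0, 0.5]), W1, b1, W2, b2))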
.
Neural networks are very useful in problem domains where there is no known function that maps the given features (the inputs) to the desired outputs (a classification or a regression value). One example is the weather - it has lots of features - type, temperature, air movement, cloud cover, past events, etc. - but nobody can say exactly how to calculate what the weather will be 2 days from now. A neural network is a function structured in a way that makes it easy to adjust its parameters until it approximates that weather prediction from the features.
.
A neural network, then, is a function with a structure suited to "learning". Suppose one takes the past five years of weather data: the features for every day, paired with the weather conditions 2 days later. The network weights (multiplying factors that live on the edges between nodes) are generated randomly, and the data is run through the network. Because the weights are random, the predictions the network outputs will at first be incorrect.
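.
The sketch below illustrates that setup under some assumed shapes: randomly generated weights, placeholder "weather" features standing in for the real ones, and a single pass of the data through the untrained network, which produces a large prediction error.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(7)
    days = 5 * 365                          # five years of daily records
    features = rng.normal(size=(days, 6))   # placeholders for temperature, cloud cover, ...
    targets = rng.uniform(size=(days, 1))   # made-up "weather 2 days later" values

    # Randomly generated weights living on the edges of the network.
    W1, b1 = rng.normal(size=(6, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

    # Run the data through once: an untrained network maps inputs to
    # arbitrary outputs, so the prediction error is large.
    hidden = sigmoid(features @ W1 + b1)
    predictions = sigmoid(hidden @ W2 + b2)
    print("mean squared error before any learning:",
          np.mean((predictions - targets) ** 2))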
.
Using a learning algorithm grounded in calculus, such as back-propagation, one can use those output errors to update all the weights in the network. After enough passes through the data, the error reaches some lowest point, and the goal is to stop the learning algorithm once it gets there. The network is then fixed, and at that point it is just a mathematical function that maps input values to output values, like any other equation.
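.
Below is a hedged sketch of that training loop: random starting weights, a forward pass, back-propagation of the error via the chain rule, and repeated weight updates until the error stops improving. The toy XOR dataset, the learning rate, and the number of passes are illustrative assumptions rather than anything from the answer above.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(42)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs
    T = np.array([[0.], [1.], [1.], [0.]])                  # targets (XOR)

    W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)   # input -> hidden weights
    W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)   # hidden -> output weights
    lr = 2.0                                        # learning rate (assumed)

    for epoch in range(10000):
        # Forward pass: the network is just a function of X and the weights.
        H = sigmoid(X @ W1 + b1)
        Y = sigmoid(H @ W2 + b2)
        error = np.mean((Y - T) ** 2)

        # Backward pass: the chain rule turns the output error into a
        # gradient for every weight in the network.
        dY = 2 * (Y - T) / len(X)          # d(error)/d(output)
        dZ2 = dY * Y * (1 - Y)             # back through the output sigmoid
        dW2, db2 = H.T @ dZ2, dZ2.sum(axis=0)
        dZ1 = (dZ2 @ W2.T) * H * (1 - H)   # back through the hidden sigmoid
        dW1, db1 = X.T @ dZ1, dZ1.sum(axis=0)

        # Move every weight a small step against its gradient.
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    # Once learning stops, the weights are fixed and the network is just an
    # ordinary function mapping input values to output values.
    print("final error:", error)
    print("predictions:", sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(2))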
.
REFERENCE:
https://softwareengineering.stackexchange.com/questions/72093/what-is-a-neural-network-in-simple-words
.