Introduction to Neurons in Neural Networks
A neuron is the basic information processing unit in a neural network.
A neuron consists of five basic elements: the synapse, the parameter, the linear combiner, the bias, and the activation function. Throughout this article, a neuron is denoted as k.
Synapse
A synapse is an input signal to a neuron; it is also known as a connecting link. Every neuron in a neural network receives a set of synapses.
A synapse is denoted as xkj, where k represents the neuron and j represents the index of the synapse. The total number of synapses accepted by a neuron is denoted by m.
Unfortunately, Medium does not support superscripts and subscripts. Therefore, I have chosen a simple alternative where the base component is written in bold characters whereas the superscript/subscript characters are written in italics. For example, x² becomes x2 (x is written in bold while 2 is written in italics).
Parameter
Each synapse is multiplied by a corresponding special value known as a parameter or weight. The parameters are determined during the learning phase from the training set. In a neuron with m synapses, there are m parameters; in other words, every synapse has a corresponding parameter. Parameter values typically lie within intervals such as [-1, 1] or [0, 1].
A parameter is denoted as wkj, where k represents the neuron and j represents the index of the parameter. The total number of parameters is equal to the total number of synapses.
Linear Combiner
After each synapse is multiplied by its respective parameter, the products are added together at the summing junction, which is also known as the linear combiner.
The output of the summing junction is denoted as uk, where k represents the neuron.
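As a minimal sketch, the summing junction can be written in a few lines of Python. The input and parameter values below are purely illustrative, and the variable names mirror the notation used here (x for synapses, w for parameters, u_k for the linear combiner output).

```python
# A minimal sketch of the linear combiner for a single neuron k,
# assuming m = 3 synapses with purely illustrative values.
x = [0.5, -1.0, 2.0]   # synapses x_k1, x_k2, x_k3
w = [0.4, 0.3, -0.1]   # parameters w_k1, w_k2, w_k3

# Summing junction: u_k is the sum of w_kj * x_kj over j = 1..m
u_k = sum(w_j * x_j for w_j, x_j in zip(w, x))
```

Each product w_kj * x_kj contributes one term to the sum, so the total number of multiplications equals the number of synapses m.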
Bias
An external value known as the bias is added to the output of the summing junction. The resulting value is known as the induced local field or activation potential. Depending on whether it is positive or negative, the bias either increases or decreases the result of the summing junction.
When the induced local field is plotted against the output of the summing junction, the graph shows an affine transformation because of the bias: the plotted line does not pass through the origin.
The bias is denoted as bk. The induced local field or activation potential is denoted as vk.
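The bias step is a single addition. Here is a self-contained sketch with illustrative values for u_k and b_k:

```python
u_k = -0.3   # output of the summing junction (illustrative value)
b_k = 0.5    # bias for neuron k (illustrative value)

# Induced local field (activation potential): v_k = u_k + b_k
v_k = u_k + b_k
```

Because v_k = u_k + b_k is a line with intercept b_k, the relationship between v_k and u_k is affine rather than strictly linear whenever b_k is nonzero.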
Activation Function
A special function known as the activation function limits the amplitude of the output of the neuron. In other words, it normalizes the output to an interval such as [0, 1] or [-1, 1].
The activation function is denoted as φ.
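One common choice of activation function is the logistic sigmoid, which squashes any real-valued input into the interval (0, 1). The sketch below uses it purely as an example; other activation functions (such as tanh, which maps to (-1, 1)) follow the same pattern.

```python
import math

# Sketch of one common activation function, the logistic sigmoid,
# which limits the neuron's output to the interval (0, 1).
def phi(v):
    return 1.0 / (1.0 + math.exp(-v))

# Output of neuron k for an illustrative induced local field v_k = 0.2
y_k = phi(0.2)
```

No matter how large or small v_k becomes, phi(v_k) stays strictly between 0 and 1, which is exactly the amplitude-limiting behaviour described above.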
Mathematical Equations
The above statements can be written mathematically as follows.