Introduction to Neurons in Neural Networks

Samuel Rowe
Artificial Neural Networks
Sep 28, 2019

A neuron is the basic information processing unit in a neural network.

There are five basic elements in a neuron: the synapse, the parameter, the linear combiner, the bias, and the activation function. A neuron is identified by the index $k$.

A nonlinear model of a neuron.

Synapse

A synapse is an input signal to a neuron; it is also known as a connecting link. Every neuron in a neural network accepts a set of synapses.

A synapse is denoted as $x_{kj}$, where $k$ represents the neuron and $j$ represents the index of the synapse. The total number of synapses accepted by a neuron is denoted by $m$.

Synapses are basically the input to your neuron.
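
To make the notation concrete, here is a minimal Python sketch (not from the original article) of the synapses of a single neuron; the variable names x_k and m are illustrative.

```
# Hypothetical sketch: the m = 3 synapses (input signals) of one
# neuron k, held in a plain Python list. x_k[j] plays the role of
# x_kj in the notation above (0-based indexing here).
x_k = [0.5, -1.2, 3.0]
m = len(x_k)  # total number of synapses accepted by the neuron
```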

Parameter

Each synapse is multiplied by a corresponding value known as a parameter or weight. The parameters are determined during the learning phase from your training set. In a neuron with $m$ synapses, there are $m$ parameters; in other words, every synapse has a corresponding parameter. Parameters often lie within intervals such as [-1, 1] or [0, 1].

A parameter is denoted as $w_{kj}$, where $k$ represents the neuron and $j$ represents the index of the parameter. The total number of parameters is equal to the total number of synapses.

Weights are learned by your neural network during the training phase.
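
As a minimal sketch, the parameters could be initialized at random within [-1, 1] before training adjusts them; the names below are illustrative, not from the article.

```
import random

# Hypothetical sketch: one parameter (weight) per synapse, drawn
# uniformly from the interval [-1, 1]. Training would then adjust
# these values based on the training set.
m = 3
w_k = [random.uniform(-1.0, 1.0) for _ in range(m)]
```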

Linear Combiner

After the synapses are multiplied by their respective parameters, the products are all added together at the summing junction, which is also known as the linear combiner.

The output of the summing junction is denoted as $u_k$, where $k$ represents the neuron.

The linear combiner or the summing junction adds all the products of the synapses and parameters.
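
In code, the summing junction reduces to a sum of element-wise products; here is a minimal sketch continuing the illustrative names from above.

```
# Hypothetical sketch: the linear combiner. Each synapse x_k[j] is
# multiplied by its parameter w_k[j], and the products are summed
# to give u_k.
x_k = [0.5, -1.2, 3.0]
w_k = [0.4, 0.1, -0.6]
u_k = sum(w * x for w, x in zip(w_k, x_k))  # u_k is approximately -1.72
```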

Bias

An external value known as the bias is added to the output of the summing junction. The resulting value is known as the induced local field or activation potential. Depending on its sign, the bias either increases or decreases the result of the summing junction.

When the induced local field is plotted against the output of the summing junction, the presence of the bias produces an affine rather than a strictly linear relationship. In other words, the plotted graph does not pass through the origin.

The bias is denoted as $b_k$. The induced local field or activation potential is denoted as $v_k$.

The bias applies an affine transformation to the combiner output to produce the activation potential.
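
Continuing the sketch, the bias is a single scalar added to the combiner output; note that when $u_k$ is zero, $v_k$ equals $b_k$ rather than zero, which is exactly the affine shift described above.

```
# Hypothetical sketch: the bias b_k shifts the output of the summing
# junction, producing the induced local field (activation potential)
# v_k. The relationship v_k = u_k + b_k is affine, not linear.
u_k = -1.72
b_k = 0.9
v_k = u_k + b_k  # v_k = -0.82
```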

Activation Function

A special function known as the activation function limits the amplitude of the output of the neuron. In other words, it normalizes the output to an interval such as [0, 1] or [-1, 1].

The activation function is denoted as φ.
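
A common choice for φ (an assumption of this sketch, since the article does not fix a particular function) is the logistic sigmoid, which squashes any real input into the open interval (0, 1).

```
import math

# Hypothetical sketch: the logistic sigmoid as an activation
# function. It maps any real v into (0, 1), limiting the amplitude
# of the neuron's output.
def phi(v: float) -> float:
    return 1.0 / (1.0 + math.exp(-v))

y_k = phi(-0.82)  # roughly 0.306
```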

Mathematical Equations

The above statements can be written mathematically as follows.

$$u_k = \sum_{j=1}^{m} w_{kj}\, x_{kj}$$

This equation represents the output of the summing junction; it adds the products of the synapses and parameters of the neuron.

$$v_k = u_k + b_k$$

This equation represents the induced local field or activation potential. It is the sum of the bias and the output of the summing junction.

$$y_k = \varphi(v_k)$$

The output of the neuron, $y_k$, is the result of applying the activation function to the induced local field.
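
Putting the three equations together, here is a self-contained Python sketch of a single neuron; all names are illustrative, and the logistic sigmoid stands in for φ.

```
import math
import random

def neuron_output(x, w, b):
    """Compute the output of one neuron using the three equations
    above: the linear combiner, the bias, and the activation function."""
    u = sum(wj * xj for wj, xj in zip(w, x))  # summing junction, u_k
    v = u + b                                 # induced local field, v_k
    return 1.0 / (1.0 + math.exp(-v))         # output y_k = phi(v_k)

# Example usage with m = 3 synapses and randomly drawn parameters.
m = 3
x = [0.5, -1.2, 3.0]
w = [random.uniform(-1.0, 1.0) for _ in range(m)]
b = 0.9
print(neuron_output(x, w, b))  # a value in (0, 1)
```

Because the weights here are random, the printed output simply illustrates the data flow; a trained network would have learned specific values for w and b.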
