
ANN in depth


Artificial Neural Network Layers

An artificial neural network is typically organized in layers. Layers are made up of many interconnected ‘nodes’, each of which contains an ‘activation function’. A neural network may contain the following three layers:

a. Input layer

The purpose of the input layer is to receive the values of the explanatory attributes for each observation. Usually, the number of nodes in the input layer is equal to the number of explanatory variables. The ‘input layer’ presents the patterns to the network, which communicates them to one or more ‘hidden layers’.
The nodes of the input layer are passive, meaning they do not change the data. Each node receives a single value on its input and duplicates that value to its many outputs, so every value from the input layer is sent to all the hidden nodes.

b. Hidden layer

The hidden layers apply given transformations to the input values inside the network. Each hidden node has incoming arcs from input nodes or from other hidden nodes, and outgoing arcs to output nodes or to other hidden nodes. There may be one or more hidden layers. In a hidden layer, the actual processing is done via a system of weighted ‘connections’: the values entering a hidden node are multiplied by weights, a set of predetermined numbers stored in the program, and the weighted inputs are then added together to produce a single number.
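As a minimal sketch of what one hidden node does, the snippet below multiplies incoming values by weights, sums them, and passes the result through an activation function. The specific numbers and the sigmoid choice are illustrative assumptions, not values from the post.

import numpy as np

# One hidden node: weighted sum of incoming values, then an activation function.
inputs = np.array([0.5, -1.2, 3.0])           # values arriving from the input layer (illustrative)
weights = np.array([0.4, 0.1, -0.6])          # predetermined weights for this node (illustrative)
bias = 0.2                                    # optional constant term

weighted_sum = np.dot(inputs, weights) + bias # weighted inputs added into a single number

def sigmoid(x):
    """A common activation function mapping the sum into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

hidden_output = sigmoid(weighted_sum)
print(hidden_output)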

c. Output layer

The hidden layers then link to an ‘output layer‘. The output layer receives connections from the hidden layers or from the input layer, and it returns an output value that corresponds to the prediction of the response variable. In classification problems, there is usually only one output node. The active nodes of the output layer combine and transform the data to produce the output values.
The ability of the neural network to provide useful data manipulation lies in the proper selection of the weights. This is different from conventional information processing.
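Putting the three layers together, a small forward pass might look like the sketch below. The layer sizes, random weights, and sigmoid activations are assumptions chosen only to illustrate how the input layer passes values on, the hidden layer transforms them, and the output layer produces the prediction.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical shapes: 3 input nodes, 4 hidden nodes, 1 output node.
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(3, 4))   # weights from input layer to hidden layer
W_output = rng.normal(size=(4, 1))   # weights from hidden layer to output layer

x = np.array([0.5, -1.2, 3.0])       # one observation's explanatory attributes

# The input layer is passive: it simply passes x on to every hidden node.
hidden = sigmoid(x @ W_hidden)       # hidden layer: weighted sums plus activation
output = sigmoid(hidden @ W_output)  # output layer: prediction of the response variable

print(output)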

Structure of a Neural Network

The structure of a neural network is also referred to as its ‘architecture’ or ‘topology’. It consists of the number of layers, the elementary units, the interconnections, and the weight adjustment mechanism. The choice of structure determines the results that will be obtained, and it is the most critical part of the implementation of a neural network.
The simplest structure is one in which the units are distributed in two layers: an input layer and an output layer. Each unit in the input layer has a single input and a single output which is equal to the input. The output unit has all the units of the input layer connected to its input, with a combination function and a transfer function. There may be more than one output unit. In this case, the resulting model is a linear or logistic regression, depending on whether the transfer function is linear or logistic, and the weights of the network are the regression coefficients.
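A minimal sketch of this two-layer case is shown below: with a single output unit and a logistic transfer function, the network computes a logistic regression, with the weights acting as the regression coefficients. The coefficient values here are illustrative assumptions.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([1.8, 0.3, -0.7])             # input units simply pass these values on
coefficients = np.array([0.9, -0.4, 0.2])  # network weights = regression coefficients (illustrative)
intercept = -0.1

logit = np.dot(x, coefficients) + intercept  # combination function
probability = sigmoid(logit)                 # logistic transfer function
print(probability)

# With a linear (identity) transfer function, the same structure is an
# ordinary linear regression: the prediction is simply the value of logit.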
Adding one or more hidden layers between the input and output layers, and units within those layers, increases the predictive power of the neural network. However, the number of hidden layers should be kept as small as possible. This ensures that the neural network does not simply store all the information from the learning set but can generalize from it, avoiding overfitting.
Overfitting occurs when the weights make the system learn the details of the learning set instead of discovering its underlying structure. This happens when the size of the learning set is too small in relation to the complexity of the model.
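One way to see what "complexity of the model" means here is to count the weights a fully connected network has to fit. The helper below is an illustrative sketch (layer sizes are assumptions): the more and larger the hidden layers, the more weights there are, and the larger the learning set must be to avoid memorizing its details.

def parameter_count(layer_sizes):
    """Number of weights (including a bias per unit) in a fully connected
    network whose layers have the given sizes, e.g. [10, 5, 1]."""
    return sum((n_in + 1) * n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

print(parameter_count([10, 1]))          # no hidden layer: 11 weights
print(parameter_count([10, 5, 1]))       # one small hidden layer: 61 weights
print(parameter_count([10, 50, 50, 1]))  # two large hidden layers: 3151 weights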
Whether or not a hidden layer is present, the output layer of the network can have many units when there are many classes to predict.
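For such multi-class cases, a common arrangement (an assumption here, not something the post specifies) is one output unit per class, with the raw scores normalized into class probabilities, for example with a softmax function:

import numpy as np

def softmax(z):
    """Turn the raw scores of several output units into class probabilities."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Hypothetical raw scores from three output units, one per class to predict.
output_scores = np.array([2.0, 0.5, -1.0])
class_probabilities = softmax(output_scores)
predicted_class = int(np.argmax(class_probabilities))

print(class_probabilities, predicted_class)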

Advantages and Disadvantages of Neural Networks

Let us look at a few advantages and disadvantages of neural networks:
  • Neural networks perform well with both linear and nonlinear data, but a common criticism of neural networks, particularly in robotics, is that they require a large diversity of training examples for real-world operation. This is because any learning machine needs sufficiently representative examples in order to capture the underlying structure that allows it to generalize to new cases.
  • Neural networks keep working even if one or a few units fail to respond, but implementing large and effective software neural networks requires committing considerable processing and storage resources. While the brain has hardware tailored to the task of processing signals through a graph of neurons, simulating even a highly simplified form on Von Neumann technology may compel a neural network designer to fill millions of database rows for its connections, which can consume vast amounts of computer memory and hard disk space.
  • A neural network learns from the analyzed data and does not require reprogramming, but such networks are referred to as “black box” models and provide very little insight into what they really do. The user just feeds in the input, watches the network train, and awaits the output.
