
Artificial Neural Networks (ANN)

Artificial Neural Networks (ANNs) are the foundation of modern deep learning systems. Inspired by how the human brain processes information, ANNs are capable of learning from data, identifying complex patterns, and making intelligent decisions without explicit rule-based programming.

This tutorial will cover the intuition, architecture, working mechanism, a detailed example, visual representation, and advantages and disadvantages of ANNs.


1. What is an Artificial Neural Network?

An Artificial Neural Network (ANN) is a computational model made up of interconnected nodes called neurons, organized in layers. Each neuron processes inputs, applies a transformation (usually a weighted sum followed by an activation function), and produces an output that is passed to the next layer.

In essence, an ANN learns to map input data (like an image, a sentence, or a number) to the correct output (like a label or prediction) by adjusting internal parameters called weights.
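To make this concrete, here is a minimal sketch in Python (using NumPy) of what a single artificial neuron computes. The input values, weights, and bias below are made-up illustration numbers, not parameters from any real model:

  import numpy as np

  def sigmoid(z):
      # Squashes any real number into the range (0, 1)
      return 1.0 / (1.0 + np.exp(-z))

  # Hypothetical inputs and learned parameters
  x = np.array([0.5, 0.8])     # two input features
  w = np.array([0.4, -0.2])    # one weight per input
  b = 0.1                      # bias term

  z = np.dot(w, x) + b         # weighted sum plus bias
  output = sigmoid(z)          # activation function
  print(output)                # value passed on to the next layer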


2. Biological Inspiration

The idea of ANNs is derived from the human brain. In the brain, neurons communicate with each other through synapses, which strengthen or weaken based on experience. Similarly, in an ANN, connections between nodes have weights that determine how strongly one neuron influences another.

Just as humans learn from experience, an ANN learns from data — adjusting its weights through training to improve prediction accuracy.


3. Architecture of an ANN

A typical ANN is made up of three types of layers:

  1. Input Layer
    This layer receives the input data. For example, if you are predicting house prices based on area and number of rooms, these features form the input layer.
  2. Hidden Layers
    These layers perform intermediate computations. Each neuron in a hidden layer applies weights to its inputs, adds a bias term, and passes the result through a non-linear activation function. Hidden layers allow the model to learn complex, non-linear relationships.
  3. Output Layer
    The final layer produces the output. For classification problems, this could be a probability distribution over classes (for example, cat vs. dog).

Visual Representation (Conceptual)

Input Layer → Hidden Layer → Output Layer
   x1, x2 → (w1, w2, b) → Activation → Output

You can visualize it as:

  • Input neurons passing signals forward
  • Hidden layers transforming these signals
  • Output layer generating the final result
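As a rough sketch, this architecture can be described in code simply by the shapes of its weight matrices and bias vectors. The layer sizes below (2 inputs, 3 hidden neurons, 1 output) are assumptions chosen only for illustration:

  import numpy as np

  n_inputs, n_hidden, n_outputs = 2, 3, 1     # assumed layer sizes

  # One weight matrix and one bias vector per connection between layers
  W1 = np.random.randn(n_inputs, n_hidden)    # input  -> hidden, shape (2, 3)
  b1 = np.zeros(n_hidden)                     # one bias per hidden neuron
  W2 = np.random.randn(n_hidden, n_outputs)   # hidden -> output, shape (3, 1)
  b2 = np.zeros(n_outputs)                    # one bias for the output neuron

  total_parameters = W1.size + b1.size + W2.size + b2.size
  print(total_parameters)                     # 2*3 + 3 + 3*1 + 1 = 13 trainable parameters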

4. How an ANN Works – Step by Step

How an ANN learns can be divided into three main phases:

Step 1: Forward Propagation

In forward propagation, input data passes through the network layer by layer. Each neuron does three things:

  • Computes a weighted sum of its inputs.
  • Adds a bias term.
  • Applies an activation function to introduce non-linearity.

The output from one layer becomes the input for the next.
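A minimal forward pass through one hidden layer might look like this; ReLU is assumed as the hidden activation, and W1, b1, W2, b2 are the hypothetical parameters defined in the architecture sketch above:

  def relu(z):
      # Zero for negative inputs, unchanged for positive inputs
      return np.maximum(0, z)

  def forward(x, W1, b1, W2, b2):
      hidden = relu(np.dot(x, W1) + b1)   # weighted sum + bias + activation
      output = np.dot(hidden, W2) + b2    # output layer (no activation for regression)
      return output

  prediction = forward(np.array([0.5, 0.8]), W1, b1, W2, b2)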

Step 2: Loss Calculation

After the network produces an output, it is compared with the actual target value using a loss function (for example, mean squared error or cross-entropy). The loss quantifies how far the prediction is from the true answer.
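As a quick illustration, both loss functions mentioned above can be written in a few lines; the predicted and target values used here are made-up numbers:

  import numpy as np

  def mean_squared_error(y_true, y_pred):
      # Average of squared differences; typical for regression
      return np.mean((y_true - y_pred) ** 2)

  def binary_cross_entropy(y_true, y_pred, eps=1e-12):
      # Penalizes confident wrong predictions; typical for classification
      y_pred = np.clip(y_pred, eps, 1.0 - eps)
      return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

  print(mean_squared_error(np.array([200000.0]), np.array([180000.0])))   # 400000000.0
  print(binary_cross_entropy(np.array([1.0]), np.array([0.9])))           # ≈ 0.105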

Step 3: Backpropagation and Weight Update

The loss is then propagated backward through the network to adjust the weights. This process is called backpropagation. Using an optimization algorithm like gradient descent, the network updates its weights in the direction that minimizes the loss.

This cycle (forward → loss → backward → update) repeats for multiple iterations, called epochs, until the model learns the desired mapping.
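The weight update itself is just a small step against the gradient. Here is a sketch of one update for a single weight, with an assumed learning rate and an assumed gradient value (in practice the gradient is computed by backpropagation):

  learning_rate = 0.01
  w = 0.40      # current value of one weight (assumed)
  grad = 2.5    # dLoss/dw for this weight, from backpropagation (assumed)

  w = w - learning_rate * grad   # move against the gradient to reduce the loss
  print(w)                       # 0.375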


5. Example: Predicting House Prices Using an ANN

Let’s consider a simple example where we want to predict house prices based on two features:

  1. Size of the house (in square feet)
  2. Number of rooms

Input

We have a small dataset like:

  Size (sqft)   Rooms   Price ($)
  1000          2       150,000
  1500          3       200,000
  2000          4       250,000

Step 1: Define the Architecture

  • Input layer: 2 neurons (size, rooms)
  • Hidden layer: 3 neurons
  • Output layer: 1 neuron (price)

Step 2: Forward Propagation

Each neuron in the hidden layer:

  • Computes a weighted sum of its inputs (size and rooms).
  • Adds a bias term.
  • Applies an activation function (such as ReLU).

Then, the hidden layer’s outputs go to the output neuron, which predicts the price.

Step 3: Calculate Loss

The predicted price is compared with the actual price using a loss function like Mean Squared Error (MSE).

Step 4: Backpropagation

The model adjusts weights slightly to reduce error. This is repeated many times until predictions are accurate.

Step 5: Prediction

Once trained, if you input a new house (say 1800 sqft and 3 rooms), the model predicts a price based on the learned weights.
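Putting steps 1–5 together, here is a small end-to-end sketch in NumPy. It trains a 2-3-1 network on the three rows above and then predicts a price for the new house. The feature scaling, learning rate, random seed, and number of epochs are assumptions chosen only to make this toy example train stably, so the exact predicted number is illustrative rather than meaningful:

  import numpy as np

  np.random.seed(0)

  # Training data, scaled so the numbers stay small during training
  X = np.array([[1000, 2], [1500, 3], [2000, 4]], dtype=float)
  y = np.array([[150000], [200000], [250000]], dtype=float)
  X_scaled = X / np.array([1000.0, 10.0])   # size in thousands of sqft, rooms in tens
  y_scaled = y / 100000.0                   # price in hundreds of thousands

  # Step 1 (architecture): 2 inputs -> 3 hidden neurons -> 1 output
  W1 = np.random.randn(2, 3) * 0.5
  b1 = np.zeros((1, 3))
  W2 = np.random.randn(3, 1) * 0.5
  b2 = np.zeros((1, 1))

  def relu(z):
      return np.maximum(0, z)

  learning_rate = 0.05
  for epoch in range(5000):
      # Step 2: forward propagation
      z1 = X_scaled @ W1 + b1
      h = relu(z1)
      y_pred = h @ W2 + b2

      # Step 3: loss (mean squared error), useful to monitor training
      loss = np.mean((y_pred - y_scaled) ** 2)

      # Step 4: backpropagation (gradients of the loss w.r.t. each parameter)
      n = len(X_scaled)
      d_out = 2 * (y_pred - y_scaled) / n
      grad_W2 = h.T @ d_out
      grad_b2 = d_out.sum(axis=0, keepdims=True)
      d_hidden = (d_out @ W2.T) * (z1 > 0)
      grad_W1 = X_scaled.T @ d_hidden
      grad_b1 = d_hidden.sum(axis=0, keepdims=True)

      # Gradient-descent weight update
      W1 -= learning_rate * grad_W1
      b1 -= learning_rate * grad_b1
      W2 -= learning_rate * grad_W2
      b2 -= learning_rate * grad_b2

  # Step 5: predict the price of a new 1800 sqft, 3-room house
  new_house = np.array([[1800, 3]], dtype=float) / np.array([1000.0, 10.0])
  prediction = relu(new_house @ W1 + b1) @ W2 + b2
  print(prediction * 100000)   # should land roughly between 200,000 and 250,000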


6. Activation Functions in ANN

Activation functions add non-linearity, allowing the model to learn complex patterns. Common types include:

  • Sigmoid: Squashes any input to a value between 0 and 1, useful for representing probabilities.
  • ReLU (Rectified Linear Unit): Outputs zero for negative inputs and passes positive inputs through unchanged; the most common choice for hidden layers.
  • Tanh: Squashes input to a value between -1 and 1, which helps keep activations centered around zero.
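For reference, each of these can be written in a single line of NumPy; the test values below are arbitrary:

  import numpy as np

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))   # output in (0, 1)

  def relu(z):
      return np.maximum(0, z)           # 0 for negatives, identity for positives

  def tanh(z):
      return np.tanh(z)                 # output in (-1, 1)

  z = np.array([-2.0, 0.0, 2.0])
  print(sigmoid(z))   # ≈ [0.119, 0.5, 0.881]
  print(relu(z))      # [0., 0., 2.]
  print(tanh(z))      # ≈ [-0.964, 0., 0.964]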

7. Visual Intuition

Imagine you are teaching a child to recognize fruits. Initially, the child makes mistakes, but as you correct them, they adjust their understanding. Similarly, the ANN continuously adjusts weights based on errors until it correctly identifies the pattern.

Each weight adjustment is like “learning” from past mistakes.


8. Advantages of Artificial Neural Networks

  1. Ability to Learn Non-Linear Relationships
    ANNs can model complex, non-linear patterns that traditional algorithms like linear regression cannot capture.
  2. Feature Learning
    Neural networks can automatically learn useful features from raw data, reducing the need for manual feature engineering.
  3. Adaptability
    ANNs can generalize well to new data if trained properly, making them suitable for diverse tasks like image recognition, speech processing, and text analysis.
  4. Scalability
    With enough data and computational power, ANNs can scale to very deep architectures (deep neural networks) for high accuracy.
  5. Parallel Processing Capability
    Neural networks can process multiple inputs simultaneously, leveraging GPUs for high-speed computations.

9. Disadvantages of Artificial Neural Networks

  1. High Computational Cost
    Training ANNs requires significant computational resources, especially for large networks with millions of parameters.
  2. Large Data Requirement
    ANNs perform best when trained on vast datasets. With limited data, they tend to overfit.
  3. Lack of Interpretability
    ANNs are often considered “black boxes” because understanding why a network made a particular decision is difficult.
  4. Difficult to Tune
    Selecting the right architecture, learning rate, and number of layers often requires extensive experimentation.
  5. Risk of Overfitting
    Without regularization techniques like dropout or early stopping, ANNs may memorize training data instead of generalizing.

Diagram

Here’s a clean textual diagram of an Artificial Neural Network:

             Deep Neural Network (DNN)

                   [Input Layer]
                /       |       \
           (x1)       (x2)      (x3)
                \       |       /
                 \      |      /
                [Hidden Layer 1]
             /      |      |      \
        (h11)    (h12)   (h13)   (h14)
             \      |      |      /
              \     |     |     /
                [Hidden Layer 2]
             /       |       |       \
        (h21)     (h22)    (h23)    (h24)
             \       |       |       /
              \      |      |      /
                  [Output Layer]
                        |
                    (y_pred)

Explanation of layers:

  • The Input Layer receives features (x1, x2, x3).
  • Hidden Layer 1 and Hidden Layer 2 extract increasingly complex representations of the data.
  • The Output Layer produces the final prediction (for example, a class label or probability).