RIPPLE : ABHISHEK SAPKOTA
Artificial Intelligence is a fascinating field, and most of us are already aware of its applications in almost every domain. However, for many of us, AI can seem intimidating because of the complex jargon and mathematical concepts behind it. One of the core concepts in AI is the neural network, often considered the building block of AI, yet many beginners struggle to understand how a neural network actually works. In this blog, we will break down neural networks in the simplest possible way using an easy-to-understand analogy.
Basic Neural Network Architecture

A neural network consists of three key layers (see the short code sketch after this list):
- Input Layer: This is where data enters the network.
- Hidden Layers: These layers process the data by applying mathematical operations.
- Output Layer: This is where the final prediction is made.
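To make this structure concrete, here is a minimal sketch of a tiny network in Python with NumPy. The layer sizes and the random weights are illustrative assumptions, not values from the example later in this post.

```python
import numpy as np

def sigmoid(z):
    # Squashes any number into the range (0, 1)
    return 1 / (1 + np.exp(-z))

# Input layer: 2 features (e.g., study hours and sleep hours)
x = np.array([6.0, 8.0])

# Hidden layer: 3 neurons (size chosen arbitrarily for illustration)
W_hidden = np.random.randn(3, 2)   # weights connecting input -> hidden
b_hidden = np.zeros(3)             # biases for the hidden layer

# Output layer: 1 neuron producing the final prediction
W_output = np.random.randn(1, 3)   # weights connecting hidden -> output
b_output = np.zeros(1)

# Forward pass: data flows input -> hidden -> output
hidden = sigmoid(W_hidden @ x + b_hidden)
prediction = sigmoid(W_output @ hidden + b_output)
print(prediction)  # a probability-like value between 0 and 1
```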
A Simple Analogy
Think of a neural network as a student learning to solve math problems:
- The student reads a question (input layer).
- They think through different possibilities (hidden layers processing the information).
- They write an answer (output layer).
- If the answer is incorrect, a teacher corrects them, and they adjust their approach for the next attempt (learning through feedback).
This learning process demonstrates how a neural network improves its predictions over time through training.
Training A Neural Network
Let’s take an example dataset of students’ study habits and their pass/fail results. We want to train our neural network to predict whether a student passes or fails an exam based on their study and sleep hours.

Here, 1 represents “Pass” and 0 represents “Fail” since neural networks process numerical values.
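As a rough sketch of that encoding in Python (only Student 1's row, which is used in the worked example below, is shown; the rest of the table isn't reproduced here):

```python
# Each row: [study hours, sleep hours]; labels: 1 = Pass, 0 = Fail
features = [
    [6, 8],  # Student 1 (used in the worked example below)
    # ... the remaining rows of the dataset table would go here
]
labels = [
    1,       # Student 1 passed
    # ...
]
```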
Let’s get familiar with some important terms before training the neural network.
a) Weights: Determine the importance of each input (e.g., studying may have a higher weight than sleeping, or vice versa).
b) Bias: Helps adjust predictions even when input values are zero.
c) Loss (Cost): Measures the difference between actual and predicted values.
d) Learning Rate (α): Determines how much the weights adjust during training.
e) Activation Function: Introduces non-linearity, allowing the network to learn complex patterns.
These concepts might seem complicated at first, but we will get a good grasp of them while training the neural network. The short snippet below puts them side by side.
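As a rough sketch in Python, using the starting values introduced later in this post and a squared-error loss (an illustrative assumption, since the blog doesn't name a specific loss function):

```python
import math

# Weights: importance of each input (starting values used later in this post)
w_study = 0.6
w_sleep = 0.4

# Bias: lets the output shift even when every input is zero
bias = -3.0

# Learning rate (α): how big each adjustment step is during training
learning_rate = 0.1

def sigmoid(z):
    # Activation function: adds non-linearity and squashes z into (0, 1)
    return 1 / (1 + math.exp(-z))

def loss(predicted, actual):
    # Loss (cost): how far the prediction is from the actual value
    # (squared error is just one common choice, assumed here)
    return (predicted - actual) ** 2

print(sigmoid(0.0))   # 0.5
print(loss(0.9, 1))   # ≈ 0.01
```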
How Does A Neural Network Learn?
It learns in two major stages:
a) Forward Propagation
b) Backward Propagation
Now, let’s walk through these two phases of learning.
a) Forward Propagation
This is where the neural network predicts an output based on the given inputs.
Initially, a neural network starts with random weights and makes guesses. It then adjusts its weights based on feedback to improve accuracy.
We need to assign initial values:
Weight for study hours (W₁) = 0.6
Weight for sleep hours (W₂) = 0.4
Bias (b) = -3
Learning rate (α) = 0.1

Prediction Calculation:
Now, we need to calculate the weighted sum, which is given by the formula:
z = Σ (xᵢ × wᵢ) + b
We can alternatively write this formula as:
z = ((x₁ × w₁) + (x₂ × w₂) + … + (xₙ × wₙ)) + b
Using Student 1’s data (6 study hours, 8 sleep hours, actual outcome = 1):
z = (0.6 × 6) + (0.4 × 8) - 3
z = 3.6 + 3.2 - 3 = 3.8
To make predictions, we pass the value through an activation function like sigmoid, which converts it into a probability between 0 and 1:
σ(3.8) = 1 / (1 + e⁻³.⁸) ≈ 0.978
Since 0.978 is close to 1, the neural network predicts that the student will pass.
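Here is the same forward pass as a short Python sketch, reproducing the numbers above:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Student 1's data and the initial parameters from above
study_hours, sleep_hours = 6, 8
w1, w2, bias = 0.6, 0.4, -3.0

# Weighted sum
z = (w1 * study_hours) + (w2 * sleep_hours) + bias
print(z)  # 3.8

# Activation: convert z into a probability between 0 and 1
predicted = sigmoid(z)
print(round(predicted, 3))  # ≈ 0.978
```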
b) Backward Propagation
Now, the network adjusts its parameters to improve future predictions. We adjust the weights and bias using the learning rate:
W₁ = W₁ - α × (Predicted - Actual) × Study Hours
W₁ = 0.6 - 0.1 × (0.978 - 1) × 6 ≈ 0.6132
W₂ = W₂ - α × (Predicted - Actual) × Sleep Hours
W₂ = 0.4 - 0.1 × (0.978 - 1) × 8 ≈ 0.4176
b = b - α × (Predicted - Actual)
b = -3 - 0.1 × (0.978 - 1) ≈ -2.9978
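And the same update step as a short Python sketch, transcribing the formulas above directly:

```python
# Parameters and prediction from the forward pass above
w1, w2, bias = 0.6, 0.4, -3.0
learning_rate = 0.1
study_hours, sleep_hours = 6, 8
predicted, actual = 0.978, 1

# Error signal: how far off the prediction was
error = predicted - actual  # -0.022

# Nudge each parameter in the direction that reduces the error
w1 = w1 - learning_rate * error * study_hours   # ≈ 0.6132
w2 = w2 - learning_rate * error * sleep_hours   # ≈ 0.4176
bias = bias - learning_rate * error             # ≈ -2.9978

print(w1, w2, bias)
```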

By repeating this process multiple times for all data points, the neural network gradually learns the optimal weights and makes more accurate predictions.
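Putting both phases together, a full training loop might look like the sketch below. Only Student 1's row is filled in (the rest of the dataset table isn't reproduced in this post), and the epoch count is an arbitrary illustrative choice.

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Dataset: [study hours, sleep hours] -> 1 = Pass, 0 = Fail
data = [
    ([6, 8], 1),  # Student 1
    # ... remaining rows of the table would go here
]

w1, w2, bias = 0.6, 0.4, -3.0
learning_rate = 0.1

for epoch in range(100):                # repeat the process many times
    for (study, sleep), actual in data:
        # Forward propagation: weighted sum + activation
        z = w1 * study + w2 * sleep + bias
        predicted = sigmoid(z)

        # Backward propagation: adjust weights and bias
        error = predicted - actual
        w1 -= learning_rate * error * study
        w2 -= learning_rate * error * sleep
        bias -= learning_rate * error

print(w1, w2, bias)
```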
CONCLUSION
Neural networks may seem complicated at first, but when broken down into simple steps, they become much easier to understand. By using concepts like forward propagation, backpropagation, and weight adjustments, neural networks learn just like students refining their knowledge over time. Getting comfortable with the underlying math can feel overwhelming at first. However, with consistent practice, you can master it.