🌳
AI Sprouts • Intermediate • ⏱️ 25 min read

Decision Trees: The Algorithm You Can Draw on Paper 🌳

Most machine learning algorithms are black boxes — you feed in data, something mathematical happens inside, and a prediction comes out. Decision trees are different. They are one of the few algorithms you can fully explain to a non-technical colleague, draw on a whiteboard, and still trust to make accurate predictions.


🎮 The 20 Questions Analogy

You've probably played 20 Questions: one person thinks of something, and others ask yes/no questions to narrow it down. "Is it alive? Is it bigger than a car? Does it live in water?" Each answer eliminates a huge swath of possibilities until the answer becomes obvious.

A decision tree works exactly like this. Given a new data point to classify, the tree asks a series of questions about its features, following the branches that match each answer, until it reaches a leaf — a final prediction.

[Figure: a decision tree for classifying animals: first split on 'has wings?', then 'lives in water?', leading to leaf nodes with animal names]
A decision tree asks a series of questions about features, narrowing down to a prediction at each leaf node.
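The tree in the figure can be written directly as nested if/else statements. Here is a minimal sketch: the two splits come from the figure, while the leaf labels are illustrative examples, not a trained model.

```python
def classify_animal(has_wings: bool, lives_in_water: bool) -> str:
    """Walk the figure's tree from root to leaf, one question per node.

    Splits ('has wings?', 'lives in water?') mirror the figure;
    the leaf labels below are illustrative, not learned from data.
    """
    if has_wings:              # root node: the first, most important question
        return "bird"          # leaf node: final prediction
    elif lives_in_water:       # internal node: second question
        return "fish"          # leaf node
    else:
        return "mammal"        # leaf node

# Each answer follows one branch until a leaf is reached.
print(classify_animal(has_wings=False, lives_in_water=True))  # → fish
```

Every decision tree, however large, is just a deeper version of this nested questioning.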

🌿 Anatomy of a Tree

Before we get into how trees learn, let's name the parts:

  • Root node — the very top question; the most important feature
  • Internal nodes — questions at each branch point
  • Branches — the paths taken based on yes/no (or value-range) answers
  • Leaf nodes — the endpoints; each holds a final prediction

A single data point travels from root to leaf, answering one question at each node, until it reaches a prediction.
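The parts named above map naturally onto a small data structure. This is a sketch with field names of my own choosing, not any particular library's API: a leaf carries a prediction, while an internal node carries a question (feature plus threshold) and two branches.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """One node of a decision tree."""
    feature: Optional[str] = None      # which feature the node's question tests
    threshold: float = 0.0             # split point for that feature
    left: Optional["Node"] = None      # branch taken when value <= threshold
    right: Optional["Node"] = None     # branch taken when value > threshold
    prediction: Optional[str] = None   # set only on leaf nodes

def predict(node: Node, sample: dict) -> str:
    """Travel from root to leaf, answering one question at each node."""
    while node.prediction is None:                 # stop once we hit a leaf
        if sample[node.feature] <= node.threshold:
            node = node.left
        else:
            node = node.right
    return node.prediction

# Root node asks the most important question; leaves hold final predictions.
# (The 'weight_kg' feature and labels here are made up for illustration.)
root = Node(feature="weight_kg", threshold=10.0,
            left=Node(prediction="cat"),
            right=Node(prediction="dog"))
print(predict(root, {"weight_kg": 3.0}))   # → cat
```

The `while` loop is the whole prediction algorithm: no matrix maths, just following branches.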


📐 How a Decision Tree Learns

The clever part: how does the algorithm decide which question to ask at each node? It tries every possible split on every feature and picks the one that best separates the data.

Information Gain and Gini Impurity

Two common measures of "best separation":

Gini impurity measures how mixed a group is. A perfectly pure node — all examples belong to one class — has a Gini impurity of 0. A completely mixed node has the maximum impurity. The algorithm prefers splits that produce the purest child nodes.

Information gain is similar: it measures how much a split reduces uncertainty (entropy) about the class label. Higher information gain = better split.

Both measures ask the same underlying question:

"After splitting on this feature, how much more certain am I about the class?"
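Both measures fit in a few lines of pure Python. The following is a sketch of the standard definitions, not any library's implementation:

```python
from collections import Counter
from math import log2

def gini(labels):
    """Gini impurity: 0 for a pure node, maximal when classes are mixed."""
    n = len(labels)
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

def entropy(labels):
    """Shannon entropy in bits: uncertainty about the class label."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(parent, left, right):
    """Reduction in entropy achieved by splitting parent into left/right."""
    n = len(parent)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - weighted

print(gini(["A", "A", "A", "A"]))   # pure node → 0.0
print(gini(["A", "A", "B", "B"]))   # fully mixed, two classes → 0.5
# A perfect split removes all uncertainty: gain of 1 bit.
print(information_gain(["A", "A", "B", "B"], ["A", "A"], ["B", "B"]))  # → 1.0
```

In practice the learner evaluates `information_gain` (or the weighted Gini of the children) for every candidate split and keeps the best one.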
🤯

The CART algorithm (Classification and Regression Trees), introduced in 1984 by Breiman, Friedman, Olshen, and Stone, is the foundation of most modern decision tree implementations. Despite being 40 years old, it remains one of the most widely used ML algorithms.


✂️ Overfitting and Pruning

Left unconstrained, a decision tree will grow until every training example has its own leaf — achieving 100% accuracy on training data but failing completely on new data. This is overfitting.

Imagine memorising every past exam question word-for-word instead of understanding the subject. You'd ace the past papers but fail the real exam.

Two main remedies:

  1. Pre-pruning (early stopping) — set limits during training: maximum depth, minimum samples per leaf, minimum information gain threshold. The tree stops growing when it hits these limits.

  2. Post-pruning — grow the full tree, then trim back branches that don't improve performance on a validation set.
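The pre-pruning limits above amount to a stopping test inside the tree-growing loop. A minimal sketch follows; the parameter names are illustrative (they resemble common library options, but this is not a specific API):

```python
def should_stop(depth, n_samples, best_gain,
                max_depth=5, min_samples_leaf=20, min_gain=0.01):
    """Pre-pruning: stop growing a branch when any limit is hit.

    The three limits mirror the list above: maximum depth, minimum
    samples per leaf, and a minimum information-gain threshold.
    (Names and default values are illustrative assumptions.)
    """
    if depth >= max_depth:             # tree is already deep enough
        return True
    if n_samples < min_samples_leaf:   # too few examples to split reliably
        return True
    if best_gain < min_gain:           # best split barely reduces uncertainty
        return True
    return False

# A deep, low-gain branch stops; a shallow, promising one keeps growing.
print(should_stop(depth=6, n_samples=100, best_gain=0.2))   # → True
print(should_stop(depth=2, n_samples=100, best_gain=0.2))   # → False
```

Post-pruning works in the opposite direction: grow first, then call a similar test on a held-out validation set to decide which branches to cut back.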

🤔
Think about it:

A decision tree with depth 1 (a single question) is called a "decision stump". It's extremely simple — almost certainly underfitting. A tree of depth 100 with one sample per leaf is overfitting. How would you decide where to stop?


🌲 From Trees to Forests

A single decision tree is powerful but brittle — small changes in training data can produce very different trees. The solution: grow hundreds of trees, each trained on a random subset of the data and features, then average their predictions.

This is a Random Forest — one of the most reliable and widely-used algorithms in all of machine learning. You'll cover it in depth in a later lesson. For now, remember: individual trees are interpretable, forests are robust.
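For classification, "average their predictions" usually means a majority vote across the trees. A minimal sketch, assuming each tree has already produced a label for the sample:

```python
from collections import Counter

def forest_predict(tree_predictions):
    """Combine many trees' class labels for one sample by majority vote."""
    return Counter(tree_predictions).most_common(1)[0][0]

# 100 trees vote; individual trees disagree, but the forest is robust.
votes = ["spam"] * 60 + ["ham"] * 40
print(forest_predict(votes))  # → spam
```

A few unstable trees voting the wrong way no longer change the answer, which is exactly why forests are more robust than any single tree.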


✅ Strengths and ⚠️ Weaknesses

| Strengths | Weaknesses | |---|---| | Fully interpretable — can be visualised | Prone to overfitting without pruning | | No need to normalise or scale features | Small data changes = very different trees | | Handles both numerical and categorical features | Biased towards features with more values | | Works without feature engineering | Not great at capturing linear relationships | | Fast to train and predict | Single trees often underperform ensembles |