🕸️
AI Sprouts • Beginner • ⏱️ 18 min read

Introduction to Neural Networks

Decision trees and KNN are powerful, but the technology behind today's most impressive AI - from ChatGPT to self-driving cars - is the neural network. Inspired by the human brain, neural networks can learn incredibly complex patterns that simpler algorithms cannot.

Let us peel back the layers and see how they work.

The Brain Analogy

Your brain contains roughly 86 billion neurons connected by trillions of synapses. When you learn something new, certain neurons fire together and the connections between them strengthen. This is often summarised as: neurons that fire together, wire together.

Artificial neural networks borrow this idea. They use artificial neurons (small mathematical functions) connected in a network. When the network practises on data, the connections that lead to correct answers get strengthened, and the ones that lead to wrong answers get weakened.

🤯

The first artificial neuron - the Perceptron - was invented in 1958 by Frank Rosenblatt. It could only learn simple patterns, but it laid the groundwork for everything we have today.

The Three Types of Layers

Every neural network has three types of layers:

1. Input Layer

This is where data enters the network. Each neuron in this layer receives one feature from the dataset. For a 28×28 pixel image, the input layer would have 784 neurons - one for each pixel.

2. Hidden Layers

These are the layers between input and output where the real learning happens. Each neuron takes inputs, processes them, and passes the result forward. A network can have one hidden layer or hundreds - the more layers, the "deeper" the network.

3. Output Layer

This layer produces the final answer. For a digit classifier (0–9), the output layer has 10 neurons, each representing the probability of a different digit.
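To get a concrete sense of scale, here is a minimal sketch (plain NumPy; the hidden-layer size of 128 is an arbitrary choice, not from the lesson) of how many adjustable numbers even a small digit classifier contains:

```python
import numpy as np

# Hypothetical layer sizes for a 28x28 digit classifier
input_size = 28 * 28   # 784 input neurons, one per pixel
hidden_size = 128      # an arbitrary choice for the hidden layer
output_size = 10       # one output neuron per digit 0-9

# One weight per connection, plus one bias per neuron
w1 = np.random.randn(input_size, hidden_size)
b1 = np.zeros(hidden_size)
w2 = np.random.randn(hidden_size, output_size)
b2 = np.zeros(output_size)

total_params = w1.size + b1.size + w2.size + b2.size
print(total_params)  # 101770 adjustable numbers in this small network
```

Even this toy network has over a hundred thousand parameters to tune - modern networks have billions.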

[Figure: A neural network diagram - input layer on the left, two hidden layers in the middle, and output layer on the right, with arrows connecting neurons between layers.]
A neural network processes data through layers - input, hidden, and output - to arrive at a prediction.
🧠 Quick Check

What is the role of hidden layers in a neural network?

Weights and Biases: The Volume Knobs

Every connection between two neurons has a weight - a number that controls how much influence one neuron has on the next. Think of weights as volume knobs: turning one up makes that connection louder; turning it down makes it quieter.


Each neuron also has a bias - a number that shifts the output up or down, like adjusting the baseline volume before any signal arrives.

When a neural network learns, it is really just adjusting thousands or millions of these weights and biases until it finds the combination that gives the best predictions.
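As an illustration (all values here are made up), a single artificial neuron just multiplies each input by its weight, sums the results, and shifts by the bias:

```python
# One artificial neuron with three inputs (illustrative values)
inputs = [0.5, 0.8, 0.2]
weights = [0.9, -0.3, 0.4]   # the "volume knobs" on each connection
bias = 0.1                   # shifts the baseline output

# Weighted sum plus bias
output = sum(x * w for x, w in zip(inputs, weights)) + bias
print(round(output, 2))  # 0.39
```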

🤔
Think about it:

Imagine you are mixing music and you have hundreds of volume knobs - one for each instrument and microphone. Getting the perfect mix means carefully adjusting every knob. That is what training a neural network is like, except with millions of knobs adjusted automatically.

How a Neural Network Learns

Learning happens in a cycle with four steps:

Step 1: Forward Pass

Data flows through the network from input to output. Each neuron multiplies its inputs by its weights, adds its bias, and passes the result through an activation function (which decides whether the neuron should "fire" or stay quiet). The network produces a prediction.

Step 2: Calculate the Error

The prediction is compared to the correct answer. The difference is the error (also called loss). A prediction of "7" when the answer is "3" produces a large error; a prediction at or near "3" produces a small one.

Step 3: Backpropagation

The error is sent backwards through the network. Each weight learns how much it contributed to the mistake. This is the mathematical magic of backpropagation - it figures out which knobs to turn and by how much.

Step 4: Update Weights

The weights and biases are adjusted slightly to reduce the error. Then the cycle repeats with the next piece of data.

💡

A neural network does not learn in one go. It repeats this cycle thousands or millions of times, gradually getting better with each pass - much like how you improve at a skill through practice.
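The four steps above can be sketched on the simplest possible "network" - a single neuron learning the rule y = 2x. This is a toy example of our own, not the lesson's digit classifier, but the cycle is the same:

```python
# Minimal sketch of the four-step learning cycle on one neuron
# learning y = 2*x (the data and learning rate are illustrative)
w, b = 0.0, 0.0          # start with untrained weight and bias
lr = 0.1                 # learning rate: how far each update moves

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

for epoch in range(200):
    for x, target in data:
        pred = w * x + b                 # Step 1: forward pass
        error = pred - target            # Step 2: calculate the error
        grad_w = error * x               # Step 3: backpropagation
        grad_b = error                   #   (how much each knob contributed)
        w -= lr * grad_w                 # Step 4: update weights
        b -= lr * grad_b

print(round(w, 2), round(b, 2))  # close to 2.0 and 0.0
```

A real network runs exactly this loop, just with millions of weights and gradients computed layer by layer.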

🧠 Quick Check

What does backpropagation do in a neural network?

Visual Walkthrough: Classifying a Handwritten Digit

Let us trace how a neural network classifies a handwritten "5" from the MNIST dataset:

  1. Input: The 28×28 pixel image is flattened into 784 numbers (pixel brightness values from 0 to 255). These enter the 784 input neurons.

  2. Hidden layers: The first hidden layer might detect simple edges and curves. The second hidden layer combines those into shapes like loops and strokes. Deeper layers recognise digit-like patterns.

  3. Output: The 10 output neurons produce probabilities. The network might output:

    • Digit 3: 5% confident
    • Digit 5: 89% confident
    • Digit 8: 4% confident
    • All others: less than 1%
  4. Decision: The network picks the digit with the highest probability - 5. Correct!

  5. If wrong: Backpropagation adjusts the weights so next time, the correct digit gets a higher score.
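The decision step amounts to picking the output neuron with the highest probability. A quick sketch, using made-up numbers matching the walkthrough above:

```python
# Output-layer probabilities for digits 0-9 (illustrative numbers)
probs = [0.002, 0.001, 0.001, 0.05, 0.001, 0.89, 0.001, 0.002, 0.04, 0.012]

# The network's decision is the digit with the highest probability
prediction = max(range(10), key=lambda d: probs[d])
print(prediction)  # 5
```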

🤯

Modern neural networks can classify handwritten digits with over 99.7% accuracy - better than most humans. The MNIST dataset has become so "easy" for AI that researchers now use harder benchmarks to test new models.

🤔
Think about it:

When you look at a handwritten "5", your brain does not analyse individual pixels. You recognise the overall shape instantly. Neural networks learn to do something similar - but they build that understanding one layer at a time, starting from pixels and working up to shapes.

Activation Functions: The Gatekeepers

Not every signal should pass through a neuron at full strength. Activation functions act as gatekeepers that decide whether and how much a neuron should fire.

The most common activation function today is ReLU (Rectified Linear Unit). It has a simple rule: if the input is positive, let it through unchanged; if negative, output zero. This simplicity makes it fast to compute while still allowing the network to learn complex patterns.
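ReLU's rule really is one line of code:

```python
def relu(x):
    """Rectified Linear Unit: pass positives through unchanged, zero out negatives."""
    return max(0.0, x)

print(relu(2.5), relu(-1.3), relu(0.0))  # 2.5 0.0 0.0
```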

Without activation functions, no matter how many layers you stack, the network could only learn simple linear relationships - like drawing straight lines through data. Activation functions give neural networks the ability to learn curves, boundaries, and intricate patterns.

Why "Deep" Learning?

When a neural network has many hidden layers, it is called a deep neural network, and training it is called deep learning. Depth allows the network to learn hierarchies of features:

  • Layer 1: Detects edges and gradients.
  • Layer 2: Combines edges into textures and simple shapes.
  • Layer 3: Recognises parts of objects (eyes, wheels, letters).
  • Layer 4+: Identifies whole objects and scenes.

This layered approach is why deep learning excels at complex tasks like image recognition, language understanding, and game playing.

🧠 Quick Check

Why are neural networks with many hidden layers called 'deep' learning?

Key Takeaways

  • Neural networks are inspired by the brain's neurons and synapses.
  • They have three layer types: input, hidden, and output.
  • Weights and biases are the adjustable knobs that control learning.
  • Learning follows a cycle: forward pass → error → backpropagation → update.
  • Deep learning uses many hidden layers to learn complex patterns.

In the next lesson, we will zoom in on the training process - how you actually teach a neural network to get smarter over time.