🏗️
AI Craft • Advanced ⏱️ 20 min read

System Design Fundamentals

Why System Design Matters

Every large-scale application is assembled from the same fundamental building blocks. Interviewers don't expect you to invent new technology - they want to see you select and combine the right components for a given problem. This lesson covers the toolkit you'll reach for in every design interview.

Client-Server Architecture

At its core, the web follows a simple model: clients send requests, servers process them, and responses flow back. But modern systems add layers between these two endpoints to handle scale, reliability, and performance.

A typical request might traverse: client → CDN → load balancer → API gateway → application server → cache → database. Each layer solves a specific problem, and understanding when each layer is needed separates strong candidates from average ones.

💡

In interviews, always start by sketching the high-level client-server flow before diving into specifics. It shows structured thinking.

Load Balancers: Distributing Traffic

A single server has finite capacity. Load balancers distribute incoming requests across multiple servers using strategies like:

  • Round-robin - requests cycle through servers sequentially
  • Least connections - routes to the server handling the fewest active requests
  • Consistent hashing - maps requests to specific servers based on a key (useful for caching)

Load balancers also perform health checks, automatically removing unhealthy servers from the pool. In cloud environments, services like AWS ALB or Azure Front Door handle this transparently.
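The three strategies above are short enough to sketch directly. The snippet below is a toy illustration, not production load balancing - the server names, replica count, and hash choice are all made up for the example:

```python
import bisect
import hashlib
import itertools

class RoundRobinBalancer:
    """Cycle through servers in order, one request at a time."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self):
        return next(self._cycle)

class LeastConnectionsBalancer:
    """Route to whichever server has the fewest active requests."""
    def __init__(self, servers):
        self.active = {s: 0 for s in servers}

    def pick(self):
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server):
        self.active[server] -= 1

class ConsistentHashRing:
    """Map each key to a fixed point on a hash ring, so adding or
    removing a server only remaps a small fraction of keys."""
    def __init__(self, servers, replicas=100):
        # Virtual nodes (replicas) smooth out the key distribution.
        self.ring = sorted(
            (self._hash(f"{s}#{i}"), s)
            for s in servers
            for i in range(replicas)
        )
        self.points = [h for h, _ in self.ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def pick(self, key):
        # First ring point clockwise of the key's hash, wrapping around.
        idx = bisect.bisect(self.points, self._hash(key)) % len(self.points)
        return self.ring[idx][1]

rr = RoundRobinBalancer(["app1", "app2", "app3"])
print([rr.pick() for _ in range(4)])   # ['app1', 'app2', 'app3', 'app1']

ring = ConsistentHashRing(["app1", "app2", "app3"])
# The same key always lands on the same (cache-warm) server.
print(ring.pick("user:42") == ring.pick("user:42"))  # True
```

Note how consistent hashing gives you the sticky routing asked about in the quick check below: requests for the same key always reach the same server.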

🧠 Quick check

A social media feed service receives 10x more reads than writes. Which load balancing approach best ensures users consistently hit the same cache-warm server?

Caching: Speed at Every Layer

Caching stores frequently accessed data closer to the consumer. It exists at multiple levels:

| Layer | Example | Latency |
|-------|---------|---------|
| Browser cache | Static assets, API responses | ~0ms |
| CDN | Images, CSS, JS files | ~10-50ms |
| Application cache | Redis, Memcached | ~1-5ms |
| Database cache | Query result cache | ~5-10ms |

Cache invalidation is famously one of the hardest problems in computing. Common strategies include TTL (time-to-live, entries expire after a fixed duration), write-through (update cache on every write), and cache-aside (application manages cache explicitly).
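The cache-aside pattern combined with a TTL can be sketched in a few lines of Python. This is a toy illustration - the `fake_db` loader and key names are made up, and the in-process dict stands in for a real cache like Redis or Memcached:

```python
import time

class CacheAside:
    """Cache-aside with TTL: the application checks the cache first
    and populates it on a miss; entries expire after ttl_seconds."""
    def __init__(self, load_from_db, ttl_seconds=60):
        self.load_from_db = load_from_db   # fallback loader (e.g. a DB query)
        self.ttl = ttl_seconds
        self.store = {}                    # key -> (value, expires_at)

    def get(self, key):
        entry = self.store.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]                # cache hit
        value = self.load_from_db(key)     # cache miss: go to the source
        self.store[key] = (value, time.monotonic() + self.ttl)
        return value

    def invalidate(self, key):
        self.store.pop(key, None)          # call after writes to avoid stale reads

db_calls = []
def fake_db(key):
    db_calls.append(key)                   # record every "database" hit
    return f"row-for-{key}"

cache = CacheAside(fake_db, ttl_seconds=60)
cache.get("user:1")   # miss -> hits the database
cache.get("user:1")   # hit  -> served from cache
print(db_calls)       # ['user:1'] - only one DB call for two reads
```

The `invalidate` call is where the hard part hides: every code path that writes the underlying data must remember to evict the cached copy, or readers see stale values until the TTL expires.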
🤯

Phil Karlton's famous quote - "There are only two hard things in Computer Science: cache invalidation and naming things" - is so widely cited that it's practically a rite of passage in system design interviews.

Databases: SQL vs NoSQL

Choosing the right database is one of the most impactful design decisions:

SQL databases (PostgreSQL, MySQL) offer ACID transactions, strong consistency, and structured schemas. They excel when data has clear relationships and you need complex queries with joins.

NoSQL databases come in several flavours:

  • Document stores (MongoDB) - flexible schemas, great for varied data structures
  • Key-value stores (DynamoDB, Redis) - blazing fast lookups by key
  • Wide-column stores (Cassandra) - massive write throughput, distributed by design
  • Graph databases (Neo4j) - relationship-heavy data like social networks
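To make the schema-flexibility point concrete, the sketch below contrasts a rigid SQL table (using Python's built-in `sqlite3`) with document-style records where each user type carries different fields. The data and field names are invented for the example:

```python
import sqlite3

# SQL: a rigid schema - every row must fit the declared columns,
# and adding a field means a schema migration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada', 'ada@example.com')")

# Document-style: each record is self-describing, so different user
# types can carry different fields with no migration.
documents = {
    1: {"name": "Ada", "email": "ada@example.com", "type": "admin",
        "permissions": ["read", "write"]},
    2: {"name": "Lin", "type": "guest", "invited_by": 1},  # no email field
}
admins = [d["name"] for d in documents.values() if d.get("type") == "admin"]
print(admins)  # ['Ada']
```

This is exactly the trade-off in the quick check below: frequently changing, semi-structured profiles fit a document store, while stable relational data with joins fits SQL.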
🧠 Quick check

You're designing a system that stores user profiles with frequently changing, semi-structured data (different fields per user type). Which database type is the strongest fit?

Message Queues: Async Processing

Not every operation needs an immediate response. Message queues (RabbitMQ, Apache Kafka, AWS SQS) decouple producers from consumers, enabling:

  • Asynchronous processing - send an email after signup without blocking the response
  • Load levelling - absorb traffic spikes by buffering requests
  • Fault tolerance - if a consumer crashes, messages wait in the queue

Kafka deserves special mention: it's not just a queue but a distributed event log, enabling event-driven architectures where multiple consumers can independently process the same stream of events.
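The producer/consumer decoupling can be sketched with Python's in-process `queue.Queue` standing in for a real broker like RabbitMQ or SQS. The job shape and worker logic here are illustrative only:

```python
import queue
import threading

jobs = queue.Queue()          # stands in for RabbitMQ / SQS / Kafka
sent_emails = []

def producer(user_id):
    # The signup handler enqueues and returns immediately;
    # the email is sent later, off the request path.
    jobs.put({"type": "welcome_email", "user": user_id})

def consumer():
    while True:
        job = jobs.get()
        if job is None:       # shutdown sentinel
            break
        sent_emails.append(job["user"])   # pretend to send the email
        jobs.task_done()

worker = threading.Thread(target=consumer)
worker.start()

for uid in ("u1", "u2", "u3"):
    producer(uid)             # a traffic spike: all three buffered instantly

jobs.put(None)                # tell the worker to stop
worker.join()
print(sent_emails)            # ['u1', 'u2', 'u3']
```

If the worker crashes mid-run, the remaining jobs simply wait in the queue - that buffering is the load-levelling and fault-tolerance benefit in one mechanism.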

[Figure: architecture diagram showing client, load balancer, application servers, cache layer, database, and message queue.]
The fundamental building blocks of a scalable system - each layer addresses a specific concern.

Microservices vs Monolith

| Aspect | Monolith | Microservices |
|--------|----------|---------------|
| Deployment | Single unit | Independent services |
| Scaling | Scale everything together | Scale individual services |
| Complexity | Simpler to start | Operational overhead |
| Data | Shared database | Database per service |

A common rule of thumb: start with a monolith and extract microservices only when needed. Most interviewers appreciate candidates who acknowledge that microservices introduce distributed-system complexity (network failures, data consistency, deployment orchestration) and aren't always the right choice.

🤔
Think about it:

Netflix famously migrated from a monolith to microservices over several years. But Shopify - handling billions in transactions - still runs a modular monolith. What factors might make one approach better than the other for a given company?

CAP Theorem Simplified

The CAP theorem states that a distributed system can guarantee only two of three properties simultaneously:

  • Consistency - every read returns the most recent write
  • Availability - every request receives a response
  • Partition tolerance - the system continues operating despite network failures

Since network partitions are inevitable in distributed systems, the real choice is between CP (consistency over availability) and AP (availability over consistency).

  • CP systems (e.g., HBase, MongoDB with majority reads) - return errors rather than stale data
  • AP systems (e.g., Cassandra, DynamoDB) - always respond, but data might be slightly stale
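One concrete mechanism CP systems use is quorum replication: with N replicas, requiring W acknowledgements per write and reading from R replicas such that R + W > N guarantees every read quorum overlaps at least one replica holding the latest write. The toy model below illustrates that overlap - the replica layout and versioning scheme are invented for the example:

```python
class QuorumStore:
    """Toy quorum replication: values are versioned, and a read
    returns the highest-versioned value seen among R replicas."""
    def __init__(self, n=3, w=2, r=2):
        assert r + w > n, "R + W > N is required for strong consistency"
        self.n, self.w, self.r = n, w, r
        self.replicas = [dict() for _ in range(n)]  # each: key -> (version, value)
        self.version = 0

    def write(self, key, value):
        self.version += 1
        # Ack after W replicas accept; the rest may lag (simulated here
        # by only updating the first W replicas).
        for replica in self.replicas[: self.w]:
            replica[key] = (self.version, value)

    def read(self, key):
        # Query R replicas and pick the freshest version seen. Because
        # R + W > N, at least one queried replica has the latest write.
        answers = [rep.get(key, (0, None)) for rep in self.replicas[-self.r:]]
        return max(answers)[1]

store = QuorumStore(n=3, w=2, r=2)
store.write("balance", 100)
store.write("balance", 250)
print(store.read("balance"))  # 250 - the read quorum overlaps a fresh replica
```

Tuning W and R is how systems like Cassandra let you slide between the CP and AP ends of the spectrum per query.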
🧠 Quick check

You're designing a banking transaction system where incorrect balances could cause financial loss. Which CAP trade-off should you prioritise?

The Interview Framework

When you walk into a system design interview, follow this structure:

  1. Requirements - clarify functional and non-functional requirements (5 min)
  2. API design - define the key endpoints or interfaces (5 min)
  3. Data model - choose databases and define schemas (5 min)
  4. High-level architecture - sketch the system using today's building blocks (10 min)
  5. Deep dives - scale bottlenecks, handle failures, optimise (15 min)
🤔
Think about it:

Think about a system you use daily - Instagram, Uber, or Spotify. Can you identify which building blocks from this lesson it likely uses? Where would the load balancer sit? What would be cached? Which database type fits its data model?

Key Takeaways

  • Every scalable system uses the same core building blocks - learn them once, apply them everywhere
  • Load balancers, caches, and message queues each solve distinct scaling challenges
  • Database choice (SQL vs NoSQL) depends on data shape, consistency needs, and access patterns
  • The CAP theorem forces trade-offs - know which side your system should favour
  • Follow the structured interview framework: requirements → API → data → architecture → deep dives

Next up: we'll apply these fundamentals to design a URL shortener from scratch.