AI Sprout • Intermediate • ⏱️ 16 min read

Embeddings and Vector Databases

Embeddings - How AI Understands Meaning

After tokenisation, each token is just a number - an index in a vocabulary. But index 4,821 tells the model nothing about meaning. How does AI know that "king" and "queen" are related, or that "bank" can mean a riverbank or a financial institution? The answer is embeddings.

The Problem with One-Hot Encoding

The naive approach represents each word as a vector with one 1 and thousands of 0s. "Cat" might be [0, 0, 1, 0, ..., 0] and "dog" [0, 0, 0, 1, ..., 0].

This has two fatal flaws:

  • No similarity: "Cat" and "dog" are equally distant from each other as "cat" and "democracy." The encoding captures zero semantic information.
  • Massive size: With a 50,000-word vocabulary, every word needs a 50,000-dimensional vector. Wildly inefficient.
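
The "no similarity" flaw is easy to demonstrate. In this minimal sketch with a hypothetical five-word vocabulary, every pair of distinct one-hot vectors is orthogonal, so cosine similarity is 0 regardless of how related the words actually are:

```python
import numpy as np

# One-hot vectors for a toy 5-word vocabulary (illustrative only).
vocab = ["cat", "dog", "fish", "democracy", "table"]
one_hot = {w: np.eye(len(vocab))[i] for i, w in enumerate(vocab)}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Distinct one-hot vectors are always orthogonal: similarity is 0,
# whether the words are "cat"/"dog" or "cat"/"democracy".
print(cosine(one_hot["cat"], one_hot["dog"]))        # 0.0
print(cosine(one_hot["cat"], one_hot["democracy"]))  # 0.0
```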

Word Embeddings - Meaning as Geometry

An embedding maps each token to a dense vector of, say, 256 or 768 dimensions. Unlike one-hot vectors, these dimensions are learned during training and encode meaning.

Words used in similar contexts end up close together in this space. "Puppy" lands near "kitten." "London" lands near "Paris." The geometry of the space is the meaning.

[Figure: a 2D projection of word embeddings showing clusters — animals (cat, dog, fish) grouped together, cities (London, Paris, Tokyo) grouped together, and the famous king–queen analogy drawn as vector arithmetic.]
In embedding space, meaning becomes geometry. Similar concepts cluster together.

Word2Vec - King − Man + Woman = Queen

The 2013 Word2Vec paper showed something remarkable. Trained on large text corpora, the learned vectors exhibit arithmetic relationships:

vector("king") − vector("man") + vector("woman") ≈ vector("queen")

The direction from "man" to "woman" captures the concept of gender. Adding it to "king" moves to "queen." This is not programmed - it emerges from patterns in language.

Other examples: Paris − France + Italy ≈ Rome, bigger − big + small ≈ smaller.
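The analogy procedure is simple: do the vector arithmetic, then find the nearest word to the result. Here is a sketch using hand-crafted 2-D vectors deliberately constructed so the analogy holds exactly; real Word2Vec vectors are learned and satisfy it only approximately:

```python
import numpy as np

# Toy 2-D "embeddings" built so the analogy works exactly (illustrative).
emb = {
    "man":   np.array([1.0, 0.0]),
    "woman": np.array([1.0, 1.0]),
    "king":  np.array([2.0, 0.0]),
    "queen": np.array([2.0, 1.0]),
}

def analogy(a, b, c):
    """Return the word nearest to vector(a) - vector(b) + vector(c),
    excluding the three query words (standard Word2Vec evaluation)."""
    target = emb[a] - emb[b] + emb[c]
    candidates = [w for w in emb if w not in (a, b, c)]
    return min(candidates, key=lambda w: np.linalg.norm(emb[w] - target))

print(analogy("king", "man", "woman"))  # queen
```

Excluding the query words matters: with real embeddings, the raw result of the arithmetic is often closest to "king" itself.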

🤯

Word2Vec was created by Tomáš Mikolov at Google in 2013. The paper has over 40,000 citations and is considered one of the most influential NLP papers ever published. It demonstrated that simple neural networks trained on raw text could learn astonishing semantic relationships.

Embedding Dimensions

Modern models use different embedding sizes:

| Model | Embedding dimensions |
|-------|---------------------|
| Word2Vec | 100–300 |
| BERT | 768 |
| GPT-3 | 12,288 |
| OpenAI text-embedding-3-large | 3,072 |

More dimensions capture finer distinctions but require more memory and compute. Think of it like describing a person: 3 dimensions (height, weight, age) give a rough sketch; 768 dimensions paint a detailed portrait.

🧠 Quiz

What does the famous equation 'king − man + woman ≈ queen' demonstrate?

From Words to Sentences

Word embeddings represent individual words, but we often need to compare entire sentences or documents. Sentence embeddings (from models like Sentence-BERT or OpenAI's embedding API) compress a whole passage into a single vector.

"How do I reset my password?" and "I forgot my login credentials" would have very similar sentence embeddings, even though they share almost no words. The embedding captures intent, not just vocabulary.

Measuring Similarity - Cosine Similarity

To compare two embeddings, we use cosine similarity - the cosine of the angle between two vectors. It ranges from −1 (opposite) to +1 (identical direction).

  • "Happy" and "joyful": cosine ≈ 0.85 (very similar).
  • "Happy" and "table": cosine ≈ 0.10 (unrelated).
  • "Love" and "hate": cosine might be ≈ 0.40 (related but opposite).

Cosine similarity ignores vector magnitude, focusing purely on direction - which is where meaning lives.
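Cosine similarity is a one-liner. This sketch shows the two properties just described: the range runs from −1 (opposite direction) to +1 (same direction), and scaling a vector does not change the result:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

a = np.array([1.0, 2.0, 3.0])
print(cosine_similarity(a, a))      # 1.0  (identical direction)
print(cosine_similarity(a, -a))     # -1.0 (opposite direction)
print(cosine_similarity(a, 2 * a))  # 1.0  (magnitude is ignored)
```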

🤔
Think about it:

"Love" and "hate" are opposites in meaning but might have moderate cosine similarity because they appear in similar contexts (emotions, relationships). What does this tell us about the limitations of embeddings trained purely on word co-occurrence?

Vector Databases - Search by Meaning

A vector database stores millions of embeddings and retrieves the most similar ones blazingly fast. Instead of keyword matching ("find documents containing 'machine learning'"), you search by meaning ("find documents about AI education").

Popular vector databases include:

  • Pinecone - fully managed, scales effortlessly.
  • Weaviate - open-source with hybrid search (vectors + keywords).
  • ChromaDB - lightweight, great for prototyping.
  • pgvector - adds vector search to PostgreSQL.

These databases use algorithms like HNSW (Hierarchical Navigable Small World) to search billions of vectors in milliseconds.
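Conceptually, a vector database answers the same query as this brute-force nearest-neighbour search over hypothetical 2-D document vectors; indexes like HNSW exist to get the same answer in sub-linear time instead of scanning every vector:

```python
import numpy as np

def top_k(query, vectors, k=2):
    """Brute-force nearest-neighbour search by cosine similarity.
    Vector databases approximate this with indexes such as HNSW."""
    norms = np.linalg.norm(vectors, axis=1) * np.linalg.norm(query)
    sims = vectors @ query / norms
    return np.argsort(-sims)[:k]  # indices of the k most similar rows

docs = np.array([
    [0.9, 0.1],  # doc 0: mostly topic A
    [0.1, 0.9],  # doc 1: mostly topic B
    [0.8, 0.2],  # doc 2: mostly topic A
])
print(top_k(np.array([1.0, 0.0]), docs))  # [0 2]
```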

🧠 Quiz

What advantage does vector search have over traditional keyword search?

RAG - Retrieval-Augmented Generation

RAG is one of the most important patterns in modern AI. It combines vector search with language models:

  1. Embed your documents and store them in a vector database.
  2. When a user asks a question, embed the query.
  3. Retrieve the most similar document chunks via vector search.
  4. Feed those chunks to the language model as context.
  5. The model generates an answer grounded in your data.

RAG lets language models answer questions about your specific data - company documents, product catalogues, research papers - without retraining. It dramatically reduces hallucination because the model has real sources to reference.
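The retrieval half of the loop (steps 2–4) can be sketched in a few lines. The `embed` function below is a toy bag-of-words stand-in for a real embedding model, and the vocabulary and documents are invented for illustration; in production you would call an embedding API and a vector database instead:

```python
import numpy as np

# Toy stand-in for an embedding model: word counts over a fixed
# vocabulary (hypothetical). A real system would call an embedding API.
VOCAB = ["password", "reset", "refund", "shipping", "login"]

def embed(text):
    words = text.lower().split()
    return np.array([words.count(w) for w in VOCAB], dtype=float)

docs = [
    "reset your password from the login page",
    "refund requests take five business days",
]
doc_vecs = np.array([embed(d) for d in docs])  # step 1: embed and store

def retrieve(query):
    q = embed(query)            # step 2: embed the query
    sims = doc_vecs @ q         # step 3: score every stored chunk
    return docs[int(np.argmax(sims))]

# Step 4: feed the retrieved chunk to the model as context.
question = "how do I reset my password"
context = retrieve(question)
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
print(context)
```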

🧠 Quiz

In a RAG system, what role does the vector database play?

Practical Applications

Embeddings power countless real-world systems:

  • Semantic search - find relevant results regardless of exact wording.
  • Recommendations - "users who liked this also liked..." via embedding similarity.
  • Clustering - group similar support tickets, reviews, or documents automatically.
  • Anomaly detection - spot outliers that are far from any cluster.
  • Duplicate detection - find near-identical content across large corpora.
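Several of these applications reduce to the same primitive: compare all pairs and threshold the similarity. As a sketch, duplicate detection over hypothetical pre-computed embeddings looks like this:

```python
import numpy as np

def near_duplicates(vectors, threshold=0.95):
    """Return index pairs whose cosine similarity exceeds the threshold."""
    normed = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = normed @ normed.T  # pairwise cosine similarity matrix
    n = len(vectors)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if sims[i, j] > threshold]

vecs = np.array([
    [1.00, 0.00, 0.1],
    [0.99, 0.01, 0.1],  # near-duplicate of row 0
    [0.00, 1.00, 0.0],
])
print(near_duplicates(vecs))  # [(0, 1)]
```

The same thresholded-similarity idea, with clustering instead of pairwise checks, powers the ticket-grouping and anomaly-detection use cases above.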
🤯

Spotify uses audio embeddings to recommend songs. Each track is embedded based on its acoustic features, and recommendations come from finding nearby vectors - songs that "sound similar" in embedding space.

🤔
Think about it:

If you embedded every product in an online shop, how could you build a recommendation system that says "customers who viewed this item might also like..." without relying on purchase history?

Key Takeaways

  • Embeddings are dense vector representations where meaning becomes geometry.
  • Similar concepts cluster together; relationships appear as directions.
  • Cosine similarity measures how close two meanings are.
  • Vector databases enable search by meaning at massive scale.
  • RAG combines vector search with language models to answer questions from your own data.

📚 Further Reading

  • Jay Alammar - The Illustrated Word2Vec - Visual, intuitive walkthrough of how word embeddings work
  • Pinecone Learning Centre - What Are Embeddings? - Practical guide to embeddings and vector search
  • OpenAI Embeddings Guide - How to generate and use embeddings with the OpenAI API
Lesson 9 of 16
← Tokenisation | Evaluation Metrics →