AI Educademy
⚖️

AI Sprout • Beginner • ⏱️ 15 min read

AI Ethics and Bias

Throughout this programme, we have explored how data, algorithms, and neural networks come together to create intelligent systems. But intelligence without responsibility can cause real harm. In this final lesson, we examine the human side of AI - the biases it inherits, the ethical dilemmas it raises, and what we can all do about it.

What Is AI Bias?

AI bias occurs when a system produces results that are systematically unfair to certain groups of people. The AI is not deliberately prejudiced - it simply reflects the patterns in its training data and the assumptions of its designers.

Real-World Examples

Amazon's Hiring Tool (2018)

Amazon built an AI to screen job applications. It was trained on CVs submitted over the previous ten years - a period when the tech industry was overwhelmingly male. The AI learned to penalise CVs that contained the word "women's" (as in "women's chess club") and downgraded graduates from all-women's universities. Amazon scrapped the tool.

Facial Recognition Failures

Research by Joy Buolamwini at MIT found that commercial facial recognition systems had error rates of up to 34.7% for darker-skinned women, compared to just 0.8% for lighter-skinned men. The training data simply did not represent all faces equally.

[Image: A balance scale with a dataset on one side and diverse human figures on the other, illustrating the need for balanced, representative data in AI.]
Fair AI requires balanced data - when the scales tip, so do the outcomes.
🧠 Quiz

Why did Amazon's AI hiring tool discriminate against women?

Where Does Bias Come From?

Bias can enter an AI system at every stage:

  • Data collection - If the data over-represents one group, the model learns to favour that group.
  • Labelling - Human annotators bring their own unconscious biases when tagging data.
  • Feature selection - Choosing which variables to include (or exclude) can embed assumptions.
  • Evaluation - If we only test on certain demographics, we miss failures on others.
💡

AI does not create bias from thin air. It amplifies the biases already present in human decisions, historical records, and societal structures. The data is a mirror - and sometimes we do not like what it reflects.
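The first stage above, skewed data collection, can often be spotted before any model is trained. Below is a minimal sketch of such a check: the dataset and group names are invented for illustration, and the 80% threshold is the "four-fifths rule" used in US employment-discrimination guidance as a rough flag for adverse impact.

```python
# Illustrative check for data-collection bias. Each record is
# (group, hired) in a fictional historical hiring dataset.
records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Return the fraction of positive outcomes for each group."""
    totals, positives = {}, {}
    for group, outcome in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(outcome)
    return {g: positives[g] / totals[g] for g in totals}

rates = selection_rates(records)
print(rates)  # → {'group_a': 0.75, 'group_b': 0.25}

# Four-fifths rule: a selection rate below 80% of the highest
# group's rate is a warning sign of adverse impact.
highest = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * highest:
        print(f"{group}: possible adverse impact ({rate:.2f} vs {highest:.2f})")
```

A model trained on this dataset would inherit the 0.75 vs 0.25 gap as "normal" - which is exactly how Amazon's tool learned to penalise women's CVs.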

Lesson 5 of 16

Deepfakes and Misinformation

AI can now generate realistic fake videos, images, and audio - known as deepfakes. While the technology has creative uses (film effects, accessibility tools), it also poses serious risks:

  • Political manipulation - Fabricated videos of public figures saying things they never said.
  • Fraud - Voice cloning used to impersonate executives and authorise fraudulent transactions.
  • Harassment - Non-consensual fake imagery targeting private individuals.

Detecting deepfakes is becoming an arms race. As generation tools improve, so must detection tools - but they are always playing catch-up.

🤯

In 2019, criminals used AI-generated voice cloning to impersonate a CEO and trick an employee into transferring €220,000. The voice was so convincing that the employee never suspected it was fake.

🤔
Think about it:

If you saw a video of a world leader declaring war, how would you verify whether it was real? What tools or sources would you trust? In a world of deepfakes, critical thinking about media becomes a survival skill.

Job Displacement and Economic Impact

AI automates tasks that were previously done by humans. This creates both opportunities and challenges:

Tasks at risk of automation:

  • Data entry and processing
  • Basic customer service (chatbots)
  • Routine legal document review
  • Simple medical image screening

Tasks less likely to be automated:

  • Creative problem-solving
  • Complex human relationships (therapy, teaching, leadership)
  • Work requiring physical dexterity in unpredictable environments
  • Ethical judgement and nuanced decision-making

The key distinction is between automating tasks and replacing jobs. Most jobs are collections of many tasks - AI tends to automate some tasks within a role rather than eliminating the role entirely.

🧠 Quiz

Which type of work is LEAST likely to be fully automated by AI?

Privacy Concerns

AI systems are hungry for data, and that hunger raises significant privacy questions:

  • Surveillance - Facial recognition in public spaces enables mass tracking without consent.
  • Data collection - Voice assistants, fitness trackers, and social media constantly gather personal information.
  • Profiling - AI can infer sensitive information (health conditions, political views, sexuality) from seemingly innocuous data patterns.

The tension is real: more data generally makes AI better, but collecting more data can violate individual privacy.

🤯

Researchers demonstrated that AI could predict a person's sexual orientation from a photo with higher accuracy than humans - raising profound questions about privacy, consent, and the limits of what AI should be allowed to infer.

Responsible AI Principles

Leading organisations have converged on a set of principles for building AI responsibly:

Fairness

AI should treat all people equitably. Models should be tested across different demographics to ensure no group is disadvantaged.
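Testing across demographics means slicing the evaluation, not just reporting one overall score. Here is a minimal sketch of that pattern; the stand-in model, the feature, and the test examples are all invented for illustration.

```python
# Illustrative per-group evaluation of a classifier.

def predict(sample):
    # Stand-in for a real trained model: predicts "positive"
    # whenever the single feature exceeds 0.5.
    return sample["feature"] > 0.5

test_set = [
    {"feature": 0.9, "label": True,  "group": "group_a"},
    {"feature": 0.2, "label": False, "group": "group_a"},
    {"feature": 0.7, "label": True,  "group": "group_b"},
    {"feature": 0.4, "label": True,  "group": "group_b"},  # missed by the model
]

def error_rate_by_group(test_set, predict):
    """Accuracy gaps only become visible when results are sliced by group."""
    errors, totals = {}, {}
    for sample in test_set:
        g = sample["group"]
        totals[g] = totals.get(g, 0) + 1
        errors[g] = errors.get(g, 0) + int(predict(sample) != sample["label"])
    return {g: errors[g] / totals[g] for g in totals}

print(error_rate_by_group(test_set, predict))
# → {'group_a': 0.0, 'group_b': 0.5}
```

An aggregate accuracy of 75% would hide the fact that every error falls on one group - the same pattern the facial recognition studies uncovered.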

Transparency

People affected by AI decisions deserve to understand how those decisions are made. Black-box models should be accompanied by explanations.

Accountability

There must be clear ownership when AI causes harm. "The algorithm did it" is not an acceptable defence.

Privacy

AI systems must respect data protection laws and individual rights. Data collection should be minimised to what is truly necessary.

Safety

AI should be tested rigorously before deployment, especially in high-stakes domains like healthcare, criminal justice, and finance.

🤔
Think about it:

If an AI system denies someone a loan, who is responsible - the developer who built the model, the bank that deployed it, or the data that trained it? Accountability in AI is one of the hardest questions we face.

🧠 Quiz

Which responsible AI principle states that people should understand how AI decisions are made?

What You Can Do as a Learner

You do not need to be an AI engineer to make a difference. Here is how you can contribute to more responsible AI:

  • Ask questions - When you encounter an AI system, ask: whose data trained this? Who benefits and who might be harmed?
  • Stay informed - Follow developments in AI ethics. The landscape changes rapidly.
  • Demand transparency - Support organisations and products that explain how their AI works.
  • Diversify perspectives - If you go on to build AI, ensure your teams and your data represent the diversity of the people the system will serve.
  • Think critically - Not every AI application is a good idea, even if it is technically possible.
💡

Technology is not neutral. The choices made by the people who build, deploy, and regulate AI shape the world we all live in. Your awareness and your voice matter.

Key Takeaways

  • AI bias comes from biased data, not from the algorithm itself being prejudiced.
  • Deepfakes pose serious risks to trust, security, and privacy.
  • AI automates tasks rather than replacing entire jobs - but the impact is still significant.
  • Privacy is at risk when AI systems collect and infer personal information at scale.
  • Responsible AI rests on fairness, transparency, accountability, privacy, and safety.
  • Everyone has a role to play in shaping how AI is built and used.

Congratulations - you have completed Level 2: Foundations! You now understand how data, algorithms, neural networks, training, and ethics come together in the world of AI. The next step is to get hands-on.