⚖️
AI Sprouts • Beginner • ⏱️ 15 min read

AI Ethics and Bias

Throughout this programme, we have explored how data, algorithms, and neural networks come together to create intelligent systems. But intelligence without responsibility can cause real harm. In this final lesson, we examine the human side of AI - the biases it inherits, the ethical dilemmas it raises, and what we can all do about it.

What Is AI Bias?

AI bias occurs when a system produces results that are systematically unfair to certain groups of people. The AI is not deliberately prejudiced - it simply reflects the patterns in its training data and the assumptions of its designers.

Real-World Examples

Amazon's Hiring Tool (2018) Amazon built an AI to screen job applications. It was trained on CVs submitted over the previous ten years - a period when the tech industry was overwhelmingly male. The AI learned to penalise CVs that contained the word "women's" (as in "women's chess club") and downgraded graduates from all-women's universities. Amazon scrapped the tool.

Facial Recognition Failures Research by Joy Buolamwini at MIT found that commercial facial recognition systems had error rates of up to 34.7% for darker-skinned women, compared to just 0.8% for lighter-skinned men. The training data simply did not represent all faces equally.

[Image: A balance scale with a dataset on one side and diverse human figures on the other, illustrating the need for balanced, representative data in AI.]
Fair AI requires balanced data - when the scales tip, so do the outcomes.
🧠 Quick Check

Why did Amazon's AI hiring tool discriminate against women?

Where Does Bias Come From?

Bias can enter an AI system at every stage:

  • Data collection - If the data over-represents one group, the model learns to favour that group.
  • Labelling - Human annotators bring their own unconscious biases when tagging data.
  • Feature selection - Choosing which variables to include (or exclude) can embed assumptions.
  • Evaluation - If we only test on certain demographics, we miss failures on others.
💡

AI does not create bias from thin air. It amplifies the biases already present in human decisions, historical records, and societal structures. The data is a mirror - and sometimes we do not like what it reflects.
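To make the data-collection point concrete, here is a toy sketch in plain Python. All groups, labels, and counts are invented for illustration: a naive model that simply memorises the most common historical outcome per group will faithfully reproduce whatever imbalance its training data contains.

```python
from collections import Counter

# Hypothetical historical hiring decisions. Group A is over-represented
# among positive ("hire") outcomes, so a naive model learns to favour it.
training_data = [
    ("group_a", "hire"), ("group_a", "hire"), ("group_a", "hire"),
    ("group_a", "hire"), ("group_a", "reject"),
    ("group_b", "hire"), ("group_b", "reject"),
    ("group_b", "reject"), ("group_b", "reject"), ("group_b", "reject"),
]

def train_majority_model(data):
    """'Train' by memorising the most common outcome for each group."""
    outcomes = {}
    for group, label in data:
        outcomes.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

model = train_majority_model(training_data)
print(model)  # {'group_a': 'hire', 'group_b': 'reject'}
```

No step in this code is "prejudiced" - the unfairness comes entirely from the skewed data, which is exactly the mechanism behind the Amazon example above.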


Deepfakes and Misinformation

AI can now generate realistic fake videos, images, and audio - known as deepfakes. While the technology has creative uses (film effects, accessibility tools), it also poses serious risks:

  • Political manipulation - Fabricated videos of public figures saying things they never said.
  • Fraud - Voice cloning used to impersonate executives and authorise fraudulent transactions.
  • Harassment - Non-consensual fake imagery targeting private individuals.

Detecting deepfakes is becoming an arms race. As generation tools improve, so must detection tools - but they are always playing catch-up.

🤯

In 2019, criminals used AI-generated voice cloning to impersonate a CEO and trick an employee into transferring £220,000. The voice was so convincing that the employee never suspected it was fake.

🤔
Think about it:

If you saw a video of a world leader declaring war, how would you verify whether it was real? What tools or sources would you trust? In a world of deepfakes, critical thinking about media becomes a survival skill.

Job Displacement and Economic Impact

AI automates tasks that were previously done by humans. This creates both opportunities and challenges:

Tasks at risk of automation:

  • Data entry and processing
  • Basic customer service (chatbots)
  • Routine legal document review
  • Simple medical image screening

Tasks less likely to be automated:

  • Creative problem-solving
  • Complex human relationships (therapy, teaching, leadership)
  • Work requiring physical dexterity in unpredictable environments
  • Ethical judgement and nuanced decision-making

The key distinction is between automating tasks and replacing jobs. Most jobs are collections of many tasks - AI tends to automate some tasks within a role rather than eliminating the role entirely.

🧠 Quick Check

Which type of work is LEAST likely to be fully automated by AI?

Privacy Concerns

AI systems are hungry for data, and that hunger raises significant privacy questions:

  • Surveillance - Facial recognition in public spaces enables mass tracking without consent.
  • Data collection - Voice assistants, fitness trackers, and social media constantly gather personal information.
  • Profiling - AI can infer sensitive information (health conditions, political views, sexuality) from seemingly innocuous data patterns.

The tension is real: more data generally makes AI better, but collecting more data can violate individual privacy.

🤯

Researchers demonstrated that AI could predict a person's sexual orientation from a photo with higher accuracy than humans - raising profound questions about privacy, consent, and the limits of what AI should be allowed to infer.

Responsible AI Principles

Leading organisations have converged on a set of principles for building AI responsibly:

Fairness

AI should treat all people equitably. Models should be tested across different demographics to ensure no group is disadvantaged.
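One simple way to test across demographics is to compare the positive-outcome rate per group (the "demographic parity" idea). The sketch below uses invented predictions and group names purely for illustration:

```python
def selection_rates(predictions):
    """Positive-outcome rate per demographic group.

    predictions: list of (group, predicted_label) pairs, where
    "approve" counts as the positive outcome.
    """
    totals, positives = {}, {}
    for group, label in predictions:
        totals[group] = totals.get(group, 0) + 1
        if label == "approve":
            positives[group] = positives.get(group, 0) + 1
    return {g: positives.get(g, 0) / totals[g] for g in totals}

# Hypothetical loan decisions from a model under audit.
preds = ([("group_a", "approve")] * 8 + [("group_a", "deny")] * 2
         + [("group_b", "approve")] * 4 + [("group_b", "deny")] * 6)

rates = selection_rates(preds)
print(rates)  # {'group_a': 0.8, 'group_b': 0.4}

# A large gap between groups is a signal the model needs review.
gap = max(rates.values()) - min(rates.values())
print(f"parity gap: {gap:.2f}")
```

Demographic parity is only one of several fairness metrics, and which metric is appropriate depends on the application - but even this simple check would have flagged both real-world examples earlier in the lesson.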

Transparency

People affected by AI decisions deserve to understand how those decisions are made. Black-box models should be accompanied by explanations.

Accountability

There must be clear ownership when AI causes harm. "The algorithm did it" is not an acceptable defence.

Privacy

AI systems must respect data protection laws and individual rights. Data collection should be minimised to what is truly necessary.

Safety

AI should be tested rigorously before deployment, especially in high-stakes domains like healthcare, criminal justice, and finance.

🤔
Think about it:

If an AI system denies someone a loan, who is responsible - the developer who built the model, the bank that deployed it, or the data that trained it? Accountability in AI is one of the hardest questions we face.

🧠 Quick Check

Which responsible AI principle states that people should understand how AI decisions are made?

What You Can Do as a Learner

You do not need to be an AI engineer to make a difference. Here is how you can contribute to more responsible AI:

  • Ask questions - When you encounter an AI system, ask: whose data trained this? Who benefits and who might be harmed?
  • Stay informed - Follow developments in AI ethics. The landscape changes rapidly.
  • Demand transparency - Support organisations and products that explain how their AI works.
  • Diversify perspectives - If you go on to build AI, ensure your teams and your data represent the diversity of the people the system will serve.
  • Think critically - Not every AI application is a good idea, even if it is technically possible.
💡

Technology is not neutral. The choices made by the people who build, deploy, and regulate AI shape the world we all live in. Your awareness and your voice matter.

Key Takeaways

  • AI bias comes from biased data, not from the algorithm itself being prejudiced.
  • Deepfakes pose serious risks to trust, security, and privacy.
  • AI automates tasks rather than replacing entire jobs - but the impact is still significant.
  • Privacy is at risk when AI systems collect and infer personal information at scale.
  • Responsible AI rests on fairness, transparency, accountability, privacy, and safety.
  • Everyone has a role to play in shaping how AI is built and used.

Congratulations - you have completed Level 2: Foundations! You now understand how data, algorithms, neural networks, training, and ethics come together in the world of AI. The next step is to get hands-on.