⚖️

AI Sprouts • Beginner · ⏱️ 15 min read

AI Ethics and Bias

Throughout this programme, we have explored how data, algorithms, and neural networks come together to create intelligent systems. But intelligence without responsibility can cause real harm. In this final lesson, we examine the human side of AI - the biases it inherits, the ethical dilemmas it raises, and what we can all do about it.

What Is AI Bias?

AI bias occurs when a system produces results that are systematically unfair to certain groups of people. The AI is not deliberately prejudiced - it simply reflects the patterns in its training data and the assumptions of its designers.

Real-World Examples

Amazon's Hiring Tool (2018) Amazon built an AI to screen job applications. It was trained on CVs submitted over the previous ten years - a period when the tech industry was overwhelmingly male. The AI learned to penalise CVs that contained the word "women's" (as in "women's chess club") and downgraded graduates from all-women's universities. Amazon scrapped the tool.

Facial Recognition Failures Research by Joy Buolamwini at MIT found that commercial facial recognition systems had error rates of up to 34.7% for darker-skinned women, compared to just 0.8% for lighter-skinned men. The training data simply did not represent all faces equally.

[Figure: A balance scale with a dataset on one side and diverse human figures on the other, illustrating the need for balanced, representative data in AI.]
Fair AI requires balanced data - when the scales tip, so do the outcomes.
🧠 Quick Check

Why did Amazon's AI hiring tool discriminate against women?

Where Does Bias Come From?

Bias can enter an AI system at every stage:

  • Data collection - If the data over-represents one group, the model learns to favour that group.
  • Labelling - Human annotators bring their own unconscious biases when tagging data.
  • Feature selection - Choosing which variables to include (or exclude) can embed assumptions.
  • Evaluation - If we only test on certain demographics, we miss failures on others.
💡

AI does not create bias from thin air. It amplifies the biases already present in human decisions, historical records, and societal structures. The data is a mirror - and sometimes we do not like what it reflects.
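
One practical response is to audit the data before training anything. Below is a minimal sketch, using made-up toy records (not real hiring figures), of how you might check each group's representation in a dataset and its historical outcome rate - the kind of imbalance that misled Amazon's tool:

```python
from collections import Counter

# Hypothetical toy dataset: (group, hired) pairs.
# The imbalance here is deliberate, to illustrate the audit.
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_a", 1), ("group_a", 0), ("group_a", 1), ("group_a", 1),
    ("group_b", 0), ("group_b", 1),
]

def audit(records):
    """Report each group's share of the dataset and its positive-label rate."""
    totals = Counter(group for group, _ in records)
    positives = Counter(group for group, label in records if label == 1)
    return {
        group: {
            "share": count / len(records),
            "positive_rate": positives[group] / count,
        }
        for group, count in totals.items()
    }

print(audit(records))
```

In this toy data, group_a makes up 80% of the records and has a 75% hiring rate, while group_b is only 20% of the data with a 50% rate - exactly the kind of skew a model would learn and amplify if it went unexamined.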


Deepfakes and Misinformation

AI can now generate realistic fake videos, images, and audio - known as deepfakes. While the technology has creative uses (film effects, accessibility tools), it also poses serious risks:

  • Political manipulation - Fabricated videos of public figures saying things they never said.
  • Fraud - Voice cloning used to impersonate executives and authorise fraudulent transactions.
  • Harassment - Non-consensual fake imagery targeting private individuals.

Detecting deepfakes is becoming an arms race. As generation tools improve, so must detection tools - but they are always playing catch-up.

🤯

In 2019, criminals used AI-generated voice cloning to impersonate a CEO and trick an employee into transferring €220,000. The voice was so convincing that the employee never suspected it was fake.

🤔
Think about it:

If you saw a video of a world leader declaring war, how would you verify whether it was real? What tools or sources would you trust? In a world of deepfakes, critical thinking about media becomes a survival skill.

Job Displacement and Economic Impact

AI automates tasks that were previously done by humans. This creates both opportunities and challenges:

Tasks at risk of automation:

  • Data entry and processing
  • Basic customer service (chatbots)
  • Routine legal document review
  • Simple medical image screening

Tasks less likely to be automated:

  • Creative problem-solving
  • Complex human relationships (therapy, teaching, leadership)
  • Work requiring physical dexterity in unpredictable environments
  • Ethical judgement and nuanced decision-making

The key distinction is between automating tasks and replacing jobs. Most jobs are collections of many tasks - AI tends to automate some tasks within a role rather than eliminating the role entirely.

🧠 Quick Check

Which type of work is LEAST likely to be fully automated by AI?

Privacy Concerns

AI systems are hungry for data, and that hunger raises significant privacy questions:

  • Surveillance - Facial recognition in public spaces enables mass tracking without consent.
  • Data collection - Voice assistants, fitness trackers, and social media constantly gather personal information.
  • Profiling - AI can infer sensitive information (health conditions, political views, sexuality) from seemingly innocuous data patterns.

The tension is real: more data generally makes AI better, but collecting more data can violate individual privacy.

🤯

Researchers demonstrated that AI could predict a person's sexual orientation from a photo with higher accuracy than humans - raising profound questions about privacy, consent, and the limits of what AI should be allowed to infer.

Responsible AI Principles

Leading organisations have converged on a set of principles for building AI responsibly:

Fairness

AI should treat all people equitably. Models should be tested across different demographics to ensure no group is disadvantaged.
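
Testing across demographics can start with something very simple: reporting the error rate per group instead of a single overall number. A minimal sketch with hypothetical labels and predictions (the groups and values here are invented for illustration):

```python
def error_rates_by_group(y_true, y_pred, groups):
    """Compute the classification error rate separately for each group."""
    rates = {}
    for group in set(groups):
        indices = [i for i, g in enumerate(groups) if g == group]
        errors = sum(y_true[i] != y_pred[i] for i in indices)
        rates[group] = errors / len(indices)
    return rates

# Hypothetical ground truth and model predictions.
y_true = [1, 1, 0, 0, 1, 1, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 1, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Group "a": 0/4 errors; group "b": 2/4 errors.
print(error_rates_by_group(y_true, y_pred, groups))
```

An overall error rate of 25% would hide the fact that all of this model's mistakes fall on group "b" - precisely the pattern the facial recognition research described earlier uncovered.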

Transparency

People affected by AI decisions deserve to understand how those decisions are made. Black-box models should be accompanied by explanations.

Accountability

There must be clear ownership when AI causes harm. "The algorithm did it" is not an acceptable defence.

Privacy

AI systems must respect data protection laws and individual rights. Data collection should be minimised to what is truly necessary.

Safety

AI should be tested rigorously before deployment, especially in high-stakes domains like healthcare, criminal justice, and finance.

🤔
Think about it:

If an AI system denies someone a loan, who is responsible - the developer who built the model, the bank that deployed it, or the data that trained it? Accountability in AI is one of the hardest questions we face.

🧠 Quick Check

Which responsible AI principle states that people should understand how AI decisions are made?

What You Can Do as a Learner

You do not need to be an AI engineer to make a difference. Here is how you can contribute to more responsible AI:

  • Ask questions - When you encounter an AI system, ask: whose data trained this? Who benefits and who might be harmed?
  • Stay informed - Follow developments in AI ethics. The landscape changes rapidly.
  • Demand transparency - Support organisations and products that explain how their AI works.
  • Diversify perspectives - If you go on to build AI, ensure your teams and your data represent the diversity of the people the system will serve.
  • Think critically - Not every AI application is a good idea, even if it is technically possible.
💡

Technology is not neutral. The choices made by the people who build, deploy, and regulate AI shape the world we all live in. Your awareness and your voice matter.

Key Takeaways

  • AI bias comes from biased data, not from the algorithm itself being prejudiced.
  • Deepfakes pose serious risks to trust, security, and privacy.
  • AI automates tasks rather than replacing entire jobs - but the impact is still significant.
  • Privacy is at risk when AI systems collect and infer personal information at scale.
  • Responsible AI rests on fairness, transparency, accountability, privacy, and safety.
  • Everyone has a role to play in shaping how AI is built and used.

Congratulations - you have completed Level 2: Foundations! You now understand how data, algorithms, neural networks, training, and ethics come together in the world of AI. The next step is to get hands-on.