You've probably heard people talking about ChatGPT. Maybe you've tried it yourself. But what is it, really? How does a computer write essays, answer questions, and even tell jokes?
Let's break it down in plain English.
AI chatbots like ChatGPT, Claude, and Gemini are programs that generate human-like text. You type a message, and they respond with something that reads as if a person wrote it.
But here's the important part: they don't think. They don't understand the world the way you do. They're incredibly sophisticated text prediction machines.
Think of it like this: imagine you've read every book, every website, and every article ever written in English. If someone starts a sentence with "The capital of France is…" you'd predict the next word is "Paris" - not because you've been to Paris, but because you've seen that pattern thousands of times.
That's essentially what chatbots do, but at an enormous scale.
The process is surprisingly simple in concept. It's like the world's most advanced autocomplete: your phone keyboard predicts one word ahead, while ChatGPT predicts hundreds of words ahead, keeping the whole response coherent and relevant.
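To make the autocomplete idea concrete, here is a deliberately tiny sketch of "predict the next word from patterns you've seen". The training text and the counting approach are illustrative only; real chatbots learn from vastly more text using neural networks, not simple counts.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows each word in some
# example text, then predict the most common follower. Real chatbots
# work on the same prediction principle, but with billions of learned
# parameters instead of a lookup table of counts.

training_text = (
    "the capital of france is paris . "
    "the capital of japan is tokyo . "
    "the capital of france is paris ."
)

followers = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`."""
    return followers[word].most_common(1)[0][0]

print(predict_next("is"))  # → "paris" (seen twice, vs "tokyo" once)
```

Notice the predictor has never "been to Paris" - it simply repeats the pattern it saw most often, which is exactly the intuition from the sentence-completion example above.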
GPT-4 was reportedly trained on roughly 13 trillion tokens (pieces of text). If you tried to read all that training data yourself, reading 24 hours a day, it would take you on the order of 100,000 years.
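That reading-time figure is easy to sanity-check with back-of-envelope arithmetic. The conversion factors below (about 0.75 English words per token, about 200 words per minute for a fast reader) are common rules of thumb, not figures from the lesson itself.

```python
# Back-of-envelope check of the reading-time claim.
# Assumptions (illustrative): ~0.75 words per token, ~200 words/minute.

tokens = 13e12                  # ~13 trillion training tokens
words = tokens * 0.75           # rough words-per-token conversion
words_per_day = 200 * 60 * 24   # reading nonstop, 24 hours a day

years = words / words_per_day / 365
print(f"{years:,.0f} years")    # on the order of 100,000 years
```

Tweak the reading speed or words-per-token ratio and the total shifts by tens of thousands of years, but it stays in the same staggering ballpark.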
AI chatbots genuinely shine in several areas. They're like a very well-read assistant who can write quickly and never gets tired.
Here's the catch: chatbots sometimes make things up. And they do it with complete confidence.
This is called a hallucination: the AI generates text that sounds perfectly reasonable but is factually wrong. It might invent a book that doesn't exist, cite a study that was never published, or give you a recipe with measurements that would ruin the dish.
Why does this happen? Remember, chatbots predict the most likely next word. Sometimes the most likely-sounding text isn't the most accurate text. The AI has no way to check whether what it's saying is true - it only knows what sounds right.
Never blindly trust a chatbot's response for important decisions. Always verify facts, especially for medical advice, legal information, financial matters, or academic research. Think of chatbots as a helpful starting point, not the final answer.
If you asked a chatbot "Who won the Nobel Prize for Literature in 2019?" it might give you the correct answer - or it might confidently name someone who never won. How would you verify the answer? What habits should we build when using AI tools?
The way you talk to a chatbot matters enormously; this is called prompting. One of the most useful habits is to iterate: don't accept the first response. Say "Make it shorter," "Add more detail," or "Make the tone more formal."
AI chatbots are evolving rapidly, and recent developments point in a clear direction: AI that acts less like a search engine and more like a knowledgeable colleague who knows your preferences and working style.
ChatGPT reached 100 million users within just two months of launching in November 2022. For comparison, it took Instagram over two years and TikTok about nine months to reach the same milestone.
You've learned how chatbots work in theory. In the next lesson, you'll get hands-on and try several AI tools yourself - from image recognition to text generation to AI art. Time to experiment!