How to Keep AI Chatbots from Making Things Up
Generative AI chatbots like Microsoft Copilot are pretty impressive. They can find facts quickly and help you create all kinds of documents. But beware — they also have a habit of making stuff up. These false facts are called hallucinations, and they happen more often than you might think. Knowing how to prevent or reduce these mistakes can make AI tools much more useful.
Why Do AI Chatbots Hallucinate?
AI systems like ChatGPT and Copilot are built on large language models (LLMs), statistical models trained on enormous amounts of text. These models don't truly understand facts. Instead, they predict which words should come next based on patterns in their training data. When they're unsure, they sometimes guess, producing plausible but false information. This guessing is built into how these models work; it isn't a bug waiting to be fixed.
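To see why guessing is baked in, consider a toy sketch of next-word prediction in plain Python. This is nothing like a real LLM; the prompts, candidate words, and probabilities are all invented for illustration (Zubrowka is a fictional country). The point is that the model always produces something, and on an obscure question that something is effectively a guess.

```python
import random

# Toy stand-in for an LLM: a lookup table of next-word probabilities.
# A real model computes these distributions with a neural network.
next_word_probs = {
    # A well-known fact: the distribution is sharply peaked.
    "The capital of France is": {"Paris": 0.97, "Lyon": 0.02, "Nice": 0.01},
    # An obscure (here, fictional) question: the distribution is nearly flat,
    # so whatever gets sampled is a confident-sounding guess.
    "The capital of Zubrowka is": {"Lutz": 0.26, "Nebelsbad": 0.25,
                                   "Zubrow City": 0.25, "Karstadt": 0.24},
}

def predict_next(prompt: str) -> str:
    """Sample the next word according to the model's probabilities."""
    words = list(next_word_probs[prompt])
    weights = list(next_word_probs[prompt].values())
    return random.choices(words, weights=weights)[0]

print(predict_next("The capital of France is"))    # almost always "Paris"
print(predict_next("The capital of Zubrowka is"))  # a guess, stated fluently
```

Either way, the answer comes out fluently; nothing in the mechanism itself distinguishes a retrieved fact from a fabricated one.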
Research from OpenAI, the maker of ChatGPT, explains that these models tend to guess when faced with hard questions: during training and evaluation, they're rewarded for providing an answer, even a wrong one, rather than for admitting they don't know. This leads to hallucinations, which range from harmless to quite problematic. For example, AI has produced fake citations and invented details in official-looking documents. But that doesn't mean you should abandon these tools. Instead, there are ways to make them more reliable.
Tips to Reduce AI Hallucinations in Copilot
The first step is to give clear instructions. If you want Copilot to stick to the facts, ask it to use a “just-the-facts” tone, and be specific about what you need. Instead of asking a vague question, say: “Write a 350-word report about the projected growth of the work-from-home office furniture market over the next five years. Use a businesslike tone. Provide links for all facts and projections.” Clear prompts keep Copilot on track and give it less room to make things up.
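You'd normally type a prompt like that straight into Copilot's chat box, but the same principle carries over if you ever script a chatbot yourself. Here's a minimal sketch using the OpenAI Python SDK (the model name is a placeholder; use whichever model you have access to), with the “just-the-facts” instruction carried in the system message:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # The system message sets the ground rules for the whole exchange.
        {"role": "system",
         "content": "Use a businesslike, just-the-facts tone. Provide a source "
                    "link for every fact or projection. If you cannot verify "
                    "something, say so instead of guessing."},
        # The user message is the specific, bounded request.
        {"role": "user",
         "content": "Write a 350-word report about the projected growth of "
                    "the work-from-home office furniture market over the "
                    "next five years."},
    ],
)
print(response.choices[0].message.content)
```

Note the escape hatch in the system message: explicitly permitting “I don't know” pushes back against the answer-at-any-cost behavior described above.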
Providing context is also key. When you craft your prompts, include details like the document’s purpose, target audience, and why you need it. For example, if you’re asking for a sales pitch, specify who it’s for, how it will be used, and any relevant files or data. This constrains the AI’s research scope and reduces the chance of hallucinations.
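If you write these prompts often, a small template helps ensure none of that context gets left out. Here's a simple sketch in Python; every example value below is invented for illustration:

```python
def build_prompt(task: str, purpose: str, audience: str, usage: str) -> str:
    """Bundle the request with the context that keeps the AI grounded."""
    return (
        f"Task: {task}\n"
        f"Purpose: {purpose}\n"
        f"Audience: {audience}\n"
        f"How it will be used: {usage}\n"
        "Stick strictly to the details above and do not invent facts."
    )

prompt = build_prompt(
    task="Draft a one-page sales pitch for our ergonomic desk line.",
    purpose="Open a first meeting with a prospective retail partner.",
    audience="Purchasing managers at a mid-size furniture retailer.",
    usage="Read aloud in the meeting, then left behind as a handout.",
)
print(prompt)
```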
Using Reliable Sources and Files
Another effective way to keep AI honest is to direct it to trusted sources. For example, ask Copilot to use only official government websites (.gov) for statistics or reports. You can also point it to specific web pages or upload documents directly from your OneDrive or device. When you upload files, you can tell Copilot to base its answers solely on that document. For example, say: “Write a report based only on the uploaded sales data in homefurn.xlsx.” This narrows what the AI draws on and cuts down on false information.
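In Copilot you do this with the upload button, but if you're scripting, you can get the same grounding by inlining the file's contents into the prompt yourself. A rough sketch with pandas; homefurn.xlsx is just the article's example file, and its contents are assumed:

```python
import pandas as pd  # pip install pandas openpyxl

# Load the example spreadsheet (its columns and contents are assumed here).
sales = pd.read_excel("homefurn.xlsx")

# Inline the data so the model has nothing to answer from EXCEPT this file.
prompt = (
    "Write a report based ONLY on the sales data below. "
    "Do not use any outside sources or prior knowledge. "
    "If something is not in the data, say it is not in the data.\n\n"
    + sales.to_csv(index=False)
)
print(prompt)
```

One practical caveat: a very large file can exceed the model's input limit, in which case you'd summarize or sample the data before inlining it.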
It’s also wise to ask clear, specific questions. Avoid open-ended prompts like “Where should I spend my ad budget?” Instead, ask: “Based on the data in the uploaded file, what is the projected ROI for digital ads next year?” Being precise reduces the AI’s tendency to hallucinate and helps you get more accurate results.
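Spelled out as prompt strings, the difference looks something like this (both are just examples):

```python
# Vague: invites the model to improvise from whatever it "knows."
vague = "Where should I spend my ad budget?"

# Precise: names the data source, the metric, and the time frame,
# and gives the model an explicit way out instead of guessing.
precise = (
    "Based on the data in the uploaded file, what is the projected ROI "
    "for digital ads next year? If the file does not contain enough "
    "information to answer, say so."
)
```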
In summary, AI chatbots are powerful tools, but they’re not perfect. By using clear prompts, providing context, directing them to trustworthy sources, and uploading relevant files, you can greatly reduce hallucinations. This way, you’ll get more accurate, reliable information from your AI assistants, making your work easier and more trustworthy.