Hallucinations
- Choose Advanced Models
- Write Clear Instructions
  - Tell the AI: “Answer only with verified information. If you’re not sure, say you don’t know.”
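A minimal sketch of putting that instruction into a system message. The message-dict shape follows common chat APIs; the user question is a placeholder.

```python
# Anti-hallucination instruction placed in a system message.
# The dict layout mirrors common chat-completion APIs (an assumption
# to check against your provider's docs).
SYSTEM_INSTRUCTION = (
    "Answer only with verified information. "
    "If you're not sure, say you don't know."
)

def build_messages(question):
    """Assemble a chat request with the guardrail instruction up front."""
    return [
        {"role": "system", "content": SYSTEM_INSTRUCTION},
        {"role": "user", "content": question},
    ]
```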
- Use Step-by-Step Reasoning
  - Guide the model: “Let’s solve this step by step.”
  - Or be more explicit. For example: “First, identify the key variables. Second, check what information is missing. Third, calculate each component separately. Finally, combine the results.”
  - For more control, you can implement this logic in code as a state graph.
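One way the four steps above could be expressed as a state graph: each state is a handler that updates a shared context and names the next state. The state names, handlers, and toy data are illustrative assumptions, not from any particular library.

```python
# A hand-rolled state graph mirroring the explicit step-by-step prompt:
# identify variables -> check what's missing -> calculate -> combine.
def identify_variables(ctx):
    ctx["variables"] = {"price": 10.0, "quantity": 3}  # toy inputs
    return "check_missing"

def check_missing(ctx):
    ctx["missing"] = [k for k, v in ctx["variables"].items() if v is None]
    return "calculate" if not ctx["missing"] else "done"

def calculate(ctx):
    v = ctx["variables"]
    ctx["components"] = [v["price"], v["quantity"]]
    return "combine"

def combine(ctx):
    v = ctx["variables"]
    ctx["result"] = v["price"] * v["quantity"]
    return "done"

GRAPH = {
    "identify": identify_variables,
    "check_missing": check_missing,
    "calculate": calculate,
    "combine": combine,
}

def run(start="identify"):
    """Walk the graph until a handler returns the terminal state."""
    ctx, state = {}, start
    while state != "done":
        state = GRAPH[state](ctx)
    return ctx
```

In a real pipeline each handler would be one model call with a narrow prompt, which makes intermediate outputs inspectable.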
- Provide Examples
  - Use few-shot prompting: include a few correct question–answer pairs in the prompt.
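A minimal few-shot prompt builder. The example Q/A pairs are illustrative placeholders.

```python
# Few-shot prompting: prepend worked examples so the model imitates
# their format and accuracy. Example pairs below are placeholders.
EXAMPLES = [
    ("What is the capital of France?", "Paris"),
    ("What is 2 + 2?", "4"),
]

def build_few_shot_prompt(question, examples=EXAMPLES):
    lines = [f"Q: {q}\nA: {a}" for q, a in examples]
    lines.append(f"Q: {question}\nA:")  # model completes the last answer
    return "\n\n".join(lines)
```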
- Ground with Real Data
  - Use Retrieval-Augmented Generation (RAG): retrieve relevant documents and include them in the prompt as context.
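A toy sketch of the RAG shape: rank documents by word overlap with the question, then paste the best match into the prompt. A real system would use embedding search; the documents here are placeholders.

```python
import re

# Toy corpus standing in for a real document store.
DOCS = [
    "The Eiffel Tower is 330 metres tall.",
    "Python was created by Guido van Rossum.",
    "The Great Wall of China is thousands of kilometres long.",
]

def tokens(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, docs=DOCS):
    """Return the document with the largest word overlap (toy retriever)."""
    q = tokens(question)
    return max(docs, key=lambda d: len(q & tokens(d)))

def grounded_prompt(question):
    context = retrieve(question)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```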
- Lower the Temperature
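What the temperature knob does, in isolation: sampling probabilities come from a softmax over logits divided by the temperature, so lower values concentrate probability on the top token and reduce random drift. The logit values below are arbitrary toy numbers.

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with temperature; lower temperature sharpens the distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```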
- Implement Self-Checks
  - Ask the model to verify itself: “Are you sure? Can you double-check that information?”
  - Use a voting mechanism: sample several answers and keep the majority.
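The voting idea reduces to a majority count over repeated samples. Here `answers` stands in for the outputs of several independent model calls.

```python
from collections import Counter

def majority_vote(answers):
    """Return the most common answer among repeated samples."""
    return Counter(answers).most_common(1)[0][0]
```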
- Add External Verification
  - Rule-based external validation: check outputs against trusted facts, formats, or constraints.
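A small sketch of rule-based validation: extract numbers from a generated answer and compare them against a trusted reference table before accepting the output. The facts table and keys are illustrative assumptions.

```python
import re

# Trusted reference data maintained outside the model (toy example).
KNOWN_FACTS = {"water_boiling_point_c": 100.0}

def validate_numeric_claim(answer, fact_key, facts=KNOWN_FACTS):
    """Accept the answer only if some number in it matches the trusted value."""
    numbers = [float(n) for n in re.findall(r"-?\d+(?:\.\d+)?", answer)]
    return any(abs(n - facts[fact_key]) < 1e-6 for n in numbers)
```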
- Fine-Tune on Your Domain
  - Fine-tune on verified, domain-specific data so the model learns correct answers for your use case.
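Fine-tuning starts with a dataset of verified examples. The snippet below serializes Q/A pairs into the chat-style JSONL layout that several fine-tuning APIs accept; treat the exact field names as an assumption to check against your provider's docs, and the Q/A pair as a placeholder.

```python
import json

# Verified, domain-specific pairs (placeholder content).
VERIFIED_QA = [
    ("What is our refund window?", "30 days from delivery."),
]

def to_jsonl(pairs):
    """One JSON object per line, chat-style messages format."""
    lines = []
    for question, answer in pairs:
        record = {"messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]}
        lines.append(json.dumps(record, ensure_ascii=False))
    return "\n".join(lines)
```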
- Use Human Feedback
  - Reinforcement Learning from Human Feedback (RLHF)
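At the core of RLHF reward-model training sits a simple comparison: under the commonly used Bradley–Terry model, the probability that humans prefer one answer over another is the sigmoid of the reward difference. This sketch shows only that scoring rule, not the full training loop.

```python
import math

def preference_prob(reward_chosen, reward_rejected):
    """Bradley-Terry: P(chosen preferred) = sigmoid(r_chosen - r_rejected)."""
    return 1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected)))
```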
Evaluations
References:
- “LLM Eval FAQ”, Hamel Husain and Shreya Shankar, 2025-06-29
- Hugging Face Evaluations